1. 09 Sep, 2020 2 commits
  2. 20 Aug, 2020 1 commit
  3. 19 Aug, 2020 1 commit
  4. 18 Aug, 2020 4 commits
  5. 13 Aug, 2020 4 commits
  6. 12 Aug, 2020 2 commits
  7. 06 Aug, 2020 9 commits
  8. 04 Aug, 2020 1 commit
    • dashboard: allow remote TLS cert/key copy · 85dfbc9e
      Dimitri Savineau authored
      When using TLS on the ceph dashboard or grafana services, we can provide
      the TLS certificate and key.
      Those files should be present on the ansible controller and they will be
      copied to the right node(s).
      In some situations, the TLS certificate and key may already be present
      on the target node and not on the ansible controller.
      For this scenario, we just need to copy the files locally (on each remote node).
      This patch adds the dashboard_tls_external variable (with default to
      false) to allow users to achieve this scenario when configuring this
      variable to true.
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1860815
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit 0d0f1e71)
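      A minimal sketch of what the resulting conditional copy task could look like
      (the task name, file names and destination path are illustrative, not the
      exact ones used by ceph-ansible; only `dashboard_tls_external` comes from
      the commit):

      ```yaml
      # Hypothetical sketch: only push the TLS material from the ansible
      # controller when dashboard_tls_external is false; when it is true,
      # the cert/key are assumed to already exist on the target node.
      - name: copy dashboard TLS certificate and key
        copy:
          src: "{{ item }}"
          dest: /etc/ceph/
          mode: "0640"
        loop:
          - dashboard.crt
          - dashboard.key
        when: not dashboard_tls_external | bool
      ```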
  9. 29 Jul, 2020 1 commit
  10. 27 Jul, 2020 3 commits
    • rolling_update: refact dashboard workflow · 7a970ac0
      Dimitri Savineau authored
      The dashboard upgrade workflow should follow the same process as the ceph
      upgrade, otherwise any systemd unit modification won't be applied on the
      monitoring/dashboard stack.
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1859173
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit a6209bd9)
    • rolling_update: stop/start instead of restart · 15872e3d
      Dimitri Savineau authored
      During the daemon upgrade we're
        - stopping the service when it's not containerized
        - running the daemon role
        - starting the service when it's not containerized
        - restarting the service when it's containerized
      This implementation has multiple issues.
      1/ We don't use the same service workflow when using containers
      or baremetal.
      2/ The explicit daemon start isn't required since we're already
      doing this in the daemon role.
      3/ Any non backward compatible changes in the systemd unit template
      (for containerized deployment) won't work due to the restart usage.
      This patch refacts the rolling_update playbook by using the same service
      stop task for both containerized and baremetal deployment at the start
      of the upgrade play.
      It removes the explicit service start task because it's already included
      in the dedicated role.
      The service restart tasks for containerized deployment are also removed.
      The following comment isn't valid because we should have backported the
      ceph-crash implementation in stable-4.0 before this commit, which was not
      possible because of the needed tag v4.0.25.1 (async release for 4.1z1):
      ~~Finally, this adds the missing service stop task for ceph crash upgrade.~~
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1859173
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit 155e2a23)
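      The stop/start workflow described above could be sketched as follows
      (daemon and task names are illustrative, not the exact playbook tasks):

      ```yaml
      # Sketch: stop the daemon the same way for containerized and
      # baremetal deployments at the start of the upgrade play ...
      - name: stop ceph mon daemon
        systemd:
          name: "ceph-mon@{{ ansible_hostname }}"
          state: stopped
      # ... then the daemon role (not shown) regenerates the systemd unit
      # and starts the service again, so no explicit start or restart
      # task is needed afterwards.
      ```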
    • ceph-handler: remove iscsigws restart scripts · cce042c6
      Dimitri Savineau authored
      The iscsigws restart scripts for tcmu-runner and rbd-target-{api,gw}
      services only call the systemctl restart command.
      We don't really need to copy a shell script to do it when we can use
      the ansible service module instead.
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit cbe79428)
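      Using the service module instead of a copied shell script boils down to
      something like this sketch (the task name is illustrative; the service
      names come from the commit message):

      ```yaml
      # Restart the iscsi gateway services directly with the ansible
      # service module rather than via a copied restart shell script.
      - name: restart iscsi gateway services
        service:
          name: "{{ item }}"
          state: restarted
        loop:
          - tcmu-runner
          - rbd-target-api
          - rbd-target-gw
      ```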
  11. 24 Jul, 2020 1 commit
  12. 23 Jul, 2020 2 commits
  13. 22 Jul, 2020 2 commits
    • tests: lvm_setup.yml, add carriage return · 9e400625
      Guillaume Abrioux authored
      This commit adds crlf between each task.
      It makes the playbook more readable.
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit 8ef9fb68)
    • tests: (lvm_setup.yml), don't shrink lvol · 53793b35
      Guillaume Abrioux authored
      When rerunning lvm_setup.yml on an existing cluster with OSDs already
      deployed, it fails like the following:
      fatal: [osd0]: FAILED! => changed=false
        msg: Sorry, no shrinking of data-lv2 to 0 permitted.
      because we are asking `lvol` module to create a volume on an empty VG
      with size extents = `100%FREE`.
      The default behavior of `lvol` is to shrink the volume if the LV's current
      size is greater than the requested size.
      Given the requested size is calculated like this:
      `size_requested = size_percent * this_vg['free'] / 100`
      in our case, it is equivalent to:
      `size_requested = 100 * 0 / 100`, which basically means `0`.
      So the current LV size is greater than the requested size, which leads
      the module to attempt to shrink it to 0, which obviously isn't permitted.
      Adding `shrink: false` to the module calls fixes this issue.
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit 218f4ae3)
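      The fix described above could look like this minimal sketch (the VG name
      is hypothetical; `data-lv2` comes from the error message in the commit):

      ```yaml
      # Without "shrink: false", rerunning against a VG with no free
      # extents asks lvol to shrink the existing LV down to 0, which
      # fails; with it, the task leaves the existing LV untouched.
      - name: create data-lv2 logical volume
        lvol:
          vg: test_group
          lv: data-lv2
          size: 100%FREE
          shrink: false
      ```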
  14. 21 Jul, 2020 2 commits
  15. 20 Jul, 2020 5 commits