1. 02 Oct, 2019 2 commits
    • common: improve keyrings generation · 13ca0531
      Guillaume Abrioux authored
      There is no need to fetch the different keyrings once per node (n *
      number of nodes times). Adding `run_once: true` here avoids running a
      ceph command too many times, which could impact large cluster
      deployments.
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit 9bad239d)
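      The pattern the commit describes can be sketched as an Ansible task; this is a hedged, illustrative fragment (task name, variable names, and the `ceph_keys` loop variable are assumptions, not taken from ceph-ansible):

      ```yaml
      # Illustrative sketch: run the keyring-generation command once per play
      # instead of once per node, delegating to a single monitor. The
      # registered result can then be distributed to the other hosts.
      - name: generate keyrings (illustrative)
        command: >
          ceph --cluster {{ cluster }} auth get-or-create {{ item.name }}
        register: _keyrings
        run_once: true
        delegate_to: "{{ groups['mons'][0] }}"
        loop: "{{ ceph_keys | default([]) }}"
      ```

      With `run_once: true`, Ansible executes the task on a single host and shares the result, rather than repeating the same `ceph auth` call on every node.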
    • ceph-facts: use --admin-daemon to get fsid · 5b24c66f
      Dimitri Savineau authored
      During the rolling_update scenario, the fsid value is retrieved from the
      current ceph cluster configuration via the ceph daemon config command.
      This command first tries to resolve the admin socket path via the
      ceph-conf command.
      Unfortunately this won't work if there is a duplicate key in the ceph
      configuration, even though that only produces a warning. As a result
      the task fails.
      Can't get admin socket path: unable to get conf option admin_socket for
      mon.xxx: warning: line 13: 'osd_memory_target' in section 'osd' redefined
      Instead of using ceph daemon we can use the --admin-daemon option,
      because we already know the admin socket path based on the ceph
      cluster and mon hostname values.
      Closes: #4492
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit ec3b687d)
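      The fix works because the admin socket path can be built directly from the cluster name and mon hostname. A minimal sketch of that derivation, assuming ceph's default `$run_dir/$cluster-$name.asok` socket layout (the helper name is illustrative, not ceph-ansible's):

      ```python
      def admin_socket_path(cluster: str, hostname: str) -> str:
          """Build the mon admin socket path from the cluster name and the
          mon hostname, assuming ceph's default
          /var/run/ceph/$cluster-$name.asok layout (illustrative helper,
          not ceph-ansible code)."""
          return f"/var/run/ceph/{cluster}-mon.{hostname}.asok"

      # The fsid can then be queried without going through ceph-conf:
      #   ceph --admin-daemon /var/run/ceph/ceph-mon.mon0.asok config get fsid
      print(admin_socket_path("ceph", "mon0"))
      # → /var/run/ceph/ceph-mon.mon0.asok
      ```

      Since the path is computed rather than resolved by `ceph-conf`, a duplicate key in ceph.conf can no longer make the lookup fail.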
  2. 01 Oct, 2019 8 commits
  3. 30 Sep, 2019 3 commits
  4. 29 Sep, 2019 1 commit
  5. 28 Sep, 2019 1 commit
    • update: reset mon_host after mons upgrade · 4afe1b74
      Guillaume Abrioux authored
      After all mons are upgraded, reset mon_host, which is used in the
      rest of the playbook to set `container_exec_cmd`, so we are sure to
      use the right value.
      Typical error:
      failed: [mds0 -> mon0] (item={u'path': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'name': u'client.bootstrap-mds', u'copy_key': True}) => changed=true
        ansible_loop_var: item
        cmd:
        - docker
        - exec
        - ceph-mon-mon2
        - ceph
        - --cluster
        - ceph
        - auth
        - get
        - client.bootstrap-mds
        delta: '0:00:00.016294'
        end: '2019-09-27 13:54:58.828835'
        item:
          copy_key: true
          name: client.bootstrap-mds
          path: /var/lib/ceph/bootstrap-mds/ceph.keyring
        msg: non-zero return code
        rc: 1
        start: '2019-09-27 13:54:58.812541'
        stderr: 'Error response from daemon: No such container: ceph-mon-mon2'
        stderr_lines: <omitted>
        stdout: ''
        stdout_lines: <omitted>
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit d84160a1)
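      The error above shows a delegated task building its `docker exec` against a container name derived from a stale mon_host. A hedged sketch of the fix in Ansible YAML (fact names mirror the commit message; the container name pattern is an assumption, not ceph-ansible's exact code):

      ```yaml
      # Illustrative sketch: after the mon upgrade loop, re-resolve mon_host
      # and rebuild container_exec_cmd so that later delegated tasks (such as
      # fetching bootstrap keyrings) target a container that actually exists.
      - name: reset mon_host after mons upgrade (illustrative)
        set_fact:
          mon_host: "{{ groups['mons'][0] }}"

      - name: rebuild container_exec_cmd (illustrative)
        set_fact:
          container_exec_cmd: "docker exec ceph-mon-{{ hostvars[mon_host]['ansible_hostname'] }}"
      ```

      Resetting both facts in sequence ensures `container_exec_cmd` can no longer point at a container name left over from before the upgrade.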
  6. 27 Sep, 2019 7 commits
  7. 26 Sep, 2019 11 commits
  8. 25 Sep, 2019 1 commit
  9. 18 Sep, 2019 5 commits
  10. 11 Sep, 2019 1 commit
    • ceph-handler: Fix osd restart condition · b50fa236
      Dimitri Savineau authored
      In containerized deployments, the restart OSD handler was rarely
      triggered. This is due to combining run_once with a condition
      comparing the inventory hostname against the last node in the osd
      group. run_once is evaluated first, so ansible picks one node in the
      osd group to execute the restart task; if that node isn't the last
      one in the group, the task is skipped. The task is therefore more
      likely to be skipped than executed.
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
      (cherry picked from commit 5b1c1565)
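      The interaction described above can be sketched as a toy simulation (plain Python, not ceph-ansible code; host names and the assumption that run_once picks the first host are illustrative):

      ```python
      # Toy simulation of why run_once combined with an "am I the last OSD
      # host?" condition rarely fires the handler.
      def handler_runs(osd_hosts, chosen=None):
          """Return True if the restart handler would actually execute.

          chosen models the host run_once picks (assumed here to be the
          first host in the group unless specified)."""
          chosen = chosen if chosen is not None else osd_hosts[0]
          # the old condition: only run when the executing host is the last one
          return chosen == osd_hosts[-1]

      # With a single OSD host the first host is also the last, so it works:
      print(handler_runs(["osd0"]))                      # → True
      # With several OSD hosts, the chosen (first) host is never the last,
      # so the restart handler is skipped:
      print(handler_runs(["osd0", "osd1", "osd2"]))      # → False
      ```

      This is why the bug only surfaced on multi-node clusters: a one-OSD test environment satisfies the condition by accident.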