  1. Nov 19, 2020
    • osd: ensure /var/lib/ceph/osd/{cluster}-{id} is present · 873fc8ec
      Guillaume Abrioux authored
      This commit ensures that the `/var/lib/ceph/osd/{{ cluster }}-{{ osd_id }}`
      directory is present before starting the OSDs.

      This is needed specifically when redeploying an OSD after an OS upgrade
      failure.
      Since the Ceph data is still present on its devices, the node can be
      redeployed; however, those directories are missing because they are
      initially created by ceph-volume. We could recreate them manually, but
      for a better user experience we can let ceph-ansible recreate them.
      
      NOTE:
      this only works for OSDs that were deployed with ceph-volume.
      OSDs deployed with ceph-disk would have to get those directories
      recreated manually.
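
      For illustration, a minimal sketch of the kind of Ansible task involved
      (module usage is illustrative, not necessarily the exact ceph-ansible
      task):

      # Illustrative sketch only: recreate the OSD data directory before the
      # OSD service starts.
      - name: ensure the OSD data directory exists
        ansible.builtin.file:
          path: "/var/lib/ceph/osd/{{ cluster }}-{{ osd_id }}"
          state: directory
          owner: ceph
          group: ceph
          mode: "0755"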
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1898486

      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
  2. Nov 17, 2020
    • tests: use github workflow for pytest · 3e79f032
      Dimitri Savineau authored
      
      Move the pytest testing from Travis CI to a GitHub workflow.
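
      For reference, a minimal sketch of what such a pytest workflow can look
      like (file name, trigger, versions, and the requirements path are
      assumptions, not necessarily the exact workflow added by this commit):

      # .github/workflows/pytest.yml -- illustrative sketch
      name: pytest
      on: [pull_request]
      jobs:
        pytest:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v2
            - uses: actions/setup-python@v2
              with:
                python-version: '3.8'
            - run: pip install -r tests/requirements.txt
            - run: pytest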
      
      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
    • containers: modify bindmount option · f5ba6d9b
      Guillaume Abrioux authored
      This commit changes the bind mount option for the mount point
      `/var/lib/ceph` in the systemd template for mon and mgr containers. This
      is needed when collocating mon/mgr with OSDs in a dmcrypt scenario.
      Once the mon/mgr are converted to containers, the dmcrypt layer sub
      mount is still seen in `/var/lib/ceph`. For some reason it keeps the
      corresponding devices busy, so no other container can open/close them.
      As a result, the OSDs are prevented from starting properly.

      Since this only happens on the nodes converted before the OSD play, the
      idea is to bind mount `/var/lib/ceph` on mon and mgr with the `rshared`
      option, so that once the sub mount is unmounted, the unmount is
      propagated inside the container and the container no longer sees that
      mount point.
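
      For illustration, a hedged sketch of how the container could be started
      with that option (the real change lives in the mon/mgr systemd unit
      template; `mon_container_image` is a hypothetical variable):

      # Illustrative sketch only: the key detail is the `rshared` bind-mount
      # propagation option on /var/lib/ceph, so a host-side unmount of the
      # dmcrypt sub mount is propagated into the running container.
      - name: start ceph-mon container (sketch)
        ansible.builtin.command: >
          podman run --rm --net=host
          -v /var/lib/ceph:/var/lib/ceph:z,rshared
          -v /etc/ceph:/etc/ceph:z
          {{ mon_container_image }}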
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1896392

      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
  3. Nov 13, 2020
    • ceph-facts: Fix osd_pool_default_crush_rule fact · c5f7343a
      Benoît Knecht authored
      
      The `osd_pool_default_crush_rule` is set based on `crush_rule_variable`, which
      is the output of a `grep` command.
      
      However, two consecutive tasks can set that variable, and if the second
      task is skipped it still overwrites `crush_rule_variable`, causing
      `osd_pool_default_crush_rule` to be set to
      `ceph_osd_pool_default_crush_rule` instead of the output of the first
      task.
      
      This commit ensures that the fact is set right after the `crush_rule_variable`
      is assigned, before it can be overwritten.
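
      A minimal sketch of the pattern (task names and the exact grep are
      illustrative, not the actual ceph-facts tasks):

      # Illustrative sketch of the fix: set the fact immediately after the
      # task that registers crush_rule_variable, so a later skipped task
      # cannot clobber the result before the fact is computed.
      - name: read osd_pool_default_crush_rule from ceph.conf
        ansible.builtin.command: grep osd_pool_default_crush_rule /etc/ceph/ceph.conf
        register: crush_rule_variable
        failed_when: false

      - name: set osd_pool_default_crush_rule fact
        ansible.builtin.set_fact:
          osd_pool_default_crush_rule: >-
            {{ crush_rule_variable.stdout.split('=')[-1] | trim
               if crush_rule_variable.rc == 0
               else ceph_osd_pool_default_crush_rule }}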
      
      Closes #5912
      
      Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
    • config: Always use osd_memory_target if set · 4d1fdd2b
      Gaudenz Steinlin authored
      
      The osd_memory_target variable was only used if it was higher than the
      value calculated from the number of OSDs. This is changed to always use
      the value if it is set in the configuration. This allows the value to be
      intentionally set lower, so that it does not have to be changed when
      more OSDs are added later.
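
      A minimal sketch of the new precedence (the fallback computation, the
      0.7 factor, and `num_osds` are illustrative, not the exact ceph-ansible
      logic):

      # Illustrative sketch: prefer an explicitly configured
      # osd_memory_target over a value derived from host memory and the
      # number of OSDs.
      - name: set osd_memory_target
        ansible.builtin.set_fact:
          _osd_memory_target: >-
            {{ osd_memory_target
               if osd_memory_target is defined
               else (ansible_facts['memtotal_mb'] * 1048576 * 0.7
                     / (num_osds | int)) | int }}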
      
      Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
  4. Nov 12, 2020
    • main: followup on pr 6012 · 2fa17520
      Guillaume Abrioux authored
      
      This tag can be set at the play level.
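
      For reference, a play-level tag is inherited by every task in the play,
      e.g. (the `always` tag here is purely an example, not necessarily the
      tag this commit refers to):

      # Illustrative example: a tag set at the play level applies to all
      # tasks in the play, so it does not need repeating on each task.
      - hosts: mons
        tags: always
        roles:
          - ceph-facts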
      
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
    • switch2container: disable ceph-osd enabled-runtime · fa2bb3af
      Dimitri Savineau authored
      When deploying Ceph OSDs via packages, the ceph-osd@.service unit is
      configured as enabled-runtime.
      This means that each ceph-osd service will inherit from that state.
      The enabled-runtime systemd state doesn't survive a reboot.
      For non-containerized deployments the OSDs still start after a reboot
      because the ceph-volume@.service and/or ceph-osd.target units are doing
      the job.
      
      $ systemctl list-unit-files|egrep '^ceph-(volume|osd)'|column -t
      ceph-osd@.service     enabled-runtime
      ceph-volume@.service  enabled
      ceph-osd.target       enabled
      
      When switching to a containerized deployment we stop/disable
      ceph-osd@XX.service, ceph-volume and ceph.target and then remove the
      systemd unit files.
      But the new systemd units for the containerized ceph-osd services will
      still inherit from the ceph-osd@.service unit file.

      As a consequence, if an OSD host reboots after the playbook execution,
      the ceph-osd services won't come back because they aren't enabled at
      boot.
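
      A minimal sketch of the corresponding remediation (module usage and the
      `osd_ids` variable are illustrative):

      # Illustrative sketch: persistently enable the containerized OSD units
      # so they start at boot instead of inheriting the enabled-runtime state.
      - name: enable containerized ceph-osd units at boot
        ansible.builtin.systemd:
          name: "ceph-osd@{{ item }}"
          enabled: true
          state: started
        loop: "{{ osd_ids }}"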
      
      This patch also adds a reboot and testinfra run after running the switch
      to container playbook.
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1881288

      Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>