1. 03 Mar, 2021 2 commits
  2. 01 Mar, 2021 1 commit
    • requirements.txt: Move the six dependency into the general requirements · 21e2675a
      Florian Haas authored
      
      
      config_template.py depends on six, which isn't listed in the default
      requirements.txt. In the past this frequently wasn't a problem, because
      six used to be a standard package installed into a venv anyway, since
      lots of other projects depended on it.
      
      It also gets installed for unit and integration tests via
      tests/requirements.txt, so a broken dependency on six wouldn't be
      detected by tox runs.
      
      However, as other projects and distributions have phased out Python 2.7
      support, the dependency on six has become less common. Thus, as long as
      ceph-ansible requires it for config_template.py, list it in the base
      requirements.
      Signed-off-by: Florian Haas <florian@citynetwork.eu>
      (cherry picked from commit d49ea981)
  3. 18 Feb, 2021 1 commit
  4. 14 Feb, 2021 1 commit
  5. 12 Feb, 2021 3 commits
  6. 11 Feb, 2021 2 commits
  7. 10 Feb, 2021 6 commits
  8. 09 Feb, 2021 1 commit
  9. 01 Feb, 2021 1 commit
  10. 29 Jan, 2021 1 commit
  11. 28 Jan, 2021 3 commits
    • rgw: avoid useless call to ceph-rgw · 78d9d9df
      Guillaume Abrioux authored
      
      
      Since `ceph-rgw` may be called from `ceph-handler` in some contexts, we
      should avoid rerunning it unnecessarily.
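      The log doesn't show the mechanism used to avoid the re-run; purely as an
      illustrative sketch (the guard variable and the condition below are
      assumptions, not the actual change), a role call can be skipped with a
      condition:

      ```
      # Illustrative sketch only: the variable name and the condition are
      # assumptions, not the actual change made by this commit.
      - name: apply ceph-rgw only if it has not already run in this play
        import_role:
          name: ceph-rgw
        when: not (ceph_rgw_role_applied | default(false))
      ```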
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit 86170816)
    • rgw: multisite refact · df987463
      Guillaume Abrioux authored
      Add the possibility to deploy an rgw multisite configuration with a mix
      of secondary and primary zones on the same rgw node.
      Before this change, all instances on a given node had to be either
      primary zones *or* secondary zones.

      Now you can define an rgw instance like the following:
      
      ```
      rgw_instances:
        - instance_name: 'rgw0'
          rgw_zonemaster: false
          rgw_zonesecondary: true
          rgw_zonegroupmaster: false
          rgw_realm: 'france'
          rgw_zonegroup: 'zonegroup-france'
          rgw_zone: paris-00
          radosgw_address: "{{ _radosgw_address }}"
          radosgw_frontend_port: 8080
          rgw_zone_user: jacques.chirac
          rgw_zone_user_display_name: "Jacques Chirac"
          system_access_key: P9Eb6S8XNyo4dtZZUUMy
          system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
          endpoint: http://192.168.101.12:8080
      ```
      
      Basically, it's now possible to define `rgw_zonemaster`,
      `rgw_zonesecondary` and `rgw_zonegroupmaster` at the instance
      level instead of at the node level.
      
      Also, this commit adds an option `deploy_secondary_zones` (default:
      `True`) which can be set to `False` in order to explicitly ask the
      playbook not to deploy secondary zones when the corresponding endpoints
      are not deployed yet.
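      As a sketch of the mix this enables (all values below are illustrative,
      not taken from the commit), the same node can now carry a primary and a
      secondary instance side by side, and secondary zone deployment can be
      deferred:

      ```
      # Illustrative host_vars sketch: every value here is an example.
      deploy_secondary_zones: false  # defer secondary zones until their endpoints exist
      rgw_instances:
        - instance_name: 'rgw0'      # primary zone instance
          rgw_zonemaster: true
          rgw_zonesecondary: false
          rgw_zonegroupmaster: true
          rgw_realm: 'france'
          rgw_zonegroup: 'zonegroup-france'
          rgw_zone: paris-00
          radosgw_address: "{{ _radosgw_address }}"
          radosgw_frontend_port: 8080
        - instance_name: 'rgw1'      # secondary zone instance on the same node
          rgw_zonemaster: false
          rgw_zonesecondary: true
          rgw_zonegroupmaster: false
          rgw_realm: 'usa'
          rgw_zonegroup: 'zonegroup-usa'
          rgw_zone: boston-00
          radosgw_address: "{{ _radosgw_address }}"
          radosgw_frontend_port: 8081
          endpoint: http://192.168.101.13:8081
      # zone user and system key fields omitted here for brevity
      ```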
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1915478
      
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit 71a5e666)
    • library: fix bug in radosgw_zone.py · 32ac4359
      Guillaume Abrioux authored
      
      
      If for some reason `get_zonegroup()` returns a failure, we must handle it
      and make the module exit properly instead of failing with the following
      Python traceback:
      
      ```
      Traceback (most recent call last):
        File "./AnsiballZ_radosgw_zone.py", line 247, in <module>
          _ansiballz_main()
        File "./AnsiballZ_radosgw_zone.py", line 234, in _ansiballz_main
          exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS)
        File "./AnsiballZ_radosgw_zone.py", line 202, in debug
          runpy.run_module(mod_name='ansible.modules.radosgw_zone', init_globals=None, run_name='__main__', alter_sys=True)
        File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
          return _run_module_code(code, init_globals, run_name, mod_spec)
        File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
          mod_name, mod_spec, pkg_name, script_name)
        File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
          exec(code, run_globals)
        File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 467, in <module>
          main()
        File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 463, in main
          run_module()
        File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 425, in run_module
          zonegroup = json.loads(_out)
        File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads
          return _default_decoder.decode(s)
        File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode
          obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode
          raise JSONDecodeError("Expecting value", s, err.value) from None
      json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
      
      ```
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit fedb3668)
  12. 22 Jan, 2021 1 commit
  13. 18 Jan, 2021 5 commits
  14. 13 Jan, 2021 1 commit
    • fs2bs: skip migration when a mix of fs and bs is detected · af95c34c
      Guillaume Abrioux authored
      Since the default of `osd_objectstore` changed as of 3.2, some
      deployments might have a mix of filestore and bluestore OSDs on the same
      node. In some specific cases, a filestore OSD may even share a journal/db
      device with a bluestore OSD. We shouldn't try to redeploy in this context
      because ceph-volume will complain (either because a partition can't be
      passed in lvm batch mode, or about the GPT header).
      The safest option is to skip the migration on the node when such a mix is
      detected, or to force the migration of all OSDs, including those already
      using bluestore (the option `force_filestore_to_bluestore=True` has to be
      passed as an extra var, as sketched below).
      If all OSDs are using filestore, they will all be migrated to bluestore.
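      For instance, a minimal extra-vars file could look like the sketch below;
      the file name and the idea of passing it with `-e @force-migration.yml`
      are assumptions, only the variable name comes from this change:

      ```
      # force-migration.yml -- illustrative extra-vars sketch
      # Forces the filestore-to-bluestore migration even on nodes that already
      # mix filestore and bluestore OSDs.
      force_filestore_to_bluestore: true
      ```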
      
      Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1875777
      
      Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
      (cherry picked from commit e66f12d1)
  15. 11 Jan, 2021 1 commit
  16. 06 Jan, 2021 3 commits
  17. 16 Dec, 2020 4 commits
  18. 15 Dec, 2020 2 commits
  19. 14 Dec, 2020 1 commit