- 29 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
If multiple realms were deployed with several instances belonging to the same realm and zone using the same port on different nodes, the service id expected by cephadm will be the same and therefore only one service will be deployed. We need to create a service called `<node>.<realm>.<zone>.<port>` to make sure the service name is unique and the service is deployed on the expected node, in order to preserve backward compatibility with the rgw instances that were deployed with ceph-ansible. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 31311b03)
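
A minimal sketch of the naming scheme this commit describes, assuming ceph-ansible-style variables such as `rgw_realm`, `rgw_zone` and `radosgw_frontend_port`; the exact `ceph orch apply rgw` arguments vary between Ceph releases, so treat this as an illustration rather than the playbook's actual task.

```yaml
# Build a per-node service id so cephadm treats every rgw instance as a
# distinct service, even when several of them share realm, zone and port.
- name: build a unique rgw service name
  set_fact:
    rgw_service_name: "{{ ansible_facts['hostname'] }}.{{ rgw_realm }}.{{ rgw_zone }}.{{ radosgw_frontend_port }}"

- name: apply the rgw service spec with that unique id
  command: "ceph orch apply rgw {{ rgw_service_name }} --placement={{ ansible_facts['hostname'] }}"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```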
-
- 24 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
We need to support rgw multisite deployments. This commit makes the adoption playbook support this kind of deployment. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit fc784fc4)
-
- 14 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
When no `[mgrs]` group is defined in the inventory, mgr daemons are implicitly collocated with monitors. This task currently relies on the length of the mgr group in order to tell cephadm how many mgr daemons to deploy. If there's no `[mgrs]` group defined in the inventory, it will ask cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw an error. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f9a73149)
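
A sketch of the fallback idea, assuming the usual ceph-ansible group names `mgrs` and `mons`; this is not the playbook's exact task.

```yaml
# Fall back to the monitor group when no [mgrs] group is defined, so cephadm
# is never asked to deploy 0 mgr daemons.
- name: compute the mgr placement hosts
  set_fact:
    mgr_hosts: "{{ groups['mgrs'] | default(groups['mons'], true) }}"

- name: ask cephadm to deploy the mgr daemons
  command: "ceph orch apply mgr --placement={{ mgr_hosts | length }}"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```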
-
- 29 Apr, 2021 1 commit
-
-
Guillaume Abrioux authored
ceph-ansible leaves a ceph-crash container behind in containerized deployments. It means we end up with 2 ceph-crash containers running after the migration playbook is complete. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1954614 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 22c18e82)
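
A sketch of the cleanup idea, assuming the legacy unit is named `ceph-crash@<hostname>` as in ceph-ansible's containerized deployments; the unit name and task layout are illustrative.

```yaml
# Stop and disable the ceph-crash container started by ceph-ansible so only
# the cephadm-managed instance keeps running after the adoption.
- name: stop and disable the legacy ceph-crash service
  systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    state: stopped
    enabled: false
  failed_when: false
```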
-
- 27 Apr, 2021 2 commits
-
-
Guillaume Abrioux authored
Due to a recent breaking change in ceph, this command must be modified to add the <svc_id> parameter. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 1f40c125)
-
Guillaume Abrioux authored
When migrating from a cluster with no MDS nodes deployed, `{{ cephfs_data_pool.name }}` doesn't exist so we need to create a pool for storing nfs export objects. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1950403 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit bb7d37fb)
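
A sketch under assumptions: the pool name `nfs-ganesha`, the application tag and the use of ceph-ansible's `ceph_pool` module are placeholders, not necessarily what the playbook does.

```yaml
# Create a dedicated pool for the nfs export objects when the cluster was
# deployed without MDS nodes (so no cephfs data pool exists).
- name: create a pool for the nfs export objects
  ceph_pool:
    name: nfs-ganesha
    application: nfs
  when: groups['mdss'] | default([]) | length == 0
```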
-
- 12 Apr, 2021 3 commits
-
-
Guillaume Abrioux authored
This commit adds the nfs-ganesha adoption support in the `cephadm-adopt.yml` playbook. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944504 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit a9220654)
-
Guillaume Abrioux authored
the adoption playbook should use `radosgw_num_instances` in order to determine how many rgw instances it should recreate. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1943170 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 1ffc4df6)
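
A sketch of the loop this implies, assuming the `rgw.<hostname>.rgw<N>` instance naming used by ceph-ansible; the daemon name format and the module call are illustrative.

```yaml
# Recreate one rgw service per configured instance instead of assuming a
# single instance per node.
- name: adopt every rgw instance on this node
  cephadm_adopt:
    name: "rgw.{{ ansible_facts['hostname'] }}.rgw{{ item }}"
  loop: "{{ range(0, radosgw_num_instances | int) | list }}"
```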
-
Guillaume Abrioux authored
This play does nothing other than stopping/removing rgw daemons. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit ee44d860)
-
- 25 Mar, 2021 1 commit
-
-
Alex Schultz authored
It has come to our attention that using ansible_* vars that are populated with INJECT_FACTS_AS_VARS=True is not very performant. In order to be able to support setting that to off, we need to update the references to use ansible_facts[<thing>] instead of ansible_<thing>. Related: ansible#73654 Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1935406 Signed-off-by:
Alex Schultz <aschultz@redhat.com> (cherry picked from commit a7f2fa73)
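
A before/after sketch of the reference change; the fact names are just examples.

```yaml
# Old style: relies on INJECT_FACTS_AS_VARS=True injecting ansible_* variables.
- name: print facts the old way
  debug:
    msg: "{{ ansible_hostname }} {{ ansible_default_ipv4['address'] }}"

# New style: reads from the ansible_facts dictionary and keeps working when
# fact injection is turned off.
- name: print facts the new way
  debug:
    msg: "{{ ansible_facts['hostname'] }} {{ ansible_facts['default_ipv4']['address'] }}"
```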
-
- 18 Mar, 2021 2 commits
-
-
Guillaume Abrioux authored
This is a follow-up on PR #6332; the cephadm-adopt.yml playbook is affected by the same bug. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1938658 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit af95595c)
-
Guillaume Abrioux authored
This commit makes the playbook fetch the minimal current ceph configuration and write it on the monitoring nodes so `cephadm` can proceed with the adoption. When a monitoring stack was deployed on a dedicated node, no `ceph.conf` file was written there, and `cephadm` requires a `ceph.conf` in order to adopt the daemon present on the node. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1939887 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit b445df04)
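
A sketch of the mechanism, assuming `ceph config generate-minimal-conf` is used to produce the content; the task names and layout are illustrative, not the playbook's exact implementation.

```yaml
# Grab a minimal ceph.conf from a monitor and write it on the monitoring
# node so cephadm has the configuration it needs to adopt the daemons there.
- name: fetch the minimal ceph configuration
  command: ceph config generate-minimal-conf
  register: minimal_ceph_conf
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: false

- name: write the minimal ceph.conf on the monitoring node
  copy:
    dest: /etc/ceph/ceph.conf
    content: "{{ minimal_ceph_conf.stdout }}\n"
    mode: '0644'
```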
-
- 10 Feb, 2021 1 commit
-
-
Dimitri Savineau authored
This was fixed by [1][2] [1] https://tracker.ceph.com/issues/45120 [2] https://github.com/ceph/ceph/commit/252d4b30 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- 03 Feb, 2021 1 commit
-
-
Dimitri Savineau authored
There's no reason not to use the ceph_osd_flag module to set/unset osd flags. Also, if there are no OSD nodes in the inventory then we don't need to execute the set/unset play. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
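
A sketch of the module usage (the `ceph_osd_flag` module ships with ceph-ansible); the flag choice and task names are illustrative.

```yaml
# Set the flag before touching the OSDs on a node, unset it afterwards.
- name: set the noout flag
  ceph_osd_flag:
    name: noout
    state: present

- name: unset the noout flag
  ceph_osd_flag:
    name: noout
    state: absent
```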
-
- 29 Jan, 2021 2 commits
-
-
Dimitri Savineau authored
When rerunning the cephadm-adopt.yml playbook, the radosgw realm, zonegroup and zone tasks will fail because they aren't idempotent. Using the radosgw ansible modules solves that problem. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
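
A sketch of the idempotent replacement, assuming the `radosgw_realm`, `radosgw_zonegroup` and `radosgw_zone` modules shipped with ceph-ansible; the names and values are placeholders.

```yaml
# These modules check for an existing realm/zonegroup/zone before creating
# one, so rerunning the playbook no longer fails.
- name: create the rgw realm
  radosgw_realm:
    name: myrealm
    default: true

- name: create the rgw zonegroup
  radosgw_zonegroup:
    name: myzonegroup
    realm: myrealm
    default: true

- name: create the rgw zone
  radosgw_zone:
    name: myzone
    realm: myrealm
    zonegroup: myzonegroup
    default: true
```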
-
Dimitri Savineau authored
If the cephadm-adopt.yml playbook fails during the first execution and some daemons have already been adopted by cephadm, then we can't rerun the playbook because the old containers won't exist anymore:

Error: no container with name or ID ceph-mon-xxx found: no such container

Once a daemon is adopted, the old systemd unit doesn't exist anymore, so any call to that unit with systemd will fail. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1918424 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
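
One way to make the stop/disable step rerunnable, sketched here with `service_facts`; the unit name is an assumption and the real playbook may guard the tasks differently.

```yaml
# Only touch the legacy unit if systemd still knows about it, so a rerun
# after a partial adoption doesn't fail on already-adopted daemons.
- name: gather the service facts
  service_facts:

- name: stop the legacy ceph-mon unit only when it still exists
  systemd:
    name: "ceph-mon@{{ ansible_facts['hostname'] }}"
    state: stopped
    enabled: false
  when: "('ceph-mon@' ~ ansible_facts['hostname'] ~ '.service') in ansible_facts['services']"
```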
-
- 18 Jan, 2021 1 commit
-
-
Dimitri Savineau authored
The grafana group conversion task wasn't present in the cephadm-adopt.yml playbook. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1917530 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- 16 Dec, 2020 1 commit
-
-
Dimitri Savineau authored
Instead of iterating over the host list to add the node/label to the orchestrator host configuration, we can do it in parallel. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
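
A sketch of the shape of the change: run the task on every node simultaneously and only delegate the orchestrator call to a monitor, instead of looping over the inventory from a single host. The label source (`group_names`) is an assumption.

```yaml
# Executed on all hosts in parallel; only the ceph commands themselves are
# delegated to the first monitor.
- name: add the host to the orchestrator
  command: "ceph orch host add {{ ansible_facts['hostname'] }}"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"

- name: label the host with its inventory groups
  command: "ceph orch host label add {{ ansible_facts['hostname'] }} {{ item }}"
  loop: "{{ group_names }}"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```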
-
- 02 Dec, 2020 1 commit
-
-
Dimitri Savineau authored
This adds the cephadm_adopt ansible module, replacing the command module usage with the cephadm adopt command. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
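
A sketch of the module call that replaces the raw command, using a monitor daemon as the example; the parameter set shown is minimal and may differ from the module's full interface.

```yaml
# Equivalent in spirit to running: cephadm adopt --style legacy --name mon.<hostname>
- name: adopt the mon daemon with the cephadm_adopt module
  cephadm_adopt:
    name: "mon.{{ ansible_facts['hostname'] }}"
```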
-
- 01 Dec, 2020 1 commit
-
-
Dimitri Savineau authored
We should always use the ceph_volume ansible module when possible. This patch replaces the ceph-volume inventory and lvm {list,zap} commands called via the command/shell modules with the corresponding calls to the ceph_volume module. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
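
A sketch of the replacement calls using the `ceph_volume` module; the device path is a placeholder.

```yaml
# List the LVM-based OSDs instead of shelling out to "ceph-volume lvm list".
- name: list the ceph-volume lvm OSDs
  ceph_volume:
    action: list
  register: ceph_volume_lvm_list

# Zap a device instead of running "ceph-volume lvm zap" via the command module.
- name: zap an OSD device
  ceph_volume:
    action: zap
    data: /dev/sdb
```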
-
- 30 Nov, 2020 1 commit
-
-
Dimitri Savineau authored
This adds the ceph_mgr_module ansible module, replacing the command module usage with the ceph mgr module enable/disable commands. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
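
A sketch of the module usage; the mgr module name is just an example.

```yaml
# Replaces: ceph mgr module enable cephadm
- name: enable the cephadm mgr module
  ceph_mgr_module:
    name: cephadm
    state: enable
```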
-
- 23 Nov, 2020 1 commit
-
-
Guillaume Abrioux authored
Ignore 302, 303 and 505 errors:

[302] Using command rather than an argument to e.g. file
[303] Using command rather than module
[505] Referenced files must exist

They aren't relevant on these tasks. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
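
For illustration, a task-level skip of one of those rules (ansible-lint still used numeric rule ids at the time); the task itself is a made-up example.

```yaml
- name: get the cluster fsid  # noqa 303
  command: ceph fsid
  changed_when: false
```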
-
- 03 Nov, 2020 2 commits
-
-
Dimitri Savineau authored
The ceph status command returns a lot of information stored in variables and/or facts which could consume resources for nothing. When checking the quorum status, we're only using the quorum_names structure in the ceph status output. To optimize this, we could use the ceph quorum_status command which contains the same needed information. This command returns less information.

$ ceph status -f json | wc -c
2001
$ ceph quorum_status -f json | wc -c
957

$ time ceph status -f json > /dev/null
real 0m0.577s
user 0m0.538s
sys 0m0.029s

$ time ceph quorum_status -f json > /dev/null
real 0m0.544s
user 0m0.527s
sys 0m0.016s

Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
The ceph status command returns a lot of information stored in variables and/or facts which could consume resources for nothing. When checking the pgs state, we're using the pgmap structure in the ceph status output. To optimize this, we could use the ceph pg stat command which contains the same needed information. This command returns less information (only about pgs) and is slightly faster than the ceph status command.

$ ceph status -f json | wc -c
2000
$ ceph pg stat -f json | wc -c
240

$ time ceph status -f json > /dev/null
real 0m0.529s
user 0m0.503s
sys 0m0.024s

$ time ceph pg stat -f json > /dev/null
real 0m0.426s
user 0m0.409s
sys 0m0.016s

The data returned by the ceph status is even bigger when using the nautilus release.

$ ceph status -f json | wc -c
35005
$ ceph pg stat -f json | wc -c
240

Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- 02 Nov, 2020 1 commit
-
-
Dimitri Savineau authored
Instead of using the ceph auth get command via the ansible command module, we can use the ceph_key module with the info state. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
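
A sketch of the `ceph_key` call, with `client.admin` as a placeholder key name.

```yaml
# Replaces: ceph auth get client.admin
- name: get the client.admin key
  ceph_key:
    name: client.admin
    state: info
  register: client_admin_key
```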
-
- 29 Sep, 2020 1 commit
-
-
Guillaume Abrioux authored
This changes the default value of the grafana-server group name. Some tasks are added in ceph-defaults in order to keep backward compatibility. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
- 20 Jul, 2020 1 commit
-
-
Dimitri Savineau authored
Set the cephadm cmd as a fact instead of rewriting the same command over and over. This also fixes an issue when using docker as the container engine because the --docker cephadm parameter should be used before the subcommand, not after. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
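
A sketch of the fact, assuming the `container_binary` variable from ceph-ansible; the point is that `--docker` goes right after `cephadm`, before any subcommand.

```yaml
- name: set the cephadm command as a fact
  set_fact:
    cephadm_cmd: "cephadm {{ '--docker' if container_binary == 'docker' else '' }}"

# Later tasks can then simply reuse the fact, for example:
- name: check the cluster status through cephadm
  command: "{{ cephadm_cmd }} shell -- ceph -s"
  changed_when: false
```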
-
- 16 Jul, 2020 1 commit
-
-
Dimitri Savineau authored
This is a partial revert of b38019e3 because we don't want to execute the whole play on the monitor, otherwise if we have some empty group like rgws or mdss then the orchestrator commands will still be executed. Instead we should keep the real target group name at play level and delegate the orchestrator commands to the monitor. The whole play will be skipped if the group is empty. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
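
A skeleton of the structure described above; the service spec shown is just an example.

```yaml
# The play targets the real group, so it is skipped entirely when the group
# is empty; only the orchestrator command is delegated to a monitor.
- hosts: rgws
  gather_facts: false
  become: true
  tasks:
    - name: apply the rgw service spec
      command: "ceph orch apply rgw default --placement={{ inventory_hostname }}"
      changed_when: false
      delegate_to: "{{ groups['mons'][0] }}"
```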
-
- 15 Jul, 2020 3 commits
-
-
Dimitri Savineau authored
Print a message at the end of the playbook to inform users that they don't have to use ceph-ansible playbooks anymore, as everything else needs to be done via cephadm (day 2 operations). Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
When reporting the orchestrator service/daemon list at the end of the playbook, we can use the --refresh option, otherwise we could have outdated output. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
This reverts commit c3bbc6b1 . Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- 13 Jul, 2020 5 commits
-
-
Dimitri Savineau authored
After adopting a monitor we need to wait for that monitor to rejoin the quorum before moving to the next node. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
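
A sketch of the wait loop; the retry counts and the way the monitor name is derived are illustrative.

```yaml
# Poll the quorum until the freshly adopted monitor shows up again.
- name: wait for the monitor to rejoin the quorum
  command: ceph quorum_status --format json
  register: quorum_status
  until: ansible_facts['hostname'] in (quorum_status.stdout | from_json)['quorum_names']
  retries: 30
  delay: 10
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```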
-
Dimitri Savineau authored
Like the rolling_update or switch2container playbooks, we need to set/unset some osd flags before and after the OSD daemons adoption. This also adds a task for waiting for clean pgs at the end of each OSD node. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
The iSCSI support has been added recently in cephadm. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
At the end of the process we don't need the cephadm script anymore. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
At the end of the playbook we can show the orchestrator status like we do with the ceph status in initial deployment. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
- 10 Jul, 2020 4 commits
-
-
Dimitri Savineau authored
It's better to use the --placement parameter when using ceph orch apply commands to avoid confusion in the parameters. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
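
An example of the explicit form; the count and label are placeholders.

```yaml
- name: apply the mon service with an explicit placement
  command: "ceph orch apply mon --placement='3 label:mon'"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```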
-
Dimitri Savineau authored
cephadm uses default values for the dashboard container images, which need to be customized by ansible for upstream or downstream purposes. This feature wasn't present when cephadm-adopt.yml was designed. Also set the container_image_base variable for upgrade purposes. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
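
A sketch of how such an override can be pushed to cephadm, using the grafana image as an example; the `grafana_container_image` variable name is taken from ceph-ansible defaults and the config key is the mgr/cephadm option name, so treat both as assumptions.

```yaml
- name: set the grafana container image used by cephadm
  command: "ceph config set mgr mgr/cephadm/container_image_grafana {{ grafana_container_image }}"
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```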
-
Dimitri Savineau authored
It looks like we can't run the ceph orch apply commands on nodes other than monitors even if it used to work in the past. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
If the systemd service exits successfully then we don't need to reset the failed state. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-