- 09 Jul, 2021 3 commits
-
-
Neelaksh Singh authored
Fixes: #6529 Signed-off-by:
Neelaksh Singh <neelaksh48@gmail.com> (cherry picked from commit d18a9860)
-
Guillaume Abrioux authored
Add missing 'osd' command. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 4eb4268d)
-
Guillaume Abrioux authored
After an upgrade, the presence of straw buckets will produce the following warning (HEALTH_WARN): ``` crush map has legacy tunables (require firefly, min is hammer) ``` Because the straw bucket type is a firefly-era feature, such buckets need to be converted to straw2. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967964 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit eee57647)
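A minimal sketch of the kind of task that could perform this conversion; `ceph osd crush set-all-straw-buckets-to-straw2` is the upstream Ceph command for it, while the variable names (`container_exec_cmd`, `mon_group_name`) follow ceph-ansible conventions and are assumptions here:

```
# sketch only: run the straw -> straw2 conversion once, from the first monitor
- name: convert legacy straw buckets to straw2
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster | default('ceph') }} osd crush set-all-straw-buckets-to-straw2"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```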
-
- 07 Jul, 2021 1 commit
-
-
Guillaume Abrioux authored
When deploying the dashboard with SSL certificates generated by ceph-ansible, we enforce the CN to 'ceph-dashboard', which can make applications such as Alertmanager complain with errors like: `err="Post https://mgr0:8443/api/prometheus_receiver: x509: certificate is valid for ceph-dashboard, not mgr0" context_err="context deadline exceeded"` The idea here is to add subject alternative names matching all mgr/mon instances in the certificate so this error won't appear in the logs. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 72a0336c)
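A hedged sketch of generating such a self-signed certificate with SANs covering the mgr/mon hosts; the host names, file paths, and key size are illustrative, and the `-addext` option requires OpenSSL 1.1.1+:

```
# sketch only: illustrative host names and paths
- name: generate a self-signed dashboard certificate with SANs for all mgr/mon hosts
  command: >
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650
    -keyout /etc/ceph/ceph-dashboard.key
    -out /etc/ceph/ceph-dashboard.crt
    -subj '/CN=ceph-dashboard'
    -addext "subjectAltName=DNS:mgr0,DNS:mgr1,DNS:mon0,DNS:mon1"
  run_once: true
```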
-
- 06 Jul, 2021 2 commits
-
-
Dimitri Savineau authored
The dashboard/monitoring stack can be deployed via the dashboard_enabled variable, but there's nothing similar if we want to remove only that part and keep the Ceph cluster up and running: the current purge playbooks remove everything. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786691 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 8e4ef7d6)
-
Guillaume Abrioux authored
This introduces a new variable `dashboard_network` in order to support deploying the dashboard on a different subnet. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1927574 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f4f73b61)
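For example, in group_vars (the subnet value is illustrative):

```
# group_vars/all.yml -- illustrative values
dashboard_enabled: true
dashboard_network: 192.168.50.0/24   # subnet the dashboard should bind on
```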
-
- 05 Jul, 2021 1 commit
-
-
Dimitri Savineau authored
The ceph crash install checkpoint callback was missing in the main playbooks. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 993d06c4)
-
- 03 Jul, 2021 1 commit
-
-
Guillaume Abrioux authored
Add any_errors_fatal: true in the cephadm-adopt playbook. We should stop the playbook execution when a task throws an error, otherwise it can lead to unexpected behavior. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1976179 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 3b804a61)
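A sketch of the play-level setting (the play name, host pattern, and task are illustrative):

```
# sketch only: abort the whole playbook as soon as any host fails a task
- name: adopt ceph daemons with cephadm
  hosts: all
  become: true
  any_errors_fatal: true
  tasks:
    - name: example task
      command: /bin/true
```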
-
- 02 Jul, 2021 9 commits
-
-
Dimitri Savineau authored
Instead of repeating the condition 'inventory_hostname in groups[osds]' on each device facts task, we can move all the tasks into a dedicated file and set the condition on the import_tasks statement. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit d704b05e)
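The pattern looks roughly like this; the file name and group name are illustrative of the ceph-ansible layout, and with `import_tasks` the `when` condition is applied to every imported task:

```
# sketch only: gate the whole devices facts file on the osds group membership
- name: import devices facts
  import_tasks: devices.yml
  when: inventory_hostname in groups.get('osds', [])
```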
-
Dimitri Savineau authored
We currently don't check that the logical volumes used in the lvm_volumes list for either bluestore data/db/wal or filestore data/journal exist. We only do this on raw devices for the batch scenario. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 55bca07c)
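A hedged sketch of what such a validation could look like for the data LVs of each lvm_volumes entry (the `data`/`data_vg` keys follow ceph-ansible's lvm_volumes format; task names are assumptions):

```
# sketch only: fail early if a logical volume listed in lvm_volumes doesn't exist
- name: check that the data logical volumes exist
  stat:
    path: "/dev/{{ item.data_vg }}/{{ item.data }}"
  register: lvm_volumes_data
  loop: "{{ lvm_volumes }}"
  when: item.data_vg is defined

- name: fail if a logical volume is missing
  fail:
    msg: "LV {{ item.item.data_vg }}/{{ item.item.data }} does not exist"
  loop: "{{ lvm_volumes_data.results }}"
  when: item.skipped is not defined and not item.stat.exists
```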
-
Dimitri Savineau authored
When using dedicated devices for the db/journal/wal objectstore with ceph-volume lvm batch, we should also validate that those devices exist and don't use a GPT partition table, in addition to the devices and lvm_volume.data variables. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 808e7106)
-
Dimitri Savineau authored
Instead of using the findmnt command to find the device associated with the root mount point, we can use the ansible_mounts fact. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 7e50380f)
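A sketch of reading the root device from the ansible_mounts fact instead of shelling out to findmnt (`root_device` is a hypothetical fact name used for illustration):

```
# sketch only: pick the device whose mount point is /
- name: get the device behind the root mountpoint
  set_fact:
    root_device: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | map(attribute='device') | first }}"
```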
-
Dimitri Savineau authored
This is already done in the ceph-facts role. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 0df99dda)
-
Dimitri Savineau authored
Instead of doing two parted calls we can first check whether the device exists and then test the partition table. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 14d458b3)
-
Dimitri Savineau authored
2888c082 introduced a regression as the check_devices tasks file was only included based on the devices variable, but that file also validates some devices from the lvm_volumes variable. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1906022 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit ac0342b7)
-
Dimitri Savineau authored
The prometheus service isn't binding on localhost. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1933560 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 1d568186)
-
Guillaume Abrioux authored
This adds the monitoring group to the "final cleanup play" so any cid files generated are properly removed when purging the cluster. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1974536 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 037d8cd0)
-
- 01 Jul, 2021 1 commit
-
-
Dimitri Savineau authored
All ceph daemons need to have the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES environment variable set to 128MB by default in container setup. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970913 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 9758e3c5)
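128 MB is 134217728 bytes; a hedged sketch of passing the variable to a containerized daemon (the container engine invocation, image reference, and container name are illustrative, not the exact ceph-ansible template):

```
# sketch only: image reference and container name are assumptions
- name: run a containerized ceph-osd with a 128MB tcmalloc thread cache
  command: >
    podman run -d --name ceph-osd-0
    -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
    quay.io/ceph/daemon:latest
```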
-
- 30 Jun, 2021 9 commits
-
-
Guillaume Abrioux authored
There's no benefit to gathering facts again on each play in rolling_update.yml. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 2c77d009)
-
Guillaume Abrioux authored
When calling the `ceph_key` module with `state: info`, if the underlying ceph command fails, the actual error is hidden by the module, which makes it pretty difficult to troubleshoot. The current code always states that if rc is not equal to 0 the keyring doesn't exist. `state: info` should always return the actual rc, stdout and stderr. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1964889 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d58500ad)
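With the fix, a caller can inspect the real return code instead of assuming a non-zero rc means a missing keyring; a hedged usage sketch (registered variable and task names are illustrative):

```
# sketch only: query a keyring and look at what the ceph command actually returned
- name: fetch info about the client.admin keyring
  ceph_key:
    name: client.admin
    state: info
    cluster: ceph
  register: admin_key_info
  failed_when: false

- name: show the actual rc and stderr
  debug:
    msg: "rc={{ admin_key_info.rc }} stderr={{ admin_key_info.stderr }}"
```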
-
Boris Ranto authored
It was requested for us to update our alerting definitions to include a slow OSD Ops health check. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1951664 Signed-off-by:
Boris Ranto <branto@redhat.com> (cherry picked from commit 2491d4e0)
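A hedged sketch of such a Prometheus alerting rule, assuming the `ceph_healthcheck_slow_ops` metric exposed by the mgr prometheus module; the group name, threshold, duration, and labels are illustrative:

```
# sketch only: fire a warning while the SLOW_OPS health check reports pending requests
groups:
  - name: osd-alerts
    rules:
      - alert: CephOSDSlowOps
        expr: ceph_healthcheck_slow_ops > 0
        for: 30s
        labels:
          severity: warning
        annotations:
          description: "OSD requests are taking too long to process (SLOW_OPS health check)."
```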
-
Dimitri Savineau authored
We need to set the ceph_stable_release variable during the switch2container playbook. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
This adds the ceph-validate role before starting the switch to a containerized deployment. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1968177 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit fc160b3b)
-
Guillaume Abrioux authored
Let's drop py3.6 and py3.7 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d191ba38)
-
Guillaume Abrioux authored
This adds a GitHub workflow for checking the Signed-off-by line in commit messages. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8c094975)
-
Guillaume Abrioux authored
Let's use a GitHub workflow for checking default values. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d71db816)
-
Guillaume Abrioux authored
This adds the ansible --syntax-check test to the ansible-lint workflow. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 5ed423ad)
-
- 29 Jun, 2021 3 commits
-
-
Guillaume Abrioux authored
This inventory isn't used anywhere. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 304d1cbb)
-
Guillaume Abrioux authored
Do not rely on the inventory aliases in order to check if the selected manager to be removed is present. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967897 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 26a7256c)
-
Guillaume Abrioux authored
If multiple realms were deployed with several instances belonging to the same realm and zone, using the same port on different nodes, the service id expected by cephadm will be the same and therefore only one service will be deployed. We need to create a service called `<node>.<realm>.<zone>.<port>` to make sure the service name is unique and deployed on the expected node, in order to preserve backward compatibility with the rgw instances that were deployed with ceph-ansible. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 31311b03)
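A hedged sketch of the orchestrator call with such a unique service id; the node/realm/zone/port values and placement are illustrative, and the exact `ceph orch apply rgw` flags may differ by release:

```
# sketch only: service id built as <node>.<realm>.<zone>.<port>, values are illustrative
- name: create a uniquely named rgw service for cephadm
  command: >
    ceph orch apply rgw rgw0.france.paris.8080
    --realm=france --zone=paris --port=8080
    --placement=rgw0
  run_once: true
```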
-
- 24 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
We need to support rgw multisite deployments. This commit makes the adoption playbook support this kind of deployment. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit fc784fc4)
-
- 17 Jun, 2021 2 commits
-
-
VasishtaShastry authored
The playbook was failing with: msg: 'Could not find the requested service lvmetad: host' Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040 Signed-off-by:
VasishtaShastry <vipin.indiasmg@gmail.com> (cherry picked from commit e49c38f8)
-
Guillaume Abrioux authored
When running the switch-to-containers playbook with multisite enabled, the fact "rgw_instances" is only set for the node being processed (serial: 1); as a consequence, the set_fact of 'rgw_instances_all' can't iterate over all rgw nodes in order to look up each 'rgw_instances_host'. Adding a condition checking whether hostvars[item]["rgw_instances_host"] is defined fixes this issue. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967926 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8279d14d)
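A sketch of the guarded aggregation; variable names follow the commit message, while the group name and exact expression are assumptions:

```
# sketch only: skip hosts whose rgw_instances_host fact hasn't been set yet
- name: set_fact rgw_instances_all
  set_fact:
    rgw_instances_all: "{{ rgw_instances_all | default([]) + hostvars[item]['rgw_instances_host'] }}"
  loop: "{{ groups.get('rgws', []) }}"
  when: hostvars[item]['rgw_instances_host'] is defined
```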
-
- 16 Jun, 2021 2 commits
-
-
Guillaume Abrioux authored
There's no need to copy this keyring when using nfs with mds. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8dbee998)
-
Guillaume Abrioux authored
Needed for the update job in the stable-6.0 branch. The upgrade from either Nautilus or Octopus to Pacific isn't supported when nfs/rgw is deployed. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-
- 14 Jun, 2021 2 commits
-
-
Guillaume Abrioux authored
When no `[mgrs]` group is defined in the inventory, mgr daemons are implicitly collocated with monitors. This task currently relies on the length of the mgr group in order to tell cephadm how many mgr daemons to deploy. If there's no `[mgrs]` group defined in the inventory, it will ask cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw an error. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f9a73149)
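A hedged sketch of how such a fallback could be expressed; the group variable names follow ceph-ansible conventions, and the fact name is hypothetical:

```
# sketch only: fall back to the monitor count when no [mgrs] group exists
- name: compute the number of mgr daemons cephadm should deploy
  set_fact:
    ceph_mgr_count: "{{ groups.get(mgr_group_name, []) | length if groups.get(mgr_group_name, []) | length > 0 else groups.get(mon_group_name, []) | length }}"
```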
-
Guillaume Abrioux authored
When monitors and rgw are collocated with multisite enabled, the rolling_update playbook fails because during the workflow we run some radosgw-admin commands very early on the first mon; when that mon is the monitor being upgraded, the container doesn't exist since it was stopped. This block is only relevant for scaling out rgw daemons or for the initial deployment; in the rolling_update workflow it is not needed, so let's skip it. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970232 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f7166ccc)
-
- 11 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
The CentOS 8.4 vagrant image is available at https://cloud.centos.org, let's use it. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit c2aaa96f)
-
- 07 Jun, 2021 1 commit
-
-
Guillaume Abrioux authored
When using grafana behind https, `cookie_secure` should be set to `true`. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1966880 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 4daed1f1)
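A hedged sketch of setting it in grafana.ini via Ansible; `cookie_secure` lives in grafana's `[security]` section, while the task itself and the `dashboard_protocol` condition are illustrative of the change rather than the exact implementation:

```
# sketch only: enable secure cookies when grafana is served over https
- name: enable cookie_secure in grafana.ini
  ini_file:
    path: /etc/grafana/grafana.ini
    section: security
    option: cookie_secure
    value: "true"
  when: dashboard_protocol == "https"
```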
-
- 26 May, 2021 1 commit
-
-
Guillaume Abrioux authored
During the backport of c8b92deba10c0b6e0ebcb0e31315b1e6174fdc0c, the pattern should have been s/monitoring_group_name/grafana_server_group_name/. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1964907 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-