- 29 Jun, 2021 1 commit
Guillaume Abrioux authored
If multiple realms are deployed with several instances belonging to the same realm and zone, using the same port on different nodes, the service id expected by cephadm will be the same, and therefore only one service will be deployed. We need to name the service `<node>.<realm>.<zone>.<port>` to make sure the service name is unique and the service is deployed on the expected node, in order to preserve backward compatibility with the rgw instances that were deployed with ceph-ansible.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 31311b03)
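A minimal sketch of what building such a unique service name could look like when asking the orchestrator to deploy; the `rgw_instances` items and their `rgw_realm`/`rgw_zone`/`radosgw_frontend_port` keys are assumptions, not the playbook's exact variable names:
```yaml
# Hypothetical sketch: build a per-node service id so that instances of the
# same realm/zone running on different nodes no longer collide in cephadm.
- name: adopt rgw daemons under a unique service name (sketch)
  command: >
    ceph orch apply rgw
    {{ ansible_facts['hostname'] }}.{{ item.rgw_realm }}.{{ item.rgw_zone }}.{{ item.radosgw_frontend_port }}
  loop: "{{ rgw_instances }}"
```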
- 24 Jun, 2021 1 commit
Guillaume Abrioux authored
We need to support rgw multisite deployments. This commit makes the adoption playbook support this kind of deployment.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit fc784fc4)
- 17 Jun, 2021 2 commits
Guillaume Abrioux authored
There's no need to copy this keyring when using nfs with mds.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8dbee998)
Guillaume Abrioux authored
When running the switch-to-containers playbook with multisite enabled, the fact `rgw_instances` is only set for the node being processed (serial: 1). As a consequence, the `set_fact` of `rgw_instances_all` can't iterate over all rgw nodes in order to look up each `rgw_instances_host`. Adding a condition checking whether hostvars[item]["rgw_instances_host"] is defined fixes this issue.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967926
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8279d14d)
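A sketch of the guarded aggregation; the fact names come from the message, while `rgw_group_name` is assumed to follow the usual ceph-ansible group-name convention:
```yaml
# Only aggregate from nodes where the per-host fact was actually set; with
# serial: 1 the nodes not yet processed don't have it.
- name: set_fact rgw_instances_all (sketch)
  set_fact:
    rgw_instances_all: "{{ rgw_instances_all | default([]) + hostvars[item]['rgw_instances_host'] }}"
  loop: "{{ groups.get(rgw_group_name, []) }}"
  when: hostvars[item]['rgw_instances_host'] is defined
```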
- 16 Jun, 2021 3 commits
VasishtaShastry authored
The playbook was failing with: msg: 'Could not find the requested service lvmetad: host'
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040
Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
Guillaume Abrioux authored
This is an unsupported configuration since there are issues with RGW+NFS setups upgraded from Nautilus to Pacific. This approach might be seen as a bit aggressive, but it is preferable to wait before upgrading in that case.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970003
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Guillaume Abrioux authored
Since nfs+rgw isn't going to be supported in Ceph Pacific, let's not cover this in the CI.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 14 Jun, 2021 2 commits
Guillaume Abrioux authored
When monitors and rgw are collocated with multisite enabled, the rolling_update playbook fails because the workflow runs some radosgw-admin commands very early on the first mon, even though that is the monitor being upgraded, which means the container doesn't exist since it was stopped. This block is relevant only for scaling out rgw daemons or for initial deployments; it is not needed in the rolling_update workflow, so let's skip it.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970232
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f7166ccc)
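A sketch of the kind of guard this implies; the `rolling_update` variable and the task body are assumptions, shown only to illustrate skipping the block during an upgrade:
```yaml
# Skip rgw multisite bootstrap commands during an upgrade; they are only
# needed for initial deployments or when scaling out rgw daemons.
- name: multisite related tasks (sketch)
  command: radosgw-admin realm list
  register: realm_list
  changed_when: false
  when: not rolling_update | default(false) | bool
```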
Guillaume Abrioux authored
When no `[mgrs]` group is defined in the inventory, mgr daemons are implicitly collocated with monitors. This task currently relies on the length of the mgr group to tell cephadm how many mgr daemons to deploy. If there's no `[mgrs]` group defined in the inventory, it asks cephadm to deploy 0 mgr daemons, which doesn't make sense and throws an error.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit f9a73149)
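A sketch of the fallback logic; the group-name variables are assumed to be the usual ceph-ansible defaults:
```yaml
# If no [mgrs] group exists, size the mgr placement on the monitor group,
# since mgrs are implicitly collocated with mons in that case.
- name: tell cephadm how many mgr daemons to deploy (sketch)
  command: >
    ceph orch apply mgr
    {{ groups.get(mgr_group_name, []) | length or groups.get(mon_group_name, []) | length }}
```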
- 11 Jun, 2021 1 commit
Guillaume Abrioux authored
The CentOS 8.4 Vagrant image is available at https://cloud.centos.org; let's use it.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit c2aaa96f)
- 07 Jun, 2021 1 commit
Guillaume Abrioux authored
When using grafana behind https, `cookie_secure` should be set to `true`.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1966880
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 4daed1f1)
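A sketch of how this could be wired; the `dashboard_protocol` variable and the grafana.ini path are assumptions:
```yaml
- name: enable secure cookies when grafana is served over https (sketch)
  ini_file:
    path: /etc/grafana/grafana.ini
    section: security
    option: cookie_secure
    value: "{{ (dashboard_protocol == 'https') | lower }}"
```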
- 26 May, 2021 1 commit
Guillaume Abrioux authored
0990ae41 changed the filter in selectattr() from 'match' to 'equalto', but due to an incompatibility with the Jinja2 version available for Python 2.7 on el7, we must stick to the 'match' filter.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d6745e9c)
- 25 May, 2021 4 commits
Guillaume Abrioux authored
Using the 'match' filter in that task leads to bad behavior with node names like the following:
- node1
- node11
- node111
With `selectattr('name', 'match', inventory_hostname)`, 'node1' will be matched along with 'node11' and 'node111'. Using the 'equalto' filter makes sure we only match the target node.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1963066
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 0990ae41)
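A sketch showing the difference in filter semantics; the `nodes` list is hypothetical:
```yaml
# 'match' performs a regex match anchored at the beginning of the string, so
# 'node1' also matches 'node11' and 'node111'; 'equalto' requires equality.
- name: select only the target node (sketch)
  set_fact:
    target_nodes: "{{ nodes | selectattr('name', 'equalto', inventory_hostname) | list }}"
```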
Guillaume Abrioux authored
Temporarily work around a Vagrant Cloud issue: the service seems broken at the time of pushing this commit and its hosted images return a 403 error, so let's pull images from cloud.centos.org for now.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 9efca34a)
Guillaume Abrioux authored
When osd nodes are collocated in the clients group (in an HCI context, for instance), the current logic excludes the osd nodes because they are present in the clients group. The best fix would be to exclude client nodes only when they are not members of another group, but for now, as a workaround, we can enforce the addition of osd nodes to fix this specific case.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1947695
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 664dae05)
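A sketch of the workaround; the group-name variables are assumed to be the usual ceph-ansible defaults:
```yaml
# Exclude clients, then force collocated osd/client nodes back in so HCI
# layouts are not skipped.
- name: build the list of nodes to process (sketch)
  set_fact:
    nodes_to_process: "{{ groups['all']
                          | difference(groups.get(client_group_name, []))
                          | union(groups.get(osd_group_name, [])) }}"
  run_once: true
```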
Guillaume Abrioux authored
Enabling lvmetad in containerized deployments on el7-based OSes might cause issues. This commit makes it possible to disable this service if needed.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 21 May, 2021 1 commit
Dimitri Savineau authored
It looks like the generate_group_vars_sample.sh script wasn't executed during previous PRs that were modifying the default values.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 83a8dd5a)
- 07 May, 2021 2 commits
Guillaume Abrioux authored
Since we need to revert 33bfb10a, this is an alternative to the initial approach. We can avoid maintaining this file since it is present in the container image: the idea is to simply get it from the container image and write it to the host.
Fixes: #6501
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit e6d8b058)
Dimitri Savineau authored
The pg_autoscale_mode for rgw pools introduced in 9f03a527 was wrong and was missing a `value` keyword, because `rgw_create_pools` is a dict.
Fixes: #6516
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit a670982a)
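A sketch of the corrected lookup: since `rgw_create_pools` is a dict, iterating it with `dict2items` exposes the pool name as `item.key` and the settings under `item.value`. The task body itself is illustrative:
```yaml
- name: set pg_autoscale_mode on rgw pools (sketch)
  command: >
    ceph osd pool set {{ item.key }} pg_autoscale_mode
    {{ 'on' if item.value.pg_autoscale_mode | default(false) | bool else 'warn' }}
  loop: "{{ rgw_create_pools | dict2items }}"
```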
- 04 May, 2021 1 commit
Guillaume Abrioux authored
This is a workaround for an issue in ansible. When trying to stop/mask/disable this service in one task, the stop doesn't actually happen; the task doesn't fail, but for some reason the container is still present and running. The task starting the service in the ceph-crash role then fails because it can't start the container, since one is already running with the same name.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955393
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 3db1ea7e)
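A sketch of the workaround, splitting the operations so the stop actually completes before the unit gets masked; the unit name is an assumption:
```yaml
- name: stop the ceph-crash service (sketch)
  systemd:
    name: ceph-crash@{{ ansible_facts['hostname'] }}
    state: stopped

- name: mask and disable the ceph-crash service (sketch)
  systemd:
    name: ceph-crash@{{ ansible_facts['hostname'] }}
    enabled: false
    masked: true
```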
- 30 Apr, 2021 3 commits
Benoît Knecht authored
Skip the `get initial keyring when it already exists` task when both commands whose `stdout` output it requires have been skipped (e.g. when running in check mode).
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch> (cherry picked from commit 2437f145)
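A sketch of that condition; the registered variable names are hypothetical:
```yaml
# Run the task only if at least one of the two earlier commands actually
# produced output; in check mode both are skipped and have no stdout.
- name: get initial keyring when it already exists (sketch)
  set_fact:
    initial_mon_keyring: "{{ keyring_from_mon.stdout | default(keyring_from_file.stdout | default('')) }}"
  when: keyring_from_mon is not skipped or keyring_from_file is not skipped
```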
Seena Fallah authored
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES applies to both bluestore and filestore.
Signed-off-by: Seena Fallah <seenafallah@gmail.com> (cherry picked from commit 41295f0e)
Guillaume Abrioux authored
We need to filter on the OS architecture in order to fetch the right dev repository from shaman.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8f87754b)
- 29 Apr, 2021 1 commit
Guillaume Abrioux authored
ceph-ansible leaves a ceph-crash container behind in containerized deployments, which means we end up with two ceph-crash containers running after the migration playbook is complete.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1954614
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 22c18e82)
- 27 Apr, 2021 2 commits
Guillaume Abrioux authored
Due to a recent breaking change in ceph, this command must be modified to add the <svc_id> parameter.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 1f40c125)
Guillaume Abrioux authored
When migrating from a cluster with no MDS nodes deployed, `{{ cephfs_data_pool.name }}` doesn't exist, so we need to create a pool for storing nfs export objects.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1950403
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit bb7d37fb)
- 16 Apr, 2021 1 commit
Francesco Pantano authored
When `dashboard_frontend_vip` is provided, all the services should be configured using the related VIP. We're already able to properly configure the grafana VIP using the `dashboard_frontend_vip` variable; this change adds an equivalent variable for both prometheus and alertmanager.
Signed-off-by: Francesco Pantano <fpantano@redhat.com> (cherry picked from commit 44165163)
- 15 Apr, 2021 2 commits
Benoît Knecht authored
The `set_fact rgw_ports` task was failing with a templating error because `hostvars[item].rgw_instances` is a list but was treated as if it were a dictionary. Another issue was that the `unique` filter only applied to the list being appended to `rgw_ports` rather than to the entire list, which means duplicate items were still possible. Lastly, `rgw_ports` would have been a list of integers, but the `seport` module expects a list of strings. This commit fixes all of the issues above, allowing the `ceph-rgw-loadbalancer` role to work on systems with SELinux enabled.
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch> (cherry picked from commit c0785134)
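A sketch combining the three fixes: iterate the list with `map`, cast ports to strings, and deduplicate the whole accumulated list; the attribute and group names are assumptions:
```yaml
- name: set_fact rgw_ports (sketch)
  set_fact:
    rgw_ports: "{{ ((rgw_ports | default([]))
                    + (hostvars[item]['rgw_instances'] | map(attribute='radosgw_frontend_port') | list))
                   | map('string') | unique | list }}"
  loop: "{{ groups.get(rgw_group_name, []) }}"

- name: add the rgw ports to the http_port_t selinux type (sketch)
  seport:
    ports: "{{ rgw_ports }}"
    proto: tcp
    setype: http_port_t
    state: present
```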
Guillaume Abrioux authored
When collocating daemons, chowning all files under `/var/lib/ceph` can cause issues for the collocated daemons that haven't been migrated yet. This commit makes the playbook chown only the files corresponding to the daemon being migrated.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit ddbc11c4)
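A sketch of a per-daemon chown; the `daemon_type` variable, the path layout, and the 167 uid/gid used inside the Red Hat ceph container images are assumptions:
```yaml
- name: chown only the files of the daemon being migrated (sketch)
  file:
    path: "/var/lib/ceph/{{ daemon_type }}/{{ cluster }}-{{ ansible_facts['hostname'] }}"
    owner: "167"
    group: "167"
    recurse: true
```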
- 14 Apr, 2021 3 commits
Guillaume Abrioux authored
This adds `ExecStartPre=-/usr/bin/mkdir -p /var/log/ceph` to all systemd service templates for all ceph daemons. This is specific to RHCS after a Leapp upgrade is done: `/var/log/ceph` seems to be removed by the upgrade, so in order to work around this issue, let's ensure the directory is present before trying to start the containers with podman.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1949489
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit bab403b6)
Guillaume Abrioux authored
This removes the fact `skipped_nodes`, which is useless when we run with `--limit` since it gets reset whenever a new iteration is made. Instead, let's print which nodes have been skipped within a final play, reusing the `skip_this_node` fact.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 3d426705)
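A sketch of such a final play; the `skip_this_node` fact name comes from the message, while the selection expression is illustrative:
```yaml
- name: show the skipped nodes
  hosts: localhost
  gather_facts: false
  tasks:
    - name: print which nodes were skipped (sketch)
      debug:
        msg: "{{ hostvars | dict2items
                 | selectattr('value.skip_this_node', 'defined')
                 | selectattr('value.skip_this_node')
                 | map(attribute='key') | list }}"
```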
Guillaume Abrioux authored
`configure_mirroring.yml` is called right after the daemon is started, so it can happen that its first task runs while the daemon isn't ready yet. Adding retries/until on that task should help avoid the playbook failing.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944996
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit b1e7e1ad)
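A sketch of the retry; the command, variable names, and retry counts are assumptions:
```yaml
# Retry until the freshly started rbd-mirror daemon is ready to answer.
- name: enable mirroring on the pool (sketch)
  command: "rbd mirror pool enable {{ pool_name }} {{ mirror_mode }}"
  register: result
  retries: 90
  delay: 1
  until: result is succeeded
```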
- 12 Apr, 2021 7 commits
Guillaume Abrioux authored
This commit adds nfs-ganesha adoption support to the `cephadm-adopt.yml` playbook.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944504
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit a9220654)
Guillaume Abrioux authored
This fact is never used, let's remove the task.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 0772b3d2)
Guillaume Abrioux authored
Set the names of those tasks to match the fact being set.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d3d3d015)
Guillaume Abrioux authored
The adoption playbook should use `radosgw_num_instances` in order to determine how many rgw instances it should recreate.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1943170
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 1ffc4df6)
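A sketch of sizing the loop on that variable; the per-instance naming scheme is an assumption:
```yaml
- name: iterate over the configured rgw instances (sketch)
  debug:
    msg: "would recreate rgw instance {{ ansible_facts['hostname'] }}.rgw{{ item }}"
  loop: "{{ range(0, radosgw_num_instances | int) | list }}"
```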
Guillaume Abrioux authored
This play does nothing other than stopping/removing rgw daemons.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit ee44d860)
Guillaume Abrioux authored
When running the docker-to-podman playbook, there's no need to call `ceph-config` and `ceph-rgw` from the `ceph-handler` role. It can even have side effects when coming from a baremetal cluster that was previously migrated using the switch-to-containers playbook: it might complain about missing .target systemd units, since they are removed during that migration.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944999
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 70f19be3)
Guillaume Abrioux authored
This adds short documentation in the playbook header explaining the goal of this playbook.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 36b4227d)
- 09 Apr, 2021 1 commit
Guillaume Abrioux authored
This adds the iscsigws migration to containers.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=<bz-number>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 2c74c273)