- 30 Jun, 2021 2 commits
- Dimitri Savineau authored
  Starting with RHCS 5, there's no ISO available anymore. This removes all ISO variables and the ceph_repository_type variable. Closes: #6626 Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- Wong Hoi Sing Edison authored
  This commit ensures all ceph-ansible modules pass flake8 properly. Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
- 29 Jun, 2021 7 commits
- Guillaume Abrioux authored
  Let's drop py3.6 and py3.7. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  This adds a GitHub workflow for checking the Signed-off-by line in commit messages. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  Let's use a GitHub workflow for checking default values. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  This adds the ansible --syntax-check test in the ansible-lint workflow. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  This inventory isn't used anywhere. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  Do not rely on the inventory aliases in order to check whether the selected manager to be removed is present. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967897 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  If multiple realms were deployed with several instances belonging to the same realm and zone, using the same port on different nodes, the service id expected by cephadm will be the same, so only one service will be deployed. We need to create a service called `<node>.<realm>.<zone>.<port>` to make sure the service name is unique and the service is deployed on the expected node, in order to preserve backward compatibility with the rgw instances that were deployed with ceph-ansible. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
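The naming scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the playbook's actual code; the function name and the sample realm/zone values are assumptions.

```python
# Illustrative sketch: building a unique cephadm service id per rgw
# instance so two instances of the same realm/zone on different nodes
# don't collide under a single service name.

def rgw_service_id(node: str, realm: str, zone: str, port: int) -> str:
    """Combine node, realm, zone and port into a unique service id."""
    return f"{node}.{realm}.{zone}.{port}"

# Two instances sharing realm/zone/port but living on different nodes
# now get distinct service ids, so cephadm deploys both of them.
a = rgw_service_id("node1", "france", "paris", 8080)
b = rgw_service_id("node2", "france", "paris", 8080)
assert a != b
```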
- 28 Jun, 2021 1 commit
- Dimitri Savineau authored
  This adds the ceph-validate role before starting the switch to a containerized deployment. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1968177 Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- 24 Jun, 2021 2 commits
- Wong Hoi Sing Edison authored
  Also lint the code with flake8. Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
- Boris Ranto authored
  It was requested that we update our alerting definitions to include a slow OSD Ops health check. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1951664 Signed-off-by: Boris Ranto <branto@redhat.com>
- 23 Jun, 2021 1 commit
- Guillaume Abrioux authored
  We need to support rgw multisite deployments. This commit makes the adoption playbook support this kind of deployment. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 16 Jun, 2021 3 commits
- Guillaume Abrioux authored
  When running the switch-to-containers playbook with multisite enabled, the fact `rgw_instances` is only set for the node being processed (serial: 1). As a consequence, the `set_fact` of `rgw_instances_all` can't iterate over all rgw nodes in order to look up each `rgw_instances_host`. Adding a condition checking whether `hostvars[item]["rgw_instances_host"]` is defined fixes this issue. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967926 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
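The aggregation problem behind this fix can be sketched in plain Python. This is a hedged illustration of the idea, not the playbook's Jinja2 code; the host names and instance data are made up, while the variable names mirror the facts mentioned above.

```python
# Sketch: with serial: 1 only the current host has rgw_instances_host
# set, so an aggregation over all hosts must skip hosts where the fact
# is not (yet) defined instead of failing on the missing key.

hostvars = {
    "rgw0": {"rgw_instances_host": [{"instance_name": "rgw0.rgw0"}]},
    "rgw1": {},  # not processed yet in this serial batch
}

rgw_instances_all = [
    inst
    for host in hostvars
    if "rgw_instances_host" in hostvars[host]  # the added "is defined" guard
    for inst in hostvars[host]["rgw_instances_host"]
]

assert rgw_instances_all == [{"instance_name": "rgw0.rgw0"}]
```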
- David Galloway authored
  Signed-off-by: David Galloway <dgallowa@redhat.com>
- Guillaume Abrioux authored
  There's no need to copy this keyring when using nfs with mds. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 15 Jun, 2021 1 commit
- Guillaume Abrioux authored
  Enabling lvmetad in containerized deployments on el7 based OS might cause issues. This commit makes it possible to disable this service if needed. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 14 Jun, 2021 3 commits
- Guillaume Abrioux authored
  When calling the `ceph_key` module with `state: info`, if the ceph command called fails, the actual error is hidden by the module, which makes it pretty difficult to troubleshoot. The current code always states that if rc is not equal to 0, the keyring doesn't exist. `state: info` should always return the actual rc, stdout and stderr. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1964889 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
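The principle of the fix (surface the real command result instead of collapsing every non-zero rc into "keyring doesn't exist") can be sketched like this. A minimal illustration, not the module's actual code; the function name is an assumption.

```python
# Sketch: run a command and return its actual rc/stdout/stderr so a
# caller can troubleshoot failures, rather than mapping any non-zero
# rc to a single hard-coded interpretation.
import subprocess
import sys

def run_info(cmd):
    """Run cmd and surface rc, stdout and stderr unmodified."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {"rc": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr}

info = run_info([sys.executable, "-c", "print('ok')"])
assert info["rc"] == 0 and "ok" in info["stdout"]
```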
- Guillaume Abrioux authored
  When no `[mgrs]` group is defined in the inventory, mgr daemons are implicitly collocated with monitors. This task currently relies on the length of the mgr group in order to tell cephadm to deploy mgr daemons. If there's no `[mgrs]` group defined in the inventory, it will ask cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw an error. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
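The fallback logic this fix implies can be sketched in a few lines. This is an assumption-laden illustration, not the playbook's code; the group names follow the ceph-ansible inventory conventions mentioned above.

```python
# Sketch: when no [mgrs] group exists, mgr daemons are collocated with
# the monitors, so the placement cephadm is told about must fall back
# to the mons group instead of an empty (0-daemon) mgrs group.

def mgr_placement(groups: dict) -> list:
    """Return the hosts cephadm should place mgr daemons on."""
    return groups.get("mgrs") or groups.get("mons", [])

inventory = {"mons": ["mon0", "mon1", "mon2"]}  # no [mgrs] group defined
assert len(mgr_placement(inventory)) == 3  # 3 mgr daemons, not 0
```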
- Guillaume Abrioux authored
  Since we fire up far fewer VMs than the other jobs, we can afford to allocate more memory to this job. Each VM hosts more daemons, so 1024 MB can be too little. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 11 Jun, 2021 2 commits
- Guillaume Abrioux authored
  When monitors and rgw are collocated with multisite enabled, the rolling_update playbook fails because during the workflow we run some radosgw-admin commands very early on the first mon. When that mon is the one being upgraded, the container doesn't exist since it was stopped. This block is relevant only for scaling out rgw daemons or for the initial deployment. In the rolling_update workflow it is not needed, so let's skip it. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970232 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  The CentOS 8.4 vagrant image is available at https://cloud.centos.org, so let's use it. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 08 Jun, 2021 2 commits
- Neelaksh Singh authored
  Fixes: #6529 Signed-off-by: Neelaksh Singh <neelaksh48@gmail.com>
- Guillaume Abrioux authored
  This reverts commit 2e19d170. A new build of ceph@master including the fix is available, so this is not needed anymore. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 04 Jun, 2021 2 commits
- Guillaume Abrioux authored
  Due to a recent commit that introduced a regression in ceph, this test is failing. Temporarily disabling it to unblock the CI. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  When using grafana behind https, `cookie_secure` should be set to `true`. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1966880 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 26 May, 2021 1 commit
- Guillaume Abrioux authored
  0990ae41 changed the filter in selectattr() from 'match' to 'equalto', but due to an incompatibility with the Jinja2 version for python 2.7 on el7 we must stick to using the 'match' filter. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 25 May, 2021 4 commits
- Guillaume Abrioux authored
  Using the 'match' filter in that task leads to bad behavior with node names such as node1, node11 and node111: with `selectattr('name', 'match', inventory_hostname)`, 'node1' matches 'node11' and 'node111' as well. Using the 'equalto' filter makes sure we only match the target node. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1963066 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
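The over-matching can be demonstrated in plain Python: Ansible's 'match' test is built on `re.match`, which anchors at the start of the string only, while 'equalto' compares for equality. A minimal sketch of those semantics, outside of Jinja2:

```python
# 'match' semantics: re.match anchors at the start but not the end, so
# the pattern 'node1' also matches 'node11' and 'node111'.
# 'equalto' semantics: plain equality matches only the exact name.
import re

names = ["node1", "node11", "node111"]
target = "node1"

matched = [n for n in names if re.match(target, n)]  # 'match' behavior
equal = [n for n in names if n == target]            # 'equalto' behavior

assert matched == ["node1", "node11", "node111"]  # over-selects
assert equal == ["node1"]                         # only the target node
```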
- Guillaume Abrioux authored
  When osd nodes are collocated in the clients group (in an HCI context, for instance), the current logic will exclude osd nodes since they are present in the clients group. The best fix would be to exclude client nodes only when they are not members of another group, but for now, as a workaround, we can enforce the addition of osd nodes to fix this specific case. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1947695 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  This commit rewrites the deprecated syntax used in vagrant_up.sh. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- Guillaume Abrioux authored
  Temporary workaround for a Vagrant Cloud issue; the service seems broken at the time of pushing this commit. Let's pull images from cloud.centos.org for now since the Vagrant Cloud hosted images return a 403 error. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 22 May, 2021 1 commit
- Guillaume Abrioux authored
  There's no benefit to gathering facts again on each play in rolling_update.yml. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 07 May, 2021 1 commit
- Guillaume Abrioux authored
  Since we need to revert 33bfb10a, this is an alternative to the initial approach. We can avoid maintaining this file since it is present in the container image. The idea is to simply get it from the container image and write it to the host. Fixes: #6501 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 06 May, 2021 1 commit
- Dimitri Savineau authored
  The pg_autoscale_mode for rgw pools introduced in 9f03a527 was wrong and was missing a `value` keyword because `rgw_create_pools` is a dict. Fixes: #6516 Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- 04 May, 2021 1 commit
- Guillaume Abrioux authored
  This is a workaround for an issue in ansible. When trying to stop/mask/disable this service in one task, the stop doesn't actually happen; the task doesn't fail, but for some reason the container is still present and running. The subsequent task starting the service in the ceph-crash role then fails because it can't start the container, since one is already running with the same name. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955393 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 29 Apr, 2021 1 commit
- Guillaume Abrioux authored
  We need to filter with the OS architecture in order to fetch the right dev repository in shaman. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 28 Apr, 2021 2 commits
- Seena Fallah authored
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES is for both bluestore and filestore. Signed-off-by: Seena Fallah <seenafallah@gmail.com>
- Guillaume Abrioux authored
  ceph-ansible leaves a ceph-crash container behind in containerized deployments, which means we end up with 2 ceph-crash containers running after the migration playbook is complete. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1954614 Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- 27 Apr, 2021 2 commits
- Guillaume Abrioux authored
  Due to a recent breaking change in ceph, this command must be modified to add the <svc_id> parameter. Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Guillaume Abrioux authored
When migrating from a cluster with no MDS nodes deployed, `{{ cephfs_data_pool.name }}` doesn't exist so we need to create a pool for storing nfs export objects. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1950403 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com>
-