- 09 Jan, 2020 6 commits
-
-
Dimitri Savineau authored
We don't need to use dev repository on stable branches. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
Instead of running the ceph roles against localhost, we should run them on the first mon. The Ansible and inventory hostnames of the rgw nodes could differ. Ensure that the rgw instance to remove is present in the cluster. Fix the rgw service and directory path. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 747555df)
-
Guillaume Abrioux authored
We must pick a mon which actually exists in ceph-facts in order to detect whether a cluster is running. Otherwise, it will report that no cluster is running, which ends up deploying a new monitor isolated in its own quorum. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622688 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 86f3eeb7)
-
Dimitri Savineau authored
Only the ipv4 addresses of the nodes running the dashboard mgr module were added to the trusted_ip_list configuration file on the iscsigws nodes. This also adds the iscsi gateways with ipv6 configuration to the ceph dashboard. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787531 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 70eba661)
-
Benoît Knecht authored
RadosGW pools can be created by setting, for instance:

```yaml
rgw_create_pools:
  .rgw.root:
    pg_num: 512
    size: 2
```

However, doing so would create pools of size `osd_pool_default_size` regardless of the `size` value. This was due to the fact that the Ansible task used `{{ item.size | default(osd_pool_default_size) }}` as the pool size value, but `item.size` is always undefined; the correct variable is `item.value.size`. Signed-off-by:
Benoît Knecht <bknecht@protonmail.ch> (cherry picked from commit 3c31b19a)
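A minimal sketch of the corrected logic, assuming a `dict2items` loop over `rgw_create_pools` (the task name and surrounding details are illustrative, not the literal ceph-ansible task):

```yaml
# Illustrative sketch; not the exact ceph-ansible task.
- name: set the size of each rgw pool
  command: >
    ceph osd pool set {{ item.key }} size
    {{ item.value.size | default(osd_pool_default_size) }}
  loop: "{{ rgw_create_pools | dict2items }}"
  # item.size is always undefined in a dict2items loop, so the old
  # `item.size | default(...)` silently fell back to the default;
  # item.value.size carries the user-supplied value.
```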
-
Guillaume Abrioux authored
411bd07d introduced a bug in handlers: using `handler_*_status` instead of `hostvars[item]['handler_*_status']` causes handlers to be triggered in any case, even when `handler_*_status` was set to `False` on a specific node. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622688 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 30200802)
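The distinction can be sketched as follows (the variable names come from the message above; the task body itself is illustrative):

```yaml
# Illustrative handler sketch only.
- name: restart ceph mon daemon(s)
  command: systemctl restart ceph-mon@{{ hostvars[item]['ansible_hostname'] }}
  with_items: "{{ groups[mon_group_name] }}"
  # Wrong: `when: handler_mon_status` evaluates the local fact and
  # fires for every node. Right: look up the per-host flag.
  when: hostvars[item]['handler_mon_status'] | default(False) | bool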
-
- 08 Jan, 2020 16 commits
-
-
Dimitri Savineau authored
Since RHEL 8.1, we need to add the ganesha_t type to the SELinux permissive list; otherwise the nfs-ganesha service won't start. This was done previously on RHEL 7 and is part of the nfs-ganesha-selinux package on RHEL 8. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786110 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit d7581252)
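A hedged sketch of such a task (the actual ceph-ansible change may use a dedicated SELinux module rather than `semanage`):

```yaml
# Illustrative only; conditions simplified.
- name: put ganesha_t in SELinux permissive mode (RHEL >= 8.1)
  command: semanage permissive -a ganesha_t
  when:
    - ansible_os_family == 'RedHat'
    - ansible_distribution_major_version | int >= 8
```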
-
Guillaume Abrioux authored
When using FQDNs in the inventory, that playbook fails because some tasks use the result of `ceph osd tree` (which returns short hostnames) to get data from hostvars[]. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1779021 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 6d9ca6b0)
-
Dimitri Savineau authored
The RBD devices aren't excluded from the devices list in the LVM auto discovery scenario. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783908 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 6f0556f0)
-
Dimitri Savineau authored
The grafana-server group name was hardcoded in the grafana/prometheus firewalld tasks condition. We should use the associated variable: grafana_server_group_name. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 2c06678c)
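In condition form, the fix looks roughly like this (the include filename is hypothetical):

```yaml
# Illustrative sketch.
- name: include grafana/prometheus firewall tasks
  include_tasks: dashboard_firewall.yml  # hypothetical filename
  # Use the variable, not the hardcoded literal 'grafana-server':
  when: inventory_hostname in groups.get(grafana_server_group_name, [])
```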
-
Dimitri Savineau authored
Instead of using multiple dashboard_enabled conditions in the configure_firewall file, we could just have the condition once and include the dedicated tasks list. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit f4c261ef)
-
Dimitri Savineau authored
When there's no mgr group defined in the Ansible inventory, the mgrs are deployed implicitly on the mon nodes. If the dashboard is enabled, then we need to open the dashboard port on the node running the ceph mgr process (mgr or mon). The current code only allows opening that port on the mgr nodes when they are present explicitly in the inventory, not when they are implicit. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1783520 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 45359851)
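The needed condition can be sketched like this (variable names follow ceph-ansible conventions, but the exact task differs; `dashboard_port` is an assumed name):

```yaml
# Illustrative sketch: open the dashboard port wherever ceph-mgr runs.
- name: open ceph-mgr dashboard port
  firewalld:
    port: "{{ dashboard_port }}/tcp"  # hypothetical variable name
    permanent: true
    immediate: true
    state: enabled
  # mgr nodes when listed explicitly, otherwise the mons that host
  # the implicit mgr:
  when: inventory_hostname in groups.get(mgr_group_name, [])
        or (groups.get(mgr_group_name, []) | length == 0
            and inventory_hostname in groups.get(mon_group_name, []))
```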
-
Guillaume Abrioux authored
Force fqdn to be used in external url for prometheus and alertmanager. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1765485 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 498bc458)
-
Dimitri Savineau authored
The ceph iscsi repository was still set to dev (shaman) instead of using the stable ceph-iscsi repository. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com>
-
Dimitri Savineau authored
When using the ceph dashboard with iscsi gateway nodes, we also need to remove the nodes from the ceph dashboard list. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786686 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 931a842f)
-
Guillaume Abrioux authored
When an OSD is stopped, it leaves partitions mounted. We must umount them before zapping them; otherwise, errors like "Device is busy" will show up. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 80565141)
-
Guillaume Abrioux authored
We only need to set `container_binary`. Let's use `tasks_from` option. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 0ae0a9ce)
-
Guillaume Abrioux authored
The command is delegated to the first monitor, so we must use the `container_binary` fact from that node. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 77b39d23)
-
Guillaume Abrioux authored
That task is delegated to the first mon, so we should always use `discovered_interpreter_python` from that node. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 5adb735c)
-
Guillaume Abrioux authored
This commit deletes the filesystem when no MDS remains after the shrinking operation. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1787543 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 38278a6b)
-
Guillaume Abrioux authored
This commit prevents shrinking an MDS node when max_mds wouldn't be honored after the operation. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 2cfe5a04)
-
Guillaume Abrioux authored
This commit adds filestore-to-bluestore migration support to the ceph_volume module. We must append only the relevant options to the executed command, according to what is passed in `osd_objectstore`. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit aabba3ba)
-
- 11 Dec, 2019 10 commits
-
-
Guillaume Abrioux authored
This commit adds a task to ensure device mappers are properly closed when the lvm batch scenario is used. Otherwise, OSDs can't be redeployed, because the devices are rejected by ceph-volume as locked. A condition `devices | default([]) | length > 0` ensures these dm are removed only when using the lvm batch scenario. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 8e6ef818)
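A rough sketch of such a task, with a hypothetical fact holding the dm names (the real task discovers the mappings differently):

```yaml
# Illustrative sketch only.
- name: close device mapper devices left by stopped OSDs
  command: dmsetup remove {{ item }}
  loop: "{{ ceph_dm_devices | default([]) }}"  # hypothetical fact
  when: devices | default([]) | length > 0     # lvm batch scenario only
```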
-
Guillaume Abrioux authored
Otherwise, sometimes it can take a while for an OSD to be seen as down and causes the `ceph osd purge` command to fail. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 51d60119)
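One way to express that wait is a retried task (a sketch of the idea, not the exact change):

```yaml
# Illustrative sketch: retry until the mon reports the OSD as down
# and the purge succeeds.
- name: purge the osd once it is reported down
  command: ceph osd purge osd.{{ osd_id }} --yes-i-really-mean-it
  register: purge_result
  retries: 20
  delay: 3
  until: purge_result.rc == 0
```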
-
Guillaume Abrioux authored
Do not use `--destroy` when zapping a device. Otherwise, it destroys VGs while they are still needed to redeploy the OSDs. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit e3305e6b)
-
Guillaume Abrioux authored
The zap action from the ceph_volume module always implies `--destroy`. This commit adds destroy option support so we can ask ceph-volume not to use `--destroy` when zapping a device. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 0dcacdbe)
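Usage might look like the following (the `action` and `destroy` names come from the message above; the surrounding values are illustrative):

```yaml
# Illustrative sketch.
- name: zap a device but keep its VG/LV for redeployment
  ceph_volume:
    action: zap
    data: /dev/sdb    # illustrative device
    destroy: false    # new option; zap previously always implied --destroy
```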
-
Guillaume Abrioux authored
This commit adds the non containerized context support to the filestore-to-bluestore.yml infrastructure playbook. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 4833b85e)
-
Guillaume Abrioux authored
This commit adds a new job in order to test the filestore-to-bluestore.yml infrastructure playbook. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 40de34fb)
-
Guillaume Abrioux authored
There's no need to enforce PreferredAuthentications by default. Users can still choose to override the ansible.cfg with any additional parameter like this one to fit their infrastructure. Fixes: #4826 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d682412e)
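Users who do want the old behavior can restore it in their own ansible.cfg; a sketch of such an override (the exact removed value may have differed):

```ini
; Illustrative override; adjust to your infrastructure.
[ssh_connection]
ssh_args = -o PreferredAuthentications=publickey
```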
-
Guillaume Abrioux authored
A recent change in ceph/ceph prevents having the username in the password: `Error EINVAL: Password cannot contain username.` Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 0756fa46)
-
Guillaume Abrioux authored
In a containerized context, containers aren't stopped early in the sequence. It means they aren't restarted after the upgrade, because the task only checks that the daemon status is started (e.g. `state: started`). This commit also removes the task which ensures services are started, because it's already done in the ceph-iscsigw role. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit c7708eb4)
-
Guillaume Abrioux authored
When upgrading from RHCS 3, the dashboard has obviously never been deployed, which forces us to deploy it manually later. This commit adds the dashboard deployment as part of the upgrade to RHCS 4. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1779092 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 451c5ca9)
-
- 10 Dec, 2019 1 commit
-
-
Guillaume Abrioux authored
This commit isolates and adds an explicit comment about variables not intended to be modified by the user. Fixes: #4828 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit a234338e)
-
- 09 Dec, 2019 2 commits
-
-
Guillaume Abrioux authored
Typical error:

```
type=AVC msg=audit(1575367499.582:3210): avc: denied { search } for pid=26680 comm="node_exporter" name="1" dev="proc" ino=11528 scontext=system_u:system_r:container_t:s0:c100,c1014 tcontext=system_u:system_r:init_t:s0 tclass=dir permissive=0
```

node_exporter needs to run as privileged to avoid the AVC denied error, since it gathers a lot of information on the host. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1762168 Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit d245eb7e)
-
Dimitri Savineau authored
The md devices (software RAID) aren't excluded from the devices list in the auto discovery scenario. Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1764601 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 014f51c2)
-
- 05 Dec, 2019 1 commit
-
-
Guillaume Abrioux authored
When using `osd_auto_discovery`, `devices` is built multiple times due to multiple runs of the ceph-facts role. It ends up with duplicate instances of the same device in the list. Using the `unique` filter when building the list fixes this issue. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 23b1f438)
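The fix amounts to something like the following (the source of discovered disks is illustrative; ceph-facts builds the list differently):

```yaml
# Illustrative sketch.
- name: rebuild devices without duplicates across ceph-facts runs
  set_fact:
    devices: "{{ (devices | default([]) + [item]) | unique }}"
  loop: "{{ discovered_devices | default([]) }}"  # hypothetical list
```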
-
- 04 Dec, 2019 4 commits
-
-
Dimitri Savineau authored
The podman support was added to the purge-container-cluster playbook, but containers are always used for the dashboard, even on non-containerized deployments. This commit adds podman support for purging the dashboard resources in the purge-cluster playbook. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 89f6cc54)
-
Dimitri Savineau authored
Having a max_mds value equal to the number of mds nodes generates a warning in the ceph cluster status:

```
  cluster:
    id:     6d3e49a4-ab4d-4e03-a7d6-58913b8ec00a
    health: HEALTH_WARN
            insufficient standby MDS daemons available
  (...)
  services:
    mds: cephfs:3 {0=mds1=up:active,1=mds0=up:active,2=mds2=up:active}
```

Let's use 2 active and 1 standby mds. Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 4a6d19da)
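With three MDS nodes, that means setting max_mds to 2, roughly (the filesystem name variable is illustrative):

```yaml
# Illustrative sketch.
- name: use 2 active and 1 standby mds
  command: ceph fs set {{ cephfs_name | default('cephfs') }} max_mds 2
```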
-
Guillaume Abrioux authored
Since we now support podman, let's rename the playbook so it's more generic. Signed-off-by:
Guillaume Abrioux <gabrioux@redhat.com> (cherry picked from commit 7bc7e366)
-
Dimitri Savineau authored
If the new mon/osd node doesn't have python installed then we need to execute the tasks from raw_install_python.yml. Closes: #4368 Signed-off-by:
Dimitri Savineau <dsavinea@redhat.com> (cherry picked from commit 34b03d18)
-