- Apr 11, 2019
-
Sébastien Han authored
We only validate the devices that are passed if there is a list of devices to validate (a hedged sketch of this guard follows).
Signed-off-by: Sébastien Han <seb@redhat.com>
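A minimal sketch of the kind of guard described, assuming a `devices` list variable; the task name and included file name are illustrative, not the real ones:
```
# Only run device validation when a devices list was actually provided.
# 'check_devices.yml' is an illustrative file name.
- name: validate osd devices
  include_tasks: check_devices.yml
  when: devices | default([]) | length > 0
```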
-
Sébastien Han authored
osd_scenario has become obsolete and defaults to lvm. With lvm there is no such thing as collocated and non-collocated.
Signed-off-by: Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
ceph-disk is not supported anymore, so all the newly created OSDs will be configured using ceph-volume.
Signed-off-by: Sébastien Han <seb@redhat.com>
-
Sébastien Han authored
We don't support the preparation of OSDs with ceph-disk anymore; only ceph-volume is supported. However, starting OSDs is still supported, so if you change a config option, the handlers are still able to restart all the OSDs via their respective systemd unit files (a hedged sketch follows).
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
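A minimal sketch of the handler-style restart described above; the `osd_ids` variable and the task name are assumptions for illustration, only the `ceph-osd@<id>` unit naming comes from Ceph itself:
```
# Restart every OSD on the host through its systemd unit.
# 'osd_ids' is an assumed, illustrative variable holding the local OSD ids.
- name: restart ceph osd daemons
  systemd:
    name: "ceph-osd@{{ item }}"
    state: restarted
  with_items: "{{ osd_ids }}"
```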
-
Dimitri Savineau authored
We don't need to use the cephfs variable for the application pool name because it's always cephfs. If the cephfs variable is set to something other than the default value, it breaks the application pool task (a hedged sketch follows).
Resolves: #3790
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
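A sketch of the application-enable step with the application name hard-coded to `cephfs`; the pool variable names are assumptions, only the `ceph osd pool application enable` command is the documented Ceph CLI call:
```
# Enable the 'cephfs' application on the filesystem pools.
# 'cephfs_data' and 'cephfs_metadata' are assumed pool-name variables.
- name: assign application to cephfs pools
  command: "ceph --cluster {{ cluster }} osd pool application enable {{ item }} cephfs"
  with_items:
    - "{{ cephfs_data }}"
    - "{{ cephfs_metadata }}"
  changed_when: false
```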
-
- Apr 10, 2019
-
Guillaume Abrioux authored
ceph-volume didn't work when the devices were passed by path. Since it now supports it, let's allow this feature in ceph-ansible (an illustrative example follows).
Closes: #3812
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
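An illustrative `devices` list using persistent by-path names instead of `/dev/sdX`; the exact paths are made up for the example:
```
# group_vars excerpt: devices referenced by persistent path
# (paths are illustrative, adapt them to the actual hardware).
devices:
  - /dev/disk/by-path/pci-0000:00:1f.2-ata-1
  - /dev/disk/by-path/pci-0000:00:1f.2-ata-2
```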
-
Guillaume Abrioux authored
Let's use a condition to run this task only on the first mon.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
- Apr 09, 2019
-
Dimitri Savineau authored
As discussed in ceph/ceph#26599, beast is now the default frontend for the rados gateway with the Nautilus release. Add the rgw_thread_pool_size variable with 512 as the default value and keep backward compatibility with the num_threads option when using civetweb. Update radosgw_civetweb_num_threads to reflect the rgw_thread_pool_size change (a hedged configuration sketch follows).
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
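A minimal group_vars-style sketch of the settings named above; the `radosgw_frontend_type` variable name is an assumption, only `rgw_thread_pool_size` and the 512 default come from the commit message:
```
# group_vars/all.yml excerpt (illustrative):
radosgw_frontend_type: beast   # new default frontend with Nautilus
rgw_thread_pool_size: 512      # also mapped to num_threads when civetweb is selected
```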
-
Dimitri Savineau authored
The docker daemon is automatically started during package installation but the service isn't enabled on boot (a minimal sketch follows).
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
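A minimal sketch of the kind of task this implies; the task name is illustrative:
```
# Make sure the docker service is running and comes back after a reboot.
- name: ensure docker is started and enabled on boot
  service:
    name: docker
    state: started
    enabled: true
```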
-
Matthew Vernon authored
The Ubuntu Cloud Archive-related (UCA) defaults in roles/ceph-defaults/defaults/main.yml were commented out, which means if you set `ceph_repository` to "uca", you get undefined variable errors, e.g.
```
The task includes an option with an undefined variable. The error was: 'ceph_stable_repo_uca' is undefined

The error appears to have been in '/nfs/users/nfs_m/mv3/software/ceph-ansible/roles/ceph-common/tasks/installs/debian_uca_repository.yml': line 6, column 3, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: add ubuntu cloud archive repository
  ^ here
```
Unfortunately, uncommenting these results in some other breakage, because further roles were written that use the fact of `ceph_stable_release_uca` being defined as a proxy for "we're using UCA", so they try to install packages from the bionic-updates/queens release, for example, which doesn't work. So there are a few `apt` tasks that need modifying to not use `ceph_stable_release_uca` unless `ceph_origin` is `repository` and `ceph_repository` is `uca` (a hedged sketch of that guard follows).
Closes: #3475
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
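A minimal sketch of the guard the last sentence describes; the task shown is illustrative, not the actual ceph-ansible task:
```
# Only use the UCA-pinned release when the operator explicitly selected
# the Ubuntu Cloud Archive repository.
- name: install ceph packages from the Ubuntu Cloud Archive
  apt:
    name: ceph
    default_release: "{{ ansible_distribution_release }}-updates/{{ ceph_stable_release_uca }}"
  when:
    - ceph_origin == 'repository'
    - ceph_repository == 'uca'
```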
-
Dimitri Savineau authored
When using the monitor_address_block or radosgw_address_block variables to configure the mon/rgw address, we're getting the first ip address from the ansible facts present in that cidr. When there's a VIP on that network, the first filter could return the wrong value. This seems to affect only IPv6 setups because the VIP addresses are added to the ansible facts at the beginning of the list; it's the opposite (at the end) with IPv4. This causes the mon/rgw processes to bind on the VIP address (the problematic lookup is sketched below).
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680155
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
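For illustration, the kind of lookup being described, i.e. the problematic pattern before the fix; the fact name, variable names and use of the `ipaddr` filter (which needs the netaddr python library) are assumptions, not the exact ceph-ansible expression:
```
# First address from the facts that falls inside the configured CIDR;
# with a VIP listed first in the IPv6 facts this picks the wrong address.
- name: set monitor address from monitor_address_block
  set_fact:
    _monitor_address: "{{ ansible_all_ipv6_addresses | ipaddr(monitor_address_block) | first }}"
```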
-
François Lafont authored
The path of the RGW environment file (in the /var/lib/ceph/radosgw/ directory) depends on the Ceph clustername. It was not taken into account in the Ansible role `ceph-rgw`.
Signed-off-by: flaf <francois.lafont.1978@gmail.com>
-
Guillaume Abrioux authored
When mgrs are implicitly collocated on monitors (no mgrs in the mgrs group), that include was skipped because of this condition: `inventory_hostname == groups[mgr_group_name][0]`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Before managing mgr modules, we must ensure all mgrs are available, otherwise we can hit a failure like the following:
```
stdout:Error ENOENT: all mgr daemons do not support module 'restful', pass --force to force enablement
```
It happens because not all mgrs are available yet when trying to manage mgr modules (a hedged wait-loop sketch follows).
Closes: #3100
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
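A hedged sketch of a "wait until the mgr map is ready" check; the retry values and exact condition are assumptions, only `ceph mgr dump` and its `available` field come from Ceph:
```
# Keep polling the mgr map until the active mgr reports as available.
- name: wait for the mgr daemons to become available
  command: "ceph --cluster {{ cluster }} mgr dump -f json"
  register: mgr_dump
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool
  changed_when: false
```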
-
- Apr 08, 2019
-
Rishabh Dave authored
Add a tox scenario that adds a new MDS node to an already deployed Ceph cluster and deploys an MDS there.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
-
- Apr 06, 2019
-
Ali Maredia authored
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664869
Signed-off-by: Ali Maredia <amaredia@redhat.com>
-
- Apr 04, 2019
-
Dimitri Savineau authored
In containerized deployments the default radosgw quota is too low for production environments. This causes performance degradation compared to bare-metal.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680171
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
-
- Apr 03, 2019
-
fpantano authored
According to rdo testing (https://review.rdoproject.org/r/#/c/18721), a check on the ceph_health output is added to allow the playbook to make several attempts (according to the retry/delay variables) when waiting for the cluster quorum or when the container bootstrap has not finished. It avoids the command execution failing when it doesn't receive a valid json object to decode (because the cluster is too slow to bootstrap compared to the ceph-ansible task execution). A hedged sketch of the retry pattern follows.
Signed-off-by: fpantano <fpantano@redhat.com>
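A minimal sketch of that retry pattern; the variable names such as `health_mon_check_retries` and the exact condition are assumptions, only `ceph -s -f json` is the documented Ceph CLI call:
```
# Re-run the health check until valid JSON with an established quorum
# comes back, instead of failing on the first empty/invalid output.
- name: waiting for the monitor to join the quorum...
  command: "ceph --cluster {{ cluster }} -s -f json"
  register: ceph_health
  until: >
    ceph_health.stdout != '' and
    (ceph_health.stdout | from_json).quorum_names | length > 0
  retries: "{{ health_mon_check_retries | default(5) }}"
  delay: "{{ health_mon_check_delay | default(15) }}"
  changed_when: false
```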
-
- Apr 02, 2019
-
Dimitri Savineau authored
Since https://github.com/ceph/ceph/commit/77912c0 ceph-volume uses stdout encoding based on the LC_CTYPE and PYTHONIOENCODING environment variables. Those variables aren't set when using ansible, which currently breaks non-containerized deployments on Ubuntu:
TASK [use ceph-volume to create bluestore osds] ********************
cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - create
  - --bluestore
  - --data
  - /dev/sdb
rc: 1
stderr: |-
  Traceback (most recent call last):
  (...)
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 132: ordinal not in range(128)
Note that the task is failing on the ansible side due to the stdout decoding, but the osd creation is successful. A hedged sketch of the environment fix follows.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
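A minimal sketch of the kind of environment override this implies; the task body and device are taken from the failing output above for illustration, the actual ceph-ansible task differs:
```
# Force a UTF-8 capable stdout encoding for ceph-volume when run from
# ansible, so its output can be decoded by the ansible side.
- name: use ceph-volume to create bluestore osds
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data /dev/sdb"
  environment:
    PYTHONIOENCODING: utf-8
```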
-
- Mar 29, 2019
-
Rishabh Dave authored
Signed-off-by: Rishabh Dave <ridave@redhat.com>
-
Rishabh Dave authored
Otherwise the reader is forced to search for the "when" clause when blocks are too long.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
-
- Mar 28, 2019
-
Guillaume Abrioux authored
Similar to #3658. Since there are too many changes between the master and stable branches, let's commit directly in each branch instead of trying to backport this commit.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Dimitri Savineau authored
When installing python-minimal on Ubuntu bionic, this will add the /usr/bin/python symlink to the default python interpreter. On bionic, this isn't python2 but python3.
$ /usr/bin/python --version
Python 3.6.7
The python docker library is only installed for python2, which causes issues when running the purge-docker-cluster playbook. This playbook uses the ansible docker modules and requires the python bindings to be installed on the remote host. Without the bindings we can see the python error reported by the docker module:
msg: Failed to import docker or docker-py - No module named 'docker'. Try `pip install docker` or `pip install docker-py` (Python 2.6)
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
-
- Mar 25, 2019
-
Guillaume Abrioux authored
ee2d52d3 introduced a typo. This commit fixes it.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
This task was here for backward compatibility. It's time to remove it in the next release.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Sometimes those tasks might fail because of a timeout. I've been facing this several times in the CI; adding this retry might help and won't hurt in any case.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Add a couple of fixes to allow containerized deployments to upgrade from luminous/mimic to nautilus:
- pass the CEPH_CONTAINER_IMAGE and CEPH_CONTAINER_BINARY environment variables to the ceph_key module,
- fix the docker exec command in the 'waiting for the containerized monitor to join the quorum' task according to the `delegate_to` parameter,
- override `docker_exec_cmd` in `ceph-facts` with `mon_host` when rolling_update is `True`,
- do not unnecessarily run `create_mds_filesystems.yml` when performing an upgrade.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Otherwise it generates a new cluster fsid and makes the upgrade fail.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
Otherwise, the task to copy the mgr keyring fails during the rolling_update.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
This commit enables the msgr2 protocol when the cluster is fully upgraded to nautilus (a hedged sketch follows).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
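A minimal sketch of that enablement step; the gating and delegation details are assumptions, the `ceph mon enable-msgr2` command itself is the documented Ceph CLI call:
```
# Switch the monitors to msgr2 once every daemon runs nautilus.
- name: enable msgr2 protocol
  command: "ceph --cluster {{ cluster }} mon enable-msgr2"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  changed_when: false
```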
-
Guillaume Abrioux authored
This prevents the packaging from restarting services before we need to restart them in the rolling update sequence. We want to handle service restarts in the rolling_update playbook.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Guillaume Abrioux authored
As of nautilus, the initial keyrings list has changed; it means when upgrading from Luminous or Mimic, a mismatch is expected between what is found on the cluster and the initial keyring list hardcoded in the ceph_key module. We shouldn't fail when upgrading to nautilus. str_to_bool() is taken from ceph-volume.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-Authored-by: Alfredo Deza <adeza@redhat.com>
-
Guillaume Abrioux authored
The rolling_update playbook already takes care of stopping/starting services during the sequence. There's no need to trigger potentially unwanted service restarts.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
- Mar 20, 2019
-
Dimitri Savineau authored
When using the lvm osd_scenario, we never check if the lvm2 package is present on the host. When using a containerized deployment with docker on CentOS/RedHat this package is automatically installed as a dependency, but not on Ubuntu. OSDs deployed via ceph-volume require the lvmetad.socket to be active and running (a minimal sketch follows).
Resolves: #3728
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
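A minimal sketch of the install described; the `when` condition is an assumption based on the commit message:
```
# Make sure lvm2 (and therefore lvmetad) is available before ceph-volume
# is asked to prepare any OSD.
- name: install lvm2 package
  package:
    name: lvm2
    state: present
  when: osd_scenario == 'lvm'
```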
-
- Mar 18, 2019
-
Guillaume Abrioux authored
Since all files in the container image have moved to `/opt/ceph-container`, this check must look for the new AND the old path so it's backward compatible. Otherwise it could end up templating an inconsistent `ceph-osd-run.sh`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
-
Dimitri Savineau authored
When using monitor_address_block to determine the ip address of the monitor node, we need an ip address available in that cidr to be present in the ansible facts (ansible_all_ipv[46]_addresses). Currently we don't check if there's an ip address available during the ceph-validate role. As a result, the ceph-config role fails due to an empty list during ceph.conf template creation, but the error isn't explicit:
TASK [ceph-config : generate ceph.conf configuration file] *****
fatal: [0]: FAILED! => {"msg": "No first item, sequence was empty."}
With this patch we fail before the ceph deployment with an explicit failure message (a hedged sketch of such a check follows).
Resolves: rhbz#1673687
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
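A hedged sketch of an explicit validation of that kind; the message text, IPv4-only check and use of the `ipaddr` filter (netaddr-based) are assumptions, not the exact ceph-validate task:
```
# Fail early, with a clear message, when no address on the host falls
# inside the configured monitor_address_block CIDR.
- name: fail if no ip address is available in monitor_address_block
  fail:
    msg: "No IP address from {{ monitor_address_block }} found in ansible facts"
  when:
    - monitor_address_block is defined
    - ansible_all_ipv4_addresses | ipaddr(monitor_address_block) | length == 0
```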
-
- Mar 16, 2019
-
Dimitri Savineau authored
When using the community repository we need to set the priority on the ceph repositories because we could have some conflicts with EPEL packages. In order to set the priority on the ceph repositories, we need to install the yum-plugin-priorities package (a minimal sketch follows). http://docs.ceph.com/docs/master/install/get-packages/#rpm-packages
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
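A minimal sketch of that prerequisite install; the `when` conditions are assumptions derived from the commit message:
```
# Without this plugin, the priority= setting in the ceph .repo files is
# silently ignored and EPEL packages can win over the ceph ones.
- name: install yum-plugin-priorities
  package:
    name: yum-plugin-priorities
    state: present
  when:
    - ansible_os_family == 'RedHat'
    - ceph_repository == 'community'
```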
-
- Mar 14, 2019
-
wumingqiao authored
The task will be delegated to mons[0] for all mgr hosts, so we can just run it on the first host and have the same effect.
Signed-off-by: wumingqiao <wumingqiao@beyondcent.com>
-
Dimitri Savineau authored
Currently the default crush rule value is added to the ceph config on the mon nodes as an extra configuration applied after the template generation via the ansible ini module. This implies two behaviors:
1/ On each ceph-ansible run, the ceph.conf will be regenerated via ceph-config+template and then ceph-mon+ini_file. This leads to an unnecessary daemon restart.
2/ When other ceph daemons are collocated on the monitor nodes (like mgr or rgw), the default crush rule value will be erased by the ceph.conf template (mon -> mgr -> rgw).
This patch adds the osd_pool_default_crush_rule config to the ceph template and only for the monitor nodes (like crush_rules.yml). The default crush rule id is read (if it exists) from the current ceph configuration. The default configuration is -1 (the ceph default), as illustrated below.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1638092
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
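For illustration, the kind of default this describes (a group_vars-style sketch, not the exact ceph-ansible defaults file):
```
# -1 keeps Ceph's own default crush rule unless the existing cluster
# configuration (or the operator) provides a specific rule id.
osd_pool_default_crush_rule: -1
```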
-
- Mar 12, 2019
-
Dimitri Savineau authored
With 3e32dce we can run OSD containers with numactl support. When using the numactl command in a containerized deployment we need to be sure that the corresponding package is installed on the host. The package installation is only executed when the ceph_osd_numactl_opts variable isn't empty (a minimal sketch follows).
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
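A minimal sketch of that conditional install, using the variable named in the commit message:
```
# Only pull in numactl when numactl options were actually requested for
# the OSD containers.
- name: install numactl package
  package:
    name: numactl
    state: present
  when: ceph_osd_numactl_opts | length > 0
```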
-