[ceph-users] Re: Issue installing radosgw on debian 10

2021-08-27 Thread Dimitri Savineau
> [errno 22] error connecting to the cluster
>
> F.
>
> On 27.08.21 17:04, Dimitri Savineau wrote:
> > Can you try to update the `mon host` value with brackets?
> >
> > mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:192.168.1

[ceph-users] Re: Issue installing radosgw on debian 10

2021-08-27 Thread Dimitri Savineau
Can you try to update the `mon host` value with brackets?

mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:192.168.1.51:3300,v1:192.168.1.51:6789],[v2:192.168.1.52:3300,v1:192.168.1.52:6789]

https://docs.ceph.com/en/latest/rados/configuration/msgr2/#updating-ceph-conf-and-mon-host
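For context, a minimal ceph.conf sketch with the bracketed msgr2 syntax (monitor addresses taken from the thread; the fsid is a placeholder):

    [global]
    fsid = <cluster-fsid>
    mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:192.168.1.51:3300,v1:192.168.1.51:6789],[v2:192.168.1.52:3300,v1:192.168.1.52:6789]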

[ceph-users] Re: Disable autostart of old services

2021-08-26 Thread Dimitri Savineau
If you're using ceph-volume then you have an extra systemd unit called ceph-volume@lvm-<id>-<uuid> [1], so you probably want to disable that one too.

[1] https://docs.ceph.com/en/latest/ceph-volume/systemd/

Regards,

Dimitri

On Wed, Aug 25, 2021 at 4:50 AM Marc wrote:
> Probably ceph-disk osd's not?
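As a hedged example, disabling one of those units for OSD 0 might look like this (the id and fsid placeholders must match the unit name actually present on the host):

    # the instance name encodes the OSD id and its fsid
    systemctl disable ceph-volume@lvm-0-<osd-fsid>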

[ceph-users] Re: Ceph packages for Rocky Linux

2021-08-26 Thread Dimitri Savineau
> If rocky8 had a ceph-common I would go with that.

rocky/almalinux/centos/rhel 8 don't have a ceph-common package in the base packages. This was true for el7 but not for el8.

> It would (presumably) be tested more, since it comes with the original distro.

I would avoid that since you won't
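A hedged sketch of pulling ceph-common from the upstream download.ceph.com repository on el8 instead (the release in the URL is an assumption; pick the one matching your cluster):

    # installs /etc/yum.repos.d/ceph.repo pointing at the Pacific el8 repo
    dnf install -y https://download.ceph.com/rpm-pacific/el8/noarch/ceph-release-1-1.el8.noarch.rpm
    dnf install -y ceph-common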

[ceph-users] Re: [ceph-ansible] rolling-upgrade variables not present

2021-08-19 Thread Dimitri Savineau
Hi Gregory,

> $ time ansible-playbook infrastructure-playbooks/rolling_update.yml -e ireallymeanit=yes

You mentioned you were using /etc/ansible/hosts as the ansible inventory file. But where are the group_vars and host_vars directories located? It looks like you have your group_vars and
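For reference, Ansible resolves group_vars and host_vars relative to the inventory file (and to the playbook directory), so with /etc/ansible/hosts the expected layout is roughly this sketch (filenames are illustrative):

    /etc/ansible/
    ├── hosts
    ├── group_vars/
    │   ├── all.yml
    │   └── osds.yml
    └── host_vars/
        └── ceph-node1.yml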

[ceph-users] Re: Cephadm Upgrade from Octopus to Pacific

2021-08-06 Thread Dimitri Savineau
Looks related to https://tracker.ceph.com/issues/51620

Hopefully this will be backported to Pacific and included in 16.2.6

Regards,

Dimitri

On Fri, Aug 6, 2021 at 9:21 AM Arnaud MARTEL <arnaud.mar...@i2bc.paris-saclay.fr> wrote:
> Peter,
>
> I had the same error and my workaround was to

[ceph-users] Re: unable to map device with krbd on el7 with ceph nautilus

2021-07-26 Thread Dimitri Savineau
Hi,

> As Marc mentioned, you would need to disable unsupported features but
> you are right that the kernel doesn't make it to that point.

I remember disabling unsupported features on el7 nodes (kernel 3.10) with Nautilus. But the error on the map command is usually more obvious.

$ rbd feature
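The preview cuts off mid-command; a hedged sketch of what disabling unsupported features before mapping usually looks like (pool and image names are placeholders):

    $ rbd feature disable rbd/myimage object-map fast-diff deep-flatten
    $ rbd map rbd/myimage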

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Dimitri Savineau
> Thanks
>
> On Fri, 23 Jul 2021 at 17:36, Dimitri Savineau wrote:
>
>> Hi,
>>
>> This looks similar to https://tracker.ceph.com/issues/46687
>>
>> Since you want to use hdd devices for bluestore data and ssd devices for
>> bluestore db, I would sugge

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Dimitri Savineau
Hi,

This looks similar to https://tracker.ceph.com/issues/46687

Since you want to use hdd devices for bluestore data and ssd devices for bluestore db, I would suggest using the rotational [1] filter instead of dealing with the size filter.

---
service_type: osd
service_id: osd_spec_default
placement:
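The spec above is cut off in the archive; a hedged sketch of a complete OSD spec built on the rotational filter (the placement pattern is an assumption):

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1   # HDDs for bluestore data
      db_devices:
        rotational: 0   # SSDs for bluestore db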

[ceph-users] Re: cephadm stuck in deleting state

2021-07-14 Thread Dimitri Savineau
Hi,

That's probably related to https://tracker.ceph.com/issues/51571

Regards,

Dimitri

On Wed, Jul 14, 2021 at 8:17 AM Eugen Block wrote:
> Hi,
>
> do you see the daemon on that iscsi host(s) with 'cephadm ls'? If the
> answer is yes, you could remove it with cephadm, too:
>
> cephadm
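The quoted command is cut off; a hedged sketch of removing a leftover daemon with cephadm on the host itself (fsid and daemon name are placeholders):

    # list the daemons cephadm knows about on this host
    cephadm ls
    # remove the stale one by name
    cephadm rm-daemon --fsid <cluster-fsid> --name iscsi.<id> --force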

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Dimitri Savineau
> bin:/usr/sbin:/usr/bin:/sbin:/bin
> --> FileNotFoundError: [Errno 2] No such file or directory: 'systemctl': 'systemctl'
>
> Best regards,
>
> Nico
>
> Dimitri Savineau writes:
>
> > Hi,
> >
> >> My background is that ceph-volume activate does not work on non-s

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Dimitri Savineau
Hi,

> My background is that ceph-volume activate does not work on non-systemd Linux distributions

Why not use the --no-systemd option with the ceph-volume activate command? The systemd part only enables and starts the service, but the tmpfs part should work if you're not using systemd
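A hedged sketch of the activation call (OSD id and fsid are placeholders):

    ceph-volume lvm activate --no-systemd <osd-id> <osd-fsid>
    # or activate every OSD discovered on the host:
    ceph-volume lvm activate --all --no-systemd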

[ceph-users] v15.2.10 Octopus released

2021-03-18 Thread Dimitri Savineau
Hi,

The https://download.ceph.com/rpm-octopus/ symlink isn't updated to the new release [1] and its content is still 15.2.9 [2]. As a consequence, the new Octopus 15.2.10 container images can't be built.

[1] https://download.ceph.com/rpm-15.2.10/
[2] https://download.ceph.com/rpm-15.2.9/

Regards,

[ceph-users] Re: Upgrade to 15.2.7 fails on mixed x86_64/arm64 cluster

2020-12-08 Thread Dimitri Savineau
I think you should open an issue on the ceph tracker, as it seems the cephadm upgrade workflow doesn't support multi-arch container images. docker.io/ceph/ceph:v15.2.7 is a manifest list [1] which, depending on the host architecture (x86_64 or ARMv8), will provide the right container image.
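To inspect the manifest list yourself, something like this should work (podman has an equivalent `podman manifest inspect`):

    docker manifest inspect docker.io/ceph/ceph:v15.2.7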

[ceph-users] Re: Larger number of OSDs, cheroot, cherrypy, limits + containers == broken

2020-12-08 Thread Dimitri Savineau
As far as I know, the issue isn't specific to containers, as deployments using packages (rpm or deb) are also affected (at least on CentOS 8 and Ubuntu 20.04 Focal).

[ceph-users] Re: Doing minor version update of Ceph cluster with ceph-ansible and rolling-update playbook

2020-09-28 Thread Dimitri Savineau
Hi Andreas,

> Is this assumption correct? The documentation
> (https://docs.ceph.com/projects/ceph-ansible/en/latest/day-2/upgrade.html) is short on this.

That's right: if you run the rolling_update.yml playbook without changing the ceph_stable_release in the group_vars then you will
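A hedged sketch of such a minor-version run, reusing the invocation quoted elsewhere in this archive (the inventory path is an assumption; ceph_stable_release stays untouched in group_vars):

    ansible-playbook -i /etc/ansible/hosts infrastructure-playbooks/rolling_update.yml -e ireallymeanit=yes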

[ceph-users] Re: OSDs and tmpfs

2020-09-11 Thread Dimitri Savineau
> We have a 23 node cluster and normally when we add OSDs they end up mounting like this:
>
> /dev/sde1  3.7T  2.0T  1.8T  54%  /var/lib/ceph/osd/ceph-15
> /dev/sdj1  3.7T  2.0T  1.7T  55%  /var/lib/ceph/osd/ceph-20
> /dev/sdd1  3.7T  2.1T  1.6T  58%

[ceph-users] Re: cephadm didn't create journals

2020-09-08 Thread Dimitri Savineau
journal_devices is for filestore, and filestore isn't supported with cephadm.

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-09-08 Thread Dimitri Savineau
https://tracker.ceph.com/issues/46558

[ceph-users] Re: cephadm grafana url

2020-09-02 Thread Dimitri Savineau
Did you try to restart the dashboard mgr module after your change?

# ceph mgr module disable dashboard
# ceph mgr module enable dashboard

Regards,
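Assuming the change in question was the Grafana API URL, the full sequence might look like this sketch (the URL is a placeholder):

    # ceph dashboard set-grafana-api-url https://<grafana-host>:3000
    # ceph mgr module disable dashboard
    # ceph mgr module enable dashboard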

[ceph-users] Re: Ceph SSH orchestrator?

2020-07-03 Thread Dimitri Savineau
You can try to use ceph-ansible, which supports baremetal and containerized deployments.

https://github.com/ceph/ceph-ansible

[ceph-users] Re: How to debug ssh: ceph orch host add ceph01 10.10.1.1

2020-04-24 Thread Dimitri Savineau
Did you take a look at the cephadm logs (/var/log/ceph/ceph.cephadm.log)?

[ceph-users] Re: Ceph Ansible - - name: set grafana_server_addr fact - ipv4

2019-09-03 Thread Dimitri Savineau
This is probably a duplicate of https://github.com/ceph/ceph-ansible/issues/4404

Regards,

Dimitri

On Thu, Aug 29, 2019 at 9:42 AM Sebastien Han wrote:
> +Guillaume Abrioux and +Dimitri Savineau
> Thanks!
> –
> Sébastien Han
> Principal Software Engineer, Storage Archit