> [errno 22] error connecting to the cluster
>
>
> F.
>
> On 27.08.21 17:04, Dimitri Savineau wrote:
> > Can you try to update the `mon host` value with brackets ?
> >
> > mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:
> 192.168.1
Can you try to update the `mon host` value with brackets ?
mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:192.168.1.51:3300,v1:192.168.1.51:6789],[v2:192.168.1.52:3300,v1:192.168.1.52:6789]
https://docs.ceph.com/en/latest/rados/configuration/msgr2/#updating-ceph-conf-and-mon-host
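If it helps, a quick way to double-check which v2/v1 addresses the monitors
actually advertise (a side note, assuming an admin keyring is present on the node) is:
$ ceph mon dump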
If you're using ceph-volume then you have an extra systemd unit called
ceph-volume@lvm-- [1]
So you probably want to disable that one too.
[1] https://docs.ceph.com/en/latest/ceph-volume/systemd/
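For example (a sketch, not from the original thread; replace the id/uuid
placeholders with the values shown by 'systemctl list-units | grep ceph-volume'):
# systemctl disable ceph-volume@lvm-<osd id>-<osd uuid>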
Regards,
Dimitri
On Wed, Aug 25, 2021 at 4:50 AM Marc wrote:
> Probably ceph-disk osd's not?
> If rocky8 had a ceph-common I would go with that.
rocky/almalinux/centos/rhel 8 don't provide a ceph-common package in their base
repositories.
ceph-common was available in the base repositories on el7, but that's no longer
the case on el8.
> It would (presumably) be tested more, since it comes with the original
distro.
I would avoid that since you won't
Hi Gregory,
> $ time ansible-playbook infrastructure-playbooks/rolling_update.yml -e
ireallymeanit=yes
You mentioned you were using /etc/ansible/hosts as the ansible inventory
file. But where are the group_vars and host_vars directories located?
It looks like you have your group_vars and
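(As an aside, with purely illustrative paths: Ansible looks for group_vars/ and
host_vars/ next to the inventory file and in the playbook directory, so with
/etc/ansible/hosts as inventory that would typically be
/etc/ansible/group_vars/all.yml and /etc/ansible/host_vars/<hostname>.yml, or
the equivalent directories inside the ceph-ansible checkout.)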
Looks related to https://tracker.ceph.com/issues/51620
Hopefully this will be backported to Pacific and included in 16.2.6
Regards,
Dimitri
On Fri, Aug 6, 2021 at 9:21 AM Arnaud MARTEL <
arnaud.mar...@i2bc.paris-saclay.fr> wrote:
> Peter,
>
> I had the same error and my workaround was to
Hi,
> As Marc mentioned, you would need to disable unsupported features but
> you are right that the kernel doesn't make it to that point.
I remember disabling unsupported features on el7 nodes (kernel 3.10) with
Nautilus.
But the error on the map command is usually more obvious.
$ rbd feature
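(For reference, and only as an assumption about the command that was cut off
above, disabling features usually looks like this, with pool, image and feature
names as placeholders:)
$ rbd feature disable <pool>/<image> object-map fast-diff deep-flatten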
> Thanks
>
> On Fri 23 Jul 2021, 17:36 Dimitri Savineau wrote:
>
>> Hi,
>>
>> This looks similar to https://tracker.ceph.com/issues/46687
>>
>> Since you want to use hdd devices to bluestore data and ssd devices for
>> bluestore db, I would sugge
Hi,
This looks similar to https://tracker.ceph.com/issues/46687
Since you want to use hdd devices for bluestore data and ssd devices for
bluestore db, I would suggest using the rotational [1] filter instead of
dealing with the size filter.
---
service_type: osd
service_id: osd_spec_default
placement:
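(The spec above is cut off; purely as a sketch of what a rotational-based spec
can look like, with the host pattern as a placeholder, not a quote of the
original mail:)
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0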
Hi,
That's probably related to https://tracker.ceph.com/issues/51571
Regards,
Dimitri
On Wed, Jul 14, 2021 at 8:17 AM Eugen Block wrote:
> Hi,
>
> do you see the daemon on that iscsi host(s) with 'cephadm ls'? If the
> answer is yes, you could remove it with cephadm, too:
>
> cephadm
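(The quoted command is cut off; for what it's worth, removing a daemon with
cephadm generally takes a form like the one below, where the daemon name comes
from 'cephadm ls' and the fsid is the cluster's, both placeholders here:)
# cephadm rm-daemon --name iscsi.<daemon name> --fsid <cluster fsid>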
bin:/usr/sbin:/usr/bin:/sbin:/bin
> --> FileNotFoundError: [Errno 2] No such file or directory: 'systemctl':
> 'systemctl'
>
> Best regards,
>
> Nico
>
> Dimitri Savineau writes:
>
> > Hi,
> >
> >> My background is that ceph-volume activate does not work on non-s
Hi,
> My background is that ceph-volume activate does not work on non-systemd
Linux distributions
Why not use the --no-systemd option with the ceph-volume activate
command?
The systemd part only enables and starts the service; the tmpfs part
should still work if you're not using systemd.
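As an illustration (osd id and fsid are placeholders):
# ceph-volume lvm activate --no-systemd <osd id> <osd fsid>
or, to activate every OSD discovered on the node:
# ceph-volume lvm activate --all --no-systemd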
Hi,
https://download.ceph.com/rpm-octopus/ symlink isn't updated to the new
release [1] and the content is still 15.2.9 [2]
As a consequence, the new Octopus container images 15.2.10 can't be built.
[1] https://download.ceph.com/rpm-15.2.10/
[2] https://download.ceph.com/rpm-15.2.9/
Regards,
I think you should open an issue on the ceph tracker, as it seems the cephadm
upgrade workflow doesn't support multi-arch container images.
docker.io/ceph/ceph:v15.2.7 is a manifest list [1] which, depending on the host
architecture (x86_64 or ARMv8), will provide the right container image.
As far as I know, the issue isn't specific to container deployments: deployments
using packages (rpm or deb) are also affected (at least on CentOS 8 and Ubuntu
20.04 Focal).
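(If useful, one way to check what the manifest list resolves to, assuming a
docker client with manifest support:)
$ docker manifest inspect docker.io/ceph/ceph:v15.2.7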
Hi Andreas,
> Is this assumption correct? The documentation
> (https://docs.ceph.com/projects/ceph-ansible/en/latest/day-2/upgrade.html) is
> short on
> this.
That's right, if you run the rolling_update.yml playbook without changing the
ceph_stable_release in the group_vars then you will
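(The message is cut off above; as a rough, non-authoritative sketch of the
usual flow, where the release name is only an example: set ceph_stable_release
in group_vars, then re-run the playbook.)
# group_vars/all.yml
ceph_stable_release: pacific
$ ansible-playbook infrastructure-playbooks/rolling_update.yml -e ireallymeanit=yes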
> We have a 23 node cluster and normally when we add OSDs they end up
> mounting like
> this:
>
> /dev/sde1 3.7T 2.0T 1.8T 54% /var/lib/ceph/osd/ceph-15
>
> /dev/sdj1 3.7T 2.0T 1.7T 55% /var/lib/ceph/osd/ceph-20
>
> /dev/sdd1 3.7T 2.1T 1.6T 58%
journal_devices is for filestore, and filestore isn't supported with cephadm.
https://tracker.ceph.com/issues/46558
Did you try to restart the dashboard mgr module after your change?
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard
Regards,
You can try ceph-ansible, which supports both baremetal and containerized
deployments.
https://github.com/ceph/ceph-ansible
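(As a pointer, not part of the original message: the switch between the two
modes is a group_vars setting; the variable below comes from the ceph-ansible
sample group_vars:)
# group_vars/all.yml
containerized_deployment: true   # false for a baremetal (package-based) deployment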
Did you take a look at the cephadm logs (/var/log/ceph/ceph.cephadm.log)?
This is probably a duplicate of
https://github.com/ceph/ceph-ansible/issues/4404
Regards,
Dimitri
On Thu, Aug 29, 2019 at 9:42 AM Sebastien Han wrote:
> +Guillaume Abrioux and +Dimitri Savineau
> Thanks!
> –
> Sébastien Han
> Principal Software Engineer, Storage Archit