Hi,
A disk failed in our cephadm-managed 16.2.15 cluster. The affected OSD is
down, out, and stopped via cephadm, and I have also removed the failed
drive from the host's service definition. The cluster has finished
recovering, but the following warning persists:
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
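For completeness, the steps taken so far were roughly the following (osd.12
stands in for the actual OSD ID):

# mark the OSD out and stop its daemon via cephadm
ceph osd out osd.12
ceph orch daemon stop osd.12

# after recovery, the stale daemon still shows up in error state
ceph orch ps
ceph health detail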
Hi again Adam :-)
Would you happen to have the Bug Tracker issue for that label bug?
Thanks.
Hi,
ceph fs subvolume getpath cephfs cluster_A_subvolume
cephfs_data_pool_ec21_subvolumegroup
/volumes/cephfs_data_pool_ec21_subvolumegroup/cluster_A_subvolume/0f90806d-0d70-4fe1-9e2b-f958056ef0c9
If the subvolume got deleted, is it possible to recreate the subvolume with
the same absolute path?
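For reference, a minimal sketch of recreating it with the same names as
above; as far as I know the trailing UUID directory is generated at create
time, so I am not sure the resulting path would match the old one:

# recreate the subvolume in the same group, then check its path
ceph fs subvolume create cephfs cluster_A_subvolume --group_name cephfs_data_pool_ec21_subvolumegroup
ceph fs subvolume getpath cephfs cluster_A_subvolume cephfs_data_pool_ec21_subvolumegroup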
Good afternoon,
I am trying to set bucket policies to allow different users to access the
same bucket with different permissions, but it seems that this is not yet
supported. Am I wrong?
https://docs.ceph.com/en/reef/radosgw/bucketpolicy/#limitations
"We do not yet support setting policies on users,
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release.
The upcoming Squid release will not support Ubuntu 20.04 (Focal
Fossa). Ubuntu users planning to upgrade from Quincy to Squid will
first need to perform a distro upgrade to 22.04.
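For a cephadm-managed cluster, that two-step upgrade would look roughly
like this (a sketch, not official guidance; the distro upgrade is done
host by host first):

# on each host: upgrade Focal -> Jammy
sudo do-release-upgrade

# then upgrade Ceph itself, e.g. to this point release
ceph orch upgrade start --ceph-version 17.2.7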
Getting Ceph
* Git at git://github.com/ceph/ceph.git