We investigated the issue and raised debug_mon to 20. During even a small
osdmap change we get many messages like this, for every PG of each pool
(across the whole cluster):
> 2018-12-25 19:28:42.426776 7f075af7d700 20 mon.1@0(leader).osd e1373789
> prime_pg_temp next_up === next_acting now, clear pg_temp
> 2018-12-25 19:28:4
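For anyone wanting to reproduce this, the mon debug level can be changed at runtime without a restart; a minimal sketch (assumes admin keyring access, and that the default level you restore to is the stock 1/5):

```shell
# Raise the mon debug level on all monitors at runtime
ceph tell mon.* injectargs '--debug_mon 20'

# ... trigger or wait for an osdmap change and collect the logs ...

# Restore the default level so the log does not keep growing
ceph tell mon.* injectargs '--debug_mon 1/5'
```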
Hi all,
Just wanted to explain my experience on how to stop the whole cluster and
change the IPs.
First, we shut down the cluster with this procedure:
1. Stop the clients from using the RBD images/RADOS Gateway on this
cluster, and any other clients.
2. The cluster must be in a healthy state.
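Before stopping the daemons it is also common to set the recovery-related OSD flags so nothing is marked out or rebalanced while the cluster is down; a sketch with the standard flag names (the exact set people use varies):

```shell
# Confirm HEALTH_OK first
ceph -s

# Freeze the cluster so OSDs are not marked out while stopped
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
```

After the IP change and restart, unset the same flags in reverse with `ceph osd unset <flag>`.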
I have seen several posts on the list about bucket policies; how do you
write one for a multitenant user such as Tenant$tenuser?
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
    "Action": "s3:PutObjectAcl",
    "Resource": [
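As far as I understand RGW multitenancy, the tenant name goes in the account-ID slot of the ARN, so `Tenant$tenuser` would be written as `arn:aws:iam::Tenant:user/tenuser`. A sketch of applying such a policy with the AWS CLI (the bucket name, resource ARN, and endpoint here are placeholders, not confirmed against your setup):

```shell
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::Tenant:user/tenuser"]},
    "Action": "s3:PutObjectAcl",
    "Resource": ["arn:aws:s3::Tenant:mybucket/*"]
  }]
}
EOF

aws --endpoint-url http://rgw.example.com:8080 \
    s3api put-bucket-policy --bucket mybucket --policy file://policy.json
```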
Hector,
One more thing to mention - after the expansion please run fsck using
ceph-bluestore-tool before starting the OSD daemon, and collect another
log using the CEPH_ARGS variable.
Thanks,
Igor
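A sketch of that fsck run with verbose logging captured to a file (the OSD path and log file name are assumptions; the OSD must be stopped first):

```shell
# Run fsck on the stopped OSD, logging bluestore/bluefs at level 20
CEPH_ARGS="--log-file /tmp/osd1-fsck.log --debug-bluestore 20 --debug-bluefs 20" \
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-1
```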
On 12/27/2018 2:41 PM, Igor Fedotov wrote:
Hi Hector,
I've never tried bluefs-bdev-expand over encrypted volumes but it works
absolutely fine for me in other cases.
So it would be nice to troubleshoot this a bit.
I suggest doing the following:
1) Back up the first 8K of all OSD.1 devices (block, db and wal) using dd.
This will probably al
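That backup step might look like this (the device entries are the usual symlinks under the OSD data dir; the output file names are my own choice):

```shell
cd /var/lib/ceph/osd/ceph-1
# Save the first 8 KiB of each BlueStore device before touching anything
for dev in block block.db block.wal; do
    [ -e "$dev" ] && dd if="$dev" of="/root/osd1-${dev}.first8k" bs=4096 count=2
done
```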
Hi list,
I'm slightly expanding the underlying LV for two OSDs and figured I
could use ceph-bluestore-tool to avoid having to re-create them from
scratch.
I first shut down the OSD, expanded the LV, and then ran:
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
I forgot
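For reference, the whole sequence I followed can be sketched as below (the VG/LV names and the size increment are placeholders; the fsck at the end follows the advice given earlier in the thread):

```shell
# Stop the OSD before touching its devices
systemctl stop ceph-osd@0

# Grow the underlying LV (names are placeholders)
lvextend -L +10G /dev/ceph-vg/osd-0-block

# Let BlueFS pick up the extra space, then verify before restarting
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0

systemctl start ceph-osd@0
```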