Re: [ceph-users] Understanding EC properties for CephFS / small files.

2019-02-16 Thread jesper
> I'm trying to understand the nuts and bolts of EC / CephFS
> We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty
> slow bulk / archive storage.

Ok, did some more searching and found this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021642.html.
Which to
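The linked thread is about allocation overhead for small files on EC pools. A back-of-the-envelope sketch of that effect is below; it assumes the pre-Pacific BlueStore default of a 64 KiB minimum allocation per chunk on HDD (bluestore_min_alloc_size_hdd) and the EC 4+2 profile from above, so treat the exact numbers as assumptions:

    # Rough sketch: space a small file can occupy on an EC 4+2 pool when
    # every chunk is padded up to the minimum allocation size on HDD.
    file_kib=16                          # logical file size (example)
    k=4; m=2                             # EC profile k+m
    min_alloc_kib=64                     # assumed bluestore_min_alloc_size_hdd
    chunks=$((k + m))                    # each write touches k data + m parity chunks
    stored_kib=$((chunks * min_alloc_kib))
    echo "A ${file_kib} KiB file occupies roughly ${stored_kib} KiB on disk"
    # -> 384 KiB here, i.e. ~24x amplification for this particular file size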

[ceph-users] Understanding EC properties for CephFS / small files.

2019-02-16 Thread jesper
Hi List. I'm trying to understand the nuts and bolts of EC / CephFS.
We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty slow bulk / archive storage.

# getfattr -n ceph.dir.layout /mnt/home/cluster/mysqlbackup
getfattr: Removing leading '/' from absolute path names
# file:
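For context, the getfattr call above reads the directory's CephFS layout. A minimal sketch of inspecting and setting it follows; the mount path comes from the post, while the pool name "cephfs_ec42" is a hypothetical placeholder:

    # Inspect the layout of a directory (as in the post above):
    getfattr -n ceph.dir.layout /mnt/home/cluster/mysqlbackup
    # Individual fields can be read as well:
    getfattr -n ceph.dir.layout.pool /mnt/home/cluster/mysqlbackup
    # On an empty directory the data pool can be changed, so files created
    # below it land on the EC pool (the pool must already be added to the
    # filesystem as a data pool; "cephfs_ec42" is a hypothetical name):
    setfattr -n ceph.dir.layout.pool -v cephfs_ec42 /mnt/home/cluster/mysqlbackup

The layout only affects files created after the change; existing files keep the pool they were written to.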

Re: [ceph-users] Second radosgw install

2019-02-16 Thread Adrian Nicolae
Hi all, I know that it seems like a stupid question, but I have some concerns about this, maybe someone can clear things up for me. I read in the official docs that, when I create an rgw server with 'ceph-deploy rgw create', the rgw scripts will automatically create the rgw system pools.
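As a quick sanity check, the pools the gateway created on first start can simply be listed; the names below are the usual defaults for the "default" zone and may differ in other setups:

    # List the pools the gateway created automatically:
    ceph osd pool ls | grep rgw
    # Typical defaults for the "default" zone:
    #   .rgw.root  default.rgw.control  default.rgw.meta  default.rgw.log
    # Data/index pools such as default.rgw.buckets.data are only created
    # lazily on first use unless you pre-create them yourself.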

Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread Maks Kowalik
Clients' experience depends on whether, at that very moment, they need to read from or write to the particular PGs involved in peering. If their objects are placed in other PGs, then I/O operations shouldn't be impacted. If clients were performing I/O ops to the PGs that went into peering, then they
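A quick, hedged sketch of how to see whether any PGs are actually stuck in peering while the warning is active:

    # While the PG_AVAILABILITY warning is showing:
    ceph health detail
    ceph pg dump_stuck inactive
    # or, on recent releases, filter by state directly:
    ceph pg ls peering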

[ceph-users] Some ceph config parameters default values

2019-02-16 Thread Oliver Freyermuth
Dear Cephalopodians, in some recent threads on this list, I have read about the "knobs":
pglog_hardlimit (false by default, available at least with 12.2.11 and 13.2.5)
bdev_enable_discard (false by default, advanced option, no description)
bdev_async_discard (false by default,
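One way to check what a running OSD is actually using for these options (the osd id is just an example; run the command on the host that carries that daemon):

    # Read the current value of one of the knobs from a running OSD:
    ceph daemon osd.0 config get bdev_enable_discard
    ceph daemon osd.0 config get bdev_async_discard
    # Or show everything discard-related that the daemon knows about:
    ceph daemon osd.0 config show | grep discard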

Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread jesper
> Hello,
> your log extract shows that:
>
> 2019-02-15 21:40:08 OSD.29 DOWN
> 2019-02-15 21:40:09 PG_AVAILABILITY warning start
> 2019-02-15 21:40:15 PG_AVAILABILITY warning cleared
>
> 2019-02-15 21:44:06 OSD.29 UP
> 2019-02-15 21:44:08 PG_AVAILABILITY warning start
> 2019-02-15 21:44:15

Re: [ceph-users] PG_AVAILABILITY with one osd down?

2019-02-16 Thread Maks Kowalik
Hello, your log extract shows that:

2019-02-15 21:40:08 OSD.29 DOWN
2019-02-15 21:40:09 PG_AVAILABILITY warning start
2019-02-15 21:40:15 PG_AVAILABILITY warning cleared

2019-02-15 21:44:06 OSD.29 UP
2019-02-15 21:44:08 PG_AVAILABILITY warning start
2019-02-15 21:44:15 PG_AVAILABILITY warning

Re: [ceph-users] Placing replaced disks to correct buckets.

2019-02-16 Thread Konstantin Shalygin
I recently replaced failed HDDs and removed them from their respective buckets as per procedure. But I’m now facing an issue when trying to place new ones back into the buckets. I’m getting an error of ‘osd nr not found’ OR ‘file or directory not found’ OR a command syntax error. I have been

[ceph-users] Placing replaced disks to correct buckets.

2019-02-16 Thread John Molefe
Hi Everyone, I recently replaced failed HDDs and removed them from their respective buckets as per procedure. But I’m now facing an issue when trying to place new ones back into the buckets. I’m getting an error of ‘osd nr not found’ OR ‘file or directory not found’ OR a command syntax error. I
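For reference, a minimal sketch of putting a replacement OSD back under its host bucket; the OSD id, weight and hostname below are placeholder values, not taken from the cluster in question:

    # Add the new OSD under its host bucket (placeholder id/weight/host):
    ceph osd crush add osd.12 9.09560 host=node-a
    # If the item already exists in the CRUSH map, adjust it instead:
    ceph osd crush set osd.12 9.09560 host=node-a
    # Verify the placement:
    ceph osd tree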

[ceph-users] Ceph auth caps 'create rbd image' permission

2019-02-16 Thread Marc Roos
Currently I am using 'profile rbd' on mon and osd. Is it possible with the caps to allow a user to list rbd images, get the state of images, read/write to images, etc., but not allow it to create new images?
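For context, the caps in use today presumably look roughly like the sketch below (client name and pool are hypothetical); whether image creation alone can be excluded while keeping write access is exactly the open question here. The profiles do offer a coarser read-only variant:

    # Roughly what the current 'profile rbd' setup looks like
    # (client.rbduser and pool "rbd" are hypothetical names):
    ceph auth get-or-create client.rbduser \
        mon 'profile rbd' \
        osd 'profile rbd pool=rbd'
    # A read-only variant exists as a profile, but that also drops writes:
    ceph auth caps client.rbduser \
        mon 'profile rbd' \
        osd 'profile rbd-read-only pool=rbd'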

Re: [ceph-users] ceph osd commit latency increase over time, until restart

2019-02-16 Thread Alexandre DERUMIER
>> There are 10 OSDs in these systems with 96GB of memory in total. We are
>> running with memory target on 6G right now to make sure there is no
>> leakage. If this runs fine for a longer period we will go to 8GB per OSD
>> so it will max out on 80GB leaving 16GB as spare.

Thanks Wido. I send
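A hedged sketch of how the per-OSD memory target quoted above is typically set; the centralized `ceph config` syntax assumes Mimic or later, otherwise the value goes into ceph.conf:

    # Set the per-OSD memory target to 6 GiB as discussed above
    # (requires the centralized config store, Mimic or later):
    ceph config set osd osd_memory_target 6442450944
    # or in ceph.conf on older releases:
    #   [osd]
    #   osd memory target = 6442450944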

Re: [ceph-users] Openstack RBD EC pool

2019-02-16 Thread Konstantin Shalygin
### ceph.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i.ewcs.ch
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
###

## ceph.ec.conf
[global]
fsid =
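For reference, the usual pattern for RBD on an erasure-coded pool is a replicated pool for image metadata plus an EC data pool with overwrites enabled; the pool and image names below are hypothetical, not taken from the config above:

    # Allow RBD (partial) overwrites on the EC pool ("ec_data" is hypothetical):
    ceph osd pool set ec_data allow_ec_overwrites true
    # Images keep metadata in a replicated pool, data in the EC pool:
    rbd create --size 10G --data-pool ec_data volumes/test-image
    # For OpenStack clients this can be made the default in the client's ceph.conf:
    #   [client.cinder]
    #   rbd default data pool = ec_data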