Re: [ceph-users] Does "ceph df" use "bogus" copies factor instead of (k, m) for erasure coded pool?

2019-04-12 Thread Igor Podlesny
And as to the min_size choice -- since you've replied only to that part of my message. On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote: > On Fri, Apr 12, 2019 at 9:30 PM Igor Podlesny wrote: > > For example, an EC pool with the default profile (2, 1) has bogus "sizing" > > params (size=3,

Re: [ceph-users] Does "ceph df" use "bogus" copies factor instead of (k, m) for erasure coded pool?

2019-04-12 Thread Igor Podlesny
On Sat, 13 Apr 2019 at 06:54, Paul Emmerich wrote: > > Please don't use an EC pool with 2+1, that configuration makes no sense. That's rather ironic, given that (2, 1) is the default EC profile and is, moreover, described in the Ceph documentation. > min_size 3 is the default for that pool, yes. That
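For reference, the profile in question can be checked directly; a minimal sketch, with the output values assumed from a stock Luminous install:

    $ ceph osd erasure-code-profile get default
    k=2
    m=1
    plugin=jerasure
    technique=reed_sol_van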

Re: [ceph-users] v12.2.12 Luminous released

2019-04-12 Thread Paul Emmerich
I think the most notable change here is the backport of the new bitmap allocator, but that's missing completely from the change log. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49
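For anyone wanting to try the backported allocator, it is selected through BlueStore config options; a minimal, untested ceph.conf sketch (OSD restart required):

    [osd]
    # switch BlueStore's main and BlueFS allocators to the new bitmap allocator
    bluestore_allocator = bitmap
    bluefs_allocator = bitmap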

Re: [ceph-users] Does "ceph df" use "bogus" copies factor instead of (k, m) for erasure coded pool?

2019-04-12 Thread Paul Emmerich
Please don't use an EC pool with 2+1, that configuration makes no sense. min_size 3 is the default for that pool, yes. That means your data will be unavailable if any OSD is offline. Reducing min_size to 2 means you are accepting writes when you cannot guarantee durability, which will cause
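To see the settings Paul refers to on an existing 2+1 pool, they can be queried per pool; a sketch, with the pool name as a placeholder:

    $ ceph osd pool get <ecpool> size        # reports k+m = 3
    $ ceph osd pool get <ecpool> min_size    # 3 on Luminous: any OSD down in a PG blocks I/O
    # lowering it trades durability for availability, exactly the trade-off warned about above
    $ ceph osd pool set <ecpool> min_size 2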

[ceph-users] Does "ceph df" use "bogus" copies factor instead of (k, m) for erasure coded pool?

2019-04-12 Thread Igor Podlesny
For example, an EC pool with the default profile (2, 1) has bogus "sizing" params (size=3, min_size=3). min_size=3 is wrong as far as I know, and it's been fixed in recent releases (but not in Luminous). But besides that, it looks like pool usage isn't calculated according to EC overhead but as if it was
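As a rough sanity check of what ceph df ought to report for such a pool, assuming usable space scales with k/(k+m) rather than with a replica factor of 3:

    # k=2, m=1 gives an overhead factor of (k+m)/k = 1.5, so usable ~= raw * 2/3
    # if the size=3 figure were applied as a replica factor instead, usable ~= raw / 3
    $ ceph df detail    # compare the EC pool's MAX AVAIL against the cluster's raw free space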

Re: [ceph-users] Ceph Object storage for physically separating tenants storage infrastructure

2019-04-12 Thread Gregory Farnum
Yes, you would do this by setting up separate data pools for segregated clients, giving those pools a CRUSH rule placing them on their own servers, and, if using S3, assigning the clients to them using either wholly separate instances or perhaps separate zones and the S3 placement options. -Greg On
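A sketch of how that could look for one tenant; the root, rule and pool names here are made up for illustration:

    # place tenant A's data pool on its own CRUSH subtree (a dedicated set of hosts)
    $ ceph osd crush rule create-replicated tenant-a-rule tenant-a-root host
    $ ceph osd pool set tenant-a.rgw.buckets.data crush_rule tenant-a-rule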

Re: [ceph-users] Remove RBD mirror?

2019-04-12 Thread Jason Dillaman
On Fri, Apr 12, 2019 at 10:48 AM Magnus Grönlund wrote: > On Fri, 12 Apr 2019 at 16:37, Jason Dillaman wrote: >> On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund wrote: >> > Hi Jason, >> > Tried to follow the instructions and setting the debug level to 15 worked >> > OK, but

[ceph-users] Limits of mds bal fragment size max

2019-04-12 Thread Benjeman Meekhof
We have a user syncing data with some kind of rsync + hardlink based system, creating/removing large numbers of hard links. We've encountered many of the issues with stray inode re-integration described in the thread and tracker below. As noted, one fix is to increase mds_bal_fragment_size_max
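A sketch of the knobs involved; the value shown is illustrative, not a recommendation, and on Luminous the setting would go through injectargs or ceph.conf rather than the config database:

    # raise the per-fragment cap (default 100000)
    $ ceph config set mds mds_bal_fragment_size_max 200000
    # watch how many strays are pending re-integration (assuming the usual mds_cache counter names)
    $ ceph daemon mds.<id> perf dump mds_cache | grep -i stray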

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Charles Alva
Got it. Thanks, Mark! Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 12, 2019 at 10:53 PM Mark Nelson wrote: > They have the same issue, but depending on the SSD may be better at > absorbing the extra IO if network or CPU are bigger bottlenecks. That's > one of the reasons

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Mark Nelson
They have the same issue, but depending on the SSD may be better at absorbing the extra IO if network or CPU are bigger bottlenecks. That's one of the reasons that a lot of folks like to put the DB on flash for HDD-based clusters. It's still possible to oversubscribe them, but you've got
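For context, putting the DB on flash is done at OSD-creation time; a sketch with placeholder device paths:

    # HDD for data, a flash partition/LV for the RocksDB DB (the WAL follows the DB by default)
    $ ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1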

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Charles Alva
Thanks Mark, this is interesting. I'll take a look at the links you provided. Does the rocksdb compaction issue only affect HDDs, or do SSDs have the same issue? Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 12, 2019, 9:01 PM Mark Nelson wrote: > Hi Charles, > > > Basically the

[ceph-users] can not change log level for ceph-client.libvirt online

2019-04-12 Thread lin zhou
Hi cephers, we have a Ceph cluster used with OpenStack. Quite a while ago we set debug_rbd in ceph.conf and then booted VMs, but those debug settings no longer exist in the config. Now we find that ceph-client.libvirt.log has grown to 200GB, but I cannot use ceph --admin-daemon ceph-client.libvirt.asok config set
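A sketch of the admin-socket route, assuming the socket lives under /var/run/ceph; the exact asok filename on the hypervisor is an assumption:

    $ ls /var/run/ceph/ceph-client.libvirt.*.asok
    # silence rbd logging on the running client, then verify
    $ ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.<pid>.asok config set debug_rbd 0/0
    $ ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.<pid>.asok config show | grep debug_rbd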

Re: [ceph-users] RadosGW ops log lag?

2019-04-12 Thread Aaron Bassett
Ok, thanks. Is the expectation that events will be available on that socket as soon as they occur, or is it more of a best-effort situation? I'm just trying to nail down which side of the socket might be lagging. It's pretty difficult to recreate this, as I have to hit the cluster very hard to get

Re: [ceph-users] RadosGW ops log lag?

2019-04-12 Thread Matt Benjamin
Hi Aaron, I don't think that exists currently. Matt On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett wrote: > > I have a radosgw log centralizer that we use for an audit trail for data > access in our ceph clusters. We've enabled the ops log socket and added > logging of the

[ceph-users] RadosGW ops log lag?

2019-04-12 Thread Aaron Bassett
I have a radosgw log centralizer that we use for an audit trail for data access in our Ceph clusters. We've enabled the ops log socket and added logging of the http_authorization header to it: rgw log http headers = "http_authorization" rgw ops log socket path = /var/run/ceph/rgw-ops.sock
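For context, a sketch of how such a socket can be consumed on the gateway host; using socat here is an assumption, the ops log entries arrive as JSON on the unix socket:

    # attach to the ops log socket and stream the JSON records to stdout
    $ socat -u UNIX-CONNECT:/var/run/ceph/rgw-ops.sock STDOUT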

Re: [ceph-users] Remove RBD mirror?

2019-04-12 Thread Magnus Grönlund
On Fri, 12 Apr 2019 at 16:37, Jason Dillaman wrote: > On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund > wrote: > > > > Hi Jason, > > > > Tried to follow the instructions and setting the debug level to 15 > worked OK, but the daemon appeared to silently ignore the restart command > (nothing

Re: [ceph-users] Remove RBD mirror?

2019-04-12 Thread Jason Dillaman
On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund wrote: > > Hi Jason, > > Tried to follow the instructions and setting the debug level to 15 worked OK, > but the daemon appeared to silently ignore the restart command (nothing > indicating a restart seen in the log). > So I set the log level to

Re: [ceph-users] Remove RBD mirror?

2019-04-12 Thread Magnus Grönlund
Hi Jason, Tried to follow the instructions and setting the debug level to 15 worked OK, but the daemon appeared to silently ignore the restart command (nothing indicating a restart seen in the log). So I set the log level to 15 in the config file and restarted the rbd mirror daemon. The output
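For readers following the thread, the admin-socket route being attempted typically looks roughly like this; the asok filename is an assumption:

    # bump rbd-mirror logging on the running daemon without a restart
    $ ceph --admin-daemon /var/run/ceph/ceph-client.rbd-mirror.<id>.asok config set debug_rbd_mirror 15
    # check pool/image state from the mirror daemon's point of view
    $ rbd mirror pool status --verbose <pool>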

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Mark Nelson
Hi Charles, Basically the goal is to reduce write-amplification as much as possible. The deeper the rocksdb hierarchy gets, the worse the write-amplification for compaction is going to be. If you look at the OSD logs you'll see the write-amp factors for compaction in the rocksdb
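One way to see the compaction activity described here is via the OSD's perf counters and log; a sketch assuming a BlueStore OSD with the default log location:

    # the "rocksdb" section of the perf counters carries cumulative compaction time and bytes
    $ ceph daemon osd.0 perf dump
    # rocksdb also prints per-level compaction statistics into the OSD log
    $ grep -i compaction /var/log/ceph/ceph-osd.0.log | tail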

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-12 Thread Alfredo Deza
On Thu, Apr 11, 2019 at 4:23 PM Yury Shevchuk wrote: > > Hi Igor! > > I have upgraded from Luminous to Nautilus and now slow device > expansion works indeed. The steps are shown below to round out the > topic. > > node2# ceph osd df > ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
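For reference, the expansion step under discussion is the one in ceph-bluestore-tool, run with the OSD stopped; a sketch with the OSD id as a placeholder:

    $ systemctl stop ceph-osd@2
    # grow BlueFS onto the enlarged DB device, then confirm the new size in the label
    $ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2
    $ ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-2/block.db
    $ systemctl start ceph-osd@2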

[ceph-users] Ceph Object storage for physically separating tenants storage infrastructure

2019-04-12 Thread Varun Singh
Hi, We have a requirement to build an object storage solution with a thin layer of customization on top. This is to be deployed in our own data centre. We will be using the objects stored in this system at various places in our business workflow. The solution should support multi-tenancy. Multiple

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-12 Thread Igor Fedotov
On 4/11/2019 11:23 PM, Yury Shevchuk wrote: Hi Igor! I have upgraded from Luminous to Nautilus and now slow device expansion works indeed. The steps are shown below to round out the topic. node2# ceph osd df ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-12 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:53 AM Jason Dillaman wrote: > On Thu, Apr 11, 2019 at 8:49 AM Erik McCormick wrote: >> On Thu, Apr 11, 2019, 8:39 AM Erik McCormick wrote: >>> On Thu, Apr 11, 2019, 12:07 AM Brayan Perera wrote: >>>> Dear Jason,