Re: [ceph-users] Cluster unusable after 50% full, even with index sharding

2018-04-13 Thread Christian Balzer
Hello, On Fri, 13 Apr 2018 11:59:01 -0500 Robert Stanford wrote:
> I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals
> and spinning disks. Our performance before was acceptable for our purposes
> - 300+MB/s simultaneous transmit and receive. Now that we're up to about

[ceph-users] Cluster unusable after 50% full, even with index sharding

2018-04-13 Thread Robert Stanford
I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals and spinning disks. Our performance before was acceptable for our purposes - 300+MB/s simultaneous transmit and receive. Now that we're up to about 50% of our total storage capacity (65/120TB, say), the write performance
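A minimal sketch of how bucket index sharding is usually inspected on a Luminous-era cluster; the bucket name below is illustrative and not taken from this thread:

  # Report index shards that exceed the recommended objects-per-shard limit
  radosgw-admin bucket limit check

  # Show usage and index stats for a hypothetical bucket
  radosgw-admin bucket stats --bucket=mybucket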

Re: [ceph-users] rbd-nbd not resizing even after kernel tweaks

2018-04-13 Thread Alex Gorbachev
On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev wrote:
> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman wrote:
>> If you run "partprobe" after you resize in your second example, is the
>> change visible in "parted"?
>
> No, partprobe does not
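A hedged sketch of the resize-and-refresh sequence under discussion; the pool, image name, and nbd device are assumptions, not details from this thread:

  # Grow the RBD image on the cluster side (names are illustrative)
  rbd resize --size 20G rbd/myimage

  # On the client, check whether the mapped nbd device reports the new size
  blockdev --getsize64 /dev/nbd0

  # Re-read the partition table, then verify with parted
  partprobe /dev/nbd0
  parted /dev/nbd0 print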

[ceph-users] How much damage have I done to RGW hardcore-wiping a bucket out of its existence?

2018-04-13 Thread Katie Holly
Hi everyone, I found myself in a situation where dynamic sharding and writing data to a bucket containing a little more than 5M objects at the same time caused corruption of the data, rendering the entire bucket unusable. I tried several solutions to fix this bucket and ended up ditching it.
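As general background, a hedged sketch of commands commonly used to inspect a bucket's index and resharding state; the bucket name is illustrative, and this is not presented as the fix attempted in this thread:

  # List pending/ongoing reshard operations
  radosgw-admin reshard list

  # Check the bucket index, optionally attempting a repair
  radosgw-admin bucket check --bucket=mybucket
  radosgw-admin bucket check --bucket=mybucket --fix --check-objects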

[ceph-users] Error Creating OSD

2018-04-13 Thread Rhian Resnick
Evening,
When attempting to create an OSD we receive the following error.
[ceph-admin@ceph-storage3 ~]$ sudo ceph-volume lvm create --bluestore --data /dev/sdu
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
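For reference, a minimal sketch of a common retry path when ceph-volume rejects a device, assuming the disk can be safely wiped and that the --destroy flag is available in your Luminous release:

  # Confirm the bootstrap-osd keyring the command relies on is present
  ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring

  # Wipe leftover LVM/partition data on the device, then retry the create
  sudo ceph-volume lvm zap /dev/sdu --destroy
  sudo ceph-volume lvm create --bluestore --data /dev/sdu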

Re: [ceph-users] osds with different disk sizes may killing performance (?? ?)

2018-04-13 Thread David Turner
You'll find it said time and time again on the ML... avoid disks of different sizes in the same cluster. It's a headache that sucks. It's not impossible, it's not even overly hard to pull off... but it's very easy to cause a mess and a lot of headaches. It will also make it harder to diagnose
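For background, a hedged sketch of how uneven utilization from mixed disk sizes is typically inspected and nudged; the OSD id and weight are examples only:

  # Show per-OSD size, CRUSH weight and utilization across the tree
  ceph osd df tree

  # Manually lower the CRUSH weight of an over-full OSD
  ceph osd crush reweight osd.12 1.6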

[ceph-users] CephFS MDS stuck (failed to rdlock when getattr / lookup)

2018-04-13 Thread Oliver Freyermuth
Dear Cephalopodians, in our cluster (CentOS 7.4, EC Pool, Snappy compression, Luminous 12.2.4), we often have all (~40) clients accessing one file in readonly mode, even with multiple processes per client doing that. Sometimes (I do not yet know when, nor why!) the MDS ends up in a situation
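As a hedged sketch, a stuck MDS is usually first examined through its admin socket; the daemon name below is illustrative, and the exact command set depends on the release:

  # On the MDS host: dump in-flight and blocked operations
  ceph daemon mds.mds1 dump_ops_in_flight
  ceph daemon mds.mds1 dump_blocked_ops

  # List client sessions to see which clients hold capabilities on the inode
  ceph daemon mds.mds1 session ls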

Re: [ceph-users] CephFS MDS stuck (failed to rdlock when getattr / lookup)

2018-04-13 Thread Oliver Freyermuth
Dear Cephalopodians, a small addition. As far as I know, the I/O the user is performing is based on the following directory structure:
datafolder/some_older_tarball.tar.gz
datafolder/sometarball.tar.gz
datafolder/processing_number_2/
datafolder/processing_number_3/