[ceph-users] intermittent slow requests on idle ssd ceph clusters

2018-07-16 Thread Pavel Shub
Hello folks, we've been having issues with slow requests cropping up on practically idle ceph clusters. From what I can tell, the requests are hanging waiting for subops, and the OSD on the other end receives the requests minutes later! Below, it started waiting for subops at 12:09:51 and the subop was
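
One rough way to dig into where the time goes: poll the primary OSD's admin socket for historic ops and flag anything that stalled waiting for subops. This is only a sketch; the osd id, the 30s threshold, and the exact JSON field names ("ops", "duration", "type_data", "events") are assumptions and vary between Ceph releases.

    # Sketch: flag recent ops on an OSD that spent a long time waiting for subops.
    # osd.3, the 30s threshold, and the JSON layout are illustrative assumptions.
    import json
    import subprocess

    out = subprocess.check_output(["ceph", "daemon", "osd.3", "dump_historic_ops"])
    for op in json.loads(out).get("ops", []):
        duration = op.get("duration", 0)
        type_data = op.get("type_data", {})
        events = type_data.get("events", []) if isinstance(type_data, dict) else []
        # event entries may be dicts or strings depending on release, so match on str()
        waited = any("waiting for subops" in str(e) for e in events)
        if duration > 30 and waited:
            print("%.1fs  %s" % (duration, op.get("description", "")))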

Re: [ceph-users] ceph df: Raw used vs. used vs. actual bytes in cephfs

2018-02-19 Thread Pavel Shub
Could you be running into block size (minimum allocation unit) overhead? The default bluestore block size is 4k for hdd and 64k for ssd. This is exacerbated if you have tons of small files. I tend to see this when the sum of raw used across pools in "ceph df detail" is less than the global raw bytes used. On
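
As a back-of-the-envelope illustration of that padding effect (the 64k figure is the SSD default mentioned above; the sample object sizes are made up):

    # Each object's data is rounded up to bluestore's min allocation unit,
    # so lots of tiny objects inflate raw used well beyond the pool totals.
    # 64 KiB is the SSD default; the object sizes below are illustrative.
    MIN_ALLOC = 64 * 1024

    def allocated(size, min_alloc=MIN_ALLOC):
        # bytes consumed on disk for one object of `size` logical bytes
        return ((size + min_alloc - 1) // min_alloc) * min_alloc

    sizes = [4096, 10000, 70000]
    logical = sum(sizes)
    raw = sum(allocated(s) for s in sizes)
    print("logical %d B, raw %d B, %.1fx overhead" % (logical, raw, raw / logical))
    # ~84 KB of data ends up occupying 256 KiB of raw space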

Re: [ceph-users] CEPH bluestore space consumption with small objects

2017-08-08 Thread Pavel Shub
Marcus, You may want to look at the bluestore_min_alloc_size setting, as well as the respective bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd. By default bluestore sets a 64k block size for ssds. I'm also using ceph for small objects, and I've seen my OSD usage go down from 80% to
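
A hedged example of what tuning that might look like in ceph.conf; 4096 here simply reuses the HDD default for SSD-backed OSDs, and as far as I know the value is baked in when an OSD is created, so existing OSDs have to be redeployed for it to take effect:

    [osd]
    # use a 4k allocation unit on SSD OSDs instead of the 64k default;
    # only applies to OSDs created after this is set
    bluestore_min_alloc_size_ssd = 4096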

Re: [ceph-users] bluestore object overhead

2017-04-19 Thread Pavel Shub
On Wed, Apr 19, 2017 at 4:33 PM, Gregory Farnum <gfar...@redhat.com> wrote: > On Wed, Apr 19, 2017 at 1:26 PM, Pavel Shub <pa...@citymaps.com> wrote: >> Hey All, >> >> I'm running a test of bluestore in a small VM and seeing 2x overhead >> for each object in

[ceph-users] bluestore object overhead

2017-04-19 Thread Pavel Shub
Hey All, I'm running a test of bluestore in a small VM and seeing 2x overhead for each object in cephfs. Here's the output of df detail https://gist.github.com/pavel-citymaps/868a7c4b1c43cea9ab86cdf2e79198ee This is on a VM running all daemons with a 20GB disk; all pools are of size 1. Is this the

[ceph-users] bluestore object overhead

2017-04-17 Thread Pavel Shub
Hey All, I'm running a test of bluestore in a small VM and seeing 2x overhead for each object. Here's the output of df detail:
GLOBAL:
    SIZE      AVAIL     RAW USED    %RAW USED    OBJECTS
    20378M    14469M    5909M       29.00        772k
POOLS:
    NAME ID
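
For a rough sense of scale from the GLOBAL line above (purely illustrative arithmetic; the padding/metadata attribution is an assumption, not something stated in the thread):

    # Per-object raw footprint from the GLOBAL line: 5909M raw used, 772k objects.
    raw_used = 5909 * 1024 * 1024      # bytes
    objects = 772 * 1000
    print("%.1f KB of raw space per object" % (raw_used / objects / 1024.0))
    # ~7.8 KB per object; with tiny objects, allocation-unit padding and per-object
    # metadata can easily show up as an apparent 2x overhead.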

[ceph-users] Ceph uses more raw space than expected

2017-01-18 Thread Pavel Shub
Hi all, I'm running a 6-node, 24-OSD cluster on Jewel 10.2.5 with kernel 4.8. I put about 1TB of data in the cluster, with all pools having size 3, yet about 5TB of raw disk is used as opposed to the expected 3TB. Result of ceph -s: pgmap v1057361: 2400 pgs, 3 pools, 984 GB data, 125
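
Working through the replication arithmetic from those figures (the 5TB is as reported above; the list of possible contributors at the end is a guess, not taken from the thread):

    # Replication-only expectation vs. reported raw usage, using the numbers above.
    data_gb = 984                        # "984 GB data" from ceph -s
    replicas = 3                         # all pools size 3
    expected_raw_gb = data_gb * replicas # ~2952 GB
    reported_raw_gb = 5 * 1024           # "about 5TB of raw disk is used"
    print("expected ~%d GB raw, reported ~%d GB, unexplained ~%d GB"
          % (expected_raw_gb, reported_raw_gb, reported_raw_gb - expected_raw_gb))
    # Possible contributors on a Jewel/filestore-era cluster: co-located journals,
    # filesystem overhead, and per-object padding for small objects.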