Re: [ceph-users] cephfs 1 large omap objects

2019-10-30 Thread Jake Grimmett

Re: [ceph-users] cephfs 1 large omap objects

2019-10-28 Thread Jake Grimmett
On 10/8/19 10:27 AM, Paul Emmerich wrote: > Hi, > > the default for this warning changed recently (see other similar > threads on the mailing list), it was 2 million before 14.2.3. > > I don't think the new default of 200k is a good choice, so increasing > it is a reason
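
If the warning is simply due to the lower default introduced in 14.2.3, one option (a sketch, assuming Nautilus-style centralised config; the pgid is a placeholder) is to raise the threshold back to the old value and re-scrub the PG holding the large omap object:

    # restore the pre-14.2.3 key-count threshold (2 million keys)
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
    # the warning clears once the affected PG is deep-scrubbed again
    ceph pg deep-scrub <pgid>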

Re: [ceph-users] ceph pg repair fails...?

2019-10-03 Thread Jake Grimmett
…n http://tracker.ceph.com/issues/24994 for hints on how to proceed. >> Thanks, >> Mattia >> On 10/1/19 1:08 PM, Jake Grimmett wrote: >>> Dear All, >>> I've just found two inconsistent pg that fail to repair.

[ceph-users] ceph pg repair fails...?

2019-10-01 Thread Jake Grimmett
-1 log_channel(cluster) log [ERR] : 2.36b repair 11 errors, 0 fixed Any advice on fixing this would be very welcome! Best regards, Jake -- Jake Grimmett MRC Laboratory of Molecular Biology Francis Crick Avenue, Cambridge CB2 0QH, UK.

Re: [ceph-users] iostat and dashboard freezing

2019-08-27 Thread Jake Grimmett
let me know if the balancer is your problem too... best, Jake On 8/27/19 3:57 PM, Jake Grimmett wrote: > Yes, the problem still occurs with the dashboard disabled... > > Possibly relevant, when both the dashboard and iostat plugins are > disabled, I occasionally see ceph-mgr ri
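
For anyone following this thread, a minimal way to test whether the balancer module is implicated (assuming a Nautilus mgr) is:

    # see whether the balancer is enabled and which mode it is in
    ceph balancer status
    # temporarily disable it to check whether ceph-mgr, iostat and the dashboard recover
    ceph balancer off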

Re: [ceph-users] iostat and dashboard freezing

2019-08-27 Thread Jake Grimmett
bd_support", >>         "restful", >>         "telemetry" >>     ], > > I'm on Ubuntu 18.04, so that doesn't corroborate with some possible OS > correlation. > > Thanks, > > Reed > >> On Aug 27, 2019, at 8:37 AM, Lenz Grimmer &g

Re: [ceph-users] iostat and dashboard freezing

2019-08-27 Thread Jake Grimmett
Whoops, I'm running Scientific Linux 7.6, going to upgrade to 7.7. soon... thanks Jake On 8/27/19 2:22 PM, Jake Grimmett wrote: > Hi Reed, > > That exactly matches what I'm seeing: > > when iostat is working OK, I see ~5% CPU use by ceph-mgr > and when iostat freezes, ceph

Re: [ceph-users] iostat and dashboard freezing

2019-08-27 Thread Jake Grimmett
aving similar issues with > instability in the mgr as well, curious if any similar threads to pull at. > > While the iostat command is running, is the active mgr using 100% CPU in top? > > Reed > >> On Aug 27, 2019, at 6:41 AM, Jake Grimmett wrote: >> >> Dear A

[ceph-users] iostat and dashboard freezing

2019-08-27 Thread Jake Grimmett
Dear All, We have a new Nautilus (14.2.2) cluster, with 328 OSDs spread over 40 nodes. Unfortunately "ceph iostat" spends most of its time frozen, with occasional periods of working normally for less than a minute, then it freezes again for a couple of minutes, then comes back to life, and so so

[ceph-users] lz4 compression?

2019-08-19 Thread Jake Grimmett
Dear all, I've not seen posts from people using LZ4 compression, and wondered what other people's experiences are if they have tried LZ4 on Nautilus. Since enabling LZ4 we have copied 1.9 PB into a pool without problem. However, if "ceph df detail" is accurate, we are not getting much
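
For reference, a sketch of how compression is typically enabled per pool and how the savings are checked (the pool name is a placeholder):

    # enable LZ4 on an existing pool
    ceph osd pool set <poolname> compression_algorithm lz4
    ceph osd pool set <poolname> compression_mode aggressive
    # Nautilus reports USED COMPR / UNDER COMPR per pool here
    ceph df detail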

Re: [ceph-users] Correct number of pg

2019-08-19 Thread Jake Grimmett
Wonderful, we will leave our pg at 4096 :) many thanks for the advice Paul :) have a good day, Jake On 8/19/19 11:03 AM, Paul Emmerich wrote: > On Mon, Aug 19, 2019 at 10:51 AM Jake Grimmett wrote: >> >> Dear All, >> >> We have a new Nautilus cluster, used for

[ceph-users] Correct number of pg

2019-08-19 Thread Jake Grimmett
Dear All, We have a new Nautilus cluster, used for cephfs, with pg_autoscaler in warn mode. Shortly after hitting 62% full, the autoscaler started warning that we have too few pg: * Pool ec82pool has 4096 placement groups, should have
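
A sketch of the commands involved, using the pool name from the warning:

    # show the autoscaler's view of each pool (current vs suggested pg_num)
    ceph osd pool autoscale-status
    # keep the autoscaler advisory-only for this pool
    ceph osd pool set ec82pool pg_autoscale_mode warn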

Re: [ceph-users] What's the best practice for Erasure Coding

2019-07-08 Thread Jake Grimmett
Hi David, How many nodes in your cluster? k+m has to be smaller than your node count, preferably by at least two. How important is your data? i.e. do you have a remote mirror or backup? If not, you may want m=3. We use 8+2 on one cluster, and 6+2 on another. Best, Jake On 7 July 2019
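
As an illustration of the k+m-versus-node-count point, a hedged sketch of an 8+2 profile with a host failure domain (profile and pool names, and the pg count, are placeholders):

    # needs at least k+m = 10 hosts, ideally a couple more for recovery headroom
    ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec82
    ceph osd pool create ecpool 4096 4096 erasure ec82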

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-29 Thread Jake Grimmett
Thank you for a lot of detailed and useful information :) I'm tempted to ask a related question on SSD endurance... If 60GB is the sweet spot for each DB/WAL partition, and the SSD has spare capacity, for example, I'd budgeted 266GB per DB/WAL. Would it then be better to make a 60GB "sweet

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Jake Grimmett
…https://goo.gl/PGE1Bx > On Tue, 28 May 2019 at 15:13, Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: > > Dear All, > > Quick question regarding SSD sizing for a DB/WAL... > > I understand 4% is generally recommended for a DB/WAL.

[ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Jake Grimmett
Dear All, Quick question regarding SSD sizing for a DB/WAL... I understand 4% is generally recommended for a DB/WAL. Does this 4% continue for "large" 12TB drives, or can we economise and use a smaller DB/WAL? Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD, rather than
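
For context, the DB partition is handed to ceph-volume at OSD creation time, roughly like this (a sketch; device paths are placeholders):

    # 12TB data device with its DB/WAL on a pre-made SSD/NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1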

Re: [ceph-users] mount cephfs on ceph servers

2019-03-06 Thread Jake Grimmett
Just to add "+1" on this datapoint, based on one month usage on Mimic 13.2.4 essentially "it works great for us" Prior to this, we had issues with the kernel driver on 12.2.2. This could have been due to limited RAM on the osd nodes (128GB / 45 OSD), and an older kernel. Upgrading the RAM to

Re: [ceph-users] MDS crash (Mimic 13.2.2 / 13.2.4 ) elist.h: 39: FAILED assert(!is_on_list())

2019-02-11 Thread Jake Grimmett
Hi Zheng, Sorry - I've just re-read your email and saw your instruction to restore the mds_cache_size and mds_cache_memory_limit to original values if the MDS does not crash - I have now done this... thanks again for your help, best regards, Jake On 2/11/19 12:01 PM, Jake Grimmett wrote: >

Re: [ceph-users] MDS crash (Mimic 13.2.2 / 13.2.4 ) elist.h: 39: FAILED assert(!is_on_list())

2019-02-11 Thread Jake Grimmett
configuration? again thanks for the assistance, Jake On 2/11/19 8:17 AM, Yan, Zheng wrote: > On Sat, Feb 9, 2019 at 12:36 AM Jake Grimmett wrote: >> >> Dear All, >> >> Unfortunately the MDS has crashed on our Mimic cluster... >> >> First symptoms were rsync

[ceph-users] MDS crash (Mimic 13.2.2 / 13.2.4 ) elist.h: 39: FAILED assert(!is_on_list())

2019-02-08 Thread Jake Grimmett
Dear All, Unfortunately the MDS has crashed on our Mimic cluster... First symptoms were rsync giving: "No space left on device (28)" when trying to rename or delete This prompted me to try restarting the MDS, as it reported laggy. Restarting the MDS, shows this as error in the log before the

Re: [ceph-users] EC Pool Disk Performance Toshiba vs Segate

2018-12-13 Thread Jake Grimmett
Hi Ashley, Always interesting to see hardware benchmarks :) Do I understand the following correctly? 1) your host (server provider) rates the Toshiba drives as faster 2) Ceph osd perf rates the Seagate drives as faster Could you share the benchmark output and drive model numbers? Presumably

[ceph-users] cephfs mount on osd node

2018-08-29 Thread Jake Grimmett
Hi Marc, We mount cephfs using FUSE on all 10 nodes of our cluster, and provided that we limit bluestore memory use, find it to be reliable*. bluestore_cache_size = 209715200 bluestore_cache_kv_max = 134217728 Without the above tuning, we get OOM errors. As others will confirm, the FUSE client

Re: [ceph-users] cephfs kernel client hangs

2018-08-09 Thread Jake Grimmett
RAM limit. again, many thanks Jake On 08/08/18 17:11, John Spray wrote: > On Wed, Aug 8, 2018 at 4:46 PM Jake Grimmett wrote: >> >> Hi John, >> >> With regard to memory pressure; Does the cephfs fuse client also cause a >> deadlock - or is this just the ke

Re: [ceph-users] cephfs kernel client hangs

2018-08-08 Thread Jake Grimmett
Hi John, With regard to memory pressure; Does the cephfs fuse client also cause a deadlock - or is this just the kernel client? We run the fuse client on ten OSD nodes, and use parsync (parallel rsync) to backup two beegfs systems (~1PB). Ordinarily fuse works OK, but any OSD problems can cause

[ceph-users] Optane 900P device class automatically set to SSD not NVME

2018-08-01 Thread Jake Grimmett
Dear All, Not sure if this is a bug, but when I add Intel Optane 900P drives, their device class is automatically set to SSD rather than NVME. This happens under Mimic 13.2.0 and 13.2.1 [root@ceph2 ~]# ceph-volume lvm prepare --bluestore --data /dev/nvme0n1 (SNIP see http://p.ip.fi/eopR for
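
The class can be corrected by hand if needed; a sketch (the OSD id is a placeholder):

    # the existing class must be removed before a new one can be assigned
    ceph osd crush rm-device-class osd.42
    ceph osd crush set-device-class nvme osd.42
    # confirm the CLASS column now shows nvme
    ceph osd tree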

Re: [ceph-users] HELP! --> CLUSER DOWN (was "v13.2.1 Mimic released")

2018-07-30 Thread Jake Grimmett
Hi All, there might be a problem on Scientific Linux 7.5 too: after upgrading directly from 12.2.5 to 13.2.1 [root@cephr01 ~]# ceph-detect-init Traceback (most recent call last): File "/usr/bin/ceph-detect-init", line 9, in load_entry_point('ceph-detect-init==1.0.1', 'console_scripts',

Re: [ceph-users] CephFS with erasure coding, do I need a cache-pool?

2018-07-17 Thread Jake Grimmett
Hi Oliver, We put Cephfs directly on an 8:2 EC cluster, (10 nodes, 450 OSD), but put metadata on a replicated pool using NVMe drives (1 per node, 5 nodes). We get great performance with large files, but as Linh indicated, IOPS with small files could be better. I did consider adding a replicated
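
For anyone replicating this layout, the usual steps look roughly like this (a sketch; pool, filesystem and directory names are placeholders):

    # EC data pools need overwrites enabled before CephFS can use them
    ceph osd pool set ec82pool allow_ec_overwrites true
    # attach the EC pool as an additional data pool (metadata stays on the replicated pool)
    ceph fs add_data_pool cephfs ec82pool
    # direct a directory tree at the EC pool via its layout
    setfattr -n ceph.dir.layout.pool -v ec82pool /cephfs/archive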

Re: [ceph-users] fuse vs kernel client

2018-07-09 Thread Jake Grimmett
Hi Manuel, My own experiences are that cephfs kernel client is significantly faster than fuse, however the fuse client is generally more reliable. If you need the extra speed of the kernel client on Centos, it may be worth using the ml kernel, as this gives you much more up to date cephfs

Re: [ceph-users] corrupt OSD: BlueFS.cc: 828: FAILED assert

2018-07-05 Thread Jake Grimmett
ated data it's much simpler just to start these OSDs over. > > > Thanks, > > Igor > > > On 7/5/2018 3:58 PM, Jake Grimmett wrote: >> Dear All, >> >> I have a Mimic (13.2.0) cluster, which, due to a bad disk controller, >> corrupted three Bluestore O

[ceph-users] corrupt OSD: BlueFS.cc: 828: FAILED assert

2018-07-05 Thread Jake Grimmett
Dear All, I have a Mimic (13.2.0) cluster, which, due to a bad disk controller, corrupted three Bluestore OSD's on one node. Unfortunately these three OSD's crash when they try to start. systemctl start ceph-osd@193 (snip) /BlueFS.cc: 828: FAILED assert(r != q->second->file_map.end()) Full log

Re: [ceph-users] "ceph pg scrub" does not start

2018-07-04 Thread Jake Grimmett
t_last_update": "0'0", "scrubber.deep": false, "scrubber.waiting_on_whom": [] Not sure where to go from here :( Jake On 04/07/18 01:14, Sean Redmond wrote: > do a deep-scrub instead of just a scrub > > On T

Re: [ceph-users] "ceph pg scrub" does not start

2018-07-03 Thread Jake Grimmett
Dear All, Sorry to bump the thread, but I still can't manually repair inconsistent pgs on our Mimic cluster (13.2.0, upgraded from 12.2.5) There are many similarities to an unresolved bug: http://tracker.ceph.com/issues/15781 To give more examples of the problem: The following commands appear

Re: [ceph-users] "ceph pg scrub" does not start

2018-06-21 Thread Jake Grimmett
On 21/06/18 10:14, Wido den Hollander wrote: Hi Wido, >> Note the date stamps, the scrub command appears to be ignored >> >> Any ideas on why this is happening, and what we can do to fix the error? > > Are any of the OSDs involved with that PG currently doing recovery? If > so, they will ignore

[ceph-users] "ceph pg scrub" does not start

2018-06-21 Thread Jake Grimmett
Dear All, A bad disk controller appears to have damaged our cluster... # ceph health HEALTH_ERR 10 scrub errors; Possible data damage: 10 pgs inconsistent probing to find bad pg... # ceph health detail HEALTH_ERR 10 scrub errors; Possible data damage: 10 pgs inconsistent OSD_SCRUB_ERRORS 10
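
For reference, the usual inspection sequence for inconsistent PGs (pool name and pgid are placeholders) is:

    # list the inconsistent PGs and the OSDs involved
    ceph health detail
    rados list-inconsistent-pg <poolname>
    # show which object/shard failed and why (checksum, omap, size mismatch...)
    rados list-inconsistent-obj <pgid> --format=json-pretty
    # re-verify, then attempt repair
    ceph pg deep-scrub <pgid>
    ceph pg repair <pgid>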

Re: [ceph-users] Sudden increase in "objects misplaced"

2018-06-01 Thread Jake Grimmett
…_wait 1 active+clean+snaptrim io: client: 101 MB/s wr, 0 op/s rd, 28 recovery: 2806 MB/s, 975 objects/s again, many thanks, Jake On 31/05/18 21:52, Gregory Farnum wrote: > On Thu, May 31, 2018 at 5:07 AM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrot

[ceph-users] Sudden increase in "objects misplaced"

2018-05-31 Thread Jake Grimmett
Dear All, I recently upgraded our Ceph cluster from 12.2.4 to 12.2.5 & simultaneously upgraded the OS from Scientific Linux 7.4 to 7.5 After reboot, 0.7% objects were misplaced and many pgs degraded. the cluster had no client connections, so I speeded up recovery with: ceph tell 'osd.*'
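
The exact flags are cut off above; a typical form of that recovery-throttle tweak (values are illustrative, not necessarily the ones used here) is:

    ceph tell 'osd.*' injectargs '--osd-max-backfills 8 --osd-recovery-max-active 8'
    # and back to the defaults once recovery has finished
    ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3'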

Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-24 Thread Jake Grimmett
ake, > > On Thu, 24 May 2018 13:17:16 +0100, Jake Grimmett wrote: > >> Hi Daniel, David, >> >> Many thanks for both of your advice. >> >> Sorry not to reply to the list, but I'm subscribed to the digest and my >> mail client will not reply to individual threads - I

[ceph-users] samba gateway experiences with cephfs ?

2018-05-21 Thread Jake Grimmett
Dear All, Excited to see snapshots finally becoming a stable feature in cephfs :) Unfortunately we have a large number (~200) of Windows and Macs clients which need CIFS/SMB access to cephfs. None-the-less, snapshots have prompted us to start testing ceph to see if we can use it as a scale-out
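
For anyone in the same position, a minimal sketch of an smb.conf share stanza using Samba's vfs_ceph module (the share name and cephx user are assumptions):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no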

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-02-05 Thread Jake Grimmett
Dear Nick & Wido, Many thanks for your helpful advice; our cluster has returned to HEALTH_OK One caveat is that a small number of pgs remained at "activating". By increasing mon_max_pg_per_osd from 500 to 1000 these few osds activated, allowing the cluster to rebalance fully. i.e. this was
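
For the record, the workaround described above looks roughly like this on a Luminous cluster (value as per the message); if injectargs does not take effect, set the option in ceph.conf [global] and restart the mons:

    # raise the per-OSD PG limit so the stuck PGs can activate
    ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd 1000'
    ceph tell 'osd.*' injectargs '--mon_max_pg_per_osd 1000'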

Re: [ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Jake Grimmett
…mber of PG's before all the data has re-balanced, you have probably exceeded the hard PG per OSD limit. See this thread https://www.spinics.net/lists/ceph-users/msg41231.html Nick

[ceph-users] pgs down after adding 260 OSDs & increasing PGs

2018-01-29 Thread Jake Grimmett
Dear All, Our ceph luminous (12.2.2) cluster has just broken, due to either adding 260 OSDs drives in one go, or to increasing the PG number from 1024 to 4096 in one go, or a combination of both... Prior to the upgrade, the cluster consisted of 10 dual v4 Xeon nodes running SL7.4, each node

Re: [ceph-users] clients failing to advance oldest client/flush tid

2017-10-09 Thread Jake Grimmett
? any other tricks that you can suggest are most welcome... again, many thanks for your time, Jake On 09/10/17 16:37, John Spray wrote: > On Mon, Oct 9, 2017 at 9:21 AM, Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: >> Dear All, >> >> We have a new cluster based on v12.2

[ceph-users] clients failing to advance oldest client/flush tid

2017-10-09 Thread Jake Grimmett

Re: [ceph-users] Writing to EC Pool in degraded state?

2017-07-12 Thread Jake Grimmett
…lently 9 > and min_size is 7. > I have a 3 node cluster with 2+1 and I can restart 1 node at a time with > host failure domain. > On Wed, Jul 12, 2017, 6:34 AM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: > > D

[ceph-users] Writing to EC Pool in degraded state?

2017-07-12 Thread Jake Grimmett
Dear All, Quick question; is it possible to write to a degraded EC pool? i.e. is there an equivalent to this setting for a replicated pool.. osd pool default size = 3 osd pool default min size = 2 My reason for asking, is that it would be nice if we could build a EC 7+2 cluster, and actively
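
For what it's worth, EC pools do have a min_size; a sketch of checking and adjusting it (the pool name is a placeholder, and going below k+1 is generally discouraged):

    ceph osd pool get <ecpool> min_size
    # for a 7+2 profile, k+1 = 8 still allows writes with one failure domain down
    ceph osd pool set <ecpool> min_size 8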

[ceph-users] cephfs df with EC pool

2017-06-28 Thread Jake Grimmett
Dear All, Sorry if this has been covered before, but is it possible to configure cephfs to report free space based on what is available in the main storage tier? My "df" shows 76%, which gives a false sense of security when the EC tier is 93% full... i.e. # df -h /ceph Filesystem Size

Re: [ceph-users] ceph pg repair : Error EACCES: access denied

2017-06-16 Thread Jake Grimmett
Hi Greg, adding caps mgr = "allow *" fixed the problem :) Many thanks for your help, Jake On 14/06/17 18:29, Gregory Farnum wrote: > On Wed, Jun 14, 2017 at 4:08 AM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: >
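
For anyone hitting the same EACCES, the fix above amounts to adding an mgr capability to the key used for the repair, e.g. for the admin key (a sketch):

    ceph auth get client.admin
    # scrub/repair commands are routed through the mgr from Luminous on, so the key needs mgr caps
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'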

Re: [ceph-users] ceph pg repair : Error EACCES: access denied

2017-06-14 Thread Jake Grimmett
…, Jake On 13/06/17 18:02, Gregory Farnum wrote: > What are the cephx permissions of the key you are using to issue repair > commands? > On Tue, Jun 13, 2017 at 8:31 AM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: > > Dear All,

[ceph-users] ceph pg repair : Error EACCES: access denied

2017-06-13 Thread Jake Grimmett
Dear All, I'm testing Luminous and have a problem repairing inconsistent pgs. This occurs with v12.0.2 and is still present with v12.0.3-1507-g52f0deb # ceph health HEALTH_ERR noout flag(s) set; 2 pgs inconsistent; 2 scrub errors # ceph health detail HEALTH_ERR noout flag(s) set; 2 pgs

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-06-08 Thread Jake Grimmett
*** thanks again, Jake On 08/06/17 12:08, nokia ceph wrote: > Hello Mark, > > Raised tracker for the issue -- http://tracker.ceph.com/issues/20222 > > Jake can you share the restart_OSD_and_log-this.sh script > > Thanks > Jayaram > > On Wed, Jun 7, 2017

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-06-07 Thread Jake Grimmett
, Jake On 06/06/17 15:52, Jake Grimmett wrote: > Hi Mark, > > OK, I'll upgrade to the current master and retest... > > best, > > Jake > > On 06/06/17 15:46, Mark Nelson wrote: >> Hi Jake, >> >> I just happened to notice this was on 12.0.3. W

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-06-06 Thread Jake Grimmett
h the python bindings and the ceph debug symbols for it to >> work. >> >> This might tell us over time if the tp_osd_tp processes are just sitting >> on pg::locks. >> >> Mark >> >> On 06/06/2017 05:34 AM, Jake Grimmett wrote: >>> Hi Mark, >>>

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-06-06 Thread Jake Grimmett
Hi Mark, Thanks again for looking into this problem. I ran the cluster overnight, with a script checking for dead OSDs every second, and restarting them. 40 OSD failures occurred in 12 hours, some OSDs failed multiple times, (there are 50 OSDs in the EC tier). Unfortunately, the output of
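
The restart script itself wasn't posted; a hypothetical sketch of such a watchdog (assuming systemd-managed OSDs and a made-up log path) might look like:

    #!/bin/bash
    # hypothetical OSD watchdog: restart any local ceph-osd unit that has failed,
    # logging each restart so the failures can be correlated with OSD logs later
    while true; do
        for osd_dir in /var/lib/ceph/osd/ceph-*; do
            unit="ceph-osd@${osd_dir##*-}.service"
            if systemctl is-failed --quiet "$unit"; then
                echo "$(date -Is) restarting $unit" >> /var/log/osd-restarts.log
                systemctl restart "$unit"
            fi
        done
        sleep 1
    done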

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-06-01 Thread Jake Grimmett
Hi Mark, Firstly, many thanks for looking into this. Jayaram appears to have a similar config to me; v12.0.3, EC 4+1 bluestore, Scientific Linux 7.3, kernel 3.10.0-514.21.1.el7.x86_64. I have 5 EC nodes (10 x 8TB Ironwolf each) plus 2 nodes with replicated NVMe (Cephfs hot tier). I now think the Highpoint r750

[ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-27 Thread Jake Grimmett
Dear All, I wonder if anyone can give advice regarding bluestore OSD's going down on Luminous 12.0.3 when the cluster is under moderate (200MB/s) load. OSD's seem to go down randomly across the 5 OSD servers. When one OSD is down, load decreases, so no further OSD's drop, until I restart the

Re: [ceph-users] cephfs file size limit 0f 1.1TB?

2017-05-25 Thread Jake Grimmett
cephfs copes with 400 million files... thanks again for your help, Jake On 24/05/17 20:30, John Spray wrote: > On Wed, May 24, 2017 at 8:17 PM, Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote: >> Hi John, >> That's great, thank you so much for the advice. >> Some of our

Re: [ceph-users] cephfs file size limit 0f 1.1TB?

2017-05-24 Thread Jake Grimmett
…or setting their size, it doesn't affect how anything is stored. > John >> On Wed, May 24, 2017 at 1:36 PM, John Spray <jsp...@redhat.com> wrote: >>> On Wed, May 24, 2017 at 7:19 PM, Jake Grimmett <j...@mrc-lmb.cam.ac.uk> >>>

[ceph-users] cephfs file size limit 0f 1.1TB?

2017-05-24 Thread Jake Grimmett
Dear All, I've been testing out cephfs, and bumped into what appears to be an upper file size limit of ~1.1TB e.g.: [root@cephfs1 ~]# time rsync --progress -av /ssd/isilon_melis.tar /ceph/isilon_melis.tar sending incremental file list isilon_melis.tar 1099341824000 54% 237.51MB/s 1:02:05
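
For the archive: a ~1.1TB ceiling matches the CephFS max_file_size default of 1 TiB, which can be raised per filesystem (the fs name below is a placeholder):

    # default is 1099511627776 bytes (1 TiB); raise it to e.g. 16 TiB
    ceph fs get <fsname> | grep max_file_size
    ceph fs set <fsname> max_file_size 17592186044416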

[ceph-users] OSD move after reboot

2015-04-23 Thread Jake Grimmett
Dear All, I have multiple disk types (15k 7k) on each ceph node, which I assign to different pools, but have a problem as whenever I reboot a node, the OSD's move in the CRUSH map. i.e. on host ceph4, before a reboot I have this osd tree -10 7.68980 root 15k-disk (snip) -9 2.19995
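
A common way of keeping hand-placed OSDs in their CRUSH buckets across reboots is to stop them re-registering their location at startup; a sketch of the ceph.conf fragment for each OSD node:

    [osd]
        # do not move this OSD back under the default host bucket when it starts
        osd crush update on start = false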

Re: [ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-26 Thread Jake Grimmett
On 03/25/2015 05:44 PM, Gregory Farnum wrote: On Wed, Mar 25, 2015 at 10:36 AM, Jake Grimmett j...@mrc-lmb.cam.ac.uk wrote: Dear All, Please forgive this post if it's naive, I'm trying to familiarise myself with cephfs! I'm using Scientific Linux 6.6. with Ceph 0.87.1 My first steps

[ceph-users] how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

2015-03-25 Thread Jake Grimmett
the original cephfs config before attempting to use an erasure cold tier? Or can I just redefine the cephfs? many thanks, Jake Grimmett