Re: [ceph-users] Mimic upgrade failure

2018-09-09 Thread Kevin Hrpcek

Update for the list archive.

I went ahead and finished the mimic upgrade with the OSDs in a 
fluctuating state of up and down. The cluster did start to normalize 
much more easily after everything was on mimic, since the random mass OSD 
heartbeat failures stopped and the constant mon election problem went 
away. I'm still battling with the cluster reacting poorly to host 
reboots or small map changes, but I feel like my current pg:osd ratio 
may be a factor in that, since we are at 2x our normal pg count while 
migrating data to new EC pools.


I'm not sure of the root cause, but it seems like the mix of luminous and 
mimic did not play well together for some reason. Maybe it has to do 
with the scale of my cluster, 871 OSDs, or maybe I've missed some 
tuning as my cluster has scaled to this size.


Kevin


On 09/09/2018 12:49 PM, Kevin Hrpcek wrote:
Nothing too crazy for non-default settings. Some of those osd settings 
were in place while I was testing recovery speeds and need to be 
brought back closer to defaults. I was setting nodown before, but it 
seems to mask the problem. While it's good to stop the osdmap changes, 
OSDs would come up and get marked up, but at some point they'd go down 
again (with the process still running) while staying up in the map. Then 
when I'd unset nodown, the cluster would immediately mark 250+ OSDs down 
again and I'd be back where I started.


This morning I went ahead and finished the osd upgrades to mimic to 
remove that variable. I've looked for networking problems but haven't 
found any; 2 of the mons are on the same switch. I've also tried 
combinations of shutting down a mon to see if a single one was the 
problem, but elections keep happening no matter which mix of them is up. 
Part of it feels like a networking problem, but I haven't been able to 
find a culprit yet, as everything was working normally before starting 
the upgrade. Other than the constant mon elections, yesterday I had 
the cluster 95% healthy 3 or 4 times, but it doesn't last long since 
at some point the OSDs start trying to fail each other through their 
heartbeats.
2018-09-09 17:37:29.079 7eff774f5700  1 mon.sephmon1@0(leader).osd 
e991282 prepare_failure osd.39 10.1.9.2:6802/168438 from osd.49 
10.1.9.3:6884/317908 is reporting failure:1
2018-09-09 17:37:29.079 7eff774f5700  0 log_channel(cluster) log [DBG] 
: osd.39 10.1.9.2:6802/168438 reported failed by osd.49 
10.1.9.3:6884/317908
2018-09-09 17:37:29.083 7eff774f5700  1 mon.sephmon1@0(leader).osd 
e991282 prepare_failure osd.93 10.1.9.9:6853/287469 from osd.372 
10.1.9.13:6801/275806 is reporting failure:1


I'm working on getting things mostly good again with everything on 
mimic and will see if it behaves better.


Thanks for your input on this David.


[global]
mon_initial_members = sephmon1, sephmon2, sephmon3
mon_host = 10.1.9.201,10.1.9.202,10.1.9.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.1.0.0/16
osd backfill full ratio = 0.92
osd failsafe nearfull ratio = 0.90
osd max object size = 21474836480
mon max pg per osd = 350

[mon]
mon warn on legacy crush tunables = false
mon pg warn max per osd = 300
mon osd down out subtree limit = host
mon osd nearfull ratio = 0.90
mon osd full ratio = 0.97
mon health preluminous compat warning = false
osd heartbeat grace = 60
rocksdb cache size = 1342177280

[mds]
mds log max segments = 100
mds log max expiring = 40
mds bal fragment size max = 20
mds cache memory limit = 4294967296

[osd]
osd mkfs options xfs = -i size=2048 -d su=512k,sw=1
osd recovery delay start = 30
osd recovery max active = 5
osd max backfills = 3
osd recovery threads = 2
osd crush initial weight = 0
osd heartbeat interval = 30
osd heartbeat grace = 60


On 09/08/2018 11:24 PM, David Turner wrote:
What osd/mon/etc config settings do you have that are not default? It 
might be worth utilizing nodown to stop osds from marking each other 
down and finish the upgrade to be able to set the minimum osd version 
to mimic. Stop the osds in a node, manually mark them down, start 
them back up in mimic. Depending on how bad things are, setting pause 
on the cluster to just finish the upgrade faster might not be a bad 
idea either.
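
For reference, a rough sketch of that sequence (OSD IDs and the package command are illustrative, adjust for your distro and layout):

ceph osd set nodown          # stop the flapping from churning the osdmap
ceph osd set pause           # optional: pause client IO to finish the upgrade faster

# per storage node: stop its OSDs, mark them down, upgrade packages, restart on mimic
systemctl stop ceph-osd.target
ceph osd down 10 11 12       # the IDs living on that node
yum update ceph              # or the apt equivalent
systemctl start ceph-osd.target

# once every OSD runs mimic
ceph osd require-osd-release mimic
ceph osd unset pause
ceph osd unset nodown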


This should be a simple question: have you confirmed that there are 
no networking problems between the MONs while the elections are 
happening?
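
For what it's worth, a quick sanity check of the mon links could look like this (mon addresses taken from the ceph.conf above; iperf3 is just one option):

# from sephmon1 against each peer mon
ping -c 10 10.1.9.202
ping -c 10 10.1.9.203
nc -zv 10.1.9.202 6789       # confirm the mon port is reachable

# throughput/retransmits: run "iperf3 -s" on sephmon2, then from sephmon1
iperf3 -c 10.1.9.202 -t 30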


On Sat, Sep 8, 2018, 7:52 PM Kevin Hrpcek wrote:


Hey Sage,

I've posted the file with my email address for the user. It is
with debug_mon 20/20, debug_paxos 20/20, and debug ms 1/5. The
mons are calling for elections about every minute so I let this
run for a few elections and saw this node become the leader a
couple times. Debug logs start around 23:27:30. I had managed to
get about 850/857 osds up, but it seems that within the last 30
min it has all gone bad again due to the OSDs reporting each
ot

Re: [ceph-users] Mixing EC and Replicated pools on HDDs in Ceph RGW Luminous

2018-09-09 Thread David Turner
You can indeed have multiple types of pools on the same disks. Go ahead and
put the non-ec pool with a replicated ruleset on the HDDs with the EC data
pool. I believe you're correct that the non-ec pool gets cleared out when the
upload is complete and the file is flushed to EC.
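
If it helps, a minimal sketch of that setup, assuming device classes are in place and the usual default-zone pool name (adjust the pool name to your zone):

# replicated rule restricted to HDD OSDs
ceph osd crush rule create-replicated replicated-hdd default host hdd

# point the multipart pool at it
ceph osd pool set default.rgw.buckets.non-ec crush_rule replicated-hdd
ceph osd pool set default.rgw.buckets.non-ec size 3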

On Sun, Sep 9, 2018, 9:49 PM Nhat Ngo  wrote:

> Hi all,
>
>
> I am setting up RadosGW and a Ceph cluster on Luminous. I am using EC for the
> `buckets.data` pool on HDD osds; is it okay to put the `buckets.non-ec` pool with a
> replicated ruleset for multi-part uploads on the same HDD osds? Will there be issues
> with mixing EC and replicated pools on the same disk types?
>
>
> We have a use case where users will upload large files up to 1TB each, and
> we are unable to fit this pool onto our metadata NVMe SSD osds. My assumption on
> the `buckets.non-ec` pool is that the objects in this pool will get cleared
> once the whole file is uploaded and transferred over to the EC pool. Is my
> understanding correct?
>
>
> Best regards,
>
> *Nhat Ngo* | Ops Engineer
>
> University of Melbourne, 3010, VIC
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] kRBD write performance for high IO use cases

2018-09-09 Thread Tyler Bishop
Running 3.10, but I don't think I can change the queue depth on this older kernel.

I see that config option on my 4.9 test machine.  I wonder if that will
help a lot!  My cluster shows some wait, but it seems entirely limited by the RBD
client... the OSDs are not busy and I don't really have any iowait at all.
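
For anyone following along, re-mapping with an explicit queue depth on a new enough kernel looks roughly like this (pool/image names are made up; see Ilya's link below for the exact kernel support):

rbd unmap /dev/rbd0
rbd map -o queue_depth=128 rbd/esdata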

*Tyler Bishop*
EST 2007


O: 513-299-7108 x1000
M: 513-646-5809
http://BeyondHosting.net 


This email is intended only for the recipient(s) above and/or
otherwise authorized personnel. The information contained herein and
attached is confidential and the property of Beyond Hosting. Any
unauthorized copying, forwarding, printing, and/or disclosing
any information related to this email is prohibited. If you received this
message in error, please contact the sender and destroy all copies of this
email and any attachment(s).


On Sat, Sep 8, 2018 at 4:56 AM Ilya Dryomov  wrote:

> On Sat, Sep 8, 2018 at 1:52 AM Tyler Bishop
>  wrote:
> >
> > I have a fairly large cluster running ceph bluestore with extremely fast
> SAS ssd for the metadata.  Doing FIO benchmarks I am getting 200k-300k
> random write iops but during sustained workloads of ElasticSearch my
> clients seem to hit a wall of around 1100 IO/s per RBD device.  I've tried
> 1 RBD and 4 RBD devices and I still only get 1100 IO per device, so 4
> devices gets me around 4k.
> >
> > Is there some sort of setting that limits each RBD device's performance?
> I've tried playing with nr_requests but that doesn't seem to change it at
> all... I'm just looking for another 20-30% performance on random write
> io... I even thought about doing raid 0 across 4-8 rbd devices just to get
> the io performance.
>
> What is the I/O profile of that workload?  How did you arrive at the
> 20-30% number?
>
> Which kernel are you running?  Increasing nr_requests doesn't actually
> increase the queue depth, at least on anything moderately recent.  You
> need to map with queue_depth=X for that, see [1] for details.
>
> [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b55841807fb864eccca0167650a65722fd7cd553
>
> Thanks,
>
> Ilya
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Mixing EC and Replicated pools on HDDs in Ceph RGW Luminous

2018-09-09 Thread Nhat Ngo
Hi all,


I am setting up RadosGW and a Ceph cluster on Luminous. I am using EC for the 
`buckets.data` pool on HDD OSDs; is it okay to put the `buckets.non-ec` pool with a 
replicated ruleset for multi-part uploads on the same HDD OSDs? Will there be 
issues with mixing EC and replicated pools on the same disk types?


We have a use case where users will upload large files up to 1TB each, and we are 
unable to fit this pool onto our metadata NVMe SSD OSDs. My assumption on the 
`buckets.non-ec` pool is that the objects in this pool will get cleared once 
the whole file is uploaded and transferred over to the EC pool. Is my 
understanding correct?


Best regards,

Nhat Ngo | Ops Engineer

University of Melbourne, 3010, VIC
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mimic upgrade failure

2018-09-09 Thread Kevin Hrpcek
Nothing too crazy for non-default settings. Some of those osd settings 
were in place while I was testing recovery speeds and need to be brought 
back closer to defaults. I was setting nodown before, but it seems to 
mask the problem. While it's good to stop the osdmap changes, OSDs would 
come up and get marked up, but at some point they'd go down again (with 
the process still running) while staying up in the map. Then when I'd unset 
nodown, the cluster would immediately mark 250+ OSDs down again and I'd be 
back where I started.


This morning I went ahead and finished the osd upgrades to mimic to 
remove that variable. I've looked for networking problems but haven't 
found any; 2 of the mons are on the same switch. I've also tried 
combinations of shutting down a mon to see if a single one was the 
problem, but elections keep happening no matter which mix of them is up. 
Part of it feels like a networking problem, but I haven't been able to 
find a culprit yet, as everything was working normally before starting 
the upgrade. Other than the constant mon elections, yesterday I had the 
cluster 95% healthy 3 or 4 times, but it doesn't last long since at some 
point the OSDs start trying to fail each other through their heartbeats.
2018-09-09 17:37:29.079 7eff774f5700  1 mon.sephmon1@0(leader).osd 
e991282 prepare_failure osd.39 10.1.9.2:6802/168438 from osd.49 
10.1.9.3:6884/317908 is reporting failure:1
2018-09-09 17:37:29.079 7eff774f5700  0 log_channel(cluster) log [DBG] : 
osd.39 10.1.9.2:6802/168438 reported failed by osd.49 10.1.9.3:6884/317908
2018-09-09 17:37:29.083 7eff774f5700  1 mon.sephmon1@0(leader).osd 
e991282 prepare_failure osd.93 10.1.9.9:6853/287469 from osd.372 
10.1.9.13:6801/275806 is reporting failure:1


I'm working on getting things mostly good again with everything on mimic 
and will see if it behaves better.


Thanks for your input on this David.


[global]
mon_initial_members = sephmon1, sephmon2, sephmon3
mon_host = 10.1.9.201,10.1.9.202,10.1.9.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 10.1.0.0/16
osd backfill full ratio = 0.92
osd failsafe nearfull ratio = 0.90
osd max object size = 21474836480
mon max pg per osd = 350

[mon]
mon warn on legacy crush tunables = false
mon pg warn max per osd = 300
mon osd down out subtree limit = host
mon osd nearfull ratio = 0.90
mon osd full ratio = 0.97
mon health preluminous compat warning = false
osd heartbeat grace = 60
rocksdb cache size = 1342177280

[mds]
mds log max segments = 100
mds log max expiring = 40
mds bal fragment size max = 20
mds cache memory limit = 4294967296

[osd]
osd mkfs options xfs = -i size=2048 -d su=512k,sw=1
osd recovery delay start = 30
osd recovery max active = 5
osd max backfills = 3
osd recovery threads = 2
osd crush initial weight = 0
osd heartbeat interval = 30
osd heartbeat grace = 60


On 09/08/2018 11:24 PM, David Turner wrote:
What osd/mon/etc config settings do you have that are not default? It 
might be worth utilizing nodown to stop osds from marking each other 
down and finish the upgrade to be able to set the minimum osd version 
to mimic. Stop the osds in a node, manually mark them down, start them 
back up in mimic. Depending on how bad things are, setting pause on 
the cluster to just finish the upgrade faster might not be a bad idea 
either.


This should be a simple question: have you confirmed that there are no 
networking problems between the MONs while the elections are happening?


On Sat, Sep 8, 2018, 7:52 PM Kevin Hrpcek wrote:


Hey Sage,

I've posted the file with my email address for the user. It is
with debug_mon 20/20, debug_paxos 20/20, and debug ms 1/5. The
mons are calling for elections about every minute so I let this
run for a few elections and saw this node become the leader a
couple times. Debug logs start around 23:27:30. I had managed to
get about 850/857 osds up, but it seems that within the last 30
min it has all gone bad again due to the OSDs reporting each other
as failed. We relaxed the osd_heartbeat_interval to 30 and
osd_heartbeat_grace to 60 in an attempt to slow down how quickly
OSDs are trying to fail each other. I'll put in the
rocksdb_cache_size setting.
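
(For anyone curious, one way to push those values out at runtime is injectargs; the sketch below is illustrative, and the same values are also in ceph.conf under [osd] so they survive restarts.)

ceph tell osd.* injectargs '--osd_heartbeat_interval 30 --osd_heartbeat_grace 60'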

Thanks for taking a look.

Kevin

On 09/08/2018 06:04 PM, Sage Weil wrote:

Hi Kevin,

I can't think of any major luminous->mimic changes off the top of my head
that would impact CPU usage, but it's always possible there is something
subtle.  Can you ceph-post-file the full log from one of your mons
(preferably the leader)?

You might try adjusting the rocksdb cache size; try setting

  rocksdb_cache_size = 1342177280   # 10x the default, ~1.3 GB

on the mons and restarting?

Thanks!
sage

On Sat, 8 Sep 2018, Kevin Hrpcek wrote:


Hello,

I've had a Luminous -> Mimic upgr

Re: [ceph-users] Ceph and NVMe

2018-09-09 Thread Stefan Priebe - Profihost AG
It sounds like SATA SSD is still the way to go. While 4 OSDs on top of one NVMe 
sounds good at first, I think you get more out of 2 SATA SSDs, where the 
chassis and SSDs are cheaper.
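
(For completeness: if anyone does go the multi-OSD-per-NVMe route, recent ceph-volume releases can split a device for you, e.g. the sketch below with an example device path; whether it is worth the cost is the question above.)

ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1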

Greets,
Stefan

> On 06.09.2018 at 21:09, Steven Vacaroaia wrote:
> 
> Hi,
> Just to add to this question, is anyone using the Intel Optane DC P4800X on a DELL
> R630... or any other server?
> Any gotchas / feedback / knowledge sharing would be greatly appreciated.
>  
> Steven
> 
>> On Thu, 6 Sep 2018 at 14:59, Stefan Priebe - Profihost AG 
>>  wrote:
>> Hello list,
>> 
>> has anybody tested current NVMe performance with luminous and bluestore?
>> Is this something which makes sense or just a waste of money?
>> 
>> Greets,
>> Stefan
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-09 Thread Jarek
On Sun, 9 Sep 2018 11:20:01 +0200
Alex Lupsa  wrote:

> Hi,
> Any ideas about the below ?

Don't use consumer-grade SSDs for Ceph cache/block.db/bcache.
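
One quick way to check whether an SSD can handle Ceph journal/DB traffic is a single-job O_DSYNC write test, something like the sketch below (the device path is an example, and running it against a raw device is destructive):

fio --name=ssd-sync-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
# consumer drives often collapse to a few hundred IOPS here; decent DC drives stay far higher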



> Thanks,
> Alex
> 
> --
> Hi,
> I have a really small homelab 3-node ceph cluster on consumer hw -
> thanks to Proxmox for making it easy to deploy it.
> The problem I am having is very very bad transfer rates, ie 20mb/sec
> for both read and write on 17 OSDs with cache layer.
> However, during recovery the speed hovers between 250 and 700mb/sec, which
> proves that the cluster IS capable of reaching way above those
> 20mb/sec in KVM.
> 
> Reading the documentation, I see that during recovery "nearly all OSDs
> participate in resilvering a new drive" - kind of a torrent of data
> incoming from multiple sources at once, causing a huge deluge.
> 
> However I believe this does not happen during the normal transfers,
> so my question is simply - is there any hidden tunables I can enable
> for this with the implied cost of network and heavy usage of disks ?
> Will there be in the future if not ?
> 
> I have tried disabling authx, upgrading the network to 10gbit, have
> bigger journals, more bluestore cache and disabled the debugging logs
> as it has been advised on the list. The only thing that did help a
> bit was cache tiering, but this only helps somewhat as the ops do not
> get promoted unless I am very adamant about keeping programs in KVM
> open for very long times so that the writes/reads are promoted.
> To add insult to injury, once the cache gets full the whole 3-node
> cluster grinds to a halt until I start forcefully evicting data
> from the cache... manually!
> So I am therefore guessing a really bad misconfiguration from my side.
> 
> Next step would be removing the cache layer and using those SSDs as
> bcache instead as it seems to yield 5x the results, even though it
> does add yet another layer of complexity and RAM requirements.
> 
> Full config details:https://pastebin.com/xUM7VF9k
> 
> rados bench -p ceph_pool 30 write
> Total time run: 30.983343
> Total writes made:  762
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 98.3754
> Stddev Bandwidth:   20.9586
> Max bandwidth (MB/sec): 132
> Min bandwidth (MB/sec): 16
> Average IOPS:   24
> Stddev IOPS:5
> Max IOPS:   33
> Min IOPS:   4
> Average Latency(s): 0.645017
> Stddev Latency(s):  0.326411
> Max latency(s): 2.08067
> Min latency(s): 0.0355789
> Cleaning up (deleting benchmark objects)
> Removed 762 objects
> Clean up completed and total clean up time :3.925631
> 
> Thanks,
> Alex



-- 
Regards,
Jarosław Mociak - Nettelekom GK Sp. z o.o.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-09 Thread Ronny Aasen

Ceph is a distributed system; it scales by concurrent access to nodes.

Generally a single client will access a single OSD at a time; in other words, 
the max possible single-thread read is the read speed of the drive, and the max 
possible write is a single drive's write speed / (replication size - 1). 
But when you have many VMs accessing the same cluster, the load is 
spread all over (just like what you see when recovery is running).

A single spinning disk should be able to do 100-150 MB/s depending on 
make and model, even with the overhead of Ceph and networking, so I still 
think 20 MB/s is a bit on the low side, depending on how you benchmark.


I would start by going through this benchmarking guide and seeing if you find 
some issues:

https://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance
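
The usual first steps from that guide, roughly (pool and image names below are examples, and the fio run needs fio built with the rbd engine):

# raw cluster throughput
rados bench -p ceph_pool 30 write --no-cleanup
rados bench -p ceph_pool 30 seq
rados -p ceph_pool cleanup

# client-side latency/iops against one RBD image
fio --name=rbdtest --ioengine=rbd --clientname=admin --pool=ceph_pool --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based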


In order to get more single-thread performance out of Ceph you must get 
faster individual parts (NVRAM disks, fast RAM and processors, fast 
network, etc.), or you can cheat by spreading the load over more disks: 
e.g. you can do RBD fancy striping, attach multiple disks with 
individual controllers in the VM, or use caching and/or readahead.
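
Fancy striping is set at image creation time; a sketch, with sizes and names picked just as an example:

# stripe writes across 8 objects in 64K units instead of filling one 4M object at a time
rbd create ceph_pool/striped-vm-disk --size 100G --stripe-unit 65536 --stripe-count 8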



When it comes to cache tiering, I would remove that; it does not get the 
love it needs, and Red Hat has even stopped supporting it in deployments.

But you can use dm-cache or bcache on OSDs 
and/or rbd cache on KVM clients.
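
rbd cache on the KVM side is plain client-side config, e.g. in ceph.conf on the hypervisors; a sketch with example sizes:

[client]
rbd cache = true
rbd cache size = 67108864                  # 64 MB per client, example value
rbd cache max dirty = 50331648
rbd cache writethrough until flush = true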


good luck
Ronny Aasen


On 09.09.2018 11:20, Alex Lupsa wrote:

Hi,
Any ideas about the below ?

Thanks,
Alex

--
Hi,
I have a really small homelab 3-node ceph cluster on consumer hw - thanks
to Proxmox for making it easy to deploy it.
The problem I am having is very very bad transfer rates, ie 20mb/sec for
both read and write on 17 OSDs with cache layer.
However, during recovery the speed hovers between 250 and 700mb/sec, which
proves that the cluster IS capable of reaching way above those 20mb/sec in
KVM.

Reading the documentation, I see that during recovery "nearly all OSDs
participate in resilvering a new drive" - kind of a torrent of data
incoming from multiple sources at once, causing a huge deluge.

However I believe this does not happen during the normal transfers, so my
question is simply - is there any hidden tunables I can enable for this
with the implied cost of network and heavy usage of disks ? Will there be
in the future if not ?

I have tried disabling authx, upgrading the network to 10gbit, have bigger
journals, more bluestore cache and disabled the debugging logs as it has
been advised on the list. The only thing that did help a bit was cache
tiering, but this only helps somewhat as the ops do not get promoted unless
I am very adamant about keeping programs in KVM open for very long times so
that the writes/reads are promoted.
To add insult to injury, once the cache gets full the whole 3-node
cluster grinds to a halt until I start forcefully evicting data from the
cache... manually!
So I am therefore guessing a really bad misconfiguration from my side.

Next step would be removing the cache layer and using those SSDs as bcache
instead as it seems to yield 5x the results, even though it does add yet
another layer of complexity and RAM requirements.

Full config details:
https://pastebin.com/xUM7VF9k

rados bench -p ceph_pool 30 write
Total time run: 30.983343
Total writes made:  762
Write size: 4194304
Object size:4194304
Bandwidth (MB/sec): 98.3754
Stddev Bandwidth:   20.9586
Max bandwidth (MB/sec): 132
Min bandwidth (MB/sec): 16
Average IOPS:   24
Stddev IOPS:5
Max IOPS:   33
Min IOPS:   4
Average Latency(s): 0.645017
Stddev Latency(s):  0.326411
Max latency(s): 2.08067
Min latency(s): 0.0355789
Cleaning up (deleting benchmark objects)
Removed 762 objects
Clean up completed and total clean up time :3.925631

Thanks,
Alex


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-09 Thread Alex Lupsa
Hi,
Any ideas about the below ?

Thanks,
Alex

--
Hi,
I have a really small homelab 3-node ceph cluster on consumer hw - thanks
to Proxmox for making it easy to deploy it.
The problem I am having is very very bad transfer rates, ie 20mb/sec for
both read and write on 17 OSDs with cache layer.
However, during recovery the speed hovers between 250 and 700mb/sec, which
proves that the cluster IS capable of reaching way above those 20mb/sec in
KVM.

Reading the documentation, I see that during recovery "nearly all OSDs
participate in resilvering a new drive" - kind of a torrent of data
incoming from multiple sources at once, causing a huge deluge.

However I believe this does not happen during the normal transfers, so my
question is simply - is there any hidden tunables I can enable for this
with the implied cost of network and heavy usage of disks ? Will there be
in the future if not ?

I have tried disabling authx, upgrading the network to 10gbit, have bigger
journals, more bluestore cache and disabled the debugging logs as it has
been advised on the list. The only thing that did help a bit was cache
tiering, but this only helps somewhat as the ops do not get promoted unless
I am very adamant about keeping programs in KVM open for very long times so
that the writes/reads are promoted.
To add insult to injury, once the cache gets full the whole 3-node
cluster grinds to a halt until I start forcefully evicting data from the
cache... manually!
So I am therefore guessing a really bad misconfiguration from my side.

Next step would be removing the cache layer and using those SSDs as bcache
instead as it seems to yield 5x the results, even though it does add yet
another layer of complexity and RAM requirements.

Full config details:https://pastebin.com/xUM7VF9k

rados bench -p ceph_pool 30 write
Total time run: 30.983343
Total writes made:  762
Write size: 4194304
Object size:4194304
Bandwidth (MB/sec): 98.3754
Stddev Bandwidth:   20.9586
Max bandwidth (MB/sec): 132
Min bandwidth (MB/sec): 16
Average IOPS:   24
Stddev IOPS:5
Max IOPS:   33
Min IOPS:   4
Average Latency(s): 0.645017
Stddev Latency(s):  0.326411
Max latency(s): 2.08067
Min latency(s): 0.0355789
Cleaning up (deleting benchmark objects)
Removed 762 objects
Clean up completed and total clean up time :3.925631

Thanks,
Alex
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com