Re: [ceph-users] NUMA and ceph ... zone_reclaim_mode

2015-01-13 Thread Mark Nelson
On 01/12/2015 07:47 AM, Dan van der Ster wrote: (resending to list) Hi Kyle, I'd like to +10 this old proposal of yours. Let me explain why... A couple months ago we started testing a new use-case with radosgw -- this new user is writing millions of small files and has been causing us some
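For context, zone_reclaim_mode is the standard kernel sysctl being discussed; a minimal sketch of checking and disabling it on an OSD host (the value 0 simply turns reclaim off and is only illustrative) might look like:

    # check the current setting
    cat /proc/sys/vm/zone_reclaim_mode
    # disable zone reclaim on the running kernel
    sysctl -w vm.zone_reclaim_mode=0
    # persist the setting across reboots
    echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf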

[ceph-users] ssd osd fails often with FAILED assert(soid < scrubber.start || soid >= scrubber.end)

2015-01-13 Thread Udo Lembke
Hi, since last Thursday we have had an ssd-pool (cache tier) in front of an ec-pool and have been filling the pools with data via rsync (approx. 50 MB/s). The ssd-pool has three disks and one of them (a DC S3700) has failed four times since then. I simply start the OSD again and the pool is rebuilt and works again for
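A minimal sketch of the restart-and-verify cycle described above, assuming a sysvinit-managed install and that the failing daemon is osd.12 (the id is hypothetical):

    # bring the failed OSD back up
    service ceph start osd.12
    # watch the cache tier recover
    ceph -w
    # confirm the OSD is up and in again
    ceph osd tree | grep osd.12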

Re: [ceph-users] CRUSH question - failing to rebalance after failure test

2015-01-13 Thread Sage Weil
On Mon, 12 Jan 2015, Christopher Kunz wrote: Hi, [redirecting back to list] Oh, it could be that... can you include the output from 'ceph osd tree'? That's a more concise view that shows up/down, weight, and in/out. Thanks! sage root@cepharm17:~# ceph osd tree # id weight
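As a reference for the rebalance-after-failure case being debugged here, a short sketch of the usual commands (the OSD id 3 is only an example):

    # concise view of the CRUSH hierarchy with up/down, weight, and in/out
    ceph osd tree
    # if a failed OSD is still marked "in", mark it out so CRUSH rebalances its data
    ceph osd out 3
    # watch recovery progress
    ceph -s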

[ceph-users] any workaround for FAILED assert(p != snapset.clones.end())

2015-01-13 Thread Luke Kao
Hello community, We have a cluster using v0.80.5, and recently several OSDs go down with an error when removing an rbd snapshot: osd/ReplicatedPG.cc: 2352: FAILED assert(p != snapset.clones.end()) and after restarting those OSDs, they soon go down again with the same error. It looks like it is linked to
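A hedged sketch of the damage-limitation steps commonly taken while OSDs flap like this (the pool/image name rbd/myimage is hypothetical):

    # avoid triggering rebalancing storms while the affected OSDs are restarted
    ceph osd set noout
    # inspect the snapshots of the image whose removal triggers the assert
    rbd snap ls rbd/myimage
    # re-enable normal out-marking once the OSDs are stable again
    ceph osd unset noout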

Re: [ceph-users] error adding OSD to crushmap

2015-01-13 Thread Jason King
Hi Luis, Could you show us the output of *ceph osd tree*? Jason 2015-01-12 20:45 GMT+08:00 Luis Periquito periqu...@gmail.com: Hi all, I've been trying to add a few new OSDs, and as I manage everything with puppet, I was manually adding them via the CLI. At one point it adds the OSD to the
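For reference, a minimal sketch of manually adding an OSD to the CRUSH map as discussed in this thread (the id osd.20, weight 1.0, and host node5 are only placeholders):

    # allocate the next free OSD id
    ceph osd create
    # place the new OSD in the CRUSH map under its host bucket
    ceph osd crush add osd.20 1.0 host=node5
    # verify the placement
    ceph osd tree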