Re: [ceph-users] mds0: Client X failing to respond to capability release

2016-02-05 Thread Yan, Zheng
> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH wrote: > > Hi, > > sorry for the delay - production system, unfortunately ;-( > > On 04.02.2016 at 15:38, Yan, Zheng wrote: >>> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH

Re: [ceph-users] Ceph mirrors wanted!

2016-02-05 Thread Josef Johansson
Hi Wido, We're planning on hosting here in Sweden. I can let you know when we're ready. Regards Josef On Sat, 30 Jan 2016 15:15 Wido den Hollander wrote: > Hi, > > My PR was merged with a script to mirror Ceph properly: > https://github.com/ceph/ceph/tree/master/mirroring >

Re: [ceph-users] ceph 9.2.0 mds cluster went down and now constantly crashes with Floating point exception

2016-02-05 Thread John Spray
On Fri, Feb 5, 2016 at 3:48 PM, Kenneth Waegeman wrote: > Hi, > > In my attempt to retry, I ran 'ceph mds newfs' because removing the fs was > not working (because the mdss could not be stopped). > With the new fs, I could again start syncing. After 10-15min it all

Re: [ceph-users] ceph 9.2.0 mds cluster went down and now constantly crashes with Floating point exception

2016-02-05 Thread Kenneth Waegeman
Hi, In my attempt to retry, I ran 'ceph mds newfs' because removing the fs was not working (because the mdss could not be stopped). With the new fs, I could again start syncing. After 10-15min it all crashed again. The log now shows some other stacktrace. -9> 2016-02-05 15:26:29.015197
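For reference, the teardown sequence a 9.2.0 cluster normally expects before recreating a filesystem looks roughly like the sketch below; the filesystem and pool names are hypothetical, and all MDS daemons have to be stopped or marked failed first:

    ceph mds fail 0                                  # mark the active rank as failed
    ceph fs rm cephfs --yes-i-really-mean-it         # drop the filesystem entry
    ceph fs new cephfs cephfs_metadata cephfs_data   # recreate against (ideally empty) pools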

[ceph-users] cls_rbd ops on rbd_id.$name objects in EC pool

2016-02-05 Thread Sage Weil
On Wed, 27 Jan 2016, Nick Fisk wrote: > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Jason Dillaman > > Sent: 27 January 2016 14:25 > > To: Nick Fisk > > Cc: ceph-users@lists.ceph.com > > Subject: Re:

Re: [ceph-users] Default CRUSH Weight Set To 0 ?

2016-02-05 Thread Kyle
Burkhard Linke writes: > > The default weight is the size of the OSD in terabytes. Did you use > a very small OSD partition for test purposes, e.g. 20 GB? In that > case the weight is rounded and results in an effective weight of > 0.0. As a result the
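If the OSD really is that small, the weight can be bumped by hand; a minimal sketch (the OSD id is made up, and 0.02 corresponds to roughly 20 GB expressed in TiB):

    ceph osd tree                        # the WEIGHT column shows the effective 0.0
    ceph osd crush reweight osd.3 0.02   # give the small test OSD a non-zero weight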

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-05 Thread Jose M
Hi Zoltan, thanks for the answer. Replacing hdfs:// with ceph:// and using CephFS doesn't work for all Hadoop components out of the box (at least not in my tests); for example I had issues with HBase, then with Yarn, Hue, etc. (I'm using the Cloudera distribution but I also tried with

Re: [ceph-users] Ceph mirrors wanted!

2016-02-05 Thread Wido den Hollander
Hi, Great! So that would be se.ceph.com? There is a ceph-mirrors list for mirror admins, so let me know when you are ready to set up so I can add you there. Wido > On 6 February 2016 at 8:22, Josef Johansson wrote: > > > Hi Wido, > > We're planning on hosting here in

Re: [ceph-users] Ceph mirrors wanted!

2016-02-05 Thread Wido den Hollander
> On 6 February 2016 at 0:08, Tyler Bishop wrote: > > > I have ceph pulling down from eu. What *origin* should I set up rsync to > automatically pull from? > > download.ceph.com is consistently broken. > download.ceph.com should be your best guess, since
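A minimal sketch of what pulling from the origin could look like, assuming download.ceph.com exposes an rsync module named "ceph" as set up by the ceph/mirroring scripts mentioned elsewhere in the thread (module name and local path are assumptions):

    rsync -avrt --delete download.ceph.com::ceph /srv/mirror/ceph/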

Re: [ceph-users] cls_rbd ops on rbd_id.$name objects in EC pool

2016-02-05 Thread Sage Weil
On Fri, 5 Feb 2016, Samuel Just wrote: > On Fri, Feb 5, 2016 at 7:53 AM, Jason Dillaman wrote: > > #1 and #2 are awkward for existing pools since we would need a tool to > > inject dummy omap values within existing images. Can the cache tier > > force-promote it from the

Re: [ceph-users] cls_rbd ops on rbd_id.$name objects in EC pool

2016-02-05 Thread Samuel Just
It seems like the cache tier should force promote when it gets an op the backing pool doesn't support. I think using the cache-pin mechanism would make sense. -Sam On Fri, Feb 5, 2016 at 7:53 AM, Jason Dillaman wrote: > #1 and #2 are awkward for existing pools since we

Re: [ceph-users] cls_rbd ops on rbd_id.$name objects in EC pool

2016-02-05 Thread Jason Dillaman
#1 and #2 are awkward for existing pools since we would need a tool to inject dummy omap values within existing images. Can the cache tier force-promote it from the EC pool to the cache when an unsupported op is encountered? There is logic like that in jewel/master for handling the proxied

[ceph-users] Unified queue in Infernalis

2016-02-05 Thread Stillwell, Bryan
I saw the following in the release notes for Infernalis, and I'm wondering where I can find more information about it? * There is now a unified queue (and thus prioritization) of client IO, recovery, scrubbing, and snapshot trimming. I've tried checking the docs for more details, but didn't have

[ceph-users] CFQ changes affect Ceph priority?

2016-02-05 Thread Warren Wang - ISD
Not sure how many folks use the CFQ scheduler to take advantage of Ceph IO priority, but there’s a CFQ change that probably needs to be evaluated for Ceph purposes. http://lkml.iu.edu/hypermail/linux/kernel/1602.0/00820.html This might be a better question for the dev list. Warren Wang
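The settings this most directly touches are presumably the OSD disk-thread ioprio options, which only take effect when the underlying device uses CFQ; a hedged illustration (values shown for illustration only, not recommendations):

    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
    cat /sys/block/sda/queue/scheduler   # confirm cfq is actually selected (device name assumed)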

Re: [ceph-users] Ceph mirrors wanted!

2016-02-05 Thread Tyler Bishop
I have ceph pulling down from eu. What *origin* should I set up rsync to automatically pull from? download.ceph.com is consistently broken. - Original Message - From: "Tyler Bishop" To: "Wido den Hollander" Cc: "ceph-users"

Re: [ceph-users] Ceph mirrors wanted!

2016-02-05 Thread Tyler Bishop
We would be happy to mirror the project. http://mirror.beyondhosting.net - Original Message - From: "Wido den Hollander" To: "ceph-users" Sent: Saturday, January 30, 2016 9:14:59 AM Subject: [ceph-users] Ceph mirrors wanted! Hi, My PR was merged

[ceph-users] radosgw config changes

2016-02-05 Thread Austin Johnson
All, I'm running a small infernalis cluster. I think I've either a) found a bug, or b) need to be retrained on how to use a keyboard. ;) For some reason I cannot get the radosgw daemons (using upstart) to accept a config change through the "ceph-deploy config push" method. If I start radosgw
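For comparison, the usual flow (hostname is made up) is to push the config and then restart the gateway so it re-reads ceph.conf; pushing alone does not change a running daemon:

    ceph-deploy --overwrite-conf config push rgw-node1
    # then restart the radosgw daemon on rgw-node1; the exact upstart job and
    # instance name depend on how the gateway was deployed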

Re: [ceph-users] cls_rbd ops on rbd_id.$name objects in EC pool

2016-02-05 Thread Nick Fisk
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Sage Weil > Sent: 05 February 2016 18:45 > To: Samuel Just > Cc: Jason Dillaman ; Nick Fisk ; >

Re: [ceph-users] ceph 9.2.0 mds cluster went down and now constantly crashes with Floating point exception

2016-02-05 Thread John Spray
On Fri, Feb 5, 2016 at 9:36 AM, Kenneth Waegeman wrote: > > > On 04/02/16 16:17, Gregory Farnum wrote: >> >> On Thu, Feb 4, 2016 at 1:42 AM, Kenneth Waegeman >> wrote: >>> >>> Hi, >>> >>> Hi, we are running ceph 9.2.0. >>> Overnight, our ceph

Re: [ceph-users] cephfs - inconsistent nfs and samba directory listings

2016-02-05 Thread Dan van der Ster
Thanks for this thread. We just made the same mistake (rmfailed) on our hammer cluster, which broke it similarly. The addfailed patch worked for us too. -- Dan On Fri, Jan 15, 2016 at 6:30 AM, Mike Carlson wrote: > Hey ceph-users, > > I wanted to follow up, Zheng's patch did

Re: [ceph-users] ceph 9.2.0 mds cluster went down and now constantly crashes with Floating point exception

2016-02-05 Thread Kenneth Waegeman
On 04/02/16 16:17, Gregory Farnum wrote: On Thu, Feb 4, 2016 at 1:42 AM, Kenneth Waegeman wrote: Hi, Hi, we are running ceph 9.2.0. Overnight, our ceph state went to 'mds mds03 is laggy' . When I checked the logs, I saw this mds crashed with a stacktrace. I

Re: [ceph-users] Fwd: HEALTH_WARN pool vol has too few pgs

2016-02-05 Thread M Ranga Swami Reddy
Hi - As per the PG calculation: (number of OSDs * 100) / pool size => 96 * 100 / 3 = 3200 => round up to 4096. So 4096 is the correct pg_num; in this case the PG count matches the recommendation. On Thu, Feb 4, 2016 at 2:14 AM, Ferhat Ozkasgarli wrote: > As the message states, you must increase
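Spelled out, the rule of thumb used above (the pool name is hypothetical):

    # target = (number of OSDs * 100) / replica count, rounded up to a power of two
    echo $(( 96 * 100 / 3 ))                  # 3200
    ceph osd pool set volumes pg_num 4096     # next power of two above 3200
    ceph osd pool set volumes pgp_num 4096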

Re: [ceph-users] why is there heavy read traffic during object delete?

2016-02-05 Thread Stephen Lord
I looked at this system this morning, and it actually finished what it was doing. The erasure coded pool still contains all the data and the cache pool has about a million zero sized objects: GLOBAL: SIZE AVAIL RAW USED %RAW USED OBJECTS 15090G 9001G
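If those leftover zero-sized objects are just cache-tier remnants, one hedged way to clear them out (the pool name is made up) is to flush and evict the whole cache tier:

    rados -p hot-cache cache-flush-evict-all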

Re: [ceph-users] why is there heavy read traffic during object delete?

2016-02-05 Thread Gregory Farnum
On Fri, Feb 5, 2016 at 6:39 AM, Stephen Lord wrote: > > I looked at this system this morning, and it actually finished what it was > doing. The erasure coded pool still contains all the data and the cache > pool has about a million zero sized objects: > > > GLOBAL: >

Re: [ceph-users] Performance issues related to scrubbing

2016-02-05 Thread Bob R
Cullen, We operate a cluster with 4 nodes, each with 2x E5-2630, 64 GB RAM and 10x 4 TB spinners. We've recently replaced the 2x m550 journals with a single P3700 NVMe drive per server and didn't see the performance gains we were hoping for. After making the changes below we're now seeing significantly better
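The actual settings are cut off above, so the following is only a hedged illustration of the scrub-throttling options commonly tuned in this situation, not the poster's changes:

    ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_sleep 0.1'
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'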

Re: [ceph-users] Unified queue in Infernalis

2016-02-05 Thread Robert LeBlanc
I believe this is referring to combining the previously separate queues into a single queue (PrioritizedQueue and soon to be WeightedPriorityQueue) in ceph. That way client IO and recovery IO can be better prioritized in the Ceph code. This is all before the disk queue. Robert LeBlanc Sent from
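To make the mechanism concrete, these are the priority knobs that feed that single queue; the values shown are the usual defaults, given purely as an illustration:

    ceph tell osd.* injectargs '--osd_client_op_priority 63 --osd_recovery_op_priority 3'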

Re: [ceph-users] radosgw config changes

2016-02-05 Thread Karol Mroz
On Fri, Feb 05, 2016 at 12:43:52PM -0700, Austin Johnson wrote: > All, > > I'm running a small infernalis cluster. > > I think I've either a) found a bug, or b) need to be retrained on how to > use a keyboard. ;) > > For some reason I cannot get a radosgw daemons (using upstart) to accept a >

Re: [ceph-users] mds0: Client X failing to respond to capability release

2016-02-05 Thread Michael Metz-Martini | SpeedPartner GmbH
Hi, sorry for the delay - production system, unfortunately ;-( On 04.02.2016 at 15:38, Yan, Zheng wrote: >> On Feb 4, 2016, at 17:00, Michael Metz-Martini | SpeedPartner GmbH >> wrote: >> On 04.02.2016 at 09:43, Yan, Zheng wrote: >>> On Thu, Feb 4, 2016 at 4:36 PM,
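A hedged way to see which client is actually sitting on the caps (the MDS daemon name is made up) is to query the MDS admin socket:

    ceph daemon mds.mds01 session ls           # num_caps and client metadata per session
    ceph daemon mds.mds01 dump_ops_in_flight   # what the MDS is currently blocked on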