Re: [ceph-users] Global, Synchronous Blocked Requests

2015-11-28 Thread Lionel Bouton
Hi, On 28/11/2015 04:24, Brian Felton wrote: > Greetings Ceph Community, > > We are running a Hammer cluster (0.94.3-1) in production that recently > experienced asymptotic performance degradation. We've been migrating > data from an older non-Ceph cluster at a fairly steady pace for the >
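
For anyone chasing similar global blocked-request symptoms, a minimal first-pass diagnosis might look like the sketch below (the OSD id is illustrative; the daemon commands run on the node hosting that OSD):

    # which OSDs are currently reporting blocked requests
    ceph health detail | grep blocked

    # ops stuck inside a suspect OSD right now
    ceph daemon osd.12 dump_ops_in_flight

    # recently completed slow ops, with per-phase timings
    ceph daemon osd.12 dump_historic_ops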

Re: [ceph-users] RGW pool contents

2015-11-28 Thread Wido den Hollander
On 11/26/2015 09:03 AM, Somnath Roy wrote: > Thanks Wido! > Could you please explain a bit more about the relationship between user created > buckets and the objects within the .bucket.index pool? > I am not seeing one entry created within the .bucket.index > pool for each bucket. > I don't think
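
A quick way to verify the bucket-to-index-object mapping yourself, assuming the index pool name used in this thread (the bucket name is illustrative):

    # the bucket's internal id
    radosgw-admin bucket stats --bucket=mybucket | grep '"id"'

    # each bucket normally has one index object named .dir.<bucket_id>
    rados -p .bucket.index ls | grep '^\.dir\.'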

Re: [ceph-users] Global, Synchronous Blocked Requests

2015-11-28 Thread Lindsay Mathieson
On 28 November 2015 at 13:24, Brian Felton wrote: > Each storage server contains 72 6TB SATA drives for Ceph (648 OSDs, ~3.5PB > in total). Each disk is set up as its own ZFS zpool. Each OSD has a 10GB > journal, located within the disk's zpool. > I doubt I have much to
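
A 10GB journal living inside each OSD's own zpool would typically be declared along these lines in ceph.conf (a minimal sketch with illustrative paths, not the poster's actual config):

    [osd]
        ; journal size is given in MB, so 10240 = 10 GB
        osd journal size = 10240
        ; journal file inside the OSD's data directory on its zpool
        osd journal = /var/lib/ceph/osd/ceph-$id/journal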

Re: [ceph-users] Undersized pgs problem

2015-11-28 Thread Bob R
Vasiliy, Your OSDs are marked as 'down' but 'in'. "Ceph OSDs have two known states that can be combined. *Up* and *Down* only tells you whether the OSD is actively involved in the cluster. OSD states also are expressed in terms of cluster replication: *In* and *Out*. Only when a Ceph OSD is
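
To see both state pairs side by side, and to force recovery for a down-but-in OSD, something like this works (the OSD id is illustrative):

    # up/down per OSD, plus the CRUSH hierarchy
    ceph osd tree

    # in/out shows up in the osdmap entries
    ceph osd dump | grep '^osd'

    # mark a down OSD out by hand so backfill can start
    ceph osd out 7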

[ceph-users] ceph and cache pools?

2015-11-28 Thread Florian Rommel
Hi, I have a bit of a problem. I have a fully functioning Ceph cluster. Each server has an SSD drive that we would like to use as a cache pool and six 1.7TB data drives that we would like to use for an erasure-coded pool. Yes, I would like to put a cache pool overlaid on the erasure pool. Ceph version
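
The usual recipe for overlaying a replicated cache pool on an erasure-coded base looks roughly like this (pool names, PG counts and thresholds are illustrative; a CRUSH rule pinning the hot pool to the SSDs is also needed):

    ceph osd pool create cold-ec 1024 1024 erasure
    ceph osd pool create hot-ssd 128 128 replicated
    ceph osd tier add cold-ec hot-ssd
    ceph osd tier cache-mode hot-ssd writeback
    ceph osd tier set-overlay cold-ec hot-ssd
    # writeback caching requires hit-set tracking
    ceph osd pool set hot-ssd hit_set_type bloom
    ceph osd pool set hot-ssd hit_set_count 1
    ceph osd pool set hot-ssd hit_set_period 3600
    # cap the cache so the tiering agent knows when to flush/evict
    ceph osd pool set hot-ssd target_max_bytes 1000000000000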

Re: [ceph-users] network failover with public/cluster network - is that possible

2015-11-28 Thread Alex Gorbachev
On Wednesday, November 25, 2015, Götz Reinicke - IT Koordinator < goetz.reini...@filmakademie.de> wrote: > Hi, > > discussing some design questions we came across the failover possibility > of Ceph's network configuration. > > If I just have a public network, all traffic is crossing that LAN. > >
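
For reference, the split itself is just two ceph.conf directives; note that Ceph does not fail traffic over from one network to the other, so redundancy is usually built underneath with OS-level bonding (subnets illustrative):

    [global]
        public network  = 192.168.10.0/24
        cluster network = 192.168.20.0/24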

[ceph-users] Ceph OSD: Memory Leak problem

2015-11-28 Thread prasad pande
Hi, I installed a Ceph cluster with 2 MONs, 1 MDS and 10 OSDs. While performing rados put operations to store objects in the cluster, I am getting OSD errors like the following: 2015-11-28 23:02:03.276821 7f7f5affb700 0 -- 10.176.128.135:0/1009266 >> 10.176.128.136:6800/22824 pipe(0x7f7f6000e190
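
When an OSD's memory keeps growing, the heap introspection built into Ceph is a reasonable first check (OSD id illustrative; assumes the default tcmalloc build):

    # per-OSD heap usage as seen by tcmalloc
    ceph tell osd.3 heap stats

    # ask tcmalloc to hand freed pages back to the kernel
    ceph tell osd.3 heap release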

[ceph-users] Re: In flight osd io

2015-11-28 Thread louis
To rephrase my question: how can I know there is no IO running on an rbd image? On 29/11/2015 08:28, louis wrote: Hi, I am facing a problem solving an in-flight Ceph IO issue. I think it is possible that a client process

[ceph-users] In flight osd io

2015-11-28 Thread louis
Hi, I am facing a problem solving an in-flight Ceph IO issue. I think it is possible that a client process exited while some IOs sent by this client are still in flight in the OSDs. These in-flight IOs could cause my application to get
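
One way to answer the question in the follow-up above (is any client still doing IO against an rbd image?) is to list the watchers on the image's header object; an empty list means no client currently holds the image open. Pool and image names are illustrative, and this assumes a format-2 image:

    # block_name_prefix reveals the image id, e.g. rbd_data.1234567890ab
    rbd info mypool/myimage | grep block_name_prefix

    # the matching header object is rbd_header.<id>
    rados -p mypool listwatchers rbd_header.1234567890ab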

Re: [ceph-users] Undersized pgs problem

2015-11-28 Thread Vasiliy Angapov
Bob, Thanks for the explanation, sounds reasonable! But how could it happen that a host is down and its OSDs are still IN the cluster? I mean, the NOOUT flag is not set and my timeouts are all at their defaults... But if I remember correctly the host was not completely down; it was pingable, but other services were
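
The relevant knob is the monitors' auto-out timeout: a down OSD is only marked out after this interval, and only once the monitors have actually declared it down via missed heartbeats, which a half-alive (pingable) host can delay. A way to inspect both (monitor name illustrative):

    # seconds before a down OSD is automatically marked out
    ceph daemon mon.mon01 config get mon_osd_down_out_interval

    # confirm noout is not among the cluster flags
    ceph osd dump | grep flags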