Re: [ceph-users] Trying to understand the contents of .rgw.buckets.index

2016-01-29 Thread Gregory Farnum
On Fri, Jan 29, 2016 at 3:10 AM, Wido den Hollander wrote: > > > On 29-01-16 11:31, Micha Krause wrote: >> Hi, >> >> I'm having problems listing the contents of an s3 bucket with ~2M objects. >> >> I already found the new bucket index sharding feature, but I'm >> interested how

Re: [ceph-users] Striping feature gone after flatten with cloned images

2016-01-29 Thread Jason Dillaman
That was intended as an example of how to use fancy striping with RBD images. The stripe unit and count are knobs to tweak depending on your IO situation. Taking a step back, what are you actually trying to accomplish? The flatten operation doesn't necessarily fit the use case for this
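
For reference, fancy striping is configured at image creation time; a minimal sketch (pool/image names and values are placeholders, and as far as I know the kernel client does not support fancy striping, only librbd does):

    rbd create rbd/stripetest --size 102400 --image-format 2 \
        --stripe-unit 65536 --stripe-count 16
    rbd info rbd/stripetest   # reports stripe unit/count when fancy striping is in use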

Re: [ceph-users] SSD Journal

2016-01-29 Thread Lionel Bouton
On 29/01/2016 01:12, Jan Schermer wrote: > [...] >> Second, I'm not familiar with Ceph internals, but OSDs must make sure that >> their PGs are synced, so I was under the impression that the OSD content for >> a PG on the filesystem should always be guaranteed to be on all the other >> active

Re: [ceph-users] ceph.conf file update

2016-01-29 Thread M Ranga Swami Reddy
Thank you... Will use Matt's suggestion to deploy the updated conf files. Thanks, Swami On Fri, Jan 29, 2016 at 3:10 PM, Adrien Gillard wrote: > Hi, > > No, when an OSD or a MON service starts it fetches its local > /etc/ceph/ceph.conf file. So, as Matt stated, you
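
For reference, with ceph-deploy the updated file can be pushed to all nodes in one go (host names below are placeholders); daemons still need a restart to pick up most changes:

    ceph-deploy --overwrite-conf config push mon01 mon02 mon03 osd01 osd02 osd03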

Re: [ceph-users] SSD Journal

2016-01-29 Thread Jan Schermer
>> inline > On 29 Jan 2016, at 05:03, Somnath Roy wrote: > > > From: Jan Schermer [mailto:j...@schermer.cz] > Sent: Thursday, January 28, 2016 3:51 PM > To: Somnath Roy > Cc: Tyler Bishop; ceph-users@lists.ceph.com

Re: [ceph-users] Trying to understand the contents of .rgw.buckets.index

2016-01-29 Thread Wido den Hollander
On 29-01-16 11:31, Micha Krause wrote: > Hi, > > I'm having problems listing the contents of an s3 bucket with ~2M objects. > > I already found the new bucket index sharding feature, but I'm > interested how these Indexes are stored. > > My index pool shows no space used, and all objects have

Re: [ceph-users] Typical architecture in RDB mode - Number of servers explained ?

2016-01-29 Thread Gaetan SLONGO
Thank you for your answer! What do you consider medium and large clusters? Best regards, - Original Message - From: "Eneko Lacunza" To: ceph-users@lists.ceph.com Sent: Thursday, 28 January 2016 14:32:40 Subject: Re: [ceph-users] Typical architecture in RDB mode - Number

Re: [ceph-users] Trying to understand the contents of .rgw.buckets.index

2016-01-29 Thread Micha Krause
Hi, > The index is stored in the omap of the object which you can list with > the 'rados' command. > > So it's not data inside the RADOS object, but in the omap key/value store. Thank you very much: rados -p .rgw.buckets.index listomapkeys .dir.default.55059808.22 | wc -l 2228777 So this
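
For reference, the sharding feature mentioned here is enabled through a radosgw config option; it only applies to buckets created after the change, and the section name and shard count below are just examples:

    [client.radosgw.gateway]
        rgw override bucket index max shards = 16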

[ceph-users] Trying to understand the contents of .rgw.buckets.index

2016-01-29 Thread Micha Krause
Hi, I'm having problems listing the contents of an S3 bucket with ~2M objects. I already found the new bucket index sharding feature, but I'm interested in how these indexes are stored. My index pool shows no space used, and all objects have 0B. root@mon01:~ # rados df -p .rgw.buckets.index

Re: [ceph-users] ceph.conf file update

2016-01-29 Thread Adrien Gillard
Hi, No, when an OSD or a MON service starts it fetches its local /etc/ceph/ceph.conf file. So, as Matt stated, you need to deploy your updated ceph.conf on all the nodes. Indeed, services contact the MONs, but it is mainly to retrieve the crushmap. Adrien On Fri, Jan 29, 2016 at 7:46 AM, M
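
As a side note, individual settings can also be changed at runtime without editing ceph.conf everywhere, though such changes are lost on daemon restart (option and value below are only examples):

    ceph tell osd.* injectargs '--osd_max_backfills 1'
    ceph daemon osd.0 config show | grep osd_max_backfills   # run on the OSD's host, uses the admin socket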

Re: [ceph-users] SSD Journal

2016-01-29 Thread Lionel Bouton
On 29/01/2016 16:25, Jan Schermer wrote: > > [...] > > > But if I understand correctly, there is indeed a log of the recent > modifications in the filestore which is used when a PG is recovering > because another OSD is lagging behind (not when Ceph reports a full > backfill
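
For context, the size of that per-PG log is bounded by two OSD options; the values below are the defaults as far as I remember, so double-check them for your release:

    [osd]
        osd min pg log entries = 3000
        osd max pg log entries = 10000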

Re: [ceph-users] Lost access when removing cache pool overlay

2016-01-29 Thread Robert LeBlanc
Does the client key have access to the base pool? Something similar bit us when adding a caching tier. Since the cache tier may be proxying all the I/O, the client may not have had access to the base pool and it still worked OK. Once you removed the
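
For reference, checking and extending the client's caps would look roughly like this (client name and pools are placeholders; note that 'ceph auth caps' replaces all existing caps, so include everything the client still needs):

    ceph auth get client.rgw.example
    ceph auth caps client.rgw.example mon 'allow rwx' \
        osd 'allow rwx pool=.rgw.buckets, allow rwx pool=.rgw.buckets.index.new'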

Re: [ceph-users] SSD Journal

2016-01-29 Thread Jan Schermer
> On 29 Jan 2016, at 16:00, Lionel Bouton wrote: > > On 29/01/2016 01:12, Jan Schermer wrote: >> [...] >>> Second, I'm not familiar with Ceph internals, but OSDs must make sure that >>> their PGs are synced, so I was under the impression that the OSD content

[ceph-users] Lost access when removing cache pool overlay

2016-01-29 Thread Gerd Jakobovitsch
Dear all, I had to move the .rgw.buckets.index pool to another structure; therefore, I created a new pool .rgw.buckets.index.new, added the old pool as a cache pool, and flushed the data. Up to this moment everything was OK. With rados df, I saw the objects moving to the new pool; the
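
For anyone trying to follow the same path, the migration described above usually boils down to a sequence roughly like this (reconstructed from the description, not the poster's exact commands; test on a non-production pool first):

    ceph osd tier add .rgw.buckets.index.new .rgw.buckets.index --force-nonempty
    ceph osd tier cache-mode .rgw.buckets.index forward
    ceph osd tier set-overlay .rgw.buckets.index.new .rgw.buckets.index
    rados -p .rgw.buckets.index cache-flush-evict-all
    ceph osd tier remove-overlay .rgw.buckets.index.new
    ceph osd tier remove .rgw.buckets.index.new .rgw.buckets.index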

Re: [ceph-users] Striping feature gone after flatten with cloned images

2016-01-29 Thread Jason Dillaman
High queue depth, sequential, direct IO. With appropriate striping settings, instead of all the sequential IO being processed by the same PG sequentially, the IO will be processed by multiple PGs in parallel. -- Jason Dillaman - Original Message - > From: "Василий Ангапов"
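
For testing, that kind of workload can be generated with fio against a mapped or librbd-backed device, e.g. (device path and sizes are placeholders):

    fio --name=seqwrite --filename=/dev/rbd0 --ioengine=libaio \
        --direct=1 --rw=write --bs=1M --iodepth=32 --size=10G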

Re: [ceph-users] SSD Journal

2016-01-29 Thread Robert LeBlanc
Jan, I know that Sage has worked through a lot of this and spent a lot of time on it, so I'm somewhat inclined to say that if he says it needs to be there, then it needs to be there. I, however, have been known to stare at the trees so much that I

Re: [ceph-users] Lost access when removing cache pool overlay

2016-01-29 Thread Gerd Jakobovitsch
Thank you for the response. It seems to me it is a transient situation. At this moment, I have regained access to most, but not all, buckets/index objects. But the overall performance dropped once again - I already have huge performance issues. Regards. On 29-01-2016 14:41, Robert LeBlanc

[ceph-users] rbd kernel mapping on 3.13

2016-01-29 Thread Deneau, Tom
The commands shown below had successfully mapped rbd images in the past on kernel version 4.1. Now I need to map one on a system running the 3.13 kernel. Ceph version is 9.2.0. Rados bench operations work with no problem. I get the same error message whether I use format 1 or format 2 or

Re: [ceph-users] rbd kernel mapping on 3.13

2016-01-29 Thread Ilya Dryomov
On Fri, Jan 29, 2016 at 11:43 PM, Deneau, Tom wrote: > The commands shown below had successfully mapped rbd images in the past on > kernel version 4.1. > > Now I need to map one on a system running the 3.13 kernel. > Ceph version is 9.2.0. Rados bench operations work with no

Re: [ceph-users] rbd kernel mapping on 3.13

2016-01-29 Thread Deneau, Tom
Ah, yes I see this... feature set mismatch, my 4a042a42 < server's 104a042a42, missing 10 which looks like CEPH_FEATURE_CRUSH_V2 Is there any workaround for that? Or what ceph version would I have to back up to? The cbt librbdfio benchmark worked fine (once I had installed librbd-dev
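
In case it helps anyone searching later, the usual way to investigate that kind of mismatch is to look at the crush tunables profile and the crush map itself; whether dropping to an older profile (e.g. 'ceph osd crush tunables legacy') actually clears the CRUSH_V2 requirement depends on what in the map triggers it, and changing tunables causes data movement, so treat this only as a starting point:

    ceph osd crush show-tunables
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt   # inspect for features the old kernel lacks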

[ceph-users] RGW Civetweb + CentOS7 boto errors

2016-01-29 Thread Ben Hines
After updating our RGW servers to CentOS 7 + civetweb, when hit with a fair amount of load (20 gets/sec + a few puts/sec) I'm seeing 'BadStatusLine' exceptions from boto relatively often. It happens most when calling bucket.get_key() (about 10 times in 1000). These appear to be possibly random TCP
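
One thing worth checking under that kind of load is the civetweb thread pool, which can be raised in ceph.conf (section name and value below are only examples):

    [client.radosgw.gateway]
        rgw frontends = "civetweb port=7480 num_threads=512"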

Re: [ceph-users] SSD Journal

2016-01-29 Thread Anthony D'Atri
> Right now we run the journal as a partition on the data disk. I've built > drives without journals and the write performance seems okay but random IO > performance is poor in comparison to what it should be. Co-located journals have multiple issues: o The disks are presented with double
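
For comparison, placing the journal on a separate SSD at OSD creation time is typically done by handing ceph-disk a second device (device names below are placeholders); moving the journal of an existing OSD also requires flushing it first with 'ceph-osd -i <id> --flush-journal':

    ceph-disk prepare /dev/sdd /dev/sdb   # data disk first, SSD journal device second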

[ceph-users] storing bucket index in different pool than default

2016-01-29 Thread Krzysztof Księżyk
Hi, When I show bucket info I see:
> [root@prod-ceph-01 /home/chris.ksiezyk]> radosgw-admin bucket stats -b bucket1
> {
>     "bucket": "bucket1",
>     "pool": ".rgw.buckets",
>     "index_pool": ".rgw.buckets.index",
>     "id": "default.4162.3",
>     "marker": "default.4162.3",
>
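
For what it's worth, the index pool for newly created buckets comes from the zone placement target, which can be changed roughly like this (existing buckets keep their old index location; the exact flags vary a bit between releases):

    radosgw-admin zone get > zone.json
    # edit placement_pools -> val -> index_pool in zone.json
    radosgw-admin zone set < zone.json
    radosgw-admin regionmap update   # may be needed on pre-Jewel releases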

Re: [ceph-users] Ceph Tech Talk - High-Performance Production Databases on Ceph

2016-01-29 Thread Gregory Farnum
This is super cool — thanks, Thorvald, for the realistic picture of how databases behave on rbd! On Thu, Jan 28, 2016 at 11:56 AM, Patrick McGarry wrote: > Hey cephers, > > Here are the links to both the video and the slides from the Ceph Tech > Talk today. Thanks again to