Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 20:07, Kris Jurka wrote: > > > On 2/9/2016 10:11 AM, Lionel Bouton wrote: > >> Actually if I understand correctly how PG splitting works the next spike >> should be <n> times smaller and spread over <n> times the period (where >> <n> is the number of subdirectories created during each
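
For reference, the split behaviour discussed in this thread is governed by two filestore settings; a minimal ceph.conf sketch, with the long-standing default values shown purely as an illustration:

    [osd]
    # a PG's leaf directory is split into 16 subdirectories once it holds
    # more than (filestore split multiple) * abs(filestore merge threshold) * 16 files
    filestore merge threshold = 10
    filestore split multiple = 2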

[ceph-users] erasure code backing pool, replication cache, and openstack

2016-02-09 Thread WRIGHT, JON R (JON R)
New user. :) I'm interested in exploring how to use an erasure coded pool as block storage for Openstack. Instructions are on this page. http://docs.ceph.com/docs/master/rados/operations/erasure-code/ Of course, it says "It is not possible to create an RBD image on an erasure coded pool
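
A minimal sketch of the cache-tier arrangement the documentation describes for putting RBD on top of an erasure coded pool; the pool names, PG counts and image size below are placeholders:

    # erasure coded base pool that holds the bulk of the data
    ceph osd pool create ec-rbd 128 128 erasure
    # replicated pool used as a writeback cache tier in front of it
    ceph osd pool create ec-rbd-cache 128
    ceph osd tier add ec-rbd ec-rbd-cache
    ceph osd tier cache-mode ec-rbd-cache writeback
    ceph osd tier set-overlay ec-rbd ec-rbd-cache
    # RBD images are then created against the base pool and all client
    # I/O is transparently routed through the cache tier
    rbd create --pool ec-rbd --size 10240 test-image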

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Samuel Just
There was a patch at some point to pre-split on pg creation (merged in ad6a2be402665215a19708f55b719112096da3f4). More generally, bluestore is the answer to this. -Sam On Tue, Feb 9, 2016 at 11:34 AM, Lionel Bouton wrote: > Le 09/02/2016 20:18, Lionel Bouton a
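
For reference, the pre-split behaviour Sam mentions is exposed through the expected-num-objects argument at pool creation time; a sketch with placeholder names and counts (exact syntax and behaviour depend on the release that commit landed in):

    # give the OSDs an expected object count so they can pre-split their
    # collection directories up front instead of during later writes
    ceph osd pool create rgw-test-data 256 256 replicated \
        replicated_ruleset 10000000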

[ceph-users] Bucket listing requests get stuck

2016-02-09 Thread Alexey Kuntsevich
Hi! I have a ver 0.94.5 debian-based cluster used mostly through rados. I tried to delete objects with the same prefix from one of the buckets (~1300 objects) using a python boto library. The process finished after several minutes without any errors, but now I can list only a subset (~20) of

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Kris Jurka
On 2/9/2016 10:11 AM, Lionel Bouton wrote: Actually if I understand correctly how PG splitting works the next spike should be <n> times smaller and spread over <n> times the period (where <n> is the number of subdirectories created during each split which seems to be 15 according to OSDs' directory

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 20:18, Lionel Bouton wrote: > On 09/02/2016 20:07, Kris Jurka wrote: >> >> On 2/9/2016 10:11 AM, Lionel Bouton wrote: >> >>> Actually if I understand correctly how PG splitting works the next spike >>> should be <n> times smaller and spread over <n> times the period (where >>> <n> is

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Wade Holler
Hi there, What is the best way to "look at the rgw admin socket" to see what operations are taking a long time? Best Regards Wade On Mon, Feb 8, 2016 at 12:16 PM Gregory Farnum wrote: > On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: > > > > I've been
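
In case it helps others, a sketch of querying the gateway's admin socket; the socket path below is only an example and depends on how the rgw instance was deployed:

    # list the commands this admin socket understands
    ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway.asok help
    # per-daemon performance counters, including rgw request latencies
    ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway.asok perf dump
    # RADOS operations the gateway currently has in flight
    ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway.asok objecter_requests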

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Dan van der Ster
On Mon, Feb 8, 2016 at 8:10 PM, Sage Weil wrote: > On Mon, 8 Feb 2016, Karol Mroz wrote: >> On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote: >> > I didn't find any other good K names, but I'm not sure anything would top >> > kraken anyway, so I didn't look too hard.

Re: [ceph-users] radosgw anonymous write

2016-02-09 Thread Yehuda Sadeh-Weinraub
On Tue, Feb 9, 2016 at 5:15 AM, Jacek Jarosiewicz wrote: > Hi list, > > My setup is: ceph 0.94.5, ubuntu 14.04, tengine (patched nginx). > > I'm trying to migrate from our old file storage (MogileFS) to the new ceph > radosgw. The problem is that the old storage had no

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Götz Reinicke - IT Koordinator
On 08.02.16 at 20:09, Robert LeBlanc wrote: > Too bad K isn't an LTS. It would be fun to release the Kraken many times. +1 :) https://www.youtube.com/watch?v=_lN2auTVavw cheers . Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 82420 E-Mail goetz.reini...@filmakademie.de

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-09 Thread Jason Dillaman
What release of Infernalis are you running? When you encounter this error, is the partition table zeroed out or does it appear to be random corruption? -- Jason Dillaman - Original Message - > From: "Udo Waechter" > To: "ceph-users" >

[ceph-users] Fwd: Increasing time to save RGW objects

2016-02-09 Thread Jaroslaw Owsiewski
FYI -- Jarek -- Forwarded message -- From: Jaroslaw Owsiewski Date: 2016-02-09 12:00 GMT+01:00 Subject: Re: [ceph-users] Increasing time to save RGW objects To: Wade Holler Hi, For example: # ceph

Re: [ceph-users] K is for Kraken

2016-02-09 Thread Ferhat Ozkasgarli
Release the Kraken! (Please...) On Feb 9, 2016 1:05 PM, "Dan van der Ster" wrote: > On Mon, Feb 8, 2016 at 8:10 PM, Sage Weil wrote: > > On Mon, 8 Feb 2016, Karol Mroz wrote: > >> On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote: > >> > I didn't

[ceph-users] radosgw anonymous write

2016-02-09 Thread Jacek Jarosiewicz
Hi list, My setup is: ceph 0.94.5, ubuntu 14.04, tengine (patched nginx). I'm trying to migrate from our old file storage (MogileFS) to the new ceph radosgw. The problem is that the old storage had no access control - no authorization, so the access to read and/or write was controlled by the
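
One way to approximate the old wide-open behaviour on a single bucket is to grant the anonymous (AllUsers) group access through a canned ACL; a sketch with the boto library, where the endpoint, credentials and bucket name are placeholders:

    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    # 'public-read-write' grants READ and WRITE to the AllUsers group, so
    # unauthenticated clients can both fetch and upload objects in this bucket
    bucket = conn.get_bucket('legacy-storage')
    bucket.set_acl('public-read-write')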

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-09 Thread Vickey Singh
Guys Thanks a lot for your response. We are running OpenStack Juno + Ceph 0.94.5 @Jason Dillaman Can you please explain what you mean by "Glance is configured to cache your RBD image"? This might give me some clue. Many Thanks. On Mon, Feb 8, 2016 at 10:33 PM, Jason Dillaman

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-09 Thread Jason Dillaman
If your glance configuration includes the following, RBD images will be cached to disk on the API server: [paste_deploy] flavor = keystone+cachemanagement See [1] for the configuration steps for Glance. [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance -- Jason
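
For context, the relevant glance-api.conf fragments from the linked documentation look roughly like this; the pool and user names are the documentation's examples:

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    [paste_deploy]
    # the "+cachemanagement" part is what enables the on-disk image cache
    # on the Glance API node
    flavor = keystone+cachemanagement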

Re: [ceph-users] radosgw anonymous write

2016-02-09 Thread Jacek Jarosiewicz
On 02/09/2016 04:07 PM, Yehuda Sadeh-Weinraub wrote: On Tue, Feb 9, 2016 at 5:15 AM, Jacek Jarosiewicz wrote: Hi list, My setup is: ceph 0.94.5, ubuntu 14.04, tengine (patched nginx). I'm trying to migrate from our old file storage (MogileFS) to the new ceph

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Kris Jurka
On 2/8/2016 9:16 AM, Gregory Farnum wrote: On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: I've been testing the performance of ceph by storing objects through RGW. This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW instances. Initially the storage

[ceph-users] Dell Ceph Hardware recommendations

2016-02-09 Thread Michael
Hello, I'm looking at purchasing Qty 3-4, Dell PowerEdge T630 or R730xd for my OSD nodes in a Ceph cluster. Hardware: Qty x 1, E5-2630v3 2.4Ghz 8C/16T 128 GB DDR4 Ram QLogic 57810 DP 10Gb DA/SFP+ Converged Network Adapter I'm trying to determine which RAID controller to use, since I've read

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
Hi, On 09/02/2016 17:07, Kris Jurka wrote: > > > On 2/8/2016 9:16 AM, Gregory Farnum wrote: >> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: >>> >>> I've been testing the performance of ceph by storing objects through >>> RGW. >>> This is on Debian with Hammer using 40

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 19:11, Lionel Bouton wrote: > Actually if I understand correctly how PG splitting works the next spike > should be <n> times smaller and spread over <n> times the period (where > <n> is the number of subdirectories created during each split which > seems to be 15 typo: 16 > according to

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Gregory Farnum
On Tue, Feb 9, 2016 at 8:07 AM, Kris Jurka wrote: > > > On 2/8/2016 9:16 AM, Gregory Farnum wrote: >> >> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: >>> >>> >>> I've been testing the performance of ceph by storing objects through RGW. >>> This is on

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-09 Thread Josef Johansson
The biggest question here is if the OS is using systemctl or not. Cl7 boots extremely quickly but our cl6 instances take up to 90 seconds if the cluster has work to do. I know there's a lot to do in the init as well, with boot profiling etc. that could help. /Josef On Tue, 9 Feb 2016 17:11 Vickey

[ceph-users] Max Replica Size

2016-02-09 Thread Swapnil Jain
Hi, What is the maximum replica size we can have for a poll with Infernalis — Swapnil Jain | swap...@linux.com

Re: [ceph-users] Dell Ceph Hardware recommendations

2016-02-09 Thread Alexandre DERUMIER
Hi, I'm using dell r630 (8 disks, no expander backplane) with PERC H330 RAID Controller (supports for non-RAID passthrough) Full ssd nodes, 2x raid1 (s3700 100GB) for os + mon, passthrough for 6x osd (intel s3610 1,6TB) 64GB ram, 2x intel E5-2687W v3 @ 3.10GHz (10C/20T) H330 controller

[ceph-users] Can't fix down+incomplete PG

2016-02-09 Thread Scott Laird
I lost a few OSDs recently. Now my cluster is unhealthy and I can't figure out how to get it healthy again. OSDs 3, 7, 10, and 40 died in a power outage. Now I have 10 PGs that are down+incomplete, but all of them seem like they should have surviving replicas of all data. I'm running 9.2.0. $
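
A sketch of the usual first diagnostic steps for PGs stuck in this state; the PG id below is a placeholder:

    # list unhealthy PGs together with the OSDs they map to
    ceph health detail
    # show the peering state for one problem PG, including which down OSDs
    # it still wants to probe before it can go active
    ceph pg 2.3f query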

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Swapnil Jain
Sorry for the typo, it's pool ;) — Swapnil Jain | swap...@linux.com > On 10-Feb-2016, at 11:13 AM, Shinobu Kinjo wrote: > > What is poll? > > Rgds, > Shinobu > > - Original Message - > From: "Swapnil Jain" > To:

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Shinobu Kinjo
What is poll? Rgds, Shinobu - Original Message - From: "Swapnil Jain" To: ceph-users@lists.ceph.com Sent: Wednesday, February 10, 2016 2:20:08 PM Subject: [ceph-users] Max Replica Size Hi, What is the maximum replica size we can have for a poll with Infernalis

Re: [ceph-users] Dell Ceph Hardware recommendations

2016-02-09 Thread Matt Taylor
We are using Dell R730XD's with 2 x internal SAS in RAID 1 for the OS and 24 x 400GB SSD. A PERC H730P Mini is being used with non-RAID passthrough for the SSDs. CPU and RAM specs aren't really critical as you can do whatever you want; however, I would recommend a minimum of 2 x quad-cores and at

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Lindsay Mathieson
On 10/02/16 15:43, Shinobu Kinjo wrote: What is poll? One suspects "Pool" -- Lindsay Mathieson

Re: [ceph-users] Can't fix down+incomplete PG

2016-02-09 Thread Arvydas Opulskis
Hi, What is min_size for this pool? Maybe you need to decrease it for the cluster to start recovering. Arvydas From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Scott Laird Sent: Wednesday, February 10, 2016 7:22 AM To: 'ceph-users@lists.ceph.com'
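
A sketch of the check and the temporary change being suggested; the pool name is a placeholder, and min_size should normally be raised back once recovery completes:

    ceph osd pool get rbd min_size
    # allow PGs to go active with a single surviving replica
    ceph osd pool set rbd min_size 1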

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Loris Cuoghi
Pool ;) On 10/02/2016 06:43, Shinobu Kinjo wrote: What is poll? Rgds, Shinobu - Original Message - From: "Swapnil Jain" To: ceph-users@lists.ceph.com Sent: Wednesday, February 10, 2016 2:20:08 PM Subject: [ceph-users] Max Replica Size Hi, What is the maximum

Re: [ceph-users] Max Replica Size

2016-02-09 Thread Wido den Hollander
> On 10 February 2016 at 6:20, Swapnil Jain wrote: > > > Hi, > > > What is the maximum replica size we can have for a poll with Infernalis > Depends on your CRUSH map, but if you have sufficient places for CRUSH, you can go up to 10 replicas with the default min and
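
For reference, the replica count is a per-pool setting; a sketch with a placeholder pool name (min_size usually needs adjusting to match):

    ceph osd pool set mypool size 10
    ceph osd pool set mypool min_size 5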