Re: [ceph-users] Write throughput drops to zero

2015-11-05 Thread Max A. Krasilnikov
Hello! On Fri, Oct 30, 2015 at 09:30:40PM +, moloney wrote: > Hi, > I recently got my first Ceph cluster up and running and have been doing some > stress tests. I quickly found that during sequential write benchmarks the > throughput would often drop to zero. Initially I saw this
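
For reference, a sequential write benchmark of the kind described is usually run with rados bench; a minimal sketch, with the pool name and runtime assumed:

    # 60 s of 4 MiB sequential writes with 16 concurrent ops; watch the per-second column for stalls
    rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup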

[ceph-users] Glance with Ceph Backend

2015-11-05 Thread Le Quang Long
Hi, I have a problem integrating Ceph 0.94.5 with OpenStack Kilo. I can upload an image to Glance successfully, but I can't delete it; its status always stays "Deleting". This is my glance-api.conf http://pastebin.com/index/TpZ4xps1 Thanks and regards
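
For comparison, a minimal sketch of the usual Kilo-era rbd store section in glance-api.conf (pool and user names assumed, written here as a heredoc purely for illustration):

    cat <<'EOF' >> /etc/glance/glance-api.conf
    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
    EOF

An image stuck in "Deleting" often traces back to the Ceph side refusing the delete, e.g. insufficient cephx caps for the rbd_store_user on the images pool, or a protected snapshot still referenced by a clone.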

Re: [ceph-users] Creating RGW Zone System Users Fails with "couldn't init storage provider"

2015-11-05 Thread Daniel Schneller
Bump... :) On 2015-11-02 15:52:44 +, Daniel Schneller said: Hi! I am trying to set up a Rados Gateway, prepared for multiple regions and zones, according to the documentation on http://docs.ceph.com/docs/hammer/radosgw/federated-config/. Ceph version is 0.94.3 (Hammer). I am stuck at the

Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-05 Thread Jason Dillaman
I am not sure of its status -- it looks like it was part of 3.6 planning but it was recently moved to 4.0 on the wiki. There is a video walkthrough of the running integration from this past August [1]. You would need to deploy just Cinder and Keystone -- no need for all the other bits.

Re: [ceph-users] Creating RGW Zone System Users Fails with "couldn't init storage provider"

2015-11-05 Thread Wido den Hollander
On 11/05/2015 01:13 PM, Daniel Schneller wrote: > Bump... :) > > On 2015-11-02 15:52:44 +, Daniel Schneller said: > >> Hi! >> >> >> I am trying to set up a Rados Gateway, prepared for multiple regions >> and zones, according to the documentation on >>

Re: [ceph-users] rbd hang

2015-11-05 Thread Joe Ryner
It's weird that it has even been working. Thanks again for your help! - Original Message - From: "Jason Dillaman" To: "Joe Ryner" Cc: ceph-us...@ceph.com Sent: Thursday, November 5, 2015 4:29:49 PM Subject: Re: [ceph-users] rbd hang On the bright

Re: [ceph-users] pgs per OSD

2015-11-05 Thread Oleksandr Natalenko
(128*2+256*2+256*14+256*5)/15 =~ 375. On Thursday, November 05, 2015 10:21:00 PM Deneau, Tom wrote: > I have the following 4 pools: > > pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash > rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool > stripe_width 0
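
Spelled out, that is the usual estimate of sum(pg_num * size) over all pools divided by the number of OSDs (15 assumed here, matching the figures quoted); a sketch of the same arithmetic against a live cluster:

    # where the ~375 comes from (integer division)
    echo $(( (128*2 + 256*2 + 256*14 + 256*5) / 15 ))
    # same estimate read straight from the cluster; replace 15 with your OSD count
    ceph osd dump | awk '/^pool/ {for (i=1;i<=NF;i++) {if ($i=="size") s=$(i+1); if ($i=="pg_num") p=$(i+1)}; t+=s*p} END {print t/15}'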

Re: [ceph-users] rbd hang

2015-11-05 Thread Joe Ryner
Hi, Do you have any ideas as to what might be wrong? Since my last email I decided to recreate the cluster. I am currently testing upgrading from 0.72 to 0.80.10, with hopes of ending up on Hammer. So I completely erased the cluster and reloaded the machines with CentOS 6.5 (to match my

Re: [ceph-users] rbd hang

2015-11-05 Thread Joe Ryner
It worked. So what's broken with caching? - Original Message - From: "Jason Dillaman" To: "Joe Ryner" Cc: ceph-us...@ceph.com Sent: Thursday, November 5, 2015 3:18:39 PM Subject: Re: [ceph-users] rbd hang Can you retry with 'rbd --rbd-cache=false

[ceph-users] pgs per OSD

2015-11-05 Thread Deneau, Tom
I have the following 4 pools: pool 1 'rep2host' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 88 flags hashpspool stripe_width 0 pool 17 'rep2osd' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
It appears you have set your cache size to 64 bytes(!): 2015-11-05 15:07:49.927510 7f0d9af5a760 20 librbd::ImageCtx: Initial cache settings: size=64 num_objects=10 max_dirty=32 target_dirty=16 max_dirty_age=5 This exposed a known issue [1] when you attempt to read more data in a single read
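
A minimal sketch of the intended settings, assuming the goal is a 64 MiB cache: the rbd cache options take byte values, so in ceph.conf they would look like this (dirty limits here chosen only for illustration):

    cat <<'EOF' >> /etc/ceph/ceph.conf
    [client]
    rbd cache = true
    # 64 MiB, expressed in bytes -- not "64"
    rbd cache size = 67108864
    # dirty thresholds must stay below the cache size
    rbd cache max dirty = 50331648
    rbd cache target dirty = 33554432
    EOF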

[ceph-users] adding ceph mon with ceph-deploy ends in ceph-create-keys: ceph-mon is not in quorum: u'probing' / monmap with 0.0.0.0:0 addresses

2015-11-05 Thread Oliver Dzombic
Hi, ceph-deploy mon create ceph5 builds the monitor (5th new monitor, 4 already existing) ceph5# python /usr/sbin/ceph-create-keys --cluster ceph -i ceph5 hangs with: INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing' INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
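
A hedged troubleshooting sketch (hostname and socket path assumed): ask the stuck monitor what it actually sees, since 0.0.0.0:0 entries in its monmap usually mean it never learned its peers' addresses.

    ceph --admin-daemon /var/run/ceph/ceph-mon.ceph5.asok mon_status
    # before deploying, every monitor should be listed with a routable address in ceph.conf, e.g.:
    #   [global]
    #   mon_initial_members = ceph1, ceph2, ceph3, ceph4, ceph5
    #   mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4, 10.0.0.5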

Re: [ceph-users] rbd hang

2015-11-05 Thread Joe Ryner
Thanks for the heads up. I have had it set this way for a long time in all of my deployments. I assumed that the units were in MB. Argh.. I will test new settings. Joe - Original Message - From: "Jason Dillaman" To: "Joe Ryner" Cc:

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'? -- Jason Dillaman - Original Message - > From: "Joe Ryner" > To: "Jason Dillaman" > Cc: ceph-us...@ceph.com > Sent: Thursday, November 5, 2015 4:14:28 PM > Subject:

Re: [ceph-users] Ceph OSDs with bcache experience

2015-11-05 Thread Michal Kozanecki
Why did you guys go with partitioning the SSD for the Ceph journals, instead of just using the whole SSD for bcache and leaving the journal on the filesystem (which itself is on top of bcache)? Was there really a benefit to separating the journals from the bcache-fronted HDDs? I ask because it has
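
For clarity, a minimal sketch of the two layouts being compared (device names assumed):

    # (a) SSD partitioned: /dev/sdb1 kept as a raw Ceph journal, /dev/sdb2 as the bcache cache
    make-bcache -C /dev/sdb2    # SSD partition registered as the cache device
    make-bcache -B /dev/sdc     # HDD backing device that will hold the OSD filesystem
    # (b) whole-SSD cache: make-bcache -C /dev/sdb, with the journal kept as a plain file
    #     on the bcache-backed filesystem instead of a dedicated partition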

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
On the bright side, at least your week of export-related pain should result in a decent speed boost when your clients get 64MB of cache instead of 64B. -- Jason Dillaman - Original Message - > From: "Joe Ryner" > To: "Jason Dillaman" > Cc:

[ceph-users] Suggestion: Create a DOI for ceph projects in github

2015-11-05 Thread Goncalo Borges
Dear Ceph supporters and developers. This is just a suggestion to improve Ceph's visibility. I have been looking into how to properly cite the Ceph project in proposals and scientific literature. I have just found that GitHub provides a way to generate a DOI for projects. Just check:

Re: [ceph-users] Federated gateways

2015-11-05 Thread WD_Hwang
Hi Craig, I am testing the federated gateway with 1 region and 2 zones. I found that only metadata is replicated; the data is NOT. According to your checklist, I am sure everything is checked. Could you review my configuration scripts? The configuration files are similar to
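
As a hedged check (names and paths assumed): in a hammer-style federated setup, data replication requires log_data to be enabled in the region map and the sync agent to run a full sync rather than metadata-only.

    radosgw-admin region get --name client.radosgw.us-east-1 | grep -E '"log_(meta|data)"'
    # both should report "true"; then run the agent without --metadata-only
    radosgw-agent -c /etc/ceph/radosgw-agent/default.conf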

Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-05 Thread Lars Marowsky-Bree
On 2015-11-04T14:30:56, Hugo Slabbert wrote: > Sure. My post was not intended to say that iSCSI over RBD is *slow*, just > that it scales differently than native RBD client access. > > If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs > can

Re: [ceph-users] Creating RGW Zone System Users Fails with "couldn't init storage provider"

2015-11-05 Thread Daniel Schneller
On 2015-11-05 12:16:35 +, Wido den Hollander said: This is usually when keys aren't set up properly. Are you sure that the cephx keys you are using are correct and that you can connect to the Ceph cluster? Wido Yes, I could execute all kinds of commands; however, it turns out I might
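
A quick sanity check along those lines, with the user and keyring names assumed: confirm the gateway's cephx identity can reach the cluster at all before running any radosgw-admin zone or user commands.

    ceph --name client.radosgw.us-east-1 --keyring /etc/ceph/ceph.client.radosgw.keyring -s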