Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-28 Thread Mark Kirkwood
On 29/08/14 04:11, Sebastien Han wrote: Hey all, See my fio template: [global] #logging #write_iops_log=write_iops_log #write_bw_log=write_bw_log #write_lat_log=write_lat_lo time_based runtime=60 ioengine=rbd clientname=admin pool=test rbdname=fio invalidate=0 # mandatory #rw=randwrite
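
A cleaned-up job file along the lines of that template (pool, image and client names are the ones from the thread; the bs and iodepth values are just illustrative) would look roughly like:

    [global]
    ioengine=rbd
    clientname=admin
    pool=test
    rbdname=fio
    invalidate=0        # mandatory for the rbd engine
    direct=1
    time_based
    runtime=60
    bs=4k

    [rand-write]
    rw=randwrite
    iodepth=32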

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-28 Thread Mark Kirkwood
On 29/08/14 14:06, Mark Kirkwood wrote: ... mounting (xfs) with nobarrier seems to get much better results. The run below is for a single osd on an xfs partition from an Intel 520. I'm using another 520 as a journal: ...and adding filestore_queue_max_ops = 2 improved IOPS a bit more

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-30 Thread Mark Kirkwood
On 29/08/14 22:17, Sebastien Han wrote: @Mark thanks trying this :) Unfortunately using nobarrier and another dedicated SSD for the journal (plus your ceph setting) didn’t bring much, now I can reach 3,5K IOPS. By any chance, would it be possible for you to test with a single OSD SSD?

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-31 Thread Mark Kirkwood
On 31/08/14 17:55, Mark Kirkwood wrote: On 29/08/14 22:17, Sebastien Han wrote: @Mark thanks trying this :) Unfortunately using nobarrier and another dedicated SSD for the journal (plus your ceph setting) didn’t bring much, now I can reach 3,5K IOPS. By any chance, would it be possible

Re: [ceph-users] About IOPS num

2014-08-31 Thread Mark Kirkwood
Yes, as Jason suggests - 27 IOPS doing 4k blocks is: 27*4/1024 MB/s = 0.1 MB/s While the RBD volume is composed of 4MB objects - many of the (presumably) random IOs of 4k blocks can reside in the same 4MB object, so it is tricky to estimate how many 4MB objects are needing to be rewritten

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-31 Thread Mark Kirkwood
On 01/09/14 12:36, Mark Kirkwood wrote: Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so it is reasonably believable). So anyway we are not getting anywhere near the max IOPS from our devices. We use the Intel S3700

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-08-31 Thread Mark Kirkwood
On 01/09/14 17:10, Alexandre DERUMIER wrote: Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS (running fio on the filesystem I've seen 70K IOPS, so it is reasonably believable). So anyway we are not getting anywhere near the max IOPS from our devices. Hi, Just check this:

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Mark Kirkwood
On 02/09/14 19:38, Alexandre DERUMIER wrote: Hi Sebastien, I got 6340 IOPS on a single OSD SSD. (journal and data on the same partition). Shouldn't it be better to have 2 partitions, 1 for the journal and 1 for the data? (I'm thinking about filesystem write syncs) Oddly enough, it does not seem

Re: [ceph-users] SSD journal deployment experiences

2014-09-04 Thread Mark Kirkwood
On 05/09/14 10:05, Dan van der Ster wrote: That's good to know. I would plan similarly for the wear out. But I want to also prepare for catastrophic failures -- in the past we've had SSDs just disappear like a device unplug. Those were older OCZ's though... Yes - the Intel dc style drives

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-16 Thread Mark Kirkwood
On 17/09/14 08:39, Alexandre DERUMIER wrote: Hi, I’m just surprised that you’re only getting 5299 with 0.85 since I’ve been able to get 6,4K, well I was using the 200GB model Your model is DC S3700, mine is DC S3500 with lower writes, so that could explain the difference. Interesting -

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-19 Thread Mark Kirkwood
On 19/09/14 15:11, Aegeaner wrote: I noticed ceph added a key/value store OSD backend feature in firefly, but I can hardly get any documentation about how to use it. At last I found that I can add a line in ceph.conf: osd objectstore = keyvaluestore-dev but it failed when ceph-deploy was creating

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-19 Thread Mark Kirkwood
On 19/09/14 18:02, Mark Kirkwood wrote: On 19/09/14 15:11, Aegeaner wrote: I noticed ceph added a key/value store OSD backend feature in firefly, but I can hardly get any documentation about how to use it. At last I found that I can add a line in ceph.conf: osd objectstore = keyvaluestore-dev

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-23 Thread Mark Kirkwood
On 23/09/14 18:22, Aegeaner wrote: Now I use the following script to create a key/value backend OSD, but the OSD is created down and never goes up. ceph osd create umount /var/lib/ceph/osd/ceph-0 rm -rf /var/lib/ceph/osd/ceph-0 mkdir /var/lib/ceph/osd/ceph-0 ceph osd crush add
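
The manual sequence being attempted maps roughly onto the standard add-an-OSD steps. A sketch, assuming osd.0 on a host named ceph1 and 'osd objectstore = keyvaluestore-dev' already set in ceph.conf:

    # allocate an osd id and prepare the data directory
    ceph osd create
    mkdir -p /var/lib/ceph/osd/ceph-0
    ceph-osd -i 0 --mkfs --mkkey
    # register the osd key and place the osd in the crush map
    ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-0/keyring
    ceph osd crush add osd.0 1.0 host=ceph1
    # start the daemon
    service ceph start osd.0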

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-23 Thread Mark Kirkwood
On 23/09/14 18:22, Aegeaner wrote: Now I use the following script to create a key/value backend OSD, but the OSD is created down and never goes up. ceph osd create umount /var/lib/ceph/osd/ceph-0 rm -rf /var/lib/ceph/osd/ceph-0 mkdir /var/lib/ceph/osd/ceph-0 ceph osd crush add

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-23 Thread Mark Kirkwood
On 24/09/14 14:07, Aegeaner wrote: I turned on the debug option, and this is what I got: # ./kv.sh removed osd.0 removed item id 0 name 'osd.0' from crush map 0 umount: /var/lib/ceph/osd/ceph-0: not found updated add item id 0 name 'osd.0' weight 1 at location

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-23 Thread Mark Kirkwood
On 24/09/14 14:29, Aegeaner wrote: I run ceph on Red Hat Enterprise Linux Server 6.4 Santiago, and when I run service ceph start i got: # service ceph start ERROR:ceph-disk:Failed to activate ceph-disk: Does not look like a Ceph OSD, or incompatible version:

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-23 Thread Mark Kirkwood
On 24/09/14 16:21, Aegeaner wrote: I have got my ceph OSDs running with keyvalue store now! Thanks Mark! I have been confused for a whole week. Pleased to hear it! Now you can actually start playing with the key/value store backend. There are quite a few parameters, not fully documented yet -

Re: [ceph-users] Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

2014-09-24 Thread Mark Kirkwood
On 25/09/14 01:03, Sage Weil wrote: On Wed, 24 Sep 2014, Mark Kirkwood wrote: On 24/09/14 14:29, Aegeaner wrote: I run ceph on Red Hat Enterprise Linux Server 6.4 Santiago, and when I run service ceph start i got: # service ceph start ERROR:ceph-disk:Failed to activate ceph-disk

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-07 Thread Mark Kirkwood
On 08/10/14 11:02, lakshmi k s wrote: I am trying to integrate OpenStack Keystone with Ceph Object Store using the link - http://ceph.com/docs/master/radosgw/keystone. Swift V1.0 (without keystone) works quite fine. But for some reason, Swift v2.0

[ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-07 Thread Mark Kirkwood
I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for authentication: $ cat ceph.conf ... [client.radosgw.gateway] host = ceph4 keyring = /etc/ceph/ceph.rados.gateway.keyring rgw_socket_path = /var/run/ceph/$name.sock log_file = /var/log/ceph/radosgw.log rgw_data =
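
The section continues with the keystone-related options, which (as spelled out later in this thread) take roughly this shape; the URL, token and paths below are placeholders:

    [client.radosgw.gateway]
    host = ceph4
    keyring = /etc/ceph/ceph.rados.gateway.keyring
    rgw_socket_path = /var/run/ceph/$name.sock
    log_file = /var/log/ceph/radosgw.log
    rgw keystone url = http://<keystone-host>:35357
    rgw keystone admin token = <admin token>
    rgw keystone accepted roles = admin Member _member_
    rgw keystone token cache size = 500
    rgw keystone revocation interval = 500
    rgw s3 auth use keystone = true
    nss db path = /var/ceph/nss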

Re: [ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-08 Thread Mark Kirkwood
On 08/10/14 18:46, Mark Kirkwood wrote: I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for authentication: $ cat ceph.conf ... [client.radosgw.gateway] host = ceph4 keyring = /etc/ceph/ceph.rados.gateway.keyring rgw_socket_path = /var/run/ceph/$name.sock log_file = /var

Re: [ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-08 Thread Mark Kirkwood
Yes. I ran into that as well - I used WSGIChunkedRequest On in the virtualhost config for the *keystone* server [1] as indicated in issue 7796. Cheers Mark [1] i.e., not the rgw. On 08/10/14 22:58, Ashish Chandra wrote: Hi Mark, Good you got the solution. But since you have already done
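
The change is a single directive inside the Keystone VirtualHost (admin port shown, as discussed later in the thread); something like:

    <VirtualHost *:35357>
        WSGIChunkedRequest On
        # ... rest of the keystone wsgi vhost unchanged ...
    </VirtualHost>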

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-08 Thread Mark Kirkwood
Ok, so that is the thing to get sorted. I'd suggest posting the error(s) you are getting perhaps here (someone else might know), but definitely to one of the Debian specific lists. In the meantime perhaps try installing the packages with aptitude rather than apt-get - if there is some fancy

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-08 Thread Mark Kirkwood
or certutil tool on debian/ubuntu? If so, how did you go about this problem. On Wednesday, October 8, 2014 7:01 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Ok, so that is the thing to get sorted. I'd suggest posting the error(s) you are getting perhaps here (someone else might know

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-09 Thread Mark Kirkwood
:~$ openssl x509 -in /home/gateway/ca.pem -pubkey | certutil -d /var/lib/ceph/nss -A -n ca -t TCu,Cu,Tuw certutil: function failed: SEC_ERROR_LEGACY_DATABASE: The certificate/key database is in an old, unsupported format. On Wednesday, October 8, 2014 7:55 PM, Mark Kirkwood mark.kirkw

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-09 Thread Mark Kirkwood
/client.radosgw.gateway.log rgw dns name = gateway On Thursday, October 9, 2014 1:15 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: I ran into this - needed to actually be root via sudo -i or similar, *then* it worked. The unhelpful error message is, I think, referring to no initialized db. On 09/10/14 16

Re: [ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-09 Thread Mark Kirkwood
setup in my environment. If you can and would like to test it so that we could get it merged it would be great. Thanks, Yehuda On Wed, Oct 8, 2014 at 6:18 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Yes. I ran into that as well - I used WSGIChunkedRequest On in the virtualhost

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-09 Thread Mark Kirkwood
SSLCertificateKeyFile /etc/apache2/ssl/apache.key SetEnv SERVER_PORT_SECURE 443 On Thursday, October 9, 2014 2:48 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Almost - the converted certs need to be saved on your *rgw* host in nss_db_path (default is /var/ceph/nss but wherever you have
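
The cert conversion itself follows the radosgw/keystone doc this thread is working from; roughly (the keystone cert paths will vary by install):

    mkdir -p /var/ceph/nss
    openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
        certutil -d /var/ceph/nss -A -n ca -t "TCu,Cu,Tuw"
    openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
        certutil -A -d /var/ceph/nss -n signing_cert -t "P,P,P"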

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-09 Thread Mark Kirkwood
-- On Thursday, October 9, 2014 3:51 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: No, I don't have any explicit ssl enabled in the rgw site. Now you might be running into http://tracker.ceph.com/issues/7796. So check if you have

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-09 Thread Mark Kirkwood
On Thursday, October 9, 2014 4:45 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Hmm - It looks to me like you added the chunked request into Horizon instead of Keystone. You want virtual host *:35357 On 10/10/14 12:32, lakshmi k s wrote: Have done this too, but in vain. I made changes

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread Mark Kirkwood
Given your setup appears to be non standard, it might be useful to see the output of the 2 commands below: $ keystone service-list $ keystone endpoint-list So we can avoid advising you incorrectly. Regards Mark On 10/10/14 18:46, Mark Kirkwood wrote: Also just to double check - 192.0.8.2

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread Mark Kirkwood
PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Oh, I see. That complicates it a wee bit (looks back at your messages). I see you have: rgw_keystone_url = http://192.0.8.2:5000 So you'll need

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread Mark Kirkwood
that as well, but in vain. In fact, that is how I created the endpoint to begin with. Since that didn't work, I followed the Openstack standard which was to include %tenant-id. -Lakshmi. On Friday, October 10, 2014 6:49 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Hi, I think your swift
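
For radosgw the object-store endpoint normally points straight at /swift/v1, with no tenant id in the URL; e.g. (host and region names are illustrative, the gateway name follows the examples later in the thread):

    keystone service-create --name swift --type object-store
    keystone endpoint-create --service-id <object-store service id> \
        --region RegionOne \
        --publicurl   http://gateway.ex.com/swift/v1 \
        --internalurl http://gateway.ex.com/swift/v1 \
        --adminurl    http://gateway.ex.com/swift/v1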

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-11 Thread Mark Kirkwood
1 == req done req=0x7f13e40256a0 http_status=401 == 2014-10-11 19:38:28.516647 7f13c67ec700 20 process_request() returned -1 On Friday, October 10, 2014 10:15 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Right, well I suggest changing it back, and adding debug rgw = 20

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-12 Thread Mark Kirkwood
Ah, yes. So your gateway is called something other than: [client.radosgw.gateway] So take a look at what $ ceph auth list says (run from your rgw), it should pick up the correct name. Then correct your ceph.conf, restart and see what the rgw log looks like as you edge ever closer to
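
In other words, if ceph auth list shows, say, an entry client.radosgw.gw1 (a purely illustrative name), then ceph.conf and the daemon need to use that same name:

    [client.radosgw.gw1]
    host = gateway
    keyring = /etc/ceph/ceph.rados.gateway.keyring
    ...

    # and start it with the matching name
    radosgw -n client.radosgw.gw1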

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-13 Thread Mark Kirkwood
: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ== caps: [mon] allow rwx caps: [osd] allow rwx On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Ah, yes. So your gateway is called something other than: [client.radosgw.gateway] So take a look at what

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-13 Thread Mark Kirkwood
1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list my-Test I am at total loss now. On Monday, October 13, 2014 3:25 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Well that certainly looks ok. So entries

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-14 Thread Mark Kirkwood
Right, so you have 3 osds, one of which is a mon. Your rgw is on another host (called gateway it seems). I'm wondering if this is the issue. In my case I'm using one of my osds as a rgw as well. This *should* not matter... but it might be worth trying out a rgw on one of your osds instead.

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-15 Thread Mark Kirkwood
rgw = 20 rgw keystone url = http://stack1:35357 rgw keystone admin token = tokentoken rgw keystone accepted roles = admin Member _member_ rgw keystone token cache size = 500 rgw keystone revocation interval = 500 rgw s3 auth use keystone = true nss db path = /var/ceph/nss/ On 15/10/14 10:25, Mark

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread Mark Kirkwood
On 16/10/14 09:08, lakshmi k s wrote: I am trying to integrate Openstack keystone with radosgw. I have followed the instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/. But for some reason, keystone flags under [client.radosgw.gateway] section are not being honored. That

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread Mark Kirkwood
On 16/10/14 10:37, Mark Kirkwood wrote: On 16/10/14 09:08, lakshmi k s wrote: I am trying to integrate Openstack keystone with radosgw. I have followed the instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/. But for some reason, keystone flags under

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-16 Thread Mark Kirkwood
Hi, While I certainly can (attached) - if your install has keystone running it *must* have one. It will be hiding somewhere! Cheers Mark On 17/10/14 05:12, lakshmi k s wrote: Hello Mark - Can you please paste your keystone.conf? Also It seems that Icehouse install that I have does not

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-17 Thread Mark Kirkwood
, October 16, 2014 3:17 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Hi, While I certainly can (attached) - if your install has keystone running it *must* have one. It will be hiding somewhere! Cheers Mark On 17/10/14 05:12, lakshmi k s wrote: Hello Mark - Can you please paste

[ceph-users] Fio rbd stalls during 4M reads

2014-10-23 Thread Mark Kirkwood
I'm doing some fio tests on Giant using the fio rbd driver to measure performance on a new ceph cluster. However with block sizes of 1M and above (initially noticed with 4M) I am seeing absolutely no IOPS for *reads* - and the fio process becomes non-interruptible (needs kill -9): $ ceph -v ceph version

Re: [ceph-users] Fio rbd stalls during 4M reads

2014-10-23 Thread Mark Kirkwood
On 24/10/14 13:09, Mark Kirkwood wrote: I'm doing some fio tests on Giant using the fio rbd driver to measure performance on a new ceph cluster. However with block sizes of 1M and above (initially noticed with 4M) I am seeing absolutely no IOPS for *reads* - and the fio process becomes non-interruptible

Re: [ceph-users] Fio rbd stalls during 4M reads

2014-10-24 Thread Mark Kirkwood
branch temporarily that makes rbd reads greater than the cache size hang (if the cache was on). This might be that. (Jason is working on it: http://tracker.ceph.com/issues/9854) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Oct 23, 2014 at 5:09 PM, Mark Kirkwood mark.kirkw
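
Pending that fix, an easy way to confirm the cache is the culprit is to switch it off for the test client, e.g. in ceph.conf:

    [client]
    rbd cache = false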

Re: [ceph-users] Use 2 osds to create cluster but health check display active+degraded

2014-10-29 Thread Mark Kirkwood
It looks to me like this has been considered (mapping default pool size to 2). However just to check - this *does* mean that you need two (real or virtual) hosts - if the two osds are on the same host then crush map adjustment (hosts -> osds) will be required. Regards Mark On 29/10/14

Re: [ceph-users] Use 2 osds to create cluster but health check display active+degraded

2014-10-29 Thread Mark Kirkwood
That is not my experience: $ ceph -v ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646) $ cat /etc/ceph/ceph.conf [global] ... osd pool default size = 2 $ ceph osd dump|grep size pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128

Re: [ceph-users] Use 2 osds to create cluster but health check display active+degraded

2014-10-29 Thread Mark Kirkwood
Righty, both osd are on the same host, so you will need to amend the default crush rule. It will look something like: rule replicated_ruleset { ruleset 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host
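
For a single-host setup the usual amendment is to choose leaves at osd rather than host level, so the full rule ends up something like:

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type osd
            step emit
    }

Equivalently, setting 'osd crush chooseleaf type = 0' in ceph.conf before creating the cluster gives the same behaviour.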

[ceph-users] Rbd cache severely inhibiting read performance (Giant)

2014-10-29 Thread Mark Kirkwood
I am doing some testing on our new ceph cluster: - 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel) - 8 osd on each (i.e 24 in total) - 4 compute nodes (ceph clients) - 10G networking - ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82) I'm using one of the compute nodes to run some fio

Re: [ceph-users] Rbd cache severely inhibiting read performance (Giant)

2014-10-29 Thread Mark Kirkwood
On 30/10/14 11:16, Mark Kirkwood wrote: I am doing some testing on our new ceph cluster: - 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel) - 8 osd on each (i.e 24 in total) - 4 compute nodes (ceph clients) - 10G networking - ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82) I'm using

Re: [ceph-users] giant release osd down

2014-11-02 Thread Mark Kirkwood
On 03/11/14 14:56, Christian Balzer wrote: On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote: On Mon, 3 Nov 2014, Christian Balzer wrote: c) But wait, you specified a pool size of 2 in your OSD section! Tough luck, because since Firefly there is a bug that at the very least prevents OSD

Re: [ceph-users] giant release osd down

2014-11-03 Thread Mark Kirkwood
On 04/11/14 03:02, Sage Weil wrote: On Mon, 3 Nov 2014, Mark Kirkwood wrote: Ah, I missed that thread. Sounds like three separate bugs: - pool defaults not used for initial pools - osd_mkfs_type not respected by ceph-disk - osd_* settings not working The last one is a real shock; I would

Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood
On 04/11/14 22:02, Sage Weil wrote: On Tue, 4 Nov 2014, Blair Bethwaite wrote: On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote: In the Ceph session at the OpenStack summit someone asked what the CephFS survey results looked like. Thanks Sage, that was me! Here's the link:

Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood
On 05/11/14 10:58, Mark Nelson wrote: On 11/04/2014 03:11 PM, Mark Kirkwood wrote: Heh, not necessarily - I put multi mds in there, as we want the cephfs part to be similar to the rest of ceph in its availability. Maybe it's because we are looking at plugging it in with an Openstack setup

Re: [ceph-users] cephfs survey results

2014-11-04 Thread Mark Kirkwood
On 05/11/14 11:47, Sage Weil wrote: On Wed, 5 Nov 2014, Mark Kirkwood wrote: On 04/11/14 22:02, Sage Weil wrote: On Tue, 4 Nov 2014, Blair Bethwaite wrote: On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote: In the Ceph session at the OpenStack summit someone asked what the CephFS

[ceph-users] Radosgw agent only syncing metadata

2014-11-20 Thread Mark Kirkwood
Hi, I am following http://docs.ceph.com/docs/master/radosgw/federated-config/ with giant (0.88-340-g5bb65b3). I figured I'd do the simple case first: - 1 region - 2 zones (us-east, us-west) master us-east - 2 radosgw instances (client.radosgw.us-east-1, client.radosgw.us-west-1) - 1 ceph
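
The agent in this sort of setup is driven by a small yaml file naming the two zone endpoints and their system users' keys, per the federated-config doc; roughly (keys and hostnames are placeholders):

    src_zone: us-east
    source: http://us-east.example.com:80
    src_access_key: <us-east system user access key>
    src_secret_key: <us-east system user secret key>
    dest_zone: us-west
    destination: http://us-west.example.com:80
    dest_access_key: <us-west system user access key>
    dest_secret_key: <us-west system user secret key>
    log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

It is then run with something like 'radosgw-agent -c <that file>'.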

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-20 Thread Mark Kirkwood
On 21/11/14 14:49, Mark Kirkwood wrote: The only things that look odd in the destination zone logs are 383 requests getting 404 rather than 200: $ grep http_status=404 ceph-client.radosgw.us-west-1.log ... 2014-11-21 13:48:58.435201 7ffc4bf7f700 1 == req done req=0x7ffca002df00

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-20 Thread Mark Kirkwood
On 21/11/14 15:52, Mark Kirkwood wrote: On 21/11/14 14:49, Mark Kirkwood wrote: The only things that look odd in the destination zone logs are 383 requests getting 404 rather than 200: $ grep http_status=404 ceph-client.radosgw.us-west-1.log ... 2014-11-21 13:48:58.435201 7ffc4bf7f700 1

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-21 Thread Mark Kirkwood
On 21/11/14 16:05, Mark Kirkwood wrote: On 21/11/14 15:52, Mark Kirkwood wrote: On 21/11/14 14:49, Mark Kirkwood wrote: The only things that look odd in the destination zone logs are 383 requests getting 404 rather than 200: $ grep http_status=404 ceph-client.radosgw.us-west-1.log ... 2014

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-24 Thread Mark Kirkwood
On 22/11/14 10:54, Yehuda Sadeh wrote: On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Fri Nov 21 02:13:31 2014 x-amz-copy-source:bucketbig/_multipart_big.dat.2/fjid6CneDQYKisHf0pRFOT5cEWF_EQr.meta /bucketbig/__multipart_big.dat.2

Re: [ceph-users] Radosgw agent only syncing metadata

2014-11-24 Thread Mark Kirkwood
On 25/11/14 11:58, Yehuda Sadeh wrote: On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: On 22/11/14 10:54, Yehuda Sadeh wrote: On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Fri Nov 21 02:13:31 2014 x-amz-copy

Re: [ceph-users] evaluating Ceph

2014-11-25 Thread Mark Kirkwood
It looks to me like you need to supply it the *ids* of the pools, not their names. So do: $ ceph osd dump # (or lspools) note down the ids of the pools you want to use (suppose I have cephfs_data 10 and cephfs_metadata 12): $ ceph mds newfs 10 12 --yes-i-really-mean-it On 26/11/14 11:30,

Re: [ceph-users] Radosgw agent only syncing metadata

2014-12-01 Thread Mark Kirkwood
On 25/11/14 12:40, Mark Kirkwood wrote: On 25/11/14 11:58, Yehuda Sadeh wrote: On Mon, Nov 24, 2014 at 2:43 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: On 22/11/14 10:54, Yehuda Sadeh wrote: On Thu, Nov 20, 2014 at 6:52 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote

Re: [ceph-users] normalizing radosgw

2014-12-06 Thread Mark Kirkwood
On 07/12/14 07:39, Sage Weil wrote: Thoughts? Suggestions? Would it make sense to include the radosgw-agent package in this normalization too? Regards Mark

Re: [ceph-users] Unable to start radosgw

2014-12-10 Thread Mark Kirkwood
On 10/12/14 07:36, Vivek Varghese Cherian wrote: Hi, I am trying to integrate OpenStack Juno Keystone with the Ceph Object Gateway(radosw). I want to use keystone as the users authority. A user that keystone authorizes to access the gateway will also be created on the radosgw. Tokens that

Re: [ceph-users] Unable to start radosgw

2014-12-10 Thread Mark Kirkwood
On 11/12/14 02:33, Vivek Varghese Cherian wrote: Hi, root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d log-to-stderr 2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process

Re: [ceph-users] my cluster has only rbd pool

2014-12-14 Thread Mark Kirkwood
On 14/12/14 17:25, wang lin wrote: Hi All I set up my first ceph cluster according to instructions in http://ceph.com/docs/master/start/quick-ceph-deploy/#storing-retrieving-object-data, but I got

Re: [ceph-users] Unable to start radosgw

2014-12-15 Thread Mark Kirkwood
On 15/12/14 20:54, Vivek Varghese Cherian wrote: Hi, Do I need to overwrite the existing .db files and .txt file in /var/lib/nssdb on the radosgw host with the ones copied from /var/ceph/nss on the Juno node ? Yeah - worth a try (we want to rule out any

Re: [ceph-users] Slow RBD performance bs=4k

2014-12-15 Thread Mark Kirkwood
On 15/12/14 17:44, ceph@panther-it.nl wrote: I have the following setup: Node1 = 8 x SSD Node2 = 6 x SATA Node3 = 6 x SATA Having 1 node different from the rest is not going to help... you will probably get better results if you sprinkle the SSDs through all 3 nodes and use SATA for osd

Re: [ceph-users] Help with SSDs

2014-12-17 Thread Mark Kirkwood
Looking at the blog, I notice he disabled the write cache before the tests: doing this on my m550 resulted in *improved* dsync results (300 IOPS - 700 IOPS) still not great obviously, but ... interesting. So do experiment with the settings to see if you can get the 840's working better for
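
For reference, toggling the drive's volatile write cache for such a test is typically done with hdparm (device name illustrative):

    hdparm -W 0 /dev/sdb    # turn the on-drive write cache off
    hdparm -W 1 /dev/sdb    # turn it back on afterwards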

Re: [ceph-users] Help with SSDs

2014-12-18 Thread Mark Kirkwood
to try that don't require him to buy new SSDs! Cheers Mark On 18/12/14 21:28, Udo Lembke wrote: On 18.12.2014 07:15, Mark Kirkwood wrote: While you can't do much about the endurance lifetime being a bit low, you could possibly improve performance using a journal *file* that is located

Re: [ceph-users] Help with SSDs

2014-12-18 Thread Mark Kirkwood
On 19/12/14 03:01, Lindsay Mathieson wrote: On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote: The effect of this is *highly* dependent to the SSD make/model. My m550 work vastly better if the journal is a file on a filesystem as opposed to a partition. Obviously the Intel S3700/S3500

Re: [ceph-users] Live database files on Ceph

2014-04-04 Thread Mark Kirkwood
As others have mentioned, there is no reason you cannot run databases on Ceph storage. I've been running/testing Postgres and Mysql/Mariadb on Ceph RBD volumes for quite a while now - since version 0.50 (typically inside KVM guests, but via the kernel driver will work too). With a

Re: [ceph-users] Openstack Nova not removing RBD volumes after removing of instance

2014-04-04 Thread Mark Kirkwood
I managed to provoke this behavior by forgetting to include 'rbd_children' in the Ceph auth setup for the images and volumes keyrings (https://ceph.com/docs/master/rbd/rbd-openstack/). Doing a: $ ceph auth list should reveal if all is well there. Regards Mark On 04/04/14 20:56, Mariusz

Re: [ceph-users] Live database files on Ceph

2014-04-04 Thread Mark Kirkwood
really understood the cause. Could you clarify what's going on that would cause that kind of asymmetry. I've been assuming that once I get around to turning on/tuning read caching on my underlying OSD nodes the situation will improve but haven't dug into that yet. ~jpr On 04/04/2014 04:46 AM, Mark

Re: [ceph-users] Ceph v0.79 Firefly RC :: erasure-code-profile command set not present

2014-04-08 Thread Mark Kirkwood
Wow - that is a bit strange: $ cat /etc/issue Ubuntu 13.10 \n \l $ sudo ceph -v ceph version 0.78-569-g6a4c50d (6a4c50d7f27d2e7632d8c017d09e864e969a05f7) $ sudo ceph osd erasure-code-profile ls default myprofile profile profile1 I'd hazard a guess that some of your ceph components are at

Re: [ceph-users] Ceph v0.79 Firefly RC :: erasure-code-profile command set not present

2014-04-08 Thread Mark Kirkwood
I'm not sure if this is relevant, but my 0.78 (and currently building 0.79) are compiled from src git checkout (and packages built from the same src tree using dpkg-buildpackage Debian/Ubuntu package builder). Having said that - the above procedure *should* produce equivalent binaries to the

Re: [ceph-users] Ceph v0.79 Firefly RC :: erasure-code-profile command set not present

2014-04-09 Thread Mark Kirkwood
Hi Karan, Just to double check - run the same command after ssh'ing into each of the osd hosts, and maybe again on the monitor hosts too (just in case you have *some* hosts successfully updated to 0.79 and some still on 0.78). Regards Mark On 08/04/14 22:32, Karan Singh wrote: Hi Loic

[ceph-users] OSD space usage 2x object size after rados put

2014-04-09 Thread Mark Kirkwood
Hi all, I've noticed that objects are using twice their actual space for a few minutes after they are 'put' via rados: $ ceph -v ceph version 0.79-42-g010dff1 (010dff12c38882238591bb042f8e497a1f7ba020) $ ceph osd tree # id weight type name up/down reweight -1 0.03998 root

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-09 Thread Mark Kirkwood
On Wed, Apr 9, 2014 at 1:58 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Hi all, I've noticed that objects are using twice their actual space for a few minutes after they are 'put' via rados: $ ceph -v ceph version 0.79-42-g010dff1 (010dff12c38882238591bb042f8e497a1f7ba020) $ ceph osd tree

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-09 Thread Mark Kirkwood
the code is that some layer in the stack is preallocated and then trimmed the objects back down once writing stops, but I'd like some more data points before I dig. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Wed, Apr 9, 2014 at 7:59 PM, Mark Kirkwood mark.kirkw

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Mark Kirkwood
) weirdness. Regards Mark On 10/04/14 15:41, Mark Kirkwood wrote: Redoing (attached, 1st file is for 2x space, 2nd for normal). I'm seeing: $ diff osd-du.0.txt osd-du.1.txt 924,925c924,925 2048 /var/lib/ceph/osd/ceph-1/current/5.1a_head/file__head_2E6FB49A__5 2048 /var/lib/ceph/osd/ceph-1

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-10 Thread Mark Kirkwood
On 11/04/14 06:35, Udo Lembke wrote: Hi, On 10.04.2014 20:03, Russell E. Glaue wrote: I am seeing the same thing, and was wondering the same. We have 16 OSDs on 4 hosts. The File system is Xfs. The OS is CentOS 6.4. ceph version 0.72.2 I am importing a 3.3TB disk image into a rbd image. At

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-13 Thread Mark Kirkwood
for xfs filesystem with 'allocsize' mount option. Check out this http://serverfault.com/questions/406069/why-are-my-xfs-filesystems-suddenly-consuming-more-space-and-full-of-sparse-file On Thu, Apr 10, 2014 at 5:41 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
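
The usual knob for that is an explicit allocsize on the osd data mount, e.g. an fstab entry along the lines of (device and mount point illustrative):

    /dev/sdb1  /var/lib/ceph/osd/ceph-1  xfs  noatime,allocsize=4m  0  0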

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-13 Thread Mark Kirkwood
clusters. On 14/04/14 13:30, Mark Kirkwood wrote: Yeah, I was looking at preallocation as the likely cause, but your link is way better than anything I'd found (especially with the likely commit - speculative preallocation - mentioned http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git

Re: [ceph-users] Copying RBD images between clusters?

2014-04-26 Thread Mark Kirkwood
Some discussion about this can be found here: http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ Cheers Mark On 25/04/14 08:25, Brian Rak wrote: Is there a recommended way to copy an RBD image between two different clusters? My initial thought was 'rbd export - | ssh rbd import -',

Re: [ceph-users] Red Hat to acquire Inktank

2014-05-01 Thread Mark Kirkwood
On 01/05/14 18:26, Wido den Hollander wrote: Ok, thanks for the information. Just something that comes up in my mind: - Repository location and access - Documentation efforts for non-RHEL platforms - Support for non-RHEL platforms I'm confident that RedHat will make Ceph bigger and better, but

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-09 Thread Mark Kirkwood
So that's two hosts - if this is a new cluster chances are the pools have replication size=3, and won't place replica pgs on the same host... 'ceph osd dump' will let you know if this is the case. If it is, either reduce size to 2, add another host, or edit your crush rules to allow replica pgs
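
Dropping the replication size on an existing pool is just (pool name illustrative):

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1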

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-09 Thread Mark Kirkwood
'.users.uid' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 56 owner 18446744073709551615 flags hashpspool stripe_width 0 Kind Regards, Georg On 09.05.2014 08:29, Mark Kirkwood wrote: So that's two hosts - if this is a new cluster chances

Re: [ceph-users] Journal SSD durability

2014-05-13 Thread Mark Kirkwood
One thing that would put me off the 530 is the lack of power-off safety (capacitor or similar). Given the job of the journal, I think an SSD that has some guarantee of write integrity is crucial - so yeah the DC3500 or DC3700 seem like the best choices. Regards Mark On 13/05/14 21:31, Christian

Re: [ceph-users] PCI-E SSD Journal for SSD-OSD Disks

2014-05-14 Thread Mark Kirkwood
On 15/05/14 11:36, Tyler Wilson wrote: Hey All, I am setting up a new storage cluster that absolutely must have the best read/write sequential speed @ 128k and the highest IOps at 4k read/write as possible. My current specs for each storage node are currently; CPU: 2x E5-2670V2 Motherboard: SM

Re: [ceph-users] Benchmark for Ceph

2014-05-15 Thread Mark Kirkwood
On 15/05/14 20:23, Séguin Cyril wrote: Hello, I would like to test I/O Ceph's performances. I'm searching for a popular benchmark to create a dataset and run I/O tests that I can reuse for other distributed file systems and other tests. I have tried filebench but it seems not to be

Re: [ceph-users] Storage

2014-06-05 Thread Mark Kirkwood
On 05/06/14 17:01, yalla.gnan.ku...@accenture.com wrote: Hi All, I have a ceph storage cluster with four nodes. I have created block storage using cinder in openstack and ceph as its storage backend. So, I see a volume is created in ceph in one of the pools. But how to get information like

Re: [ceph-users] Run ceph from source code

2014-06-13 Thread Mark Kirkwood
I compile and run from the src build quite often. Here is my recipe: $ ./autogen.sh $ ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --with-radosgw $ time make $ sudo make install $ sudo cp src/init-ceph /etc/init.d/ceph $ sudo cp src/init-radosgw /etc/init.d/radosgw $ sudo

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-21 Thread Mark Kirkwood
I can reproduce this in: ceph version 0.81-423-g1fb4574 on Ubuntu 14.04. I have a two osd cluster with data on two sata spinners (WD blacks) and journals on two ssds (Crucial m4's). I'm getting about 3.5 MB/s (kernel and librbd) using your dd command with direct on. Leaving off direct I'm

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-21 Thread Mark Kirkwood
On 22/06/14 14:09, Mark Kirkwood wrote: Upgrading the VM to 14.04 and restesting the case *without* direct I get: - 164 MB/s (librbd) - 115 MB/s (kernel 3.13) So managing to almost get native performance out of the librbd case. I tweaked both filestore max and min sync intervals (100 and 10

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-22 Thread Mark Kirkwood
Good point, I had neglected to do that. So, amending my ceph.conf [1]: [client] rbd cache = true rbd cache size = 2147483648 rbd cache max dirty = 1073741824 rbd cache max dirty age = 100 and also the VM's xml def to set cache to writeback: disk type='network' device='disk'
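
A complete disk stanza of that shape (pool/volume, monitor and secret details here are placeholders, not from the original message) looks something like:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <auth username='admin'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='ceph-mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>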

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-23 Thread Mark Kirkwood
On 23/06/14 18:51, Christian Balzer wrote: On Sunday, June 22, 2014, Mark Kirkwood mark.kirkw...@catalyst.net.nz rbd cache max dirty = 1073741824 rbd cache max dirty age = 100 Mark, you're giving it a 2GB cache. For a write test that's 1GB in size. Aggressively set is a bit

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Kirkwood
On 24/06/14 17:37, Alexandre DERUMIER wrote: Hi Greg, So the only way to improve performance would be to not use O_DIRECT (as this should bypass rbd cache as well, right?). yes, indeed O_DIRECT bypass cache. BTW, Do you need to use mysql with O_DIRECT ? default innodb_flush_method is
