Re: [ceph-users] 70+ OSD are DOWN and not coming up

2014-05-22 Thread Craig Lewis
On 5/21/14 21:15, Sage Weil wrote: On Wed, 21 May 2014, Craig Lewis wrote: If you do this over IRC, can you please post a summary to the mailing list? I believe I'm having this issue as well. In the other case, we found that some of the OSDs were behind processing maps (by several thousand

Re: [ceph-users] rbd watchers

2014-05-22 Thread James Eckersall
Hi, Thanks for the suggestion, but unfortunately there are no snapshots for this image either. Still confused :( On 22 May 2014 02:54, Mandell Degerness mand...@pistoncloud.com wrote: The times I have seen this message, it has always been because there are snapshots of the image that

Re: [ceph-users] Access denied error for list users

2014-05-22 Thread alain.dechorgnat
GET /admin/metadata/user returns only user ids (no detail). GET /admin/user returns 403. GET /admin/user?uid=XXX returns detail on user XXX. So, if you want the user list with details, you'll have to call GET /admin/metadata/user once to fetch all uids, and then GET /admin/user?uid=XXX for every user
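The advice above amounts to N+1 requests for N users. A minimal sketch of that loop, assuming a caller-supplied `get` helper (a hypothetical stand-in for a real signed-request function; the admin-API signing itself is out of scope here):

```python
def list_users_with_details(get):
    """Combine the two admin-API calls described above.

    `get` is a hypothetical helper that performs an authenticated GET
    against the radosgw admin API and returns the parsed JSON body.
    """
    uids = get("/admin/metadata/user")   # list of user ids, no detail
    # One extra request per user to pull the full record.
    return {uid: get("/admin/user?uid=" + uid) for uid in uids}
```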

[ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Mārtiņš Jakubovičs
Hello, I'm following this guide http://ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster and am stuck at item 4, "Add the initial monitor(s) and gather the keys" (new in ceph-deploy v1.1.3): ceph-deploy mon create-initial For example: ceph-deploy mon create-initial If I

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Wido den Hollander
On 05/22/2014 11:46 AM, Mārtiņš Jakubovičs wrote: Hello, I'm following this guide http://ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster and am stuck at item 4, "Add the initial monitor(s) and gather the keys" (new in ceph-deploy v1.1.3). ceph-deploy mon create-initial For

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Mārtiņš Jakubovičs
Hello, Thanks for such a fast response. The warning still persists: http://pastebin.com/QnciHG6v I didn't mention it, but the admin and monitoring nodes are Ubuntu 14.04 x64, ceph-deploy 1.4 and ceph 0.79. On 2014.05.22. 12:50, Wido den Hollander wrote: On 05/22/2014 11:46 AM, Mārtiņš Jakubovičs

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-22 Thread Sharmila Govind
root@cephnode4:/mnt/ceph/osd2# ceph-disk list
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
 /dev/sda2 other, ext4, mounted on /boot
 /dev/sda3 other
 /dev/sda4 swap, swap
 /dev/sda5 other, ext4, mounted on /home
 /dev/sda6 other, ext4
 /dev/sda7 other, ext4, mounted on /mnt/Storage
/dev/sdb

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Wido den Hollander
On 05/22/2014 11:54 AM, Mārtiņš Jakubovičs wrote: Hello, Thanks for such a fast response. The warning still persists: http://pastebin.com/QnciHG6v Hmm, that's weird. I didn't mention it, but the admin and monitoring nodes are Ubuntu 14.04 x64, ceph-deploy 1.4 and ceph 0.79. Why aren't you trying

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Mārtiņš Jakubovičs
Thanks, I will try upgrading to 0.80. On 2014.05.22. 13:00, Wido den Hollander wrote: On 05/22/2014 11:54 AM, Mārtiņš Jakubovičs wrote: Hello, Thanks for such a fast response. The warning still persists: http://pastebin.com/QnciHG6v Hmm, that's weird. I didn't mention it, but admin and

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Mārtiņš Jakubovičs
Unfortunately the upgrade to 0.80 didn't help; same error. On the monitor node I checked for the file /etc/ceph/ceph.client.admin.keyring and it didn't exist. Should it exist? Maybe I can manually perform some actions on the monitor node to generate the keys? On 2014.05.22. 13:00, Wido den Hollander wrote: On

[ceph-users] Question about osd objectstore = keyvaluestore-dev setting

2014-05-22 Thread Geert Lindemulder
Hello All, I'm trying to implement the OSD leveldb backend on an existing ceph test cluster. The test cluster was updated from 0.72.1 to 0.80.1. The update was OK. After the update, the "osd objectstore = keyvaluestore-dev" setting was added to ceph.conf. After restarting an osd it gives the

[ceph-users] recommendations for erasure coded pools and profile question

2014-05-22 Thread Kenneth Waegeman
Hi, How can we apply the recommendations for the number of placement groups to erasure-coded pools? Total PGs = (OSDs * 100) / Replicas Should we set replicas = 1, or should it be set from some EC parameters? Also a question about the EC profiles. I

[ceph-users] Radosgw Timeout

2014-05-22 Thread Georg Höllrigl
Hello List, Using the radosgw works fine as long as the amount of data doesn't get too big. I have created one bucket that holds many small files, separated into different directories. But whenever I try to access the bucket, I only run into some timeout. The timeout is at around 30 - 100

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Mārtiņš Jakubovičs
Yes, indeed, I created a new cluster on host ceph-node3 and all works great. I tested with a data purge on ceph-node1 but it didn't work anyway... It's a shame that I can't find where the problem was. Thanks! On 2014.05.22. 16:05, Alfredo Deza wrote: Why are you using --overwrite-conf ? Have you deployed

Re: [ceph-users] ceph-deploy mon create-initial

2014-05-22 Thread Sergey Motovilovets
Hello there. I had the same issue when mon_initial_members (in my case it was 1 node) resolved to a different IP than mon_host was set to in ceph.conf. 2014-05-22 16:05 GMT+03:00 Alfredo Deza alfredo.d...@inktank.com: Why are you using --overwrite-conf ? Have you deployed the monitor and
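In other words, the initial monitor's hostname must resolve to exactly the address listed in mon_host. A hedged ceph.conf sketch (the hostname and address are placeholders, not from the thread):

```ini
[global]
# ceph-node1 must resolve (via DNS or /etc/hosts) to exactly the
# address in mon_host below, or "ceph-deploy mon create-initial"
# cannot reach the monitor to gather the keys.
mon_initial_members = ceph-node1
mon_host = 192.168.1.11
```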

Re: [ceph-users] Radosgw Timeout

2014-05-22 Thread Yehuda Sadeh
On Thu, May 22, 2014 at 6:16 AM, Georg Höllrigl georg.hoellr...@xidras.com wrote: Hello List, Using the radosgw works fine as long as the amount of data doesn't get too big. I have created one bucket that holds many small files, separated into different directories. But whenever I try to

Re: [ceph-users] Journal SSD durability

2014-05-22 Thread Simon Ironside
Hi, Just to revisit this one last time . . . Is the issue only with the SandForce SF-2281 in the Kingston E50? Or are all SandForce controllers considered dodgy, including the SF-2582 in the Kingston E100 and a few other manufacturers' enterprise SSDs? Thanks, Simon. On 16/05/14 22:30,

Re: [ceph-users] Question about osd objectstore = keyvaluestore-dev setting

2014-05-22 Thread Gregory Farnum
On Thu, May 22, 2014 at 5:04 AM, Geert Lindemulder glindemul...@snow.nl wrote: Hello All Trying to implement the osd leveldb backend at an existing ceph test cluster. The test cluster was updated from 0.72.1 to 0.80.1. The update was ok. After the update, the osd objectstore =

[ceph-users] Feature request: stable naming for external journals

2014-05-22 Thread Scott Laird
I recently created a few OSDs with journals on a partitioned SSD. Example: $ ceph-deploy osd prepare v2:sde:sda8 It worked fine at first, but after rebooting, the new OSD failed to start. I discovered that the journal drive had been renamed from /dev/sda to /dev/sdc, so the journal symlink in

Re: [ceph-users] Radosgw Timeout

2014-05-22 Thread Craig Lewis
On 5/22/14 06:16, Georg Höllrigl wrote: I have created one bucket that holds many small files, separated into different directories. But whenever I try to access the bucket, I only run into some timeout. The timeout is at around 30 - 100 seconds. This is smaller than the Apache timeout of

[ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Alexandre DERUMIER
Hi, I'm looking to build a full OSD SSD cluster, with this config: 6 nodes, each node with 10 OSD/SSD drives (dual 10gbit network), 1 journal + data on each OSD. The SSD drives will be enterprise grade, maybe Intel SC3500 800GB (a well-known SSD) or the new Samsung SSD PM853T 960GB (don't have too much

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-22 Thread Gregory Farnum
On Thu, May 22, 2014 at 4:09 AM, Kenneth Waegeman kenneth.waege...@ugent.be wrote: - Message from Gregory Farnum g...@inktank.com - Date: Wed, 21 May 2014 15:46:17 -0700 From: Gregory Farnum g...@inktank.com Subject: Re: [ceph-users] Expanding pg's of an erasure coded pool

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-22 Thread Henrik Korkuc
On 2014.05.22 19:55, Gregory Farnum wrote: On Thu, May 22, 2014 at 4:09 AM, Kenneth Waegeman kenneth.waege...@ugent.be wrote: - Message from Gregory Farnum g...@inktank.com - Date: Wed, 21 May 2014 15:46:17 -0700 From: Gregory Farnum g...@inktank.com Subject: Re:

[ceph-users] slow requests

2014-05-22 Thread Győrvári Gábor
Hello, Got this kind of log on two nodes of a 3-node cluster. Both nodes have 2 OSDs, but only 2 OSDs on two separate nodes are affected, and that's why I don't understand the situation. There wasn't any extra IO on the system at the given time. Using radosgw with the S3 API to store objects under ceph; average ops

[ceph-users] Unable to update Swift ACL's on existing containers

2014-05-22 Thread James Page
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hi Folks I'm seeing some odd behaviour with RADOS Gateway as part of an OpenStack deployment: Environment: Ceph 0.80.1 Ubuntu 14.04 OpenStack Icehouse Setting ACL's on initial container creation works just fine: $ swift post -r

[ceph-users] ceph deploy on rhel6.5 installs ceph from el6 and fails

2014-05-22 Thread Lukac, Erik
Hi there, it seems like ceph-deploy (in firefly, but also in 0.72) on rhel6.5 wants to install stuff from the el6 repo, even when the ceph admin server is configured to use rhel6. This is what /etc/yum.repos.d/ceph looks like on my admin-node: [ceph@ceph-mir-dmz-admin ceph-mir-dmz]$ cat

Re: [ceph-users] ceph deploy on rhel6.5 installs ceph from el6 and fails

2014-05-22 Thread Simon Ironside
On 22/05/14 23:56, Lukac, Erik wrote: But: this fails because of the dependencies. xfsprogs is in the rhel6 repo, but not in el6. I hadn't noticed that xfsprogs is included in the ceph repos; I'm using the package from the RHEL 6.5 DVD, which is the same version. You'll find it in the

[ceph-users] collectd / graphite / grafana .. calamari?

2014-05-22 Thread Ricardo Rocha
Hi. I saw the thread a couple of days ago on ceph-users regarding collectd... and yes, I've been working on something similar for the last few days :) https://github.com/rochaporto/collectd-ceph It has a set of collectd plugins pushing metrics which mostly map what the ceph commands return. In the

Re: [ceph-users] Unable to update Swift ACL's on existing containers

2014-05-22 Thread Yehuda Sadeh
That looks like a bug; generally the permission checks there are broken. I opened issue #8428, and pushed a fix on top of the firefly branch to wip-8428. Thanks! Yehuda On Thu, May 22, 2014 at 2:49 PM, James Page james.p...@ubuntu.com wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hi

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Christian Balzer
Hello, On Thu, 22 May 2014 18:00:56 +0200 (CEST) Alexandre DERUMIER wrote: Hi, I'm looking to build a full osd ssd cluster, with this config: What is your main goal for that cluster: high IOPS, high sequential writes, or reads? Remember my "Slow IOPS on RBD..." thread; you probably

Re: [ceph-users] Feature request: stable naming for external journals

2014-05-22 Thread Thomas Matysik
I made this mistake originally, too… It's not really clear in the documentation, but it turns out that if you just initialize your journal drives as GPT, but don't create the partitions, and then prepare your OSDs with: $ ceph-deploy osd prepare node1:sde:sda (i.e., specify the device,

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Alexandre DERUMIER
What is your main goal for that cluster, high IOPS, high sequential writes or reads? High IOPS, mostly random. (It's an RBD cluster with qemu-kvm guests, around 1000 VMs, each doing small IOs.) 80% read / 20% write. I don't care about sequential workloads or bandwidth. Remember my Slow IOPS

Re: [ceph-users] full osd ssd cluster advise : replication 2x or 3x ?

2014-05-22 Thread Christian Balzer
On Fri, 23 May 2014 07:02:15 +0200 (CEST) Alexandre DERUMIER wrote: What is your main goal for that cluster, high IOPS, high sequential writes or reads? High IOPS, mostly random. (It's an RBD cluster with qemu-kvm guests, around 1000 VMs, each doing small IOs.) 80% read / 20% write

Re: [ceph-users] recommendations for erasure coded pools and profile question

2014-05-22 Thread Loic Dachary
Hi Kenneth, In the case of erasure-coded pools, Replicas should be replaced by K+M.
$ ceph osd erasure-code-profile get myprofile
k=2
m=1
plugin=jerasure
technique=reed_sol_van
ruleset-failure-domain=osd
You have K+M=3. I proposed a fix to the documentation
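With Replicas replaced by K+M, the sizing rule works out as sketched below. The round-up-to-a-power-of-two step is the common recommendation, not something stated in this thread, and the OSD counts are illustrative:

```python
def ec_pg_count(num_osds, k, m, pgs_per_osd=100):
    """Total PGs ~= (OSDs * 100) / (K + M), rounded up to a power of two."""
    raw = num_osds * pgs_per_osd / float(k + m)
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 9 OSDs with the k=2, m=1 profile above: 9*100/3 = 300 -> 512 PGs
```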