Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Amon Ott
Am 14.10.2014 16:23, schrieb Sage Weil: On Tue, 14 Oct 2014, Amon Ott wrote: Am 13.10.2014 20:16, schrieb Sage Weil: We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-15 Thread Jasper Siero
Hello Greg, The dump and reset of the journal was successful: [root@th1-mon001 ~]# /usr/bin/ceph-mds -i th1-mon001 --pid-file /var/run/ceph/mds.th1-mon001.pid -c /etc/ceph/ceph.conf --cluster ceph --dump-journal 0 journaldumptgho-mon001 journal is 9483323613~134215459 read 134213311 bytes at

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Stijn De Weirdt
We've been doing a lot of work on CephFS over the past few months. This is an update on the current state of things as of Giant. ... * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse or libcephfs) clients are in good working order. Thanks for all the work and

[ceph-users] Ceph installation error

2014-10-15 Thread Sakhi Hadebe
Hi, I am deploying a 3 node ceph storage cluster for my company, following the webinar: http://www.youtube.com/watch?v=R3gnLrsZSno I am stuck at formatting the OSDs and making them ready to mount the directories. Below is the error thrown back: root@ceph-node1:~# mkcephfs -a

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-15 Thread Mark Kirkwood
Because this is an interesting problem, I added an additional host to my 4 node ceph setup that is a purely radosgw host. So I have - ceph1 (mon + osd) - ceph2-4 (osd) - ceph5 (radosgw) My ceph.conf on ceph5 included below. Obviously I changed my keystone endpoints to use this host (ceph5).

[ceph-users] new installation

2014-10-15 Thread Roman
Hi ALL, I've created 2 mons and 2 osds on CentOS 6.5 (x86_64). I've tried 4 times (clean CentOS installation) but the health is always HEALTH_WARN. Never HEALTH_OK, always HEALTH_WARN! :( # ceph -s cluster d073ed20-4c0e-445e-bfb0-7b7658954874 health HEALTH_WARN 192 pgs degraded; 192 pgs

Re: [ceph-users] new installation

2014-10-15 Thread Pascal Morillon
Hello, osdmap e10: 4 osds: 2 up, 2 in What about the following commands: # ceph osd tree # ceph osd dump You have 2 OSDs on 2 hosts, but 4 OSDs seem to be defined in your crush map. Regards, Pascal On 15 Oct 2014, at 11:11, Roman intra...@gmail.com wrote: Hi ALL, I've created 2

Re: [ceph-users] new installation

2014-10-15 Thread Roman
Pascal, Here is my latest installation: cluster 204986f6-f43c-4199-b093-8f5c7bc641bb health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 20/40 objects degraded (50.000%) monmap e1: 2 mons at {ceph02=192.168.33.142:6789/0,ceph03=192.168.33.143:6789/0}, election

Re: [ceph-users] new installation

2014-10-15 Thread Anthony Alba
Firewall? Disable iptables, set SELinux to Permissive. On 15 Oct, 2014 5:49 pm, Roman intra...@gmail.com wrote: Pascal, Here is my latest installation: cluster 204986f6-f43c-4199-b093-8f5c7bc641bb health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 20/40 objects
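A minimal sketch of the checks Anthony is suggesting, assuming the default Ceph ports (the commands below are illustrative, not taken from Roman's setup):

  setenforce 0                                          # switch SELinux to permissive for the test
  iptables -F                                           # flush all rules, or open just the Ceph ports:
  iptables -I INPUT -p tcp --dport 6789 -j ACCEPT       # monitors listen on 6789 by default
  iptables -I INPUT -p tcp --dport 6800:7300 -j ACCEPT  # OSD daemons use the 6800-7300 range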

Re: [ceph-users] new installation

2014-10-15 Thread Pascal Morillon
On 15 Oct 2014, at 11:48, Roman intra...@gmail.com wrote: Pascal, Here is my latest installation: cluster 204986f6-f43c-4199-b093-8f5c7bc641bb health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 20/40 objects degraded (50.000%) monmap e1: 2 mons at

Re: [ceph-users] new installation

2014-10-15 Thread Roman
Yes, of course... iptables -F (no rules), which is the same as disabled, and SELINUX=disabled. As a testing ground I use VirtualBox, but I think that should not be a problem. Firewall? Disable iptables, set SELinux to Permissive. On 15 Oct, 2014 5:49 pm, Roman intra...@gmail.com wrote:
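With only two OSDs on two hosts, the firewall is not the only possible cause: if the pools keep a replica count higher than the number of OSDs (or hosts) that CRUSH can actually use, the PGs stay degraded no matter what. A hedged sketch of the usual checks for a small test cluster like this (pool names are the defaults; adjust as needed):

  ceph osd tree                      # confirm which OSDs are up/in and under which hosts
  ceph osd dump | grep pool          # shows the replica size configured for each pool
  ceph osd pool set rbd size 2       # example: lower one pool to 2 replicas to match a 2-OSD cluster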

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Amon Ott
Am 15.10.2014 14:11, schrieb Ric Wheeler: On 10/15/2014 08:43 AM, Amon Ott wrote: Am 14.10.2014 16:23, schrieb Sage Weil: On Tue, 14 Oct 2014, Amon Ott wrote: Am 13.10.2014 20:16, schrieb Sage Weil: We've been doing a lot of work on CephFS over the past few months. This is an update on the

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-15 Thread lakshmi k s
Thanks Mark for looking into this further. As I mentioned earlier, I have the following nodes in my ceph cluster - 1 admin node 3 OSD nodes (one of them is a monitor too) 1 gateway node This should have worked technically. But I am not sure where I am going wrong. I will continue to look into this and

[ceph-users] ssh; cannot resolve hostname errors

2014-10-15 Thread Support - Avantek
I may be completely overlooking something here but I keep getting "ssh; cannot resolve hostname" when I try to contact my OSD nodes from my monitor node. I have set the IP addresses of the 3 nodes in /etc/hosts as suggested on the website. Thanks in advance James

Re: [ceph-users] ssh; cannot resolve hostname errors

2014-10-15 Thread Wido den Hollander
On 10/15/2014 04:27 PM, Support - Avantek wrote: I may be completely overlooking something here but I keep getting "ssh; cannot resolve hostname" when I try to contact my OSD nodes from my monitor node. I have set the IP addresses of the 3 nodes in /etc/hosts as suggested on the website.
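A hedged example of what the /etc/hosts entries might look like; the names and addresses below are placeholders, not James's actual values:

  192.168.1.10  mon1
  192.168.1.11  osd1
  192.168.1.12  osd2

The same entries need to be present on every node, and the names have to match exactly what ssh (and ceph-deploy) are asked to resolve - a typo or an FQDN vs short-name mismatch will reproduce the "cannot resolve hostname" error.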

[ceph-users] Replacing a disk: Best practices?

2014-10-15 Thread Bryan Wright
Hi folks, I recently had an OSD disk die, and I'm wondering what are the current best practices for replacing it. I think I've thoroughly removed the old disk, both physically and logically, but I'm having trouble figuring out how to add the new disk into ceph. For one thing,

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Sage Weil
On Wed, 15 Oct 2014, Amon Ott wrote: Am 15.10.2014 14:11, schrieb Ric Wheeler: On 10/15/2014 08:43 AM, Amon Ott wrote: Am 14.10.2014 16:23, schrieb Sage Weil: On Tue, 14 Oct 2014, Amon Ott wrote: Am 13.10.2014 20:16, schrieb Sage Weil: We've been doing a lot of work on CephFS over the

Re: [ceph-users] Replacing a disk: Best practices?

2014-10-15 Thread Daniel Schwager
Hi, I recently had an OSD disk die, and I'm wondering what are the current best practices for replacing it. I think I've thoroughly removed the old disk, both physically and logically, but I'm having trouble figuring out how to add the new disk into ceph. I did this today (one disk

Re: [ceph-users] Replacing a disk: Best practices?

2014-10-15 Thread Loic Dachary
Hi Daniel, On 15/10/2014 08:02, Daniel Schwager wrote: Hi, I recently had an OSD disk die, and I'm wondering what are the current best practices for replacing it. I think I've thoroughly removed the old disk, both physically and logically, but I'm having trouble figuring out how
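For reference, one common replacement sequence, sketched under the assumption of a ceph-deploy style cluster (osd.12 and ceph-node3:sdd are placeholders, and this is not necessarily the exact procedure Daniel followed):

  ceph osd out osd.12                     # let data drain off first, if the disk is still readable
  service ceph stop osd.12                # stop the daemon on the OSD host (init syntax varies by distro)
  ceph osd crush remove osd.12            # drop it from the CRUSH map
  ceph auth del osd.12                    # remove its cephx key
  ceph osd rm 12                          # remove it from the OSD map
  ceph-deploy osd create ceph-node3:sdd   # after swapping the drive, create a fresh OSD on the new disk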

[ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Chad Seys
Hi all, When I remove all OSDs on a given host, then wait for all objects (PGs?) to be active+clean, then remove the host (ceph osd crush remove hostname), that causes the objects to shuffle around the cluster again. Why does the CRUSH map depend on hosts that no longer have OSDs on
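One way to look at what the empty host still contributes is to inspect the CRUSH map directly; once its OSDs are removed, the host bucket's weight should already be zero, which is what makes the extra movement surprising. A hedged sketch:

  ceph osd getcrushmap -o crush.bin      # grab the binary CRUSH map
  crushtool -d crush.bin -o crush.txt    # decompile it; the host bucket and its weight appear under the root
  ceph osd crush dump                    # JSON view of the same buckets and weights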

Re: [ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Mariusz Gronczewski
On Wed, 15 Oct 2014 11:06:55 -0500, Chad Seys cws...@physics.wisc.edu wrote: Hi all, When I remove all OSDs on a given host, then wait for all objects (PGs?) to be active+clean, then remove the host (ceph osd crush remove hostname), that causes the objects to shuffle around the

Re: [ceph-users] Firefly maintenance release schedule

2014-10-15 Thread Dmitry Borodaenko
On Tue, Sep 30, 2014 at 6:49 PM, Dmitry Borodaenko dborodae...@mirantis.com wrote: The last stable Firefly release (v0.80.5) was tagged on July 29 (over 2 months ago). Since then, twice as many commits have been merged into the firefly branch as existed on the branch before v0.80.5: $

Re: [ceph-users] the state of cephfs in giant

2014-10-15 Thread Alphe Salas
For the humble ceph user I am, it is really hard to follow which version of which product will get the changes I require. Let me explain myself. I use ceph in my company, which is specialised in disk recovery; my company needs a flexible, easy to maintain, trustworthy way to store the data from the

Re: [ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Dan van der Ster
Hi Chad, That sounds bizarre to me, and I can't reproduce it. I added an osd (which was previously not in the crush map) to a fake host=test: ceph osd crush create-or-move osd.52 1.0 rack=RJ45 host=test that resulted in some data movement of course. Then I removed that osd from the crush

Re: [ceph-users] Firefly maintenance release schedule

2014-10-15 Thread Gregory Farnum
On Wed, Oct 15, 2014 at 9:39 AM, Dmitry Borodaenko dborodae...@mirantis.com wrote: On Tue, Sep 30, 2014 at 6:49 PM, Dmitry Borodaenko dborodae...@mirantis.com wrote: Last stable Firefly release (v0.80.5) was tagged on July 29 (over 2 months ago). Since then, there were twice as many commits

Re: [ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Chad Seys
Hi Mariusz, Usually removing OSDs without removing the host happens when you remove/replace dead drives. Hosts are in the map so * CRUSH won't put 2 copies on the same node * you can balance around network interface speed That does not answer the original question IMO: Why does the CRUSH map depend

Re: [ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Chad Seys
Hi Dan, I'm using Emperor (0.72). Though I would think CRUSH maps have not changed that much between versions? That sounds bizarre to me, and I can't reproduce it. I added an osd (which was previously not in the crush map) to a fake host=test: ceph osd crush create-or-move osd.52 1.0

Re: [ceph-users] CRUSH depends on host + OSD?

2014-10-15 Thread Dan van der Ster
Hi, October 15 2014 7:05 PM, Chad Seys cws...@physics.wisc.edu wrote: Hi Dan, I'm using Emperor (0.72). Though I would think CRUSH maps have not changed that much btw versions? I'm using dumpling, with the hashpspool flag enabled, which I believe could have been the only difference. That
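To check whether a given pool actually has the flag Dan mentions, it shows up among the pool flags in the OSD map dump; a small sketch (pool names will differ per cluster):

  ceph osd dump | grep '^pool'           # look for 'hashpspool' among each pool's flags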

Re: [ceph-users] Replacing a disk: Best practices?

2014-10-15 Thread Daniel Schwager
Loic, root@ceph-node3:~# smartctl -a /dev/sdd | less === START OF INFORMATION SECTION === Device Model: ST4000NM0033-9ZM170 Serial Number: Z1Z5LGBX .. admin@ceph-admin:~/cluster1$ emacs -nw ceph.conf

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-15 Thread John Spray
Sadly undump has been broken for quite some time (it was fixed in giant as part of creating cephfs-journal-tool). If there's a one line fix for this then it's probably worth putting in firefly since it's a long term supported branch -- I'll do that now. John On Wed, Oct 15, 2014 at 8:23 AM,
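For anyone on giant or later who ends up in the same spot, the workflow John refers to lives in cephfs-journal-tool; a hedged sketch of the usual order of operations (take the export first so there is a copy to fall back on):

  cephfs-journal-tool journal export backup.bin        # save a copy of the journal
  cephfs-journal-tool event recover_dentries summary   # salvage what can be replayed into the metadata pool
  cephfs-journal-tool journal reset                    # only then reset the journal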

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-15 Thread lakshmi k s
Hello Mark - I set up a new Ceph cluster like before, but this time it is talking to Icehouse. Same set of problems as before: the keystone flags are not being honored if they are under [client.radosgw.gateway]. It seems like the issue is with my radosgw setup. Let me create a new thread

[ceph-users] (no subject)

2014-10-15 Thread lakshmi k s
I am trying to integrate Openstack keystone with radosgw. I have followed the instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/. But for some reason, the keystone flags under the [client.radosgw.gateway] section are not being honored. That means the presence of these flags never
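For comparison, the settings documented at that link normally look roughly like the sketch below (placeholder values), and the section name has to match the name the radosgw daemon is actually started as - here client.radosgw.gateway. Flags placed under a section that does not match the daemon's name are silently ignored, which would fit the behaviour described.

  [client.radosgw.gateway]
  rgw keystone url = http://keystone-host:35357
  rgw keystone admin token = {admin token}
  rgw keystone accepted roles = Member, admin
  rgw keystone token cache size = 500
  rgw s3 auth use keystone = true
  nss db path = /var/lib/ceph/nss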

Re: [ceph-users] Replacing a disk: Best practices?

2014-10-15 Thread Iban Cabrillo
Hi Cephers, I have another question related to this issue: what would be the procedure to recover from a whole server failure (for example due to a motherboard problem, with no damage to the disks)? Regards, I 2014-10-15 20:22 GMT+02:00 Daniel Schwager daniel.schwa...@dtnet.de: Loic,

[ceph-users] converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs

2014-10-15 Thread Dan van der Ster
Hi Ceph users, (sorry for the novel, but perhaps this might be useful for someone) During our current project to upgrade our cluster from disks-only to SSD journals, we've found it useful to convert our legacy puppet-ceph deployed cluster (using something like the enovance module) to one that
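A rough sketch of the sort of relabelling such a conversion involves, assuming GPT-partitioned data disks (device and partition numbers are illustrative, and Dan's actual procedure may well differ in the details):

  ceph-disk list                                                       # see what ceph-disk/udev currently recognises
  sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb    # tag partition 1 with the 'ceph data' GPT type GUID
  partprobe /dev/sdb                                                   # re-read the table so the udev rules can activate the OSD

The data partition also has to contain the files ceph-disk expects (fsid, whoami, the journal symlink and so on) before the udev activation path will pick it up.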

[ceph-users] Ceph storage pool definition with KVM/libvirt

2014-10-15 Thread Dan Geist
I'm leveraging Ceph in a vm prototyping environment currently and am having issues abstracting my VM definitions from the storage pool (to use a libvirt convention). I'm able to use the rbd support within the disk configuration of individual VMs but am struggling to find a good reference for
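On the libvirt side, the per-VM disk definitions can be pointed at a storage pool of type rbd that is defined once per hypervisor; a hedged sketch with placeholder monitor address, pool name and secret UUID (the uuid must match a libvirt secret created beforehand with virsh secret-define / secret-set-value holding the cephx key for that user):

  <pool type='rbd'>
    <name>ceph-vms</name>
    <source>
      <name>libvirt-pool</name>
      <host name='mon1.example.com' port='6789'/>
      <auth type='ceph' username='libvirt'>
        <secret uuid='2a5b08e4-29cb-4c6a-9f0e-8f6f0e1d0000'/>
      </auth>
    </source>
  </pool>

Saved as rbd-pool.xml, it can then be loaded and started with:

  virsh pool-define rbd-pool.xml
  virsh pool-start ceph-vms
  virsh vol-list ceph-vms        # rbd images in the pool show up as libvirt volumes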

Re: [ceph-users] converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs

2014-10-15 Thread Mike Dawson
On 10/15/2014 4:20 PM, Dan van der Ster wrote: Hi Ceph users, (sorry for the novel, but perhaps this might be useful for someone) During our current project to upgrade our cluster from disks-only to SSD journals, we've found it useful to convert our legacy puppet-ceph deployed cluster (using

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread Mark Kirkwood
On 16/10/14 09:08, lakshmi k s wrote: I am trying to integrate Openstack keystone with radosgw. I have followed the instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/. But for some reason, the keystone flags under the [client.radosgw.gateway] section are not being honored. That

Re: [ceph-users] Firefly maintenance release schedule

2014-10-15 Thread Dmitry Borodaenko
Gregory, Thanks for the prompt response, we'll go with v0.80.7. It still would be nice if v0.80.8 doesn't take as long as v0.80.6; I suspect one of the reasons you messed it up was too many commits without intermediate releases. On Wed, Oct 15, 2014 at 9:59 AM, Gregory Farnum g...@inktank.com

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread lakshmi k s
Hello Mark - Changing the rgw keystone url to http://192.168.122.165:35357 did not help. I continue to get a 401 error. Also, I am trying to integrate with Icehouse this time. I did not see any keystone.conf in /etc/apache2/sites-available for adding WSGI chunked encoding. That said, I am

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread Mark Kirkwood
On 16/10/14 10:37, Mark Kirkwood wrote: On 16/10/14 09:08, lakshmi k s wrote: I am trying to integrate Openstack keystone with radosgw. I have followed the instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/. But for some reason, the keystone flags under

Re: [ceph-users] ssh; cannot resolve hostname errors

2014-10-15 Thread JIten Shah
Please send your /etc/hosts contents here. --Jiten On Oct 15, 2014, at 7:27 AM, Support - Avantek supp...@avantek.co.uk wrote: I may be completely overlooking something here but I keep getting “ssh; cannot resolve hostname” when I try to contact my OSD nodes from my monitor node. I have

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread lakshmi k s
I still think that there is a problem with the way radosgw is set up. Two things I want to point out - 1. rgw keystone url - If this flag is under the radosgw section of ceph.conf, I do not see any packets being exchanged between keystone and the gateway node when radosgw is restarted. I tried to

Re: [ceph-users] converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs

2014-10-15 Thread Loic Dachary
Hi Dan, Great story and congratulations on the successful conversion :-) There are two minor pitfalls left but they are only an inconvenience when testing the ceph-disk prepare / udev logic ( https://github.com/ceph/ceph/pull/2717 and https://github.com/ceph/ceph/pull/2648 ). Cheers On

Re: [ceph-users] Radosgw refusing to even attempt to use keystone auth

2014-10-15 Thread lakshmi k s
Has anyone seen this issue? Appreciate your time. On Wednesday, October 15, 2014 4:50 PM, lakshmi k s lux...@yahoo.com wrote: I still think that there is a problem with the way radosgw is set up. Two things I want to point out - 1. rgw keystone url - If this flag is under the radosgw section of