Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Amon Ott
On 14.10.2014 16:23, Sage Weil wrote: > On Tue, 14 Oct 2014, Amon Ott wrote: >> On 13.10.2014 20:16, Sage Weil wrote: >>> We've been doing a lot of work on CephFS over the past few months. This >>> is an update on the current state of things as of Giant. >> ... >>> * Either the kernel client (k

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
This sounds like any number of readdir bugs that Zheng has fixed over the last 6 months. sage On Tue, 14 Oct 2014, Alphe Salas wrote: > Hello Sage, the last time I used CephFS it behaved strangely when used in > conjunction with an NFS re-export of the CephFS mount point, I experienced a > p

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Alphe Salas
Hello Sage, the last time I used CephFS it behaved strangely when used in conjunction with an NFS re-export of the CephFS mount point: I experienced a partial, random disappearance of folders in the tree. According to people on the mailing list it was a kernel module bug (not using ceph-fuse) do
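For anyone trying to reproduce this, a minimal sketch of the setup being described, re-exporting a kernel CephFS mount over NFS; the monitor address, mount point, export options and fsid value are assumptions, not taken from the thread:

    # on the NFS server, mount CephFS with the kernel client
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # /etc/exports -- an explicit fsid is generally needed when re-exporting a network filesystem
    /mnt/cephfs  *(rw,no_root_squash,no_subtree_check,fsid=100)

    exportfs -ra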

[ceph-users] v0.80.7 Firefly released

2014-10-14 Thread Sage Weil
This release fixes a few critical issues with v0.80.6, particularly with clusters running mixed versions. We recommend that all v0.80.x Firefly users upgrade to this release. Notable Changes --- * osd: fix invalid memory reference in log trimming (#9731 Samuel Just) * osd: fix use-af
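For reference, a minimal upgrade sketch assuming Debian/Ubuntu packages and the sysvinit scripts of that era; daemon IDs are placeholders, and monitors are usually upgraded before OSDs, one daemon at a time:

    apt-get update && apt-get install -y ceph ceph-common
    service ceph restart mon.a        # on each monitor host, one at a time
    service ceph restart osd.0        # then each OSD, waiting for HEALTH_OK in between
    ceph tell osd.0 version           # confirm the daemon now reports 0.80.7
    ceph health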

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-14 Thread Gregory Farnum
ceph-mds --undump-journal Looks like it accidentally (or on purpose? you can break things with it) got left out of the help text. On Tue, Oct 14, 2014 at 8:19 AM, Jasper Siero wrote: > Hello Greg, > > I dumped the journal successful to a file: > > journal is 9483323613~134215459 > read 13421331
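A hedged sketch of the dump/undump cycle being discussed; the rank and file arguments and their order are assumptions, so check ceph-mds --help and the source for your release before undumping, since it rewrites the journal:

    ceph-mds -i <mds-id> --dump-journal 0 journaldump.bin      # dump mds rank 0's journal to a file
    ceph-mds -i <mds-id> --undump-journal 0 journaldump.bin    # write the (repaired) journal back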

Re: [ceph-users] Handling of network failures in the cluster network

2014-10-14 Thread Gregory Farnum
On Mon, Oct 13, 2014 at 1:37 PM, Martin Mailand wrote: > Hi Greg, > > I took down the interface with "ifconfig p7p1 down". > I attached the config of the first monitor and the first osd. > I created the cluster with ceph-deploy. > The version is ceph version 0.86 (97dcc0539dfa7dac3de74852305d51580
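For context, the configuration knobs that usually govern how quickly a dead cluster-network link is noticed; the values shown are the defaults as I remember them, so treat them as assumptions:

    [osd]
    osd heartbeat grace = 20            # seconds without heartbeats before a peer is reported down
    [mon]
    mon osd down out interval = 300     # seconds a down OSD waits before being marked out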

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-14 Thread Mark Kirkwood
Right. So you have 3 osds, one of which is also a mon. Your rgw is on another host (called gateway, it seems). I'm wondering if this is the issue. In my case I'm using one of my osds as an rgw as well. This *should* not matter... but it might be worth trying out an rgw on one of your osds instead. I'm

[ceph-users] radosGW balancer best practices

2014-10-14 Thread Simone Spinelli
Dear all, we are going to add rados-gw to our ceph cluster (144 OSDs on 12 servers plus 3 monitors, connected via a 10-gigabit network) and we have a couple of questions. The first question is about the load balancer: do you have any advice based on real-world experience? The second question is about th
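As a starting point, one common pattern is a pair of HAProxy instances (with keepalived for the VIP) in front of several radosgw daemons; the sketch below is only illustrative, and the hostnames, port and health check are assumptions:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend rgw_front
        bind *:80
        default_backend rgw_back

    backend rgw_back
        balance roundrobin
        option httpchk GET /
        server rgw1 rgw1.example.com:7480 check
        server rgw2 rgw2.example.com:7480 check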

Re: [ceph-users] Ceph OSD very slow startup

2014-10-14 Thread Lionel Bouton
On 14/10/2014 18:51, Lionel Bouton wrote: > On 14/10/2014 18:17, Gregory Farnum wrote: >> On Monday, October 13, 2014, Lionel Bouton wrote: >> >> [...] >> >> What could explain such long startup times? Is the OSD init doing >> a lot >> of

Re: [ceph-users] Ceph OSD very slow startup

2014-10-14 Thread Lionel Bouton
On 14/10/2014 18:17, Gregory Farnum wrote: > On Monday, October 13, 2014, Lionel Bouton wrote: > > [...] > > What could explain such long startup times? Is the OSD init doing > a lot > of random disk accesses? Is it dependent on the volume of

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-14 Thread lakshmi k s
Hello Mark - with rgw_keystone_url under the radosgw section, I do NOT see the keystone handshake. If I move it under the global section, I see the initial keystone handshake as explained earlier. Below is the output of osd dump and osd tree. I have 3 nodes (node1, node2, node3) acting as OSDs. One of them (nod
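For comparison, the placement usually expected is under the gateway's own client section rather than [global]; a minimal sketch, where the section name, URL and remaining option values are assumptions (only rgw_keystone_url comes from this thread):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = <admin-token>
    rgw keystone accepted roles = admin, Member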

Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-14 Thread Loic Dachary
Hi, Short update: I've moved the content of the previous pad to https://etherpad.openstack.org/p/kilo-ceph and merged the two lists. It would be great if people planning to attend could add the topics they would like to discuss. Cheers On 10/10/2014 14:48, Loic Dachary wrote: > Hi C

Re: [ceph-users] Misconfigured caps on client.admin key, any way to recover from EACCES denied?

2014-10-14 Thread Gregory Farnum
On Monday, October 13, 2014, Anthony Alba wrote: > > You can disable cephx completely, fix the key and enable cephx again. > > > > auth_cluster_required, auth_service_required and auth_client_required > > That did not work: i.e. disabling cephx in the cluster conf and > restarting the cluster. > T
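For reference, a sketch of the recovery path being discussed (which Anthony reports did not work for him); the caps shown and the procedure details are assumptions, not something confirmed in this thread:

    # in ceph.conf on every node, then restart all daemons
    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    # with auth disabled, repair the admin key's caps, then re-enable cephx and restart again
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'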

Re: [ceph-users] Ceph OSD very slow startup

2014-10-14 Thread Gregory Farnum
On Monday, October 13, 2014, Lionel Bouton wrote: > Hi, > > # First a short description of our Ceph setup > > You can skip to the next section ("Main questions") to save time and > come back to this one if you need more context. > > We are currently moving away from DRBD-based storage backed by R

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-14 Thread Jasper Siero
Hello Greg, I dumped the journal successfully to a file: journal is 9483323613~134215459 read 134213311 bytes at offset 9483323613 wrote 134213311 bytes at offset 9483323613 to journaldumptgho NOTE: this is a _sparse_ file; you can $ tar cSzf journaldumptgho.tgz journaldumptgho to eff

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
On Tue, 14 Oct 2014, Amon Ott wrote: > On 13.10.2014 20:16, Sage Weil wrote: > > We've been doing a lot of work on CephFS over the past few months. This > > is an update on the current state of things as of Giant. > ... > > * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Amon Ott
On 13.10.2014 20:16, Sage Weil wrote: > We've been doing a lot of work on CephFS over the past few months. This > is an update on the current state of things as of Giant. ... > * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse > or libcephfs) clients are in good working
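For context, the two client paths the announcement compares look roughly like this; the monitor address, mount point and secret-file path are placeholders:

    # kernel client (3.17 or later recommended in the announcement)
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # userspace client
    ceph-fuse -m mon1:6789 /mnt/cephfs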

Re: [ceph-users] Icehouse & Ceph -- live migration fails?

2014-10-14 Thread samuel
Hi all, this issue is also affecting us (CentOS 6.5-based Icehouse) and, as far as I could read, comes from the fact that the path /var/lib/nova/instances (or whatever instances path you have configured in nova.conf) is not shared. Nova does not see this path as shared and therefore does not allow you to perform
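A minimal sketch of one way to satisfy that requirement, sharing the instances path across all compute nodes over NFS; the server name and export path are assumptions, and any shared filesystem (NFS, CephFS, GlusterFS) would do:

    # /etc/fstab on every compute node
    nfs-server:/export/nova-instances  /var/lib/nova/instances  nfs  defaults  0 0

    # nova.conf (this is the default value; it only needs changing if you mount elsewhere)
    instances_path = /var/lib/nova/instances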

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Sage Weil
On Tue, 14 Oct 2014, Thomas Lemarchand wrote: > Thanks for this information. > > I plan to use CephFS on Giant, with a production workload, knowing the > risks and keeping a hot backup nearby. I hope to be able to provide useful > feedback. > > My cluster is made of 7 servers (3 mon, 3 osd (27 osds in

Re: [ceph-users] the state of cephfs in giant

2014-10-14 Thread Thomas Lemarchand
Thanks for this information. I plan to use CephFS on Giant, with a production workload, knowing the risks and keeping a hot backup nearby. I hope to be able to provide useful feedback. My cluster is made of 7 servers (3 mon, 3 osd (27 osds inside), 1 mds). I use ceph-fuse on the clients. You wrote about h