[ceph-users] MDS stuck replaying

2015-12-15 Thread Bryan Wright
Hi folks, This morning, one of my MDSes dropped into "replaying": mds cluster is degraded mds.0 at 192.168.1.31:6800/12550 rank 0 is replaying journal and the ceph filesystem seems to be unavailable to the clients. Is there any way to see the progress of this replay? I don't see any

[ceph-users] ACLs question in cephfs

2015-12-15 Thread Goncalo Borges
Dear Cephfs gurus. I have two questions regarding ACL support on cephfs. 1) Last time we tried ACLs, we saw that they were only working properly in the kernel module, and I wonder what the present status of ACL support in ceph-fuse is. Can you clarify that? 2) If ceph-fuse is still not
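
For reference, a minimal sketch of testing ACL behaviour with the kernel client, assuming the kernel supports the 'acl' mount option (the monitor address, secret file and mount point below are examples, not from the thread):

    # Mount with the 'acl' option, then check that an ACL round-trips
    mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,acl
    touch /mnt/cephfs/testfile
    setfacl -m u:nobody:rw /mnt/cephfs/testfile
    getfacl /mnt/cephfs/testfile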

Re: [ceph-users] MDS stuck replaying

2015-12-15 Thread Bryan Wright
John Spray writes: > Anyway -- you'll need to do some local poking of the MDS to work out > what the hold-up is. Turn up MDS debug logging[1] and see what > it's saying during the replay. Also, you can use performance counters > "ceph daemon mds. perf dump" and see which are
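
A sketch of the commands John is pointing at, run on the MDS host via the admin socket (the daemon name "mds.a" and the debug level are placeholders):

    # Raise replay-related debug verbosity at runtime
    ceph daemon mds.a config set debug_mds 10
    ceph daemon mds.a config set debug_journaler 10

    # Dump perf counters; the mds_log section reflects journal activity
    ceph daemon mds.a perf dump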

Re: [ceph-users] recommendations for file sharing

2015-12-15 Thread Martin Palma
Currently, we use approach #1 with kerberized NFSv4 and Samba (with AD as KDC) - desperately waiting for CephFS :-) Best, Martin On Tue, Dec 15, 2015 at 11:51 AM, Wade Holler wrote: > Keep it simple is my approach. #1 > > If needed, add rudimentary HA with Pacemaker. > >

Re: [ceph-users] MDS stuck replaying

2015-12-15 Thread John Spray
On Tue, Dec 15, 2015 at 5:01 PM, Bryan Wright wrote: > Hi folks, > > This morning, one of my MDSes dropped into "replaying": > > mds cluster is degraded > mds.0 at 192.168.1.31:6800/12550 rank 0 is replaying journal > > and the ceph filesystem seems to be unavailable to the

Re: [ceph-users] MDS: How to increase timeouts?

2015-12-15 Thread John Spray
On Tue, Dec 15, 2015 at 6:21 PM, Burkhard Linke wrote: > Hi, > > I have a setup with two MDS in active/standby configuration. During times of > high network load / network congestion, the active MDS is bounced between > both instances: > > 1.

[ceph-users] Ceph Advisory Board Meeting

2015-12-15 Thread Patrick McGarry
Hey cephers, In the interests of transparency, I wanted to share the resulting minutes from last week’s very first Ceph Advisory Board meeting: http://tracker.ceph.com/projects/ceph/wiki/CAB_2015-12-09 We are looking to meet monthly to discuss the following: * Pending development tasks for the

[ceph-users] MDS: How to increase timeouts?

2015-12-15 Thread Burkhard Linke
Hi, I have a setup with two MDS in active/standby configuration. During times of high network load / network congestion, the active MDS is bounced between both instances: 1. mons(?) decide that MDS A is crashed/not available due to missing heartbeats 2015-12-15 16:38:08.471608
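
The knob the mons consult here is mds_beacon_grace, i.e. how long they tolerate missing beacons before declaring the active MDS laggy and failing over. A hedged sketch of raising it at runtime (the value 60 is an example):

    # Inject a longer beacon grace period into the running mons
    ceph tell mon.* injectargs '--mds_beacon_grace 60'

    # To persist it, set it in ceph.conf on the mon hosts:
    #   [mon]
    #   mds beacon grace = 60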

Re: [ceph-users] recommendations for file sharing

2015-12-15 Thread Wido den Hollander
On 12/15/2015 11:45 AM, Alex Leake wrote: > Good Morning, > > > I have a production Ceph cluster at the University I work at, which runs > brilliantly. > > > However, I'd like your advice on the best way of sharing CIFS / SMB from > Ceph. So far I have three ideas: > > 1. ​​Use a server as

Re: [ceph-users] recommendations for file sharing

2015-12-15 Thread Wade Holler
Keep it simple is my approach. #1 If needed, add rudimentary HA with Pacemaker. http://linux-ha.org/wiki/Samba Cheers Wade On Tue, Dec 15, 2015 at 5:45 AM Alex Leake wrote: > Good Morning, > > > I have a production Ceph cluster at the University I work at, which runs >
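
For what Wade's "rudimentary HA" might look like, a minimal Pacemaker sketch using pcs (the resource names, IP and device are examples, not from the thread):

    # Floating IP, filesystem on a mapped RBD, and the smb service, as one group
    pcs resource create share-ip ocf:heartbeat:IPaddr2 ip=192.168.1.200 cidr_netmask=24
    pcs resource create share-fs ocf:heartbeat:Filesystem \
        device=/dev/rbd/rbd/share directory=/srv/share fstype=xfs
    pcs resource create share-smb systemd:smb
    pcs resource group add share-group share-ip share-fs share-smb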

[ceph-users] Migrate Block Volumes and VMs

2015-12-15 Thread Sam Huracan
Hi everybody, My OpenStack system uses Ceph as the backend for Glance, Cinder, and Nova. In the future, we intend to build a new Ceph cluster, and I can re-connect the current OpenStack with the new Ceph system. After that, I tried exporting rbd images and importing them into the new Ceph, but the VMs and volumes were clones of Glance
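
Since RBD clones keep a reference to their Glance parent image, they need flattening before they can stand alone in another cluster. A sketch under assumed pool/image names:

    # Break the clone's dependency on its parent, then stream it across
    rbd -p volumes flatten volume-0001
    rbd -p volumes export volume-0001 - | \
        rbd -c /etc/ceph/new-cluster.conf -p volumes import - volume-0001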

Re: [ceph-users] about federated gateway

2015-12-15 Thread fangchen sun
Hi, I opened an issue; the link is as follows: http://tracker.ceph.com/issues/14081 Thanks Sunfch 2015-12-15 2:33 GMT+08:00 Yehuda Sadeh-Weinraub : > On Sun, Dec 13, 2015 at 7:27 AM, 孙方臣 wrote: > > Hi, All, > > > > I'm setting up federated

[ceph-users] recommendations for file sharing

2015-12-15 Thread Alex Leake
Good Morning, I have a production Ceph cluster at the University I work at, which runs brilliantly. However, I'd like your advice on the best way of sharing CIFS / SMB from Ceph. So far I have three ideas: 1. Use a server as a head node, with an RBD mapped, then just export with samba
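
A minimal sketch of idea 1 (the image name, size, mount point and share name are examples; service names vary by distribution):

    # Create and map an RBD on the head node, put a filesystem on it
    rbd create rbd/smbshare --size 1048576     # size in MB, i.e. 1 TB here
    rbd map rbd/smbshare
    mkfs.xfs /dev/rbd0
    mkdir -p /srv/smbshare && mount /dev/rbd0 /srv/smbshare

    # Export it over Samba
    cat >> /etc/samba/smb.conf <<'EOF'
    [share]
        path = /srv/smbshare
        writable = yes
    EOF
    systemctl restart smb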

Re: [ceph-users] ACLs question in cephfs

2015-12-15 Thread Gregory Farnum
On Tue, Dec 15, 2015 at 3:01 AM, Goncalo Borges wrote: > Dear Cephfs gurus. > > I have two questions regarding ACL support on cephfs. > > 1) Last time we tried ACLs we saw that they were only working properly in the > kernel module and I wonder what is the present

Re: [ceph-users] MDS stuck replaying

2015-12-15 Thread Gregory Farnum
On Tue, Dec 15, 2015 at 12:29 PM, Bryan Wright wrote: > John Spray writes: > >> If you haven't already, also >> check the overall health of the MDS host, e.g. is it low on >> memory/swapping? > > For what it's worth, I've taken down some OSDs, and that seems to

Re: [ceph-users] MDS stuck replaying

2015-12-15 Thread Bryan Wright
John Spray writes: > If you haven't already, also > check the overall health of the MDS host, e.g. is it low on > memory/swapping? For what it's worth, I've taken down some OSDs, and that seems to have allowed the MDS to finish replaying. My guess is that one of the OSDs was
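
One hedged way to check for such a misbehaving OSD while the MDS is stuck (the osd id below is an example):

    # Any slow or blocked requests cluster-wide?
    ceph health detail

    # On a suspect OSD's host, see what it is actually chewing on
    ceph daemon osd.3 dump_ops_in_flight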

Re: [ceph-users] MDS: How to increase timeouts?

2015-12-15 Thread Gregory Farnum
On Tue, Dec 15, 2015 at 10:21 AM, Burkhard Linke wrote: > Hi, > > I have a setup with two MDS in active/standby configuration. During times of > high network load / network congestion, the active MDS is bounced between > both instances: > > 1.

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-15 Thread Mykola Dvornik
I had more or less the same problem. This is most likely a synchronization issue. I have been deploying 16 OSDs, each running exactly the same hardware/software. The issue appeared randomly, with no obvious correlation with other stuff. The dirty workaround was to put time.sleep(5) before invoking
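
Outside of patching ceph-disk itself, roughly the same effect can be had from the shell before activating (the device is an example):

    # Let udev finish processing events, then retry partprobe a few times
    udevadm settle --timeout=10
    for i in 1 2 3 4 5; do
        partprobe /dev/sdb && break
        sleep 5
    done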

[ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-15 Thread Jesper Thorhauge
Hi, A fresh server install on one of my nodes (and yum update) left me with CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but "ceph-disk activate /dev/sda1" fails. I have traced the problem to
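
A sketch of how the journal symlink can be inspected on the failing OSD (the osd id and device are examples):

    # What does the symlink point at, and does that target exist?
    ls -l /var/lib/ceph/osd/ceph-0/journal

    # Which partition does ceph-disk itself consider the journal?
    ceph-disk list
    blkid /dev/sdc1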

Re: [ceph-users] MDS: How to increase timeouts?

2015-12-15 Thread Burkhard Linke
Hi, On 12/15/2015 10:22 PM, Gregory Farnum wrote: On Tue, Dec 15, 2015 at 10:21 AM, Burkhard Linke wrote: Hi, I have a setup with two MDS in active/standby configuration. During times of high network load / network congestion, the active MDS

[ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-15 Thread Matt Taylor
Hi all, After recently upgrading to CentOS 7.2 and installing a new Ceph cluster using Infernalis v9.2.0, I have noticed that disks are failing to prepare. I have observed the same behaviour over multiple Ceph servers when preparing disks. All the servers are identical. Disks are
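
A first hedged check when prepare fails like this is whether the kernel ever picked up the new partition table (the device is an example):

    # Re-read the partition table and compare with what the kernel has
    partprobe /dev/sdb
    partx --show /dev/sdb
    grep sdb /proc/partitions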

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-15 Thread Christian Balzer
Hello, On Wed, 16 Dec 2015 07:26:52 +0100 Mykola Dvornik wrote: > I had more or less the same problem. This is most likely a > synchronization issue. I have been deploying 16 OSDs, each running > exactly the same hardware/software. The issue appeared randomly with no > obvious correlation with other