Re: [ceph-users] VMware + CEPH Integration

2017-06-18 Thread Adrian Saul
> Hi Alex,
>
> Have you experienced any problems with timeouts in the monitor action in
> pacemaker? Although largely stable, every now and again in our cluster the
> FS and Exportfs resources time out in pacemaker. There's no mention of any
> slow requests or any peering, etc. from the ceph logs so

Re: [ceph-users] Mon Create currently at the state of probing

2017-06-18 Thread Sasha Litvak
Do you have a firewall enabled on the new server by any chance?

On Sun, Jun 18, 2017 at 8:18 PM, Jim Forde wrote:
> I have an eight node ceph cluster running Jewel 10.2.5.
>
> One Ceph-Deploy node. Four OSD nodes and three Monitor nodes.
>
> Ceph-Deploy node is r710T
>
> OSD’s are r710a,
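A quick way to rule out the firewall before digging into the mon logs is to check whether the monitor port is even reachable from another node. The sketch below is only an illustration (not from this thread): it assumes the default pre-msgr2 monitor port 6789, and the hostname is a placeholder for the monitor stuck in the probing state.

import socket

def mon_port_open(host, port=6789, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder hostname; replace with the monitor being probed.
    host = "r710g"
    print("%s:6789 reachable: %s" % (host, mon_port_open(host)))

If this prints False while the mon daemon is running on that host, a host firewall or wrong address in the monmap is the likely culprit rather than anything in the ceph logs.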

[ceph-users] Kernel RBD client talking to multiple storage clusters

2017-06-18 Thread Alex Gorbachev
Has anyone run into such a config, where a single client consumes storage from several ceph clusters that are unrelated to each other (different MONs, OSDs, and keys)? We have a Hammer and a Jewel cluster now, and this may be a way to have very clean migrations. Best regards, Alex Storcium -- -- Alex
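On the userspace side, the usual pattern for this is one conf file and one keyring per cluster, with one handle opened per cluster. The sketch below uses the librados Python bindings as an illustration; the file paths and the "hammer"/"jewel" nicknames are hypothetical, and the kernel RBD client would rely on the equivalent per-cluster conf files rather than this code.

import rados

def open_cluster(conffile, keyring):
    """Connect to one cluster using its own conf file and keyring."""
    cluster = rados.Rados(conffile=conffile, conf=dict(keyring=keyring))
    cluster.connect()
    return cluster

# Hypothetical paths: one conf/keyring pair per unrelated cluster.
hammer = open_cluster('/etc/ceph/hammer.conf',
                      '/etc/ceph/hammer.client.admin.keyring')
jewel = open_cluster('/etc/ceph/jewel.conf',
                     '/etc/ceph/jewel.client.admin.keyring')

# Each handle only ever sees its own MONs, OSDs and pools.
print("hammer fsid:", hammer.get_fsid())
print("jewel fsid:", jewel.get_fsid())

hammer.shutdown()
jewel.shutdown()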

[ceph-users] Mon Create currently at the state of probing

2017-06-18 Thread Jim Forde
I have an eight node ceph cluster running Jewel 10.2.5. One Ceph-Deploy node. Four OSD nodes and three Monitor nodes. The Ceph-Deploy node is r710T. OSDs are r710a, r710b, r710c, and r710d. Mons are r710e, r710f, and r710g. Name resolution is in the hosts file on each node. Successfully removed Monitor

Re: [ceph-users] What package I need to install to have CephFS kernel support on CentOS?

2017-06-18 Thread John Spray
On Fri, Jun 16, 2017 at 4:05 PM, Stéphane Klein wrote:
> Hi,
>
> I would like to use the CephFS kernel module on CentOS 7.
>
> I use the Atomic version of CentOS.
>
> I don't know where the CephFS kernel module rpm package is.
>
> I have installed:
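For context: the CephFS kernel client is not a separate rpm; it ships as the "ceph" module inside the kernel package itself, while the userspace tools (ceph-common, which provides the mount.ceph helper) come from the Ceph repositories. The following is only a hedged sketch of how one might check for the module from Python, assuming the usual /lib/modules layout on CentOS 7; Atomic Host may lay things out differently.

import os
import platform

def cephfs_module_on_disk():
    """Look for the ceph fs module under the running kernel's module tree."""
    release = platform.uname().release
    candidates = [
        "/lib/modules/%s/kernel/fs/ceph/ceph.ko" % release,
        "/lib/modules/%s/kernel/fs/ceph/ceph.ko.xz" % release,
    ]
    return any(os.path.exists(p) for p in candidates)

def cephfs_registered():
    """True if the 'ceph' filesystem is registered (module loaded or built in)."""
    with open("/proc/filesystems") as fh:
        return any(line.strip().split()[-1] == "ceph"
                   for line in fh if line.strip())

if __name__ == "__main__":
    print("ceph module on disk:", cephfs_module_on_disk())
    print("ceph filesystem registered:", cephfs_registered())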

Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-18 Thread Mehmet
Hi, we are currently using 3x Intel servers with 12 OSDs and one Supermicro with 24 OSDs in one Ceph cluster, with journals on NVMe per server. We have not seen any issues yet. Best, Mehmet

On 9 June 2017 at 19:24:40 MESZ, Deepak Naidu wrote:
> Thanks David for sharing your experience,

[ceph-users] FAILED assert(i.first <= i.last)

2017-06-18 Thread Peter Rosell
Hi, I have a small cluster with only three nodes, 4 OSDs + 3 OSDs. I have been running version 0.87.2 (Giant) for over 2.5 years, but a couple of days ago I upgraded to 0.94.10 (Hammer) and then up to 10.2.7 (Jewel). Both upgrades went great. I started with the monitors, then the OSDs, and finally the MDS. The log