Re: [ceph-users] Bluestore: v11.2.0 peering not happening when OSD is down

2017-01-23 Thread Muthusamy Muthiah
Hi Greg, we use EC 4+1 on a 5-node cluster in production deployments with filestore, and it does recovery and peering when one OSD goes down. After a few minutes, another OSD on the node where the faulty OSD exists takes over the PGs temporarily, and all PGs go to active+clean state. Cluster also
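
For reference, an EC 4+1 pool of the kind described can be set up roughly as follows (a minimal sketch using the Jewel/Kraken-era CLI; the profile name, pool name, and PG count are illustrative):

    # Define a 4+1 erasure-code profile with host as the failure domain
    ceph osd erasure-code-profile set ec41 k=4 m=1 ruleset-failure-domain=host
    # Create a pool that uses the profile (128 PGs here is arbitrary)
    ceph osd pool create ecpool 128 128 erasure ec41
    # After stopping one OSD, watch peering/recovery until PGs
    # return to active+clean
    ceph -w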

Re: [ceph-users] Ceph is rebalancing CRUSH on every osd add

2017-01-23 Thread Mehmet
I guess this is because you are always using the same root tree. On 23 January 2017 10:50:16 CET, Sascha Spreitzer wrote: >Hi all, > >I recognized Ceph is rebalancing the whole CRUSH map when I add OSDs >that should not affect any of my CRUSH rulesets. > >Is there a way to
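
One way to keep new OSDs out of the existing rulesets, as this reply suggests, is to place them under a separate CRUSH root that the existing rules never "take" from (a sketch; the bucket names and OSD id are illustrative):

    # Create a second root and a host bucket beneath it
    ceph osd crush add-bucket newroot root
    ceph osd crush add-bucket newhost1 host
    ceph osd crush move newhost1 root=newroot
    # Place the new OSD under that host with its intended weight
    ceph osd crush set osd.12 1.0 host=newhost1
    # Rules that begin with "take default" will not map PGs to newroot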

[ceph-users] [RBD][mirror]Can't remove mirrored image.

2017-01-23 Thread int32bit
Hi all, I'm a newcomer to Ceph. I deployed two Ceph clusters, one of which is used as a mirror cluster. When I created an image, I found that the primary image was stuck in 'up+stopped' status and the non-primary image's status was 'up+syncing'. I'm really not sure if this is an OK status, and I
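
For inspecting and cleaning up a mirrored image, the Jewel-era rbd CLI offers roughly the following (a sketch, assuming per-image mirroring; the pool and image names are illustrative):

    # Inspect mirroring state on each cluster
    rbd mirror pool status rbd --verbose
    rbd mirror image status rbd/myimage
    # On the primary cluster, disable mirroring before removing the image
    # (in pool mode, disabling the journaling feature has a similar effect)
    rbd mirror image disable rbd/myimage
    rbd rm rbd/myimage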

Re: [ceph-users] Issue with upgrade from 0.94.9 to 10.2.5

2017-01-23 Thread Mike Lovell
I was just testing an upgrade of some monitors in a test cluster from hammer (0.94.7) to jewel (10.2.5). After upgrading each of the first two monitors, I stopped and restarted a single OSD to cause changes in the maps. The same error messages showed up in ceph -w. I haven't dug into it much but
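
When upgrading monitors one at a time like this, it can help to confirm the mixed versions and the quorum state while the errors appear (a sketch; these commands are standard in the hammer/jewel CLI):

    # Show the version each monitor is running
    ceph tell mon.* version
    # Confirm all monitors are still in quorum
    ceph quorum_status
    # Overall cluster state while the maps churn
    ceph -s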

Re: [ceph-users] machine hangs & soft lockups with 10.2.2 / kernel 4.4.0

2017-01-23 Thread jiajia zhong
Try a newer kernel, like 4.8. On 2017-01-24 0:37 GMT+08:00, Matthew Vernon wrote: > Hi, > > We have a 9-node Ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu > Xenial). We're seeing both machines freezing (nothing in the logs on the > machine, which is entirely unresponsive to anything
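
On Ubuntu Xenial, a 4.8-series kernel was available through the hardware-enablement (HWE) stack; something along these lines would pull it in (a sketch, assuming the HWE metapackage carried a 4.8 kernel at the time):

    # Install the Xenial HWE kernel stack and reboot into it
    sudo apt-get update
    sudo apt-get install linux-generic-hwe-16.04
    sudo reboot
    # Afterwards, confirm the running kernel version
    uname -r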

Re: [ceph-users] machine hangs & soft lockups with 10.2.2 / kernel 4.4.0

2017-01-23 Thread Matthew Vernon
On 23/01/17 16:40, Tu Holmes wrote: > While I know this seems a silly question, are your monitoring nodes > spec'd the same? Oh, sorry, I should have said that. All 9 machines have OSDs on them (1 per disk); additionally, 3 of the nodes are also mons and 3 (a different 3) are RGWs. One of the freezing

Re: [ceph-users] machine hangs & soft lockups with 10.2.2 / kernel 4.4.0

2017-01-23 Thread Tu Holmes
While I know this seems a silly question, are your monitoring nodes spec'd the same? //Tu On Mon, Jan 23, 2017 at 8:38 AM, Matthew Vernon wrote: > Hi, > > We have a 9-node Ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu > Xenial). We're seeing both machines freezing

[ceph-users] machine hangs & soft lockups with 10.2.2 / kernel 4.4.0

2017-01-23 Thread Matthew Vernon
Hi, We have a 9-node Ceph cluster, running 10.2.2 and kernel 4.4.0 (Ubuntu Xenial). We're seeing both machines freezing (nothing in the logs on the machine, which is entirely unresponsive to anything except the power button) and suffering soft lockups. Has anyone seen anything similar? Googling hasn't found
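
When hunting soft lockups like these, it can help to make the kernel log full backtraces when one fires, and optionally panic so kdump can capture a crash dump (a sketch using standard Linux sysctls; values are illustrative):

    # Log backtraces from all CPUs when a soft lockup is detected
    sysctl kernel.softlockup_all_cpu_backtrace=1
    # Optionally panic on soft lockup so kdump preserves a crash dump
    sysctl kernel.softlockup_panic=1
    # Look for traces from earlier lockups
    dmesg | grep -i 'soft lockup'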

[ceph-users] Ceph is rebalancing CRUSH on every osd add

2017-01-23 Thread Sascha Spreitzer
Hi all, I recognized Ceph is rebalancing the whole CRUSH map when I add OSDs that should not affect any of my CRUSH rulesets. Is there a way to add OSDs to the CRUSH map without having the cluster change all the OSD mappings (rebalancing)? Or am I doing something terribly wrong? How does
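
Some remapping is expected whenever the CRUSH map changes under a ruleset, since CRUSH recomputes placements, but the impact of adding OSDs can be contained; a sketch of one common approach (the norebalance flag and crush commands are standard; the OSD id and weights are illustrative):

    # Pause rebalancing while the new OSDs are added
    ceph osd set norebalance
    # Have new OSDs register with zero CRUSH weight via ceph.conf:
    #   [osd]
    #   osd crush initial weight = 0
    # Then raise the weight in small steps to trickle data in
    ceph osd crush reweight osd.12 0.2
    # Re-enable rebalancing once everything is placed
    ceph osd unset norebalance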