Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Georgios Dimitrakakis
Oops... too fast to answer... G. On Mon, 11 May 2015 12:13:48 +0300, Timofey Titovets wrote: Hey! I caught it again. It's a kernel bug. The kernel crashed when I tried to map an RBD device with a map like the one above! Hooray! 2015-05-11 12:11 GMT+03:00 Timofey Titovets nefelim...@gmail.com: FYI and history. Rule:

Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Timofey Titovets
Hey! I caught it again. It's a kernel bug. The kernel crashed when I tried to map an RBD device with a map like the one above! Hooray! 2015-05-11 12:11 GMT+03:00 Timofey Titovets nefelim...@gmail.com: FYI and history. Rule: # rules rule replicated_ruleset { ruleset 0 type replicated min_size 1 max_size

Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Georgios Dimitrakakis
Timofey, glad that you've managed to get it working :-) Best, George FYI and history. Rule: # rules rule replicated_ruleset { ruleset 0 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type room step choose firstn 0 type rack step choose firstn 0

Re: [ceph-users] Crush rule freeze cluster

2015-05-11 Thread Timofey Titovets
FYI and history. Rule:

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step choose firstn 0 type room
	step choose firstn 0 type rack
	step choose firstn 0 type host
	step chooseleaf firstn 0 type osd
	step emit
}

And after reset
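[Editor's note, not part of the thread: a likely reason this rule misbehaved is that each chained `choose` step fans out from the buckets picked by the previous one, so room -> rack -> host -> osd multiplies the selected items far beyond the replica count. The conventional flat form lets CRUSH descend the hierarchy in a single step. This is a sketch only; the thread does not show the map Timofey ended up with:]

# Sketch of the standard single-step form (assumed fix, not confirmed in the thread)
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}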

Re: [ceph-users] Crush rule freeze cluster

2015-05-10 Thread Timofey Titovets
Georgios, oh, sorry for my poor English; maybe I expressed poorly what I want =] I know how to write a simple CRUSH rule and how to use it. I want several things: 1. To understand why my test node went offline after I injected the bad map. This is unexpected. 2. Maybe somebody can explain what and

Re: [ceph-users] Crush rule freeze cluster

2015-05-10 Thread Georgios Dimitrakakis
Timofey, maybe your best chance is to connect directly to the server and see what is going on. Then you can try to debug why the problem occurred. If you don't want to wait until tomorrow, you can try to see what is going on using the server's direct remote console access. The majority of the

Re: [ceph-users] Crush rule freeze cluster

2015-05-09 Thread Georgios Dimitrakakis
Hi Timofey, assuming that you have more than one OSD host and that the replication factor is equal to (or less than) the number of hosts, why don't you just change the CRUSH map to host replication? You just need to change the default CRUSH map rule from step chooseleaf firstn 0 type osd to
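[Editor's note, not part of the thread: the message is cut off, but the change George describes is the standard one of switching the rule's failure domain from osd to host so that replicas land on different hosts. A sketch of how that edit is usually applied to a decompiled CRUSH map:]

# Decompile the current map, edit, recompile, and inject it:
#     crushtool -d compiled.map -o map.txt
# In map.txt, change:
#     step chooseleaf firstn 0 type osd
# to:
      step chooseleaf firstn 0 type host
#     crushtool -c map.txt -o new.map
#     ceph osd setcrushmap -i new.map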

[ceph-users] Crush rule freeze cluster

2015-05-09 Thread Timofey Titovets
Hi list, I have been experimenting with CRUSH maps, trying to get RAID1-like behaviour (if the cluster has one working OSD node, duplicate the data across its local disks to avoid data loss in case of a local disk failure and to let clients keep working, because this is not a degraded state) (in the best case, I want