Oops... too fast to answer...
G.
On Mon, 11 May 2015 12:13:48 +0300, Timofey Titovets wrote:
Hey! I caught it again. It's a kernel bug. The kernel crashed when I tried to
map an rbd device with a map like the one above!
Hooray!
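(For context, a rough sketch of the reproduction being described here; the pool and image names are placeholders, not taken from the thread:)

rbd map rbd/test-image      # with the map above in place, this is the step that triggered the crash
dmesg | tail -n 50          # the kernel oops should show up here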
Timofey,
glad that you've managed to get it working :-)
Best,
George
FYI and history
Rule:
# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type room
        step choose firstn 0 type rack
        step choose firstn 0 type host
        step chooseleaf firstn 0 type osd
        step emit
}
And after reset
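A sketch of how a rule like this can be checked with crushtool before it is injected (the file names here are placeholders); a placement the rule cannot satisfy shows up in the test output instead of on the live cluster:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt          # decompile, then edit the rule in crush.txt
crushtool -c crush.txt -o crush.new          # recompile
crushtool -i crush.new --test --rule 0 --num-rep 2 --show-mappings | head
ceph osd setcrushmap -i crush.new            # inject only once the test mappings look sane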
Georgios, oh, sorry for my poor English _-_, maybe I expressed what I wanted
poorly =]
I know how to write a simple CRUSH rule and how to use it; I want several
things:
1. To understand why, after injecting the bad map, my test node went offline.
This is unexpected.
2. Maybe somebody can explain what and
Timofey,
maybe your best chance is to connect directly to the server and see
what is going on.
Then you can try to debug why the problem occurred. If you don't want to
wait until tomorrow,
you may try to see what is going on using the server's direct remote
console access.
The majority of the
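The sort of checks meant here would presumably be along these lines (the exact commands are an assumption, not quoted from the mail):

ceph -s                 # overall cluster and monitor health
ceph osd tree           # which hosts/OSDs are marked down
dmesg | tail -n 100     # what the kernel logged when the map was injected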
Hi Timofey,
assuming that you have more than one OSD host and that the replication
factor is equal to (or less than) the number of hosts, why don't you just
change the crushmap to host replication?
You just need to change the default CRUSHmap rule from
step chooseleaf firstn 0 type osd
to
step chooseleaf firstn 0 type host
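Concretely, assuming the map has already been decompiled to crush.txt (the file name is an assumption), the change amounts to one line:

sed -i 's/step chooseleaf firstn 0 type osd/step chooseleaf firstn 0 type host/' crush.txt

After recompiling and injecting the map, each replica is then placed on a different host rather than merely on a different OSD.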
Hi list,
I've been experimenting with CRUSH maps, trying to get RAID1-like
behaviour (if the cluster has only 1 working OSD node, duplicate the data
across its local disks, to avoid data loss if a local disk fails and to
let clients keep working, because this is not a degraded state)
(
in the best case, I want
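As an illustration of the osd-level placement being discussed, here is a sketch of a pure osd-replication rule (my wording, not Timofey's actual map); it keeps multiple copies on different disks even when only one host is up, at the cost of host-level failure protection:

cat >> crush.txt <<'EOF'
rule replicate_across_osds {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}
EOF
crushtool -c crush.txt -o crush.new
crushtool -i crush.new --test --rule 1 --num-rep 2 --show-mappings | head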