On Wed, Aug 24, 2016 at 5:17 PM, Ivan Grcic <[email protected]> wrote:
> Hi Ilya,
>
> there you go, and thank you for your time.
>
> BTW should one get a crushmap from osdmap doing something like this:
>
> osdmaptool --export-crush /tmp/crushmap /tmp/osdmap
> crushtool -c crushmap -o crushmap.3518
Yes. You can also fetch the current crushmap straight from the monitors:
$ ceph osd getcrushmap -o /tmp/crushmap
(note that this gives you the compiled binary map, so run it through
"crushtool -d" before editing)
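The full round-trip usually looks something like this (a sketch, assuming
a running cluster and write access to /tmp; the file names are just
illustrative):

```shell
# Grab the binary crushmap directly from the monitors
ceph osd getcrushmap -o /tmp/crushmap

# Decompile it into editable text
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

# ... edit /tmp/crushmap.txt (e.g. fix the bucket ids) ...

# Compile the edited text back into a binary map
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new

# Inject the new map into the cluster
ceph osd setcrushmap -i /tmp/crushmap.new
```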
>
> Until now I was just creating/compiling crushmaps, haven't played with
> osd maps yet.
You've got the following buckets in your crushmap:
...
host g6 {
	id -5		# do not change unnecessarily
	# weight 4.930
	alg straw
	hash 0	# rjenkins1
	item osd.18 weight 0.600
	item osd.19 weight 0.250
	item osd.20 weight 1.100
	item osd.21 weight 0.500
	item osd.22 weight 0.080
	item osd.23 weight 0.500
	item osd.24 weight 0.400
	item osd.25 weight 0.400
	item osd.26 weight 0.400
	item osd.27 weight 0.150
	item osd.28 weight 0.400
	item osd.29 weight 0.150
}
room kitchen {
	id -100		# do not change unnecessarily
	# weight 4.930
	alg straw
	hash 0	# rjenkins1
	item g6 weight 4.930
}
room bedroom {
	id -200		# do not change unnecessarily
	# weight 6.920
	alg straw
	hash 0	# rjenkins1
	item asus weight 2.500
	item urs weight 2.500
	item think weight 1.920
}
datacenter home {
	id -1000	# do not change unnecessarily <---
	# weight 11.850
	alg straw
	hash 0	# rjenkins1
	item kitchen weight 4.930
	item bedroom weight 6.920
}
root sonnenbergweg {
	id -1000000	# do not change unnecessarily <---
	# weight 11.850
	alg straw
	hash 0	# rjenkins1
	item home weight 11.850
}
The id of a bucket isn't just an arbitrary number - it indexes the
buckets array: bucket id -N occupies slot N-1, so an id of -1000000
forces that array to be sized for a million buckets, nearly all of them
empty. That makes for an ~4M crushmap (~8M for the in-memory
pointers-to-buckets array), which the kernel fails to allocate memory
for. The failure mode could have been slightly better, but this is a
borderline crushmap - we should probably add checks to "crushtool -c"
for this.
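To put rough numbers on that, a back-of-the-envelope sketch (the 8-byte
per-slot pointer size is an assumption for a 64-bit kernel; the maximum
id comes from "root sonnenbergweg { id -1000000 ... }" above):

```shell
# The buckets array needs one slot per possible bucket id, so its size
# is driven by the most negative id in the map, not by the bucket count.
MAX_BUCKET_ID=1000000   # from id -1000000 in the root bucket
PTR_SIZE=8              # assumed bytes per pointer on a 64-bit kernel

# Memory for the in-memory pointers-to-buckets array alone:
echo "$(( MAX_BUCKET_ID * PTR_SIZE )) bytes"
```

Renumbering the buckets to small consecutive ids (-1, -2, -3, ...) keeps
that array a few dozen entries instead of a million.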
Thanks,
Ilya
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com