> On Jul 17, 2016, at 4:21 PM, Ruben Kerkhof <[email protected]> wrote:
>
> Yes, that's it. You should see osd processes running, and the osd's
> should be marked 'up' when you run 'ceph osd tree'.
Looks like I’m good then:
[wdennis@ceph2 ~]$ sudo ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04408 root default
-2 0.01469     host ceph2
 0 0.00490         osd.0       up  1.00000          1.00000
 1 0.00490         osd.1       up  1.00000          1.00000
 2 0.00490         osd.2       up  1.00000          1.00000
-3 0.01469     host ceph3
 3 0.00490         osd.3       up  1.00000          1.00000
 4 0.00490         osd.4       up  1.00000          1.00000
 5 0.00490         osd.5       up  1.00000          1.00000
-4 0.01469     host ceph4
 6 0.00490         osd.6       up  1.00000          1.00000
 7 0.00490         osd.7       up  1.00000          1.00000
 8 0.00490         osd.8       up  1.00000          1.00000
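
As a quick sanity check, something like this should print nothing when all
OSDs are up (a rough sketch; it assumes the stock plain-text layout of
'ceph osd tree', where the status is the fourth field on osd rows):

  sudo ceph osd tree | awk '$3 ~ /^osd\./ && $4 != "up"'

Or just compare the counts that 'ceph osd stat' reports (e.g. "9 osds: 9 up, 9 in").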
>
> Just thought of a fourth issue: please make sure your disks are
> absolutely empty!
> I reused disks that had previously held ZFS, and ZFS leaves metadata
> behind at the end of the disk.
> This confuses blkid greatly (and me too).
> ceph-disk prepare --zap is not enough to resolve this.
>
> I've stuck the following in the kickstart file I use to prepare
> my OSD servers.
>
> %pre
> #!/bin/bash
> # whole disks only (the [a-z]$ pattern skips partitions like /dev/sda1)
> for disk in $(ls -1 /dev/sd* | awk '/[a-z]$/ {print}'); do
> test -b "$disk" || continue
> size_in_bytes=$(blockdev --getsize64 ${disk})
> offset=$((size_in_bytes - 8 * 1024 * 1024))
>
> echo "Wiping ${disk}"
> # wipe the first 8 MiB (partition table and filesystem superblocks)
> dd if=/dev/zero of=${disk} bs=1M count=8 status=none
> # wipe the last 8 MiB (ZFS leaves its labels at the end of the disk)
> dd if=/dev/zero of=${disk} bs=1M count=8 \
>     seek=${offset} oflag=seek_bytes status=none
> done
> %end
Again, good to know - thanks! The prior use was just the previous Ceph install,
no other fs use…
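
For what it's worth, a quick way to double-check that a disk is really blank
before re-provisioning (a sketch; /dev/sdX is a placeholder device name, and
wipefs with no options only lists signatures, it doesn't erase anything):

  sudo wipefs /dev/sdX   # lists leftover signatures, e.g. zfs_member
  sudo blkid /dev/sdX    # should print nothing on a clean disk

If wipefs still shows a signature, 'wipefs -a /dev/sdX' should clear it.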