Hi Federico,

Yep, I understand that. This is for legacy reasons: we already have 3 older
clusters running a similar setup with minor differences (hardware, etc.),
and this one is being set up to test something :(


Thanks

________________________________
From: Federico Lucifredi <feder...@redhat.com>
Sent: Friday, April 7, 2017 5:05:36 PM
To: Melzer Pinto
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] Ceph drives not detected

Hi Melzer,
 Somewhat pointing out the obvious, but just in case: Ceph is in rapid
development, and Giant is way behind the state of the art. If this is your
first Ceph experience, it is definitely recommended that you look at Jewel
or even Kraken -- in Linux terms, it is almost as if you were running a
2.4 kernel ;-)

 Good luck --F
_________________________________________
-- "'Problem' is a bleak word for challenge" - Richard Fish
(Federico L. Lucifredi) - federico at redhat.com - GnuPG 0x4A73884C


On Fri, Apr 7, 2017 at 11:19 AM, Melzer Pinto <melzer.pi...@mezocliq.com> wrote:
> Hello,
>
> I am setting up a 9-node Ceph cluster. For legacy reasons I'm using Ceph
> Giant (0.87) on Fedora 21. Each OSD node has 4x 4TB SATA drives with
> journals on a separate SSD. The servers are HP XL190 Gen 9 with the latest
> firmware.
>
> The issue I'm seeing is that only 2 drives get detected and mounted on
> almost all the nodes. During the initial creation all the drives were
> prepared and mounted, but now only 2 show up.
>
> Usually a partprobe forces the drives to come online, but in this case it
> doesn't.
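
In case it's useful: on a ceph-disk/udev based install, a typical way to
re-trigger detection looks roughly like this. This is only a sketch -- the
device name is an example, and whether ceph-disk supports activate-all
depends on the version shipped with 0.87:

# re-read the partition table of an OSD data disk (example device)
$ partprobe /dev/sdb
# replay the block-device "add" uevents so the ceph udev rules fire again
$ udevadm trigger --action=add --subsystem-match=block
# mount and start any prepared-but-inactive OSDs that ceph-disk can find
$ ceph-disk activate-all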
>
> On a reboot a different set of OSDs gets detected. For example, if osds 0
> and 1 are up, after a reboot osds 0 and 3 will be detected. On another
> reboot osds 1 and 2 may come up.
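
One thing worth checking here (a sketch, assuming the standard GPT layout
created by ceph-disk/ceph-deploy; the device and partition number are
examples) is whether the data partitions still carry the Ceph OSD partition
type GUID that the udev activation rules key on:

# show what ceph-disk can see on each disk
$ ceph-disk list
# inspect the GPT type GUID of an OSD data partition
$ sgdisk --info=1 /dev/sdb | grep "Partition GUID code"
# a Ceph OSD data partition should report
# 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D; anything generic here means udev
# will not auto-activate the OSD at boot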
>
>
> $ ceph osd tree
>
> # id    weight  type name       up/down reweight
> -1      131     root default
> -2      14.56           host xxx-a5-34
> 0       3.64                    osd.0   up      1
> 1       3.64                    osd.1   down    0
> 2       3.64                    osd.2   up      1
> 3       3.64                    osd.3   down    0
> -3      14.56           host xxx-a5-36
> 4       3.64                    osd.4   down    0
> 5       3.64                    osd.5   down    0
> 6       3.64                    osd.6   up      1
> 7       3.64                    osd.7   up      1
> -4      14.56           host xxx-a5-37
> 8       3.64                    osd.8   down    0
> 9       3.64                    osd.9   up      1
> 10      3.64                    osd.10  up      1
> 11      3.64                    osd.11  down    0
> -5      14.56           host xxx-b5-34
> 12      3.64                    osd.12  up      1
> 13      3.64                    osd.13  down    0
> 14      3.64                    osd.14  up      1
> 15      3.64                    osd.15  down    0
> -6      14.56           host xxx-b5-36
> 16      3.64                    osd.16  up      1
> 17      3.64                    osd.17  up      1
> 18      3.64                    osd.18  down    0
> 19      3.64                    osd.19  down    0
> -7      14.56           host xxx-b5-37
> 20      3.64                    osd.20  up      1
> 21      3.64                    osd.21  up      1
> 22      3.64                    osd.22  down    0
> 23      3.64                    osd.23  down    0
> -8      14.56           host xxx-c5-34
> 24      3.64                    osd.24  up      1
> 25      3.64                    osd.25  up      1
> 26      3.64                    osd.26  up      1
> 27      3.64                    osd.27  up      1
> -9      14.56           host xxx-c5-36
> 28      3.64                    osd.28  down    0
> 29      3.64                    osd.29  up      1
> 30      3.64                    osd.30  down    0
> 31      3.64                    osd.31  up      1
> -10     14.56           host xxx-c5-37
> 32      3.64                    osd.32  up      1
> 33      3.64                    osd.33  up      1
> 34      3.64                    osd.34  up      1
> 35      3.64                    osd.35  up      1
>
> Has anyone seen this problem before, and does anyone know what the issue
> could be?
>
> Thanks
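
To narrow down whether this is a partition-detection problem rather than an
OSD daemon problem, something like the following on a node with down OSDs
shows whether the data partitions were mounted at all (paths assume the
default ceph-disk layout):

# list block devices and where they are mounted
$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# see which OSD data directories actually have a filesystem mounted
$ mount | grep /var/lib/ceph/osd
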
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
