Hi Felix,
I have experience running Ceph on SATADOM on R630 servers, and it was
kind of bad because we got bad SATADOMs from Dell.
If you are going to use SATADOM, make sure to buy directly from an Innodisk
reseller and not from Dell.
We bought our SATADOMs from Dell and they degraded in 5-6 months. And
Thanks everybody for the answers, it really helped me a lot. So, to sum up,
these are the options that I have:
- OS in a RAID1.
  - PROS: the cluster is protected against OS failures. If one of these
    disks fails, it can easily be replaced because it is hot-swappable.
  - CONS: we
Hello,
I shall top-quote and summarize here.
Firstly, we have to consider that Ceph is deployed by people with a wide
variety of needs, budgets and, most of all, cluster sizes.
Wido has the pleasure (or is that a nightmare? ^o^) of dealing with a really
huge cluster, thousands of OSDs and an according
> On 13 Aug 2016 at 08:58, Georgios Dimitrakakis
> wrote:
>
>
>>> On 13 Aug 2016 at 03:19, Bill Sharer wrote:
>>>
>>> If all the system disk does is handle the OS (i.e. OSD journals are
>>> on dedicated or OSD drives as
If all the system disk does is handle the OS (i.e. OSD journals are on
dedicated or OSD drives as well), no problem. Just rebuild the system
and copy the ceph.conf back in when you re-install Ceph. Keep a spare
copy of your original fstab to keep your OSD filesystem mounts straight.
Just
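The spare-copy habit Bill describes can be sketched as a few shell commands. This is a minimal sketch run against scratch files so the flow is visible; on a real node the sources would be /etc/ceph/ceph.conf and /etc/fstab, and the backup directory (an assumption here) should live somewhere that survives an OS-disk failure, such as NFS or another host:

```shell
# Sketch with scratch stand-ins for the real paths.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/ceph" "$tmp/backup"
printf '[global]\nfsid = 00000000-0000-0000-0000-000000000000\n' \
    > "$tmp/etc/ceph/ceph.conf"        # stand-in for the real config
cp "$tmp/etc/ceph/ceph.conf" "$tmp/backup/ceph.conf"   # the spare copy
# ...after an OS reinstall, put it back before restarting the OSDs:
cp "$tmp/backup/ceph.conf" "$tmp/etc/ceph/ceph.conf"
```

The same two cp lines cover fstab; the point is simply that both files exist somewhere other than the disk that just died.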
On 12.08.2016 13:41, Félix Barbeira wrote:
Hi,
I'm planning to make a ceph cluster but I have a serious doubt. At
this moment we have ~10 servers DELL R730xd with 12x4TB SATA disks.
The official Ceph docs say:
"We recommend using a dedicated drive for the operating system and
software,
Nothing actually happens to your OSDs if your OS drive fails. To prevent
unnecessary backfilling off of the server with the dead OS drive, you would set
noout on the cluster, reinstall the OS on a good drive, install Ceph on it, and
then restart the server. The OSDs have all of the
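The flow in this message can be sketched as a dry run; `ceph osd set noout` and `ceph osd unset noout` are the standard CLI flags, and the `run` wrapper here only prints each command so the sketch is safe to read through:

```shell
# Dry-run sketch of the OS-drive replacement flow described above.
# Swap in run() { "$@"; } to execute against a live cluster.
run() { echo "+ $*"; }
run ceph osd set noout      # keep CRUSH from marking the node's OSDs out
# (power down, replace the OS drive, reinstall the OS and Ceph,
#  restore ceph.conf, then restart the server)
run ceph osd unset noout    # hand recovery decisions back to the cluster
```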
Hello Felix,
When you put your OS on a single drive and that drive fails, you will
lose all the OSDs on that machine, because the entire machine goes
down. The PGs that are now missing a replica are going to be replicated
again; in your case, the PGs that are on those 11 OSDs.
This rebuilding
Mirror the OS disks, use the remaining 10 disks for 10 OSDs.
> On Aug 12, 2016, at 7:41 AM, Félix Barbeira wrote:
>
> Hi,
>
> I'm planning to make a ceph cluster but I have a serious doubt. At this
> moment we have ~10 servers DELL R730xd with 12x4TB SATA disks. The official
> ceph
Hi Felix,
If you have an R730XD then you should have 2 x 2.5" slots on the back.
You can put SSDs in RAID1 for your OS there.
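One way to realize that mirror, sketched as a dry run: the device names /dev/sda and /dev/sdb are assumptions, and on an R730xd you may well prefer to build the RAID1 in the PERC controller rather than in software. The `run` wrapper only prints the command:

```shell
# Dry-run sketch: mirror two rear SSDs with mdadm (software RAID).
# /dev/sda and /dev/sdb are assumed device names for the rear slots.
run() { echo "+ $*"; }
run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```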
On Fri, Aug 12, 2016 at 12:41 PM, Félix Barbeira wrote:
> Hi,
>
> I'm planning to make a ceph cluster but I have a serious doubt. At this
> moment
Hi,
I'm planning to make a ceph cluster but I have a serious doubt. At this
moment we have ~10 servers DELL R730xd with 12x4TB SATA disks. The official
Ceph docs say:
"We recommend using a dedicated drive for the operating system and
software, and one drive for each Ceph OSD Daemon you run on