Hi,
thanks for the explanation, but...
Twisting the Ceph storage model the way you plan is not a good idea:
- You will decrease the level of support available (I'm not sure many
people build such an architecture)
- You are almost certainly going to face strange issues with Ceph OSDs
on top of HW RAID
- You
Hi Anthony,
> o I think you might have some misunderstandings about how Ceph works. Ceph
> is best deployed as a single cluster spanning multiple servers, generally
> at least 3. Is that your plan?
I want to deploy servers for 100 Windows 10 VDI guests each (at least 3
servers). I plan to sell servers
Oscar, a few thoughts:
o I think you might have some misunderstandings about how Ceph works. Ceph is
best deployed as a single cluster spanning multiple servers, generally at least
3. Is that your plan? It sort of sounds as though you're thinking of Ceph
managing only the drives local to each server.
Hi Brady,
For me it is very difficult to build a PoC because servers are very expensive.
Then, do I understand correctly that your advice is one RAID0 per 4 TB?
For a balanced configuration...
1 osd x 1 disk of 4 TB
1 osd x 2 disks of 2 TB
1 osd x 4 disks of 1 TB
Is that right?
Thanks a lot
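To put rough numbers on those three layouts, here is a back-of-the-envelope
sketch of my own (not from the thread), assuming independent disk failures
at an illustrative 3% annual rate and that RAID0 gives no redundancy. The
point is that striping several disks under one OSD multiplies that OSD's
failure probability without reducing the data lost per failure:

# Assumptions (mine): independent disk failures, and a RAID0-backed OSD
# is lost if ANY member disk is lost, so the full 4 TB behind it must
# then be re-replicated by Ceph.
p = 0.03  # illustrative annual failure probability per disk

for name, disks in [("1 osd x 1 disk of 4 TB", 1),
                    ("1 osd x 2 disks of 2 TB", 2),
                    ("1 osd x 4 disks of 1 TB", 4)]:
    p_osd_lost = 1 - (1 - p) ** disks   # P(at least one member dies)
    print(f"{name}: P(OSD lost per year) = {p_osd_lost:.3f}, 4 TB to recover")

With these assumed numbers the 4 x 1 TB RAID0 OSD is roughly four times
as likely to fail per year as the single 4 TB disk, for the same capacity.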
On 13 Nov. 2017
On 13/11/2017 at 15:47, Oscar Segarra wrote:
> Thanks Mark, Peter,
>
> For clarification, the RAID5 configuration means having multiple servers
> (2 or more), each with RAID5 and Ceph on top of it. Ceph will replicate
> data between servers. Of course, each server will have just one OSD daemon.
>
> Expanding this storage is quite a hassle, compared to just adding a few
> OSDs.
-----Original Message-----
From: Oscar Segarra [mailto:oscar.sega...@gmail.com]
Sent: Monday, 13 November 2017 15:26
To: Peter Maloney
Cc: ceph-users
Subject: Re: [ceph-users] HW Raid vs. Multiple OSD
Hi Peter,
Thanks a lot for your consideration in terms of storage consumption.
The other question concerns having one OSD vs. 8 OSDs... will 8 OSDs
consume more CPU than 1 OSD (RAID5)?
As I want to run compute and OSDs in the same box, the resources consumed
by the OSDs could be a handicap.
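A rough budgeting sketch for that CPU question. The host size and the
per-OSD reservations below are assumptions of mine (rule-of-thumb figures
often quoted for HDD-backed OSDs), not measurements; the point is just
that 8 OSDs reserve noticeably more than 1, but not necessarily a
prohibitive share of a VDI-sized host:

HOST_CORES = 32          # assumed host size for ~100 VDI guests
HOST_RAM_GB = 256

CORES_PER_OSD = 1.0      # assumption: ~1 core per HDD-backed OSD
RAM_GB_PER_OSD = 4.0     # assumption: a few GB per OSD daemon

for osds in (1, 8):
    cores_left = HOST_CORES - osds * CORES_PER_OSD
    ram_left = HOST_RAM_GB - osds * RAM_GB_PER_OSD
    print(f"{osds} OSD(s): {cores_left:.0f} cores and "
          f"{ram_left:.0f} GB RAM left for the VDI guests")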
Oscar Segarra wrote:
I'd like to hear your opinion about these two configurations:
1.- RAID5 with 8 disks (I will have 7TB but for me it is enough) + 1
OSD daemon
2.- 8 OSD daemons
You mean 1 OSD daemon on top of RAID5? I don't think I'd do that. You'll
probably want redundancy at Ceph's level.
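To put numbers on why redundancy at both layers is wasteful, a small
sketch, assuming 3 hosts with 8 x 1 TB disks each and Ceph replication
size 3, one replica per host (these parameters are illustrative, taken
from the sizes mentioned in this thread):

HOSTS = 3
DISKS = 8
DISK_TB = 1.0
REPLICAS = 3

# Option 1: RAID5 inside each host (one disk's worth of parity),
# with a single OSD on top of the array.
raw_raid5 = HOSTS * (DISKS - 1) * DISK_TB   # 21 TB raw
usable_raid5 = raw_raid5 / REPLICAS          # 7 TB usable

# Option 2: 8 plain OSDs per host, redundancy only at the Ceph level.
raw_plain = HOSTS * DISKS * DISK_TB          # 24 TB raw
usable_plain = raw_plain / REPLICAS          # 8 TB usable

print(f"RAID5 + 1 OSD/host: {usable_raid5:.0f} TB usable "
      f"(parity AND replication overhead)")
print(f"8 OSDs/host:        {usable_plain:.0f} TB usable "
      f"(replication overhead only)")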
Once you've replaced an OSD, you'll see it is quite simple... doing it
for a few is not much more work (you've scripted it, right?). I don't
see RAID as giving any benefit here at all. It's not tricky... it's
perfectly normal operation. Just get used to Ceph, and it'll be as
normal as replacing a failed disk.
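For reference, a minimal sketch of what such a replacement script might
look like, assuming a Luminous-or-later cluster. The OSD id and device
name are hypothetical, and the health/rebalance checks a real script
would need between steps are omitted:

import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def replace_osd(osd_id: int, new_device: str):
    # Mark the dead OSD out so Ceph starts re-replicating its PGs.
    run(["ceph", "osd", "out", str(osd_id)])
    # Stop the daemon (no check=True: it may already be dead).
    subprocess.run(["systemctl", "stop", f"ceph-osd@{osd_id}"])
    # Remove it from the CRUSH map, auth keys and OSD map in one go.
    run(["ceph", "osd", "purge", str(osd_id), "--yes-i-really-mean-it"])
    # Prepare and activate the replacement disk as a new OSD.
    run(["ceph-volume", "lvm", "create", "--data", new_device])

replace_osd(5, "/dev/sdx")   # hypothetical OSD id and device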
Hi,
I'm designing my infrastructure. I want to provide 8 TB (8 disks x 1 TB
each) of data per host, just for Microsoft Windows 10 VDI. In each host I
will have storage (Ceph OSD) and compute (KVM).
I'd like to hear your opinion about these two configurations:
1.- RAID5 with 8 disks (I will have 7 TB but for me it is enough) + 1
OSD daemon
2.- 8 OSD daemons