Regarding using spinning disks for journals, before I was able to put SSDs
in my deployment I came up wit ha somewhat novel journal setup that gave my
cluster way more life than having all the journals on a single disk, or
having the journal on the disk with the OSD. I called it interleaved
The biggest thing to be careful of with this kind of deployment is that
now a single drive failure will take out 2 OSDs instead of 1 which means
OSD failure rates and associated recovery traffic go up. I'm not sure
that's worth the trade-off...
Mark
On 07/08/2015 11:01 AM, Quentin Hartman
I don't see it as being any worse than having multiple journals on a single
drive. If your journal drive tanks, you're out X OSDs as well. It's
arguably better, since the number of affected OSDs per drive failure is
lower. Admittedly, neither deployment is ideal, but it an effective way to
get
Another issue is performance : you'll get 4x more IOPS with 4 x 2TB drives
than with one single 8TB.
So if you have a performance target your money might be better spent on
smaller drives
Regardless of the discussion if it is smart to have very large spinners:
Be aware that some of the
Lionel - thanks for the feedback ... inline below ...
On 7/2/15, 9:58 AM, Lionel Bouton
lionel+c...@bouton.namemailto:lionel+c...@bouton.name wrote:
Ouch. These spinning disks are probably a bottleneck: there are regular advices
on this list to use one DC SSD for 4 OSDs. You would probably
I'd def be happy to share what numbers I can get out of it. I'm still a
neophyte w/ Ceph, and learning how to operate it, set it up ... etc...
My limited performance testing to date has been with stock XFS ceph-disk
built filesystem for the OSDs, basic PG/CRUSH map stuff - and using dd across
Are you using the 4TB disks for the journal?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson shane_gib...@symantec.com
wrote:
I'd def be happy to share what numbers I can get out of it.
I would like to get some clarification on the size of the journal disks
that I should get for my new Ceph cluster I am planning. I read about the
journal settings on
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
but that didn't really clarify it for me that or I
I would probably go with less size osd disks, 4TB is to much to loss in
case of a broken disk, so maybe more osd daemons with less size, maybe 1TB
or 2TB size. 4:1 relationship is good enough, also i think that 200G disk
for the journals would be ok, so you can save some money there, the osd's
of
It also depends a lot on the size of your cluster ... I have a test cluster I'm
standing up right now with 60 nodes - a total of 600 OSDs each at 4 TB ... If I
lose 4 TB - that's a very small fraction of the data. My replicas are going to
be spread out across a lot of spindles, and
I'm interested in such a configuration, can you share some perfomance
test/numbers?
Thanks in advance,
Best regards,
*German*
2015-07-01 21:16 GMT-03:00 Shane Gibson shane_gib...@symantec.com:
It also depends a lot on the size of your cluster ... I have a test
cluster I'm standing up right
11 matches
Mail list logo