Re: [ceph-users] New Ceph cluster design

2018-03-13 Thread Christian Balzer
Hello, On Sat, 10 Mar 2018 16:14:53 +0100 Vincent Godin wrote: > Hi, > > As I understand it, you'll have one RAID1 of two SSDs for 12 HDDs. A > WAL is used for all writes on your host. This isn't Filestore; AFAIK with Bluestore the WAL/DB will be used for small writes only, to keep latency low
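
To illustrate the point, a minimal sketch (not Ceph source) of the decision Bluestore makes per write: only writes at or below the deferred-write threshold are journalled to the WAL/DB device first, larger writes go straight to the data disk. The 32 KiB threshold below mirrors bluestore_prefer_deferred_size_hdd on some releases, but the default has changed between versions, so treat the value as an assumption.

# Minimal sketch (not Ceph source) of BlueStore's deferred-write decision.
# Writes at or below the threshold are journalled to the WAL/DB device first
# (deferred) to hide HDD latency; larger writes go straight to the data disk.
# The 32 KiB value mirrors bluestore_prefer_deferred_size_hdd on some
# releases, but the default varies -- treat it as an assumption.

DEFERRED_THRESHOLD = 32 * 1024  # bytes; illustrative only

def goes_through_wal(write_size_bytes: int) -> bool:
    """Return True if a write of this size would be deferred via the WAL."""
    return write_size_bytes <= DEFERRED_THRESHOLD

if __name__ == "__main__":
    for size in (4 * 1024, 32 * 1024, 1024 * 1024):
        path = "WAL/DB (deferred)" if goes_through_wal(size) else "data HDD (direct)"
        print(f"{size // 1024:>5} KiB write -> {path}")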

Re: [ceph-users] New Ceph cluster design

2018-03-10 Thread Vincent Godin
Hi, As I understand it, you'll have one RAID1 of two SSDs for 12 HDDs. A WAL is used for all writes on your host. If you have good SSDs, they can handle 450-550 MBps. Your 12 SATA HDDs can handle 12 x 100 MBps, that is to say 1200 MBps. So your RAID 1 will be the bottleneck with this design. A
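
To make the arithmetic explicit, a quick back-of-the-envelope check using the assumed per-device figures from the post (~500 MBps per good SATA SSD, ~100 MBps sustained per SATA HDD):

# Back-of-the-envelope throughput check for the proposed layout.
# The per-device figures are the assumptions quoted in the post above.

ssd_write_mbps = 500        # one good SATA SSD, sequential write
hdd_write_mbps = 100        # one SATA HDD, sustained write
num_hdds = 12

raid1_write_mbps = ssd_write_mbps      # RAID1 mirrors every write, so it is
                                       # limited to a single SSD's write rate
hdd_aggregate_mbps = num_hdds * hdd_write_mbps

print(f"RAID1 SSD WAL/DB ceiling : {raid1_write_mbps} MBps")
print(f"12 HDD aggregate         : {hdd_aggregate_mbps} MBps")
print(f"SSD mirror is the bottleneck: {raid1_write_mbps < hdd_aggregate_mbps}")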

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Jonathan Proulx
On Fri, Mar 09, 2018 at 03:06:15PM +0100, Ján Senko wrote: :We are looking at 100+ nodes. : :I know that the Ceph official recommendation is 1GB of RAM per 1TB of disk. :Was this ever changed since 2015? :CERN is definitely using less (source:

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread John Petrini
What you linked was only a two-week test. When Ceph is healthy it does not need a lot of RAM; it's during recovery that OOM appears, and that's when you'll find yourself upgrading the RAM on your nodes just to stop the OOM kills and allow the cluster to recover. Look through the mailing list and you'll see

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Ján Senko
We are looking at 100+ nodes. I know that the official Ceph recommendation is 1GB of RAM per 1TB of disk. Has this changed since 2015? CERN is definitely using less (source: https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf). RedHat suggests using 16GB + 2GB/HDD as the
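
For the proposed 12 x 10TB node, the two rules of thumb quoted above work out as follows (a quick sketch; both are guidelines, not hard limits):

# Compare the two memory rules of thumb quoted in the thread for the
# proposed node (12 x 10 TB HDD OSDs, 64 GB RAM).

hdds_per_node = 12
tb_per_hdd = 10
proposed_ram_gb = 64

rule_1gb_per_tb = hdds_per_node * tb_per_hdd      # 1 GB RAM per TB of disk
rule_redhat = 16 + 2 * hdds_per_node              # 16 GB base + 2 GB per HDD

print(f"1 GB/TB rule     : {rule_1gb_per_tb} GB per node")
print(f"16 GB + 2 GB/HDD : {rule_redhat} GB per node")
print(f"proposed 64 GB meets the 16GB+2GB/HDD rule: {proposed_ram_gb >= rule_redhat}")
print(f"proposed 64 GB meets the 1 GB/TB rule     : {proposed_ram_gb >= rule_1gb_per_tb}")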

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Brady Deetz
I'd increase RAM. 1GB per 1TB of disk is the recommendation. Another thing you need to consider is your node density. 12x10TB is a lot of data to have to rebalance if you aren't going to have 20+ nodes. I have 17 nodes with 24x6TB disks each, and rebuilds can take what seems like an eternity. It may
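
A rough feel for why node density matters during rebalancing (illustrative assumptions only; real recovery speed depends on CRUSH rules, backfill throttles and client load):

# Rough estimate of how much data has to move when one dense node is lost,
# and how long that takes at a given aggregate recovery rate.
# All numbers are illustrative assumptions, not measurements.

node_capacity_tb = 12 * 10          # 12 x 10 TB per node
fill_ratio = 0.7                    # assume cluster ~70% full
recovery_rate_mbps = 1000           # assumed aggregate backfill rate, MB/s

data_to_move_tb = node_capacity_tb * fill_ratio
seconds = data_to_move_tb * 1e6 / recovery_rate_mbps   # TB -> MB

print(f"data to re-replicate : {data_to_move_tb:.0f} TB")
print(f"at {recovery_rate_mbps} MB/s aggregate : {seconds / 3600:.1f} hours")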

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Tristan Le Toullec
Hi, same experience here: we had trouble with the out-of-memory killer terminating OSD processes on hosts with ten 8 TB disks. After an upgrade to 128 GB of RAM these troubles disappeared. The memory recommendations aren't overestimated. Regards, Tristan On 09/03/2018 11:31, Eino Tuominen wrote: On 09/03/2018 12.16, Ján

Re: [ceph-users] New Ceph cluster design

2018-03-09 Thread Eino Tuominen
On 09/03/2018 12.16, Ján Senko wrote: I am planning a new Ceph deployment and I have a few questions that I could not find good answers to yet. Our nodes will be using Xeon-D machines with 12 HDDs and 64GB of RAM each. Our target is to use 10TB drives for 120TB of capacity per node. We ran into

[ceph-users] New Ceph cluster design

2018-03-09 Thread Ján Senko
I am planning a new Ceph deployment and I have a few questions that I could not find good answers to yet. Our nodes will be using Xeon-D machines with 12 HDDs and 64GB of RAM each. Our target is to use 10TB drives for 120TB of capacity per node. 1. We want to have a small amount of SSDs in the machines.
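
On the SSD question, a rough WAL/DB sizing sketch based on the commonly cited guideline of reserving a few percent of each OSD's capacity for block.db; the 2% ratio and the 2 GB of WAL per OSD below are assumptions for illustration, not official figures, and the right ratio depends on the workload (RGW needs more than RBD):

# Rough WAL/DB sizing for one node: 12 x 10 TB HDD OSDs sharing SSD space.
# The percentage-of-OSD-capacity guideline for block.db is a rule of thumb;
# the 2% ratio and 2 GB WAL per OSD used here are assumptions.

hdds_per_node = 12
osd_size_tb = 10
db_ratio = 0.02                      # assumed 2% of OSD capacity for block.db
wal_gb_per_osd = 2                   # assumed WAL allowance per OSD

db_gb_per_osd = osd_size_tb * 1000 * db_ratio
ssd_needed_gb = hdds_per_node * (db_gb_per_osd + wal_gb_per_osd)

print(f"block.db per OSD : {db_gb_per_osd:.0f} GB")
print(f"SSD needed/node  : {ssd_needed_gb:.0f} GB (mirrored if using RAID1)")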