Re: [ceph-users] Ideal Bluestore setup

2018-01-24 Thread Alex Gorbachev
Hi Ean,

I don't have any experience with fewer than 8 drives per OSD node, and
the right setup depends heavily on what you want to use the cluster
for.  Assuming a small proof of concept without much of a performance
requirement (given the low spindle count), I would do the following:

On Mon, Jan 22, 2018 at 1:28 PM, Ean Price  wrote:
> Hi folks,
>
> I’m not sure the ideal setup for bluestore given the set of hardware I have 
> to work with so I figured I would ask the collective wisdom of the ceph 
> community. It is a small deployment so the hardware is not all that 
> impressive, but I’d still like to get some feedback on what would be the 
> preferred and most maintainable setup.
>
> We have 5 ceph OSD hosts with the following setup:
>
> 16 GB RAM
> 1 PCI-E NVRAM 128GB
> 1 SSD 250 GB
> 2 HDD 1 TB each
>
> I was thinking to put:
>
> OS on NVRAM with 2x20 GB partitions for bluestore’s WAL and rocksdb

I would put the OS on the SSD rather than colocating it with the
WAL/DB, and put the WAL/DB on the NVMe device, since it is the fastest
drive you have.
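
As a rough sketch of that layout (device names such as /dev/sda for
the SSD, /dev/sdb and /dev/sdc for the HDDs, and /dev/nvme0n1 are
assumptions for illustration, and this presumes a Luminous-era
ceph-volume), something like:

  # OS installed on the 250 GB SSD; the 128 GB NVMe device is left
  # entirely for the BlueStore DB/WAL.

  # Two ~20 GB DB partitions on the NVMe device, one per HDD-backed OSD:
  sgdisk -n 1:0:+20G -c 1:osd0-db /dev/nvme0n1
  sgdisk -n 2:0:+20G -c 2:osd1-db /dev/nvme0n1

  # One BlueStore OSD per HDD.  With only --block.db given, the WAL is
  # placed inside the DB partition, i.e. also on the NVMe device:
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2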

> And either use bcache with the SSD to cache the 2x HDDs or possibly use 
> Ceph’s built in cache tiering.

Ceph cache tiering is likely beyond what a setup this size calls for,
and it requires a very clear understanding of the workload.  I would
not use it.

I have no experience with bcache, but it also seems like overkill for
a small setup like this.  Simple = stable.

>
> My questions are:
>
> 1) is a 20GB logical volume adequate for the WAL and db with a 1TB HDD or 
> should it be larger?

I believe so, yes.  If the DB outgrows that, the overflow simply
spills over onto the HDD (at HDD speed), so nothing breaks.
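
If you later want to check whether the DB has outgrown its partition,
the BlueFS counters on the OSD admin socket will show it.  A sketch
(osd.0 is a placeholder, and the counter names are from memory):

  # Run on the OSD host; a nonzero slow_used_bytes means DB data has
  # spilled over onto the HDD:
  ceph daemon osd.0 perf dump | grep -E 'db_used_bytes|slow_used_bytes'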

>
> 2) or - should I put the rocksdb on the SSD and just leave the WAL on the 
> NVRAM device?

You are likely better off with both the WAL and the DB on the NVRAM
device.
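
Once the OSDs are built, you can sanity-check where everything
actually landed (osd id 0 is a placeholder, and the exact metadata
fields vary a little between releases):

  # Lists the block, db and wal devices backing each OSD on this host:
  ceph-volume lvm list

  # Shows the DB/WAL partition paths the OSD itself reports:
  ceph osd metadata 0 | grep -i bluefs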

>
> 3) Lastly, what are the downsides of bcache vs Ceph’s cache tiering? I see 
> both are used in production so I’m not sure which is the better choice for us.
>
> Performance is, of course, important but maintainability and stability are 
> definitely more important.

I would avoid both bcache and cache tiering to keep the configuration
simple, and I would seriously consider larger nodes and more OSD
drives per node if possible.

HTH,
--
Alex Gorbachev
Storcium

>
> Thanks in advance for your advice!
>
> Best,
> Ean
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

