Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-24 Thread Eugen Block
Hi, I don't know why, but I noticed in the ceph-volume-systemd.log (above in bold) that there are 2 different lines corresponding to lvm-1 (normally associated with osd.1)? One seems to have the correct id, while the other has a bad one... and it looks like it is trying to start
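
A quick way to check for duplicate or stale activation units on the affected node is to compare what systemd has registered against what ceph-volume itself reports. This is only a minimal sketch; the exact unit names depend on your node, and the commented-out disable step is an assumption to verify before running:

  # List the ceph-volume activation units systemd knows about
  $ systemctl list-units --all 'ceph-volume@*'
  # Compare against the OSDs ceph-volume actually manages
  $ ceph-volume lvm list
  # If an old unit still references a wrong fsid for osd.1, it could be
  # disabled so that only the correct lvm-1-<fsid> unit remains (verify first):
  # systemctl disable ceph-volume@lvm-1-<stale-fsid>.service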

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-24 Thread Hervé Ballans
On 23/08/2018 at 18:44, Alfredo Deza wrote: ceph-volume-systemd.log (extract) [2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received: lvm-6-ba351d69-5c48-418e-a377-4034f503af93 [2018-08-20 11:26:26,386][systemd][INFO ] raw systemd input received:

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 11:32 AM, Hervé Ballans wrote: > On 23/08/2018 at 16:13, Alfredo Deza wrote: > > Do you mean that, at this stage, I must declare the UUID paths directly > as the value of --block.db (i.e. replace /dev/nvme0n1p1 with its PARTUUID)? > > No, this all looks

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Hervé Ballans
On 23/08/2018 at 16:13, Alfredo Deza wrote: Do you mean that, at this stage, I must declare the UUID paths directly as the value of --block.db (i.e. replace /dev/nvme0n1p1 with its PARTUUID)? No, this all looks correct. How do the ceph-volume.log and ceph-volume-systemd.log look
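
For reference, a minimal sketch of how the PARTUUID of a DB partition can be looked up (the device name /dev/nvme0n1p1 is just the example used in this thread):

  # Print the PARTUUID of the partition currently passed to --block.db
  $ blkid /dev/nvme0n1p1
  # Or list the stable symlinks and see which one points at it
  $ ls -l /dev/disk/by-partuuid/ | grep nvme0n1p1
  # The stable path /dev/disk/by-partuuid/<partuuid> can then be used
  # in place of the kernel name if needed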

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 9:56 AM, Hervé Ballans wrote: > On 23/08/2018 at 15:20, Alfredo Deza wrote: > > Thanks Alfredo for your reply. I'm using the latest version of Luminous > (12.2.7) and ceph-deploy (2.0.1). > I have no problem creating my OSDs, that works perfectly. > My issue only

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Hervé Ballans
On 23/08/2018 at 15:20, Alfredo Deza wrote: Thanks Alfredo for your reply. I'm using the latest version of Luminous (12.2.7) and ceph-deploy (2.0.1). I have no problem creating my OSDs, that works perfectly. My issue only concerns the mount names of the NVMe partitions

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 9:12 AM, Hervé Ballans wrote: > On 23/08/2018 at 12:51, Alfredo Deza wrote: >> >> On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans >> wrote: >>> >>> Hello all, >>> >>> I would like to continue a thread that dates back to last May (sorry if >>> this >>> is not a good

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Hervé Ballans
On 23/08/2018 at 12:51, Alfredo Deza wrote: On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans wrote: Hello all, I would like to continue a thread that dates back to last May (sorry if this is not good practice?..) Thanks David for your useful tips on this thread. On my side, I created my

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Alfredo Deza
On Thu, Aug 23, 2018 at 5:42 AM, Hervé Ballans wrote: > Hello all, > > I would like to continue a thread that dates back to last May (sorry if this > is not good practice?..) > > Thanks David for your useful tips on this thread. > On my side, I created my OSDs with ceph-deploy (in place of

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-08-23 Thread Hervé Ballans
Hello all, I would like to continue a thread that dates back to last May (sorry if this is not good practice?..) Thanks David for your useful tips on this thread. On my side, I created my OSDs with ceph-deploy (in place of ceph-volume) [1], but this is exactly the same context as this
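
For context, the ceph-deploy 2.0.1 pattern being discussed looks roughly like the sketch below; the host and device names are placeholders, and each OSD is given its own pre-created DB partition on the shared NVMe:

  # One DB partition per OSD on the shared NVMe device
  $ ceph-deploy osd create --bluestore --data /dev/sdb --block-db /dev/nvme0n1p1 osd-node1
  $ ceph-deploy osd create --bluestore --data /dev/sdc --block-db /dev/nvme0n1p2 osd-node1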

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread Oliver Schulz
Thanks! On 12.05.2018 21:17, David Turner wrote: I would suggest 2GB partitions for the WALs and 150GB OSDs to make an SSD-only pool for the fs metadata pool. I know that doesn't use the whole disk, but there's no need or reason to. By under-provisioning the NVMe it just adds that much

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread David Turner
I would suggest 2GB partitions for the WALs and 150GB OSDs to make an SSD-only pool for the fs metadata pool. I know that doesn't use the whole disk, but there's no need or reason to. By under-provisioning the NVMe it just adds that much more longevity to the life of the drive. You cannot
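
A minimal sketch of what that layout could look like with sgdisk, assuming four HDD-backed OSDs and leaving the rest of the NVMe unprovisioned (device name and partition numbers are examples):

  # 2GB WAL partitions, one per HDD-backed OSD
  $ sgdisk --new=1:0:+2G --change-name=1:osd-wal-1 /dev/nvme0n1
  $ sgdisk --new=2:0:+2G --change-name=2:osd-wal-2 /dev/nvme0n1
  $ sgdisk --new=3:0:+2G --change-name=3:osd-wal-3 /dev/nvme0n1
  $ sgdisk --new=4:0:+2G --change-name=4:osd-wal-4 /dev/nvme0n1
  # One 150GB partition for an SSD-only OSD (e.g. the fs metadata pool)
  $ sgdisk --new=5:0:+150G --change-name=5:ssd-osd /dev/nvme0n1
  # The remaining space is deliberately left unused for endurance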

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread Oliver Schulz
Dear David, On 11.05.2018 22:10, David Turner wrote: As for whether you should do WAL only on the NVMe vs. use a filestore journal, that depends on your write patterns, use case, etc. We mostly use CephFS, for scientific data processing. It's mainly larger files (10 MB to 10 GB, but sometimes also a

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread David Turner
As for whether you should do WAL only on the NVMe vs. use a filestore journal, that depends on your write patterns, use case, etc. In my clusters with 10TB disks I use 2GB partitions for the WAL and leave the DB on the HDD with the data. Those are archival RGW use cases and that works fine for the

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Oliver Schulz
Dear David, thanks a lot for the detailed answer(s) and clarifications! Can I ask just a few more questions? On 11.05.2018 18:46, David Turner wrote: partitions is 10GB per 1TB of OSD.  If your OSD is a 4TB disk you should be looking closer to a 40GB block.db partition.  If your block.db
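
To make the quoted rule of thumb concrete, a small sketch that derives the suggested block.db size from the size of the data disk (the 10GB-per-1TB ratio comes from the message above; the device name is an example):

  # ~10GB of block.db per 1TB of data disk
  $ blockdev --getsize64 /dev/sdb | awk '{printf "suggested block.db: ~%.0f GB\n", $1 / 1e12 * 10}'
  # e.g. a 4TB data disk works out to roughly a 40GB block.db partition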

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread David Turner
Nope, only detriment. If you lost sdb, you would have to rebuild 2 OSDs instead of just 1. Also, you add more complexity, as ceph-volume would much prefer to just take sda and make it the OSD with all data/db/wal, without partitions or anything. On Fri, May 11, 2018 at 1:06 PM Jacob DeGlopper

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Jacob DeGlopper
Thanks, this is useful in general. I have a semi-related question: given an OSD server with multiple SSDs or NVMe devices, is there an advantage to putting wal/db on a different device of the same speed? For example, data on sda1, matching wal/db on sdb1, and then data on sdb2 and wal/db

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread David Turner
Note that instead of including the step to use the UUID in the OSD creation like [1] this, I opted to separate it out in those instructions. That was to simplify the commands and to give people an idea of how to fix their OSDs if they created them using the device name instead of the UUID. It would
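
A quick, non-destructive way to see whether an existing OSD was created against a kernel device name or a stable path is to inspect its block.db symlink; a sketch, assuming the default data directory layout:

  # Show what each OSD's block.db symlink points at (kernel name vs. by-partuuid)
  $ ls -l /var/lib/ceph/osd/ceph-*/block.db
  # Cross-check against the stable symlinks for the NVMe partitions
  $ ls -l /dev/disk/by-partuuid/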

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread David Turner
This thread is off in left field and needs to be brought back to how things work. While multiple OSDs can use the same device for block/wal partitions, they each need their own partition. osd.0 could use nvme0n1p1, osd.2 could use nvme0n1p2, etc. You cannot use the same partition for each OSD.
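
A minimal sketch of that layout with ceph-volume, assuming the DB partitions already exist and using placeholder device names (each OSD gets its own NVMe partition, never a shared one):

  # The OSD on /dev/sdb gets nvme0n1p1, the OSD on /dev/sdc gets nvme0n1p2, and so on
  $ ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
  $ ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2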

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread João Paulo Sacchetto Ribeiro Bastos
Actually, if you go to https://ceph.com/community/new-luminous-bluestore/ you will see that the DB/WAL work on an XFS partition, while the data itself goes on a raw block device. Also, I told you the wrong command in the last mail. When I said --osd-db it should have been --block-db. On Fri, May 11, 2018 at 11:51

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Oliver Schulz
Hi, thanks for the advice! I'm a bit confused now, though. ;-) I thought DB and WAL were supposed to go on raw block devices, not file systems? Cheers, Oliver On 11.05.2018 16:01, João Paulo Sacchetto Ribeiro Bastos wrote: Hello Oliver, As far as I know, you can use the same DB device

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Oliver Schulz
Hi Jaroslaw, I tried that (using /dev/nvme0n1), but no luck: [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb --block.wal /dev/nvme0n1 When I run "/usr/sbin/ceph-volume ..." on the storage node, it
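
For comparison with the failing command above, the pattern the rest of the thread converges on passes a dedicated per-OSD partition rather than the whole NVMe device; a sketch, with the partition name as an assumption:

  # Use a per-OSD partition instead of the whole /dev/nvme0n1 device
  $ /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb --block.wal /dev/nvme0n1p1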

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread João Paulo Sacchetto Ribeiro Bastos
Hello Oliver, As far as I know, you can use the same DB device for about 4 or 5 OSDs, you just need to be aware of the free space. I'm also building a bluestore cluster, and our DB and WAL will be on the same SSD of about 480GB serving 4 OSD HDDs of 4 TB each. About the sizes, it's just a
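
A sketch of how such a shared 480GB SSD could be carved into four DB partitions, one per 4TB HDD OSD (sizes and the device name are assumptions; when only a DB partition is given, BlueStore keeps the WAL inside it):

  # Four ~110GB DB partitions on the shared SSD (placeholder device /dev/sdx)
  $ for i in 1 2 3 4; do sgdisk --new=$i:0:+110G --change-name=$i:osd-db-$i /dev/sdx; done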

[ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-11 Thread Oliver Schulz
Dear Ceph Experts, I'm trying to set up some new OSD storage nodes, now with bluestore (our existing nodes still use filestore). I'm a bit unclear on how to specify WAL/DB devices: Can several OSDs share one WAL/DB partition? So, can I do ceph-deploy osd create --bluestore