Re: [ceph-users] ceph-ansible / block-db block-wal

2019-11-01 Thread solarflow99
ceph-ansible is able to find those on its own now. Try just not specifying
"devices" and "dedicated_devices" like before; you'll see in the osds.yml
file that it's changed.
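
For illustration, a minimal sketch of what that newer osds.yml might look
like (variable names as used in recent ceph-ansible branches; treat this as
an assumption to check against your branch's osds.yml.sample):

devices:
  - /dev/sdb          # HDDs and SSDs can be listed together;
  - /dev/sdm          # ceph-volume's batch mode then places the
                      # DB/WAL on the fast devices by itself
# or let ansible discover empty disks instead:
# osd_auto_discovery: true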


On Wed, Oct 30, 2019 at 3:47 AM Lars Täuber  wrote:

> I don't use ansible anymore, but this was my config for the host onode2:
>
> ./host_vars/onode2.yml:
>
> lvm_volumes:
>   - data: /dev/sdb
>     db: '1'
>     db_vg: host-2-db
>   - data: /dev/sdc
>     db: '2'
>     db_vg: host-2-db
>   - data: /dev/sde
>     db: '3'
>     db_vg: host-2-db
>   - data: /dev/sdf
>     db: '4'
>     db_vg: host-2-db
> …
>
> One config file per host. The LVs were created by hand on a PV on a RAID1
> over two SSDs.
> The hosts had empty slots for HDDs to be bought later, so I had to
> "partition" the PV by hand, because ansible would use the whole RAID1 for
> only the HDDs present.
>
> It is said that only certain sizes of DB & WAL partitions are sensible
> (reportedly because of RocksDB's level sizes). I now use 58 GiB LVs.
> The remaining space in the RAID1 is used for a faster OSD.
>
>
> Lars
>
>
> Wed, 30 Oct 2019 10:02:23
> CUZA Frédéric ==> "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>:
> > Hi Everyone,
> >
> > Does anyone know how to specify the block-db and block-wal devices in
> > ansible?
> > In ceph-deploy it is quite easy:
> > ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12
> > --block-wal /dev/sdn12 --bluestore
> >
> > On my data nodes I have 12 HDDs and 2 SSDs; I use those SSDs for
> > block-db and block-wal.
> > How do I indicate, for each OSD, which partition to use?
> >
> > And finally, how do you handle the deployment if your data nodes are set
> > up differently: SSDs on sdm and sdn on one host, and SSDs on sda and sdb
> > on another?
> >
> > Thank you for your help.
> >
> > Regards,
>
>
> --
> Informationstechnologie
> Berlin-Brandenburgische Akademie der Wissenschaften
> Jägerstraße 22-23  10117 Berlin
> Tel.: +49 30 20370-352   http://www.bbaw.de


Re: [ceph-users] ceph-ansible / block-db block-wal

2019-10-30 Thread Lars Täuber
I don't use ansible anymore, but this was my config for the host onode2:

./host_vars/onode2.yml:

lvm_volumes:
  - data: /dev/sdb
    db: '1'
    db_vg: host-2-db
  - data: /dev/sdc
    db: '2'
    db_vg: host-2-db
  - data: /dev/sde
    db: '3'
    db_vg: host-2-db
  - data: /dev/sdf
    db: '4'
    db_vg: host-2-db
…

One config file per host. The LVs were created by hand on a PV on a RAID1
over two SSDs.
The hosts had empty slots for HDDs to be bought later, so I had to
"partition" the PV by hand, because ansible would use the whole RAID1 for
only the HDDs present.
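
For reference, that manual layout could also be sketched with the stock
ansible lvg/lvol modules. This only illustrates the hand-made step
described above; the RAID1 device name /dev/md0 is an assumption, and the
VG/LV names follow the convention used here:

- name: create the DB volume group on the SSD RAID1
  lvg:
    vg: host-2-db
    pvs: /dev/md0            # assumed name of the RAID1 device
- name: carve one fixed-size DB LV per planned OSD slot
  lvol:
    vg: host-2-db
    lv: "{{ item }}"
    size: 58g
  loop: ['1', '2', '3', '4']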

It is said that only certain sizes of DB & WAL partitions are sensible
(reportedly because of RocksDB's level sizes). I now use 58 GiB LVs.
The remaining space in the RAID1 is used for a faster OSD.
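
Since the question also asked about block-wal: lvm_volumes accepts wal and
wal_vg keys in the same style as db/db_vg. A hypothetical entry (the wal-1
and host-2-wal names are made up for illustration):

lvm_volumes:
  - data: /dev/sdb
    db: '1'
    db_vg: host-2-db
    wal: wal-1               # assumed LV name
    wal_vg: host-2-wal       # assumed VG name

If the WAL is to live on the same device as the DB, a separate wal entry is
normally unnecessary; BlueStore places the WAL with the DB by default.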


Lars


Wed, 30 Oct 2019 10:02:23
CUZA Frédéric ==> "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>:
> Hi Everyone,
>
> Does anyone know how to specify the block-db and block-wal devices in
> ansible?
> In ceph-deploy it is quite easy:
> ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12
> --block-wal /dev/sdn12 --bluestore
>
> On my data nodes I have 12 HDDs and 2 SSDs; I use those SSDs for
> block-db and block-wal.
> How do I indicate, for each OSD, which partition to use?
>
> And finally, how do you handle the deployment if your data nodes are set
> up differently: SSDs on sdm and sdn on one host, and SSDs on sda and sdb
> on another?
>
> Thank you for your help.
>
> Regards,


-- 
Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23  10117 Berlin
Tel.: +49 30 20370-352   http://www.bbaw.de


[ceph-users] ceph-ansible / block-db block-wal

2019-10-30 Thread CUZA Frédéric
Hi Everyone,

Does anyone know how to specify the block-db and block-wal devices in
ansible?
In ceph-deploy it is quite easy:
ceph-deploy osd create osd_host08 --data /dev/sdl --block-db /dev/sdm12
--block-wal /dev/sdn12 --bluestore

On my data nodes I have 12 HDDs and 2 SSDs; I use those SSDs for block-db
and block-wal.
How do I indicate, for each OSD, which partition to use?

And finally, how do you handle the deployment if your data nodes are set
up differently: SSDs on sdm and sdn on one host, and SSDs on sda and sdb
on another?

Thank you for your help.

Regards,
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com