Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Dietmar Maurer
> Why hasn't the Proxmox team incorporated software RAID in the install process? So that we could include redundancy and LVM advantages when using local disks.

Because we consider mdraid unreliable and dangerous. Sorry, but we do have software RAID included - ZFS provides that.
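A minimal sketch of the kind of software RAID ZFS provides (pool name and device paths are examples only; in practice the Proxmox installer builds the pool for you when you pick a ZFS RAID level):

```shell
# Create a mirrored (RAID1-like) pool from two whole disks.
# Device names below are hypothetical - adjust to your hardware.
zpool create -o ashift=12 rpool mirror /dev/sda /dev/sdb

# Verify redundancy: both disks should appear under the mirror vdev.
zpool status rpool
```

With a mirror vdev, ZFS handles redundancy, checksumming, and self-healing itself, which is why the installer offers ZFS RAID levels instead of mdraid.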

Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread dorsy
I'd say that it is more convenient to support one method. As also mentioned in this thread, ZFS can be considered a successor to MD-RAID + LVM. It is still a Debian system with a custom kernel and some PVE packages on top, so you can do anything just as on any standard Debian system.

Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Andreas Heinlein
On 08.08.2018 at 15:32, Denis Morejon wrote:
> Why hasn't the Proxmox team incorporated software RAID in the install process? So that we could include redundancy and LVM advantages when using local disks.

Because ZFS offers redundancy and LVM features (and much more) in a more modern

Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Denis Morejon
Why hasn't the Proxmox team incorporated software RAID in the install process? So that we could include redundancy and LVM advantages when using local disks. On 08/08/18 at 09:23, Denis Morejon wrote: On 07/08/18 at 17:51, Yannis Milios wrote:   (zfs create -V 100G

Re: [PVE-User] How to use lvm on zfs ?

2018-08-08 Thread Denis Morejon
On 07/08/18 at 17:51, Yannis Milios wrote:
> (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate /dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm) and then a LV (lvcreate -L100% pve/data)
> Try the above as it was suggested to you ...

But I suspect I have
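The steps quoted above can be sketched as a command sequence. This assumes a root pool named rpool, as in the thread, and requires root on a ZFS-backed host; note that allocating all space to the LV is spelled `-l 100%FREE` rather than the `-L100%` shorthand in the quote:

```shell
# 1. Create a 100G zvol (a block device backed by ZFS) to hold LVM.
zfs create -V 100G rpool/lvm

# 2. Initialize the zvol as an LVM physical volume.
pvcreate /dev/zvol/rpool/lvm

# 3. Create a volume group named "pve" on that PV.
vgcreate pve /dev/zvol/rpool/lvm

# 4. Allocate all remaining free space to a logical volume named "data".
lvcreate -l 100%FREE -n data pve
```

This layers LVM on top of a zvol, so you keep ZFS redundancy underneath while exposing familiar LVM volumes above.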

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Vadim Bulst
Thanks guys - great help! All up and running :-) On 08.08.2018 09:22, Alwin Antreich wrote: Hi, On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote: Hi Alwin, thanks for your advice. But no success. Still the same error. mds-section: [mds.1]     host = scvirt03     keyring =

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Alwin Antreich
Hi, On Wed, Aug 08, 2018 at 07:54:45AM +0200, Vadim Bulst wrote:
> Hi Alwin,
> thanks for your advice. But no success. Still the same error.
> mds-section:
> [mds.1]
>     host = scvirt03
>     keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring

[mds] keyring =

Re: [PVE-User] Cephfs starting 2nd MDS

2018-08-08 Thread Ronny Aasen
Your ceph.conf references mds.1 (id=1), but your command starts the MDS with id=scvirt03, so the block in ceph.conf is not used. Replace [mds.1] with [mds.scvirt03]. By the way, IIRC you cannot have purely numerical IDs for MDS daemons for some versions now, so mds.1 would not be valid either. Kind regards
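A sketch of the corrected ceph.conf section, reusing the host and keyring path quoted earlier in the thread; the key point is that the section name must match the id the daemon is actually started with:

```ini
# Section name must match the id passed to ceph-mds (here: scvirt03).
# Purely numeric ids such as "1" are no longer accepted for MDS daemons.
[mds.scvirt03]
    host = scvirt03
    keyring = /var/lib/ceph/mds/ceph-scvirt03/keyring
```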