Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Eneko Lacunza
Hi Dominik,

On 22/8/19 at 9:50, Dominik Csapak wrote:
> On 8/21/19 2:37 PM, Eneko Lacunza wrote:
>> # pveceph createosd /dev/sdb -db_dev /dev/sdd
>> device '/dev/sdd' is already in use and has no LVM on it
> This sounds like a bug.. can you open one on bugzilla.proxmox.com, while I

Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Dominik Csapak
Hi,

On 8/21/19 2:37 PM, Eneko Lacunza wrote:
> # pveceph createosd /dev/sdb -db_dev /dev/sdd
> device '/dev/sdd' is already in use and has no LVM on it
This sounds like a bug.. can you open one on bugzilla.proxmox.com, while I investigate? We should be able to use a disk as db/wal even if
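The rejection message suggests that on PVE 6 a `-db_dev` disk is accepted only if it is either completely empty or already carries LVM (so free extents can be carved from an existing volume group). The helper below is a hypothetical sketch of that acceptance logic, not pveceph's actual implementation; the function name and the use of `lsblk -no TYPE` output are assumptions:

```shell
# Hypothetical sketch of the device check implied by the error above:
# a -db_dev disk passes only if it is completely empty, or if it
# already carries LVM. Reads `lsblk -no TYPE <disk>` output
# (the disk plus its children, one TYPE per line) from stdin.
is_usable_as_db() {
    local types
    types=$(cat)
    # A completely empty disk reports only its own "disk" line.
    [ "$types" = "disk" ] && return 0
    # Otherwise require at least one LVM volume on the device.
    printf '%s\n' "$types" | grep -q '^lvm$'
}

# A 5.4-era SSD partitioned by ceph-disk: plain GPT partitions and
# no LVM, matching the "already in use and has no LVM" rejection.
printf 'disk\npart\npart\npart\n' | is_usable_as_db || echo "rejected"
```

On a real node the same check would be fed with `lsblk -no TYPE /dev/sdd`.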

[PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-21 Thread Eneko Lacunza
Hi all,

I'm reporting here an issue that I think should be handled somehow by Proxmox, maybe with extended migration notes.

Starting point:
- Proxmox 5.4 cluster with Ceph Server. Proxmox nodes have 1 SSD + 3 HDDs. System and Ceph OSD journals (filestore or bluestore DB) are on the SSD.
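With the layout above, the boot SSD already holds one journal/DB partition per OSD, which is what later collides with reusing it as a `-db_dev`. A small sketch of counting those partitions, assuming `blkid -o export`-style key/value output and ceph-disk's GPT partition labels ("ceph journal", "ceph block.db"; the exact label text is an assumption):

```shell
# Count ceph-owned partitions on the shared SSD, given
# `blkid -o export`-style output on stdin. The "PARTLABEL=ceph"
# prefix is an assumption about ceph-disk's GPT labels
# ("ceph journal", "ceph block.db", ...).
count_ceph_parts() {
    grep -c '^PARTLABEL=ceph'
}

# Canned example: one EFI system partition plus three OSD DB
# partitions, as a 1 SSD + 3 HDD node from 5.4 would look.
printf 'PARTLABEL=EFI\nPARTLABEL=ceph block.db\nPARTLABEL=ceph block.db\nPARTLABEL=ceph block.db\n' | count_ceph_parts
```

On a live node the input would come from something like `blkid -o export /dev/sdd*`.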