On 27.05.2021 11:53, Eugen Block wrote:
This test was on ceph version 15.2.8.
On Pacific (ceph version 16.2.4) this also works for me for initial
deployment of an entire host:

+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
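That preview table looks like the output of a dry run of the spec; a minimal
sketch of how to produce it (the file name is an assumption):

  ceph orch apply -i osd_spec.yml --dry-run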
On 27.05.2021 11:17, Eugen Block wrote:
That's not how it's supposed to work. I tried the same on an Octopus
cluster and removed all filters except:

data_devices:
  rotational: 1
db_devices:
  rotational: 0

My Octopus test osd nodes have two HDDs and one SSD; I removed all
OSDs and redeployed on one node. This spec file results
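For reference, a complete spec file of that shape could look like this
(service_id and placement are assumptions, not taken from the thread):

  service_type: osd
  service_id: hdd_data_ssd_db
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

With that, data goes onto the rotational drives and the DB (and WAL) onto
the non-rotational one.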
Hi,
The VG has 357.74 GB of free space out of 5.24 TB in total, so I actually
tried different values like "30G:", "30G", "300G:", "300G", "357G".
I also tried some crazy high numbers and some ranges, but I don't
remember the values. None of them worked.
the size parameter is filtering the disk
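As that fragment suggests, the size parameter in a drive group spec filters
on the size of the disk as reported by the inventory, not on the free space
left in an existing VG. A sketch of the syntax (the threshold is just a
placeholder):

  db_devices:
    rotational: 0
    size: '300G:'    # at least 300 GB; ':300G' and '30G:300G' are also valid forms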
On 26.05.2021 22:14, David Orman wrote:
We've found that after doing the osd rm, you can use "ceph-volume lvm
zap --osd-id 178 --destroy" on the server with that OSD as per
https://docs.ceph.com/en/latest/ceph-volume/lvm/zap/#removing-devices
and it will clean things up so they work as expected.
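Sketched as a sequence, assuming the orchestrator-based removal (OSD id 178
is taken from the example above):

  ceph orch osd rm 178
  # then, on the host that carried OSD 178:
  ceph-volume lvm zap --osd-id 178 --destroy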
On 26.05.2021 18:12, Eugen Block wrote:
Could you share the output of
lsblk -o name,rota,size,type
from the affected osd node?
# lsblk -o name,rota,size,type
NAME    ROTA   SIZE TYPE
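The output is cut off after the header; on a node with two HDDs and one SSD
it would typically continue with rows along these lines (device names and
sizes are made up for illustration):

  sda                    1   9.1T disk
  sdb                    1   9.1T disk
  sdc                    0 447.1G disk
  ├─ceph--xxx--db--1     0 223.5G lvm
  └─ceph--xxx--db--2     0 223.5G lvm

ROTA=1 marks the rotational drives, ROTA=0 the SSD, and any leftover DB LVs
show up as children of the SSD.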
My spec file is for a tiny lab cluster; in your case the db drive size
should be something like '5T:6T' to specify a range.
How large are the HDDs? Also maybe you should use the option
'filter_logic:
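Put into the spec, that suggestion could look like this (the value for
filter_logic is an assumption, since the message is cut off; AND is the
default and OR is the alternative):

  filter_logic: AND
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: '5T:6T'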
On 26.05.2021 11:16, Eugen Block wrote:
Yes, the LVs are not removed automatically; you need to free up the
VG. There are a couple of ways to do so, for example remotely:
pacific1:~ # ceph orch device zap pacific4 /dev/vdb --force
or directly on the host with:
pacific1:~ # cephadm ceph-volume lvm zap --destroy /dev//
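After the zap the device should be reported as available again; a quick way
to check, with the host and device from the example above:

pacific1:~ # ceph orch device ls pacific4 --refresh

/dev/vdb should now show up as available for a new OSD deployment.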
On 26.05.2021 08:22, Eugen Block wrote:
Hi,
did you wipe the LV on the SSD that was assigned to the failed HDD? I
just did that on a fresh Pacific install successfully; a couple of
weeks ago it also worked on an Octopus cluster. Note that I have a few
filters in my specs file, but that shouldn't make a difference, I
believe.
No, I did not wipe the LV.
Not sure what you mean by wipe, so I
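If the HDD failed but its DB LV is still sitting on the SSD, the wipe
referred to above could be done like this (a sketch; the VG/LV names are
placeholders you would take from the listing):

  # on the OSD host: find the DB LV that belonged to the failed OSD
  cephadm ceph-volume lvm list

  # then wipe only that LV
  cephadm ceph-volume lvm zap --destroy /dev/<vg_name>/<lv_name>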