Hey,
no problem and thank you!
This is the output of lsblk:
sda 8:0 0 14.6T 0 disk
└─ceph--937823b8--204b--4190--9bd1--f867e64621db-osd--block--a4bbaa5d--eb2d--41f3--8f4e--f8c5a2747012 253:24 0 14.6T 0 lvm
sdb 8:16 0 14.6T 0 disk
Hello,
I'm sorry for not getting back to you sooner.
[2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout ceph.block_device=/dev/ceph-808efc2a-54fd-47cc-90e2-c5cc96bdd825/osd-block-2a1d1bf0-300e-4160-ac55-047837a5af0b,ceph.block_uuid=b4WDQQ-eMTb-AN1U-D7dk-yD2q-4dPZ-KyFrHi,ceph.cephx_
OK, attachments won't work.
See this:
https://filebin.net/t0p7f1agx5h6bdje
Best
Ken
On 01.02.23 17:22, mailing-lists wrote:
I've pulled a few lines from the log and attached them to this mail. (I hope
this works for this mailing list?)
I found line 135:
[2023-01-26 16:25:00,785][ceph_volume.process][INFO ] stdout
Any chance you can share the ceph-volume.log (from the corresponding host)?
It should be in /var/log/ceph//ceph-volume.log. Note that
there might be several log files (log rotation). Ideally, share the one that
includes the recreation steps.
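Something like this should list the candidates, including rotated files (assuming the default log location under the cluster fsid):

ls -lrt /var/log/ceph/*/ceph-volume.log*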
Thanks,
On Wed, 1 Feb 2023 at 10:13, mailing-lists wrote:
Ah, nice.
service_type: osd
service_id: dashboard-admin-1661788934732
service_name: osd.dashboard-admin-1661788934732
placement:
  host_pattern: '*'
spec:
  data_devices:
    model: MG08SCA16TEY
  db_devices:
    model: Dell Ent NVMe AGN MU AIC 6.4TB
  filter_logic: AND
  objectstore: bluestore
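If useful, the same spec can be exported and dry-run to preview which devices it would currently pick up (a sketch; --dry-run only reports, it does not change anything):

ceph orch ls osd --export > osd-spec.yml
ceph orch apply -i osd-spec.yml --dry-run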
On Tue, 31 Jan 2023 at 22:31, mailing-lists wrote:
> I am not sure. I didn't find it... It should be somewhere, right? I used
> the dashboard to create the osd service.
>
what does a `cephadm shell -- ceph orch ls osd --format yaml` say?
--
Guillaume Abrioux
Senior Software Engineer
Did your db/wal device show as having free space prior to the OSD creation?
Yes.
root@ceph-a1-06:~# pvs
  PV           VG                                        Fmt  Attr PSize PFree
  /dev/nvme0n1 ceph-3a336b8e-ed39-4532-a199-ac6a3730840b lvm2 a--  5.82t 2.91t
  /dev/nvme1n1
What does your OSD service specification look like? Did your db/wal device
show as having free space prior to the OSD creation?
On Tue, Jan 31, 2023, at 04:01, mailing-lists wrote:
OK, the OSD is filled again. In and Up, but it is not using the NVMe
WAL/DB anymore.
And it looks like the LVM volume group of the old OSD is still on the NVMe
drive. I came to this conclusion because the two NVMe drives still have 9 LVM
groups each: 18 groups, but only 17 OSDs are using the NVMe (shown in
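One way to cross-check which LVs are still sitting on those NVMe volume groups (a sketch; ceph-volume lvm list reports the OSD id each LV belongs to, so a leftover DB LV without a live OSD should stand out):

lvs -o lv_name,vg_name,lv_size,devices
ceph-volume lvm list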
Oh wait, I might have been too impatient:
1/30/23 4:43:07 PM [INF] Deploying daemon osd.232 on ceph-a1-06
1/30/23 4:42:26 PM [INF] Found osd claims for drivegroup dashboard-admin-1661788934732 -> {'ceph-a1-06': ['232']}
1/30/23 4:42:26 PM [INF] Found osd claims -> {'ceph-a1-06': ['232']}
root@ceph-a2-01:/# ceph osd destroy 232 --yes-i-really-mean-it
destroyed osd.232
OSD 232 now shows as destroyed and out in the dashboard.
root@ceph-a1-06:/# ceph-volume lvm zap /dev/sdm
--> Zapping: /dev/sdm
--> --destroy was not specified, but zapping a whole device will remove the
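If the goal is to also clear the matching DB LV on the NVMe rather than only wiping the data disk, zapping by OSD id is one option (a sketch, assuming osd.232; --destroy removes the LVs/VGs instead of just wiping them):

ceph-volume lvm zap --osd-id 232 --destroy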
The 'down' status is why it's not being replaced; a 'destroyed' status would
allow the replacement. I'm not sure why --replace led to that scenario, but
you will probably need to mark it destroyed for it to be replaced.
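For reference, a minimal sketch of that flow, assuming osd.232 as in this thread:

ceph osd destroy 232 --yes-i-really-mean-it   # keep the id, mark the OSD destroyed
ceph orch osd rm status                       # confirm nothing is still pending

Once the OSD is marked destroyed and the replacement disk is zapped, the existing OSD spec should claim the free id and redeploy osd.232 on the new drive.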
# ceph orch osd rm status
No OSD remove/replace operations reported
# ceph orch osd rm 232 --replace
Unable to find OSDs: ['232']
It is not finding 232 anymore. It is still shown as down and out in the
Ceph-Dashboard.
pgs: 3236 active+clean
This is the new disk, shown as locked.
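For a CLI view of whether the orchestrator considers the new disk usable, something like this should work (assuming the host name used earlier in this thread):

ceph orch device ls ceph-a1-06 --wide

This should show whether the device is reported as available, along with any reject reasons.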
What does "ceph orch osd rm status" show before you try the zap? Is your
cluster still backfilling to the other OSDs for the PGs that were on the failed
disk?
David
On Fri, Jan 27, 2023, at 03:25, mailing-lists wrote:
> Dear Ceph-Users,
>
> I am struggling to replace a disk. My ceph-cluster is
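Regarding the backfill question above, a quick CLI check (both standard commands):

ceph -s        # recovery/backfill activity shows up in the cluster status
ceph pg stat   # one-line summary of PG states (active+clean vs. backfilling, etc.)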