Hi all,
First of all, I apologize if I've not done things correctly, but these are
some test results.
1) I've compiled the main branch in a fresh podman container (Alma Linux
8) and installed it. Successful!
2) I have made a copy of the /etc/ceph directory of the host (member of
the ceph
That's correct - it's the removable flag that's causing the disks to
be excluded.
I actually just merged this PR last week:
https://github.com/ceph/ceph/pull/49954
One of the changes it made was to enable removable (but not USB)
devices, as there are vendors that report hot-swappable drives as
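For the curious, a rough sketch of how such a distinction can be made, assuming the usual sysfs heuristics (this is my illustration, not the actual code from the PR):

import os

def is_removable(dev):
    # /sys/block/<dev>/removable is "1" for devices the kernel flags
    # as removable (USB sticks, but also some hot-swap drive bays)
    try:
        with open(os.path.join('/sys/block', dev, 'removable')) as f:
            return f.read().strip() == '1'
    except OSError:
        return False

def is_usb(dev):
    # heuristic: a USB disk's resolved sysfs device path runs
    # through a USB host controller segment
    real = os.path.realpath(os.path.join('/sys/block', dev, 'device'))
    return '/usb' in real

def should_exclude(dev):
    # old behaviour: skip anything removable, i.e. is_removable(dev);
    # new behaviour: only skip actual USB devices
    return is_usb(dev)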
Hi,
Running git pull this morning I saw the patch on the main branch and tried
to compile it, but it fails in Cython for rbd.pyx. I have many similar
errors:
rbd.pyx:760:44: Cannot assign type 'int (*)(uint64_t, uint64_t, void *)
except? -1' to 'librbd_progress_fn_t'. Exception values are
Hi all,
I think this patch might fix the problem
(https://github.com/ceph/ceph/pull/49954). It hadn't been merged for a long
time; I asked a few days ago and got it merged, so you can try it.
Best wishes
Some tests:
If, in Pacific 16.2.14, in
/usr/lib/python3.6/site-packages/ceph_volume/util/disk.py I disable
lines 804 and 805
804 if get_file_contents(os.path.join(_sys_block_path, dev, 'removable')) == "1":
805 continue
the command "ceph-volume inventory" works as expected.
I have checked my disks as well;
all devices are hot-swappable HDDs and have the removable flag set.
/Johan
Hi,
Maybe because they are hot-swappable hard drives.
Yes, that's my assumption as well.
Quoting Patrick Begou:
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to
1. Maybe because they are hot-swappable hard drives.
I have contacted the commit author Zack Cerza and he asked me for some
additional tests this morning as well. I have added him in copy to this mail.
Patrick
Hi,
just to confirm, could you check that the disk which is *not*
discovered by 16.2.11 has a "removable" flag?
cat /sys/block/sdX/removable
I could reproduce it as well on a test machine with a USB thumb drive
(live distro) which is excluded in 16.2.11 but is shown in 16.2.10.
Although
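To check every disk in one go, here is a quick standalone sketch (plain Python, nothing Ceph-specific assumed):

import os

# print the kernel's removable flag for every block device
for dev in sorted(os.listdir('/sys/block')):
    try:
        with open(os.path.join('/sys/block', dev, 'removable')) as f:
            print(dev, f.read().strip())  # "1" = flagged removable
    except OSError:
        pass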
It seems that the only way to modify the code is manually ...
On 23/10/2023 at 03:04, 544463...@qq.com wrote:
I think you can try to roll back this part of the Python code and wait for your
good news :)
Not so easy:
[root@e9865d9a7f41 ceph]# git revert
4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
Auto-merging
Hi all,
git bisect, which finished just now, shows:
4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc is the first bad commit
commit 4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
Author: Zack Cerza
Date: Tue May 17 11:29:02 2022 -0600
ceph-volume: Optionally consume loop devices
A similar proposal
Hi Johan,
The OS that I use is CentOS 8.3. The output of the ceph-volume inventory
command in Ceph 17.2.5 is empty, but ceph orch daemon add osd can still add an
OSD. Hope it helps you.
Hi all,
I'm trying to catch the faulty commit. I'm able to build Ceph from the
git repo in a fresh podman container, but at this time the lsblk command
returns nothing in my container.
In the Ceph containers lsblk works,
so something is wrong with how I launch my podman container (or different
from
Which OS are you running?
What is the outcome of these two tests?
cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory
cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory
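If it helps to capture both outputs side by side, a small wrapper sketch (it only assumes cephadm is on the PATH; the image tags are the two above):

import subprocess

IMAGES = [
    'quay.io/ceph/ceph:v16.2.10-20220920',
    'quay.io/ceph/ceph:v16.2.11-20230125',
]

for image in IMAGES:
    # run the inventory under each image and print what it reports
    result = subprocess.run(
        ['cephadm', '--image', image, 'ceph-volume', 'inventory'],
        capture_output=True, text=True)
    print('===', image, '===')
    print(result.stdout)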
/Johan
The problem appears in v16.2.11-20230125.
I have no insight into the different commits.
/Johan
On 2023-10-16 at 08:25, 544463...@qq.com wrote:
I encountered a similar problem on Ceph 17.2.5; could you find which commit
caused it?
Hi Johan,
So it is not OS related, as you are running Debian and I am running
Alma Linux. But I'm surprised that so few people hit this bug.
Patrick
On 13/10/2023 at 17:38, Johan wrote:
At home I'm running a small cluster, Ceph v17.2.6, Debian 11 Bullseye.
I have recently added a new server to the cluster but face the same
problem as Patrick: I can't add any HDD, Ceph doesn't recognise them.
I have run the same tests as Patrick, using Ceph v14-v18, and as Patrick
showed, the
The server has enough available storage:
[root@mostha1 log]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 24G 0 24G 0% /dev
tmpfs 24G 84K 24G 1% /dev/shm
tmpfs 24G
Trying to resend with the attachment.
I can't really find anything suspicious, ceph-volume (16.2.11) does
recognize /dev/sdc though:
[2023-10-12 08:58:14,135][ceph_volume.process][INFO ] stdout
NAME="sdc" KNAME="sdc" PKNAME="" MAJ:MIN="8:32" FSTYPE=""
MOUNTPOINT="" LABEL="" UUID=""
There are no attachments.
Quoting Patrick Begou:
Hi Eugen,
You will find in attachment cephadm.log and ceph-volume.log. Each
contains the outputs for the 2 versions. Either v16.2.10-20220920 is really
more verbose, or v16.2.11-20230125 does not execute the whole detection process.
Patrick
On 12/10/2023 at 09:34, Eugen Block wrote:
Good catch, and I found the thread I had in mind; it was this exact
one. :-D Anyway, can you share the ceph-volume.log from the working
and the not-working attempt?
I tried to look for something significant in the Pacific release notes
for 16.2.11, and there were some changes to
I've run additional tests with Pacific releases, and with "ceph-volume
inventory" things went wrong with the first v16.2.11 release
(v16.2.11-20230125):
=== Ceph v16.2.10-20220920 ===
Device Path Size rotates available Model name
/dev/sdc
This afternoon I had a look at the Python file but could not work out how it
works with containers, as I am only a Fortran HPC programmer... but I
found that "cephadm gather-facts" shows all the HDDs in Pacific.
Some quick tests show:
== Nautilus ==
[root@mostha1 ~]#
That's really strange. Just out of curiosity, have you tried Quincy
(and/or Reef) as well? I don't recall what inventory does in the
background exactly, I believe Adam King mentioned that in some thread,
maybe that can help here. I'll search for that thread tomorrow.
Quoting Patrick
Hi Eugen,
[root@mostha1 ~]# rpm -q cephadm
cephadm-16.2.14-0.el8.noarch
Log associated with the
2023-10-11 16:16:02,167 7f820515fb80 DEBUG
cephadm ['gather-facts']
2023-10-11 16:16:02,208 7f820515fb80 DEBUG
Can you check which cephadm version is installed on the host? And then
please add (only the relevant) output from the cephadm.log when you
run the inventory (without the --image ). Sometimes a version
mismatch between the cephadm on the host and the one the orchestrator
uses can cause some disruptions. You
Hi Eugen,
first, many thanks for the time spent on this problem.
"ceph osd purge 2 --force --yes-i-really-mean-it" works and cleans all
the bad status.
[root@mostha1 ~]# cephadm shell
Inferring fsid 250f9864-0142-11ee-8e5f-00266cf8869c
Using recent ceph image
Your response is a bit confusing since it seems to be mixed up with
the previous answer. You still need to remove the OSD properly, so
purge it from the crush tree:
ceph osd purge 2 --force --yes-i-really-mean-it (only in a test cluster!)
If everything is clean (OSD has been removed,
Hi Eugen,
sorry for posting twice; my Zimbra server returned an error on the first
attempt.
My initial problem is that Ceph cannot detect these HDDs since Pacific.
So I have deployed Octopus, where "ceph orch apply osd
--all-available-devices" works fine, and then upgraded to Pacific.
But
Hi Eugen,
- the OS is Alma Linux 8 with latest updates.
- this morning I've worked with ceph-volume but it ended in a strange
final state. I was connected to host mostha1, where /dev/sdc was not
recognized. These are the steps I followed, based on the ceph-volume
documentation I've read:
Don't use ceph-volume manually to deploy OSDs if your cluster is
managed by cephadm. I just wanted to point out that you hadn't wiped
the disk properly to be able to re-use it. Let the orchestrator handle
the OSD creation and activation. I recommend removing the OSD again,
wipe it
Hi,
just wondering if 'ceph-volume lvm zap --destroy /dev/sdc' would help
here. From your previous output you didn't specify the --destroy flag.
Which cephadm version is installed on the host? Did you also upgrade
the OS when moving to Pacific? (Sorry if I missed that.)
Quoting Patrick
On 02/10/2023 at 18:22, Patrick Bégou wrote:
Hi all,
still stuck with this problem.
I've deployed Octopus and all my HDDs have been set up as OSDs. Fine.
I've upgraded to Pacific and 2 OSDs have failed. They have been
automatically removed and the upgrade finished. Cluster health is finally OK,
no data loss.
But now I cannot re-add these OSDs
The other command I should have had you try is "cephadm ceph-volume
inventory". That should show you the devices available for OSD
deployment, and hopefully matches up to what your "lsblk" shows. If
you need to zap HDDs and the orchestrator is still not seeing them, you
can try "cephadm ceph-volume lvm zap /dev/sdb"
Thank you,
Josh Beaman
*From: *Patrick Begou
*Date: *Friday, May 12, 2023 at 2:22 PM
*To: *Beaman, Joshua , ceph-users
*Subject: *Re: [EXTERNAL] [ceph-users] [Pacific] ceph orch device ls
do not returns any HDD
Hi Joshua, and thanks for this quick reply.
At this step I have only one node. I was checking what ceph was
returning with different commands on this host before adding new hosts,
just to compare with my first Octopus install. As this hardware is for
testing only, it remains easy for me to
I don't quite understand why that zap would not work. But here's where I'd
start.
1. cephadm check-host
* Run this on each of your hosts to make sure cephadm, podman and all
other prerequisites are installed and recognized
2. ceph orch ls
* This should show at least a