[ceph-users] Re: Client failing to respond to capability release

2023-08-22 Thread Eugen Block
Hi, pointing you to your own thread [1] ;-) [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/HFILR5NMUCEZH7TJSGSACPI4P23XTULI/ Quoting Frank Schilder: Hi all, I have this warning the whole day already (octopus latest cluster): HEALTH_WARN 4 clients failing to respon

[ceph-users] Re: snaptrim number of objects

2023-08-22 Thread Sridhar Seshasayee
> This also leads me to agree with you there's 'something wrong' with the mclock scheduler. I was almost starting to suspect hardware issues or something like that, I was at my wit's end. Could you update this thread with the exact quincy version by running: $ ceph versions and $ ceph co
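Since the message is truncated here, a guess at the kind of commands being requested, with osd.0 as an arbitrary example daemon (osd_mclock_profile is only meaningful while the mClock scheduler is active):
$ ceph versions
$ ceph config show osd.0 osd_op_queue
$ ceph config show osd.0 osd_mclock_profile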

[ceph-users] Re: Create OSDs MANUALLY

2023-08-22 Thread Anh Phan Tuan
You don't need to create OSDs manually to get what you want. Cephadm has two options to control that in the OSD specification (see the OSD Service section of the Ceph documentation): block_db_size: Union[int, str, None]
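As a rough sketch only of what such an OSD service spec might look like (the service_id, device filters and the 64G value are placeholders; check the OSD Service documentation for the exact semantics of block_db_size):
# osd_spec.yaml
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: 64G
$ ceph orch apply -i osd_spec.yaml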

[ceph-users] ceph osd error log

2023-08-22 Thread Peter
Hi Ceph community, my cluster is producing lots of logs for a ceph-osd error. I am encountering the following error message in the logs: Aug 22 00:01:28 host008 ceph-osd[3877022]: 2023-08-22T00:01:28.347-0700 7fef85251700 -1 Fail to open '/proc/3850681/cmdline' error = (2) No such file or

[ceph-users] Create OSDs MANUALLY

2023-08-22 Thread Alfredo Rezinovsky
SSD drives work awfully when full. Even if I set the DB to SSD for 4 OSDs and there are 2, the dashboard daemon allocates all of the SSD. I want to partition only 70% of the SSD for DB/WAL and leave the rest for SSD manoeuvring. Is there a way to create an OSD, manually telling it which disks or partitions to use for
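For the non-cephadm case, a minimal sketch of pointing an OSD at an explicit DB partition would be ceph-volume (device names below are placeholders):
$ ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1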

[ceph-users] Client failing to respond to capability release

2023-08-22 Thread Frank Schilder
Hi all, I have this warning the whole day already (octopus latest cluster): HEALTH_WARN 4 clients failing to respond to capability release; 1 pgs not deep-scrubbed in time [WRN] MDS_CLIENT_LATE_RELEASE: 4 clients failing to respond to capability release mds.ceph-24(mds.1): Client sn352.hpc.
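A rough sketch of how to inspect the offending sessions, assuming the MDS name shown in the warning above (output fields vary by release):
$ ceph health detail
$ ceph tell mds.ceph-24 session ls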

[ceph-users] Re: snaptrim number of objects

2023-08-22 Thread Mark Nelson
On 8/21/23 17:38, Angelo Höngens wrote: On 21/08/2023 16:47, Manuel Lausch wrote: Hello, on my test cluster I played a bit with ceph quincy (17.2.6). I also see slow ops while deleting snapshots. With the previous major (pacific) this wasn't an issue. In my case this is related to the new mclock
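One workaround that comes up in these mclock threads (not necessarily what is being recommended here) is switching the scheduler back to wpq; the option only takes effect after an OSD restart, e.g. on a non-cephadm host:
$ ceph config set osd osd_op_queue wpq
$ systemctl restart ceph-osd.target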

[ceph-users] Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multple public_nework

2023-08-22 Thread Konstantin Shalygin
Hi, this is how OSDs work. To change the network subnet you need to set up reachability of both the old and the new network until the end of the migration. k Sent from my iPhone > On 22 Aug 2023, at 10:43, Boris Behrens wrote: > > The OSDs are still only bound to one IP address.

[ceph-users] Windows 2016 RBD Driver install failure

2023-08-22 Thread Robert Ford
Hello, We have been running into an issue installing the pacific windows rbd driver on windows 2016. It has no issues with either 2019 or 2022. It looks like it fails at checkpoint creation. We are installing it as admin. Has anyone seen this before or know of a solution? The closest thing I can

[ceph-users] Re: radosgw-admin sync error trim seems to do nothing

2023-08-22 Thread Matthew Darwin
Thanks Rich, on quincy it seems that providing an end-date is an error. Any other ideas from anyone? $ radosgw-admin sync error trim --end-date="2023-08-20 23:00:00" end-date not allowed. On 2023-08-20 19:00, Richard Bade wrote: Hi Matthew, At least for nautilus (14.2.22) I have discovered t
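Assuming the goal is simply to clear the error list, a sketch of the quincy invocation without a date range would be:
$ radosgw-admin sync error list
$ radosgw-admin sync error trim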

[ceph-users] When to use the auth profiles simple-rados-client and profile simple-rados-client-with-blocklist?

2023-08-22 Thread Christian Rohmann
Hey ceph-users, 1) When configuring Gnocchi to use Ceph storage (see https://gnocchi.osci.io/install.html#ceph-requirements) I was wondering if one could use any of the auth profiles like  * simple-rados-client  * simple-rados-client-with-blocklist ? Or are those for different use cases? 2) I
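Purely as an illustration (the client name and pool are made up), a keyring using one of those profiles could be created like this:
$ ceph auth get-or-create client.gnocchi \
      mon 'profile simple-rados-client-with-blocklist' \
      osd 'allow rwx pool=gnocchi'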

[ceph-users] Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multple public_nework

2023-08-22 Thread Boris Behrens
Yes, I did change the mon_host config in ceph.conf. Then I restarted all ceph services with systemctl restart ceph.target After the restart, nothing changed. I rebooted the host then, and now all OSDs are attached to the new network. I thought that OSDs can attach to different networks, or even t

[ceph-users] CephFS: convert directory into subvolume

2023-08-22 Thread Eugen Block
Hi, while writing a response to [1] I tried to convert an existing directory within a single cephfs into a subvolume. According to [2] that should be possible, I'm just wondering how to confirm that it actually worked, because setting the xattr works fine, the directory just doesn't show
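The verification in question is presumably along these lines (filesystem and subvolume names are placeholders):
$ ceph fs subvolume ls cephfs
$ ceph fs subvolume info cephfs mydir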

[ceph-users] Re: Path change for CephFS subvolume

2023-08-22 Thread Michal Strnad
Hi Eugen, thank you for the message. I've already tried the process of converting a classic directory to a subvolume, but it also didn't appear in the list of subvolumes. Perhaps it's no longer supported? Michal On 8/22/23 12:56, Eugen Block wrote: Hi, I don't know if there's a way to

[ceph-users] Re: EC pool degrades when adding device-class to crush rule

2023-08-22 Thread Lars Fenneberg
Hey Eugen! Quoting Eugen Block (ebl...@nde.ag): > >When a client writes an object to the primary OSD, the primary OSD > >is responsible for writing the replicas to the replica OSDs. After > >the primary OSD writes the object to storage, the PG will remain > >in a degraded state until the primary

[ceph-users] Re: Path change for CephFS subvolume

2023-08-22 Thread Eugen Block
Hi, I don't know if there's a way to change the path (I assume not, except creating a new path and copying the data), but you could set up a directory the "old school" way (mount the root filesystem, create your subdirectory tree) and then convert the directory into a subvolume by setting the
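A rough sketch of that procedure, assuming the filesystem root is mounted at /mnt/cephfs and using placeholder directory names (ceph.dir.subvolume is the vxattr referred to here):
$ mkdir -p /mnt/cephfs/volumes/_nogroup/mydir
$ setfattr -n ceph.dir.subvolume -v 1 /mnt/cephfs/volumes/_nogroup/mydir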

[ceph-users] Path change for CephFS subvolume

2023-08-22 Thread Michal Strnad
Hi! I'm trying to figure out how to specify the path for a CephFS subvolume, as it's intended to represent a user's home directory. By default, it's located at /volumes/_nogroup/$NAME/$UUID. Is it possible to change this path somehow, or is using symbolic links the only option? Thank you Mic
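For context, the default layout mentioned above can be seen with getpath (volume and subvolume names are just examples):
$ ceph fs subvolume create cephfs alice_home
$ ceph fs subvolume getpath cephfs alice_home
/volumes/_nogroup/alice_home/<some-uuid>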

[ceph-users] Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multple public_nework

2023-08-22 Thread Eugen Block
Can you add some more details? Did you change the mon_host in ceph.conf and then reboot? So the OSDs do work correctly now within the new network? OSDs only bind to one public and one cluster IP; I'm not aware of a way to have them bind to multiple public IPs like the MONs can. You'll

[ceph-users] Re: Global recovery event but HEALTH_OK

2023-08-22 Thread Eugen Block
Hi, can you add 'ceph -s' output? Has the recovery finished and if not, do you see progress? Has the upgrade finished? You could try a 'ceph mgr fail'. Quoting Alfredo Daniel Rezinovsky: I had a lot of movement in my cluster. Broken node, replacement, rebalancing. Now I'm stuck in upgra
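The suggested checks, plus the progress-module reset that sometimes clears a stale "Global Recovery Event" (if your release provides the command):
$ ceph -s
$ ceph mgr fail
$ ceph progress clear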

[ceph-users] Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multple public_nework

2023-08-22 Thread Boris Behrens
The OSDs are still only bound to one IP address. After a reboot, the OSDs switched to the new address and are now unreachable from the compute nodes. On Tue, 22 Aug 2023 at 09:17, Eugen Block wrote: > You'll need to update the mon_host line as well. Not sure if it makes > sense to have

[ceph-users] Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multple public_nework

2023-08-22 Thread Eugen Block
You'll need to update the mon_host line as well. Not sure if it makes sense to have both the old and the new network in there, but I'd try on one host first and see if it works. Quoting Boris Behrens: We're working on the migration to cephadm, but it requires some prerequisites that still need
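A sketch of what that ceph.conf change might look like during the transition, listing MON addresses from both subnets (all IPs are placeholders):
[global]
    mon_host = 192.168.1.11, 192.168.1.12, 192.168.1.13, 10.0.0.11, 10.0.0.12, 10.0.0.13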