[ceph-users] Re: Single ceph client usage with multiple ceph cluster

2021-12-08 Thread Mosharaf Hossain
Hello Markus, thank you for your direction. I would like to let you know that the approach you show is quite meaningful, but I am not sure how the ceph system would identify the configuration file, as by default it uses ceph.conf in the /etc/ceph folder. Can we define the config file as we want? It will be
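The config file can be given explicitly per invocation. A minimal sketch, assuming a second cluster's config and keyring have been copied to the client (file and cluster names are only examples):

    # point the CLI at an explicit config and keyring
    ceph --conf /etc/ceph/cluster2.conf --keyring /etc/ceph/cluster2.client.admin.keyring -s

    # or name the files <cluster>.conf / <cluster>.client.admin.keyring in /etc/ceph
    # and select them with --cluster
    ceph --cluster cluster2 -s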

[ceph-users] Re: Need urgent help for ceph health error issue

2021-12-08 Thread Md. Hejbul Tawhid MUNNA
Hi, Yes, we have added new OSDs. Previously we had only one type of disk, HDD; now we have added SSD disks and separated them with a replicated_rule and device class. ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS 0 hdd 5.57100 1.0 5.6 TiB 1.8 TiB 3.8 TiB 31.61 1.04 850 1 hdd
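For reference, separating HDD and SSD placement by device class is typically done with CRUSH rules like the following (rule and pool names here are only examples):

    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd pool set mypool crush_rule replicated_ssd   # assign a pool to the SSD rule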

[ceph-users] v16.2.6 PG peering indefinitely after cluster power outage

2021-12-08 Thread Eric Alba
I've been trying to get ceph to force the PG to a good state but it continues to give me a single PG peering. This is a rook-ceph cluster on VMs (hosts went out for a brief period) and I can't figure out how to get this 1GB or so of data to become available to the client. This occurred during a
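A few diagnostic commands commonly used for a PG stuck peering; the PG id below is a placeholder, and the query output ("recovery_state", "blocked_by") should be inspected before forcing anything:

    ceph pg dump_stuck inactive
    ceph pg 2.1f query | less     # hypothetical PG id; check "recovery_state" and "blocked_by"
    ceph pg repeer 2.1f           # ask the PG to re-run peering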

[ceph-users] Re: 50% IOPS performance drop after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

2021-12-08 Thread Marc
I am still on nautilus, albeit a tiny cluster. I would not mind doing some tests for comparison if necessary. > > Hi Frank, thanks for the input. I'm still a bit sceptical to be honest > that this is all, since a.) our bench values are pretty stable over time > (nautilus times and octopus times)
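If comparison numbers are useful, a simple like-for-like test on both releases could look like this (pool name is an assumption; --no-cleanup keeps the objects for the read test):

    rados bench -p testbench 30 write --no-cleanup
    rados bench -p testbench 30 rand
    rados -p testbench cleanup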

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Edward R Huyer
> -Original Message- > From: Carlos Mogas da Silva > Sent: Wednesday, December 8, 2021 1:26 PM > To: Edward R Huyer ; Marc ; > ceph-users@ceph.io > Subject: Re: [ceph-users] Re: Migration from CentOS7/Nautilus to CentOS > Stream/Pacific > > On Wed, 2021-12-08 at 16:42 +, Edward R

[ceph-users] Re: Need urgent help for ceph health error issue

2021-12-08 Thread Nathan Fish
You should probably stop all client mounts to avoid any more writes, temporarily raise full ratios just enough to get it online, then delete something. Never let it get this full. On Wed, Dec 8, 2021 at 1:27 PM Md. Hejbul Tawhid MUNNA wrote: > > Hi, > > Overall status: HEALTH_ERR >
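For reference, the ratios can be raised slightly and should be reverted once space has been freed; the values below are only an illustration:

    ceph osd set-nearfull-ratio 0.90
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.97     # raise only a little, and only temporarily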

[ceph-users] Re: Need urgent help for ceph health error issue

2021-12-08 Thread Prayank Saxena
Hi Munna, Have you added OSDs to the cluster recently? If yes, I think you have to re-weight the OSDs you have added to lower values and slowly increase the weight one by one. Also, please share the output of ‘ceph osd df’ and ‘ceph health detail’ On Wed, 8 Dec 2021 at 11:56 PM, Md. Hejbul
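A sketch of the gradual re-weighting being suggested (OSD id and weights are placeholders; raise in small steps while watching recovery):

    ceph osd df tree
    ceph osd crush reweight osd.12 0.5     # start low on a newly added OSD
    ceph osd crush reweight osd.12 1.0     # step up once backfill settles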

[ceph-users] Re: Need urgent help for ceph health error issue

2021-12-08 Thread Marc
> > Overall status: HEALTH_ERR > PG_DEGRADED_FULL: Degraded data redundancy (low space): 19 pgs > backfill_toofull > OBJECT_MISPLACED: 12359314/17705640 objects misplaced (69.804%) > PG_DEGRADED: Degraded data redundancy: 1707105/17705640 objects degraded > (9.642%), 1979 pgs degraded, 1155

[ceph-users] Need urgent help for ceph health error issue

2021-12-08 Thread Md. Hejbul Tawhid MUNNA
Hi, Overall status: HEALTH_ERR PG_DEGRADED_FULL: Degraded data redundancy (low space): 19 pgs backfill_toofull OBJECT_MISPLACED: 12359314/17705640 objects misplaced (69.804%) PG_DEGRADED: Degraded data redundancy: 1707105/17705640 objects degraded (9.642%), 1979 pgs degraded, 1155 pgs undersized
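Commands that usually accompany this kind of report when asking for help on the list:

    ceph health detail
    ceph osd df tree
    ceph pg dump_stuck undersized
    ceph osd pool ls detail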

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
On 08.12.21 at 19:17, Robert Sander wrote: The NFS topic has not even been mentioned in the release announcement email. This is obviously not true; I just read over that paragraph. Blame on me. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
On 08.12.21 at 17:57, Sebastian Wagner wrote: Unfortunately this wasn't the case and I agree this is not great. What bothers me most is the following: With cephadm, the orchestrator and containers, the Ceph project aims to make it easy for admins to operate a cluster. This goal has been

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
Hi, On 08.12.21 at 17:57, Sebastian Wagner wrote: In any case, here are the manual steps that are performed by the migration automatically, in case something goes wrong: https://github.com/ceph/ceph/pull/44252 There was no automatic migration of the old NFS exports to the new default pool
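A hedged sketch of how to check what the migration should have produced, assuming the new consolidated default pool is named ".nfs" (the pool used by the Pacific NFS module); the NFS cluster id is a placeholder:

    ceph nfs cluster ls
    ceph nfs export ls mynfs          # hypothetical NFS cluster id
    rados -p .nfs --all ls            # export objects should appear here after migration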

[ceph-users] CephFS Metadata Pool bandwidth usage

2021-12-08 Thread Andras Sali
Hi All, We have been observing that if we let our MDS run for some time, the bandwidth usage of the disks in the metadata pool starts increasing significantly (whilst IOPS is about constant), even though the number of clients, the workloads or anything else doesn't change. However, after
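When investigating this, it can help to sample the MDS journal and objecter counters over time; a minimal sketch (MDS name is a placeholder):

    ceph fs status
    # on the host running the MDS, via the admin socket:
    ceph daemon mds.cephfs-a perf dump | less   # watch the mds_log and objecter counters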

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Sebastian Wagner
Hi Robert, it would have been much better to avoid this NFS situation altogether by avoiding those different implementations in the first place. Unfortunately this wasn't the case and I agree this is not great. In any case, here are the manual steps that are performed by the migration

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Edward R Huyer
> On Wed, 2021-12-08 at 16:06 +, Marc wrote: > > > > > > It isn't possible to upgrade from CentOS 7 to anything... At least > > > without requiring massive hacks that may or may not work (and most > > > likely won't). > > > > I meant wipe the OS disk, install whatever, install nautilus and put

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Jan Kasprzak
Carlos, Carlos Mogas da Silva wrote: : From what I can gather, this will not be smooth at all, since I can't make an in-place upgrade of the : OS first and then Ceph, nor the other way around. So the idea is to create a totally new Ceph : cluster from scratch and migrate the data from

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Carlos Mogas da Silva
(replying to list again as I forgot to "group reply") On Wed, 2021-12-08 at 16:06 +, Marc wrote: > > > > It isn't possible to upgrade from CentOS 7 to anything... At least > > without requiring massive hacks > > that may or may not work (and most likely won't). > > I meant wipe the OS disk,

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Marc
> > From what I can gather, this will not be smooth at all, since I can't > make an in-place upgrade of the > OS first and then Ceph, nor the other way around. I think it would be easier to upgrade one node at a time from CentOS 7 to ... + nautilus. And when that is done, do the upgrade to
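When rebuilding one node at a time, the usual precaution is to pause rebalancing for the duration of each reinstall (sketch only):

    ceph osd set noout
    # reinstall the OS, install the matching nautilus packages, restore /etc/ceph and the OSD data devices
    ceph osd unset noout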

[ceph-users] Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Carlos Mogas da Silva
Hey guys. From what I can gather, this will not be smooth at all, since I can't make an in-place upgrade of the OS first and then Ceph, nor the other way around. So the idea is to create a totally new Ceph cluster from scratch and migrate the data from one to another. The question is, how do
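For RBD images, one way to move data between two independent clusters is to stream an export from one into an import on the other; a minimal sketch, assuming both configs are present on one host and that the pool/image names are placeholders:

    rbd -c /etc/ceph/old.conf export rbd/vm-disk-1 - \
      | rbd -c /etc/ceph/new.conf import - rbd/vm-disk-1

rbd-mirror, or a file-level copy for CephFS/RGW data, are alternatives depending on what the pools hold.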

[ceph-users] Re: RBDMAP clients rendering themselves as "Jewel" in "Luminous" ceph cluster

2021-12-08 Thread Alex Gorbachev
You may be running into a bug with the way client features are interpreted. You may want to review these links, below. Also, you can check your running kernel with e.g. uname -a. https://www.mail-archive.com/ceph-users@ceph.io/msg03365.html
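To see how the cluster classifies connected clients (and which kernel a client runs), these are commonly used:

    ceph features                                   # groups connected clients by release/feature bits
    ceph osd dump | grep require_min_compat_client
    uname -r                                        # on the client itself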

[ceph-users] Re: RBDMAP clients rendering themselves as "Jewel" in "Luminous" ceph cluster

2021-12-08 Thread Konstantin Shalygin
Obviously some of your clients may be upgraded and some are not, or were upgraded but not rebooted. Double-check the client IPs from the mon sessions. k > On 8 Dec 2021, at 16:38, Kamil Kuramshin wrote: > > I have already tried kernel 4.19 - newest available from stretch-backports > repo - nothing
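Client addresses and feature bits can be read from the monitor sessions; this has to be run on a mon host (the mon name is a placeholder):

    ceph daemon mon.mon1 sessions | less    # look for the client addresses and "features" entries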

[ceph-users] Re: RBDMAP clients rendering themselves as "Jewel" in "Luminous" ceph cluster

2021-12-08 Thread Konstantin Shalygin
One of (client / version): kernel-mainline 4.13; kernel-el 3.10.0-862; librados 12.2.0. k > On 8 Dec 2021, at 16:05, Kamil Kuramshin wrote: > > What kernel version should be used to get rid of "Jewel" version features > from sessions list?
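Once every client runs one of those versions, the cluster can be told to require it (only safe after the older clients are gone):

    ceph osd dump | grep require_min_compat_client
    ceph osd set-require-min-compat-client luminous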

[ceph-users] Re: CEPHADM_STRAY_DAEMON with iSCSI service

2021-12-08 Thread Paul Giralt (pgiralt)
https://tracker.ceph.com/issues/5 -Paul Sent from my iPhone On Dec 8, 2021, at 8:00 AM, Robert Sander wrote: Hi, i just upgraded to 16.2.7 and deployed an iSCSI service. Now I get for each configured target three stray daemons

[ceph-users] CEPHADM_STRAY_DAEMON with iSCSI service

2021-12-08 Thread Robert Sander
Hi, I just upgraded to 16.2.7 and deployed an iSCSI service. Now, for each configured target, I get three stray daemons (tcmu-runner) that are not managed by cephadm: HEALTH_WARN 6 stray daemon(s) not managed by cephadm [WRN] CEPHADM_STRAY_DAEMON: 6 stray daemon(s) not managed by cephadm
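Useful starting points for mapping the stray tcmu-runner entries back to the iSCSI service (nothing here removes anything):

    ceph health detail
    ceph orch ps | grep -i -e iscsi -e tcmu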

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
On 08.12.21 at 01:11, David Galloway wrote: * Cephadm & Ceph Dashboard: NFS management has been completely reworked to ensure that NFS exports are managed consistently across the different Ceph components. Prior to this, there were 3 incompatible implementations for configuring the NFS

[ceph-users] Re: Local NTP servers on monitor node's.

2021-12-08 Thread Anthony D'Atri
I’ve had good success with this strategy: have the mons chime with each other, and perhaps have the OSD / other nodes sync against the mons too. Chrony >> ntpd. With modern interval backoff / iburst there’s no reason not to have a robust set of peers. The public NTP pools rotate DNS on some period, so
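A sketch of the setup being described, with hypothetical hostnames — mons act as the time sources and everything else syncs to them:

    # /etc/chrony.conf excerpt on OSD and other non-mon nodes
    server mon1.internal iburst
    server mon2.internal iburst
    server mon3.internal iburst

    # verify on any node
    chronyc sources -v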

[ceph-users] Re: Local NTP servers on monitor node's.

2021-12-08 Thread Janne Johansson
On Wed 8 Dec 2021 at 02:35, mhnx wrote: > I've been building Ceph clusters since 2014 and the most annoying and > worst failure is the NTP server faults and having different times on > Ceph nodes. > > I've fixed a few clusters because of NTP failures. > - Sometimes NTP servers can be
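When chasing this kind of failure, the monitors' own view of clock skew is worth checking:

    ceph time-sync-status
    ceph status | grep -i skew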