[ceph-users] Re: ERROR: Distro uos version 20 not supported

2023-05-08 Thread Ben
Get one of the time services up and running, then you will get through this. The error message is quite misleading. Ben wrote on Wed, Apr 26, 2023 at 15:07: > Hi, > This seems not very relevant since all ceph components are running in > containers. Any ideas to get over this issue? Any other ideas or >
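For reference, a minimal sketch of bringing a time service up before retrying, assuming chronyd (the thread does not name which time service was used):

    sudo systemctl enable --now chronyd
    chronyc tracking            # confirm the local clock is synchronized
    ceph time-sync-status       # check mon clock skew from the cluster side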

[ceph-users] Re: non root deploy ceph 17.2.5 failed

2023-05-08 Thread Ben
Hi, It is uos v20 (with kernel 4.19), one Linux distribution among others. It shouldn't matter, since cephadm deploys things in containers by default. cephadm was pulled via curl from the Quincy branch on GitHub. I think you would see some sort of error if you removed the parameter --single-host-defaults. More
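For context, a sketch of the standard curl-based fetch of cephadm for Quincy (the exact URL the poster used is not shown; this is the documented pattern):

    curl --silent --remote-name --location \
        https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
    chmod +x cephadm
    ./cephadm version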

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Radoslaw Zarzynski
rados approved. On Sun, May 7, 2023 at 11:24 PM Yuri Weinstein wrote: > > All PRs were cherry-picked and the new RC1 build is: > > https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/ > > Rados, fs and rgw were rerun and results are summarized here: >

[ceph-users] Upgrade Ceph cluster + radosgw from 14.2.18 to latest 15

2023-05-08 Thread viplanghe6
Hi, I want to upgrade my old Ceph cluster + Radosgw from v14 to v15, but I'm not using cephadm and I'm not sure how to limit errors as much as possible during the upgrade process. Here are my upgrade steps: Firstly, upgrade from 14.2.18 to 14.2.22 (latest Nautilus version). Then, upgrade it from
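A rough sketch of the usual non-cephadm rolling upgrade order (mons first, then mgrs, OSDs, and finally radosgw), assuming packages are upgraded host by host:

    ceph osd set noout
    # per host: upgrade packages, then restart daemons in order
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    ceph versions                          # verify all daemons run the new release
    ceph osd require-osd-release octopus   # only after all OSDs run Octopus
    ceph osd unset noout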

[ceph-users] Re: pg deep-scrub issue

2023-05-08 Thread Peter
Hi Janne, Thank you for your response. I used the `ceph pg deep-scrub ` command, and all returns point to osd.166. I checked SMART data and syslog on osd.166; the disk is fine. The number of PGs with late deep-scrubs is lower now, though it has been 5 days since my last post. I attached the perf dump
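A sketch of how the late PGs can be traced back to a common OSD (the pg id and device path are placeholders):

    ceph health detail | grep 'not deep-scrubbed'
    ceph pg map <pgid>        # lists the acting OSD set for each late PG
    smartctl -a /dev/sdX      # SMART check on the suspect OSD's backing disk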

[ceph-users] Re: quincy 17.2.6 - write performance continuously slowing down until OSD restart needed

2023-05-08 Thread Zakhar Kirpichenko
Don't mean to hijack the thread, but I may be observing something similar with 16.2.12: OSD performance noticeably peaks after OSD restart and then gradually reduces over 10-14 days, while commit and apply latencies increase across the board. Non-default settings are:
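For anyone following along, the per-OSD commit/apply latencies mentioned here can be watched with (a sketch):

    ceph osd perf | sort -k2 -nr | head    # OSDs with highest commit latency first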

[ceph-users] Re: quincy 17.2.6 - write performance continuously slowing down until OSD restart needed

2023-05-08 Thread Igor Fedotov
Hey Nikola, On 5/8/2023 10:13 PM, Nikola Ciprich wrote: OK, starting to collect those for all OSDs.. I have hourly samples of all OSDs' perf dumps loaded in a DB, so I can easily examine, sort, whatever.. You didn't reset the counters every hour, did you? So having average subop_w_latency growing
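A sketch of sampling the counter in question without resetting it; perf counters are cumulative avgcount/sum pairs, so per-hour averages come from diffing successive samples (osd.0 and the jq path are illustrative):

    ceph daemon osd.0 perf dump | jq '.osd.subop_w_latency'
    # diff successive avgcount/sum samples for true per-interval averages,
    # rather than running `perf reset`, which discards the history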

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Yuri Weinstein
Josh, The 16.2.13 RC1 has all approvals and unless I hear any objections, I will start publishing later today. On Mon, May 8, 2023 at 12:09 PM Radoslaw Zarzynski wrote: > > rados approved. > > On Sun, May 7, 2023 at 11:24 PM Yuri Weinstein wrote: > > > > All PRs were cherry-picked and the new

[ceph-users] Re: quincy 17.2.6 - write performance continuously slowing down until OSD restart needed

2023-05-08 Thread Nikola Ciprich
Hello Igor, so I have been checking the performance every day since Tuesday. Every day it seemed to be the same: ~60-70 kOPS on random write from a single VM. Yesterday it finally dropped to 20 kOPS, today to 10 kOPS. I also tried with a newly created volume; the result (after prefill) is the same, so it
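The poster does not name the benchmark tool; a hypothetical fio invocation that would produce comparable 4k random-write numbers from inside the VM (the device path is a placeholder and its contents will be overwritten):

    fio --name=randwrite --filename=/dev/vdb --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=64 --numjobs=4 \
        --direct=1 --time_based --runtime=60 --group_reporting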

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
Hi Eugen, I had removed the zone before removing it from the zonegroup. I will check the objects and remove the appropriate ones. Thank you. As outlined in the thread, after setting the config for the rgw service, they started OK. Thank you, Anantha -Original Message- From: Eugen

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
Thank you so much. Here it is. I set them to the current values and now the rgw services are up. Should the configuration variables get set automatically for the gateway services as part of multisite configuration updates, or should it be a manual procedure? ceph config dump | grep
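For reference, a sketch of setting those variables by hand (the rgw client id and service name are placeholders; the option names are the standard rgw ones):

    ceph config set client.rgw.<id> rgw_realm <realm>
    ceph config set client.rgw.<id> rgw_zonegroup <zonegroup>
    ceph config set client.rgw.<id> rgw_zone <zone>
    ceph orch restart rgw.<service>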

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Eugen Block
Hi, how exactly did you remove the configuration? Check out the .rgw.root pool, there are different namespaces where the corresponding objects are stored. rados -p .rgw.root ls --all You should be able to remove those objects from the pool, but be careful to not delete anything you actually
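A sketch of the cleanup being described (namespace and object names are placeholders; double-check each object before deleting):

    rados -p .rgw.root ls --all
    rados -p .rgw.root -N <namespace> rm <stale-zone-object>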

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Danny Webb
are the old multisite conf values still in ceph.conf (e.g. rgw_zonegroup, rgw_zone, rgw_realm)? From: Adiga, Anantha Sent: 08 May 2023 18:27 To: ceph-users@ceph.io Subject: [ceph-users] rgw service fails to start with zone not found
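A quick way to check, assuming the default config path:

    grep -E 'rgw_(realm|zonegroup|zone)' /etc/ceph/ceph.conf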

[ceph-users] rgw service fails to start with zone not found

2023-05-08 Thread Adiga, Anantha
Hi, An existing multisite configuration was removed, but the radosgw services still see the old zone name and fail to start. journalctl -u ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@rgw.default.default.fl31ca104ja0201.ninovs ... May 08 16:10:48 fl31ca104ja0201 bash[3964341]: debug
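A sketch of commands to see which realm/zonegroup/zone definitions the cluster still knows about:

    radosgw-admin realm list
    radosgw-admin zonegroup list
    radosgw-admin zone list
    radosgw-admin period get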

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Casey Bodley
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote: > > All PRs were cherry-picked and the new RC1 build is: > > https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/ > > Rados, fs and rgw were rerun and results are summarized here: >

[ceph-users] Re: non root deploy ceph 17.2.5 failed

2023-05-08 Thread Eugen Block
Hi, could you provide some more details about your host OS? Which cephadm version is it? I was able to bootstrap a one-node cluster with both 17.2.5 and 17.2.6 with a non-root user with no such error on openSUSE Leap 15.4: quincy:~ # rpm -qa | grep cephadm

[ceph-users] Re: Ceph Host offline after performing dnf upgrade on RHEL 8.7 host

2023-05-08 Thread Mevludin Blazevic
Ok, the hosts seem to be online again, but it took quite a long time.. On 08.05.2023 at 13:22, Mevludin Blazevic wrote: Hi all, after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of our Ceph hosts, I get a Ceph warning describing that cephadm "Can't communicate with remote

[ceph-users] Ceph Host offline after performing dnf upgrade on RHEL 8.7 host

2023-05-08 Thread Mevludin Blazevic
Hi all, after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of our Ceph hosts, I get a Ceph warning describing that cephadm "Can't communicate with remote host `...`, possibly because python3 is not installed there: [Errno 12] Cannot allocate memory", although Python3 is
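A few checks that usually narrow this down (a sketch; failing over the active mgr is a common way to clear a stale SSH connection after a host upgrade):

    cephadm check-host        # run on the affected host itself
    ceph orch host ls         # host status as cephadm currently sees it
    ceph mgr fail             # fail over the active mgr to force a reconnect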

[ceph-users] Os changed to Ubuntu, device class not shown

2023-05-08 Thread Szabo, Istvan (Agoda)
Hi, We have an Octopus cluster where we want to move from CentOS to Ubuntu. After activating all the OSDs, the device class is not shown in ceph osd tree. However, ceph-volume list shows the crush device class :/ Should I just add it back, or?
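If the class really is missing from the CRUSH map, re-adding it manually would look like this (class name and OSD id are placeholders):

    ceph osd crush rm-device-class osd.<id>      # only if a wrong class is set
    ceph osd crush set-device-class hdd osd.<id>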

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Venky Shankar
On Mon, May 8, 2023 at 2:54 AM Yuri Weinstein wrote: > > All PRs were cherry-picked and the new RC1 build is: > > https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/ > > Rados, fs and rgw were rerun and results are summarized here: >

[ceph-users] non root deploy ceph 17.2.5 failed

2023-05-08 Thread Ben
Hi, with the following command: sudo cephadm --docker bootstrap --mon-ip 10.1.32.33 --skip-monitoring-stack --ssh-user deployer (the user deployer has a passwordless sudo configuration), I can see the error below: debug 2023-05-04T12:46:43.268+ 7fc5ddc2e700 0 [cephadm ERROR cephadm.ssh] Unable
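For comparison, a sketch of the non-root prerequisites cephadm expects on each host (the hostname is a placeholder):

    # allow passwordless sudo for the deploy user
    echo 'deployer ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/deployer
    # from the bootstrap node: confirm key-based SSH plus sudo both work
    ssh deployer@<host> sudo true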