Get one of the time services up and running, and then you'll get past this. The
error message is quite misleading.
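To get time sync going, a minimal sketch (assuming a systemd-based host using chrony as its time service):
sudo systemctl enable --now chronyd   # start the chrony time service
chronyc tracking                      # confirm the clock is actually synchronized
timedatectl status                    # quick check that NTP sync is reported active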
On Wed, Apr 26, 2023 at 3:07 PM Ben wrote:
> Hi,
> This does not seem very relevant, since all Ceph components are running in
> containers. Any ideas for getting past this issue? Any other ideas or
>
Hi, it is UOS V20 (with kernel 4.19), one Linux distribution among others. It
shouldn't matter, though, since cephadm deploys everything in containers by default.
cephadm was pulled with curl from the Quincy branch on GitHub.
I think you would see some sort of error if you removed the
--single-host-defaults parameter.
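For reference, a minimal sketch of the bootstrap invocation in question (the mon IP is a placeholder; the URL is the documented way to fetch cephadm from the quincy branch):
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
chmod +x cephadm
sudo ./cephadm bootstrap --mon-ip <mon-ip> --single-host-defaults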
rados approved.
On Sun, May 7, 2023 at 11:24 PM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
>
Hi, I want to upgrade my old Ceph cluster + radosgw from v14 to v15, but I'm
not using cephadm, and I'm not sure how to minimize errors during the upgrade
process.
Here are my upgrade steps:
First, upgrade from 14.2.18 to 14.2.22 (the latest Nautilus version).
Then, upgrade it from
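For what it's worth, a hedged sketch of the usual per-node flow for a manual (non-cephadm) upgrade; the package commands assume a Debian/Ubuntu host and are illustrative only:
ceph osd set noout                    # avoid rebalancing while daemons restart
apt update && apt install -y ceph     # upgrade packages on one node at a time
systemctl restart ceph-mon.target     # restart order: mons, then mgrs, then OSDs, then RGWs
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
ceph versions                         # confirm all daemons report the new version
ceph osd unset noout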
Hi Janne,
Thank you for your response.
I used the `ceph pg deep-scrub` command, and all results point to
osd.166.
I checked the SMART data and syslog on osd.166; the disk looks fine.
The number of PGs with late deep-scrubs is lower now; however, it has been
5 days since my last post.
I attached the perf dump.
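For anyone following along, a hedged sketch of commands that can help narrow a scrub problem down to one OSD (the device path and pgid are placeholders):
ceph pg ls-by-osd 166        # list the PGs that map to the suspect OSD
smartctl -a /dev/sdX         # re-check SMART on the backing device
ceph pg deep-scrub <pgid>    # re-run a deep scrub on a single suspect PG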
I don't mean to hijack the thread, but I may be observing something similar
with 16.2.12: OSD performance noticeably peaks after an OSD restart and then
gradually degrades over 10-14 days, while commit and apply latencies
increase across the board.
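For reference, a quick way to watch those latencies (a sketch using stock commands; osd.0 is a placeholder):
ceph osd perf                                         # per-OSD commit/apply latency snapshot
ceph daemon osd.0 perf dump | grep -A3 op_w_latency   # drill into one OSD's write latency counters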
Non-default settings are:
Hey Nikola,
On 5/8/2023 10:13 PM, Nikola Ciprich wrote:
OK, starting to collect those for all OSDs..
I have hourly samples of all OSDs' perf dumps loaded in a DB, so I can easily
examine,
sort, whatever..
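(For context, a hypothetical collection loop of the kind described, assuming local OSD admin sockets:)
while true; do
  for sock in /var/run/ceph/ceph-osd.*.asok; do
    ceph daemon "$sock" perf dump > "$(basename "$sock").$(date +%s).json"
  done
  sleep 3600   # one sample per hour, without resetting the counters
done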
You didn't reset the counters every hour, did you? So having the average
subop_w_latency growing
Josh,
The 16.2.13 RC1 has all approvals, and unless I hear any objections, I
will start publishing later today.
On Mon, May 8, 2023 at 12:09 PM Radoslaw Zarzynski wrote:
>
> rados approved.
>
> On Sun, May 7, 2023 at 11:24 PM Yuri Weinstein wrote:
> >
> > All PRs were cherry-picked and the new
Hello Igor,
I have been checking the performance every day since Tuesday. Every day it
seemed to be the same: ~60-70 kOPS on random writes from a single VM.
Yesterday it finally dropped to 20 kOPS,
and today to 10 kOPS. I also tried with a newly created volume; the result
(after prefill)
is the same, so it
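For reference, a hedged fio sketch of the kind of random-write test described (the device path and job parameters are placeholders):
fio --name=randwrite --rw=randwrite --bs=4k --iodepth=64 --numjobs=4 \
    --ioengine=libaio --direct=1 --time_based --runtime=60 \
    --filename=/dev/vdb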
Hi Eugene,
I had removed the zone before removing it from the zonegroup. I will check the
objects and remove the appropriate ones. Thank you.
As outlined in the thread, after setting the config for the rgw service, they
started OK.
Thank you,
Anantha
-----Original Message-----
From: Eugen
Thank you so much.
Here it is. I set them to the current values and now the rgw services are up.
Should the configuration variables be set automatically for the gateway
services as part of multisite configuration updates? Or should it be a manual
procedure?
ceph config dump | grep
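For anyone hitting the same thing, a sketch of setting those values by hand (the option names are the standard rgw ones; the values are placeholders):
ceph config set client.rgw rgw_realm <realm>
ceph config set client.rgw rgw_zonegroup <zonegroup>
ceph config set client.rgw rgw_zone <zone>
ceph config dump | grep rgw_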
Hi,
how exactly did you remove the configuration?
Check out the .rgw.root pool; there are different namespaces where the
corresponding objects are stored.
rados -p .rgw.root ls --all
You should be able to remove those objects from the pool, but be
careful not to delete anything you actually
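(A hedged sketch of the cleanup; the namespace and object names below are placeholders to adapt:)
rados -p .rgw.root ls --all                       # entries are listed per namespace
rados -p .rgw.root -N <namespace> rm <stale-zone-object>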
Are the old multisite conf values still in ceph.conf (e.g., rgw_zonegroup,
rgw_zone, rgw_realm)?
From: Adiga, Anantha
Sent: 08 May 2023 18:27
To: ceph-users@ceph.io
Subject: [ceph-users] rgw service fails to start with zone not found
Hi,
An existing multisite configuration was removed. But the radosgw services
still see the old zone name and fail to start.
journalctl -u
ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@rgw.default.default.fl31ca104ja0201.ninovs
...
May 08 16:10:48 fl31ca104ja0201 bash[3964341]: debug
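(For context, a sketch of commands that show which multisite entities radosgw still knows about; nothing here is specific to this cluster:)
radosgw-admin realm list
radosgw-admin zonegroup list
radosgw-admin zone list
ceph config dump | grep -E 'rgw_(realm|zonegroup|zone)'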
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
>
Hi,
could you provide some more details about your host OS? Which cephadm
version is it? I was able to bootstrap a one-node cluster with both
17.2.5 and 17.2.6 as a non-root user, with no such error, on openSUSE
Leap 15.4:
quincy:~ # rpm -qa | grep cephadm
OK, the hosts seem to be online again, but it took quite a long time..
On 08.05.2023 at 13:22, Mevludin Blazevic wrote:
Hi all,
after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of
our Ceph hosts, I get a Ceph warning saying that cephadm "Can't
communicate with remote
Hi all,
after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of
our Ceph hosts, I get a Ceph warning saying that cephadm "Can't
communicate with remote host `...`, possibly because python3 is not
installed there: [Errno 12] Cannot allocate memory", although Python3 is
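(A hedged sketch of checks that can help here; the hostname is a placeholder:)
ceph cephadm check-host <hostname>   # re-run cephadm's host checks from the mgr
ceph orch host ls                    # see the orchestrator's view of the host
free -m                              # ENOMEM suggests checking memory pressure too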
Hi,
We have an Octopus cluster that we want to move from CentOS to Ubuntu. After
activating all the OSDs, the device class is not shown in ceph osd tree.
However, ceph-volume lvm list shows the CRUSH device class :/
Should I just add it back, or?
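(For reference, a sketch of re-adding the class by hand; the class name and OSD id are placeholders:)
ceph osd crush rm-device-class osd.<id>        # only if a wrong class is already set
ceph osd crush set-device-class hdd osd.<id>   # re-add the expected class
ceph osd tree | grep <id>                      # verify it shows up again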
On Mon, May 8, 2023 at 2:54 AM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
>
Hi,
with the following command:
sudo cephadm --docker bootstrap --mon-ip 10.1.32.33 --skip-monitoring-stack
--ssh-user deployer
The user deployer has a passwordless sudo configuration.
I can see the error below:
debug 2023-05-04T12:46:43.268+ 7fc5ddc2e700 0 [cephadm ERROR
cephadm.ssh] Unable
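(A hedged sketch for verifying the non-root SSH setup cephadm needs; the user and IP are taken from the command above:)
ssh deployer@10.1.32.33 sudo true   # should succeed with no password prompt
sudo -l -U deployer                 # on the host: list the user's sudo rules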