I’m not exactly sure what I did, but it’s going through now. I ran
ceph orch upgrade check --ceph-version 16.2.7
(my current version), then did a pause and resume. Now daemons are upgrading to 16.2.11.
-jeremy
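If it helps to double-check from here: a quick way to confirm the daemons really are moving to the intended release (a sketch using standard commands, not part of the original message) is to compare running versions against the upgrade target:

# ceph versions
# ceph orch upgrade status

ceph versions prints how many daemons of each type are on each version, so a partial upgrade shows up as a mix of 16.2.7 and 16.2.11 entries.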
> On Monday, Feb 27, 2023 at 11:07 PM, Me (mailto:jer...@skidrow.la)> wrote:
>
[ceph: root@cn01 /]# ceph -W cephadm
  cluster:
    id:     bfa2ad58-c049-11eb-9098-3c8cf8ed728d
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum cn05,cn02,cn03,cn04,cn01 (age 111m)
    mgr: cn06.rpkpwg(active, since 7h), standbys: cn02.arszct, cn03.elmwhu
    mds: 2/2 daemons up, 2 standby
    osd: 35 osds: 35 up
Did any of your cluster get a partial upgrade? What about ceph -W cephadm,
does that return anything or just hang? Also, what about ceph health
detail? You can always try ceph orch upgrade pause and then ceph orch upgrade
resume; it might kick something loose, so to speak.
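Roughly, that pause/resume sequence (standard cephadm orchestrator commands, shown here as a sketch; adjust to your cluster) looks like:

# ceph orch upgrade status
# ceph orch upgrade pause
# ceph orch upgrade resume
# ceph -W cephadm

The last command streams the cephadm log, so you can watch whether daemons start upgrading again after the resume.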
On Tue, Feb 28, 2023, 10:39
{
"target_image": "quay.io/ceph/ceph:v16.2.11",
"in_progress": true,
"services_complete": [],
"progress": "",
"message": ""
}
Hasn’t changed in the past two hours.
-jeremy
> On Monday, Feb 27, 2023 at 10:22 PM, Curt (mailto:light...@gmail.com)> wrote:
> What does ceph orch upgrade status
What does ceph orch upgrade status return?
On Tue, Feb 28, 2023, 10:16 Jeremy Hansen wrote:
> I’m trying to upgrade from 16.2.7 to 16.2.11. Reading the documentation,
> I cut and pasted the orchestrator command to begin the upgrade, but I
> mistakenly pasted directly from the docs and it
I’m trying to upgrade from 16.2.7 to 16.2.11. Reading the documentation, I cut
and pasted the orchestrator command to begin the upgrade, but I mistakenly
pasted directly from the docs and it initiated an “upgrade” to 16.2.6. I
stopped the upgrade per the docs and reissued the command specifying
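For anyone who makes the same copy/paste mistake, the stop-and-restart step referred to above is just the two standard orchestrator commands (a sketch; the version shown is the intended target from this thread):

# ceph orch upgrade stop
# ceph orch upgrade start --ceph-version 16.2.11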
I recently performed an update from exactly the version mentioned, 17.2.3, to
17.2.5, using cephadm.
Like you, I was extremely concerned that something could go wrong since I have
very little knowledge of what happens in the background.
However, there were no problems :) I worried for nothing.
On 2/27/23 03:22, Andrej Filipcic wrote:
On 2/24/23 15:18, Dan van der Ster wrote:
Hi Andrej,
That doesn't sound right -- I checked a couple of our clusters just
now and the mon filesystem is writing at just a few 100kBps.
Most of the time it's a few 10 kB/s, but then it jumps a lot, a few times a
minute. Did you measure it for a long
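One way to measure it over a longer window (a sketch using standard tools; the store.db paths below depend on whether the mon was deployed from packages or with cephadm, so treat them as assumptions):

# pidstat -d -p $(pgrep ceph-mon) 10
# du -sh /var/lib/ceph/mon/*/store.db
# du -sh /var/lib/ceph/*/mon.*/store.db

pidstat -d reports the per-process write rate every 10 seconds, and watching the store.db size over time shows how quickly the mon database grows and compacts.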
Hi,
the issue is solved now after executing this command:
# ceph auth get-or-create client.${rbdName} mon "allow r" osd "allow
rwx pool ${rbdPoolName} object_prefix rbd_data.${imageID}; allow rwx
pool ${rbdPoolName} object_prefix rbd_header.${imageID}; allow rx pool
${rbdPoolName} object_prefix
It needs to be inside the " with your other commands.
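Reading that suggestion: the whole semicolon-separated OSD capability string has to sit inside one pair of double quotes, otherwise the shell treats the semicolons as command separators. Illustratively (the trailing object_prefix clause is truncated in the original message, and the variables are assumed to be set beforehand):

# ceph auth get-or-create client.${rbdName} mon "allow r" \
    osd "allow rwx pool ${rbdPoolName} object_prefix rbd_data.${imageID}; allow rwx pool ${rbdPoolName} object_prefix rbd_header.${imageID}; allow rx pool ${rbdPoolName} object_prefix ..."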
On Mon, Feb 27, 2023, 16:55 Thomas Schneider <74cmo...@gmail.com> wrote:
> Hi,
>
> I get an error running this ceph auth get-or-create syntax:
>
> # ceph auth get-or-create client.${rbdName} mon "allow r" osd "allow
> rwx pool ${rbdPoolName}
Hi,
I get an error running this ceph auth get-or-create syntax:
# ceph auth get-or-create client.${rbdName} mon "allow r" osd "allow
rwx pool ${rbdPoolName} object_prefix rbd_data.${imageID}; allow rwx
pool ${rbdPoolName} object_prefix rbd_header.${imageID}; allow rx pool
${rbdPoolName}
Hi Cephers,
We have two Octopus 15.2.17 clusters in a multisite configuration. Every
once in a while we have to perform a bucket reshard (most recently to
613 shards) and this practically kills our replication for a few days. Does
anyone know of any priority mechanics within sync to give
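Not an answer to the priority question, but while the reshard backlog drains it can help to watch how far behind the other zone is (standard radosgw-admin commands, shown as a sketch with a placeholder bucket name):

# radosgw-admin sync status
# radosgw-admin bucket sync status --bucket=<bucket-name>

The first shows the overall metadata/data sync lag between zones; the second narrows it to the recently resharded bucket.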