Hi Sebastian,
I know Ceph isn't meant for that. We have 3 clusters: two of them have 9
nodes each, with 3 mons and 3 managers. Only one of them is a 2-node cluster. We use
this 2-node cluster only for testing and development purposes; we didn't want to
spend more resources on a test-only environment.
Thank you
Hi Sebastian,
No worries about the delay. I just ran that command; however, it returns:
$ ceph mon ok-to-stop vx-rg23-rk65-u43-130
Error EBUSY: not enough monitors would be available (vx-rg23-rk65-u43-130-1) after stopping mons [vx-rg23-rk65-u43-130]
It seems we have some progress here. In the
Sorry for the late response.
I'm seeing
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
in the logs.
Please make sure `ceph mon ok-to-stop vx-rg23-rk65-u43-130`
succeeds.
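(To explain why this check fails on your cluster, as I understand the quorum rule: monitors need a strict majority of the monmap, i.e. floor(n/2) + 1 of n mons, to keep quorum.

n = 2 mons: majority = floor(2/2) + 1 = 2; stopping 1 leaves 1 < 2 -> quorum would be lost, so the check refuses.
n = 3 mons: majority = floor(3/2) + 1 = 2; stopping 1 leaves 2 >= 2 -> quorum holds, so the check passes.

So with only two mons the check can never pass; a third mon is the usual way out.)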
On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Hi Sebastian,
>
> I cannot see my replies in here. So I
Hi Sebastian,
Thank you for the reply.
When I ran that command I got:
[17:09] [root] [vx-rg23-rk65-u43-130 ~] # ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130
quorum should be preserved (vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1) after stopping [mon.vx-rg23-rk65-u43-130]
Does this mean
On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
Please make sure
ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130
returns OK.
--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg).
Hi Sebastian,
I cannot see my replies in here. So I put the attachment in the body here:
2020-05-21T18:52:36.813+ 7faf19f20040 0 set uid:gid to 167:167 (ceph:ceph)
2020-05-21T18:52:36.813+ 7faf19f20040 0 ceph version 15.2.2
(0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable),
Hi Ashley,
Thank you for the warning. I will not update to 15.2.2 at the moment. And yes, I did
not get any email from Sebastian, but it's there in the ceph list. I replied via
email, but I cannot see Sebastian's email address, so I'm not sure whether he has
seen my previous reply or not.
I've sent the mgr logs, but I hope
Hi Sebastian,
I did not get your reply via e-mail. I am very sorry for this. I hope you can
see this message...
I've re-run the upgrade and attached the log.
Thanks,
Gencer.
Hello,
Yes, I did, but I wasn't able to suggest anything further to get around it. However:
1/ There is currently an issue with 15.2.2, so I would advise holding off on any upgrade.
2/ Another mailing list user replied to one of your older emails in the thread asking
for some manager logs; not sure if you have
Hi Ashley,
Have you seen my previous reply? If so, and there is no solution, does anyone have
any idea how this can be done with 2 nodes?
Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç wrote:
This is a 2-node setup. I have no third node :(
I am planning to add more in the future but currently
Hi Gencer,
I'm going to need the full mgr log file.
Best,
Sebastian
On 20.05.20 at 15:07, Gencer W. Genç wrote:
> Ah yes,
>
> {
>     "mon": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
>     },
>     "mgr": {
>         "ceph version
This is a 2-node setup. I have no third node :(
I am planning to add more in the future, but currently it's 2 nodes only.
At the moment, is there a --force option for such usage?
On 20.05.2020 16:32:15, Ashley Merrick wrote:
Correct; however, it will need to stop one to do the upgrade, leaving you with
only one working MON (this is what I would suggest the error means, seeing as I had
the same thing when I only had a single MGR). Normally 3 MONs are suggested due to
quorum.
Do you not have a node you can run a
I have 2 mons and 2 mgrs.
  cluster:
    id:     7d308992-8899-11ea-8537-7d489fa7c193
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys:
Yes, I think it's because you're only running two mons, so the script is halting
at a check to stop you from ending up with just one running (no backup).
I had the same issue with a single MGR instance and had to add a second to
allow the upgrade to continue. Can you bring up an extra MON?
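If cephadm manages your mons, a minimal sketch of what I would try (assuming you have a spare host to hand; "node3" here is a hypothetical name):

ceph orch host add node3                    # register the new host with cephadm
ceph orch apply mon 3                       # let cephadm schedule a third mon
ceph mon ok-to-stop vx-rg23-rk65-u43-130    # re-check; should pass with 3 mons up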
Hi Ashley,
I see this:
[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
Does this mean anything to you?
I've also attached the full log. See especially
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug
See if you see anything that stands out as an issue with the update; it seems it
has completed only the two MGR instances.
If not:
ceph orch upgrade stop
ceph orch upgrade start --ceph-version 15.2.2
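Once you're done debugging, you can drop the override again so the cluster log isn't flooded (a sketch, assuming you had no custom value set beforehand):

ceph config rm mgr mgr/cephadm/log_to_cluster_level   # revert to the default log level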
Ah yes,
{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.1
Does:
ceph versions
show any services yet running on 15.2.2?
On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç wrote:
Hi Ashley,
$ ceph orch upgrade status
{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}
Thanks,
Gencer.
On 20.05.2020 15:58:34, Ashley Merrick wrote:
What does
ceph orch upgrade status
show?
On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç wrote:
Hi,
I have 15.2.1 installed on all machines. On the primary machine I executed the ceph
upgrade command:
$ ceph orch upgrade start --ceph-version 15.2.2
When I check ceph -s I