[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-06-05 Thread Gencer W . Genç
Hi Sebastian,

I know Ceph isn't meant for that. See, we have 3 clusters. Two of them have 9
nodes each, with 3 mons and 3 managers. Only one of them is a 2-node cluster. We use this
2-node cluster only for testing and development purposes. We didn't want to spend more
resources on a test-only environment.

Thank you so much once again for your great help!

Gencer.
On 6.06.2020 00:15:15, Sebastian Wagner  wrote:


On 05.06.20 at 22:47, Gencer W. Genç wrote:
> Hi Sebastian,
>
> I went ahead and dug into the github.com/ceph source code. I see that
> mons are grouped under the name 'mon'. This makes me think that maybe
> the hostname is actually 'mon' rather than 'mon.abcx..'. So I went ahead and tried:
>
> ceph config set mon container_image docker.io/ceph/ceph:v15.2.3
> ceph orch redeploy mon
>
> Attention to *'mon'* instead of *'mon.hostname'*.
>
> What happened? Success. I've successfully upgraded both monitors to
> v15.2.3. :)

great!
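
For anyone hitting the same thing: after a redeploy like that, the result can be double-checked with the usual status commands (nothing cluster-specific here):

ceph versions
ceph orch ps

ceph versions should now count both mons under 15.2.3, and ceph orch ps shows which container image each daemon is actually running.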

>
> However, this time it says it is NOT safe to stop osd.9.
>
> I have 2 replicas and 2 nodes. Each node has 10 OSDs. However, ceph says it
> is not safe to stop osd.9 due to:
>
> ceph osd ok-to-stop osd.9
> Error EBUSY: 16 PGs are already too degraded, would become too degraded
> or might become unavailable

This might be fixable. Please have a look at your crush map and your failure
domains. If you have a failure domain = host and a replica size of 3,
Ceph isn't able to fulfil that requirement with just two hosts.
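
As a sketch of how to check that (the pool and rule names below are just examples, adjust them to your cluster):

ceph osd pool ls detail
ceph osd crush rule dump replicated_rule

The first shows size/min_size per pool, the second shows the failure domain the rule uses (type host vs. type osd). With only two hosts, a pool with size 2 and min_size 2 means the PGs on a stopped OSD would immediately drop below min_size, which is the kind of situation the ok-to-stop error above is warning about.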

>
> I don't get it. If 2 replicas are available, why is it not safe to stop one
> of them?
>
> Man, why is it so hard to upgrade a 2-node cluster with cephadm when ceph-deploy
> does this seamlessly?

Indeed. Ceph isn't meant to be used with just two nodes: mon quorum,
replica size, failure domain.
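
To spell out the quorum arithmetic behind that (a general rule, not anything specific to this cluster): with N monitors, quorum requires floor(N/2) + 1 of them. For N = 2 that is 2, so stopping either mon loses quorum; for N = 3 it is still 2, so one mon can be taken down for an upgrade while the cluster stays available.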

>
> Thanks again and stay safe!,
> Gencer.

Thanks, you too!
>
>
>
>> On 5.06.2020 19:27:02, Gencer W. Genç wrote:
>>
>> Hi Sebastian,
>>
>> After I enabled debug mode and waited for another 10 minutes, I see
>> something like this:
>>
>> As you can see below, the mon 'config get' still returns the old v15 image
>> instead of the v15.2.3 image.
>>
>> I've also attached the whole mgr log to this email.
>>
>> LOGS:
>>
>> 2020-06-05T16:12:40.238+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : Have connection to vx-rg23-rk65-u43-130
>> 2020-06-05T16:12:40.242+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : mon_command: 'config get' -> 0 in 0.002s
>> 2020-06-05T16:12:40.242+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : mon container image docker.io/ceph/ceph:v15
>> 2020-06-05T16:12:40.242+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : args: --image docker.io/ceph/ceph:v15 list-networks
>> 2020-06-05T16:12:40.430+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : code: 0
>> 2020-06-05T16:12:40.430+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : out: {
>>     "5.254.67.88/30": [
>>         "5.254.67.90"
>>     ],
>>     "5.254.96.128/29": [
>>         "5.254.96.130"
>>     ],
>>     "37.221.171.144/30": [
>>         "37.221.171.146"
>>     ],
>>     "93.115.82.132/30": [
>>         "93.115.82.134"
>>     ],
>>     "109.163.234.32/30": [
>>         "109.163.234.34"
>>     ],
>>     "172.17.0.0/16": [
>>         "172.17.0.1"
>>     ],
>>     "192.168.0.0/24": [
>>         "192.168.0.1"
>>     ]
>> }
>> 2020-06-05T16:12:40.430+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : Refreshed host vx-rg23-rk65-u43-130 devices (17) networks (7)
>> 2020-06-05T16:12:40.454+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] :  checking vx-rg23-rk65-u43-130-1
>> 2020-06-05T16:12:40.454+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : Have connection to vx-rg23-rk65-u43-130-1
>> 2020-06-05T16:12:40.454+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : client container image None
>> 2020-06-05T16:12:40.454+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : args: check-host
>> 2020-06-05T16:12:40.982+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : code: 0
>> 2020-06-05T16:12:40.982+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : err: INFO:cephadm:podman|docker (/usr/bin/docker) is present
>> INFO:cephadm:systemctl is present
>> INFO:cephadm:lvcreate is present
>> INFO:cephadm:Unit ntp.service is enabled and running
>> INFO:cephadm:Host looks OK
>> 2020-06-05T16:12:41.026+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] :  host vx-rg23-rk65-u43-130-1 ok
>> 2020-06-05T16:12:41.026+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : refreshing vx-rg23-rk65-u43-130-1 daemons
>> 2020-06-05T16:12:41.030+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : Have connection to vx-rg23-rk65-u43-130-1
>> 2020-06-05T16:12:41.030+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : mon_command: 'config get' -> 0 in 0.002s
>> 2020-06-05T16:12:41.030+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : mon container image docker.io/ceph/ceph:v15
>> 2020-06-05T16:12:41.030+ 7f752b27e700  0 log_channel(cephadm) log
>> [DBG] : args: --image docker.io/cep

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-06-04 Thread Gencer W . Genç
Hi Sebastian,

No worries about the delay. I just ran that command; however, it returns:

$ ceph mon ok-to-stop vx-rg23-rk65-u43-130

Error EBUSY: not enough monitors would be available (vx-rg23-rk65-u43-130-1) 
after stopping mons [vx-rg23-rk65-u43-130]

It seems we have some progress here. In the past the command reported quorum. This
time it acknowledges the monitor hostname but fails because not enough monitors
would remain after stopping it.

Any idea on this step?

Thanks,
Gencer.
On 4.06.2020 13:20:09, Sebastian Wagner  wrote:
sorry for the late response.

I'm seeing

> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

in the logs.

please make sure `ceph mon ok-to-stop vx-rg23-rk65-u43-130`

succeeds.
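
If it doesn't, the current monitor view can be inspected with the standard status commands:

ceph mon stat
ceph quorum_status -f json-pretty

which show how many mons exist and which of them are currently in quorum.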





On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Hi Sebastian,
>
> I cannot see my replies in here, so I put the attachment in the body here:
>
> 2020-05-21T18:52:36.813+ 7faf19f20040 0 set uid:gid to 167:167 (ceph:ceph)
> 2020-05-21T18:52:36.813+ 7faf19f20040 0 ceph version 15.2.2 
> (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process 
> ceph-mgr, pid 1
> 2020-05-21T18:52:36.817+ 7faf19f20040 0 pidfile_write: ignore empty 
> --pid-file
> 2020-05-21T18:52:36.853+ 7faf19f20040 1 mgr[py] Loading python module 
> 'alerts'
> 2020-05-21T18:52:36.957+ 7faf19f20040 1 mgr[py] Loading python module 
> 'balancer'
> 2020-05-21T18:52:37.029+ 7faf19f20040 1 mgr[py] Loading python module 
> 'cephadm'
> 2020-05-21T18:52:37.237+ 7faf19f20040 1 mgr[py] Loading python module 
> 'crash'
> 2020-05-21T18:52:37.333+ 7faf19f20040 1 mgr[py] Loading python module 
> 'dashboard'
> 2020-05-21T18:52:37.981+ 7faf19f20040 1 mgr[py] Loading python module 
> 'devicehealth'
> 2020-05-21T18:52:38.045+ 7faf19f20040 1 mgr[py] Loading python module 
> 'diskprediction_local'
> 2020-05-21T18:52:38.221+ 7faf19f20040 1 mgr[py] Loading python module 
> 'influx'
> 2020-05-21T18:52:38.293+ 7faf19f20040 1 mgr[py] Loading python module 
> 'insights'
> 2020-05-21T18:52:38.425+ 7faf19f20040 1 mgr[py] Loading python module 
> 'iostat'
> 2020-05-21T18:52:38.489+ 7faf19f20040 1 mgr[py] Loading python module 
> 'k8sevents'
> 2020-05-21T18:52:39.077+ 7faf19f20040 1 mgr[py] Loading python module 
> 'localpool'
> 2020-05-21T18:52:39.133+ 7faf19f20040 1 mgr[py] Loading python module 
> 'orchestrator'
> 2020-05-21T18:52:39.277+ 7faf19f20040 1 mgr[py] Loading python module 
> 'osd_support'
> 2020-05-21T18:52:39.433+ 7faf19f20040 1 mgr[py] Loading python module 
> 'pg_autoscaler'
> 2020-05-21T18:52:39.545+ 7faf19f20040 1 mgr[py] Loading python module 
> 'progress'
> 2020-05-21T18:52:39.633+ 7faf19f20040 1 mgr[py] Loading python module 
> 'prometheus'
> 2020-05-21T18:52:40.013+ 7faf19f20040 1 mgr[py] Loading python module 
> 'rbd_support'
> 2020-05-21T18:52:40.253+ 7faf19f20040 1 mgr[py] Loading python module 
> 'restful'
> 2020-05-21T18:52:40.553+ 7faf19f20040 1 mgr[py] Loading python module 
> 'rook'
> 2020-05-21T18:52:41.229+ 7faf19f20040 1 mgr[py] Loading python module 
> 'selftest'
> 2020-05-21T18:52:41.285+ 7faf19f20040 1 mgr[py] Loading python module 
> 'status'
> 2020-05-21T18:52:41.357+ 7faf19f20040 1 mgr[py] Loading python module 
> 'telegraf'
> 2020-05-21T18:52:41.421+ 7faf19f20040 1 mgr[py] Loading python module 
> 'telemetry'
> 2020-05-21T18:52:41.581+ 7faf19f20040 1 mgr[py] Loading python module 
> 'test_orchestrator'
> 2020-05-21T18:52:41.937+ 7faf19f20040 1 mgr[py] Loading python module 
> 'volumes'
> 2020-05-21T18:52:42.121+ 7faf19f20040 1 mgr[py] Loading python module 
> 'zabbix'
> 2020-05-21T18:52:42.189+ 7faf06a1a700 0 ms_deliver_dispatch: unhandled 
> message 0x556226c8e6e0 mon_map magic: 0 v1 from mon.1 v2:192.168.0.3:3300/0
> 2020-05-21T18:52:43.557+ 7faf06a1a700 1 mgr handle_mgr_map Activating!
> 2020-05-21T18:52:43.557+ 7faf06a1a700 1 mgr handle_mgr_map I am now 
> activating
> 2020-05-21T18:52:43.665+ 7faed44a7700 0 [balancer DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.665+ 7faed44a7700 1 mgr load Constructed class from 
> module: balancer
> 2020-05-21T18:52:43.665+ 7faed44a7700 0 [cephadm DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.689+ 7faed44a7700 1 mgr load Constructed class from 
> module: cephadm
> 2020-05-21T18:52:43.689+ 7faed44a7700 0 [crash DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.689+ 7faed44a7700 1 mgr load Constructed class from 
> module: crash
> 2020-05-21T18:52:43.693+ 7faed44a7700 0 [dashboard DEBUG root] setting 
> log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.693+ 7faed44a7700 1 mgr load Constructed class from 
> module: dashboard
> 2020-05-21T18:52:43.693+ 7faed44a7700 0 [devicehealth DEBUG root] setting 
> log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.693+ 7faed44a770

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-06-04 Thread Sebastian Wagner
sorry for the late response.

I'm seeing

> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

in the logs.

please make sure `ceph mon ok-to-stop vx-rg23-rk65-u43-130`

succeeds.





On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Hi Sebastian,
> 
> I cannot see my replies in here, so I put the attachment in the body here:
> 
> 2020-05-21T18:52:36.813+ 7faf19f20040  0 set uid:gid to 167:167 
> (ceph:ceph)
> 2020-05-21T18:52:36.813+ 7faf19f20040  0 ceph version 15.2.2 
> (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process 
> ceph-mgr, pid 1
> 2020-05-21T18:52:36.817+ 7faf19f20040  0 pidfile_write: ignore empty 
> --pid-file
> 2020-05-21T18:52:36.853+ 7faf19f20040  1 mgr[py] Loading python module 
> 'alerts'
> 2020-05-21T18:52:36.957+ 7faf19f20040  1 mgr[py] Loading python module 
> 'balancer'
> 2020-05-21T18:52:37.029+ 7faf19f20040  1 mgr[py] Loading python module 
> 'cephadm'
> 2020-05-21T18:52:37.237+ 7faf19f20040  1 mgr[py] Loading python module 
> 'crash'
> 2020-05-21T18:52:37.333+ 7faf19f20040  1 mgr[py] Loading python module 
> 'dashboard'
> 2020-05-21T18:52:37.981+ 7faf19f20040  1 mgr[py] Loading python module 
> 'devicehealth'
> 2020-05-21T18:52:38.045+ 7faf19f20040  1 mgr[py] Loading python module 
> 'diskprediction_local'
> 2020-05-21T18:52:38.221+ 7faf19f20040  1 mgr[py] Loading python module 
> 'influx'
> 2020-05-21T18:52:38.293+ 7faf19f20040  1 mgr[py] Loading python module 
> 'insights'
> 2020-05-21T18:52:38.425+ 7faf19f20040  1 mgr[py] Loading python module 
> 'iostat'
> 2020-05-21T18:52:38.489+ 7faf19f20040  1 mgr[py] Loading python module 
> 'k8sevents'
> 2020-05-21T18:52:39.077+ 7faf19f20040  1 mgr[py] Loading python module 
> 'localpool'
> 2020-05-21T18:52:39.133+ 7faf19f20040  1 mgr[py] Loading python module 
> 'orchestrator'
> 2020-05-21T18:52:39.277+ 7faf19f20040  1 mgr[py] Loading python module 
> 'osd_support'
> 2020-05-21T18:52:39.433+ 7faf19f20040  1 mgr[py] Loading python module 
> 'pg_autoscaler'
> 2020-05-21T18:52:39.545+ 7faf19f20040  1 mgr[py] Loading python module 
> 'progress'
> 2020-05-21T18:52:39.633+ 7faf19f20040  1 mgr[py] Loading python module 
> 'prometheus'
> 2020-05-21T18:52:40.013+ 7faf19f20040  1 mgr[py] Loading python module 
> 'rbd_support'
> 2020-05-21T18:52:40.253+ 7faf19f20040  1 mgr[py] Loading python module 
> 'restful'
> 2020-05-21T18:52:40.553+ 7faf19f20040  1 mgr[py] Loading python module 
> 'rook'
> 2020-05-21T18:52:41.229+ 7faf19f20040  1 mgr[py] Loading python module 
> 'selftest'
> 2020-05-21T18:52:41.285+ 7faf19f20040  1 mgr[py] Loading python module 
> 'status'
> 2020-05-21T18:52:41.357+ 7faf19f20040  1 mgr[py] Loading python module 
> 'telegraf'
> 2020-05-21T18:52:41.421+ 7faf19f20040  1 mgr[py] Loading python module 
> 'telemetry'
> 2020-05-21T18:52:41.581+ 7faf19f20040  1 mgr[py] Loading python module 
> 'test_orchestrator'
> 2020-05-21T18:52:41.937+ 7faf19f20040  1 mgr[py] Loading python module 
> 'volumes'
> 2020-05-21T18:52:42.121+ 7faf19f20040  1 mgr[py] Loading python module 
> 'zabbix'
> 2020-05-21T18:52:42.189+ 7faf06a1a700  0 ms_deliver_dispatch: unhandled 
> message 0x556226c8e6e0 mon_map magic: 0 v1 from mon.1 v2:192.168.0.3:3300/0
> 2020-05-21T18:52:43.557+ 7faf06a1a700  1 mgr handle_mgr_map Activating!
> 2020-05-21T18:52:43.557+ 7faf06a1a700  1 mgr handle_mgr_map I am now 
> activating
> 2020-05-21T18:52:43.665+ 7faed44a7700  0 [balancer DEBUG root] setting 
> log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.665+ 7faed44a7700  1 mgr load Constructed class from 
> module: balancer
> 2020-05-21T18:52:43.665+ 7faed44a7700  0 [cephadm DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.689+ 7faed44a7700  1 mgr load Constructed class from 
> module: cephadm
> 2020-05-21T18:52:43.689+ 7faed44a7700  0 [crash DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.689+ 7faed44a7700  1 mgr load Constructed class from 
> module: crash
> 2020-05-21T18:52:43.693+ 7faed44a7700  0 [dashboard DEBUG root] setting 
> log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.693+ 7faed44a7700  1 mgr load Constructed class from 
> module: dashboard
> 2020-05-21T18:52:43.693+ 7faed44a7700  0 [devicehealth DEBUG root] 
> setting log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.693+ 7faed44a7700  1 mgr load Constructed class from 
> module: devicehealth
> 2020-05-21T18:52:43.701+ 7faed44a7700  0 [iostat DEBUG root] setting log 
> level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.701+ 7faed44a7700  1 mgr load Constructed class from 
> module: iostat
> 2020-05-21T18:52:43.709+ 7faed44a7700  0 [orchestrator DEBUG root] 
> setting log level based on debug_mgr: WARNING (1/5)
> 2020-05-21T18:52:43.709+ 7faed44a7700  1 mgr load Constructe

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-25 Thread Gencer W . Genç
Hi Sebastian,

Thank you for the reply.

When I ran that command I got:

[17:09] [root] [vx-rg23-rk65-u43-130 ~] # ceph mon ok-to-stop 
mon.vx-rg23-rk65-u43-130
quorum should be preserved (vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1) after 
stopping [mon.vx-rg23-rk65-u43-130]

Does this mean the upgrade can continue?

If so, should I upgrade, or wait for 15.2.3? @Ashley said 15.2.2 has
problems.

Thanks,
Gencer.

On 25.05.2020 19:25:49, Sebastian Wagner  wrote:


On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

please make sure

ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130

returns ok

--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-25 Thread Sebastian Wagner


On 22.05.20 at 19:28, Gencer W. Genç wrote:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

please make sure

ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130

returns ok

-- 
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-22 Thread Gencer W . Genç
Hi Sebastian,

I cannot see my replies in here, so I put the attachment in the body here:

2020-05-21T18:52:36.813+ 7faf19f20040  0 set uid:gid to 167:167 (ceph:ceph)
2020-05-21T18:52:36.813+ 7faf19f20040  0 ceph version 15.2.2 
(0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process ceph-mgr, 
pid 1
2020-05-21T18:52:36.817+ 7faf19f20040  0 pidfile_write: ignore empty 
--pid-file
2020-05-21T18:52:36.853+ 7faf19f20040  1 mgr[py] Loading python module 
'alerts'
2020-05-21T18:52:36.957+ 7faf19f20040  1 mgr[py] Loading python module 
'balancer'
2020-05-21T18:52:37.029+ 7faf19f20040  1 mgr[py] Loading python module 
'cephadm'
2020-05-21T18:52:37.237+ 7faf19f20040  1 mgr[py] Loading python module 
'crash'
2020-05-21T18:52:37.333+ 7faf19f20040  1 mgr[py] Loading python module 
'dashboard'
2020-05-21T18:52:37.981+ 7faf19f20040  1 mgr[py] Loading python module 
'devicehealth'
2020-05-21T18:52:38.045+ 7faf19f20040  1 mgr[py] Loading python module 
'diskprediction_local'
2020-05-21T18:52:38.221+ 7faf19f20040  1 mgr[py] Loading python module 
'influx'
2020-05-21T18:52:38.293+ 7faf19f20040  1 mgr[py] Loading python module 
'insights'
2020-05-21T18:52:38.425+ 7faf19f20040  1 mgr[py] Loading python module 
'iostat'
2020-05-21T18:52:38.489+ 7faf19f20040  1 mgr[py] Loading python module 
'k8sevents'
2020-05-21T18:52:39.077+ 7faf19f20040  1 mgr[py] Loading python module 
'localpool'
2020-05-21T18:52:39.133+ 7faf19f20040  1 mgr[py] Loading python module 
'orchestrator'
2020-05-21T18:52:39.277+ 7faf19f20040  1 mgr[py] Loading python module 
'osd_support'
2020-05-21T18:52:39.433+ 7faf19f20040  1 mgr[py] Loading python module 
'pg_autoscaler'
2020-05-21T18:52:39.545+ 7faf19f20040  1 mgr[py] Loading python module 
'progress'
2020-05-21T18:52:39.633+ 7faf19f20040  1 mgr[py] Loading python module 
'prometheus'
2020-05-21T18:52:40.013+ 7faf19f20040  1 mgr[py] Loading python module 
'rbd_support'
2020-05-21T18:52:40.253+ 7faf19f20040  1 mgr[py] Loading python module 
'restful'
2020-05-21T18:52:40.553+ 7faf19f20040  1 mgr[py] Loading python module 
'rook'
2020-05-21T18:52:41.229+ 7faf19f20040  1 mgr[py] Loading python module 
'selftest'
2020-05-21T18:52:41.285+ 7faf19f20040  1 mgr[py] Loading python module 
'status'
2020-05-21T18:52:41.357+ 7faf19f20040  1 mgr[py] Loading python module 
'telegraf'
2020-05-21T18:52:41.421+ 7faf19f20040  1 mgr[py] Loading python module 
'telemetry'
2020-05-21T18:52:41.581+ 7faf19f20040  1 mgr[py] Loading python module 
'test_orchestrator'
2020-05-21T18:52:41.937+ 7faf19f20040  1 mgr[py] Loading python module 
'volumes'
2020-05-21T18:52:42.121+ 7faf19f20040  1 mgr[py] Loading python module 
'zabbix'
2020-05-21T18:52:42.189+ 7faf06a1a700  0 ms_deliver_dispatch: unhandled 
message 0x556226c8e6e0 mon_map magic: 0 v1 from mon.1 v2:192.168.0.3:3300/0
2020-05-21T18:52:43.557+ 7faf06a1a700  1 mgr handle_mgr_map Activating!
2020-05-21T18:52:43.557+ 7faf06a1a700  1 mgr handle_mgr_map I am now 
activating
2020-05-21T18:52:43.665+ 7faed44a7700  0 [balancer DEBUG root] setting log 
level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.665+ 7faed44a7700  1 mgr load Constructed class from 
module: balancer
2020-05-21T18:52:43.665+ 7faed44a7700  0 [cephadm DEBUG root] setting log 
level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.689+ 7faed44a7700  1 mgr load Constructed class from 
module: cephadm
2020-05-21T18:52:43.689+ 7faed44a7700  0 [crash DEBUG root] setting log 
level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.689+ 7faed44a7700  1 mgr load Constructed class from 
module: crash
2020-05-21T18:52:43.693+ 7faed44a7700  0 [dashboard DEBUG root] setting log 
level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.693+ 7faed44a7700  1 mgr load Constructed class from 
module: dashboard
2020-05-21T18:52:43.693+ 7faed44a7700  0 [devicehealth DEBUG root] setting 
log level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.693+ 7faed44a7700  1 mgr load Constructed class from 
module: devicehealth
2020-05-21T18:52:43.701+ 7faed44a7700  0 [iostat DEBUG root] setting log 
level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.701+ 7faed44a7700  1 mgr load Constructed class from 
module: iostat
2020-05-21T18:52:43.709+ 7faed44a7700  0 [orchestrator DEBUG root] setting 
log level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.709+ 7faed44a7700  1 mgr load Constructed class from 
module: orchestrator
2020-05-21T18:52:43.717+ 7faed44a7700  0 [osd_support DEBUG root] setting 
log level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.717+ 7faed44a7700  1 mgr load Constructed class from 
module: osd_support
2020-05-21T18:52:43.717+ 7faed44a7700  0 [pg_autoscaler DEBUG root] setting 
log level based on debug_mgr: WARNING (1/5)
2020-05-21T18:52:43.721+ 7faed44a7700  1 mgr lo

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-22 Thread Gencer W . Genç
Hi Ashley,

Thank you for the warning. I will not update to 15.2.2 at the moment. And yes, I did not
get any email from Sebastian, but it's there in the ceph list. I replied by email,
but I cannot see Sebastian's email address, so I'm not sure whether he has seen my
previous reply or not.

I've sent the mgr logs, but I hope he sees them soon and didn't miss them.

Thanks,
Gencer.
On 21.05.2020 20:25:03, Ashley Merrick  wrote:
Hello,

Yes I did, but I wasn't able to suggest anything further to get around it. However:

1/ There is currently an issue with 15.2.2, so I would advise holding off any
upgrade.

2/ Another mailing list user replied to one of your older emails in the thread
asking for some manager logs; not sure if you have seen this.

Thanks




 On Fri, 22 May 2020 01:21:26 +0800 gen...@gencgiyen.com wrote 


Hi Ashley,

Have you seen my previous reply? If so, and there is no solution, does anyone have
any idea how this can be done with 2 nodes?

Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç  wrote:
This is a 2-node setup. I have no third node :(

I am planning to add more in the future, but currently it is 2 nodes only.

At the moment, is there a --force option for such usage?

On 20.05.2020 16:32:15, Ashley Merrick  wrote:
Correct; however, it will need to stop one to do the upgrade, leaving you with
only one working MON (this is what I would suggest the error means, seeing as I had
the same thing when I only had a single MGR). Normally 3 MONs are suggested
due to quorum.


Do you not have a node you can run a mon for the few minutes to complete the 
upgrade?



 On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç  wrote 


I have 2 mons and 2 mgrs.


  cluster:

    id:     7d308992-8899-11ea-8537-7d489fa7c193

    health: HEALTH_OK


  services:

    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)

    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: 
vx-rg23-rk65-u43-130-1.pxmyie

    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby

    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)


  task status:

    scrub status:

        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle


  data:

    pools:   4 pools, 97 pgs

    objects: 1.38k objects, 4.8 GiB

    usage:   35 GiB used, 87 TiB / 87 TiB avail

    pgs:     97 active+clean


  io:

    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr


  progress:

    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)

      [=...] (remaining: 9m)


Aren't both mons already up? I have no way to add a third mon, btw.


Thanks,

Gencer.


On 20.05.2020 16:21:03, Ashley Merrick  wrote:

Yes, I think it's because you're only running two mons, so the script is halting
at a check to stop you from ending up with just one running (no backup).


I had the same issue with a single MGR instance and had to add a second to
allow the upgrade to continue. Can you bring up an extra MON?


Thanks

 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç  wrote 


Hi Ashley,



I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa

[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130


Does this mean anything to you?


I've also attached the full log. See especially after line #49. I stopped and
restarted the upgrade there.


Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick  wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug


See if you see anything that stands out as an issue with the update; it seems it
has only completed the two MGR instances.


If not:


ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2


and monitor the watch-debug log


Make sure at the end you run:


ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç  wrote 


Ah yes,



{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}


How can i fix this?


Gencer.

On 20.05.2020 16:04:33, Ashley Merrick  wrote:

Does:


ceph versions


show any ser

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-21 Thread Gencer W . Genç
Hi Sebastian,

I did not get your reply via e-mail. I am very sorry for this. I hope you can 
see this message...

I've re-run the upgrade and attached the log.


Thanks,
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-21 Thread Ashley Merrick
Hello,

Yes I did, but I wasn't able to suggest anything further to get around it. However:

1/ There is currently an issue with 15.2.2, so I would advise holding off any upgrade.

2/ Another mailing list user replied to one of your older emails in the thread asking for some manager logs; not sure if you have seen this.

Thanks

 On Fri, 22 May 2020 01:21:26 +0800 gen...@gencgiyen.com wrote 

Hi Ashley,

Have you seen my previous reply? If so, and there is no solution, does anyone have any idea how this can be done with 2 nodes?

Thanks,
Gencer.


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-21 Thread Gencer W . Genç
Hi Ashley,

Have you seen my previous reply? If so, and there is no solution, does anyone have
any idea how this can be done with 2 nodes?

Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç  wrote:
This is 2 node setup. I have no third node :(

I am planning to add more in the future but currently 2 nodes only.

At the moment, is there a --force command for such usage?

On 20.05.2020 16:32:15, Ashley Merrick  wrote:
Correct, however it will need to stop one to do the upgrade leaving you with 
only one working MON (this is what I would suggest the error means seeing i had 
the same thing when I only had a single MGR), normally is suggested to have 3 
MONs due to quorum.


Do you not have a node you can run a mon for the few minutes to complete the 
upgrade?



 On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç  
wrote 


I have 2 mons and 2 mgrs.


  cluster:

    id:     7d308992-8899-11ea-8537-7d489fa7c193

    health: HEALTH_OK


  services:

    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)

    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: 
vx-rg23-rk65-u43-130-1.pxmyie

    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby

    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)


  task status:

    scrub status:

        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle


  data:

    pools:   4 pools, 97 pgs

    objects: 1.38k objects, 4.8 GiB

    usage:   35 GiB used, 87 TiB / 87 TiB avail

    pgs:     97 active+clean


  io:

    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr


  progress:

    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)

      [=...] (remaining: 9m)


Isn't both mons already up? I have no way to add third mon btw.


Thnaks,

Gencer.


On 20.05.2020 16:21:03, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

Yes, I think it's because your only running two mons, so the script is halting 
at a check to stop you being in the position of just one running (no backup).


I had the same issue with a single MGR instance and had to add a second to 
allow to upgrade to continue, can you bring up an extra MON?


Thanks

 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi Ashley,



I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa

[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130


Does this meaning anything to you?


I've also attached full log. See especially after line #49. I stopped and 
restart upgrade there.


Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug


See if you see anything that stands out as an issue with the update, seems it 
has completed only the two MGR instances


If not:


ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2


and monitor the watch-debug log


Make sure at the end you run:


ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Ah yes,



{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}


How can i fix this?


Gencer.

On 20.05.2020 16:04:33, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

Does:


ceph versions


show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi Ashley,

$ ceph orch upgrade status


{

    "target_image": "docker.io/ceph/ceph:v15.2.2",

    "in_progress": true,

    "services_complete": [],

    "message": ""

}


Thanks,

Gencer.


On 20.05.2020 15:58:34, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

What does

ceph orch upgrade status

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi,

I've 15.2.1 installed on all machines. On primary machine I executed ceph 
upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours. It is already ran for 3 hours

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Sebastian Wagner
Hi Gencer,

I'm going to need the full mgr log file.
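
(For reference, on a cephadm/containerized deployment the mgr log can usually be grabbed on the host running the active mgr with something along these lines; the fsid and daemon name are taken from earlier in this thread and may differ on your cluster:

cephadm ls | grep mgr
cephadm logs --fsid 7d308992-8899-11ea-8537-7d489fa7c193 --name mgr.vx-rg23-rk65-u43-130.arnvag > mgr.log
)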

Best,
Sebastian

On 20.05.20 at 15:07, Gencer W. Genç wrote:
> Ah yes,
> 
> {
>     "mon": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) 
> octopus (stable)": 2
>     },
>     "mgr": {
>         "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) 
> octopus (stable)": 2
>     },
>     "osd": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) 
> octopus (stable)": 24
>     },
>     "mds": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) 
> octopus (stable)": 2
>     },
>     "overall": {
>         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) 
> octopus (stable)": 28,
>         "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) 
> octopus (stable)": 2
>     }
> }
> 
> How can i fix this?
> 
> Gencer.
> On 20.05.2020 16:04:33, Ashley Merrick  wrote:
> Does:
> 
> 
> ceph versions
> 
> 
> show any services yet running on 15.2.2?
> 
> 
> 
>  On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç  
> wrote 
> 
> 
> Hi Ashley,
> $ ceph orch upgrade status
> 
> 
> {
> 
>     "target_image": "docker.io/ceph/ceph:v15.2.2",
> 
>     "in_progress": true,
> 
>     "services_complete": [],
> 
>     "message": ""
> 
> }
> 
> 
> Thanks,
> 
> Gencer.
> 
> 
> On 20.05.2020 15:58:34, Ashley Merrick  [mailto:singap...@amerrick.co.uk]> wrote:
> 
> What does
> 
> ceph orch upgrade status
> 
> show?
> 
> 
> 
>  On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç  [mailto:gen...@gencgiyen.com]> wrote 
> 
> 
> Hi,
> 
> I've 15.2.1 installed on all machines. On primary machine I executed ceph 
> upgrade command:
> 
> $ ceph orch upgrade start --ceph-version 15.2.2
> 
> 
> When I check ceph -s I see this:
> 
>   progress:
>     Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
>       [=...] (remaining: 8h)
> 
> It says 8 hours. It has already been running for 3 hours. No upgrade has been processed. It gets
> stuck at this point.
> 
> Is there any way to know why this has stuck?
> 
> Thanks,
> Gencer.
> ___
> ceph-users mailing list -- ceph-users@ceph.io [mailto:ceph-users@ceph.io]
> To unsubscribe send an email to ceph-users-le...@ceph.io 
> [mailto:ceph-users-le...@ceph.io]
> 
> 
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> 

-- 
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Gencer W . Genç
This is a 2-node setup. I have no third node :(

I am planning to add more in the future, but currently it is 2 nodes only.

At the moment, is there a --force option for such usage?

On 20.05.2020 16:32:15, Ashley Merrick  wrote:
Correct, however it will need to stop one to do the upgrade leaving you with 
only one working MON (this is what I would suggest the error means seeing i had 
the same thing when I only had a single MGR), normally is suggested to have 3 
MONs due to quorum.


Do you not have a node you can run a mon for the few minutes to complete the 
upgrade?



 On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç  
wrote 


I have 2 mons and 2 mgrs.


  cluster:

    id:     7d308992-8899-11ea-8537-7d489fa7c193

    health: HEALTH_OK


  services:

    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)

    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: 
vx-rg23-rk65-u43-130-1.pxmyie

    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby

    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)


  task status:

    scrub status:

        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle


  data:

    pools:   4 pools, 97 pgs

    objects: 1.38k objects, 4.8 GiB

    usage:   35 GiB used, 87 TiB / 87 TiB avail

    pgs:     97 active+clean


  io:

    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr


  progress:

    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)

      [=...] (remaining: 9m)


Isn't both mons already up? I have no way to add third mon btw.


Thnaks,

Gencer.


On 20.05.2020 16:21:03, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

Yes, I think it's because your only running two mons, so the script is halting 
at a check to stop you being in the position of just one running (no backup).


I had the same issue with a single MGR instance and had to add a second to 
allow to upgrade to continue, can you bring up an extra MON?


Thanks

 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi Ashley,



I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa

[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130


Does this meaning anything to you?


I've also attached full log. See especially after line #49. I stopped and 
restart upgrade there.


Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug


See if you see anything that stands out as an issue with the update, seems it 
has completed only the two MGR instances


If not:


ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2


and monitor the watch-debug log


Make sure at the end you run:


ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Ah yes,



{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}


How can i fix this?


Gencer.

On 20.05.2020 16:04:33, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

Does:


ceph versions


show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi Ashley,

$ ceph orch upgrade status


{

    "target_image": "docker.io/ceph/ceph:v15.2.2",

    "in_progress": true,

    "services_complete": [],

    "message": ""

}


Thanks,

Gencer.


On 20.05.2020 15:58:34, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

What does

ceph orch upgrade status

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi,

I've 15.2.1 installed on all machines. On primary machine I executed ceph 
upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours. It is already ran for 3 hours. No upgrade processed. It get 
stuck at this point.

Is there any way to know why this has stuck?

Thanks,
Gencer.
___
ceph-users mailing list -- ceph-u

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Ashley Merrick
Correct; however, it will need to stop one to do the upgrade, leaving you with
only one working MON (this is what I would suggest the error means, seeing as I had
the same thing when I only had a single MGR). Normally 3 MONs are suggested
due to quorum.



Do you not have a node you can run a mon for the few minutes to complete the 
upgrade?
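
(If a third machine is available even temporarily, a mon can be added with cephadm roughly like this; host name and IP below are placeholders:

ceph orch host add <new-host>
ceph orch daemon add mon <new-host>:<ip>

and the extra mon can be removed again once the upgrade has finished.)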





 On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç  
wrote 



I have 2 mons and 2 mgrs.



  cluster:

    id:     7d308992-8899-11ea-8537-7d489fa7c193

    health: HEALTH_OK



  services:

    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)

    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: 
vx-rg23-rk65-u43-130-1.pxmyie

    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby

    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)



  task status:

    scrub status:

        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle



  data:

    pools:   4 pools, 97 pgs

    objects: 1.38k objects, 4.8 GiB

    usage:   35 GiB used, 87 TiB / 87 TiB avail

    pgs:     97 active+clean



  io:

    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr



  progress:

    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)

      [=...] (remaining: 9m)




Isn't both mons already up? I have no way to add third mon btw.



Thnaks,

Gencer.



On 20.05.2020 16:21:03, Ashley Merrick  wrote:

Yes, I think it's because your only running two mons, so the script is halting 
at a check to stop you being in the position of just one running (no backup).



I had the same issue with a single MGR instance and had to add a second to 
allow to upgrade to continue, can you bring up an extra MON?



Thanks

 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç 
 wrote 



Hi Ashley,





I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa


[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130



Does this meaning anything to you?



I've also attached full log. See especially after line #49. I stopped and 
restart upgrade there.



Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick  wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug



See if you see anything that stands out as an issue with the update, seems it 
has completed only the two MGR instances



If not:



ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2



and monitor the watch-debug log



Make sure at the end you run:



ceph config set mgr mgr/cephadm/log_to_cluster_level info





 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç 
 wrote 



Ah yes,





{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}




How can i fix this?



Gencer.

On 20.05.2020 16:04:33, Ashley Merrick  wrote:

Does:



ceph versions



show any services yet running on 15.2.2?





 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç 
 wrote 



Hi Ashley,

$ ceph orch upgrade status 




{

    "target_image": "docker.io/ceph/ceph:v15.2.2",

    "in_progress": true,

    "services_complete": [],

    "message": ""

}




Thanks,

Gencer.



On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does 

ceph orch upgrade status 

show?





 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç 
 wrote 



Hi, 
 
I've 15.2.1 installed on all machines. On primary machine I executed ceph 
upgrade command: 
 
$ ceph orch upgrade start --ceph-version 15.2.2 
 
 
When I check ceph -s I see this: 
 
  progress: 
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m) 
      [=...] (remaining: 8h) 
 
It says 8 hours. It is already ran for 3 hours. No upgrade processed. It get 
stuck at this point. 
 
Is there any way to know why this has stuck? 
 
Thanks, 
Gencer.
___
ceph-users mailing list -- mailto:ceph-users@ceph.io
To unsubscribe send an email to mailto:ceph-users-le...@ceph.io
___
ceph-users mail

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Gencer W . Genç
I have 2 mons and 2 mgrs.

  cluster:
    id:     7d308992-8899-11ea-8537-7d489fa7c193
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: 
vx-rg23-rk65-u43-130-1.pxmyie
    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby
    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)

  task status:
    scrub status:
        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 1.38k objects, 4.8 GiB
    usage:   35 GiB used, 87 TiB / 87 TiB avail
    pgs:     97 active+clean

  io:
    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)
      [=...] (remaining: 9m)

Aren't both mons already up? I have no way to add a third mon, btw.

Thanks,
Gencer.
On 20.05.2020 16:21:03, Ashley Merrick  wrote:
Yes, I think it's because your only running two mons, so the script is halting 
at a check to stop you being in the position of just one running (no backup).


I had the same issue with a single MGR instance and had to add a second to 
allow to upgrade to continue, can you bring up an extra MON?


Thanks

 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç  
wrote 


Hi Ashley,


I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa

[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130


Does this meaning anything to you?


I've also attached full log. See especially after line #49. I stopped and 
restart upgrade there.


Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug


See if you see anything that stands out as an issue with the update, seems it 
has completed only the two MGR instances


If not:


ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2


and monitor the watch-debug log


Make sure at the end you run:


ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Ah yes,


{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}


How can i fix this?


Gencer.

On 20.05.2020 16:04:33, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

Does:


ceph versions


show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi Ashley,
$ ceph orch upgrade status


{

    "target_image": "docker.io/ceph/ceph:v15.2.2",

    "in_progress": true,

    "services_complete": [],

    "message": ""

}


Thanks,

Gencer.


On 20.05.2020 15:58:34, Ashley Merrick mailto:singap...@amerrick.co.uk]> wrote:

What does

ceph orch upgrade status

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç mailto:gen...@gencgiyen.com]> wrote 


Hi,

I've 15.2.1 installed on all machines. On primary machine I executed ceph 
upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours. It is already ran for 3 hours. No upgrade processed. It get 
stuck at this point.

Is there any way to know why this has stuck?

Thanks,
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io [mailto:ceph-users@ceph.io]
To unsubscribe send an email to ceph-users-le...@ceph.io 
[mailto:ceph-users-le...@ceph.io]








___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Ashley Merrick
Yes, I think it's because you're only running two mons, so the script is halting
at a check to stop you from ending up with just one running (no backup).



I had the same issue with a single MGR instance and had to add a second to
allow the upgrade to continue. Can you bring up an extra MON?


Thanks
 On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç  
wrote 


Hi Ashley,



I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa


[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130



Does this meaning anything to you?



I've also attached full log. See especially after line #49. I stopped and 
restart upgrade there.



Thanks,

Gencer.

On 20.05.2020 16:13:00, Ashley Merrick  wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug



See if you see anything that stands out as an issue with the update, seems it 
has completed only the two MGR instances



If not:



ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2



and monitor the watch-debug log



Make sure at the end you run:



ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç 
 wrote 


Ah yes,



{

    "mon": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "mgr": {

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    },

    "osd": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24

    },

    "mds": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2

    },

    "overall": {

        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,

        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2

    }

}




How can i fix this?



Gencer.

On 20.05.2020 16:04:33, Ashley Merrick  wrote:

Does:



ceph versions



show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç 
 wrote 


Hi Ashley,$ ceph orch upgrade status 




{

    "target_image": "docker.io/ceph/ceph:v15.2.2",

    "in_progress": true,

    "services_complete": [],

    "message": ""

}




Thanks,

Gencer.



On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does 

ceph orch upgrade status 

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç 
 wrote 


Hi, 
 
I've 15.2.1 installed on all machines. On primary machine I executed ceph 
upgrade command: 
 
$ ceph orch upgrade start --ceph-version 15.2.2 
 
 
When I check ceph -s I see this: 
 
  progress: 
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m) 
      [=...] (remaining: 8h) 
 
It says 8 hours. It is already ran for 3 hours. No upgrade processed. It get 
stuck at this point. 
 
Is there any way to know why this has stuck? 
 
Thanks, 
Gencer.
___
ceph-users mailing list -- mailto:ceph-users@ceph.io
To unsubscribe send an email to mailto:ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Gencer W . Genç
Hi Ashley,

I see this:
[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 
4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130


Does this mean anything to you?

I've also attached the full log. See especially after line #49; I stopped and 
restarted the upgrade there.

Thanks,
Gencer.
On 20.05.2020 16:13:00, Ashley Merrick  wrote:
ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug


See if anything stands out as an issue with the update; it seems to have 
completed only the two MGR instances.


If not:


ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2


and monitor the watch-debug log


Make sure at the end you run:


ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç  
wrote 


Ah yes,


{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 24
    },
    "mds": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "overall": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 28,
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    }
}


How can I fix this?


Gencer.

On 20.05.2020 16:04:33, Ashley Merrick  wrote:

Does:


ceph versions


show any services yet running on 15.2.2?



On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç  wrote 


Hi Ashley,
$ ceph orch upgrade status


{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}


Thanks,

Gencer.


On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does

ceph orch upgrade status

show?



On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç  wrote 


Hi,

I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?

Thanks,
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Ashley Merrick
ceph config set mgr mgr/cephadm/log_to_cluster_level debug

ceph -W cephadm --watch-debug



See if anything stands out as an issue with the update; it seems to have 
completed only the two MGR instances.



If not:



ceph orch upgrade stop

ceph orch upgrade start --ceph-version 15.2.2



and monitor the watch-debug log



Make sure at the end you run:



ceph config set mgr mgr/cephadm/log_to_cluster_level info



 On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç  
wrote 


Ah yes,



{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 24
    },
    "mds": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "overall": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 28,
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    }
}




How can I fix this?



Gencer.

On 20.05.2020 16:04:33, Ashley Merrick  wrote:

Does:



ceph versions



show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç 
 wrote 


Hi Ashley,

$ ceph orch upgrade status




{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}




Thanks,

Gencer.



On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does 

ceph orch upgrade status 

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç 
 wrote 


Hi, 
 
I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:
 
$ ceph orch upgrade start --ceph-version 15.2.2 
 
 
When I check ceph -s I see this: 
 
  progress: 
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m) 
      [=...] (remaining: 8h) 
 
It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?
 
Thanks, 
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Gencer W . Genç
Ah yes,

{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2
    },
    "osd": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 24
    },
    "mds": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 2
    },
    "overall": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus 
(stable)": 28,
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus 
(stable)": 2
    }
}

How can I fix this?

Gencer.
On 20.05.2020 16:04:33, Ashley Merrick  wrote:
Does:


ceph versions


show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç  
wrote 


Hi Ashley,
$ ceph orch upgrade status


{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}


Thanks,

Gencer.


On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does

ceph orch upgrade status

show?



On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç  wrote 


Hi,

I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?

Thanks,
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Ashley Merrick
Does:



ceph versions



show any services yet running on 15.2.2?



 On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç  
wrote 


Hi Ashley,

$ ceph orch upgrade status




{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}




Thanks,

Gencer.



On 20.05.2020 15:58:34, Ashley Merrick  wrote:

What does 

ceph orch upgrade status 

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç 
 wrote 


Hi, 
 
I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:
 
$ ceph orch upgrade start --ceph-version 15.2.2 
 
 
When I check ceph -s I see this: 
 
  progress: 
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m) 
      [=...] (remaining: 8h) 
 
It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?
 
Thanks, 
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Gencer W . Genç
Hi Ashley,
$ ceph orch upgrade status


{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}

Thanks,
Gencer.
On 20.05.2020 15:58:34, Ashley Merrick  wrote:
What does

ceph orch upgrade status

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç  
wrote 


Hi,

I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2


When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...] (remaining: 8h)

It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?

Thanks,
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Ashley Merrick
What does 

ceph orch upgrade status 

show?



 On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç  
wrote 


Hi, 
 
I have 15.2.1 installed on all machines. On the primary machine I executed the 
ceph upgrade command:
 
$ ceph orch upgrade start --ceph-version 15.2.2 
 
 
When I check ceph -s I see this: 
 
  progress: 
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m) 
      [=...] (remaining: 8h) 
 
It says 8 hours remaining, but it has already been running for 3 hours and no 
upgrade has been processed; it gets stuck at this point.

Is there any way to know why it is stuck?
 
Thanks, 
Gencer.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io