Re: [ceph-users] ceph-mds suicide on upgrade

2018-03-12 Thread Reed Dier
Good eye, and thanks, Dietmar.

Glad to know this isn’t a standard issue; hopefully anything like it in the future gets caught and/or makes it into the release notes.

Thanks,

Reed

> On Mar 12, 2018, at 12:55 PM, Dietmar Rieder wrote:
> 
> Hi,
> 
> See:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/025092.html
> 
> Might be of interest.
> 
> Dietmar



Re: [ceph-users] ceph-mds suicide on upgrade

2018-03-12 Thread Dietmar Rieder
Hi,

See: 
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/025092.html

Might be of interest.

Dietmar


-- 
___
D i e t m a r R i e d e r, Mag.Dr.
Innsbruck Medical University
Biocenter - Division for Bioinformatics
Innrain 80, 6020 Innsbruck
Phone: +43 512 9003 71402
Fax: +43 512 9003 73100
Email: dietmar.rie...@i-med.ac.at
Web: http://www.icbi.at


[ceph-users] ceph-mds suicide on upgrade

2018-03-12 Thread Reed Dier
Figured I would see if anyone has seen this or can see something I am doing 
wrong.

Upgrading all of my daemons from 12.2.2 to 12.2.4.

Followed the documentation: upgraded mons, mgrs, osds, then MDSs, in that order.
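
As a sanity check between steps, something like the following (stock Luminous commands, nothing specific to my cluster) shows which version every daemon class is actually running before moving on to the next one:

  # confirm mons, mgrs and osds are all on 12.2.4 before touching the MDSs
  ceph versions
  ceph -s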

All was fine, until the MDSs.

I have two MDSs in an active:standby configuration. I decided it made sense to upgrade the standby MDS first, so that I could gracefully step down the current active once the standby was on the new version.

However, when I upgraded the standby, it caused the working active to suicide, and the newly upgraded standby immediately took over as active when it restarted, which didn’t leave me feeling warm and fuzzy about upgrading MDSs in the future.
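
In hindsight, a more defensive sequence might have been to take the standby out of play entirely before upgrading anything. Roughly (just a sketch; assumes a single filesystem named "cephfs" and MDS daemon IDs mds1/mds2, with mds1 active):

  # make sure only one rank is active (already the case for active:standby)
  ceph fs set cephfs max_mds 1

  # stop the standby so an upgraded daemon can't grab rank 0 mid-upgrade
  systemctl stop ceph-mds@mds2      # on the standby host

  # upgrade packages on the active host, then restart the lone MDS
  systemctl restart ceph-mds@mds1   # on the active host
  ceph mds stat                     # wait for up:active again

  # upgrade and start the standby last
  systemctl start ceph-mds@mds2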

Attaching the log entries that appear to show the culprit.

> 2018-03-12 13:07:38.981339 7ff0cdc40700  0 mds.0 handle_mds_map mdsmap 
> compatset compat={},rocompat={},incompat={1=base v0.20,2=client writeable 
> ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds 
> uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file 
> layout v2} not writeable with daemon features 
> compat={},rocompat={},incompat={1=base v0.20,2=client writeable 
> ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds 
> uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline 
> data,8=file layout v2}, killing myself
> 2018-03-12 13:07:38.981353 7ff0cdc40700  1 mds.0 suicide.  wanted state 
> up:active
> 2018-03-12 13:07:40.000753 7ff0cdc40700  1 mds.0.119543 shutdown: shutting 
> down rank 0
> 2018-03-12 13:08:27.325667 7f32cc992200  0 set uid:gid to 64045:64045 
> (ceph:ceph)
> 2018-03-12 13:08:27.325687 7f32cc992200  0 ceph version 12.2.4 
> (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process 
> (unknown), pid 66854
> 2018-03-12 13:08:27.326795 7f32cc992200  0 pidfile_write: ignore empty 
> --pid-file
> 2018-03-12 13:08:32.350266 7f32c6440700  1 mds.0 handle_mds_map standby
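
For anyone who wants to compare the two feature sets on their own cluster, the mdsmap side is visible in a plain fs dump (both commands exist in Luminous; the grep just trims the output):

  # compat flags currently recorded in the mdsmap
  ceph fs dump | grep compat

  # and what the running daemons report
  ceph versions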

Hopefully there is some config issue with my MDS map, or something along those lines, that is an easy fix and will prevent something like this in the future.

Thanks,

Reed
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com