Hey,

I opened an issue at tracker.ceph.com -> http://tracker.ceph.com/issues/16266

-----Original Message-----
From: Brad Hubbard <bhubb...@redhat.com>
To: Mathias Buresch <mathias.bure...@de.clara.net>
Cc: jsp...@redhat.com <jsp...@redhat.com>, ceph-us...@ceph.com <ceph-use...@ceph.com>
Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
Date: Thu, 2 Jun 2016 09:50:20 +1000

Could this be the call in RotatingKeyRing::get_secret() failing?

Mathias, I'd suggest opening a tracker for this with the information in
your last post and let us know the number here.
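If that lookup fails and the return code is ignored, the CryptoKey handed
to encrypt() could be left without a backing cipher handler, which would
match the null-pointer crash in frame #0 (encrypt() at
auth/cephx/../Crypto.h:110) while frames #1-#3 are just building the cephx
challenge. A rough sketch of that failure pattern, with hypothetical names
rather than the actual tree:

// Sketch only -- hypothetical names, not the actual Ceph sources. It
// shows how a failed secret lookup can leave a key with no cipher
// handler, so the next encrypt() dereferences a null pointer.
#include <string>

struct CipherHandler {                    // stand-in for the real handler
    int encrypt(const std::string &in, std::string *out) const {
        *out = in;                        // placeholder "encryption"
        return 0;
    }
};

struct Key {                              // stand-in for CryptoKey
    CipherHandler *handler = nullptr;     // stays null if lookup failed

    int encrypt(const std::string &in, std::string *out) const {
        return handler->encrypt(in, out); // SIGSEGV if handler == nullptr
    }
};

int main() {
    Key key;                    // imagine get_secret() returned an error
                                // and nobody checked before encrypting
    std::string out;
    return key.encrypt("challenge-blob", &out);  // crashes here
}

If that's what's happening, the interesting question is why the secret
lookup fails on this cluster while the same binaries work elsewhere.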
Cheers,
Brad

On Wed, Jun 1, 2016 at 3:15 PM, Mathias Buresch
<mathias.bure...@de.clara.net> wrote:
> Hi,
> 
> Here is the output including --debug-auth=20. Does this help?
> 
> (gdb) run /usr/bin/ceph status --debug-monc=20 --debug-ms=20 --debug-rados=20 --debug-auth=20
> Starting program: /usr/bin/python /usr/bin/ceph status --debug-monc=20 --debug-ms=20 --debug-rados=20 --debug-auth=20
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-
> gnu/libthread_db.so.1".
> [New Thread 0x7ffff10f5700 (LWP 2210)]
> [New Thread 0x7ffff08f4700 (LWP 2211)]
> [Thread 0x7ffff10f5700 (LWP 2210) exited]
> [New Thread 0x7ffff10f5700 (LWP 2212)]
> [Thread 0x7ffff10f5700 (LWP 2212) exited]
> [New Thread 0x7ffff10f5700 (LWP 2213)]
> [Thread 0x7ffff10f5700 (LWP 2213) exited]
> [New Thread 0x7ffff10f5700 (LWP 2233)]
> [Thread 0x7ffff10f5700 (LWP 2233) exited]
> [New Thread 0x7ffff10f5700 (LWP 2236)]
> [Thread 0x7ffff10f5700 (LWP 2236) exited]
> [New Thread 0x7ffff10f5700 (LWP 2237)]
> [Thread 0x7ffff10f5700 (LWP 2237) exited]
> [New Thread 0x7ffff10f5700 (LWP 2238)]
> [New Thread 0x7fffeb885700 (LWP 2240)]
> 2016-06-01 07:12:55.656336 7ffff10f5700 10 monclient(hunting):
> build_initial_monmap
> 2016-06-01 07:12:55.656440 7ffff10f5700  1 librados: starting msgr at
> :/0
> 2016-06-01 07:12:55.656446 7ffff10f5700  1 librados: starting
> objecter
> [New Thread 0x7fffeb084700 (LWP 2241)]
> 2016-06-01 07:12:55.657552 7ffff10f5700 10 -- :/0 ready :/0
> [New Thread 0x7fffea883700 (LWP 2242)]
> [New Thread 0x7fffea082700 (LWP 2245)]
> 2016-06-01 07:12:55.659548 7ffff10f5700  1 -- :/0 messenger.start
> [New Thread 0x7fffe9881700 (LWP 2248)]
> 2016-06-01 07:12:55.660530 7ffff10f5700  1 librados: setting wanted
> keys
> 2016-06-01 07:12:55.660539 7ffff10f5700  1 librados: calling
> monclient
> init
> 2016-06-01 07:12:55.660540 7ffff10f5700 10 monclient(hunting): init
> 2016-06-01 07:12:55.660550 7ffff10f5700  5 adding auth protocol:
> cephx
> 2016-06-01 07:12:55.660552 7ffff10f5700 10 monclient(hunting):
> auth_supported 2 method cephx
> 2016-06-01 07:12:55.660532 7fffe9881700 10 -- :/1337675866
> reaper_entry
> start
> 2016-06-01 07:12:55.660570 7fffe9881700 10 -- :/1337675866 reaper
> 2016-06-01 07:12:55.660572 7fffe9881700 10 -- :/1337675866 reaper
> done
> 2016-06-01 07:12:55.660733 7ffff10f5700  2 auth: KeyRing::load:
> loaded
> key file /etc/ceph/ceph.client.admin.keyring
> [New Thread 0x7fffe9080700 (LWP 2251)]
> [New Thread 0x7fffe887f700 (LWP 2252)]
> 2016-06-01 07:12:55.662754 7ffff10f5700 10 monclient(hunting):
> _reopen_session rank -1 name 
> 2016-06-01 07:12:55.662764 7ffff10f5700 10 -- :/1337675866
> connect_rank
> to 62.176.141.181:6789/0, creating pipe and registering
> [New Thread 0x7fffe3fff700 (LWP 2255)]
> 2016-06-01 07:12:55.663789 7ffff10f5700 10 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).register_pipe
> 2016-06-01 07:12:55.663819 7ffff10f5700 10 -- :/1337675866
> get_connection mon.0 62.176.141.181:6789/0 new 0x7fffec064010
> 2016-06-01 07:12:55.663790 7fffe3fff700 10 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).writer: state = connecting policy.server=0
> 2016-06-01 07:12:55.663830 7fffe3fff700 10 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connect 0
> 2016-06-01 07:12:55.663841 7ffff10f5700 10 monclient(hunting): picked
> mon.pix01 con 0x7fffec05aa30 addr 62.176.141.181:6789/0
> 2016-06-01 07:12:55.663847 7ffff10f5700 20 -- :/1337675866
> send_keepalive con 0x7fffec05aa30, have pipe.
> 2016-06-01 07:12:55.663850 7fffe3fff700 10 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connecting to 62.176.141.181:6789/0
> 2016-06-01 07:12:55.663863 7ffff10f5700 10 monclient(hunting):
> _send_mon_message to mon.pix01 at 62.176.141.181:6789/0
> 2016-06-01 07:12:55.663866 7ffff10f5700  1 -- :/1337675866 -->
> 62.176.141.181:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
> 0x7fffec060450 con 0x7fffec05aa30
> 2016-06-01 07:12:55.663870 7ffff10f5700 20 -- :/1337675866
> submit_message auth(proto 0 30 bytes epoch 0) v1 remote,
> 62.176.141.181:6789/0, have pipe.
> 2016-06-01 07:12:55.663874 7ffff10f5700 10 monclient(hunting):
> renew_subs
> 2016-06-01 07:12:55.663877 7ffff10f5700 10 monclient(hunting):
> authenticate will time out at 2016-06-01 07:17:55.663876
> 2016-06-01 07:12:55.664115 7fffe3fff700 20 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0
> l=1 c=0x7fffec05aa30).connect read peer addr 62.176.141.181:6789/0 on
> socket 3
> 2016-06-01 07:12:55.664135 7fffe3fff700 20 -- :/1337675866 >>
> 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0
> l=1 c=0x7fffec05aa30).connect peer addr for me is
> 62.176.141.181:41128/0
> 2016-06-01 07:12:55.664143 7fffe3fff700  1 --
> 62.176.141.181:0/1337675866 learned my addr
> 62.176.141.181:0/1337675866
> 2016-06-01 07:12:55.664177 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connect sent my addr 62.176.141.181:0/1337675866
> 2016-06-01 07:12:55.664193 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connect sending gseq=1 cseq=0 proto=15
> 2016-06-01 07:12:55.664207 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connect wrote (self +) cseq, waiting for reply
> 2016-06-01 07:12:55.664268 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=1 pgs=0 cs=0 l=1
> c=0x7fffec05aa30).connect got reply tag 1 connect_seq 1 global_seq
> 339278 proto 15 flags 1 features 55169095435288575
> 2016-06-01 07:12:55.664285 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).connect success 1, lossy = 1, features
> 55169095435288575
> 2016-06-01 07:12:55.664301 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).connect starting reader
> [New Thread 0x7fffe3efe700 (LWP 2258)]
> 2016-06-01 07:12:55.665288 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665307 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).write_keepalive2 14 2016-06-01 07:12:55.665306
> 2016-06-01 07:12:55.665310 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader reading tag...
> 2016-06-01 07:12:55.665347 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer encoding 1 features 55169095435288575
> 0x7fffec060450 auth(proto 0 30 bytes epoch 0) v1
> 2016-06-01 07:12:55.665376 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer no session security
> 2016-06-01 07:12:55.665383 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer sending 1 0x7fffec060450
> 2016-06-01 07:12:55.665384 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got KEEPALIVE_ACK
> 2016-06-01 07:12:55.665394 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader reading tag...
> 2016-06-01 07:12:55.665401 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665410 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer sleeping
> 2016-06-01 07:12:55.665706 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got ACK
> 2016-06-01 07:12:55.665746 7fffe3efe700 15 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got ack seq 1
> 2016-06-01 07:12:55.665752 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader reading tag...
> 2016-06-01 07:12:55.665764 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got MSG
> 2016-06-01 07:12:55.665769 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got envelope type=4 src mon.0 front=340
> data=0
> off 0
> 2016-06-01 07:12:55.665787 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader wants 340 from dispatch throttler
> 0/104857600
> 2016-06-01 07:12:55.665797 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got front 340
> 2016-06-01 07:12:55.665801 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).aborted = 0
> 2016-06-01 07:12:55.665804 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got 340 + 0 + 0 byte message
> 2016-06-01 07:12:55.665825 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).No session security set
> 2016-06-01 07:12:55.665834 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got message 1 0x7fffd0001cb0 mon_map magic:
> 0
> v1
> 2016-06-01 07:12:55.665845 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 queue 0x7fffd0001cb0 prio 196
> 2016-06-01 07:12:55.665853 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader reading tag...
> 2016-06-01 07:12:55.665862 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665874 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got MSG
> 2016-06-01 07:12:55.665879 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got envelope type=18 src mon.0 front=33
> data=0
> off 0
> 2016-06-01 07:12:55.665879 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).write_ack 1
> 2016-06-01 07:12:55.665883 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader wants 33 from dispatch throttler
> 340/104857600
> 2016-06-01 07:12:55.665888 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got front 33
> 2016-06-01 07:12:55.665891 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).aborted = 0
> 2016-06-01 07:12:55.665891 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665895 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got 33 + 0 + 0 byte message
> 2016-06-01 07:12:55.665896 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer sleeping
> 2016-06-01 07:12:55.665916 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).No session security set
> 2016-06-01 07:12:55.665922 7fffe3efe700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader got message 2 0x7fffd0002f20
> auth_reply(proto
> 2 0 (0) Success) v1
> 2016-06-01 07:12:55.665944 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 queue 0x7fffd0002f20 prio 196
> 2016-06-01 07:12:55.665950 7fffe3efe700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).reader reading tag...
> 2016-06-01 07:12:55.665891 7fffea883700  1 --
> 62.176.141.181:0/1337675866 <== mon.0 62.176.141.181:6789/0 1 ====
> mon_map magic: 0 v1 ==== 340+0+0 (3213884171 0 0) 0x7fffd0001cb0 con
> 0x7fffec05aa30
> 2016-06-01 07:12:55.665953 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665960 7fffea883700 10 monclient(hunting):
> handle_monmap mon_map magic: 0 v1
> 2016-06-01 07:12:55.665960 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).write_ack 2
> 2016-06-01 07:12:55.665966 7fffe3fff700 10 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer: state = open policy.server=0
> 2016-06-01 07:12:55.665971 7fffea883700 10 monclient(hunting):  got
> monmap 1, mon.pix01 is now rank 0
> 2016-06-01 07:12:55.665970 7fffe3fff700 20 --
> 62.176.141.181:0/1337675866 >> 62.176.141.181:6789/0
> pipe(0x7fffec064010 sd=3 :41128 s=2 pgs=339278 cs=1 l=1
> c=0x7fffec05aa30).writer sleeping
> 2016-06-01 07:12:55.665972 7fffea883700 10 monclient(hunting): dump:
> epoch 1
> fsid 28af67eb-4060-4770-ac1d-d2be493877af
> last_changed 2014-11-12 15:44:27.182395
> created 2014-11-12 15:44:27.182395
> 0: 62.176.141.181:6789/0 mon.pix01
> 1: 62.176.141.182:6789/0 mon.pix02
> 
> 2016-06-01 07:12:55.665988 7fffea883700 10 --
> 62.176.141.181:0/1337675866 dispatch_throttle_release 340 to dispatch
> throttler 373/104857600
> 2016-06-01 07:12:55.665992 7fffea883700 20 --
> 62.176.141.181:0/1337675866 done calling dispatch on 0x7fffd0001cb0
> 2016-06-01 07:12:55.665997 7fffea883700  1 --
> 62.176.141.181:0/1337675866 <== mon.0 62.176.141.181:6789/0 2 ====
> auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3918039325 0 0)
> 0x7fffd0002f20 con 0x7fffec05aa30
> 2016-06-01 07:12:55.666015 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service mon
> 2016-06-01 07:12:55.666016 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service osd
> 2016-06-01 07:12:55.666017 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service auth
> 2016-06-01 07:12:55.666018 7fffea883700 10 cephx: validate_tickets
> want
> 37 have 0 need 37
> 2016-06-01 07:12:55.666020 7fffea883700 10 monclient(hunting): my
> global_id is 3511432
> 2016-06-01 07:12:55.666022 7fffea883700 10 cephx client:
> handle_response ret = 0
> 2016-06-01 07:12:55.666023 7fffea883700 10 cephx client:  got initial
> server challenge 3112857369079243605
> 2016-06-01 07:12:55.666025 7fffea883700 10 cephx client:
> validate_tickets: want=37 need=37 have=0
> 2016-06-01 07:12:55.666026 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service mon
> 2016-06-01 07:12:55.666027 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service osd
> 2016-06-01 07:12:55.666030 7fffea883700 10 cephx: set_have_need_key
> no
> handler for service auth
> 2016-06-01 07:12:55.666030 7fffea883700 10 cephx: validate_tickets
> want
> 37 have 0 need 37
> 2016-06-01 07:12:55.666031 7fffea883700 10 cephx client: want=37
> need=37 have=0
> 2016-06-01 07:12:55.666034 7fffea883700 10 cephx client:
> build_request
> 
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fffea883700 (LWP 2242)]
> 0x00007ffff3141a57 in encrypt (cct=<optimized out>,
> error=0x7fffea882280, out=..., in=..., this=0x7fffea882470)
>     at auth/cephx/../Crypto.h:110
> 110     auth/cephx/../Crypto.h: No such file or directory.
> (gdb) bt
> #0  0x00007ffff3141a57 in encrypt (cct=<optimized out>,
>     error=0x7fffea882280, out=..., in=..., this=0x7fffea882470)
>     at auth/cephx/../Crypto.h:110
> #1  encode_encrypt_enc_bl<CephXChallengeBlob> (cct=<optimized out>,
>     error="", out=..., key=..., t=<synthetic pointer>)
>     at auth/cephx/CephxProtocol.h:464
> #2  encode_encrypt<CephXChallengeBlob> (cct=<optimized out>, error="",
>     out=..., key=..., t=<synthetic pointer>)
>     at auth/cephx/CephxProtocol.h:489
> #3  cephx_calc_client_server_challenge (cct=<optimized out>, secret=...,
>     server_challenge=3112857369079243605,
>     client_challenge=12899511428024786235, key=key@entry=0x7fffea8824a8,
>     ret="") at auth/cephx/CephxProtocol.cc:36
> #4  0x00007ffff313aff4 in CephxClientHandler::build_request
>     (this=0x7fffd4001520, bl=...) at auth/cephx/CephxClientHandler.cc:53
> #5  0x00007ffff2fe4a79 in MonClient::handle_auth
>     (this=this@entry=0x7fffec006b70, m=m@entry=0x7fffd0002f20)
>     at mon/MonClient.cc:510
> #6  0x00007ffff2fe6507 in MonClient::ms_dispatch (this=0x7fffec006b70,
>     m=0x7fffd0002f20) at mon/MonClient.cc:277
> #7  0x00007ffff30d5dc9 in ms_deliver_dispatch (m=0x7fffd0002f20,
>     this=0x7fffec055410) at ./msg/Messenger.h:582
> #8  DispatchQueue::entry (this=0x7fffec0555d8)
>     at msg/simple/DispatchQueue.cc:185
> #9  0x00007ffff31023bd in DispatchQueue::DispatchThread::entry
>     (this=<optimized out>) at msg/simple/DispatchQueue.h:103
> #10 0x00007ffff7bc4182 in start_thread ()
>     from /lib/x86_64-linux-gnu/libpthread.so.0
> #11 0x00007ffff78f147d in clone () from /lib/x86_64-linux-gnu/libc.so.6
> 
> 
> Best regards
> Mathias
> 
> -----Original Message-----
> From: Brad Hubbard <bhubb...@redhat.com>
> To: jsp...@redhat.com
> Cc: ceph-us...@ceph.com, Mathias Buresch <mathias.bure...@de.clara.net>
> Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
> Date: Wed, 25 May 2016 19:22:03 -0400
> 
> Hi John,
> 
> This looks a lot like http://tracker.ceph.com/issues/12417 which is, of
> course, fixed.
> 
> Worth gathering debug-auth=20? Maybe on the MON end as well?
> 
> Cheers,
> Brad
> 
> 
> ----- Original Message -----
> >
> > From: "Mathias Buresch" <mathias.bure...@de.clara.net>
> > To: jsp...@redhat.com
> > Cc: ceph-us...@ceph.com
> > Sent: Thursday, 26 May, 2016 12:57:47 AM
> > Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
> >
> > There wasn't a ceph-debuginfo package available (maybe because I am
> > running Ubuntu). I have installed these:
> >
> >  * ceph-dbg
> >  * librados2-dbg
> >
> > There are also ceph-mds-dbg, ceph-fs-common-dbg, and so on.
> >
> > But now there is more information in the gdb output :)
> >
> > (gdb) run /usr/bin/ceph status --debug-monc=20 --debug-ms=20 --debug-rados=20
> > Starting program: /usr/bin/python /usr/bin/ceph status --debug-monc=20 --debug-ms=20 --debug-rados=20
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib/x86_64-linux-
> > gnu/libthread_db.so.1".
> > [New Thread 0x7ffff10f5700 (LWP 26739)]
> > [New Thread 0x7ffff08f4700 (LWP 26740)]
> > [Thread 0x7ffff10f5700 (LWP 26739) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26741)]
> > [Thread 0x7ffff10f5700 (LWP 26741) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26742)]
> > [Thread 0x7ffff10f5700 (LWP 26742) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26743)]
> > [Thread 0x7ffff10f5700 (LWP 26743) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26744)]
> > [Thread 0x7ffff10f5700 (LWP 26744) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26745)]
> > [Thread 0x7ffff10f5700 (LWP 26745) exited]
> > [New Thread 0x7ffff10f5700 (LWP 26746)]
> > [New Thread 0x7fffeb885700 (LWP 26747)]
> > 2016-05-25 16:55:30.929131 7ffff10f5700 10 monclient(hunting):
> > build_initial_monmap
> > 2016-05-25 16:55:30.929221 7ffff10f5700  1 librados: starting msgr at
> > :/0
> > 2016-05-25 16:55:30.929226 7ffff10f5700  1 librados: starting
> > objecter
> > [New Thread 0x7fffeb084700 (LWP 26748)]
> > 2016-05-25 16:55:30.930288 7ffff10f5700 10 -- :/0 ready :/0
> > [New Thread 0x7fffea883700 (LWP 26749)]
> > [New Thread 0x7fffea082700 (LWP 26750)]
> > 2016-05-25 16:55:30.932251 7ffff10f5700  1 -- :/0 messenger.start
> > [New Thread 0x7fffe9881700 (LWP 26751)]
> > 2016-05-25 16:55:30.933277 7ffff10f5700  1 librados: setting wanted
> > keys
> > 2016-05-25 16:55:30.933287 7ffff10f5700  1 librados: calling
> > monclient
> > init
> > 2016-05-25 16:55:30.933289 7ffff10f5700 10 monclient(hunting): init
> > 2016-05-25 16:55:30.933279 7fffe9881700 10 -- :/3663984981
> > reaper_entry
> > start
> > 2016-05-25 16:55:30.933300 7ffff10f5700 10 monclient(hunting):
> > auth_supported 2 method cephx
> > 2016-05-25 16:55:30.933303 7fffe9881700 10 -- :/3663984981 reaper
> > 2016-05-25 16:55:30.933305 7fffe9881700 10 -- :/3663984981 reaper
> > done
> > [New Thread 0x7fffe9080700 (LWP 26752)]
> > [New Thread 0x7fffe887f700 (LWP 26753)]
> > 2016-05-25 16:55:30.935485 7ffff10f5700 10 monclient(hunting):
> > _reopen_session rank -1 name
> > 2016-05-25 16:55:30.935495 7ffff10f5700 10 -- :/3663984981
> > connect_rank
> > to 62.176.141.181:6789/0, creating pipe and registering
> > [New Thread 0x7fffe3fff700 (LWP 26754)]
> > 2016-05-25 16:55:30.936556 7ffff10f5700 10 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).register_pipe
> > 2016-05-25 16:55:30.936573 7ffff10f5700 10 -- :/3663984981
> > get_connection mon.0 62.176.141.181:6789/0 new 0x7fffec064010
> > 2016-05-25 16:55:30.936557 7fffe3fff700 10 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).writer: state = connecting policy.server=0
> > 2016-05-25 16:55:30.936583 7fffe3fff700 10 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=-1 :0 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connect 0
> > 2016-05-25 16:55:30.936594 7ffff10f5700 10 monclient(hunting): picked
> > mon.pix01 con 0x7fffec05aa30 addr 62.176.141.181:6789/0
> > 2016-05-25 16:55:30.936600 7ffff10f5700 20 -- :/3663984981
> > send_keepalive con 0x7fffec05aa30, have pipe.
> > 2016-05-25 16:55:30.936603 7fffe3fff700 10 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :0 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connecting to 62.176.141.181:6789/0
> > 2016-05-25 16:55:30.936615 7ffff10f5700 10 monclient(hunting):
> > _send_mon_message to mon.pix01 at 62.176.141.181:6789/0
> > 2016-05-25 16:55:30.936618 7ffff10f5700  1 -- :/3663984981 -->
> > 62.176.141.181:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
> > 0x7fffec060450 con 0x7fffec05aa30
> > 2016-05-25 16:55:30.936623 7ffff10f5700 20 -- :/3663984981
> > submit_message auth(proto 0 30 bytes epoch 0) v1 remote,
> > 62.176.141.181:6789/0, have pipe.
> > 2016-05-25 16:55:30.936626 7ffff10f5700 10 monclient(hunting):
> > renew_subs
> > 2016-05-25 16:55:30.936630 7ffff10f5700 10 monclient(hunting):
> > authenticate will time out at 2016-05-25 17:00:30.936629
> > 2016-05-25 16:55:30.936867 7fffe3fff700 20 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0
> > l=1 c=0x7fffec05aa30).connect read peer addr 62.176.141.181:6789/0 on
> > socket 3
> > 2016-05-25 16:55:30.936882 7fffe3fff700 20 -- :/3663984981 >>
> > 62.176.141.181:6789/0 pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0
> > l=1 c=0x7fffec05aa30).connect peer addr for me is
> > 62.176.141.181:38763/0
> > 2016-05-25 16:55:30.936887 7fffe3fff700  1 --
> > 62.176.141.181:0/3663984981 learned my addr
> > 62.176.141.181:0/3663984981
> > 2016-05-25 16:55:30.936914 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connect sent my addr 62.176.141.181:0/3663984981
> > 2016-05-25 16:55:30.936922 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connect sending gseq=1 cseq=0 proto=15
> > 2016-05-25 16:55:30.936932 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connect wrote (self +) cseq, waiting for reply
> > 2016-05-25 16:55:30.937015 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=1 pgs=0 cs=0 l=1
> > c=0x7fffec05aa30).connect got reply tag 1 connect_seq 1 global_seq
> > 327867 proto 15 flags 1 features 55169095435288575
> > 2016-05-25 16:55:30.937035 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).connect success 1, lossy = 1, features
> > 55169095435288575
> > 2016-05-25 16:55:30.937046 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).connect starting reader
> > [New Thread 0x7fffe3efe700 (LWP 26757)]
> > 2016-05-25 16:55:30.938118 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer: state = open policy.server=0
> > 2016-05-25 16:55:30.938127 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).write_keepalive2 14 2016-05-25 16:55:30.938126
> > 2016-05-25 16:55:30.938161 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer encoding 1 features 55169095435288575
> > 0x7fffec060450 auth(proto 0 30 bytes epoch 0) v1
> > 2016-05-25 16:55:30.938185 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer no session security
> > 2016-05-25 16:55:30.938197 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer sending 1 0x7fffec060450
> > 2016-05-25 16:55:30.938216 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer: state = open policy.server=0
> > 2016-05-25 16:55:30.938222 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer sleeping
> > 2016-05-25 16:55:30.938201 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader reading tag...
> > 2016-05-25 16:55:30.938238 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got KEEPALIVE_ACK
> > 2016-05-25 16:55:30.938243 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader reading tag...
> > 2016-05-25 16:55:30.938368 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got ACK
> > 2016-05-25 16:55:30.938378 7fffe3efe700 15 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got ack seq 1
> > 2016-05-25 16:55:30.938382 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader reading tag...
> > 2016-05-25 16:55:30.938388 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got MSG
> > 2016-05-25 16:55:30.938392 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got envelope type=4 src mon.0 front=340
> > data=0
> > off 0
> > 2016-05-25 16:55:30.938399 7fffe3efe700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader wants 340 from dispatch throttler
> > 0/104857600
> > 2016-05-25 16:55:30.938406 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got front 340
> > 2016-05-25 16:55:30.938409 7fffe3efe700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).aborted = 0
> > 2016-05-25 16:55:30.938413 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got 340 + 0 + 0 byte message
> > 2016-05-25 16:55:30.938427 7fffe3efe700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).No session security set
> > 2016-05-25 16:55:30.938434 7fffe3efe700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got message 1 0x7fffd0001cb0 mon_map magic: 0
> > v1
> > 2016-05-25 16:55:30.938442 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 queue 0x7fffd0001cb0 prio 196
> > 2016-05-25 16:55:30.938450 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader reading tag...
> > 2016-05-25 16:55:30.938453 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer: state = open policy.server=0
> > 2016-05-25 16:55:30.938464 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).write_ack 1
> > 2016-05-25 16:55:30.938467 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got MSG
> > 2016-05-25 16:55:30.938471 7fffe3fff700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer: state = open policy.server=0
> > 2016-05-25 16:55:30.938472 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got envelope type=18 src mon.0 front=33
> > data=0
> > off 0
> > 2016-05-25 16:55:30.938475 7fffe3fff700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).writer sleeping
> > 2016-05-25 16:55:30.938476 7fffe3efe700 10 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader wants 33 from dispatch throttler
> > 340/104857600
> > 2016-05-25 16:55:30.938456 7fffea883700  1 --
> > 62.176.141.181:0/3663984981 <== mon.0 62.176.141.181:6789/0 1 ====
> > mon_map magic: 0 v1 ==== 340+0+0 (3213884171 0 0) 0x7fffd0001cb0 con
> > 0x7fffec05aa30
> > 2016-05-25 16:55:30.938481 7fffe3efe700 20 --
> > 62.176.141.181:0/3663984981 >> 62.176.141.181:6789/0
> > pipe(0x7fffec064010 sd=3 :38763 s=2 pgs=327867 cs=1 l=1
> > c=0x7fffec05aa30).reader got front 33
> > 2016-05-25 16:55:30.938484 7fffea883700 10 monclient(hunting):
> > handle_monmap mon_map magic: 0 v1
> > 2016-05-25 16:55:30.938485 7fffe3efe700 10
> > Program received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 0x7fffea883700 (LWP 26749)]
> > 0x00007ffff3141a57 in encrypt (cct=<optimized out>,
> > error=0x7fffea882280, out=..., in=..., this=0x7fffea882470) at
> > auth/cephx/../Crypto.h:110
> > 110   auth/cephx/../Crypto.h: No such file or directory.
> > (gdb) bt
> > #0  0x00007ffff3141a57 in encrypt (cct=<optimized out>,
> >     error=0x7fffea882280, out=..., in=..., this=0x7fffea882470)
> >     at auth/cephx/../Crypto.h:110
> > #1  encode_encrypt_enc_bl<CephXChallengeBlob> (cct=<optimized out>,
> >     error="", out=..., key=..., t=<synthetic pointer>)
> >     at auth/cephx/CephxProtocol.h:464
> > #2  encode_encrypt<CephXChallengeBlob> (cct=<optimized out>, error="",
> >     out=..., key=..., t=<synthetic pointer>)
> >     at auth/cephx/CephxProtocol.h:489
> > #3  cephx_calc_client_server_challenge (cct=<optimized out>,
> >     secret=..., server_challenge=9622349603176979543,
> >     client_challenge=7732813711656640623, key=key@entry=0x7fffea8824a8,
> >     ret="") at auth/cephx/CephxProtocol.cc:36
> > #4  0x00007ffff313aff4 in CephxClientHandler::build_request
> >     (this=0x7fffd4001520, bl=...) at auth/cephx/CephxClientHandler.cc:53
> > #5  0x00007ffff2fe4a79 in MonClient::handle_auth
> >     (this=this@entry=0x7fffec006b70, m=m@entry=0x7fffd0002ee0)
> >     at mon/MonClient.cc:510
> > #6  0x00007ffff2fe6507 in MonClient::ms_dispatch (this=0x7fffec006b70,
> >     m=0x7fffd0002ee0) at mon/MonClient.cc:277
> > #7  0x00007ffff30d5dc9 in ms_deliver_dispatch (m=0x7fffd0002ee0,
> >     this=0x7fffec055410) at ./msg/Messenger.h:582
> > #8  DispatchQueue::entry (this=0x7fffec0555d8)
> >     at msg/simple/DispatchQueue.cc:185
> > #9  0x00007ffff31023bd in DispatchQueue::DispatchThread::entry
> >     (this=<optimized out>) at msg/simple/DispatchQueue.h:103
> > #10 0x00007ffff7bc4182 in start_thread ()
> >     from /lib/x86_64-linux-gnu/libpthread.so.0
> > #11 0x00007ffff78f147d in clone () from /lib/x86_64-linux-gnu/libc.so.6
> >
> > -----Original Message-----
> > From: John Spray <jsp...@redhat.com>
> > To: Mathias Buresch <mathias.bure...@de.clara.net>
> > Cc: ceph-us...@ceph.com <ceph-us...@ceph.com>
> > Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
> > Date: Wed, 25 May 2016 15:41:51 +0100
> >
> > On Wed, May 25, 2016 at 3:00 PM, Mathias Buresch
> > <mathias.bure...@de.clara.net> wrote:
> > >
> > >
> > > I don't know what exactly is segfaulting.
> > >
> > > Here is the output with command line flags and gdb (I can't really
> > > notice errors in that output):
> > >
> > > # ceph -s --debug-monc=20 --debug-ms=20
> > > 2016-05-25 14:51:02.406135 7f188300a700 10 monclient(hunting):
> > > build_initial_monmap
> > > 2016-05-25 14:51:02.406444 7f188300a700 10 -- :/0 ready :/0
> > > 2016-05-25 14:51:02.407214 7f188300a700  1 -- :/0 messenger.start
> > > 2016-05-25 14:51:02.407261 7f188300a700 10 monclient(hunting): init
> > > 2016-05-25 14:51:02.407291 7f188300a700 10 monclient(hunting):
> > > auth_supported 2 method cephx
> > > 2016-05-25 14:51:02.407312 7f187b7fe700 10 -- :/2987460054
> > > reaper_entry
> > > start
> > > 2016-05-25 14:51:02.407380 7f187b7fe700 10 -- :/2987460054 reaper
> > > 2016-05-25 14:51:02.407383 7f187b7fe700 10 -- :/2987460054 reaper
> > > done
> > > 2016-05-25 14:51:02.407638 7f188300a700 10 monclient(hunting):
> > > _reopen_session rank -1 name
> > > 2016-05-25 14:51:02.407646 7f188300a700 10 -- :/2987460054
> > > connect_rank
> > > to 62.176.141.181:6789/0, creating pipe and registering
> > > 2016-05-25 14:51:02.407686 7f188300a700 10 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=-1 :0 s=1 pgs=0 cs=0
> > > l=1
> > > c=0x7f187c05aa50).register_pipe
> > > 2016-05-25 14:51:02.407698 7f188300a700 10 -- :/2987460054
> > > get_connection mon.0 62.176.141.181:6789/0 new 0x7f187c064010
> > > 2016-05-25 14:51:02.407693 7f1879ffb700 10 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=-1 :0 s=1 pgs=0 cs=0
> > > l=1
> > > c=0x7f187c05aa50).writer: state = connecting policy.server=0
> > > 2016-05-25 14:51:02.407723 7f1879ffb700 10 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=-1 :0 s=1 pgs=0 cs=0
> > > l=1
> > > c=0x7f187c05aa50).connect 0
> > > 2016-05-25 14:51:02.407738 7f188300a700 10 monclient(hunting):
> > > picked
> > > mon.pix01 con 0x7f187c05aa50 addr 62.176.141.181:6789/0
> > > 2016-05-25 14:51:02.407745 7f188300a700 20 -- :/2987460054
> > > send_keepalive con 0x7f187c05aa50, have pipe.
> > > 2016-05-25 14:51:02.407744 7f1879ffb700 10 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=3 :0 s=1 pgs=0 cs=0
> > > l=1
> > > c=0x7f187c05aa50).connecting to 62.176.141.181:6789/0
> > > 2016-05-25 14:51:02.407759 7f188300a700 10 monclient(hunting):
> > > _send_mon_message to mon.pix01 at 62.176.141.181:6789/0
> > > 2016-05-25 14:51:02.407763 7f188300a700  1 -- :/2987460054 -->
> > > 62.176.141.181:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
> > > 0x7f187c060380 con 0x7f187c05aa50
> > > 2016-05-25 14:51:02.407768 7f188300a700 20 -- :/2987460054
> > > submit_message auth(proto 0 30 bytes epoch 0) v1 remote,
> > > 62.176.141.181:6789/0, have pipe.
> > > 2016-05-25 14:51:02.407773 7f188300a700 10 monclient(hunting):
> > > renew_subs
> > > 2016-05-25 14:51:02.407777 7f188300a700 10 monclient(hunting):
> > > authenticate will time out at 2016-05-25 14:56:02.407777
> > > 2016-05-25 14:51:02.408128 7f1879ffb700 20 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0
> > > cs=0
> > > l=1 c=0x7f187c05aa50).connect read peer addr 62.176.141.181:6789/0
> > > on socket 3
> > > 2016-05-25 14:51:02.408144 7f1879ffb700 20 -- :/2987460054 >>
> > > 62.176.141.181:6789/0 pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0
> > > cs=0
> > > l=1 c=0x7f187c05aa50).connect peer addr for me is
> > > 62.176.141.181:37964/0
> > > 2016-05-25 14:51:02.408148 7f1879ffb700  1 --
> > > 62.176.141.181:0/2987460054 learned my addr
> > > 62.176.141.181:0/2987460054
> > > 2016-05-25 14:51:02.408188 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0 cs=0 l=1
> > > c=0x7f187c05aa50).connect sent my addr 62.176.141.181:0/2987460054
> > > 2016-05-25 14:51:02.408197 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0 cs=0 l=1
> > > c=0x7f187c05aa50).connect sending gseq=1 cseq=0 proto=15
> > > 2016-05-25 14:51:02.408207 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0 cs=0 l=1
> > > c=0x7f187c05aa50).connect wrote (self +) cseq, waiting for reply
> > > 2016-05-25 14:51:02.408259 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=1 pgs=0 cs=0 l=1
> > > c=0x7f187c05aa50).connect got reply tag 1 connect_seq 1 global_seq
> > > 327710 proto 15 flags 1 features 55169095435288575
> > > 2016-05-25 14:51:02.408269 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).connect success 1, lossy = 1, features
> > > 55169095435288575
> > > 2016-05-25 14:51:02.408280 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).connect starting reader
> > > 2016-05-25 14:51:02.408325 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer: state = open policy.server=0
> > > 2016-05-25 14:51:02.408343 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).write_keepalive2 14 2016-05-25 14:51:02.408342
> > > 2016-05-25 14:51:02.408378 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer encoding 1 features 55169095435288575
> > > 0x7f187c060380 auth(proto 0 30 bytes epoch 0) v1
> > > 2016-05-25 14:51:02.408356 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader reading tag...
> > > 2016-05-25 14:51:02.408406 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer no session security
> > > 2016-05-25 14:51:02.408415 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer sending 1 0x7f187c060380
> > > 2016-05-25 14:51:02.408453 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer: state = open policy.server=0
> > > 2016-05-25 14:51:02.408455 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got KEEPALIVE_ACK
> > > 2016-05-25 14:51:02.408463 7f1879ffb700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer sleeping
> > > 2016-05-25 14:51:02.408482 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader reading tag...
> > > 2016-05-25 14:51:02.408696 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got ACK
> > > 2016-05-25 14:51:02.408713 7f1879efa700 15 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got ack seq 1
> > > 2016-05-25 14:51:02.408721 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader reading tag...
> > > 2016-05-25 14:51:02.408732 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got MSG
> > > 2016-05-25 14:51:02.408739 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got envelope type=4 src mon.0 front=340
> > > data=0
> > > off 0
> > > 2016-05-25 14:51:02.408751 7f1879efa700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader wants 340 from dispatch throttler
> > > 0/104857600
> > > 2016-05-25 14:51:02.408763 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got front 340
> > > 2016-05-25 14:51:02.408770 7f1879efa700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).aborted = 0
> > > 2016-05-25 14:51:02.408776 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got 340 + 0 + 0 byte message
> > > 2016-05-25 14:51:02.408801 7f1879efa700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).No session security set
> > > 2016-05-25 14:51:02.408813 7f1879efa700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader got message 1 0x7f186c001cb0 mon_map
> > > magic:
> > > 0
> > > v1
> > > 2016-05-25 14:51:02.408827 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 queue 0x7f186c001cb0 prio 196
> > > 2016-05-25 14:51:02.408837 7f1879efa700 20 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).reader reading tag...
> > > 2016-05-25 14:51:02.408851 7f1879ffb700 10 --
> > > 62.176.141.181:0/2987460054 >> 62.176.141.181:6789/0
> > > pipe(0x7f187c064010 sd=3 :37964 s=2 pgs=327710 cs=1 l=1
> > > c=0x7f187c05aa50).writer: state = open policy.server=0
> > > Segmentation fault
> > >
> > >
> > > (gdb) run /usr/bin/ceph status
> > > Starting program: /usr/bin/python /usr/bin/ceph status
> > > [Thread debugging using libthread_db enabled]
> > > Using host libthread_db library "/lib/x86_64-linux-
> > > gnu/libthread_db.so.1".
> > > [New Thread 0x7ffff10f5700 (LWP 23401)]
> > > [New Thread 0x7ffff08f4700 (LWP 23402)]
> > > [Thread 0x7ffff10f5700 (LWP 23401) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23403)]
> > > [Thread 0x7ffff10f5700 (LWP 23403) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23404)]
> > > [Thread 0x7ffff10f5700 (LWP 23404) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23405)]
> > > [Thread 0x7ffff10f5700 (LWP 23405) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23406)]
> > > [Thread 0x7ffff10f5700 (LWP 23406) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23407)]
> > > [Thread 0x7ffff10f5700 (LWP 23407) exited]
> > > [New Thread 0x7ffff10f5700 (LWP 23408)]
> > > [New Thread 0x7fffeb885700 (LWP 23409)]
> > > [New Thread 0x7fffeb084700 (LWP 23410)]
> > > [New Thread 0x7fffea883700 (LWP 23411)]
> > > [New Thread 0x7fffea082700 (LWP 23412)]
> > > [New Thread 0x7fffe9881700 (LWP 23413)]
> > > [New Thread 0x7fffe9080700 (LWP 23414)]
> > > [New Thread 0x7fffe887f700 (LWP 23415)]
> > > [New Thread 0x7fffe807e700 (LWP 23416)]
> > > [New Thread 0x7fffe7f7d700 (LWP 23419)]
> > >
> > > Program received signal SIGSEGV, Segmentation fault.
> > > [Switching to Thread 0x7fffea883700 (LWP 23411)]
> > > 0x00007ffff3141a57 in ?? () from /usr/lib/librados.so.2
> > > (gdb) bt
> > > #0  0x00007ffff3141a57 in ?? () from /usr/lib/librados.so.2
> > > #1  0x00007ffff313aff4 in ?? () from /usr/lib/librados.so.2
> > > #2  0x00007ffff2fe4a79 in ?? () from /usr/lib/librados.so.2
> > > #3  0x00007ffff2fe6507 in ?? () from /usr/lib/librados.so.2
> > > #4  0x00007ffff30d5dc9 in ?? () from /usr/lib/librados.so.2
> > > #5  0x00007ffff31023bd in ?? () from /usr/lib/librados.so.2
> > > #6  0x00007ffff7bc4182 in start_thread () from /lib/x86_64-linux-
> > > gnu/libpthread.so.0
> > > #7  0x00007ffff78f147d in clone () from /lib/x86_64-linux-
> > > gnu/libc.so.6
> > >
> > >
> > > Does that help? I can't really see where the error is. :)
> > Hmm, can you try getting that backtrace again after installing the
> > ceph-debuginfo package?  Also add --debug-rados=20 to your command
> > line (you can use all the --debug... options when you're running
> > inside gdb to get the logs and the backtrace in one).
> >
> > John
> >
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: John Spray <jsp...@redhat.com>
> > > To: Mathias Buresch <mathias.bure...@de.clara.net>
> > > Cc: ceph-us...@ceph.com <ceph-us...@ceph.com>
> > > Subject: Re: [ceph-users] Ceph Status - Segmentation Fault
> > > Date: Wed, 25 May 2016 10:16:55 +0100
> > >
> > > On Mon, May 23, 2016 at 12:41 PM, Mathias Buresch
> > > <mathias.bure...@de.clara.net> wrote:
> > > >
> > > >
> > > >
> > > > Please found the logs with higher debug level attached to this
> > > > email.
> > > You've attached the log from your mon, but it's not your mon that's
> > > segfaulting, right?
> > >
> > > You can use normal ceph command line flags to crank up the
> > > verbosity
> > > on the CLI too (--debug-monc=20 --debug-ms=20 spring to mind).
> > >
> > > You can also run the ceph CLI in gdb like this:
> > > gdb python
> > > (gdb) run /usr/bin/ceph status
> > > ... hopefully it crashes and then ...
> > > (gdb) bt
> > >
> > > Cheers,
> > > John
> > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Kind regards
> > > > Mathias
> 


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
