Can you please open a ticket at tracker.ceph.com with this backtrace,
and some info about what workload and system config led to this?  Are you
using erasure coding and/or tiering?

Thanks!
sage


On Thu, 15 May 2014, Sergey Korolev wrote:

> Hello, I am having some trouble with an OSD. It crashes with the following error:
> 
> 
> osd/osd_types.h: 2868: FAILED assert(rwstate.empty())
> 
>  ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
>  1: (SharedPtrRegistry<hobject_t, ObjectContext>::OnRemoval::operator()(ObjectContext*)+0x2f5) [0x8dfee5]
>  2: (std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count()+0x49) [0x66e839]
>  3: (ReplicatedPG::OpContext::~OpContext()+0xff) [0x8de93f]
>  4: (ReplicatedPG::RepGather::put()+0x37) [0x8deda7]
>  5: (C_OSD_RepopCommit::~C_OSD_RepopCommit()+0x24) [0x8defc4]
>  6: (ReplicatedBackend::sub_op_modify_reply(std::tr1::shared_ptr<OpRequest>)+0x379) [0x967de9]
>  7: (ReplicatedBackend::handle_message(std::tr1::shared_ptr<OpRequest>)+0x3be) [0x9695ee]
>  8: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x250) [0x847000]
>  9: (OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x37c) [0x60e96c]
>  10: (OSD::OpWQ::_process(boost::intrusive_ptr<PG>, ThreadPool::TPHandle&)+0x63d) [0x64068d]
>  11: (ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&)+0xae) [0x6768fe]
>  12: (ThreadPool::worker(ThreadPool::WorkThread*)+0x551) [0xab5721]
>  13: (ThreadPool::WorkThread::entry()+0x10) [0xab8760]
>  14: (()+0x79d1) [0x7f701bb3a9d1]
>  15: (clone()+0x6d) [0x7f701a875b6d]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
> 
> 
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com