Hello,

are you sure about your Ceph version? The output below states "0.94.1".

We ran into a similar issue with Ceph 0.94.3 and can confirm that we
no longer see it with Ceph 0.94.5.

If you upgraded during operation, did you migrate all of your VMs at
least once to make sure they are using the most recent librbd?
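One quick way to check, as a sketch only (the QEMU process name and library paths may differ on your systems), is to look at which librbd a running guest process actually has mapped:

  # find a guest's QEMU process, then list the librbd it has loaded
  pidof qemu-system-x86_64
  grep librbd /proc/<pid>/maps

If the version mapped there is older than what is installed on disk, that guest is still running against the old library and needs to be migrated or restarted.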

Cheers,
Torsten

-- 
Torsten Urbas
Mobile: +49 (170) 77 38 251

On 28 June 2016 at 11:00:21, 한승진 (yongi...@gmail.com) wrote:

Hi, Cephers.

Our Ceph version is Hammer (0.94.7).

I deployed Ceph with OpenStack, and all instances use block storage as
their local volume.

After increasing the PG count from 256 to 768, many VMs shut down.
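For reference, the PG count was raised with the standard pool commands, roughly as below (pool name replaced with a placeholder):

  ceph osd pool set <pool-name> pg_num 768
  ceph osd pool set <pool-name> pgp_num 768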

That was a very strange case for me.

Below is the VM's libvirt error log.

osd/osd_types.cc: In function 'bool pg_t::is_split(unsigned int, unsigned
int, std::set<pg_t>*) const' thread 7fc4c01b9700 time 2016-06-28
14:17:35.004480
osd/osd_types.cc: 459: FAILED assert(m_seed < old_pg_num)
 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
 1: (()+0x15374b) [0x7fc4d1ca674b]
 2: (()+0x222f01) [0x7fc4d1d75f01]
 3: (()+0x222fdd) [0x7fc4d1d75fdd]
 4: (()+0xc5339) [0x7fc4d1c18339]
 5: (()+0xdc3e5) [0x7fc4d1c2f3e5]
 6: (()+0xdcc4a) [0x7fc4d1c2fc4a]
 7: (()+0xde1b2) [0x7fc4d1c311b2]
 8: (()+0xe3fbf) [0x7fc4d1c36fbf]
 9: (()+0x2c3b99) [0x7fc4d1e16b99]
 10: (()+0x2f160d) [0x7fc4d1e4460d]
 11: (()+0x80a5) [0x7fc4cd7aa0a5]
 12: (clone()+0x6d) [0x7fc4cd4d7cfd]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'
2016-06-28 05:17:36.557+0000: shutting down


Could anybody explain this?

Thank you.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com