Hi,
I just experienced the same behavior as described in
http://tracker.ceph.com/issues/10399
I know that Ubuntu 15.04 is not supported, but I thought it might be of interest to you.
I have 2 physical hosts: one with lots of disks and one with only the boot/root
disk. Both run KVM. I have 3 VMs running ceph mons, 6 VMs each running 2 ceph
OSDs, and some other VMs. The mon and OSD VMs boot from virtual disks hosted on
local disks of the KVM host; the OSDs access physical disks on the host via raw
device mapping (RDM). All other VMs boot from rbd images hosted on the ceph cluster.
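In case it helps, here is roughly how the storage is wired up on the kvm hosts
(a sketch with illustrative names/paths, not pasted from my actual configs):

# osd VMs get the host's physical disks passed through as block devices, e.g.
virsh attach-disk cephosd01 /dev/disk/by-id/ata-EXAMPLE-DISK vdb --targetbus virtio --persistent
# the other guests boot from rbd, i.e. their domain XML has a network disk with
# <source protocol='rbd' name='libvirt-pool/<image>'> pointing at the cluster
virsh dumpxml cephadm01 | grep -A3 "protocol='rbd'"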
Here is a list of the installed software:
root@kvmhost01:~# dpkg -l | grep ceph
ii  ceph            0.94.2-0ubuntu0.15.04.1  amd64  distributed storage and file system
ii  ceph-common     0.94.2-0ubuntu0.15.04.1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common  0.94.2-0ubuntu0.15.04.1  amd64  common utilities to mount and interact with a ceph file system
ii  ceph-mds        0.94.2-0ubuntu0.15.04.1  amd64  metadata server for the ceph distributed file system
ii  libcephfs1      0.94.2-0ubuntu0.15.04.1  amd64  Ceph distributed file system client library
ii  python-cephfs   0.94.2-0ubuntu0.15.04.1  amd64  Python libraries for the Ceph libcephfs library
root@kvmhost01:~# dpkg -l | grep -E 'kvm|qemu|libvirt'
ii  ipxe-qemu           1.0.0+git-20141004.86285d1-1ubuntu3  all    PXE boot firmware - ROM images for qemu
ii  libvirt-bin         1.2.12-0ubuntu14.1                   amd64  programs for the libvirt library
ii  libvirt0            1.2.12-0ubuntu14.1                   amd64  library for interfacing with different virtualization systems
ii  qemu-kvm            1:2.2+dfsg-5expubuntu9.2             amd64  QEMU Full virtualization
ii  qemu-system-common  1:2.2+dfsg-5expubuntu9.2             amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-x86     1:2.2+dfsg-5expubuntu9.2             amd64  QEMU full system emulation binaries (x86)
ii  qemu-utils          1:2.2+dfsg-5expubuntu9.2             amd64  QEMU utilities
Here is the error logged by libvirt/qemu:
osd/osd_types.cc: In function 'bool pg_t::is_split(unsigned int, unsigned int, std::set<pg_t>*) const' thread 7f29c8415700 time 2015-07-26 14:53:17.118199
osd/osd_types.cc: 459: FAILED assert(m_seed < old_pg_num)
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
1: (()+0x12cc72) [0x7f29d87a3c72]
2: (()+0x20b6f7) [0x7f29d88826f7]
3: (()+0x20b7c0) [0x7f29d88827c0]
4: (()+0x93b81) [0x7f29d870ab81]
5: (()+0xac47f) [0x7f29d872347f]
6: (()+0xacd58) [0x7f29d8723d58]
7: (()+0xae389) [0x7f29d8725389]
8: (()+0xb485f) [0x7f29d872b85f]
9: (()+0x2be40c) [0x7f29d893540c]
10: (()+0x2eee2d) [0x7f29d8965e2d]
11: (()+0x76aa) [0x7f29d42676aa]
12: (clone()+0x6d) [0x7f29d3f9ceed]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
terminate called after throwing an instance of 'ceph::FailedAssertion'
2015-07-26 12:53:48.440+0000: shutting down
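I have not produced the objdump the NOTE asks for; if it would help, I assume
something along these lines against the librbd/librados libraries that qemu
links would give it (exact library paths on Ubuntu 15.04 may differ):

objdump -rdS /usr/lib/x86_64-linux-gnu/librados.so.2 > librados.dump
objdump -rdS /usr/lib/x86_64-linux-gnu/librbd.so.1 > librbd.dump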
And here is some info on ceph and what happened:
root@kvmhost01:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    13333G     13087G         240G          1.80
POOLS:
    NAME                   ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd                    0           0         0         2147G           0
    libvirt-pool           1      69316M      0.51         2147G       18042
    cephfs_data            3       4519M      0.03         2205G        1346
    cephfs_metadata        4       8499k         0         2205G          22
    .rgw.root              6         848         0         2205G           3
    .rgw.control           7           0         0         2205G           8
    .rgw                   8         706         0         2205G           4
    .rgw.gc                9           0         0         2205G          32
    .users.uid             10        675         0         2205G           4
    .users.email           11          8         0         2205G           1
    .users                 12         19         0         2205G           2
    .rgw.buckets.index     13          0         0         2205G           2
    .rgw.buckets           14      7935M      0.06         2205G        4573
root@kvmhost01:~# ceph health
HEALTH_WARN too many PGs per OSD (738 > max 300); pool libvirt-pool has too few pgs
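For what it's worth, that number is consistent with the setup above: with the
12 OSDs (6 VMs x 2 OSDs each) and assuming all pools use size=3 replication,
PGs per OSD is roughly sum(pg_num over pools) * size / number of OSDs, so 738
corresponds to about 2952 PGs in total:

# back-of-the-envelope check, not cluster output (assumes size=3 everywhere)
echo $(( 738 * 12 / 3 ))   # -> 2952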
root@kvmhost01:~# rados df
pool name                  KB  objects  clones  degraded  unfound        rd      rd KB       wr      wr KB
.rgw                        1        4       0         0        0        90         70       30          8
.rgw.buckets          8126126     4573       0         0        0      5108       2664    35376    8127189
.rgw.buckets.index          0        2       0         0        0      9968       9962     9934          0
.rgw.control                0        8       0         0        0         0          0        0          0
.rgw.gc                     0       32       0         0        0      4928       4896     3576          0
.rgw.root                   1        3       0         0        0        65         43        3          3
.users                      1        2       0         0        0         0          0        2          2
.users.email                1        1       0         0        0         0          0        1          1
.users.uid                  1        4       0         0        0        70         68       60          2
cephfs_data           4628365     1346       0         0        0      4212     159015     4017    4663669
cephfs_metadata          8500       22       0         0        0        76      11168     4079      10581
libvirt-pool         70980227    18042       0         0        0   4544014  124869457  7658948  185182989
rbd                         0        0       0         0        0         0          0        0          0
  total used        251477028    24039
  total avail     13723737372
  total space     13981283784
root@kvmhost01:~# ceph osd pool set libvirt-pool pg_num 256
set pool 1 pg_num to 256
root@kvmhost01:~# ceph osd pool set libvirt-pool pgp_num 256
Error EBUSY: currently creating pgs, wait
root@chbsskvmtst01:~# ceph osd pool get libvirt-pool pgp_num
pgp_num: 128
A few minutes later:
root@kvmhost01:~# ceph osd pool set libvirt-pool pgp_num 256
set pool 1 pgp_num to 256
root@kvmhost01:~# ceph osd pool get libvirt-pool pgp_num
pgp_num: 256
root@kvmhost01:~# ceph osd pool get libvirt-pool pg_num
pg_num: 256
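For the record, the sequence I would try next time (a sketch, not what I
actually ran above): raise pg_num, wait until the new PGs have finished
creating, and only then raise pgp_num:

ceph osd pool set libvirt-pool pg_num 256
# wait until "ceph status" no longer reports PGs in a creating state
while ceph status | grep -q creating; do sleep 10; done
ceph osd pool set libvirt-pool pgp_num 256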
root@kvmhost01:~# rados df
pool name                  KB  objects  clones  degraded  unfound        rd      rd KB       wr      wr KB
.rgw                        1        4       0         0        0        90         70       30          8
.rgw.buckets          8126126     4573       0         0        0      5108       2664    35376    8127189
.rgw.buckets.index          0        2       0         0        0      9968       9962     9934          0
.rgw.control                0        8       0         0        0         0          0        0          0
.rgw.gc                     0       32       0         0        0      4928       4896     3576          0
.rgw.root                   1        3       0         0        0        65         43        3          3
.users                      1        2       0         0        0         0          0        2          2
.users.email                1        1       0         0        0         0          0        1          1
.users.uid                  1        4       0         0        0        70         68       60          2
cephfs_data           4628365     1346       0         0        0      4212     159015     4017    4663669
cephfs_metadata          8500       22       0         0        0        76      11168     4079      10581
libvirt-pool         70923657    18049       0        92        0   2016645   54587609  3461290   81148392
rbd                         0        0       0         0        0         0          0        0          0
  total used        258125320    24046
  total avail     13717090168
  total space     13981283784
root@kvmhost01:~# ceph health
HEALTH_WARN too many PGs per OSD (770 > max 300); mds cluster is degraded; mds cephmds02 is laggy
root@chbsskvmtst01:~# ceph mds stat
e33: 1/1/1 up {0=cephmds02=up:replay(laggy or crashed)}
root@chbsskvmtst01:~# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     cephmon01                      running
 4     cephmon02                      running
 11    cephosd01                      running
 12    cephosd02                      running
 13    cephosd03                      running
 14    cephosd04                      running
 16    cephosd06                      running
 17    cephosd05                      running
 20    cephadm01                      running
 -     cephmds01                      shut off
After I started the mds VMs, everything went back to normal (except the "too many PGs per OSD" warning).
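Roughly what that looked like (from memory, not a pasted transcript):

virsh start cephmds01
ceph mds stat    # wait until it reports up:active instead of up:replay
ceph health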
Let me know if you need more info
Bernhard