Re: [ceph-users] luminous ceph-osd crash

2017-09-13 Thread Marcin Dulak
Hi, it looks like with an sdb size of around 1.1 GB, ceph (ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)) no longer crashes. Please don't increase the minimum disk size requirements unnecessarily - it makes it more demanding to test new ceph features and
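For anyone wanting to probe the same threshold, a minimal sketch using a loop device as a stand-in for the VM's sdb (the 1200M value is only illustrative of a size just above the reported ~1.1 GB limit, not a recommendation):

    # create a sparse ~1.2 GB image and expose it as a block device
    truncate -s 1200M /var/tmp/sdb.img
    losetup --find --show /var/tmp/sdb.img   # prints e.g. /dev/loop0
    # confirm the size the OSD will actually see
    lsblk -b /dev/loop0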

Re: [ceph-users] luminous ceph-osd crash

2017-08-31 Thread Marcin Dulak
Hi, /var/log/ceph/ceph-osd.0.log is attached. My sdb is 128MB and sdc (journal) is 16MB:

[root@server0 ~]# ceph-disk list
/dev/dm-0 other, xfs, mounted on /
/dev/dm-1 swap, swap
/dev/sda :
 /dev/sda1 other, 0x83
 /dev/sda2 other, xfs, mounted on /boot
 /dev/sda3 other, LVM2_member
/dev/sdb :
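A quick way to double-check the sizes the OSD and journal devices report before handing them to ceph-disk (a sketch; blockdev and lsblk are standard util-linux tools, nothing ceph-specific is assumed):

    # report device sizes in bytes
    lsblk -b -o NAME,SIZE /dev/sdb /dev/sdc
    blockdev --getsize64 /dev/sdb   # 128 MB data device
    blockdev --getsize64 /dev/sdc   # 16 MB journal device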

Re: [ceph-users] luminous ceph-osd crash

2017-08-31 Thread Sage Weil
Hi Marcin, Can you reproduce the crash with 'debug bluestore = 20' set, and then ceph-post-file /var/log/ceph/ceph-osd.0.log? My guess is that we're not handling a very small device properly? sage

On Thu, 31 Aug 2017, Marcin Dulak wrote:
> Hi,
>
> I have a virtual CentOS 7.3 test setup at:
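For reference, one way to follow this request (a sketch assuming a single OSD, osd.0, and systemd-managed daemons; unit names and paths may differ on other setups):

    # /etc/ceph/ceph.conf -- raise bluestore logging on the OSD
    [osd]
    debug bluestore = 20

    # restart the OSD so it picks up the new log level, reproduce the crash,
    # then upload the log to the Ceph developers
    systemctl restart ceph-osd@0
    ceph-post-file /var/log/ceph/ceph-osd.0.log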

[ceph-users] luminous ceph-osd crash

2017-08-31 Thread Marcin Dulak
Hi, I have a virtual CentOS 7.3 test setup at: https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh It seems to crash reproducibly with luminous, and works with kraken. Is this a known issue?
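To reproduce, something along these lines should work (a sketch; it assumes Vagrant and VirtualBox are installed and that the linked script is self-contained at the root of that commit):

    # fetch the reproduction script at the referenced commit and run it
    git clone https://github.com/marcindulak/github-test-local
    cd github-test-local
    git checkout a339ff7505267545f593fd949a6453a56cdfd7fe
    sh vagrant-ceph-rbd-tutorial-centos7.sh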