Hi,

I have a virtual CentOS 7.3 test setup at:
https://github.com/marcindulak/github-test-local/blob/a339ff7505267545f593fd949a6453a56cdfd7fe/vagrant-ceph-rbd-tutorial-centos7.sh

The OSD activation step crashes reproducibly with luminous (12.2.0), while the same setup works with kraken.
Is this a known issue?
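
In case it helps to reproduce, running the setup should amount to something like the following (a sketch, assuming vagrant and virtualbox are installed; the authoritative steps are in the script itself):

    # hedged sketch: fetch the repo at the pinned commit and run the test script
    git clone https://github.com/marcindulak/github-test-local
    cd github-test-local
    git checkout a339ff7505267545f593fd949a6453a56cdfd7fe
    bash vagrant-ceph-rbd-tutorial-centos7.sh

The failure happens during OSD activation: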

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy osd activate server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc server2:/dev/sdb1:/dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x10ae710>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x109fb90>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('server0', '/dev/sdb1', '/dev/sdc'), ('server1', '/dev/sdb1', '/dev/sdc'), ('server2', '/dev/sdb1', '/dev/sdc')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks server0:/dev/sdb1:/dev/sdc server1:/dev/sdb1:/dev/sdc server2:/dev/sdb1:/dev/sdc
[server0][DEBUG ] connection detected need for sudo
[server0][DEBUG ] connected to host: server0
[server0][DEBUG ] detect platform information from remote host
[server0][DEBUG ] detect machine type
[server0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] activating host server0 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[server0][DEBUG ] find the location of an executable
[server0][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[server0][WARNIN] main_activate: path = /dev/sdb1
[server0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[server0][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
[server0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
[server0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[server0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[server0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.wfKzzb with options noatime,inode64
[server0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.wfKzzb
[server0][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.wfKzzb
[server0][WARNIN] activate: Cluster uuid is 04e79ca9-308c-41a5-b40d-a2737c34238d
[server0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[server0][WARNIN] activate: Cluster name is ceph
[server0][WARNIN] activate: OSD uuid is 46d7cc0b-a087-4c8c-b00c-ff584c941cf9
[server0][WARNIN] activate: OSD id is 0
[server0][WARNIN] activate: Initializing OSD...
[server0][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap
[server0][WARNIN] got monmap epoch 1
[server0][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.wfKzzb --osd-uuid 46d7cc0b-a087-4c8c-b00c-ff584c941cf9 --setuser ceph --setgroup ceph
[server0][WARNIN] /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc: In function 'void BlueFS::add_block_extent(unsigned int, uint64_t, uint64_t)' thread 7fef4f0cfd00 time 2017-08-31 10:05:31.892519
[server0][WARNIN] /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.0/rpm/el7/BUILD/ceph-12.2.0/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length)
[server0][WARNIN]  ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
[server0][WARNIN]  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x110) [0x7fef4fb4c510]
[server0][WARNIN]  2: (BlueFS::add_block_extent(unsigned int, unsigned long, unsigned long)+0x4d8) [0x7fef4fad1f88]
[server0][WARNIN]  3: (BlueStore::_open_db(bool)+0xc4f) [0x7fef4f9f597f]
[server0][WARNIN]  4: (BlueStore::mkfs()+0xd0d) [0x7fef4f9ff99d]
[server0][WARNIN]  5: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d, int)+0x29b) [0x7fef4f5b6f1b]
[server0][WARNIN]  6: (main()+0xf42) [0x7fef4f4f6972]
[server0][WARNIN]  7: (__libc_start_main()+0xf5) [0x7fef4b72fb35]
[server0][WARNIN]  8: (()+0x4acb56) [0x7fef4f596b56]
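
The assert at BlueFS.cc:172 means BlueFS was asked to register a block extent that ends beyond the end of the underlying device (bdev[id]->get_size() < offset + length), so my first suspicion is that the virtual disks in this setup are smaller than what bluestore expects. A quick check of the raw sizes on one of the OSD hosts (a sketch; blockdev ships with util-linux on CentOS 7):

    # print the size in bytes of the data partition and the second device
    for dev in /dev/sdb1 /dev/sdc; do
        printf '%s: ' "$dev"
        sudo blockdev --getsize64 "$dev"
    done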

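If the sizes look sane, re-running the failing mkfs by hand with BlueFS/BlueStore debugging turned up should show which extent trips the check (command copied from the log above; the tmp mount point /var/lib/ceph/tmp/mnt.wfKzzb may already have been cleaned up and need re-mounting first):

    # hedged: the same command ceph-disk ran, plus extra debug switches
    sudo /usr/bin/ceph-osd --cluster ceph --mkfs -i 0 \
        --monmap /var/lib/ceph/tmp/mnt.wfKzzb/activate.monmap \
        --osd-data /var/lib/ceph/tmp/mnt.wfKzzb \
        --osd-uuid 46d7cc0b-a087-4c8c-b00c-ff584c941cf9 \
        --setuser ceph --setgroup ceph \
        --debug-bluefs 20 --debug-bluestore 20
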
Cheers,

Marcin