Public bug reported:

Ceph cluster mds failed during cephfs usage
 
---uname output---
Linux testU 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux
 
---Additional Hardware Info---
System Z s390x LPAR 

 
Machine Type = Ubuntu VM on s390x LPAR 
 
---Debugger---
A debugger is not configured
 
---Steps to Reproduce---
On an s390x LPAR with 4 Ubuntu VMs:
VM 1 - ceph monitor, ceph mds
VM 2 - ceph monitor, ceph osd
VM 3 - ceph monitor, ceph osd
VM 4 - client using cephfs
I installed the Ceph cluster on the first 3 VMs and used the 4th VM as a CephFS client. I mounted the CephFS share and tried to touch a file in the mount point:
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
root@testU:~# ceph osd pool create cephfs1_data 32
pool 'cephfs1_data' created
root@testU:~# ceph osd pool create cephfs1_metadata 32
pool 'cephfs1_metadata' created
root@testU:~# ceph osd pool ls
rbd
libvirt-pool
cephfs1_data
cephfs1_metadata
root@testU:~# ceph fs new cephfs1 cephfs1_metadata cephfs1_data
new fs with metadata pool 5 and data pool 4
root@testU:~# ceph fs ls 
name: cephfs1, metadata pool: cephfs1_metadata, data pools: [cephfs1_data ]


root@testU:~# ceph mds stat
e37: 1/1/1 up {2:0=mon1=up:active}
root@testU:~# ceph -s
    cluster 9f054e62-10e5-4b58-adb9-03d27a360bdc
     health HEALTH_OK
     monmap e1: 3 mons at {mon1=192.168.122.144:6789/0,osd1=192.168.122.233:6789/0,osd2=192.168.122.73:6789/0}
            election epoch 4058, quorum 0,1,2 osd2,mon1,osd1
      fsmap e37: 1/1/1 up {2:0=mon1=up:active}
     osdmap e62: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v2011: 256 pgs, 4 pools, 4109 MB data, 1318 objects
            12371 MB used, 18326 MB / 30698 MB avail
                 256 active+clean


root@testU:~# ceph auth get client.admin | grep key
exported keyring for client.admin
        key = AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==
root@testU:~# mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secret=AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==,context="system_u:object_r:tmp_t:s0"
root@testU:~# 
root@testU:~# mount |grep ceph
192.168.122.144:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
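
Note: to keep the key off the mount command line, the kernel client also accepts a secret file via the secretfile= option; a minimal sketch, assuming the admin key is stored in /etc/ceph/admin.secret (that path is an assumption, not from the report):

  # the file must contain only the base64 key, no "key =" prefix
  echo 'AQCepkZY5wuMOxAA6KxPQjDJ17eoGZcGmCvS/g==' > /etc/ceph/admin.secret
  chmod 600 /etc/ceph/admin.secret
  mount -t ceph 192.168.122.144:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret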


root@testU:~# ls -l /mnt/cephfs/
total 0
root@testU:~# touch /mnt/cephfs/testfile
[  759.865289] ceph: mds parse_reply err -5
[  759.865293] ceph: mdsc_handle_reply got corrupt reply mds0(tid:2)
root@testU:~# ls -l /mnt/cephfs/
[  764.600952] ceph: mds parse_reply err -5
[  764.600955] ceph: mdsc_handle_reply got corrupt reply mds0(tid:5)
[  764.601343] ceph: mds parse_reply err -5
[  764.601345] ceph: mdsc_handle_reply got corrupt reply mds0(tid:6)
ls: reading directory '/mnt/cephfs/': Input/output error
total 0
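
To narrow down where the reply gets corrupted, MDS-side debug logging can be raised while the touch is retried; a sketch, assuming the active MDS is named mon1 as in the fsmap above and its log is in the default location:

  ceph tell mds.mon1 injectargs '--debug_mds 20 --debug_ms 1'
  # re-run: touch /mnt/cephfs/testfile
  # then inspect the MDS log on the mon1 VM:
  tail -n 200 /var/log/ceph/ceph-mds.mon1.log
  # and the kernel client messages on the client VM:
  dmesg | grep ceph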

 
Userspace tool common name: cephfs ceph 
 
The userspace tool has the following bit modes: 64-bit 

Userspace rpm: -

Userspace tool obtained from project website:  na

-Attach ltrace and strace of userspace application.

** Affects: ceph (Ubuntu)
     Importance: Undecided
     Assignee: Skipper Bug Screeners (skipper-screen-team)
         Status: New


** Tags: architecture-s39064 bugnameltc-149933 severity-high targetmilestone-inin---

