Hi,

On 08/13/2018 03:22 PM, Zhenshi Zhou wrote:
Hi,
Finally, I got a running server with the debug files under /sys/kernel/debug/ceph/xxx/:

[root@docker27 525c4413-7a08-40ca-9a98-0a6df009025b.client213522]# cat mdsc
[root@docker27 525c4413-7a08-40ca-9a98-0a6df009025b.client213522]# cat monc
have monmap 2 want 3+
have osdmap 4545 want 4546
have fsmap.user 0
have mdsmap 335 want 336+
fs_cluster_id -1
[root@docker27 525c4413-7a08-40ca-9a98-0a6df009025b.client213522]# cat osdc
REQUESTS 6 homeless 0
82580   osd10   1.7f9ddac7      [10,13]/10      [10,13]/10
  10000053a04.00000000    0x400024        1       write
81019   osd11   1.184ed679      [11,7]/11       [11,7]/11
  1000005397b.00000000    0x400024        1       write
81012   osd12   1.cd98ed57      [12,9]/12       [12,9]/12
  10000053971.00000000    0x400024        1       write,startsync
82589   osd12   1.7cd5405a      [12,8]/12       [12,8]/12
  10000053a13.00000000    0x400024        1       write,startsync
80972   osd13   1.91886156      [13,4]/13       [13,4]/13
  10000053939.00000000    0x400024        1       write
81035   osd13   1.ac5ccb56      [13,4]/13       [13,4]/13
  10000053997.00000000    0x400024        1       write

The cluster reports nothing unusual and still shows HEALTH_OK.
All I did was edit a file stored on CephFS with vim, and it hung there,
leaving the process in 'D' state.
By the way, the rest of the mounted directory is still usable, with no errors.
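For quick triage, the in-flight requests in an osdc dump like the one above can be tallied per OSD. The snippet below is a sketch (the helper name is made up); on a live system you would point it at the debugfs file itself, here it is fed the dump from this mail as sample input:

```shell
# Tally in-flight requests per OSD from a kernel-client osdc dump.
# On a live system: tally_osdc /sys/kernel/debug/ceph/*/osdc
tally_osdc() {
  # Request lines carry the tid and "osdNN" in field 2; the indented
  # follow-up lines (object name, flags) and the REQUESTS header don't match.
  awk '$2 ~ /^osd[0-9]+$/ { n[$2]++ }
       END { for (o in n) print o, n[o] }' "$@" | sort
}

tally_osdc <<'EOF'
REQUESTS 6 homeless 0
82580   osd10   1.7f9ddac7      [10,13]/10      [10,13]/10
  10000053a04.00000000    0x400024        1       write
81019   osd11   1.184ed679      [11,7]/11       [11,7]/11
  1000005397b.00000000    0x400024        1       write
81012   osd12   1.cd98ed57      [12,9]/12       [12,9]/12
  10000053971.00000000    0x400024        1       write,startsync
82589   osd12   1.7cd5405a      [12,8]/12       [12,8]/12
  10000053a13.00000000    0x400024        1       write,startsync
80972   osd13   1.91886156      [13,4]/13       [13,4]/13
  10000053939.00000000    0x400024        1       write
81035   osd13   1.ac5ccb56      [13,4]/13       [13,4]/13
  10000053997.00000000    0x400024        1       write
EOF
# -> osd10 1 / osd11 1 / osd12 2 / osd13 2 (one per line)
```

A request count that never drains for particular OSDs points at those OSDs (or the client's access to their pool) rather than at the MDS.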

So there are no pending MDS requests, and the mon seems to be OK, too.

But the OSD requests seem to be stuck. Are you sure the Ceph user used for the mount point is allowed to write to the CephFS data pools? Are you using additional EC data pools?
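One way to cross-check this is to compare the file system's data pools against the pools named in the client's OSD caps. The sketch below uses placeholder client and pool names (cephfs_data, cephfs_ec are assumptions, not from this thread); the real values come from `ceph fs ls` and `ceph auth get` on the cluster:

```shell
# Placeholders throughout -- substitute your own client name and pools.
# On the cluster, compare the output of:
#   ceph fs ls                    # lists the file system's data pools (EC pools included)
#   ceph auth get client.<name>   # shows the client's caps, e.g. caps osd = "allow rw pool=cephfs_data"
# Tiny helper: report any data pool not named in the client's OSD caps.
missing_write_caps() {
  granted="$1"    # space-separated pool names from the "caps osd" line
  datapools="$2"  # space-separated data pool names from `ceph fs ls`
  for p in $datapools; do
    case " $granted " in
      *" $p "*) : ;;                           # pool covered by the caps
      *) echo "no write caps for pool: $p" ;;
    esac
  done
}

missing_write_caps "cephfs_data" "cephfs_data cephfs_ec"
# -> no write caps for pool: cephfs_ec
```

If an EC data pool added after the client key was created is missing from the caps, writes routed to that pool would hang exactly like this while everything else keeps working.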

Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com