Hi folks,

        I've just started working with ceph, and I'm finding that
whenever a 32-bit client mounts the ceph filesystem and tries
to copy something into it, the client host hangs after some
random, small amount of data has been copied.  The last error
messages displayed are:

 kernel:Process kworker/0:0 (pid: 4913, ti=f6042000 task=f6008a90 task.ti=f6042000)
 kernel:Stack:
 kernel:Call Trace:
 kernel:Code: 15 48 95 70 c1 81 ea 00 c0 5c 00 81 e2 00 00 e0 ff 29 d0 c1 e8 0c 8b 14 85 a0 82 8e c1 83 ea 01 85 d2 89 14 85 a0 82 8e c1 75 04 <0f> 0b eb fe 31 c0 83 fa 01 75 0f 31 c0 81 3d f0 cc 71 c1 f0 cc
 kernel:EIP: [<c1116fff>] kunmap_high+0x4f/0xa0 SS:ESP 0068:f6043e6c

The client host is running 32-bit CentOS 6.3, with the ELRepo 3.5.4
kernel.  The OSD, MON and MDS machines are all 64-bit CentOS 6.3, with
the stock CentOS 2.6.32 kernel.  The ceph version in all cases is
0.48.2.  The OSDs are using XFS for their data stores.
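
        For reference, the reproduction is nothing exotic: a plain
kernel-client mount followed by a recursive copy, roughly like the
following (the monitor hostname, secret and paths here are just
placeholders for my real ones):

 # hypothetical example; substitute your own monitor host, key and paths
 mount -t ceph mon-host:6789:/ /mnt/ceph -o name=admin,secret=<admin key>
 cp -r /some/local/tree /mnt/ceph/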

        There are no error messages in the ceph logs.

        After rebooting the client machine and re-mounting the
ceph filesystem, I can see that some files were indeed copied,
but "du" complains about circular directory references and warns
that the filesystem is probably corrupt.

        After wiping out the osds and re-creating the ceph cluster,
the same thing happens.
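
        In case the details matter, the rebuild was done the usual
way for 0.48, roughly as follows (a sketch from memory; the config
path is just the default one on my machines):

 # sketch only -- stop the daemons, recreate the cluster, restart
 service ceph -a stop
 mkcephfs -a -c /etc/ceph/ceph.conf --mkfs
 service ceph -a start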

        Any advice about how to debug this would be appreciated.

                                        Thanks,
                                        Bryan


-- 
========================================================================
Bryan Wright              |"If you take cranberries and stew them like 
Physics Department        | applesauce, they taste much more like prunes
University of Virginia    | than rhubarb does."  --  Groucho 
Charlottesville, VA  22901|                     
(434) 924-7218            |         [email protected]
========================================================================
