Hi,
I think it's not related to my local FS. I built an ext4 filesystem on a
ramdisk and used it as the OSD's backing store.
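The setup was along these lines (brd parameters and OSD path are
illustrative; the ramdisk is ~4 GB to match the numbers below):

    # load the brd ramdisk driver and create one ~4 GB ram disk
    modprobe brd rd_nr=1 rd_size=4194304
    # build an ext4 fs on it and mount it where the OSD expects its data
    mkfs.ext4 /dev/ram0
    mount /dev/ram0 /var/lib/ceph/osd/ceph-0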
When I run iozone or fio on the client mount point, the OSD log shows
the same info as before:
2013-02-08 11:45:06.803915 7f28ec7c4700 0 -- 165.91.215.237:6801/7101
>> 165.91.215.237:0/1990103183 pipe(0x2ded240 sd=803 :6801 pgs=0 cs=0
l=0).accept peer addr is really 165.91.215.237:0/1990103183 (socket is
165.91.215.237:60553/0)
2013-02-08 11:45:06.879009 7f28f7add700 -1 *** Caught signal
(Segmentation fault) **
in thread 7f28f7add700
ceph -s also shows the same output as when using my own local FS:
health HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery
21/42 degraded (50.000%)
monmap e1: 1 mons at {0=165.91.215.237:6789/0}, election epoch 2, quorum 0 0
osdmap e3: 1 osds: 1 up, 1 in
pgmap v7: 384 pgs: 384 active+degraded; 21003 bytes data, 276 MB
used, 3484 MB / 3961 MB avail; 21/42 degraded (50.000%)
mdsmap e4: 1/1/1 up {0=0=up:active}
dmesg shows:
[ 656.799209] libceph: client4099 fsid da0fe76d-8506-4bf8-8b49-172fd8bc6d1f
[ 656.800657] libceph: mon0 165.91.215.237:6789 session established
[ 683.789954] libceph: osd0 165.91.215.237:6801 socket closed (con state OPEN)
[ 683.790007] libceph: osd0 165.91.215.237:6801 socket error on write
[ 684.909095] libceph: osd0 165.91.215.237:6801 socket error on write
[ 685.903425] libceph: osd0 165.91.215.237:6801 socket error on write
[ 687.903937] libceph: osd0 165.91.215.237:6801 socket error on write
[ 691.897037] libceph: osd0 165.91.215.237:6801 socket error on write
[ 699.899197] libceph: osd0 165.91.215.237:6801 socket error on write
[ 715.903415] libceph: osd0 165.91.215.237:6801 socket error on write
[ 747.912122] libceph: osd0 165.91.215.237:6801 socket error on write
[ 811.929323] libceph: osd0 165.91.215.237:6801 socket error on write
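For reference, the only Ceph code change in play is still the one from
my earlier mail quoted below: in link_object() in LFNIndex.cc the hard
link call ::link() is replaced with symlink(). A minimal sketch of that
substitution (not the exact patch; variable names illustrative):

    // original: add a second directory entry (hard link) for the object
    //   int r = ::link(from_path.c_str(), to_path.c_str());
    // my local fs keeps dentries with their inodes and has no hard
    // links, so point a symlink at the existing file instead
    int r = ::symlink(from_path.c_str(), to_path.c_str());
    if (r < 0)
      return -errno;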
Thanks,
Sheng
On Fri, Feb 8, 2013 at 11:07 AM, sheng qiu <[email protected]> wrote:
> Hi Sage,
>
> It's a memory-based fs similar to pramfs.
>
> Thanks,
> Sheng
>
> On Fri, Feb 8, 2013 at 11:02 AM, Sage Weil <[email protected]> wrote:
>> Hi Sheng-
>>
>> On Fri, 8 Feb 2013, sheng qiu wrote:
>>> least pass through the init-ceph script). I made a minor change to
>>> the Ceph code: in link_object() in LFNIndex.cc I changed the hard
>>> link call ::link() to symlink(), since my local fs does not support
>>> hard links (directory entries are stored together with the related
>>> inodes).
>>
>> Unrelated question: which local fs are you using?
>>
>> sage
>
>
>
> --
> Sheng Qiu
> Texas A & M University
> Room 332B Wisenbaker
> email: [email protected]
> College Station, TX 77843-3259
--
Sheng Qiu
Texas A & M University
Room 332B Wisenbaker
email: [email protected]
College Station, TX 77843-3259