Ladies and gentlemen.

Help! I compiled the unstable branch and I still have the same issue: cmon
runs the first time and crashes afterward.

I am running kernel 2.6.31.3; here are the last entries from an
strace -s 256 -f:


[pid  6089] write(1, "09.10.14 02:18:12.511262 mon0(leader).paxos(logm
active lc 8) get_latest v8 len 2630\n"..., 8509.10.14 02:18:12.511262
mon0(leader).paxos(logm active lc 8) get_latest v8 len 2630
) = 85
[pid  6089] write(1, "09.10.14 02:18:12.511414 mon0(leader).log v8
update_from_paxos startup: loading summary e8\n"..., 9109.10.14
02:18:12.511414 mon0(leader).log v8 update_from_paxos startup: loading
summary e8
) = 91
[pid  6089] futex(0xfd85ac0, FUTEX_WAKE_PRIVATE, 2147483647) = 0
[pid  6089] write(2, "terminate called after throwing an instance of '"...,
48terminate called after throwing an instance of ') = 48
[pid  6089] write(2, "buffer::end_of_buffer*"..., 22buffer::end_of_buffer*)
= 22
[pid  6089] write(2, "'\n"..., 2'
)       = 2
[pid  6089] rt_sigprocmask(SIG_UNBLOCK, [ABRT], NULL, 8) = 0
[pid  6089] tgkill(6089, 6089, SIGABRT) = 0
[pid  6089] --- SIGABRT (Aborted) @ 0 (0) ---
[pid  6090] +++ killed by SIGABRT +++
[pid  6091] +++ killed by SIGABRT +++
[pid  6092] +++ killed by SIGABRT +++

Running with debug ms and debug mon set to 20, I get this:

09.10.14 02:21:41.442312 store(/conf/mon0) reading at off 0 of 16
09.10.14 02:21:41.442369 store(/conf/mon0) get_bl clientmap/1 = 16 bytes
09.10.14 02:21:41.442567 mon0(leader).paxos(clientmap active lc 1)
stash_latest v1 len 16
09.10.14 02:21:41.442629 store(/conf/mon0) put_bl clientmap/latest = 28
bytes
09.10.14 02:21:41.459300 mon0(leader).client v1 in-core next_client reset to
0
09.10.14 02:21:41.459384 mon0(leader).client v1 create_pending
09.10.14 02:21:41.459406 mon0(leader).paxosservice(logm) election_finished
09.10.14 02:21:41.459427 mon0(leader).paxosservice(logm) _active
09.10.14 02:21:41.459658 store(/conf/mon0) reading at off 0 of 2642
09.10.14 02:21:41.459727 store(/conf/mon0) get_bl logm/latest = 2642 bytes
09.10.14 02:21:41.459768 mon0(leader).paxos(logm active lc 8) get_latest v8
len 2630
09.10.14 02:21:41.459804 mon0(leader).log v8 update_from_paxos startup:
loading summary e8
terminate called after throwing an instance of 'buffer::end_of_buffer*'
Aborted


Albert.


On Tue, Oct 13, 2009 at 12:51 PM, Albert Ales <albert.c.a...@gmail.com>wrote:

>
>
> On Tue, Oct 13, 2009 at 12:18 PM, Albert Ales <albert.c.a...@gmail.com>wrote:
>
>>
>>
>> On Mon, Oct 12, 2009 at 4:11 PM, Albert Ales <albert.c.a...@gmail.com>wrote:
>>
>>> Hi Sage,
>>>
>>> I went a little further with the mon. Here are the new logs (I sent them
>>> via IRC as well):
>>>
>>> 09.10.12 22:24:47.670310 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=0 cs=0 ltx=0).accept
>>> peer addr is really 10.64.4.15:0/3441800068/0 (socket is
>>> 10.64.4.15:60342/0/0)
>>> 09.10.12 22:24:47.670375 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=0 cs=0 ltx=0).accept
>>> got peer connect_seq 0 global_seq 1
>>> 09.10.12 22:24:47.670454 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=0 cs=0 ltx=1).accept
>>> host_type 4, setting policy, lossy_tx=1
>>> 09.10.12 22:24:47.670507 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=0 cs=0 ltx=1).accept
>>> my proto 15, their proto 15
>>> 09.10.12 22:24:47.670555 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=0 cs=0 ltx=1).accept
>>> new session
>>> 09.10.12 22:24:47.670603 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).accept
>>> success, connect_seq = 1, sending READY
>>> 09.10.12 22:24:47.670653 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1
>>> ltx=1).register_pipe
>>> 09.10.12 22:24:47.670731 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).accept
>>> starting writer, state=0
>>> 09.10.12 22:24:47.671015 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).accept
>>> done
>>> 09.10.12 22:24:47.671076 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).writer:
>>> state = 0 policy.server=1
>>> 09.10.12 22:24:47.671131 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).writer
>>> sleeping
>>> 09.10.12 22:24:47.671188 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).reader
>>> reading tag...
>>> 09.10.12 22:24:47.671248 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).reader
>>> couldn't read tag, Success
>>> 09.10.12 22:24:47.671328 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=9 pgs=1 cs=1 ltx=1).fault
>>> 0: Success
>>> 09.10.12 22:24:47.671435 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=1 ltx=1).fault
>>> on lossy channel, failing
>>> 09.10.12 22:24:47.671534 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=1
>>> ltx=1).was_session_reset
>>> 09.10.12 22:24:47.671637 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).fail
>>> 09.10.12 22:24:47.671690 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).stop
>>> 09.10.12 22:24:47.671821 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).writer
>>> finishing
>>> 09.10.12 22:24:47.671884 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).writer
>>> done
>>> 09.10.12 22:24:47.672005 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0
>>> ltx=1).unregister_pipe
>>> 09.10.12 22:24:47.672065 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).reader
>>> queueing for reap
>>> 09.10.12 22:24:47.672181 1208168256 -- 10.64.4.30:6789/0/0 reaper
>>> reaping pipe 0x101fac18 10.64.4.15:0/3441800068/0
>>> 09.10.12 22:24:47.672227 1208168256 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0
>>> ltx=1).unregister_pipe - not registered
>>> 09.10.12 22:24:47.672307 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0 ltx=1).reader
>>> done
>>> 09.10.12 22:24:47.672383 1208168256 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fac18 sd=-1 pgs=1 cs=0
>>> ltx=1).discard_queue
>>> 09.10.12 22:24:47.672440 1208168256 -- 10.64.4.30:6789/0/0 reaper reaped
>>> pipe 0x101fac18 10.64.4.15:0/3441800068/0
>>> 09.10.12 22:24:48.465724 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=0 cs=0 ltx=0).accept
>>> peer addr is really 10.64.4.15:0/3441800068/0 (socket is
>>> 10.64.4.15:60343/0/0)
>>> 09.10.12 22:24:48.465787 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=0 cs=0 ltx=0).accept
>>> got peer connect_seq 0 global_seq 2
>>> 09.10.12 22:24:48.465862 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=0 cs=0 ltx=1).accept
>>> host_type 4, setting policy, lossy_tx=1
>>> 09.10.12 22:24:48.465918 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=0 cs=0 ltx=1).accept
>>> my proto 15, their proto 15
>>> 09.10.12 22:24:48.465966 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=0 cs=0 ltx=1).accept
>>> new session
>>> 09.10.12 22:24:48.466010 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).accept
>>> success, connect_seq = 1, sending READY
>>> 09.10.12 22:24:48.466056 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1
>>> ltx=1).register_pipe
>>> 09.10.12 22:24:48.466135 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).accept
>>> starting writer, state=0
>>> 09.10.12 22:24:48.466314 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).accept
>>> done
>>> 09.10.12 22:24:48.466372 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).writer:
>>> state = 0 policy.server=1
>>> 09.10.12 22:24:48.466429 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).writer
>>> sleeping
>>> 09.10.12 22:24:48.466485 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).reader
>>> reading tag...
>>> 09.10.12 22:24:48.466539 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).reader
>>> couldn't read tag, Success
>>> 09.10.12 22:24:48.466623 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=9 pgs=2 cs=1 ltx=1).fault
>>> 0: Success
>>> 09.10.12 22:24:48.466731 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=1 ltx=1).fault
>>> on lossy channel, failing
>>> 09.10.12 22:24:48.466823 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=1
>>> ltx=1).was_session_reset
>>> 09.10.12 22:24:48.466920 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).fail
>>> 09.10.12 22:24:48.466973 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).stop
>>> 09.10.12 22:24:48.467090 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).writer
>>> finishing
>>> 09.10.12 22:24:48.467149 1308947648 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).writer
>>> done
>>> 09.10.12 22:24:48.467252 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0
>>> ltx=1).unregister_pipe
>>> 09.10.12 22:24:48.467308 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).reader
>>> queueing for reap
>>> 09.10.12 22:24:48.467419 1208168256 -- 10.64.4.30:6789/0/0 reaper
>>> reaping pipe 0x101fb018 10.64.4.15:0/3441800068/0
>>> 09.10.12 22:24:48.467462 1208168256 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0
>>> ltx=1).unregister_pipe - not registered
>>> 09.10.12 22:24:48.467528 1300559040 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0 ltx=1).reader
>>> done
>>> 09.10.12 22:24:48.467600 1208168256 -- 10.64.4.30:6789/0/0 >>
>>> 10.64.4.15:0/3441800068/0 pipe(0x101fb018 sd=-1 pgs=2 cs=0
>>> ltx=1).discard_queue
>>> 09.10.12 22:24:48.467655 1208168256 -- 10.64.4.30:6789/0/0 reaper reaped
>>> pipe 0x101fb018 10.64.4.15:0/3441800068/0
>>>
>>>
>>>
>>> Albert.
>>>
>>>
>>> On Mon, Oct 12, 2009 at 7:18 AM, Albert Ales <albert.c.a...@gmail.com>wrote:
>>>
>>>> I compiled the unstable version of the source you put together on Friday
>>>> and I am still having issues. (I did a git pull last night and recompiled,
>>>> with the same results.)
>>>>
>>>> 09.10.12 04:21:11.736032 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=0).accept
>>>> peer addr is really 10.64.4.15:0/105983296/0 (socket is
>>>> 10.64.4.15:1678/0/0)
>>>> 09.10.12 04:21:11.736119 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=0).accept
>>>> got peer connect_seq 0 global_seq 1
>>>> 09.10.12 04:21:11.736219 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=1).accept
>>>> host_type 4, setting policy, lossy_tx=1
>>>> 09.10.12 04:21:11.736292 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=1).accept
>>>> my proto 15, their proto 14
>>>> 09.10.12 04:21:11.736514 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=1).accept
>>>> couldn't read connect
>>>> 09.10.12 04:21:11.736589 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=1).fault
>>>> 0: Success
>>>> 09.10.12 04:21:11.736670 1250227392 -- 10.64.4.31:6789/0/0 >>
>>>> 10.64.4.15:0/105983296/0 pipe(0x101ea148 sd=6 pgs=0 cs=0 ltx=1).fault
>>>> already closed|closing
>>>>
>>>>
>>>> Should the requested/advertised versions match?
>>>>
>>>> Albert.
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Oct 7, 2009 at 10:23 AM, Albert Ales 
>>>> <albert.c.a...@gmail.com>wrote:
>>>>
>>>>> Hi Sage
>>>>>
>>>>> Here it is:
>>>>>
>>>>> Oct  7 04:02:22 Board0 kernel: ceph: loaded
>>>>> (53ca2a8511513ecccf342be9a179233795e01d9f)
>>>>>
>>>>> Albert.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Oct 7, 2009 at 8:30 AM, Sage Weil <s...@newdream.net> wrote:
>>>>>
>>>>>> Hi Albert,
>>>>>>
>>>>>> Can you find the line like
>>>>>>
>>>>>>        ceph: loaded (335cd8f952b457095ea2a66aee3db50efb63c91d)
>>>>>>
>>>>>> in your kern.log and send it?  That'll tell me what version of the
>>>>>> client
>>>>>> you were running.
>>>>>>
>>>>>> Thanks-
>>>>>> sage
>>>>>>
>>>>>>
>>>>>> On Tue, 6 Oct 2009, Albert Ales wrote:
>>>>>>
>>>>>> > Issues mounting a drive on Ceph 0.16.1.
>>>>>> >
>>>>>> > I ran cmon with the cmon and ms debug levels set to 20. On the client
>>>>>> > side I am getting some odd stuff (10.64.4.30 is a node and .4.15 is
>>>>>> > the client):
>>>>>> >
>>>>>> > 09.10.07 03:50:18.648567 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=0).accept peer
>>>>>> > addr is really 10.64.4.15/2493910549/0 (socket is 10.64.4.15/0/0)
>>>>>> > 09.10.07 03:50:18.648626 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=0).accept got
>>>>>> > peer connect_seq 0 global_seq 16777216
>>>>>> > 09.10.07 03:50:18.648700 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).accept
>>>>>> > host_type 67108864, setting policy, lossy_tx=1
>>>>>> > 09.10.07 03:50:18.648755 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).accept my
>>>>>> > proto 14, their proto 234881024
>>>>>> > 09.10.07 03:50:18.648831 1216672960 -- 10.64.4.30/0/0 accepter
>>>>>> calling
>>>>>> > select
>>>>>> > 09.10.07 03:50:18.648933 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).accept
>>>>>> > couldn't read connect
>>>>>> > 09.10.07 03:50:18.648987 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).fault 0:
>>>>>> > Success
>>>>>> > 09.10.07 03:50:18.649073 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).fault
>>>>>> > already closed|closing
>>>>>> > 09.10.07 03:50:18.649127 1308947648 -- 10.64.4.30/0/0 >>
>>>>>> > 10.64.4.15/2493910549/0 pipe(0x101ed180 sd=10 pgs=0 cs=0
>>>>>> ltx=1).reader
>>>>>> > queueing for reap
>>>>>> >
>>>>>> >
>>>>>> > If I try to mount it locally on the node, on the client side I get:
>>>>>> >
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch, my 14 !=
>>>>>> server's
>>>>>> > 234881024
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch
>>>>>> > ceph: mon0 10.64.4.30:6789 session lost, hunting for new mon
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch, my 14 !=
>>>>>> server's
>>>>>> > 234881024
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch, my 14 !=
>>>>>> server's
>>>>>> > 234881024
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch, my 14 !=
>>>>>> server's
>>>>>> > 234881024
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch, my 14 !=
>>>>>> server's
>>>>>> > 234881024
>>>>>> > ceph: mon0 10.64.4.30:6789 protocol version mismatch
>>>>>> >
>>>>>> >
>>>>>> > Albert.
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
_______________________________________________
Ceph-devel mailing list
Ceph-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ceph-devel
