Just to report back on this - I just tried the patches from last week,
which fixed the sending of the keepalives in the different
thread, but my original issue (the synchronisation speed) remains
I'm afraid - so much for the theory that the corruption was causing
the speed decrease. It's obviously
On Mon, Nov 01, 2010 at 09:57:08PM +0200, Mikolaj Golub wrote:
On Mon, 01 Nov 2010 17:06:49 +0200 Mikolaj Golub wrote:
MG On Mon, 1 Nov 2010 12:01:00 +0100 Pawel Jakub Dawidek wrote:
On Sat, Oct 30, 2010 at 03:25:56PM +0300, Mikolaj Golub wrote:
On Mon, 1 Nov 2010 12:01:00 +0100 Pawel Jakub Dawidek wrote:
PJD I like your patch and I agree of course it is better to send keepalive
PJD packets only when connection is idle. The only thing I'd change is to
PJD modify QUEUE_TAKE1() macro to take additional argument 'timeout' - if we
PJD
On Thu, 28 Oct 2010 22:08:54 +0300 Mikolaj Golub wrote to Pawel Jakub Dawidek:
PJD I looked at the code and the keepalive packets are sent from another
PJD thread. Could you try turning them off in primary.c and see if that
PJD helps?
MG At first I set RETRY_SLEEP to 1 sec to have more
In hast_proto_send() we send header and then data. Couldn't it be that
remote_send and sync threads interfere and their packets are mixed? Maybe
some synchronization is needed here?
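If that is the problem, the conventional fix is to hold one lock across both writes so a (header, data) pair goes out atomically with respect to the other sender threads. A minimal sketch, assuming plain write(2) on a shared descriptor and ignoring partial writes for brevity (illustrative only, not the actual hast_proto_send()):

```c
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static pthread_mutex_t send_lock = PTHREAD_MUTEX_INITIALIZER;

/* Send header then payload under one lock, so another thread cannot
 * inject its own header between our header and our data. */
int
proto_send_locked(int fd, const void *hdr, size_t hdrlen,
    const void *data, size_t datalen)
{
	int error = 0;

	pthread_mutex_lock(&send_lock);
	if (write(fd, hdr, hdrlen) != (ssize_t)hdrlen ||
	    (datalen > 0 && write(fd, data, datalen) != (ssize_t)datalen))
		error = -1;
	pthread_mutex_unlock(&send_lock);
	return (error);
}
```

Without the lock, the remote_send and sync threads could each complete their header write before either writes its data, which is exactly the interleaving described above.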
Interesting - I haven't looked very closely at the code, but I didn't
realise that more than one thread was in
On Wed, Oct 27, 2010 at 10:05:20PM +0300, Mikolaj Golub wrote:
I set sleep(1) in hast_proto_send() between
On Thu, 28 Oct 2010 18:30:36 +0200 Pawel Jakub Dawidek wrote:
On Tue, 26 Oct 2010 17:01:01 +0100 Pete French wrote:
PF Actually, I just looked in dmesg on the secondary - it is full
PF of messages thus:
PF Oct 26 15:44:59 serpentine-passive hastd[10394]: [serp0] (secondary)
Unable to receive request header: RPC version wrong.
PF Oct 26 15:45:00 serpentine-passive hastd[782]: [serp0] (secondary) Worker
process exited
You can check whether the queue size is an issue by monitoring Recv-Q and
Send-Q with netstat for the hastd connections during the test, running something like below:
while sleep 1; do netstat -na |grep '\.8457.*ESTAB'; done
Interesting - I ran those and started a complete resilver (I do
this by
What speed do you expect? IIRC from my tests, I was able to saturate
1Gbit link with initial synchronization. Also note that hast
synchronizes only the differences, and not the entire thing, after a crash
or power failure.
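The "only differences" behaviour can be pictured as a dirty-extent bitmap: set a bit before writing an extent, clear it once the peer has the data, and after a crash resend only the extents whose bit is still set. The sketch below only mirrors that idea (it is not hastd's actual activemap code):

```c
#include <limits.h>

#define EXTENTS 1024

static unsigned char dirty[EXTENTS / CHAR_BIT];

/* Mark an extent as not yet replicated to the peer. */
void
mark_dirty(unsigned ext)
{
	dirty[ext / CHAR_BIT] |= 1u << (ext % CHAR_BIT);
}

/* Clear the mark once the peer has acknowledged the data. */
void
mark_clean(unsigned ext)
{
	dirty[ext / CHAR_BIT] &= ~(1u << (ext % CHAR_BIT));
}

/* After a crash, only extents still marked need to be resent. */
int
needs_sync(unsigned ext)
{
	return ((dirty[ext / CHAR_BIT] >> (ext % CHAR_BIT)) & 1);
}
```

Because only marked extents are walked after a failure, resynchronization time scales with the amount of data written while the nodes were apart, not with the size of the whole provider.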
I should probably have put some numbers in the original email,
sorry! I am
If you are 50ms RTT from the remote system, the default buffer size will
limit you to about 21 Mbps. The formula is window size (in bits) / RTT (in
sec.); the result is the absolute maximum possible bandwidth in
bits/sec. Of course, you can use the window size in bytes instead, and
the result
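As a worked example of that formula (the ~128 KiB window below is an assumption chosen to reproduce the 21 Mbps figure, not a number taken from this thread):

```c
/* Bandwidth-delay product ceiling:
 * window (bytes) * 8 bits/byte / RTT (seconds) = max bits/sec. */
double
max_bandwidth_mbps(double window_bytes, double rtt_sec)
{
	return (window_bytes * 8.0 / rtt_sec / 1e6);
}

/* max_bandwidth_mbps(128.0 * 1024.0, 0.050) is about 21 Mbit/s,
 * matching the figure quoted above for a 50 ms RTT. */
```

Doubling the socket buffer doubles this ceiling, which is why tuning the buffer sizes matters on high-latency links.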
You could change the values and recompile hastd :-). It would be interesting
to know about the results of your experiment (if you do).
I changed the buffer sizes to the same as I was using for ggate, but the speed
is still the same - 44meg/second (about half of what the link can do)
On Mon, 25 Oct 2010 11:55:34 +0100 Pete French wrote:
On Thu, 21 Oct 2010 13:25:34 +0100 Pete French wrote:
PF Well, I bit the bullet and moved to using hast - all went beautifully,
PF and I migrated the pool with no downtime. The one thing I do notice,
PF however, is that the synchronisation with hast is much slower
PF than the older
On Fri, Oct 22, 2010 at 05:51:03PM +0300, Mikolaj Golub wrote: