See inline...
Harald Schmalzbauer wrote:
Harald Schmalzbauer schrieb am 17.02.2010 20:15 (localtime):
...
Now my first idea is to compare MSS and window sizes before and
after the performance drop.
How do I best capture them? tcpdump? It's GbE link speed...
It seems more likely that ZFS is running into slowdowns from resource
contention, memory fragmentation, etc than your network would
suddenly drop out, but tcpdump -w outfile.pcap is a good method of
looking....
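A minimal capture along those lines might look like the following (the interface name em0 and the peer address 10.0.0.2 are placeholders for your setup, not from the thread):

```shell
# Capture full packets (-s 0) on em0 between the two hosts and write
# them to a file; decoding offline is much cheaper than trying to
# keep up with GbE line rate on the console.
tcpdump -i em0 -s 0 -w outfile.pcap host 10.0.0.2

# Later, inspect the capture without name resolution:
tcpdump -r outfile.pcap -n | head
```

Writing to a file and reading it back also lets you re-run the analysis with different filters after the fact.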
Thanks, but first tests showed that ZFS is not causing the slowdown.
Hello,
I got exactly the same limitations when using tmpfs. So for now I'll
concentrate on that, back to ZFS later.
Please clarify my TCP understanding.
If I have the window set to 65535 in the header and a MSS of 1460, how
often should the receiver send ACK segments? window/MSS, right?
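To put numbers on that (using the window and MSS values from the thread), window/MSS gives how many full-size segments fit in one advertised window, but with delayed ACKs the receiver acknowledges at least every second full-size segment, so you would expect roughly half that many ACKs per window, not one:

```shell
# 64 KiB window, 1460-byte MSS (values from the thread):
WINDOW=65535
MSS=1460

# Full-size segments that fit in one advertised window:
echo $((WINDOW / MSS))        # 44

# With delayed ACKs (RFC 1122: ACK at least every second full-size
# segment), roughly this many ACKs per window:
echo $((WINDOW / MSS / 2))    # 22
```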
How soon you see the ACK is based on two values in the kernel:
net.inet.tcp.delacktime
net.inet.tcp.delayed_ack
The first one controls how soon the peer replies with an ACK if there is
no data to send back, i.e. it is just a plain ACK. Van Jacobson first
recommended it in the early days of TCP/IP. Historically, it has been
implemented as a 200 ms timer, but in FreeBSD it is a 100 ms timer.
The second one controls whether delayed acking is enabled. Setting this
sysctl variable to 0 should cause you to see more ACKs.
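Concretely, on FreeBSD you can read and adjust both knobs like this (these are the two sysctls named above; they require root to set):

```shell
# Show the current values:
sysctl net.inet.tcp.delacktime net.inet.tcp.delayed_ack

# Disable delayed ACKs entirely, so every segment is ACKed promptly:
sysctl net.inet.tcp.delayed_ack=0

# Or keep delayed ACKs but shorten the timer (milliseconds):
sysctl net.inet.tcp.delacktime=50
```

Disabling delayed ACKs is mainly a diagnostic step here; it roughly doubles the ACK traffic in exchange for prompter feedback to the sender.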
Now I see every two segments acknowledged in my dump (rsync between two
em0 interfaces).
I'm curious what an iperf between your systems shows? We have recently
upgraded some of our test systems from 6.2 to 8.0-STABLE and are seeing
almost half the bandwidth over em(4). Other systems that have been
upgraded are not seeing any drop in bandwidth.
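A basic iperf run between the two boxes would look like this (10.0.0.2 stands in for the server's address; iperf measures raw TCP throughput, which takes rsync's disk and protocol overhead out of the picture):

```shell
# On one box, start the server:
iperf -s

# On the other, run a 30-second test, printing interim results every 5 s:
iperf -c 10.0.0.2 -t 30 -i 5
```

If iperf can saturate the link but rsync cannot, the bottleneck is above TCP; if iperf also stalls at ~50 MB/s, it points at the network stack or driver.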
I'd like to understand
a) why disabling net.inet.tcp.rfc1323 gives slightly better rsync
throughput than leaving it enabled
rfc1323 deals with window scaling and timestamp options. Perhaps these
are getting in the way?
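To test that hypothesis, you can toggle the option and re-run the transfer (FreeBSD exposes it as a single sysctl covering both window scaling and timestamps):

```shell
# 1 = RFC 1323 window scaling and timestamps enabled (the default):
sysctl net.inet.tcp.rfc1323

# Disable both, then repeat the rsync/iperf measurement for comparison:
sysctl net.inet.tcp.rfc1323=0
```

Note that with rfc1323 off the window cannot scale past 65535 bytes, which itself caps throughput on high bandwidth-delay paths, so a gain from disabling it is surprising and worth isolating.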
b) why I can't transfer more than 50 MB/s over my direct-linked GbE boxes.
But right now I don't even understand the dump I see. As far as I
understand, I should see only one ACK segment for every ~45 data
segments. That would clearly explain why I can't saturate my GbE link.
But I can't imagine this is an undiscovered fault, so I guess I haven't
understood TCP.
We are also seeing similar behavior over the em(4) interface under
FreeBSD 8.0-STABLE.
Patrick
Please help.
Thanks in advance,
-Harry
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[email protected]"