while checking out the quality of a switch, I came upon a very disturbing
discovery: FreeBSD - Linux throughput is MUCH better than FreeBSD - FreeBSD
Setup:
2 blades in the same bladeserver, A running FreeBSD 5.4, B running Linux
C is running FreeBSD 5.4
all are connected
we need more data points -
did you test tcp or udp ?
who is sourcing data ?
is the bandwidth symmetric (i.e. A -> B same as B -> A)?
cheers
luigi
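To gather those data points, an iperf session along these lines could work (a sketch only; host names A and B are placeholders, and it assumes iperf is installed on both blades):

```shell
# On the receiving host (B), start an iperf server:
iperf -s

# On the sending host (A), run a 10-second TCP test against B:
iperf -c B -t 10

# Swap the roles (server on A, client on B) to check whether the
# bandwidth is symmetric, and add -u on both sides to compare
# UDP against TCP.
```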
On Tue, Jul 12, 2005 at 09:21:13AM +0300, Danny Braniss wrote:
while checking out the quality of a switch, I came upon a very disturbing
discovery:
we need more data points -
did you test tcp or udp ?
i used iperf:
Client connecting to x-dev, TCP port 5001
TCP window size: 65.0 KByte (WARNING: requested 64.0 KByte)
who is sourcing data ?
all, I tried all combinations, and the numbers are very similar to the
ones i posted.
As a further FYI, a variety of debugging features are still enabled by
default in RELENG_6, including INVARIANTS, WITNESS, and user-space malloc
debugging. These will remain enabled through the first snapshot from the
Not very scientific, but here is my ubench on a dual Nocona @ 2.8 GHz
and 4 GB
Are the window sizes on Linux bigger or smaller?
TCP window size: 16.0 KByte (default)
smaller :-(, but increasing it does not make any difference
Hmmm... Various things that you could try (I'd try them
one by one, rather than all together):
1) sysctl net.inet.tcp.inflight_enable=0
did the trick! now can someone remind me what inflight does? and could
someone explain why increasing sendspace alone did not do the trick?
(i had it at 64k, which got things better, but not sufficient).
TCP inflight limiting is supposed to guess the bandwidth-delay
product for a TCP
combining
sysctl net.inet.tcp.sendspace=131072
and
sysctl net.inet.tcp.inflight.enable=0
did the trick!
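For what it's worth, settings applied with sysctl(8) are lost on reboot; FreeBSD reads /etc/sysctl.conf at boot, so a fragment like this (run as root) would make the two values stick:

```shell
# Append the two settings to /etc/sysctl.conf so they are
# re-applied at every boot (run as root):
cat >> /etc/sysctl.conf <<'EOF'
net.inet.tcp.sendspace=131072
net.inet.tcp.inflight.enable=0
EOF
```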
Congratulations! But I wonder why the throughput of FreeBSD -> Linux
was almost equal to that of Linux -> FreeBSD. If the settings above
improve the throughput of
(I am sorry if you have already received this e-mail. I'm resending it
because it seems the previous one was lost.)
TCP inflight limiting is supposed to guess the bandwidth-delay
product for a TCP connection and stop the window expanding much
above this.
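As a back-of-the-envelope illustration of why a 64k buffer can be the bottleneck: the bandwidth-delay product is bandwidth times round-trip time. The figures below (gigabit link, 1 ms RTT) are illustrative assumptions, not measurements from the setup in this thread:

```shell
#!/bin/sh
# Bandwidth-delay product in bytes = (bits/s / 8) * RTT.
# Assumed figures: 1 Gbit/s link, 1000 us (1 ms) round-trip time.
bw_bits_per_s=1000000000
rtt_us=1000
bdp_bytes=$(( bw_bits_per_s / 8 * rtt_us / 1000000 ))
echo "BDP: $bdp_bytes bytes"   # 125000 bytes, i.e. ~122 KB
```

With a ~125 KB bandwidth-delay product, a 64 KB send buffer caps a single TCP stream at roughly half of line rate, which is consistent with raising sendspace helping but not sufficing on its own.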
(Just to clarify..)
TCP inflight limiting
Could you try an SMP kernel without IPF support and without using the IPF module?
Could you confirm that your SMP kernel does not crash when you do not use
IPF?
Interesting that the box has survived almost two days now, while it was always
crashing after at least 8 hours. Anyway, I have compiled a
Blaz Zupan wrote on 2005-07-12 13:17:
Interesting that the box has survived almost two days now, while it was always
crashing after at least 8 hours. Anyway, I have compiled a new kernel without
ipfilter, I have used pf instead (the configuration changes from ipfilter to
pf were mostly
Hello Scott,
I ran into the same problem with SpamAssassin and Perl v5.8.7.
After a backport to 5.8.6_2 everything runs fine.
I haven't found a really good reason for this behavior, but I have found
many hints that say this is a general I/O problem with Perl v5.8.7.
So I have made a
Hmm,
Recently I've also been seeing lower TCP throughput than I'd expect
on FreeBSD 5.4R machines. I've got six 5.4R boxes with dual Gigabit em
interfaces. netperf gives me:
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size
On Tue, Jul 12, 2005 at 02:28:33PM +0200, Michael Schuh wrote:
Scott wrote
The other problem that will possibly affect more people is that
SpamAssassin stopped working properly. This isn't a server, simply a
workstation, using getmail
In the last episode (Jul 12), David Malone said:
did the trick! now can someone remind me what inflight does? and
could someone explain why increasing sendspace alone did not do the
trick? (i had it at 64k, which got things better, but not
sufficient).
TCP inflight limiting is supposed
I have one IDE and two SATA drives in my system. With
FreeBSD 5.4 the system would not boot from the SATA
drive so I use the IDE as the primary then mount
the SATA drives in fstab. I tried upgrading to RELENG_6
and with the new kernel one of the SATA drives errors
out and so can't be mounted.
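The workaround described (boot from the IDE disk, mount the SATA drives via fstab) would look roughly like the fragment below; the device and mount-point names are placeholders, not the poster's actual configuration:

```
# /etc/fstab fragment (illustrative device names):
# Device      Mountpoint  FStype  Options  Dump  Pass
/dev/ad4s1d   /data1      ufs     rw       2     2
/dev/ad6s1d   /data2      ufs     rw       2     2
```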
I'm having a fiddle with RELENG_6 and while setting up a RAID1 system
disk I noticed that atacontrol now lets you create a RAID5 device.
I gave it a whirl and it seemed to work - I have a device I can use.
But is this working properly? I don't have a hardware raid card, just a
plain old SATA
Yes, there is absolutely no difference. Disabled HTT in the BIOS and in
FreeBSD, the box still crashes.
Matt again :)
So far, a 13-day uptime after switching from IPF to PF. If that's not the
problem, I hope I find it soon, considering this is a production server ...
but it seems to be more
Are you using the correct cables? I had a problem when I changed system
cases and used the wrong cable when I put the drives back in (IDE cable
instead of a SATA cable). I now have FreeBSD 5.4-p3 booting up fine and
without error. I think this is a common problem, where people forget
about the
Jayton Garnett [EMAIL PROTECTED] writes:
Are you using the correct cables? I had a problem when I changed system
cases and used the wrong cable when I put the drives back in (IDE cable
instead of a SATA cable). I now have FreeBSD 5.4-p3 booting up fine and
You mean, an old IDE cable vs. a
Michael C. Shultz wrote:
I have one IDE and two SATA drives in my system. With
FreeBSD 5.4 the system would not boot from the SATA
drive so I use the IDE as the primary then mount
the SATA drives in fstab. I tried upgrading to RELENG_6
and with the new kernel one of the SATA drives errors
Yeah, that's what I meant; this mcsd :-\ course must be turning me into
yet another zombie
Matthias Buelow wrote:
Jayton Garnett [EMAIL PROTECTED] writes:
Are you using the correct cables? I had a problem when I changed system
cases and used the wrong cable when I put the drives back in