Chun Wong wrote:

> guys,
> 2.2MBs, 2.2 megabytes per second (120)
> 7MBs, 7 megabytes per second (athlon)
Yes.  These are, respectively:

17.6Mbps and 56Mbps (your values * 8 to translate to 'megabits per second')

> that's from smart ftp, transferring >3GB files.

> On the fw traffic graph, I see 30 megabits per second on the 120 (>95% cpu)
> and 75 megabits peak on the athlon platform (45% cpu)
Note how the graph indicates some 33% to 70% more traffic than your application reports (75 vs. 56 Mbps on the Athlon, 30 vs. 17.6 Mbps on the 120). Something is 'wrong' (it could be as simple as mis-reading the graph, but probably not).

Are you measuring ftp 'get' or ftp 'put'? If you're using 'put', please stop and use 'get', or move to an application like 'iperf' to measure throughput.
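If you do reach for iperf, a minimal run looks something like this (classic iperf2 flags; the address is just a placeholder for whatever host sits on the far side of the firewall):

  iperf -s                      # on the receiving host
  iperf -c 192.168.1.10 -t 30   # on the sending host: 30-second TCP test

iperf reports what actually arrived at the receiver, so it sidesteps the socket-buffer measurement problem described below.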

Reason: When you use ftp's 'put', the client stops measuring the transfer time immediately after all of the data have been written through the socket interface and the socket has been closed. In practice, the socket writes only transfer data from the application to the socket buffers maintained in the kernel; the TCP protocol is then responsible for transmitting the data from the socket buffers to the remote machine. The acknowledgments, packet losses, and retransmissions handled by TCP are hidden from the application.

As a result, the interval between the first write request and the last socket write and close only measures the time to move all of the application data into the socket buffer. It does not measure the total time that elapses before the last byte of data actually reaches the remote machine. In the extreme case, a socket buffer full of data (normally 8 KBytes, but possibly much larger) could still be awaiting transmission by TCP.
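A minimal sketch of that 'put'-side pitfall in C (illustrative only: error handling is trimmed, and 'sock' is assumed to be an already-connected TCP socket):

  #include <sys/time.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Times the writes, not the delivery. */
  double timed_send(int sock, const char *buf, size_t len, size_t chunk)
  {
      struct timeval t0, t1;
      size_t off = 0;

      gettimeofday(&t0, NULL);
      while (off < len) {
          size_t n = (len - off < chunk) ? len - off : chunk;
          /* write() just copies into the kernel socket buffer;
             TCP has not delivered anything to the peer yet */
          ssize_t w = write(sock, buf + off, n);
          if (w <= 0)
              break;
          off += (size_t)w;
      }
      close(sock);                /* may return with data still in flight */
      gettimeofday(&t1, NULL);    /* the 'put'-style clock stops here */

      return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
  }

The returned figure can look impressively fast while the last socket buffer's worth of data is still working its way across the wire.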

In the case of an ftp 'get', the clock only stops when the last byte of the file has been read.

Normally, the socket implementation guarantees that the socket buffer is empty when the socket close call succeeds, but only if SO_LINGER was set on the socket via setsockopt(), or if you call shutdown() on the socket first (but now I'm getting into the weeds). If SO_LINGER is not set and shutdown() is not called, then you can have data in flight (and in the kernel buffers) that is not yet ACKed or delivered.
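A minimal sketch of that option (the 10-second timeout is arbitrary, and the exact close() semantics under SO_LINGER vary a little between stacks):

  #include <sys/socket.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Make close() block until unsent data is ACKed, or 10 s elapse. */
  int linger_close(int sock)
  {
      struct linger lg;
      lg.l_onoff  = 1;    /* enable lingering on close */
      lg.l_linger = 10;   /* wait at most 10 seconds   */

      if (setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
          return -1;

      /* Alternative: shutdown(sock, SHUT_WR), then read() until EOF,
         which proves the peer saw all of the data. */
      return close(sock);
  }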

> to be honest, I was expecting a lot more.

> I am using an 8 port SMC gigabit switch that supports jumbo frames - how do
> I increase the ethernet frame size on the firewall interface?
Are you sure your cards support it?
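Assuming they do: on FreeBSD (which is what's under pfSense), the frame size is the interface MTU, so for an Intel card on the em driver it would be something along the lines of:

  ifconfig em0 mtu 9000

(em0 is a guess at your interface name.) Every device in the path (both hosts, the switch, and the firewall interfaces) has to agree on the larger MTU, or you trade your throughput problem for a fragmentation problem.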

> I'll see if I can rig up an extra long crossover cable to bypass the switch.

> If I am supposed to see 400 megabits, then I presume this is split between
> the incoming nic and outgoing nic, so 200 megabits per second?
No, I'm sure Bill was referring to a single flow (in one direction, modulo the ACKs and other protocol overhead), measured in terms of data delivery, not the marketing speak of multiplying by two because your card supports full duplex.
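(Back-of-the-envelope on where a number like that comes from: a 32bit/33Mhz PCI bus moves at most 4 bytes x 33M cycles/s = 132 MBytes/s, roughly 1056 megabits theoretical. The bus is shared, every forwarded packet crosses it twice (in on one NIC, out the other), and arbitration overhead eats more still, which is how you land in Bill's 300-500 megabit practical range.)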

> Any ideas where I should be checking?
full-duplex vs. half-duplex mismatch
MTU mismatch
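On the firewall itself, both of those are visible in the interface status, e.g. (again assuming em0):

  ifconfig em0

Check the 'media:' line for the negotiated speed/duplex (you want something like 1000baseTX <full-duplex> on both ends, not one side stuck at half) and compare the reported 'mtu' against what the switch and the hosts think.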

> Thanks!

--- Original Message ---
From: "Bill Marquette" <[EMAIL PROTECTED]>
To: discussion@pfsense.com
Subject: Re: [pfSense-discussion] throughput - cpu, bus
Date: Tue, 14 Mar 2006 13:41:15 -0600

On 3/14/06, Jim Thompson <[EMAIL PROTECTED]> wrote:
Chun Wong wrote:

Hi,
I have two fw platforms, m0n0wall 1.21 running on a Nokia 120 and
pfSense 1.0beta2 running on an AMD athlon 900.

I can get 2.2MBs on the 120 platform, at >96% cpu usage. On the athlon,
32bit, 33Mhz pci, I can get 7MBs using Intel PRO 1000MT 64 bit PCI cards.
My question is what speed/type cpu do I need to use to improve on this
with a PCI-X bus? (64bit, 33Mhz or maybe 66Mhz)

I would like to get 15-20MBs, but without spending too much. I am
looking at a 2nd hand Supermicro PGA370 dual Pentium mb, with PCI-X bus.

All my NICs are Intel PRO/1000 MT, 64bit.

Thanks


Something else is wrong.  Either of these platforms should be able to
forward at something close to 100Mbps, if not higher.
Agreed...unless those MT1000's are plugged into 100Mbit ports (but I
guess that would fall under the "something else is wrong") :)  Then
70Mbit wouldn't be entirely out of line (depending on the test
software).  500Mbit throughput is about all you'll practically get on
a 33MHz 32bit slot and in practice, it'll be somewhat slower (closer
to 300-400Mbit).  A 64bit/66MHz slot will make that a much higher
ceiling.

--Bill


