That version of VMware is prehistoric, and probably only emulates a 10 Mbit AMD PCnet NIC.
Try testing from the host OS on your source machine. The best method for testing bulk throughput is iperf; the Avalanche tester is more real-world. Minimal iperf invocations are sketched at the foot of this message.

-----Original Message-----
From: Chun Wong [mailto:[EMAIL PROTECTED]
Sent: Thursday, 16 March 2006 12:47 a.m.
To: [email protected]
Subject: RE: [pfSense-discussion] throughput - cpu, bus

Chipset? I'm not sure tbh; it's an Abit board I purchased 4-5 years ago.

The source is an HP NetServer LH3000 (2 x P3 866 MHz, 2.25 GB RAM) with dual 64-bit PCI buses and 3 x Intel PRO/1000 MT gigabit NICs (64-bit). The disk subsystem is 2 x MegaRAID SCSI/SATA controllers with SCSI-3 and SATA RAID 5 arrays, so I doubt the bottleneck is there. It is, however, running VMware 2.5.1 at the moment, with Windows XP SP2 as the guest OS. I guess I need to see what happens when I run straight Linux on the box.

The firewall is currently on an Abit motherboard; I won't know which chipset until I down the firewall and take a look. It also has Intel PRO/1000 MT gigabit NICs (64-bit), although only 32 bits of the bus are being used.

The destination machine is an nForce2 motherboard with an Athlon XP 1700+, 1 GB RAM, and an ATA133 Seagate 7200 rpm drive, running XP SP2. Its NIC is a 3Com 996B.

Now, somewhere in there is the culprit slowing things down. I have been measuring with FTP gets of large files: is there a better method?

Thanks

-----Original Message-----
From: Greg Hennessy [mailto:[EMAIL PROTECTED]
Sent: 15 March 2006 10:45
To: [email protected]
Subject: RE: [pfSense-discussion] throughput - cpu, bus

> guys,
> 2.2MBs, 2.2 megabytes per second (120)
> 7MBs, 7 megabytes per second (athlon)

Are the Athlon figures on a VIA chipset motherboard? Some of the early VIA Athlon chipsets had pretty lousy PCI performance.

You could try tweaking the PCI latency timers in the BIOS to give the em card more time on the bus. This may improve throughput slightly.

On a bge plugged into an nForce2 board, I can iperf ~800 Mbit/s read / ~600 Mbit/s write through it.

Greg
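
P.S. For the bulk test, a minimal iperf run looks something like the following. The address 192.168.1.10 and the 30-second duration are placeholders; the flags are from the classic iperf 2 command line:

    # On the receiving box, start a TCP server
    iperf -s

    # On the sending box, run a 30-second bulk TCP transfer to it
    iperf -c 192.168.1.10 -t 30

    # If a single stream won't fill a gigabit pipe, add parallel streams
    iperf -c 192.168.1.10 -t 30 -P 4

iperf reports throughput in Mbit/s by default, which takes FTP and the disks on either end out of the measurement entirely.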
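
P.P.S. On the PCI latency timer: if rebooting the firewall into the BIOS is a pain, FreeBSD's pciconf can at least show you the current value first. A rough sketch; the pci0:1:0:0 selector below is a placeholder (use the one pciconf -l prints for your em interface, and note the selector format varies between FreeBSD releases):

    # Find the NIC's selector (something like em0@pci0:1:0:0)
    pciconf -l

    # Read the config dword at offset 0x0c; the latency timer is
    # bits 8-15 (the second byte) of the value returned
    pciconf -r pci0:1:0:0 0x0c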
