Well, my pps requirement is 500 kpps. We expect to upgrade our WAN link to 400 Mbps; of the other two attached networks, one is fixed at 100 Mbps and the other will have a 1 Gbps link. We have three network cards: one PCI-X and two PCI. Right now our main problem is that we receive many short packets, especially DNS requests and some other UDP packets, which drives up CPU load on our network equipment. Many, many short packets is our main concern.
Best regards, Aliet
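(For context, a back-of-the-envelope sketch of why short packets dominate CPU load: at a fixed link speed, the smaller the packet, the higher the packet rate the firewall must sustain. The frame-overhead figures below are standard Ethernet assumptions, and the ~1 KB-per-state figure is Rainer's estimate from later in the thread, not a measured value.)

```python
# Rough packets-per-second implied by link speed for small packets
# (e.g. short DNS/UDP queries), plus state-table RAM at ~1 KB per state.

def max_pps(link_mbps, packet_bytes):
    """Theoretical pps if every packet is `packet_bytes` long.
    Assumes each Ethernet frame also carries 20 bytes of preamble +
    inter-frame gap and a 4-byte FCS on the wire."""
    wire_bytes = packet_bytes + 20 + 4
    return int(link_mbps * 1_000_000 / 8 / wire_bytes)

# Worst case: 64-byte DNS queries saturating a 400 Mbit/s WAN link.
small_packet_rate = max_pps(400, 64)

# RAM for 500k concurrent states at ~1 KB each (estimate from the thread).
state_ram_mb = 500_000 * 1024 / 2**20

print(small_packet_rate, "pps")
print(state_ram_mb, "MB")
```

So a 400 Mbps link full of minimum-size packets already exceeds the stated 500 kpps requirement, which is why the small-packet case, not raw bandwidth, is the sizing constraint here.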
2008/8/14 Rainer Duffner <[EMAIL PROTECTED]>:
>
> On 15.08.2008 at 00:39, Aliet Santiesteban Sifontes wrote:
>
>> Hi all, I'm currently migrating an existing Sun Netra T1 100 box
>> running Solaris 8 and Checkpoint Firewall-1, which has run for 9 years,
>> to pfSense on an HP ProLiant ML350 G4 server with 2 GB RAM, a dual-core
>> Xeon at 3 GHz (800 MHz bus), and three attached networks, one at 1 Gbps
>> and two at 100 Mbps. It would help me first to know if somebody has
>> tested pfSense under high loads. Right now the Sun box is running 150000
>> concurrent connections and the hardware is at its limit; we are forced
>> to switch since Checkpoint crashes on new EDNS packets, and this
>> platform can no longer support these loads. So my question is the best
>> way to tune pfSense on the new hardware, I mean the ProLiant, to allow
>> 300000 or maybe 500000 concurrent connections. This setup also has many
>> firewall rules. I have read some docs on tuning FreeBSD at the pfSense
>> site, but I have also seen some posts about a need to rebuild the
>> kernel to allow this. It would be very nice if somebody could give me
>> some tips or pointers about this.
>> Thank you all...
>> Best regards, Aliet
>
> I think 500k connections are quite possible (from anecdotal evidence
> posted to this list). They will cost about 500 MB RAM.
> It's also a question of how many packets per second you want to route.
>
> Also, from what I remember, the pfSense kernel already contains every
> possible optimization (and even some that are not available in stock
> FreeBSD 6/7).
>
> Also of relevance:
> What kind of bus do the NICs hang on: PCI or PCI-X? Do they all hang on
> the same bus? (Some better motherboards have multiple independent
> PCI(-X/e) buses, which vastly improves the total real throughput.)
> Other than that, I'd say your setup is pretty decent - though late
> Opterons or Harpertown Xeons with 12 MB cache would be even better.
> But one can't have everything ;-)
>
> Rainer
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
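(For anyone following this thread later: the state-table ceiling discussed above is a pf limit, raised in the ruleset rather than by rebuilding the kernel; on pfSense the same knob is exposed in the webGUI as the firewall "maximum states" setting. A minimal sketch of the raw pf.conf form, with an illustrative value, not a tested recommendation for this hardware:)

```
# pf.conf fragment (illustrative; pf's stock default is far lower).
# Raises the state-table ceiling toward the 300k-500k connections
# discussed in this thread; budget roughly 1 KB of RAM per state.
set limit states 500000
```

For the small-packet/CPU-load side, the usual FreeBSD-level knob to check is the mbuf cluster pool (`kern.ipc.nmbclusters` in /boot/loader.conf), since high packet rates can exhaust it before bandwidth is a problem; the right value depends on RAM and NIC drivers, so treat any number you see posted as a starting point only.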
