[pfSense-discussion] PPTP on every ifc?

2006-03-14 Thread John Wells
Guys,

I posted this earlier to the help list, but think it's more fitting for
discussion. My apologies up front for the double post.

--
I've been working through my first pfsense install, and have been
extremely impressed with all design decisions...until this morning.

My configuration is pretty easy:

- LAN
- WAN
- DMZ
- DMZ for wireless with PPTP VPN into LAN

Should be easy enough to set up...I've been doing it with Linux firewalls for
years. However, whenever I enable the PPTP server on pfSense, the firewall
installs rules to allow PPTP traffic on ALL interfaces. So, if I want to
use pfSense's VPN capabilities to protect my wireless network, I have to
also expose my VPN to the world at large...NOT desired by any means.

I posted the question and received this reply from Holger Bauer:

To answer your question: By enabling the PPTP-Server pfSense creates
rules behind the scenes for all available interfaces to allow pptp
traffic. The user defined rules are created below these system internal
rules. There is no way to block this traffic in pfSense 1.0.

I can't fathom why one wouldn't want the option to restrict all VPN
initiation to a particular interface or set of interfaces.

So, two questions.

1. Is this a conscious design decision, or just a feature waiting to
happen? If it is indeed a feature you'd be interested in, I'm willing to
roll up my sleeves if I can block out some time.

2. Is there an easy way to get this behavior in the meantime? Can I hack
the hidden rules to restrict access to only my wireless interface?
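
(For illustration only: the policy I'm after would boil down to a couple of
pf rules scoped to the wireless interface. The interface name and address
below are made up, I don't know whether pfSense 1.0 gives these anywhere to
live, and Holger says the hidden rules can't be blocked, so treat this
purely as a sketch of the intent.)

   # hypothetical: PPTP control channel (TCP 1723) plus GRE, wireless segment only
   echo "pass in on wi0 proto tcp from any to 192.168.2.1 port 1723 keep state" > /tmp/pptp.rules
   echo "pass in on wi0 proto gre from any to 192.168.2.1 keep state" >> /tmp/pptp.rules
   pfctl -a pptp -f /tmp/pptp.rules    # load into a pf anchor for testing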

Thanks very much for any insight you can provide!

John



[pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Chun Wong
Hi,
I have two fw platforms, m0n0wall 1.21 running on a Nokia IP120 and pfSense 1.0beta2
running on an AMD Athlon 900.

I can get 2.2 MB/s on the IP120 platform, at 96% CPU usage. On the Athlon
(32-bit, 33MHz PCI), I can get 7 MB/s using Intel PRO/1000 MT 64-bit PCI cards.

My question is: what speed/type of CPU do I need to improve on this with
a PCI-X bus (64-bit, 33MHz or maybe 66MHz)?

I would like to get 15-20 MB/s, but without spending too much. I am looking at
a second-hand Supermicro dual-Pentium Socket 370 motherboard with a PCI-X bus.

All my NICs are Intel PRO/1000 MT, 64-bit.

Thanks





Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Jim Thompson

Chun Wong wrote:


[original message trimmed]

Something else is wrong.  Either of these platforms should be able to 
forward at something close to 100Mbps, if not higher.






Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Bill Marquette
On 3/14/06, Jim Thompson [EMAIL PROTECTED] wrote:
 Chun Wong wrote:
 [original message trimmed]
 
 Something else is wrong.  Either of these platforms should be able to
 forward at something close to 100Mbps, if not higher.

Agreed...unless those MT1000s are plugged into 100Mbit ports (but I
guess that would fall under the "something else is wrong" category) :)  Then
70Mbit wouldn't be entirely out of line (depending on the test
software).  500Mbit throughput is about all you'll practically get on
a 33MHz/32-bit slot, and in practice it'll be somewhat slower (closer
to 300-400Mbit).  A 64-bit/66MHz slot will give you a much higher
ceiling.

--Bill


Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Tommaso Di Donato
Hmm, I'm not sure I understood what you are measuring...
I'm doing some tests on an EPIA PD6000 (so, I think, much lower-end than your hardware):
http://www.via.com.tw/en/products/mainboards/mini_itx/epia_pd/
I can reach an average throughput of about 92-97 MB/s with unencrypted
traffic. I am testing it with iperf. I'm about to test the same config, but
using two boxes and an IPsec VPN between them (I want to test the accelerated
crypto hardware).
Has anybody already tested that?
Tom
On 3/14/06, Chun Wong [EMAIL PROTECTED] wrote:
[original message trimmed]


Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Rainer Duffner


On 14.03.2006 at 20:52, Greg Hennessy wrote:




I'd love to get the chance to throw an Avalanche at a decent system running
PF to see what it really can stand up to.



Andre Oppermann is working on that:
http://people.freebsd.org/~andre/
But the results won't show up until 7.0 is released, which looks to
be sometime in 2007:
http://www.freebsd.org/releng/index.html


Rainer





Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Jim Thompson

Chun Wong wrote:


guys,
2.2MBs, 2.2 megabytes per second (120)
7MBs, 7 megabytes per second (athlon)
 


Yes.  These are, respectively:

17.6Mbps and 56Mbps (your values * 8 to translate to 'megabits per second')


that's from smart ftp, transferring 3GB-sized files.

On the fw traffic graph, I see 30 megabits per second on the 120 (95% cpu)
and 75 megabits peak on the athlon platform (45% cpu)
 

Note how the graph is indicating some 33% to 70% more traffic than your 
application.  Something is 'wrong' (could be as simple as you 
mis-reading the graph, but probably not.)


Are you measuring ftp 'get' or ftp 'put'?   If you're using 'put', 
please stop and use 'get', or move to an application like 'iperf' to 
measure throughput.


Reason: 

When you use ftp's 'put', the client stops measuring the transfer time 
immediately after all of the data are written through the socket 
interface, and the socket is closed. In practice, the socket writes only 
transfer data from the application to the socket buffers maintained in 
the kernel. The TCP protocol is then responsible for transmitting the 
data from the socket buffers to the remote machine. Details of 
acknowledgments, packet losses, and retransmissions that are implemented 
by TCP are hidden from the application. As a result, the time interval 
between the first write request and the last socket write and socket 
close only indicates the time to transfer all of the application data to 
the socket buffer. This does not measure the total time that elapses 
before the last byte of data actually reaches the remote machine. In the 
extreme case, a socket buffer full of data, (normally 8 KBytes, but 
could be much larger), could still be awaiting transmission by TCP.


In the case of an ftp 'get', the clock only stops when the last byte of 
the file is read.


Normally, the socket implementation guarantees that the socket buffer is 
empty when the socket close call succeeds, but only if SO_LINGER was set 
on the socket via setsockopt() (or if you call shutdown() on the socket, 
but now I'm getting into the weeds).  If SO_LINGER is not set (and 
shutdown() is not called), then you can have data in flight (and in the 
kernel buffers) that is not yet ACKed or delivered.
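
(If it helps, a minimal iperf run looks something like this - the host
address and window-size choice below are just examples, and you'd test end
to end through the firewall rather than to the firewall itself:)

   # on the receiving host
   iperf -s
   # on the sending host: 30-second TCP test with a larger socket buffer
   iperf -c 192.168.1.50 -t 30 -w 256K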



to be honest I was expecting a lot more.

I am using an 8 port SMC gigabit switch that supports jumbo frames - how do
I increase the ethernet frame size on the firewall interface ?
 


Are you sure your cards support it?


I'll see if I can rig up an extra long crossover cable to bypass the switch.

If I am supposed to see 400 megabits, then I presume this is split between
the incoming nic and outgoing nic, so 200 megabits per second ??
 

No, I'm sure Bill was referring to a single flow (in one direction, 
modulo the ACKs and other protocol overhead) measured in terms of data 
delivery, not the marketing speak of multiplying x2 because your card 
supports full-duplex.



Any ideas where I should be checking ?
 


full-duplex vs. half-duplex mismatch
MTU mismatch


Thanks !

 


--- Original message ---
From: Bill Marquette [EMAIL PROTECTED]
To: discussion@pfsense.com
Subject: Re: [pfSense-discussion] throughput - cpu, bus
Date: Tue, 14 Mar 2006 13:41:15 -0600

[quoted text trimmed]





Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Bill Marquette
On 3/14/06, Chun Wong [EMAIL PROTECTED] wrote:
 On the fw traffic graph, I see 30 megabits per second on the 120 (95% cpu)
 and 75 megabits peak on the athlon platform (45% cpu).

This certainly suggests that CPU on the athlon is not your limiting factor.

 to be honest I was expecting a lot more.

 I am using an 8 port SMC gigabit switch that supports jumbo frames - how do
 I increase the ethernet frame size on the firewall interface ?

I believe there is a hidden option to change MTU - I'll leave it to
someone else to provide that option.
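
(From a plain shell, the stock FreeBSD way would be roughly the following -
the interface name is just an example, every device in the path has to use
the same frame size, and you'd want to confirm your cards actually accept it:)

   ifconfig em0 mtu 9000
   ifconfig em0 | grep mtu    # confirm the new MTU took effect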

 I'll see if I can rig up an extra long crossover cable to bypass the switch.

 If I am supposed to see 400 megabits, then I presume this is split between
 the incoming nic and outgoing nic, so 200 megabits per second ??

No, that's 400Mbit throughput :)  A 32-bit/33MHz bus is roughly around
1Gbit transfer rate, so 500Mbit would be the absolute max.
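
(Back-of-the-envelope, ignoring PCI protocol overhead and remembering that a
forwarded packet crosses the shared bus twice, once coming in and once going
out:)

   # bus width (bits) x clock (MHz) ~= theoretical Mbit/s
   echo $((32 * 33))    # 1056 Mbit/s raw, so ~500 Mbit of forwarded traffic
   echo $((64 * 66))    # 4224 Mbit/s raw for a 64-bit/66MHz slot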

 Any ideas where I should be checking ?

Run netstat -ni from the shell and see if you're taking any interface
errors on any of the machines involved in the test.
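
(Something along these lines on each box - the interface name is only an
example:)

   netstat -ni                  # non-zero Ierrs/Oerrs/Coll point at cabling or duplex trouble
   ifconfig em0 | grep media    # check both ends negotiated 1000baseTX <full-duplex>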

--Bill


Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Bill Marquette
On 3/14/06, Rainer Duffner [EMAIL PROTECTED] wrote:

 On 14.03.2006 at 20:52, Greg Hennessy wrote:

 
 
  I'd love to get the chance to throw an Avalanche at a decent system
  running PF to see what it really can stand up to.

Quite a bit.  I ran out of Avalanche/Reflector capacity at 750Mbit,
but the OpenBSD box I pointed the firehose at was only hitting about
30% CPU load at the time.  I expect I'd see better performance out of
FreeBSD (w/ or w/out Andre's work).  I plan on running the same tests
against pfSense 1.0 when released.

--Bill


[pfSense-discussion] Embedded hardware

2006-03-14 Thread Gil Freund
Hi,

I had a look at the Checkpoint [EMAIL PROTECTED] device and I am looking for a similar
platform for pfSense. I currently use WRAPs, but I am looking for something with
more interfaces (5 or 6, of which 4 are a LAN switch) and one or (preferably)
two MiniPCI slots.
Soekris has a similar model, but the PCI quad-port lacks MDI/X auto-sensing.
I can add a small 5-port switch, but this would require an additional power
outlet and would not look nice.

This is to avoid getting the Checkpoint units, which are being considered as a VPN
gateway for executives at home.

Thanks

Gil



Re: [pfSense-discussion] Embedded hardware

2006-03-14 Thread Jim Thompson

Gil Freund wrote:


[original message trimmed]


We're considering carrying this:
http://www.fabiatech.com/products/fx5620.htm

But it only sort-of meets your requirements.  It has 6 interfaces (5 x
10/100, 1 x 10/100/1000) and a single miniPCI socket.  I don't know if
the interfaces support auto MDI/X or not.

Linitx.com carries them, and donated one to Bill, Scott and Chris.  It
works with pfSense out of the box.



Soekris has a similar model but the PCI quadport lacks MDI/X auto sensing.
 


And it's not switch-based, either.

Jim



RE: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Greg Hennessy
 
 
 Quite a bit.  I ran out of Avalanche/Reflector capacity at 
 750Mbit, but the OpenBSD box I pointed the firehose at, was 
 only hitting about 30% CPU load at the time.  

Interesting, what NICs were in the box?

 I expect I'd 
 see better performance out of FreeBSD (w/ or w/out Andre's 
 work).  I plan on running the same tests against pfSense 1.0 
 when released.

Looking forward to it. 

Putting in a DL-385 for the same client, on 6.x/PF with 4 * em to firewall
off a large network backup environment.
I should have some pretty symon pictures soon.


Greg





RE: [pfSense-discussion] Embedded hardware

2006-03-14 Thread Holger Bauer
Sorry, the link is in German but you should get the facts: 
http://www.level-one.de/products3.php?sklop=14id=520056
It's a NIC with an integrated 5-port switch. If you use a Soekris 4801 you could 
add such a card to the PCI slot. I use a similar card with one of my routers ( 
http://routerdesign.com/routers/36/pic02.jpg , 
http://routerdesign.com/routers/36/pic04.jpg ).

Holger

 -Original Message-
 From: news [mailto:[EMAIL PROTECTED] Behalf Of Gil Freund
 Sent: Tuesday, March 14, 2006 9:31 PM
 To: discussion@pfsense.com
 Subject: [pfSense-discussion] Embedded hardware
 
 
 [original message trimmed]





RE: [pfSense-discussion] Embedded hardware

2006-03-14 Thread Holger Bauer
If you bridge NICs and create a switch this way, your throughput will be limited 
by the bus and the CPU. If you use a switch card like I suggested, the switch 
will take care of the network traffic between these ports. I get 90 Mbit/s with 
this card between the switch ports even though the firewall itself is only driven by 
a Pentium 233 MMX. Of course, traffic going to other interfaces will be limited 
by CPU speed and bus capacity. The card that I suggested has 5 auto-uplink 
ports. So if a Soekris 4801 is fast enough for your needs and you only want to 
have the switch integrated, this is an option to consider.
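
(For comparison, software bridging on FreeBSD 6.x looks roughly like the lines
below - the interface names are only examples, and pfSense normally sets a
bridge up from its own GUI rather than by hand:)

   # create an if_bridge(4) and add the member NICs (hypothetical sis0-sis2)
   ifconfig bridge0 create
   ifconfig bridge0 addm sis0 addm sis1 addm sis2 up
   # bring the member interfaces up as well
   ifconfig sis0 up
   ifconfig sis1 up
   ifconfig sis2 up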

Holger

 -Original Message-
 From: Jim Thompson [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, March 14, 2006 10:57 PM
 To: discussion@pfsense.com
 Subject: Re: [pfSense-discussion] Embedded hardware
 
 
 Holger Bauer wrote:
 
 [earlier message trimmed]
 
 OK, my error.
 
 Here is something similar (if not identical, I can't tell if it has the
 Kendin chip on it or not): http://www.outletpc.com/c3442.html
 
 But you could still potentially bridge the 5 (or 6) individual
 interfaces in pfSense, and get something fairly 'switch-like', too.  No?
 
 Also, using the card you describe, the forwarding rate is going to be
 limited when the packets have to pass over the PCI bus.
 
 This is more interesting (especially in light of the recent discussions):
 http://www.dssnetworks.com/v3/gigabit_pcie_6468.asp
 





RE: [pfSense-discussion] Embedded hardware

2006-03-14 Thread Holger Bauer
Everything depends on needs...and probably the price. The switch card I have in 
my router was only 30 euros (not eBay or something, the regular price). We can all 
only give suggestions. I didn't say your option is bad either, but I guess it's more 
expensive.

Holger

 -Original Message-
 From: Jim Thompson [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, March 14, 2006 11:17 PM
 To: discussion@pfsense.com
 Subject: Re: [pfSense-discussion] Embedded hardware
 
 
 
 Understood, but a 1GHz C3 (on the box I showed) is a bit more CPU than
 the 233/266MHz Geode on the Soekris and WRAP boards.
 
 You'll probably get something approaching similar performance with
 either solution.  I don't know if you've got the software written to
 control VLAN framing, packet filtering, etc. with the 4-port switch card
 or not.  If not, then pfSense is going to see this as a single Ethernet
 port, and all the traffic that stays on the switch will be invisible
 to pfSense.
 
 By bridging multiple NICs together, you can gain visibility (and
 control) of all the traffic that passes through the box.
 
 I'm not saying that the 4-port switch card is bad, or that bridging
 multiple NICs together is better.  Each application is different.
 
 I think a variant of pfSense that supported the 8-port GigE switch card
 that I pointed to would be really cool.
 
 [earlier quoted messages trimmed]





Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Bill Marquette
On 3/14/06, Greg Hennessy [EMAIL PROTECTED] wrote:

 
  Quite a bit.  I ran out of Avalanche/Reflector capacity at
  750Mbit, but the OpenBSD box I pointed the firehose at, was
  only hitting about 30% CPU load at the time.

 Interesting, what nics were in the box ?

HP DL380 G3 w/ Broadcom and Intel NICs.  I also ran an iperf test, but
ran out of physical boxes to generate and receive the load at around
900Mbit (I did determine the maximum xmit/receive rate of a Sun V120
running Solaris 8, though ;) ).  During the iperf tests, the CPU load
was closer to 25%, but iperf generates larger packets, so that's no
huge surprise, and it's why Avalanche is a much closer-to-real-life test.

I've got some interestingly crappy test results while working on the
shaper before Beta 2 on a 1GHz VIA CPU here:
http://www.pfsense.com/~billm/spirent/1/
And I do mean crappy.  I wasn't trying too hard to get a good working
test, just tossing traffic to see what's blowing up and why.

  I expect I'd
  see better performance out of FreeBSD (w/ or w/out Andre's
  work).  I plan on running the same tests against pfSense 1.0
  when released.

 Looking forward to it.

 Putting in a DL-385 for the same client, on 6.x/PF with 4 * em to firewall
 off a large network backup environment.
 I should have some pretty symon pictures soon.

Very interested in results from a high-throughput environment.  I'm
probably a good year or so away from deploying pfSense anywhere near
our high-throughput (high-dollar) production environment, but I'm
interested in others' results in the meantime.  For now, that
environment is staying on OpenBSD (pf's native OS).  We're a large
company and pfSense doesn't meet our internal audit requirements just
yet - that's on my todo list (multi-user, change logs, etc.).

--Bill


Re: [pfSense-discussion] throughput - cpu, bus

2006-03-14 Thread Chris Buechler

Greg Hennessy wrote:
 
That's ~20 megabits/sec, not bad for an IP-120 given its horsepower
  


Not for m0n0wall/FreeBSD 4.x.  That box should be about the same speed 
as a Soekris 4801 or WRAP, either of which will hit ~40-45 Mbps.  If 
this were pfSense/FreeBSD 6.x, I would say ~20 Mbps is low but 
acceptable.  Neither FreeBSD 5 nor 6 will even boot on the Nokia IP1xx's, 
though (kernel panics).

Though I have heard one person complain of poor performance (~25 Mbps 
tops, IIRC) on an IP110 with m0n0wall, so there may be something odd with 
that hardware that makes it slower than it appears it should be given 
the specs.