With those hardware specs I imagine you could handle several entire
class A networks, all running at gigabit speeds.

Here we have a P3-450 with 128MB RAM and a 10GB HDD, and we can handle
several hundred machines, all on 100Mbit. Even with the NIC's bandwidth
maxed out we only hit about 7-10% CPU usage, and pf is really
memory-light unless you have 1) lots of states or 2) a huge rule list,
and even then it's not bad.
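
If the state table ever does become a memory concern, pf's limits can be
raised in pf.conf. A minimal sketch, with purely illustrative numbers
(not tuned recommendations):

```
# raise the ceilings on the state table and fragment cache;
# the exact values should be sized to your traffic, not copied blindly
set limit states 20000
set limit frags  10000
```

Current usage against those limits can be checked with `pfctl -s info`
and `pfctl -s memory`.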

--David Chubb



> -----Original Message-----
> From: Ganbaa [mailto:[EMAIL PROTECTED]
> Sent: Thursday, June 26, 2003 1:01 AM
> To: Trevor Talbot; [EMAIL PROTECTED]
> Subject: Re: Limit Bandwidth
> 
> 
> Hi Trevor Talbot,
> 
> Thank you for the quick response. I want to use pf's queueing as a
> bandwidth manager on our network. The purpose is to limit bandwidth
> for each host and subnet on the network. So what kind of equipment do
> you recommend to us? Network card, CPU, RAM, etc.?
> 
> How about this:
> CPU: P-IV 1.5GHz or higher
> RAM: 512MB
> NIC: Intel Pro/100 S Server Adapter, Intel Pro/1000 MT Quad Port
> Server Adapter, etc.
> 
> Awaiting soonest reply
> 
> Thanks
> 
> Ganbaa
> 
> ----- Original Message -----
> From: "Trevor Talbot" <[EMAIL PROTECTED]>
> To: "Ganbaa" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> Sent: Thursday, June 26, 2003 5:11 AM
> Subject: Re: Limit Bandwidth
> 
> 
> > [ Dual response, Ganbaa sent me details in private. ]
> >
> > On Wednesday, Jun 25, 2003, at 02:21 US/Pacific, Ganbaa wrote:
> >
> > > I'm trying to do this. I installed OpenBSD 3.3 and configured pf
> > > on our LAN. The OpenBSD box has 2 network cards (internal and
> > > external). The purpose is to test limiting bandwidth for each host
> > > on the LAN. The LAN has more than 30 hosts, which I divided into
> > > several groups, for example: developers, marketing, servicing, etc.
> > > The problem is that all traffic is going to only one default queue
> > > (the std queue) on the external interface. I attached my pf.conf
> > > file and debug messages. So could
> >
> > The issue is the use of NAT on the external interface:
> >
> > > nat on $ext_if from $internal_net to any -> ($ext_if)
> >
> > Translation happens before filtering, so by the time the packet
> > gets to
> >
> > > pass out on $ext_if from { <developers> } to any keep state \
> > >     queue developers_ex
> >
> > the source address has already been changed from <developers> to
> > ($ext_if).
> >
> > The setup already uses queues on the internal interface, so tagging
> > for external queues can't happen there.
> >
> > OpenBSD -current has a tagging feature that could be used here, if
> > you want to try that (keeping up with -current is a bit of work,
> > though, and it's hard to justify in a production environment).  It
> > would look like:
> >
> >    pass in on $int_if from <developers> to any keep state \
> >        queue developers_in tag developers
> >    pass out on $ext_if all keep state tagged developers \
> >        queue developers_ex
> >
> > The only other workaround I can think of is broken in 3.3.  It's
> > also fixed in -current, but hasn't been kicked back to -stable yet.
> > The idea is to use the source port range for decisions:
> >
> >    nat on $ext_if inet from <developers> to any -> ($ext_if) \
> >        port 45001:50000
> >    nat on $ext_if inet from <servicing> to any -> ($ext_if) \
> >        port 50001:55000
> >    ...
> >    pass out on $ext_if proto { tcp, udp } from any \
> >        port 45000><50001 to any queue developers_ex
> >
> > Unfortunately it's useless for protocols other than TCP and UDP.
> >
> > Anyone have suggestions I missed?
> >
> >
> >
> 
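
For anyone wanting to try the tag/tagged approach Trevor describes, an
untested sketch of a complete pf.conf might look like the following.
The interface names, addresses, queue names, and bandwidths are all
made up for illustration, and the tag keywords require -current:

```
ext_if = "fxp0"                       # hypothetical interfaces
int_if = "fxp1"
internal_net = "192.168.1.0/24"
table <developers> { 192.168.1.64/26 }

# one queue tree per interface; bandwidths are placeholders
altq on $ext_if cbq bandwidth 2Mb queue { std_ex, developers_ex }
queue std_ex bandwidth 50% cbq(default)
queue developers_ex bandwidth 50%

altq on $int_if cbq bandwidth 2Mb queue { std_in, developers_in }
queue std_in bandwidth 50% cbq(default)
queue developers_in bandwidth 50%

nat on $ext_if from $internal_net to any -> ($ext_if)

# tag on the way in, match the tag on the way out (-current only)
pass in on $int_if from <developers> to any keep state \
    queue developers_in tag developers
pass out on $ext_if all keep state tagged developers \
    queue developers_ex
```

The point of the tag is that it survives the NAT rewrite: the inbound
rule still sees the <developers> source address and marks the state, and
the outbound rule matches on the mark instead of the (already
translated) address.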
