Make sure you've enabled the hardware virtualization assists in the
server's BIOS (Intel VT-x or AMD-V); that makes a definite improvement
in overall performance. I have no particular experience with anything
other than Intel-chipped NICs, but those definitely seem to have
excellent support for all the configurations you'd need to make this
perform well.
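
As a quick sanity check that the assists actually took effect (a rough
sketch - the exact output strings vary by build, and the grep pattern
here is an assumption on my part):

  # on the ESX 4.x service console:
  esxcfg-info | grep "HV Support"

  # on any Linux box on the same hardware - a nonzero count means the
  # CPU at least advertises VT-x (vmx) or AMD-V (svm):
  egrep -c '(vmx|svm)' /proc/cpuinfo

Keep in mind the CPU flag only tells you the silicon supports it; the
BIOS toggle can still have it disabled, which is what the esxcfg-info
check is for.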

Good luck,
Andy

On 10/02/2010 03:53 PM, Chris Buechler wrote:
On Sat, Oct 2, 2010 at 2:44 PM, Adam Thompson <athom...@c3a.ca> wrote:
This started with 4.0; I have upgraded to 4.1 but haven't specifically
tested performance since.  Routing from one VLAN to another entirely
inside VMware is still slow, however.  AFAIK this is somehow related to
interrupt handling and/or mitigation.  The bad news is that since
upgrading to 4.1, the pfSense guest occasionally loses ALL network
interrupts for about 15 minutes at a time - this happens at least once
or twice a week.  It starts slowly - performance is merely degraded at
first - then drops to nothing, then slowly returns to normal; the whole
event takes ~15 minutes.
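
For anyone wanting to watch for the same thing, the simplest check I
know of from inside the pfSense guest is to sample the interrupt
counters and see whether the em(4) devices stop incrementing (device
names here are just examples):

  # sample interrupt counts 10 seconds apart; if the em0/em1 rows
  # don't move during an episode, the guest really is getting no
  # network interrupts
  vmstat -i; sleep 10; vmstat -i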

Traffic arriving at or leaving the VMware HOST shows normal performance
levels; it's only traffic within the host that seems slow: SMB traffic
across the pfSense router, with no NAT involved and one pass-all pf
rule, runs between 10 Mbit/sec and 100 Mbit/sec.  I also see lots of
TCP badness if I run a sniffer on either end - duplicate ACKs,
duplicate packets, and missing packets.
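
For the numbers above I'm measuring roughly like this (addresses are
placeholders, and iperf has to be installed on the endpoint VMs - it
isn't part of pfSense itself):

  # receiver on a VM in one VLAN, e.g. 192.168.20.5:
  iperf -s

  # sender on a VM in the other VLAN, routed through the pfSense VM:
  iperf -c 192.168.20.5 -t 30 -i 5

  # and to see the dup ACKs/packets, capture on one endpoint:
  tcpdump -ni em0 -w slowpath.pcap host 192.168.20.5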


That's not the normal experience from what I've seen; it sounds
specific to something in particular you're doing. I believe every
environment I've seen that routes between VLANs within ESX handles the
VLANs entirely at the ESX level, with one vswitch per VLAN and the
firewall connected to the individual vswitches - maybe that's the
difference.
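
For reference, the ESX-level layout I mean looks roughly like this from
the service console (vSwitch names, VLAN IDs, and uplinks are made up
for illustration; the same thing can be done in the vSphere client):

  # one vSwitch per VLAN, and the firewall VM gets one vNIC per vSwitch
  esxcfg-vswitch -a vSwitch10
  esxcfg-vswitch -L vmnic2 vSwitch10          # dedicated uplink
  esxcfg-vswitch -A "VLAN10" vSwitch10        # port group for the VMs
  esxcfg-vswitch -v 10 -p "VLAN10" vSwitch10  # tag only if the uplink
                                              # is a trunk port

That way the guest never sees 802.1q tags at all, which takes the
guest-side VLAN handling out of the picture.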

Running inside of VMware isn't nearly as fast as running on equivalent
bare metal, but most of the time you don't need that kind of
performance; 300 Mbps is easily achievable with e1000 NICs and
moderately new (anything with VT) server hardware. I've been on dozens
of such systems personally this year alone, across numerous different
customer environments. It's a common setup, and it works well,
including for routing between VLANs. I know of at least a couple of
setups that route backups between VLANs; that maxes out the system at a
bit over 300 Mbps, but it runs fine every night, and the resulting
performance degradation on the other interfaces while the firewall VM
is pegged isn't an issue in that environment (everything else still
works fine). We have customers who run their entire colo environments
in vSphere, including firewalls, configuring the edge CARP pair so the
two never get vMotioned onto the same host, for proper redundancy.

To answer the original question: there are numerous environments
running that way with great results - very solid performance and
reliability. ESX and ESXi are equivalent here; any mention of ESX above
could be ESXi just the same (and many of the environments I'm referring
to are ESXi).

---------------------------------------------------------------------
To unsubscribe, e-mail: discussion-unsubscr...@pfsense.com
For additional commands, e-mail: discussion-h...@pfsense.com

Commercial support available - https://portal.pfsense.org

