Daniel Pocock <[email protected]> wrote:

>> For something really latency sensitive, you might be better just running a 
>> firewall on the server.
> 
> That would be preferable, but in this case space for physical servers is
> limited.

Sorry, I don't understand this bit. All I'm suggesting is that for a really 
latency sensitive application, you run a local (software) firewall on that 
(virtual) server and connect its interface outside of your firewall (virtual) 
device. There's no hardware difference, just removing some (software) elements 
from the packet path.
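
To make that concrete, a host firewall on the guest itself can be as small as 
a short nftables ruleset. This is only a sketch - "lo", the port numbers, and 
the service are placeholders, not details from this thread:

```
# Sketch of /etc/nftables.conf for the latency-sensitive guest itself.
# Interface names and ports below are illustrative placeholders.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        udp dport 5060 accept    # e.g. the latency-sensitive service
        tcp dport 22 accept      # management access
    }
}
```

With that in place on the guest, its traffic no longer has to pass through a 
separate firewall VM's packet path at all.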


But there is an important thing to remember about software firewalling like 
this. If you go out and spend loads of dosh on a firewall device from the likes 
of Cisco, part of what that money buys you is a hardware packet processing 
engine.
The first packet in any conversation will (may?*) still go through the 
supervisor processor, but once it has evaluated all the rules, the result is 
cached in the hardware filter engine - thereafter, the packets are processed in 
hardware, probably with "cut-through"**, and handled very fast.
Using a software firewall on a Linux box, every packet must traverse the IP 
stack. Each packet must be received fully into a buffer, then the next level 
up decides where that packet needs to be passed, filters are applied at 
various points, it ends up in another buffer, and finally it gets sent out of 
an interface. No matter how fast you make the processing, there is a 
fundamental limit: the packet isn't processed until it has all been received, 
and transmission on the outbound interface can't start until processing has 
finished.
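
That store-and-forward penalty is easy to put a number on. Assuming a 1 Gbps 
link, a full-size 1500-byte frame, and a 64-byte header read for the 
cut-through case (all figures illustrative):

```shell
# A store-and-forward hop cannot begin forwarding until the whole frame is in:
awk 'BEGIN { printf "store-and-forward, 1500B @ 1 Gbps: %.1f us\n", 1500*8/1e9*1e6 }'
# A cut-through device can decide after just the headers (say 64 bytes):
awk 'BEGIN { printf "cut-through, 64B @ 1 Gbps: %.3f us\n", 64*8/1e9*1e6 }'
```

That is 12 microseconds versus roughly half a microsecond - per hop, and 
before any rule-evaluation time is added on top.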

So I don't think any software implementation is going to match a hardware 
firewall/router UNLESS your processing rules are such that the traffic of 
interest can't be processed through the "fast path" of the hardware routing 
engine.

* I'm not that clued up on the current state, but I believe that modern 
hardware routing engines now have sufficient capabilities to apply some rules 
independently of the supervisory processor. If the packet can be processed by 
the fast routing engine then it is; only if it exceeds the engine's 
capabilities or knowledge (eg needs more complicated rule processing than the 
engine can do) does the packet leave the fast path and get passed up to the 
supervisory engine for a decision. Once that decision is made, the result is 
cached so that the engine can handle further packets in the conversation - 
basically a hardware equivalent of the Linux conntrack processing.
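
The software side of that analogy is the usual conntrack short-circuit: put 
the state match first, so only the first packet of each flow walks the full 
ruleset. A sketch - the specific rules here are illustrative, not from this 
thread:

```
# Sketch: accept packets of already-tracked flows first, so only the first
# packet of a conversation is evaluated against the slower rules below.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Full rule evaluation now happens only for NEW connections:
iptables -A FORWARD -p tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -j DROP
```

The hardware engine caches the supervisor's verdict; conntrack caches the 
ruleset's verdict - same idea, different layer.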

** cut-through packet handling allows the packet to be sent out as soon as 
there is enough information to determine its routing - provided the egress 
interface is not already busy. So typically, you only need the packet headers 
(MAC addresses for switching, IP header for basic routing, 
IP+TCP/UDP/whatever headers for advanced routing or filtering) to make that 
decision, and can start sending the packet before the rest of it has come in.
A bit more reading from Cisco here 
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5020-switch/white_paper_c11-465436.html


_______________________________________________
Shorewall-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/shorewall-users