Hi. Long-time user of Postfix here, wanting to discuss Anvil.

With IPv4, the maximum number of sessions per remote site is pretty much bounded by the scarcity of IPv4 addresses together with the 65535 source port numbers available per address. Individual remote sites were therefore limited in what they could do by the underlying infrastructure, and Anvil could track individual remote machines.

I've been doing some investigation into how Anvil performs when confronted with large numbers of IPv6 sessions.

With IPv6, the address space is much larger and individual users have far more source address space allocated per site. I wanted to know whether individual /64 and /48 address ranges could be used to mount any sort of meaningful attack, and whether Anvil could prevent it.

The baseline problem statement would be:
Can Anvil store enough state to track (and filter) a DoS or resource-depletion attack from an individual IPv6 site, whilst still providing service to other remote sites and not hogging the host machine's resources entirely?

The parameters would be:
- a single attacker with access to a few /64s or /48s of address space (I'm not trying to fend off a distributed million-node botnet);
- a mail server with a 100 Mbps full-duplex Internet connection, i.e. roughly 50,000 sessions per second (about 100,000 packets per second, allowing for the SYN, SYN-ACK, ACK three-way handshake);
- a storage time of approximately 30-60 seconds.

If you multiply that up, that's 3 million sessions per minute, i.e. 3 million sessions' worth of storage in Anvil [assuming everything else can keep up].
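Spelled out, using only the figures above, the arithmetic is simply:

    50,000 sessions/s x 60 s retention ≈ 3,000,000 entries held at once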

My results so far have rather surprised me: the limit on Anvil seems to be very much CPU processing time and network bandwidth rather than the storage involved, although it's early days in my testing/experimenting.

So, as an alternative, I've been looking at a self-pruning Patricia tree to store IPv6 sessions quickly and efficiently, whilst at the same time being able to track multiple prefix lengths simultaneously.
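To give an idea of the sort of structure I mean, here is a minimal sketch of a node (the type and field names are my own illustration, not Anvil or Postfix code); counters sit on the nodes corresponding to the prefix lengths being tracked:

    /*
     * Sketch of a binary Patricia/radix trie node for IPv6 client tracking.
     * Each node covers a prefix of the 128-bit address; the nodes at the
     * prefix lengths we care about (e.g. /48, /64, /128) carry counters.
     */
    #include <stdint.h>
    #include <time.h>

    typedef struct ipv6_node {
        uint8_t  addr[16];          /* prefix bits, trailing bits zeroed      */
        uint8_t  prefix_len;        /* how many leading bits this node covers */
        uint32_t session_count;     /* sessions currently under this prefix   */
        time_t   last_seen;         /* lets idle subtrees be pruned           */
        struct ipv6_node *child[2]; /* branch on the bit after prefix_len     */
    } IPV6_NODE;

    /* Bit at position 'pos' (0 = most significant) of a 128-bit address. */
    static int addr_bit(const uint8_t addr[16], unsigned pos)
    {
        return (addr[pos >> 3] >> (7 - (pos & 7))) & 1;
    }

The "self-pruning" part could then just be a periodic walk that frees subtrees whose last_seen is older than the retention window.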

On my machine I can get close to the required performance without very much optimization at all (again, mainly limited by CPU). I seem to be able to store around 2.5 million remote addresses in 60 seconds, using approx. 8 GB of memory, in a pure test of the hash storage (without daemon overhead).

That's only about 1/10 as fast as the original hash-based code (though I think I can still speed it up quite a bit by avoiding unnecessary string copying, etc.).

But the Patricia tree does allow simultaneous tracking on all nibble boundaries, e.g. limiting a /64 range to 100 concurrent connections whilst its enclosing /48 allows, say, 400 concurrent connections. And once a limit is triggered, I can avoid storing any further state beyond that point in the tree, i.e. for longer prefix lengths (see the sketch below).
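Roughly, the insert path I have in mind looks like this. The limits are just the example values above, find_or_add() is a hypothetical helper that descends to (or creates) the node for a given prefix length, and the IPV6_NODE type is the one sketched earlier; this is an illustration, not Anvil code:

    /* Per-prefix limits tracked on the way down the tree (example values). */
    struct boundary { unsigned prefix_len; unsigned limit; };

    static const struct boundary boundaries[] = {
        { 48, 400 },                    /* max concurrent sessions per /48  */
        { 64, 100 },                    /* max concurrent sessions per /64  */
        { 128, 0 },                     /* the individual client, unlimited */
    };

    /* Returns 1 if the session was accepted and counted, 0 if a limit was hit. */
    static int session_insert(IPV6_NODE *root, const uint8_t addr[16])
    {
        IPV6_NODE *path[3];
        unsigned   i;

        for (i = 0; i < 3; i++) {
            /* find_or_add() is hypothetical: walk/create down to this prefix. */
            path[i] = find_or_add(root, addr, boundaries[i].prefix_len);
            if (boundaries[i].limit != 0
                && path[i]->session_count >= boundaries[i].limit) {
                while (i-- > 0)         /* undo the counts already taken    */
                    path[i]->session_count--;
                return 0;               /* over limit: nothing deeper stored */
            }
            path[i]->session_count++;
        }
        return 1;
    }

The useful property is that once a /48 or /64 is over its limit, the attacker's subsequent /128s never make it into the tree, so the state cost stays bounded no matter how much address space they cycle through.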

Whereas I suspect the original code would allow a single user with access to a /48 or /64 to swamp Postfix with several million sessions without Anvil even detecting it at all.

Is this the correct list to discuss this?

Thoughts?

Is there anyone interested in taking this further?

--
Regards,
RayH
