Hi Damien,

Many thanks for your reply.

Let me clarify ...

I am not sure I understand perfectly what you mean. Why would a small home
site have millions of destinations? It would mean that you observe at least
millions of flows within your small network.

Yes. Imagine if you or I announce wiki-leaks-2 content on our IPv4/IPv6 home web server. I think the number of flows will explode. And imagine we are lucky users of FTTH, so no bandwidth issue :)

Moreover, those may not be DoS or spoofing; those may be legitimate addresses. And I do not think the server will die.

And I am not saying LISP will or will not be able to handle it; I am just asking how it will handle it ;) Maybe the answer is, at the moment any xTR exceeds its capacity threshold, to push some of the traffic to a few bigger proxies ... just a thought.
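To make the thought concrete, here is a minimal sketch in Python of that overflow idea (all names are invented for illustration; CACHE_LIMIT, PROXY_RLOC and send_map_request are my assumptions, not anything defined in the LISP drafts):

    # Toy sketch: offload new destinations to a bigger proxy once the
    # local map-cache is full, instead of letting the small xTR drown.
    CACHE_LIMIT = 10_000          # assumed capacity of the small xTR's map-cache
    PROXY_RLOC = "192.0.2.1"      # assumed locator of a bigger proxy (PxTR-like)

    def send_map_request(dst_eid):
        # Placeholder for the real Map-Request/Map-Reply exchange.
        raise NotImplementedError

    def next_hop_rloc(cache, dst_eid):
        rloc = cache.get(dst_eid)
        if rloc is not None:
            return rloc                   # mapping already cached: keep it local
        if len(cache) >= CACHE_LIMIT:
            return PROXY_RLOC             # over the threshold: push to the proxy
        rloc = send_map_request(dst_eid)  # normal path: resolve and cache
        cache[dst_eid] = rloc
        return rloc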

Cheers,
R.



Even in campus networks like our university we do not observe millions of concurrent flows. In one day we see in general 3 million different IP addresses in our network, and there is no mechanism to limit the traffic. More interestingly, during 5-minute periods we observe peaks of around 60K to 70K concurrent destination addresses and peaks of 1.2M L3 flows, and from one 5-minute slot to the next we can see big variations (having 60K destinations, then 20K, then 40K ... is common), meaning that the ground traffic that would stay in the cache for whatever TTL is actually not so important. Indeed, a top few (~ thousands of) destinations are contacted all the time, while the rest could be seen as noise with a short lifetime (from a few seconds to a few hours). So a good cache eviction algorithm should solve most of the cases. Regarding the cache size, I think the mistake is in the specs, where a TTL of one day is recommended. Given the temporal locality of Internet traffic, we can say that this choice is meaningless. xTRs should be able to set for themselves the TTL of the mappings they install in their cache (TTL_cache <= TTL_Map-Reply).
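To illustrate (a toy sketch only; the class name, the capacity and the 5-minute cap are assumed values, not taken from any real xTR implementation), such a map-cache could combine LRU eviction with a locally chosen TTL cap, so that TTL_cache <= TTL_Map-Reply always holds:

    import time
    from collections import OrderedDict

    class MapCache:
        """Toy xTR map-cache: LRU eviction plus a locally capped TTL."""

        def __init__(self, max_entries=10_000, max_ttl=300):
            self.max_entries = max_entries
            self.max_ttl = max_ttl          # local policy, e.g. 5 minutes
            self.entries = OrderedDict()    # EID-prefix -> (RLOC, expiry time)

        def install(self, eid_prefix, rloc, map_reply_ttl):
            # Never keep a mapping longer than the Map-Reply allows,
            # but possibly much shorter: TTL_cache <= TTL_Map-Reply.
            ttl = min(map_reply_ttl, self.max_ttl)
            if len(self.entries) >= self.max_entries:
                self.entries.popitem(last=False)    # evict least recently used
            self.entries[eid_prefix] = (rloc, time.time() + ttl)

        def lookup(self, eid_prefix):
            entry = self.entries.get(eid_prefix)
            if entry is None:
                return None                         # miss -> send a Map-Request
            rloc, expiry = entry
            if time.time() > expiry:
                del self.entries[eid_prefix]        # expired -> treat as a miss
                return None
            self.entries.move_to_end(eid_prefix)    # refresh LRU position
            return rloc

With such a cap, the top few thousand destinations that are contacted all the time stay cached, while the short-lived noise expires quickly.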

Nevertheless, it is true that a site like Google may have to deal with millions of mappings while indexing. I do not know how Google works for indexing, but I presume that these millions of pages are not indexed in one minute by a single machine. Probably the indexing takes several hours (days?) and is done from different machines behind load balancers. Probably the indexing of a site takes at most a few tens of minutes; the ITR could thus limit the TTL to that amount of time. In addition, if load balancers are actually used, one could imagine using these LBs directly as xTRs.

Another case where you can observe peaks of destinations is mail: if your server is attacked and becomes a relay for spam, it can potentially have to contact a huge number of destinations. It is a problem, but I think the problem is not LISP but the way the server is managed.

But you and Jeff are right that LISP has a big problem in the case of flash crowds. The question is to know what will die first, the cache or the services that are flash-crowded. Such peaks can also be caused by DDoS, and this is why we are working on techniques to avoid spoofing with LISP. And yes, to be honest, we do not have the perfect solution for this problem yet.

Thank you,

Damien Saucez





