On Jul 18, 2011, at 15:03 , Robert Raszuk wrote:

> Hi Damien,
> 
> > So this goes in the direction of Jeff saying that the churn problem
> > should not be neglected.
> 
> Very much indeed.
> 
> And to save one more email from Luigi .. I do not buy the argument that this 
> is simply "out of scope" as this would really mean "out of practical reality".

Too late..  ;-)

Maybe I was unclear. What I wanted to say is that this was out of scope of the
papers that were cited in the thread.

I was not saying that this kind of work is "out of scope" in a general way.
Quite the contrary: I truly think that security issues need to be explored,
understood, and solved.

ciao

Luigi


> 
> Cheers,
> R.
> 
> 
>> Hello,
>>
>> On 18 Jul 2011, at 10:12, Robert Raszuk wrote:
>> 
>>> Hi Damien,
>>> 
>>> Many thx for your reply.
>>> 
>>> Let me clarify ...
>>> 
>>>> I am not sure I understand perfectly what you mean. Why would a small home
>>>> site have millions of destinations? It would mean that you observe at least
>>>> millions of flows within your small network.
>>> 
>>> Yes. Imagine if you or I announce wiki-leaks-2 content on our IPv4/IPv6
>>> home web server. I think the number of flows will explode. And imagine we
>>> are lucky users of FTTH, so no bandwidth issue :)
>>> 
>>> Moreover, those may not be DoS or spoofing. Those may be legitimate
>>> addresses. And I do not think the server will die.
>>> 
>>> And I am not saying LISP will or will not be able to handle it. I am just
>>> asking how it will handle it ;) Maybe the answer is, the moment any xTR
>>> exceeds its capacity threshold, to push some of the traffic to a few bigger
>>> proxies ... just a thought.
>>> 
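As a rough illustration of that "push some traffic to bigger proxies" idea,
here is a hypothetical forwarding decision in Python that offloads new
destinations to a proxy once a map-cache occupancy threshold is crossed; the
names, address, and thresholds are made up for the sketch and do not come from
any LISP spec:

    from typing import Optional

    # Hypothetical sketch of the "overflow to a bigger proxy" idea above: once
    # the xTR's map-cache occupancy crosses a threshold, traffic to unknown
    # destinations is encapsulated towards a pre-configured proxy instead of
    # triggering more Map-Requests. Nothing here comes from the LISP specs.
    CACHE_CAPACITY = 10_000          # entries this xTR is willing to hold
    OVERFLOW_THRESHOLD = 0.9         # start offloading at 90% occupancy
    PROXY_RLOC = "192.0.2.1"         # RLOC of a bigger proxy xTR (example)

    def next_hop_rloc(map_cache: dict, dst_eid: str) -> Optional[str]:
        """Return the RLOC to encapsulate to, or None if a Map-Request is needed."""
        entry = map_cache.get(dst_eid)
        if entry is not None:
            return entry["rloc"]                  # normal cache hit
        if len(map_cache) >= CACHE_CAPACITY * OVERFLOW_THRESHOLD:
            return PROXY_RLOC                     # offload instead of growing the cache
        return None                               # cache miss: send a Map-Request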
>> 
>> Thank you for the clarification, this is a nice example indeed!
>> 
>> You are right that for such legitimate traffic there is a problem, as it can
>> be assimilated to a flash crowd. In your particular example, I think that it
>> is the Map-Resolver or the control plane itself that will cause the problems!
>> 
>> If we consider the cache size, it should be OK. If we have one million
>> clients *simultaneously*, the system is naive and stores the mapping records
>> as-is, all the requesters are IPv4, and each mapping has two RLOCs, then
>> about 40MB of storage is required. That is essentially the amount of memory
>> small home routers have today, but it is not a million times more than what
>> they provide now, so it would not be too hard.
>> 
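For scale, here is a back-of-the-envelope version of that 40MB figure in
Python; the per-field byte counts are illustrative assumptions, not
measurements of any implementation:

    # Rough size of a naive map-cache that stores mapping records as-is.
    # The per-field byte counts below are assumptions, for illustration only.
    EID_PREFIX_BYTES = 8        # IPv4 EID prefix + mask length + padding
    PER_RLOC_BYTES = 12         # IPv4 RLOC + priority/weight + state flags
    TTL_AND_MISC_BYTES = 8      # TTL, timestamps, bookkeeping

    entries = 1_000_000
    rlocs_per_entry = 2

    bytes_per_entry = (EID_PREFIX_BYTES
                       + rlocs_per_entry * PER_RLOC_BYTES
                       + TTL_AND_MISC_BYTES)
    total_mb = entries * bytes_per_entry / 1e6
    print(f"{bytes_per_entry} B/entry -> ~{total_mb:.0f} MB "
          f"for {entries:,} entries")
    # 8 + 2*12 + 8 = 40 B/entry, i.e. ~40 MB for one million entries.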
>> But the control plane in these routers would suffer: these boxes are in
>> general pure software with slow processors, so I don't know if they would
>> cope with updating the cache at a rate higher than a few hundred entries per
>> second. To be honest, I don't know what could be achieved by such devices.
>> And even if they are able to deal with this speed, it may be complicated for
>> the boxes to find a Map-Resolver (MR) that will not rate-limit the number of
>> requests.
>> 
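To make the rate-limiting point concrete, here is a minimal token-bucket
sketch in Python of the kind of limiter a Map-Resolver (or the xTR itself)
could apply to Map-Requests; the rate and burst values are illustrative
assumptions, not taken from any specification:

    import time

    class TokenBucket:
        """Allow at most `rate` requests/s on average, with bursts up to `burst`."""
        def __init__(self, rate: float, burst: float):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False            # drop, queue, or delay the Map-Request

    # Example: a few hundred Map-Requests per second, with small bursts.
    map_request_limiter = TokenBucket(rate=300, burst=50)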
>> So this goes in the direction of Jeff saying that the churn problem should
>> not be neglected.
>> 
>> Thank you,
>> 
>> Damien Saucez
>> 
>>> Cheers,
>>> R.
>>> 
>>> 
>>> 
>>>> Even in campus networks like our university we do not observe millions of
>>>> concurrent flows. In one day we have in general 3 million different IP
>>>> addresses in our network, and there is no mechanism to limit the traffic.
>>>> More interestingly, during 5-minute periods we observe peaks of around 60K
>>>> to 70K concurrent destination addresses and peaks of 1.2M L3 flows, and,
>>>> from one 5-minute slot to the next, we can see big variations (having 60K
>>>> destinations, then 20K, then 40K ... is common), meaning that the ground
>>>> traffic that would stay in the cache for whatever TTL is not so important.
>>>> Indeed, a top few (~ thousands of) destinations are contacted all the time,
>>>> while the rest could be seen as noise with short lifetimes (from a few
>>>> seconds to a few hours). So a good cache eviction algorithm should solve
>>>> most of the cases. Regarding the cache size, I think that the mistake is in
>>>> the specs, where it is recommended to have a TTL of one day. Given the
>>>> temporal locality of Internet traffic, we can say that this choice is
>>>> meaningless. xTRs should be able to set themselves the TTL of the mappings
>>>> they install in their cache (TTL_cache <= TTL_Map-Reply).
>>>> 
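As an illustration of that last point, here is a minimal Python sketch of an
LRU map-cache that clamps the installed TTL so that TTL_cache <= TTL_Map-Reply;
the cache size and local TTL cap are assumed values for the example, not
recommendations:

    import time
    from collections import OrderedDict

    MAX_ENTRIES = 10_000            # illustrative cache size
    LOCAL_TTL_CAP = 15 * 60         # cap mappings at 15 minutes locally

    class MapCache:
        """LRU map-cache that clamps the installed TTL: TTL_cache <= TTL_Map-Reply."""
        def __init__(self):
            self._cache = OrderedDict()     # eid -> (rlocs, expiry_time)

        def install(self, eid, rlocs, ttl_from_map_reply):
            ttl = min(ttl_from_map_reply, LOCAL_TTL_CAP)
            self._cache[eid] = (rlocs, time.monotonic() + ttl)
            self._cache.move_to_end(eid)
            if len(self._cache) > MAX_ENTRIES:
                self._cache.popitem(last=False)   # evict least recently used

        def lookup(self, eid):
            item = self._cache.get(eid)
            if item is None or item[1] < time.monotonic():
                self._cache.pop(eid, None)        # miss or expired entry
                return None
            self._cache.move_to_end(eid)          # refresh LRU position
            return item[0]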
>>>> Nevertheless, it is true that a site like Google may have to deal with
>>>> millions of mappings while indexing. I do not know how Google works for
>>>> indexing, but I presume that these millions of pages are not indexed in one
>>>> minute by a single machine. Probably the indexing takes several hours
>>>> (days?) and is done from different machines behind load balancers.
>>>> Probably the indexing of a given site takes at most a few tens of minutes;
>>>> the ITR could thus limit the TTL to that amount of time. In addition, if
>>>> load balancers are actually used, one could imagine using these LBs
>>>> directly as xTRs.
>>>> 
>>>> Another case where you can observe peaks of destinations is with mail: if
>>>> your server is attacked and becomes a relay for spam, it can potentially
>>>> have to contact a huge number of destinations. It is a problem, but I think
>>>> that the problem is not LISP but the way the server is managed.
>>>> 
>>>> But you and Jeff are right that LISP has a big problem in the case of
>>>> flash crowds (or something resembling them). The question is to know what
>>>> will die first: the cache, or the services that are flash-crowded. Such
>>>> peaks can also be caused by DDoS, and this is why we are working on
>>>> techniques to avoid spoofing with LISP. And yes, to be honest, we do not
>>>> have a perfect solution for this problem right now.
>>>> 
>>>> Thank you,
>>>> 
>>>> Damien Saucez
>>>> 
>>>>> 
>>>>> Many thx,
>>>>> R.
>>>>
>>> 
>>
> 

_______________________________________________
lisp mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lisp
