Re: The IPv6 Transitional Preference Problem

2010-06-28 Thread Phillip Hallam-Baker
Well, let's unpack this.

If we care about 50ms response, then we probably expect the operator of the
service to be interested in making sure that information that can assist is
delivered to the DNS resolver.

At the moment the DNS does not provide hints like 'this host is best if
calling from France', but there is no real reason that we could not
distribute that information through DNS if we care about the 50ms thing.

So we can address the uncertainty in the server end of things. That leaves
the client path to the server as the source of uncertainty.

Either past experience is going to be a good guide to future actions or it
isn't. If there is no consistency in response times to different hosts then
it is probably best to simply give up on the 50ms goal or do what is
necessary to improve the service. Otherwise the client can use its past
history to optimize its use of the server info.


Since most of the applications I use take a heck of a lot more than 50ms to
start up, I am not sure why a 50ms connection time would be important. I can
certainly see latency or jitter being an issue for online games. But
connection startup seems a stretch.

I tend to think that the original requirement was bogus. But if it is
genuine I think we would need a more comprehensive approach than working out
whether to try IPv4 first or second or in parallel.

On Fri, Jun 25, 2010 at 1:54 PM, David Conrad d...@virtualized.org wrote:

 Phillip,

 On Jun 25, 2010, at 10:06 AM, Phillip Hallam-Baker wrote:
  Am I the only person that thinks that if shaving 50ms off HTTP latency is
 a worthwhile goal it would be more appropriate to look at a DNS based
 signaling mechanism that is going to support that goal (and also do the
 right thing for IPv4/6) rather than look at various ways to coax the desired
 behavior from the legacy infrastructure.

 How would a DNS-based signaling mechanism work when DNS servers do not
 necessarily know anything about the state of connectivity to the server they
 provide name service for?

 Regards,
 -drc




-- 
Website: http://hallambaker.com/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Carsten Bormann
On Jun 25, 2010, at 09:56, Brian E Carpenter wrote:

 trying v6 for a couple of seconds before trying v4 in parallel

I don't think this is realistic for applications like the Web, where people are 
now creating YouTube spots with high-speed cameras that show, in slow motion, a 
potato cannon fired in parallel with a web page loading (the web page is faster 
than the potato, of course).
Shaving 50 ms off the HTTP latency is a major improvement in user experience 
for a Web user.

Gruesse, Carsten

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Brian E Carpenter
On 2010-06-25 20:08, Carsten Bormann wrote:
 On Jun 25, 2010, at 09:56, Brian E Carpenter wrote:
 
 trying v6 for a couple of seconds before trying v4 in parallel
 
 I don't think this is realistic for applications like the Web, where people 
 are now creating Youtube-Spots with high-speed cameras that show, in 
 slow-motion, a potato cannon fired in parallel with a web page loading (the 
 web page is faster than the potato, of course).
 Shaving 50 ms off the HTTP latency is a major improvement in user experience 
 for a Web user.

I think we're talking about the initial phase of contact with a server.
Obviously, once a best path is chosen, you will stick to it until there is a
glitch.

   Brian
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Carsten Bormann
On Jun 25, 2010, at 16:16, Brian E Carpenter wrote:

 initial phase of contact with a server

To get the front page of the New York Times (http://www.nytimes.com),
"a server" a couple of minutes ago meant:

http://admin.brightcove.com/
http://b.scorecardresearch.com/
http://creativeby1.unicast.com/
http://googleads.g.doubleclick.net/
http://graphics8.nytimes.com/
http://markets.on.nytimes.com/
http://pagead2.googlesyndication.com/
http://ping1.unicast.com/
http://s0.2mdn.net/
http://secure-us.imrworldwide.com/
http://up.nytimes.com/
http://wt.o.nytimes.com/
http://www.nytimes.com/

(I'm not sure I caught all the ones accessed from JavaScript.)
And NYTimes is not nearly the worst offender here.

No, browser vendors won't put in a two-second wait for each of these.

Gruesse, Carsten

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Phillip Hallam-Baker
Nah, the service provider tells the client what to use via SRV records.

In most cases the service provider is going to know if IPv4 or IPv6 is going
to work better. They use different DNS names for the v4 and v6 interfaces
and prioritize them accordingly.
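
As a rough sketch of that idea (an editor's illustration, assuming the third-party
dnspython package and a hypothetical service name that actually publishes SRV
records; most web services today do not), a client could resolve the SRV set and
walk the targets in the provider's priority order:

    # Sketch only: assumes dnspython and a provider that publishes
    # _www._tcp SRV records whose targets are separate v4/v6-facing names.
    import socket
    import dns.resolver   # third-party: dnspython

    def connect_via_srv(service="_www._tcp.example.com", timeout=5):
        answers = dns.resolver.resolve(service, "SRV")
        # Lower priority value = preferred; the provider can put its
        # IPv6-facing name ahead of the IPv4 one, or vice versa.
        for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
            target = str(rr.target).rstrip(".")
            try:
                return socket.create_connection((target, rr.port), timeout)
            except OSError:
                continue   # fall through to the next-priority target
        raise OSError("no SRV target reachable")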

In most cases though the server is going to be IPv4 only or have equally
good IPv4 and IPv6.


On the client end the client is going to have a consistently better
experience with v4 or v6. And that information can be used to inform the
choice when making future connections.

The only case where I can see a client preferring IPv6 over 4 is when they
are behind a super-NAT and the v4 service is degraded. Or when they are
attempting to accept an incoming connection for VOIP or video conferencing.


The key is to take the decision out of the hands of the application software
so that it can be made by the platform, allowing the experience from one
connection to inform the choice made on the next.
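
A minimal sketch of that platform-level memory (editor's illustration; the
names and policy are placeholders, not an existing API): keep a per-host note
of which address family worked last time and try that family first on the next
connect.

    import socket, time

    _last_good = {}   # hostname -> (address family, timestamp)

    def connect_with_memory(host, port, timeout=2.0):
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        family_hint, _ = _last_good.get(host, (None, 0))
        if family_hint is not None:
            # Stable sort: the family that worked last time goes first.
            infos.sort(key=lambda ai: ai[0] != family_hint)
        for family, stype, proto, _, addr in infos:
            try:
                s = socket.socket(family, stype, proto)
                s.settimeout(timeout)
                s.connect(addr)
                _last_good[host] = (family, time.time())
                return s
            except OSError:
                continue
        raise OSError("all addresses for %r failed" % host)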


On Thu, Jun 24, 2010 at 7:57 PM, David Conrad d...@virtualized.org wrote:

 On Jun 24, 2010, at 4:48 PM, Mark Andrews wrote:
  The third choice is to do a non-blocking connect to IPv6 then if that does
  not succeed in 1 or 2 seconds (most successful connects are within this
  period but connect failures are considerably longer) initiate a non-blocking
  IPv4 connection and keep whichever completes first and abort the other.

 That sounds like a variation of the v6-then-v4 serial case, just slightly
 accelerated.  But maybe I'm missing something...

 Regards,
 -drc

 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




-- 
Website: http://hallambaker.com/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Phillip Hallam-Baker
Code will be used for decades, which is why IPv4 will always be with us.

The IPv4-to-IPv6 transition will inevitably take place in middleboxes. Most
actual networks will use IPv4 internally, and IPv6 will be an inter-network
protocol.

Trying to code for the end state before the middlebox spec is known is
futile. The end point device is going to have to work with the middlebox to
work out the best choice of route. I can't see the point of piddling about
with heuristic protocols when we can write a middlebox spec that tells the
endpoint what to do straight off the bat.


Part of the problem here is that we have an IPv6 transition strategy that
does not mesh with the DNS.

Another part of the problem is that people are trying to fit everything into
the mould of end-to-end as if it were some sacred cow. E2E was a better way
to design a system from scratch compared to the telephone system. Now that
we have legacy systems in place, we have billions of end points and similar
problems of rigidity to those the telco network had.


At the moment the assumption is that the DNS is not intelligent. But that
does not have to be the case. If we make the service discovery mechanism
consistent across protocols there is more scope for the DNS to do the right
thing.



On Thu, Jun 24, 2010 at 9:01 PM, Mark Andrews ma...@isc.org wrote:


 In message aanlktiknlr5c5nkc8ewwvi9-h1zmvqybmfarrerj7...@mail.gmail.com,
 Phillip Hallam-Baker writes:
  Nah, the service provider tells the client what to use via SRV records.
 
  In most cases the service provider is going to know if IPv4 or IPv6 is
 going
  to work better. They use different DNS names for the v4 and v6 interfaces
  and prioritize them accordingly.
 
  In most cases though the server is going to be IPv4 only or have equally
  good IPv4 and IPv6.
 
  On the client end the client is going to have a consistently better
  experience with v4 or v6. And that information can be used to inform the
  choice when making future connections.

 With well connected clients.  For clients with connectivity problems
 it can matter.

  The only case where I can see a client preferring IPv6 over 4 is when
 they
  are behind a super-NAT and the v4 service is degraded. Or when they are
  attempting to accept an incoming connection for VOIP or video
 conferencing.

 Super-NATs will become commonplace.

 You also want to prefer IPv6 over IPv4 so that you can see when you
 can stop supporting IPv4 by looking at the traffic levels.  Code will
 be used for decades after it is written.  You need to write code for
 the end state even if it is painful at the beginning.

  The key is to take the decision out of the hands of the application
 software
  so that it can be taken by the platform and allow the experience from one
  connection to be used to inform the choice made on the next.

 --
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




-- 
Website: http://hallambaker.com/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Phillip Hallam-Baker
Am I the only person that thinks that if shaving 50ms off HTTP latency is a
worthwhile goal it would be more appropriate to look at a DNS based
signaling mechanism that is going to support that goal (and also do the
right thing for IPv4/6) rather than look at various ways to coax the desired
behavior from the legacy infrastructure.

On Fri, Jun 25, 2010 at 10:16 AM, Brian E Carpenter 
brian.e.carpen...@gmail.com wrote:

 On 2010-06-25 20:08, Carsten Bormann wrote:
  On Jun 25, 2010, at 09:56, Brian E Carpenter wrote:
 
  trying v6 for a couple of seconds before trying v4 in parallel
 
  I don't think this is realistic for applications like the Web, where
 people are now creating Youtube-Spots with high-speed cameras that show, in
 slow-motion, a potato cannon fired in parallel with a web page loading (the
 web page is faster than the potato, of course).
  Shaving 50 ms off the HTTP latency is a major improvement in user
 experience for a Web user.

 I think we're talking about the initial phase of contact with a server.
 Obviously,
 once a best path is chosen, you will stick to it until there is a glitch.

   Brian
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




-- 
Website: http://hallambaker.com/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread David Conrad
Phillip,

On Jun 25, 2010, at 10:06 AM, Phillip Hallam-Baker wrote:
 Am I the only person that thinks that if shaving 50ms off HTTP latency is a 
 worthwhile goal it would be more appropriate to look at a DNS based signaling 
 mechanism that is going to support that goal (and also do the right thing for 
 IPv4/6) rather than look at various ways to coax the desired behavior from 
 the legacy infrastructure.

How would a DNS-based signaling mechanism work when DNS servers do not 
necessarily know anything about the state of connectivity to the server they 
provide name service for?

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-25 Thread Mark Andrews

In message 0ddf0b8f-4d3e-42d7-97c5-944cb0171...@tzi.org, Carsten Bormann writes:
 On Jun 25, 2010, at 16:16, Brian E Carpenter wrote:
 
  initial phase of contact with a server
 
 To get the front page of the New York Times (http://www.nytimes.com), a
 server a couple of minutes ago meant
 
 http://admin.brightcove.com/
 http://b.scorecardresearch.com/
 http://creativeby1.unicast.com/
 http://googleads.g.doubleclick.net/
 http://graphics8.nytimes.com/
 http://markets.on.nytimes.com/
 http://pagead2.googlesyndication.com/
 http://ping1.unicast.com/
 http://s0.2mdn.net/
 http://secure-us.imrworldwide.com/
 http://up.nytimes.com/
 http://wt.o.nytimes.com/
 http://www.nytimes.com/
 (I'm not sure I caught all the ones accessed from JavaScript.)
 And NYTimes is not nearly the worst offender here.
 
 No, browser vendors won't put in a two-second wait for each of these.

If we use the 0.5 second delay then there is 1 second of delay total
for this page when attempting to connect through a broken IPv6
transport layer: 0.5 seconds trying www.nytimes.com and 0.5 seconds
for each of the others.  However, they are all being fetched in
parallel, so the effective delay of all the other items combined is
only 0.5 seconds.

 Gruesse, Carsten
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-24 Thread David Conrad
Martin,

On Jun 23, 2010, at 6:06 AM, Martin Rex wrote:
 What you described results in a negative incentive for servers to
 become accessible under IPv6 as an alternative to IPv4.  That is a real 
 problem.

I guess I'm not seeing how it is a significant negative incentive to servers.

 If IPv6 connectivity is still bad, then the connection request will
 not reach the server and the server will not notice.  

Right.  So, the only case where a parallel open would potentially have any 
impact on the server is when there is good v6 connectivity.  Presumably, if a 
server operator has configured v6, they are anticipating v6 load and will have 
engineered for that load.  Since for any given session, the application will 
either communicate via v4 or via v6, not both, the additional load on the 
server will be exactly one additional communication initiation event.  I 
honestly have difficulty seeing a server operator building a system so close to 
the edge that this would be a real concern, particularly given any server 
connected to the Internet today is going to be subject to vastly more load due 
to random scans from malware.

In the serial case, there are two options: v4 first or v6 first.  If v4 is 
chosen first, it is unlikely v6 will ever be used, thus the server operator 
setting up v6 would be a waste of time.  If v6 is chosen first, then the client 
will have to wait for the v6 initiation to time out in the case of bad v6 
connectivity.  My guess is that this would result in an increase in support 
calls to the server operator ("why is your server so slow?") with the typical 
support center response being "turn off v6 support".  I believe this has been 
empirically demonstrated.

I personally don't see how we'll get anywhere with v6 deployment using the 
serial approach nor do I see any other options than parallel vs. serial.  Since 
you believe parallel open to be a problem, what is your proposed alternative?

Regards,
-drc



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-24 Thread Mark Andrews

In message 14d6dc7c-fcd2-4eb9-aa56-ecf13a110...@virtualized.org, David Conrad
 writes:
 Martin,
 
 On Jun 23, 2010, at 6:06 AM, Martin Rex wrote:
  What you described results in a negative incentive for servers to
  become accessible under IPv6 as an alternative to IPv4.  That is a real
  problem.
 
 I guess I'm not seeing how it is a significant negative incentive to servers.
 
  If IPv6 connectivity is still bad, then the connection request will
  not reach the server and the server will not notice.  
 
 Right.  So, the only case where a parallel open would potentially have any
 impact on the server is when there is good v6 connectivity.  Presumably, if a
 server operator has configured v6, they are anticipating v6 load and will
 have engineered for that load.  Since for any given session, the application
 will either communicate via v4 or via v6, not both, the additional load on
 the server will be exactly one additional communication initiation event.  I
 honestly have difficulty seeing a server operator building a system so close
 to the edge that this would be a real concern, particularly given any server
 connected to the Internet today is going to be subject to vastly more load
 due to random scans from malware.
 
 In the serial case, there are two options: v4 first or v6 first.  If v4 is
 chosen first, it is unlikely v6 will ever be used, thus the server operator
 setting up v6 would be a waste of time.  If v6 is chosen first, then the
 client will have to wait for the v6 initiation to time out in the case of bad
 v6 connectivity.  My guess is that this would result in an increase in
 support calls to the server operator ("why is your server so slow?") with the
 typical support center response being "turn off v6 support".  I believe this
 has been empirically demonstrated.

The third choice is to do a non-blocking connect to IPv6, then, if that does
not succeed in 1 or 2 seconds (most successful connects complete within this
period, but connect failures take considerably longer), initiate a non-blocking
IPv4 connection, keep whichever completes first, and abort the other.
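
A rough non-blocking sketch of that third choice (editor's illustration using
Python's socket/select and POSIX-style errno handling; the 1.5-second head
start is just a placeholder within the 1-2 second range mentioned above):

    import errno, select, socket, time

    def head_start_connect(host, port, head_start=1.5, overall=10.0):
        # Non-blocking IPv6 connect first; add IPv4 after `head_start`
        # seconds; keep whichever completes first and abort the other.
        def begin(family):
            try:
                fam, stype, proto, _, addr = socket.getaddrinfo(
                    host, port, family, socket.SOCK_STREAM)[0]
            except OSError:
                return None                      # no A or no AAAA record
            s = socket.socket(fam, stype, proto)
            s.setblocking(False)
            err = s.connect_ex(addr)
            if err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
                return s
            s.close()
            return None

        pending = [s for s in [begin(socket.AF_INET6)] if s]
        v4_started = False
        deadline = time.time() + overall
        v4_at = time.time() + head_start

        while True:
            now = time.time()
            if now >= deadline:
                break
            if not v4_started and (now >= v4_at or not pending):
                v4 = begin(socket.AF_INET)
                if v4:
                    pending.append(v4)
                v4_started = True
            if not pending:
                break
            wait = (deadline if v4_started else min(deadline, v4_at)) - now
            _, ready, _ = select.select([], pending, [], max(wait, 0.05))
            for s in ready:
                pending.remove(s)
                if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
                    for loser in pending:
                        loser.close()            # abort the other attempt
                    return s
                s.close()                        # this family's connect failed
        for s in pending:
            s.close()
        raise OSError("connect failed over both IPv6 and IPv4")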
 
 I personally don't see how we'll get anywhere with v6 deployment using the
 serial approach nor do I see any other options than parallel vs. serial.
 Since you believe parallel open to be a problem, what is your proposed
 alternative?
 
 Regards,
 -drc
 
 
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-24 Thread David Conrad
On Jun 24, 2010, at 4:48 PM, Mark Andrews wrote:
 The third choice is to do a non-blocking connect to IPv6 then if that does
 not succeed in 1 or 2 seconds (most successful connects are within this
 period but connect failures are considerably longer) initiate a non-blocking
 IPv4 connection and keep whichever completes first and abort the other.

That sounds like a variation of the v6-then-v4 serial case, just slightly 
accelerated.  But maybe I'm missing something...

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-24 Thread JORDI PALET MARTINEZ
I'm a bit late into this debate. I just want to point that some time ago, we
worked in the idea of the OS to be able to detect the best path, when IPv6
and one or several transition mechanisms are available.

Here is the last version of the document:

http://tools.ietf.org/html/draft-palet-v6ops-auto-trans-02


At that time, v6ops considered that it was not interesting work... so maybe
it is time to revive it?

Regards,
Jordi




 From: Mark Andrews ma...@isc.org
 Reply-To: ma...@isc.org
 Date: Fri, 25 Jun 2010 11:01:53 +1000
 To: ietf@ietf.org
 Subject: Re: The IPv6 Transitional Preference Problem
 
 
 In message aanlktiknlr5c5nkc8ewwvi9-h1zmvqybmfarrerj7...@mail.gmail.com,
 Phillip Hallam-Baker writes:
 Nah, the service provider tells the client what to use via SRV records.
 
 In most cases the service provider is going to know if IPv4 or IPv6 is going
 to work better. They use different DNS names for the v4 and v6 interfaces
 and prioritize them accordingly.
 
 In most cases though the server is going to be IPv4 only or have equally
 good IPv4 and IPv6.
 
 On the client end the client is going to have a consistently better
 experience with v4 or v6. And that information can be used to inform the
 choice when making future connections.
 
 With well connected clients.  For clients with connectivity problems
 it can matter.
  
 The only case where I can see a client preferring IPv6 over 4 is when they
 are behind a super-NAT and the v4 service is degraded. Or when they are
 attempting to accept an incoming connection for VOIP or video conferencing.
 
 Super-NATs will become commonplace.
 
 You also want to prefer IPv6 over IPv4 so that one can see when you
 can stop supporting IPv4 by looking at the traffic levels.  Code will
 be used for decades after it is written.  You need to write code for
 the end state even if it is painful at the beginning.
  
 The key is to take the decision out of the hands of the application software
 so that it can be taken by the platform and allow the experience from one
 connection to be used to inform the choice made on the next.
 
 -- 
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf



**
The IPv6 Portal: http://www.ipv6tf.org




___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-24 Thread AJ Jaghori
there are many packet level tools ad server based packages that can  
help forecast and pipe v6 loads seamless to your user stack.


Sent from my iPhone (SDK).  Please excuse my brevity.

On Jun 24, 2010, at 1:23 PM, David Conrad d...@virtualized.org wrote:


Martin,

On Jun 23, 2010, at 6:06 AM, Martin Rex wrote:

What you described results in a negative incentive for servers to
become accessible under IPv6 as an alternative to IPv4.  That is a  
real problem.


I guess I'm not seeing how it is a significant negative incentive to  
servers.



If IPv6 connectivity is still bad, then the connection request will
not reach the server and the server will not notice.


Right.  So, the only case where a parallel open would potentially  
have any impact on the server is when there is good v6  
connectivity.  Presumably, if a server operator has configured v6,  
they are anticipating v6 load and will have engineered for that  
load.  Since for any given session, the application will either  
communicate via v4 or via v6, not both, the additional load on the  
server will be exactly one additional communication initiation  
event.  I honestly have difficulty seeing a server operator building  
a system so close to the edge that this would be a real concern,  
particularly given any server connected to the Internet today is  
going to be subject to vastly more load due to random scans from  
malware.


In the serial case, there are two options: v4 first or v6 first.  If  
v4 is chosen first, it is unlikely v6 will ever be used, thus the  
server operator setting up v6 would be a waste of time.  If v6 is  
chosen first, then the client will have to wait for the v6  
initiation to time out in the case of bad v6 connectivity.  My guess  
is that this would result in an increase in support calls to the  
server operator (why is your server so slow?) with the typical  
support center response being turn off v6 support.  I believe this  
has been empirically demonstrated.


I personally don't see how we'll get anywhere with v6 deployment  
using the serial approach nor do I see any other options than  
parallel vs. serial.  Since you believe parallel open to be a  
problem, what is your proposed alternative?


Regards,
-drc



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-23 Thread Martin Rex
David Conrad wrote:
 
 On Jun 18, 2010, at 7:21 PM, Martin Rex wrote:
  
  What you described is a client with a pretty selfish attitude
  that doesn't care about network, servers and the other clients
  put into code.
 
 Well, no.  What I described was my understanding of a proposal to
 facilitate transition that comes with some benefits and some costs.
 If nothing else, given the truly inspirational amount of crap on the
 Internet, I find it a bit difficult to get worked up about a few
 additional packets at communication initiation that are actually beneficial.

What you described results in a negative incentive for servers to
become accessible under IPv6 as an alternative to IPv4.  That is a real
problem.  If a large number of clients were to follow your proposed
strategy, every server that announces an IPv6 address gets hit by
twice the number of connection requests, half of them being killed
at birth or during infancy.

If IPv6 connectivity is still bad, then the connection request will
not reach the server and the server will not notice.  But it is a clear
goal to considerably improve IPv6 connectivity in the near term,
so the problem this selfish client-side hack addresses is going to
become worse for the servers over time.

I wonder at what point clients with a selfish attitude will stop
optimizing for their own interest alone.  The largest effort for
client apps is to implement the parallel connection handling at
all.  Using it for parallel IPv4 connects, and not only IPv4+IPv6,
comes essentially for free.  For typical HTTP-style protocols
with small app requests, sending the client requests in parallel
would also be cheap for the client.  Deciding which connection
to retain based on which one yields the fastest server reply
is going to improve the user experience even more.  But the
more it seems to improve for the client, the worse it gets
for the server, the network and all the other clients.


 
  The concept works only as long as very few individuals try to
  get an unfair advantage over the rest.  But it definitely is
  doomed if EVERYONE, or even a larger number of people would
  practice this.
 
 We seem to be talking about different things.

At the abstract level, it is exactly the same thing.


When a project is falling behind schedule there are two things
that the responsible manager could do:

 - ask for more frequent status reports
 - ask the team what he could do to help them get it done

One of them is inconsiderate, ineffective and popular.


-Martin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-23 Thread Joel Jaeggli


Joel's iPad

On Jun 23, 2010, at 6:06 AM, Martin Rex m...@sap.com wrote:

 David Conrad wrote:
 
 On Jun 18, 2010, at 7:21 PM, Martin Rex wrote:
 
 What you described is a client with a pretty selfish attitude
 that doesn't care about network, servers and the other clients
 put into code.
 
 Well, no.  What I described was my understanding of a proposal to
 facilitate transition that comes with some benefits and some costs.
 If nothing else, given the truly inspirational amount of crap on the
 Internet, I find it a bit difficult to get worked up about a few
 additional packets at communication initiation that are actually beneficial.
 
 What you described results in a negative incentive for servers to
 become accessible under IPv6 as an alternative to IPv4.  That is a real
 problem.  If a large number of clients would follow your proposed
 strategy, ever server that announces an IPv6 address gets hit by
 twice the amount of connection requests, half of them being killed
 prenatal or during infancy.

We actually have TCP SYN cookies to protect against the impact of state 
generation on connect. As long as you, as the client, reply only to one 
SYN/ACK, everything is cool.

 
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-23 Thread Martin Rex
Joel Jaeggli wrote:
 
 On Jun 23, 2010, at 6:06 AM, Martin Rex m...@sap.com wrote:
  
  What you described results in a negative incentive for servers to
  become accessible under IPv6 as an alternative to IPv4.  That is a real
  problem.  If a large number of clients would follow your proposed
  strategy, ever server that announces an IPv6 address gets hit by
  twice the amount of connection requests, half of them being killed
  prenatal or during infancy.
 
 We have tcp syn cookies actually to protect against the impact of
 state generation on connect. As long as you as a client reply only
 to one syn/ack, everything is cool.

If it were the TCP stack that generated both original SYNs to decide,
then this might work.  But it is some app code several layers higher,
with non-negligible latency.  Originally, there were two choices
for the app: multi-threaded blocking connect()s or asynchronous
non-blocking ones.  In the blocking variant, it becomes pretty difficult
to prevent TCP from completing the handshake.

If the IETF really thinks that there is value in going down that path,
then it should define parallel IPv4+IPv6 connects at the network level,
so that either connection knows about the other one.  This should
be accompanied by a hint in DNS indicating that a node (a) technically
supports and (b) does not mind parallel connect()s.  When it is part
of the network stack, it could work with any existing app.


My knowledge of the TCP, IP and NAT stuff is pretty limited.
While the TCP SYN cookies might help to protect the server apps,
how many resources does a single SYN/ACK use at the kernel/TCP stack
layer, and how many resources do they use on the NATs in between?
Without a FIN/ACK or RST, only the closing client knows that the
resource can be freed.


-Martin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-23 Thread Carsten Bormann
On Jun 23, 2010, at 15:06, Martin Rex wrote:

 optimizing for their own interest alone

I don't know about you, but when I set up a server, I have a strong interest 
in my clients getting their data fast.
So whatever it takes to do that is in my interest.

BTW, initial analyses of iOS 4 (iPhone OS) show that they are always v6 enabled:
http://isc.sans.edu/diary.html?storyid=9058rss
It would be interesting to know how their v6/v4 preference goes.

They also appear to be using MAC-based addresses (as opposed to RFC4941/RFC3041 
privacy-enhanced).
If that is indeed true, that will be a *great* incentive for server operators 
to switch on NAT-free IPv6, once they start to understand the implications (no 
need for cookies any more :-).

Gruesse, Carsten

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-20 Thread Fred Baker

On Jun 20, 2010, at 10:36 AM, ned+i...@mauve.mrochek.com wrote:

 I said that right now
 it's extremely ill-advised to ship products that have IPv6 support enabled by
 default. This is one of the many reasons for this.

I would argue the opposite; people won't turn it on otherwise, due to lack of 
knowledge or negligence. What I would also argue is that the API that opens a 
session should try all available address pairs in relatively short order - on 
the order of tens of milliseconds between new attempts - keeping notes in some 
scratchpad memory on what worked, and trying those first on a subsequent 
access to the same name. If IPv4 addresses are what work, so be it.

Happy Eyeballs...
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-20 Thread Geert Jan de Groot

 I would argue the opposite; people won't turn it on otherwise, 
 due to lack of knowledge or negligence. What I would also argue is 
 that the API that opens a session should try all available 
 address pairs in relatively short order - on the order of 
 tens of milliseconds between new attempts 

Internet protocols historically have had good scaling properties
over widely varying bandwidths and RTTs.
Short probe intervals will cause difficulty if the RTT isn't also
on the order of tens of milliseconds.

There are places (I was in one only 2 weeks ago) where the RTT to the USA
was 400 ms, unless we were on the backup VSAT, in which case it was
2-4 seconds, with congestion and packet loss.
And they pay a lot more for bytes transferred than we're used to.

Not all the world has low latency and high bandwidth.
Adding a dependency on this in IPv6 will not help acceptance.

IMHO, there are 2 issues:
1. Global IPv6 connectivity doesn't exist - at best, it's a tunnel mess
   with bits and pieces continuously falling off, then getting reconnected
   again, and nobody seems to care - there's no effort to make connectivity
   more stable
2. A new client query type - AAAAA - (that's 5 A's, meaning give me IPv6
   unless it doesn't exist, in which case return me IPv4),
   with this result cached, would be helpful in high-latency
   situations

Geert Jan

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-20 Thread Mark Andrews

In message 20100620195212.4bd395...@berserkly.xs4all.nl, Geert Jan de Groot 
writes:
  I would argue the opposite; people won't turn it on otherwise, 
  due to lack of knowledge or negligence. What I would also argue is 
  that the API that opens a session should try all available 
  address pairs in relatively short order - on the order of 
  tens of milliseconds between new attempts 
 
 Internet protocols historically have had good scaling properties
 on widely varying bandwiths and RTT times.
 Short probe intervals will cause difficulty if RTT isn't also
 in the order of tens of milliseconds.
 
 There's places (I was in one only 2 weeks ago) where RTT to USA
 was 400 ms, unless we were on backup vsat, in which case it was
 2-4 seconds, with congestion and packet loss.
 And they pay a lot more for bytes transferred than we're used to.
 
 Not all the world has low latency and high bandwith.
 Adding a dependency on this in IPv6 will not help acceptance.
 
 IMHO, there's 2 issues:
 1. Global IPv6 connectivity doesn't exist - at best, it's a tunnel mess
with bits and pieces continuously falling off, then getting reconnected
again, and nobody seems to care - there's no effort to make connectivity
more stable

And many of the tunnels are not an issue, even if it would be better if they
were turned into native connections.

 2. A new client query type - AAAAA - (that's 5 A's, meaning give me IPv6
    unless it doesn't exist, in which case return me IPv4),
    with this result cached, would be helpful in high-latency
    situations

It would do *nothing* to help.  The client or the libraries it calls
can already do this if they want.  Making two lookups is not a real
issue.  You would need to upgrade both clients and libraries to
make this change without breaking existing applications.  Just add
a new flag to getaddrinfo() and upgrade the clients to set it.

While you are upgrading getaddrinfo(), change the sorting order of
the addresses you return and you have addressed most of the tunnel
issues as well.
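
In application terms (not the libc change described above, just an editor's
sketch of the effect), the sorting tweak amounts to something like:

    import socket

    def getaddrinfo_v6_first(host, port, prefer_v6=True):
        # Wrapper that re-sorts getaddrinfo() results so the preferred
        # family is tried first; callers fall back down the list on failure.
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        preferred = socket.AF_INET6 if prefer_v6 else socket.AF_INET
        # Stable sort keeps getaddrinfo's order within each family.
        return sorted(infos, key=lambda ai: ai[0] != preferred)

    # for family, stype, proto, _, addr in getaddrinfo_v6_first("example.net", 80):
    #     ... try addr, fall back to the next entry on failure ...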

Mark

 Geert Jan
 
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-20 Thread Joe Abley

On 2010-06-20, at 15:52, Geert Jan de Groot wrote:

 IMHO, there's 2 issues:
 1. Global IPv6 connectivity doesn't exist - at best, it's a tunnel mess
   with bits and pieces continuously falling off, then getting reconnected
   again, and nobody seems to care - there's no effort to make connectivity
   more stable

I think this is an over-statement (at least, if you consider that global IPv4 
connectivity *does* exist, which I might choose to argue about over an open 
pack of Stroopwafels).

 2. A new client query type - AAAAA - (that's 5 A's, meaning give me IPv6
   unless it doesn't exist, in which case return me IPv4),
   with this result cached, would be helpful in high-latency
   situations

I haven't run the numbers, but my instinct is that this is not a problem worth 
solving. If a thousand out of every thousand and one queries is answered from 
the cache, then optimising a thousand and two (AAAA and A) back to a thousand 
and one isn't going to make a perceptible difference to anybody, at the cost of 
interop and fallback in a world where AAAAA is not universally available.

It would be good if someone with access to a nice variety of query dumps from 
resolvers in various situations was able to estimate the practical impact on 
the end user of optimising (A, AAAA) into (AAAAA -- nice name), since this idea 
is a good enough one that (plain to see) it keeps coming up. If it has value, 
let's see the numbers and do something about it. If not, let's put it to bed.


Joe
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-19 Thread David Conrad
On Jun 18, 2010, at 7:21 PM, Martin Rex wrote:
 EVERY server is trivially susceptible to DoS attacks.
 That is no such thing as a server that is not.

Not so interested in getting into a pedantic argument on DoS susceptibility.

 What you described is a client with a pretty selfish attitude
 that doesn't care about network, servers and the other clients
 put into code.

Well, no.  What I described was my understanding of a proposal to facilitate 
transition that comes with some benefits and some costs.  If nothing else, 
given the truly inspirational amount of crap on the Internet, I find it a bit 
difficult to get worked up about a few additional packets at communication 
initiation that are actually beneficial.

 The concept works only as long as very few individuals try to
 get an unfair advantage over the rest.  But it definitely is
 doomed if EVERYONE, or even a larger number of people would
 practice this.

We seem to be talking about different things.

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-19 Thread David Conrad
Ned,

On Jun 17, 2010, at 2:18 PM, Ned Freed wrote:
 Well, first of all, there are plenty of places that do not enjoy the benefits
 of all that fancy stuff. What may be a tiny bit of meaningless overhead may
 be something else entirely for someone else.

Perhaps.  However, since connecting to the Internet today implies you are 
constantly flooded with probes of various flavors from myriad malware and 
miscreants, I have some skepticism that anyone would even notice a few 
additional packets/kbytes of data during communication set up.  And in such 
cases, I'd imagine connection status info would be cached (like most resolvers 
cache EDNS0 support in DNS).

 And those failed lookups have a real support overhead cost. Among other
 things, they create entries in application and firewall logs.

I would think the delays implied by IPv6-then-IPv4 communication setup failure 
would result in _far_ more calls than calls from the (relatively) few people 
who look at application and firewall logs (ever look at the console log on your 
average MacOSX system? I doubt Apple gets many calls despite the tremendous 
amount of noise that gets logged).

 At this point I wouldn't consider shipping an application that doesn't support
 at least basic IPv6 connectivity, but there's also no way I'd consider 
 shipping
 an application that has that support enabled by default, because it's
 guaranteed to be a support call generator.

An understandable and conservative approach; however, won't the implication of 
taking this route be that in the longer term your products will, by default, 
generate support calls, as customers will not be able to connect to IPv6-only 
sites (or will connect sub-optimally to multi-layer NATv4 sites)?

[On APIs]
 At this point the work is really too little too late.

I've read long ago that back in the 80s, the guy who wrote make(1) realized 
that makefile's use of whitespace was problematic, but the user base of about 
50 people was too large to make any changes...

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-18 Thread Phillip Hallam-Baker
Lets make sure this does not become a bash Apple affair.

Apple's AirPort is the only WiFi hardware I have had that has not stopped
working within 9 months due to shoddy manufacture. I have had every brand
you can name, and every one failed in the same way: connections became flaky
and then the router started requiring regular resets.

Hang on a moment, the rant is relevant to the IPv6 transition

When I mentioned the hardware issue to a certain tech-savvy person known to
us all, they reported having no problems at all with their routers. Then I
watched them using said router and rebooting it without noticing they had
done it.

"Can be made to work for a geek" is a much lower bar than the one we face
here. The premise is that we need the IPv6 transition because we have
billions of users. It has to work absolutely flawlessly to be useful.


On Thu, Jun 17, 2010 at 7:38 AM, Sabahattin Gucukoglu 
m...@sabahattin-gucukoglu.com wrote:

 Just in case someone here wants to take sides, have a look at this thread
 on the IPv6 discussion list at Apple:
 http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
 (the thread actually goes back earlier than that, but I can't be bothered
 going looking for it because I can't stand that awful PiperMail interface)

 Summary: it is a problem for some people, notably content providers, that
 connectivity losses result from a preference to use (advertised) v6 routes.
  Mac OS X, in particular, has this habit of treating even the commonplace
 6to4 routes, which of course fail occasionally for a number of reasons, just
 as native IPv6, and preferring them.  It has no address selection table, so
 it will flunk even when IPv4 would have worked fine, i.e., will not treat
 common v4-NAT environment as global scope.

 My take: it's not their fault.  Transition mechanisms must improve, because
 they're needed even into the IPv4 twilight.  However, it's been noted before
 that an API for selection of the protocol based on application requirements
 is a solution to this particular paradox, and I agree with that.

 Cheers,
 Sabahattin
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf




-- 
Website: http://hallambaker.com/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


RE: The IPv6 Transitional Preference Problem

2010-06-18 Thread Dan Wing
 -Original Message-
 From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On 
 Behalf Of Sabahattin Gucukoglu
 Sent: Thursday, June 17, 2010 4:38 AM
 To: ietf@ietf.org
 Subject: The IPv6 Transitional Preference Problem
 
 Just in case someone here wants to take sides, have a look at 
 this thread on the IPv6 discussion list at Apple:
 http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
 (the thread actually goes back earlier than that, but I can't 
 be bothered going looking for it because I can't stand that 
 awful PiperMail interface)
 
 Summary: it is a problem for some people, notably content 
 providers, that connectivity losses result from a preference 
 to use (advertised) v6 routes.  Mac OS X, in particular, has 
 this habit of treating even the commonplace 6to4 routes, 
 which of course fail occasionally for a number of reasons, 
 just as native IPv6, and preferring them.  It has no address 
 selection table, so it will flunk even when IPv4 would have 
 worked fine, i.e., will not treat common v4-NAT environment 
 as global scope.
 
 My take: it's not their fault.  Transition mechanisms must 
 improve, because they're needed even into the IPv4 twilight.  
 However, it's been noted before that an API for selection of 
 the protocol based on application requirements is a solution 
 to this particular paradox, and I agree with that.

My take:  fix it in the delay-sensitive applications.  That
means Safari, Firefox, IE, etc., as we describe in Happy
Eyeballs, http://tools.ietf.org/html/draft-wing-http-new-tech-00
which effectively says:  tend to prefer IPv6 (or IPv4) as long
as IPv6 (or IPv4) is working well.  If it's a new network
connection, try both at the same time (because the host does
not yet know which will work well).
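
A small sketch of that policy (editor's illustration; the scoring thresholds
and the notion of a "network_id" key, e.g. the SSID or default gateway, are
placeholders): remember how each family has been doing on the current network,
prefer the one that is working well, and race both when the record is new or
inconclusive.

    _family_score = {}   # (network_id, family) -> streak: positive = successes

    def record_result(network_id, family, ok):
        key = (network_id, family)
        prev = _family_score.get(key, 0)
        _family_score[key] = (max(prev, 0) + 1) if ok else (min(prev, 0) - 1)

    def pick_strategy(network_id):
        v6 = _family_score.get((network_id, "v6"), 0)
        v4 = _family_score.get((network_id, "v4"), 0)
        if v6 >= 3 and v6 >= v4:
            return "prefer-v6"       # IPv6 has been working well here
        if v4 >= 3 and v4 > v6:
            return "prefer-v4"
        return "race-both"           # new or flaky network: try both at once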

-d


 Cheers,
 Sabahattin
 ___
 Ietf mailing list
 Ietf@ietf.org
 https://www.ietf.org/mailman/listinfo/ietf

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


The IPv6 Transitional Preference Problem

2010-06-17 Thread Sabahattin Gucukoglu
Just in case someone here wants to take sides, have a look at this thread on 
the IPv6 discussion list at Apple:
http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
(the thread actually goes back earlier than that, but I can't be bothered going 
looking for it because I can't stand that awful PiperMail interface)

Summary: it is a problem for some people, notably content providers, that 
connectivity losses result from a preference to use (advertised) v6 routes.  
Mac OS X, in particular, has this habit of treating even the commonplace 6to4 
routes, which of course fail occasionally for a number of reasons, just like 
native IPv6, and preferring them.  It has no address selection table, so it 
will flunk even when IPv4 would have worked fine, i.e., it will not treat a 
common v4-NAT environment as global scope.

My take: it's not their fault.  Transition mechanisms must improve, because 
they're needed even into the IPv4 twilight.  However, it's been noted before 
that an API for selection of the protocol based on application requirements is 
a solution to this particular paradox, and I agree with that.

Cheers,
Sabahattin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Arnt Gulbrandsen

On 06/17/2010 01:38 PM, Sabahattin Gucukoglu wrote:

Just in case someone here wants to take sides, have a look at this thread on 
the IPv6 discussion list at Apple:
http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
(the thread actually goes back earlier than that, but I can't be bothered going 
looking for it because I can't stand that awful PiperMail interface)


What I've never understood is why (almost) everyone tries addresses in 
sequence instead of in parallel.


Even applications that routinely open two or more concurrent connections 
to the server first try IPvX, then wait many seconds, then try IPvY. Why 
not try both in parallel and use whatever address answers first?


Arnt
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Sabahattin Gucukoglu
On 17 Jun 2010, at 13:30, Arnt Gulbrandsen wrote:
 On 06/17/2010 01:38 PM, Sabahattin Gucukoglu wrote:
 Just in case someone here wants to take sides, have a look at this thread on 
 the IPv6 discussion list at Apple:
 http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
 (the thread actually goes back earlier than that, but I can't be bothered 
 going looking for it because I can't stand that awful PiperMail interface)
 
 What I've never understood is why (almost) everyone tries addresses in 
 sequence instead of in parallel.
 
 Even applications that routinely open two or more concurrent connections to 
 the server first try IPvX, then wait many seconds, then try IPvY. Why not try 
 both in parallel and use whatever address answers first?

It's Apple we're talking about here.  Have a look at this for some nasty 
surprises:
http://www.fix6.net/archives/2010/03/06/the-strange-behavior-of-apples-mdnsresponder/

Admittedly this is just for DNS, but I think it illustrates the general 
problem, you can't win, you can't break even, and you can't even quit the game 
with this one.

Cheers,
Sabahattin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Wes Beebee
 On 17 Jun 2010, at 13:30, Arnt Gulbrandsen wrote:
 On 06/17/2010 01:38 PM, Sabahattin Gucukoglu wrote:
 Just in case someone here wants to take sides, have a look at this thread on
 the IPv6 discussion list at Apple:
 http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html
 (the thread actually goes back earlier than that, but I can't be bothered
 going looking for it because I can't stand that awful PiperMail interface)
 
 What I've never understood is why (almost) everyone tries addresses in
 sequence instead of in parallel.
 
 Even applications that routinely open two or more concurrent connections to
 the server first try IPvX, then wait many seconds, then try IPvY. Why not try
 both in parallel and use whatever address answers first?
 
 It's Apple we're talking about here.  Have a look at this for some nasty
 surprises:
 http://www.fix6.net/archives/2010/03/06/the-strange-behavior-of-apples-mdnsresponder/
 
 Admittedly this is just for DNS, but I think it illustrates the general
 problem, you can't win, you can't break even, and you can't even quit the game
 with this one.

It seems to me that you would want to kick off both the DNS and the
connection in parallel.  In thread 1, do the DNS A lookup followed by the IPv4
connection.  In thread 2, do the DNS AAAA lookup followed by the IPv6
connection.  Whichever thread completes BOTH the DNS lookup AND the connection
first kills the other thread.  That way, you don't have the problem of "my
AAAA succeeded first, but my connection can only be made over IPv4"...

You can do this cleanly by encapsulating BOTH the DNS lookup AND the
connection in the same operation using connect-by-name...
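
A sketch of that two-thread race (editor's illustration; Python can't cleanly
kill a thread, so here the losing thread simply discards its own connection
when it finishes):

    import queue, socket, threading

    def race_dns_and_connect(host, port, timeout=5.0):
        results = queue.Queue()
        lock = threading.Lock()
        state = {"winner": None}

        def worker(family):
            try:
                # The lookup (A for AF_INET, AAAA for AF_INET6) and the
                # connect both happen inside this thread.
                fam, stype, proto, _, addr = socket.getaddrinfo(
                    host, port, family, socket.SOCK_STREAM)[0]
                s = socket.socket(fam, stype, proto)
                s.settimeout(timeout)
                s.connect(addr)
            except OSError:
                results.put(None)
                return
            with lock:
                if state["winner"] is None:
                    state["winner"] = s
                    results.put(s)
                    return
            s.close()                # lost the race: discard quietly

        for family in (socket.AF_INET6, socket.AF_INET):
            threading.Thread(target=worker, args=(family,), daemon=True).start()

        for _ in range(2):
            try:
                sock = results.get(timeout=timeout + 1)
            except queue.Empty:
                break
            if sock is not None:
                return sock          # first completed lookup+connect wins
        raise OSError("both the A/IPv4 and AAAA/IPv6 attempts failed")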

- Wes

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Martin Rex
Arnt Gulbrandsen wrote:
 
 What I've never understood is why (almost) everyone tries addresses in 
 sequence instead of in parallel.
 
 Even applications that routinely open two or more concurrent connections 
 to the server first try IPvX, then wait many seconds, then try IPvY. Why 
 not try both in parallel and use whatever address answers first?

Maybe because it would be a big waste of network bandwidth and close
to a Denial of Service (DoS) attack if every client would try every
IPv4 and IPv6 address in parallel that it can get hold of for a hostname.

Similarly, it could require a major redesign of lots of applications
in order to be able to manage several parallel requests
-- multi-threaded with blocking DNS-lookups and blocking connect()s
or parallel asynchronous/non-blocking DNS-lookups and connect()s.

I was hit by a funny Bug in Microsoft Windows NT 3.1 in 1993 when using
asynchronous connect()s for the first time.  After the connection
timeout (60 seconds) I would still get an event delivered with
the original socket number -- even if I had long closed that
connect-pending socket and been reassigned the same socket number
on the next socket() call from the OS.


-Martin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


[Fwd: Re: [Fwd: Re: The IPv6 Transitional Preference Problem]]

2010-06-17 Thread Denis Walker
Hi

I pointed this issue to the DNS group at the RIPE NCC and their response
is below.

regards
Denis
Business Analyst
RIPE NCC Database Group


 Original Message 
Subject:Re: [Fwd: Re: The IPv6 Transitional Preference Problem]
Date:   Thu, 17 Jun 2010 16:53:53 +0200
From:   Anand Buddhdev ana...@ripe.net
To: Denis  de...@ripe.net
CC: 
References: 4c1a311a.3040...@ripe.net



It might be an issue with earlier versions of OS X, but on my Snow
Leopard (10.6.3) system, it's not an issue any more. Here's an example:

I type 'www.apnic.net' in my browser, and 2 DNS queries go out. I get
back a response with the A record first, and next I get back the
response with the AAAA record. However, my system does not ignore the
second response. It does make use of the AAAA record, and my browser
connects to APNIC's site over IPv6.

16:40:34.638265 IP 193.0.2.23.51663 > 193.0.19.6.53: 21148+ A?
www.apnic.net. (31)
16:40:34.641143 IP 193.0.19.6.53 > 193.0.2.23.51663: 21148 1/9/12 A
202.12.29.230 (491)
16:40:34.648670 IP 193.0.2.23.51652 > 193.0.19.6.53: 6434+ AAAA?
www.apnic.net. (31)
16:40:34.650519 IP 193.0.19.6.53 > 193.0.2.23.51652: 6434 1/9/12 AAAA
2001:dc0:2001:11::211 (503)

On 17/06/2010 16:28, Denis wrote:

 Hi
 
 Does this affect us in any way with all the MAC machines we use
 here?
 
 cheers denis
 
 
  Original Message  Subject:   Re: The IPv6
 Transitional Preference Problem Date: Thu, 17 Jun 2010 15:05:09
 +0100 From:   Sabahattin Gucukoglu m...@sabahattin-gucukoglu.com 
 Reply-To: Sabahattin Gucukoglu 
 mail-dated-1279375511.e59...@sabahattin-gucukoglu.com To:
 ietf@ietf.org References: 
 150fa845-8f32-436d-962c-33a0baefe...@sabahattin-gucukoglu.com 
 4c1a1561.8010...@gulbrandsen.priv.no
 
 
 
 On 17 Jun 2010, at 13:30, Arnt Gulbrandsen wrote:
 On 06/17/2010 01:38 PM, Sabahattin Gucukoglu wrote:
 Just in case someone here wants to take sides, have a look at
 this thread on the IPv6 discussion list at Apple: 
 http://lists.apple.com/archives/ipv6-dev/2010/Jun/msg0.html 
 (the thread actually goes back earlier than that, but I can't be
 bothered going looking for it because I can't stand that awful
 PiperMail interface)
 
 What I've never understood is why (almost) everyone tries addresses
 in sequence instead of in parallel.
 
 Even applications that routinely open two or more concurrent
 connections to the server first try IPvX, then wait many seconds,
 then try IPvY. Why not try both in parallel and use whatever
 address answers first?
 
 It's Apple we're talking about here.  Have a look at this for some
 nasty surprises: 
 http://www.fix6.net/archives/2010/03/06/the-strange-behavior-of-apples-mdnsresponder/

  Admittedly this is just for DNS, but I think it illustrates the
 general problem, you can't win, you can't break even, and you can't
 even quit the game with this one.
 
 Cheers, Sabahattin ___ 
 Ietf mailing list Ietf@ietf.org 
 https://www.ietf.org/mailman/listinfo/ietf
 
 
 


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread David Conrad
On Jun 17, 2010, at 12:18 PM, Martin Rex wrote:
 Maybe because it would be a big waste of network bandwidth and close
 to a Denial of Service (DoS) attack if every client would try every
 IPv4 and IPv6 address in parallel that it can get hold of for a hostname.

In a world of broadband, gigabit ethernet interfaces, high speed wireless, 
etc., I have some skepticism that attempting both v4 and v6 connections in 
parallel is a big waste, much less anywhere near close to a Denial of 
Service (DoS) attack.

 Similarly, it could require a major redesign of lots of applications
 in order to be able to manage several parallel requests
 -- multi-threaded with blocking DNS-lookups and blocking connect()s
 or parallel asynchronous/non-blocking DNS-lookups and connect()s.

Well, yes.  However, applications already have to be modified to deal with 
IPv6.  I'd agree that modifying applications from a simple synchronous path to 
dealing with parallel asynchronous connections would not be a good idea. 
Personally, I'm of the strong opinion that the socket() API is fundamentally 
broken, as is the separation of name lookup from connection initiation/address 
management. In the vast majority of cases, applications should not know or care 
about anything but the destination name/service.  As I understand it, new 
APIs are evolving towards something conceptually like

connection_id = connect_by_name( hostname, service )

allowing the kernel to manage the address, expiration of the address, name to 
address mapping change, etc. transparently to the application.
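
A rough Python analogue of that shape (editor's sketch of the concept only; a
real connect-by-name facility would live in the stack and also handle caching,
expiry and renumbering, which this toy does not):

    import socket

    def connect_by_name(hostname, service, timeout=5.0):
        # The caller supplies only a name and a service; the helper walks
        # the getaddrinfo() results (v6 and v4 alike) until one connects.
        last_err = None
        for family, stype, proto, _, addr in socket.getaddrinfo(
                hostname, service, type=socket.SOCK_STREAM):
            try:
                s = socket.socket(family, stype, proto)
                s.settimeout(timeout)
                s.connect(addr)
                return s             # the "connection_id" in the text above
            except OSError as err:
                last_err = err
        raise last_err or OSError("no usable address for %r" % hostname)

    # conn = connect_by_name("www.example.org", "http")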

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Simon Perreault
Please also refer to the results of the DNS64/NAT64 experiment that we
ran at IETF 77. Users of the service hit a bug triggered by parallel
resolving in one particular operating system; we believe the bug is
specific to that implementation. Parallel resolving is still a Good
Idea(TM), but this shows that implementing it correctly is not trivial.

See:
http://viagenie.ca/publications/2010-03-26-ietf-behave-nat64-experiment.pdf
http://www.ietf.org/proceedings/77/minutes/behave.txt

Hopefully it will be fixed by IETF 78... ;)

Simon
-- 
NAT64/DNS64 open-source -- http://ecdysis.viagenie.ca
STUN/TURN server -- http://numb.viagenie.ca
vCard 4.0   -- http://www.vcarddav.org
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Simon Perreault
On 2010-06-17 12:55, David Conrad wrote:
 Well, yes.  However, applications already have to be modified to deal with 
 IPv6.  I'd agree that modifying applications from a simple synchronous path 
 to dealing with parallel asynchronous connections would not be a good idea. 
 Personally, I'm of the strong opinion that the socket() API is fundamentally 
 broken as is the separation of naming lookup from connection 
 initiation/address management. In the vast majority of cases, applications 
 should not know or care about anything but the destination name/service. 
  As I understand it, new APIs are evolving towards something conceptually like
 
 connection_id = connect_by_name( hostname, service )
 
 allowing the kernel to manage the address, expiration of the address, name to 
 address mapping change, etc. transparently to the application.

Exactly. One rule of thumb I've been following regarding migrating
applications to IPv6 is to banish the use of struct sockaddr (and
variants such as _in, _in6, _storage, etc.). If you never use that
structure, your app is probably IPv6-ready, or very close to it.
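
For illustration only (a made-up minimal example, not code lifted from any
real application), this is the kind of path that rule of thumb flags - the
explicit sockaddr_in is the giveaway that it can only ever speak IPv4:

/* Illustrative only: the pattern the rule of thumb is meant to catch. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

int legacy_connect(const char *hostname, unsigned short port)
{
    struct hostent *he = gethostbyname(hostname);   /* A records only */
    struct sockaddr_in sin;                          /* the tell-tale struct */
    int fd;

    if (he == NULL || he->h_addr_list[0] == NULL)
        return -1;

    memset(&sin, 0, sizeof sin);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    memcpy(&sin.sin_addr, he->h_addr_list[0], sizeof sin.sin_addr);

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&sin, sizeof sin) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

Rewriting it around getaddrinfo(), along the lines of the connect-by-name
sketch earlier in the thread, removes every explicit address structure from
the application code and makes it family-agnostic.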

Simon
-- 
NAT64/DNS64 open-source -- http://ecdysis.viagenie.ca
STUN/TURN server -- http://numb.viagenie.ca
vCard 4.0   -- http://www.vcarddav.org
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Martin Rex
David Conrad wrote:
 
 On Jun 17, 2010, at 12:18 PM, Martin Rex wrote:
  Maybe because it would be a big waste of network bandwidth and close
  to a Denial of Service (DoS) attack if every client would try every
  IPv4 and IPv6 address in parallel that it can get hold of for a hostname.
 
 In a world of broadband, gigabit ethernet interfaces, high speed
 wireless, etc., I have some skepticism that attempting both v4 and v6
 connections in parallel is a big waste,

I don't know what broadband for the average home user looks like
where you are, but here it's typically <= 640 kbit/s upstream.



 much less anywhere near close to a Denial of Service (DoS) attack.

If you look at hostnames such as hp.com, which has 13 IPv4 addresses
listed in the DNS, it would probably have a significant effect on their
infrastructure if every client suddenly attempted 13 parallel
TCP connects and killed 12 of them pre-natally or in infancy.

One would be needlessly flooding the listen queues of many servers.
Effectively, little distinguishes such clients from SYN-flood attackers.


-Martin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread David Conrad
Martin,

On Jun 17, 2010, at 1:24 PM, Martin Rex wrote:
 I don't know what broadband for the average home user looks like
 where you are, but here it's typically <= 640 kbit/s upstream.

And?  How much bandwidth does a parallel connection use up?  Presumably, if this 
were felt to be a problem, kernels could cache what works and what doesn't.

 much less anywhere near close to a Denial of Service (DoS) attack.
 
 If you look at hostnames such as hp.com, which has 13 IPv4 addresses
 listed in the DNS, it would probably have a significant effect on their
 infrastructure if every client suddenly attempted 13 parallel
 TCP connects and killed 12 of them pre-natally or in infancy.

I'd be surprised; if they even noticed, that would suggest they were trivially 
susceptible to D(D)oS attacks.

However, I thought we were talking about doing a parallel lookup/connect to an 
IPv6 address at the same time as an IPv4 lookup/connect.  I don't see any 
particular point in opening parallel lookups/connects to multiple IPv4 (or IPv6) 
addresses.
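
To make concrete what "one v6 plus one v4 connect in parallel" might amount to,
here is a rough sketch (illustrative only, with error handling thinned out; not
a description of what any shipping stack does): start one non-blocking connect
per address family and keep whichever completes first.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netdb.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Start a non-blocking connect to the first address of the given family. */
static int start_connect(const char *host, const char *serv, int family)
{
    struct addrinfo hints, *res;
    int fd;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = family;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, serv, &hints, &res) != 0 || res == NULL)
        return -1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0) {
        fcntl(fd, F_SETFL, O_NONBLOCK);             /* don't block on connect */
        connect(fd, res->ai_addr, res->ai_addrlen); /* EINPROGRESS expected */
    }
    freeaddrinfo(res);
    return fd;
}

/* Race one IPv6 and one IPv4 connect; return whichever completes first. */
int connect_race(const char *host, const char *serv)
{
    int fd6 = start_connect(host, serv, AF_INET6);
    int fd4 = start_connect(host, serv, AF_INET);
    int winner = -1, err = 0;
    socklen_t len = sizeof err;
    fd_set wfds;

    while (winner < 0 && (fd4 >= 0 || fd6 >= 0)) {
        FD_ZERO(&wfds);
        if (fd6 >= 0) FD_SET(fd6, &wfds);
        if (fd4 >= 0) FD_SET(fd4, &wfds);
        if (select((fd4 > fd6 ? fd4 : fd6) + 1, NULL, &wfds, NULL, NULL) <= 0)
            break;
        if (fd6 >= 0 && FD_ISSET(fd6, &wfds)) {     /* v6 attempt resolved */
            getsockopt(fd6, SOL_SOCKET, SO_ERROR, &err, &len);
            if (err == 0) winner = fd6; else { close(fd6); fd6 = -1; }
        }
        if (winner < 0 && fd4 >= 0 && FD_ISSET(fd4, &wfds)) {
            getsockopt(fd4, SOL_SOCKET, SO_ERROR, &err, &len);
            if (err == 0) winner = fd4; else { close(fd4); fd4 = -1; }
        }
    }
    if (fd6 >= 0 && fd6 != winner) close(fd6);      /* abandon the loser */
    if (fd4 >= 0 && fd4 != winner) close(fd4);
    return winner;                                   /* connected fd, or -1 */
}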

Regards,
-drc

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Arnt Gulbrandsen

On 06/17/2010 07:24 PM, Martin Rex wrote:

If you look at hostnames such as hp.com, which has 13 IPv4 addresses
listed in the DNS, it would probably have a significant effect on their
infrastructure if every client suddenly attempted 13 parallel
TCP connects and killed 12 of them pre-natally or in infancy.


Set up a strawman and shoot him down. Feel clever.

I said try v4 and v6 in parallel, not try every IPv4 address in parallel.

Arnt
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread ned+ietf
 On Jun 17, 2010, at 12:18 PM, Martin Rex wrote:
  Maybe because it would be a big waste of network bandwidth and close
  to a Denial of Service (DoS) attack if every client would try every
  IPv4 and IPv6 address in parallel that it can get hold of for a hostname.

 In a world of broadband, gigabit ethernet interfaces, high speed wireless,
 etc., I have some skepticism that attempting both v4 and v6 connections in
 parallel is a big waste, much less anywhere near close to a Denial of
 Service (DoS) attack.

Well, first of all, there are plenty of places that do not enjoy the benefits
of all that fancy stuff. What is a tiny bit of meaningless overhead in one place
may be something else entirely for someone else.

But this is all really beside the point. The big problem, as John Klensin, PHB,
and others have pointed out repeatedly but to no avail, is support overhead.
Not network overhead.

And those failed lookups have a real support overhead cost. Among other things,
they create entries in application and firewall logs. In many cases lots of
entries. And when customers see those entries - and don't understand what they
mean - they call up and complain.

And please don't try and convince me this won't happen. I know better. I've
been on the other end of these sorts of calls more times than I can count.

At this point I wouldn't consider shipping an application that doesn't support
at least basic IPv6 connectivity, but there's also no way I'd consider shipping
an application that has that support enabled by default, because it's
guaranteed to be a support call generator.

  Similarly, it could require a major redesign of lots of applications
  in order to be able to manage several parallel requests
  -- multi-threaded with blocking DNS-lookups and blocking connect()s
  or parallel asynchronous/non-blocking DNS-lookups and connect()s.

 Well, yes.  However, applications already have to be modified to deal with
 IPv6.  I'd agree that modifying applications from a simple synchronous path to
 dealing with parallel asynchronous connections would not be a good idea.
 Personally, I'm of the strong opinion that the socket() API is fundamentally
 broken as is the separation of naming lookup from connection 
 initiation/address
 management. In the vast majority of cases, applications should not know or 
 care
 about anything but the destination name/service.  As I understand it, new
 APIs are evolving towards something conceptually like

 connection_id = connect_by_name( hostname, service )

 allowing the kernel to manage the address, expiration of the address, name to
 address mapping change, etc. transparently to the application.

The situation is a lot more complex than this. For one thing, most new
applications are being written in languages that provide their own
connectbyname APIs, often tightly bound into the overall I/O infrastructure.

An immediate consequence of this is that the evolution of the socket API is
no longer directly relevant to many if not most application developers. They're
going to use the calls the language they've chosen provides, end of story. To
the extent the evolution of the socket interface retains any indirect
relevance, it is that the language implementors may decide to take advantage of
it. But that's unlikely because (a) Language developers are especially
sensitive to being able to run seamlessly on old, and sometimes very old,
operating system versions, versions that do not have any form of the stuff
we're talking about, and (b) they've already done the work to write their own
connectbyname API they know works, so why switch?

And for applications written in a language like C, the present lack of a
well-specified and full-featured cross-platform API of this kind is a problem
that, in a very real sense, is now too late to solve. When we surveyed the
platforms and OS versions we have to support, several had no connectbyname call
of any sort available, and of those that did, not a single one had a call with
the feature set we needed. We had no choice but to write our own, and I doubt
very much we're alone in reaching this conclusion.

The painful reality is that in order to be effective a connectbyname API needed
to have been specified at least five years ago and probably more like ten. At
this point the work is really too little too late.

Ned
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: The IPv6 Transitional Preference Problem

2010-06-17 Thread Byron Ellacott
On 18/06/2010, at 12:05 AM, Sabahattin Gucukoglu wrote:

 It's Apple we're talking about here.  Have a look at this for some nasty 
 surprises:
 http://www.fix6.net/archives/2010/03/06/the-strange-behavior-of-apples-mdnsresponder/

Apple's mDNS responder is open source, and you can read through the relevant 
function yourself at:

http://www.opensource.apple.com/source/Libinfo/Libinfo-330.3/lookup.subproj/mdns_module.c

The function you're looking for is _mdns_query_mDNSResponder.  The block 
comment describing the behaviour is consistent both with the code beneath it 
and with observed behaviour on the wire.  I would not call it a nasty surprise.

If you were to look at si_addrinfo_list_from_hostent in si_getaddrinfo.c you 
might find a genuine nasty surprise - AAAA results always appear before A
results on OS X, even if the local host has only loopback IPv6 addresses.  (And
even if the local host has global unicast v4 and 6to4 v6.)
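
For anyone who wants to check their own platform, here is a small sketch that
simply prints getaddrinfo() results in the order the library returns them,
with and without AI_ADDRCONFIG (the hostname is a placeholder, and how
loopback-only IPv6 interacts with AI_ADDRCONFIG varies by implementation):

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>

/* Print getaddrinfo() results for a name, in the order they come back. */
static void dump(const char *name, int flags)
{
    struct addrinfo hints, *res, *ai;
    char buf[NI_MAXHOST];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = flags;
    if (getaddrinfo(name, "80", &hints, &res) != 0)
        return;
    for (ai = res; ai != NULL; ai = ai->ai_next)
        if (getnameinfo(ai->ai_addr, ai->ai_addrlen, buf, sizeof buf,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("  %s\n", buf);
    freeaddrinfo(res);
}

int main(void)
{
    printf("default flags:\n");
    dump("www.example.com", 0);               /* placeholder dual-stack name */
    printf("with AI_ADDRCONFIG:\n");
    dump("www.example.com", AI_ADDRCONFIG);
    return 0;
}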

  Byron

-- not speaking for my employer
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf