Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-21 Thread Masataka Ohta
Mark Nottingham wrote:

 Then, protocols should not have any authoritative specification,
 should never be standardized, and there should be no central
 authority to manage different versions of them.
 
 From a PRISM viewpoint, the cost of parsing different formats,
 understanding different wire protocols, etc. is trivial.

That reasoning denies your own point:

: I draw the opposite conclusion, actually. With good standards,
: we can encourage a larger number of services to exist,
: raising the cost of monitoring them all.

So, by denying that point, you agree with me.

Note that the number of services != the number of service
providers.

 The real cost is negotiating with / bullying each provider into
 giving access. Especially if it's not hosted or doing business
 in a country you control.

That would hold only if the number of cloud providers were large.

However, because of economies of scale, the number of providers
tends to be small, and all of them naturally place a considerable
amount of hardware at the central part of the Internet, that is,
in the US, which means the providers are subject to USG control.

And the USG is not the only government we should be protected from.

 I should be able to choose my own data sync server, whether
 it's one I run, or one run by my paranoid friend, or by a
 local company, or a US company that's in bed with the NSA.

 The only secure way is to run your own.
 
 That's a very simplistic definition of secure.

See above for how simplistic your view is against the complex
nature of PRISM and the like, against which only the simplest
protection is effective.

Masataka Ohta



Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-20 Thread Masataka Ohta
Josh Howlett wrote:

 I confess that I am confused by much of this discussion.

Several people in the IETF may be under the control of the NSA.

 As I understand
 it, PRISM is not a signals intelligence activity; it only addresses that
 data at rest within those organisations who have partnered with the NSA.
 As such, improving protocol security will achieve nothing against PRISM;
 it is a socio-political issue that is outside of the scope of a technical
 standards organisation.

Right.

 As such the only practical way for a typical user to protect themselves
 against PRISM is to switch to other providers based in jurisdictions that
 provide the appropriate protections, or agitate to change the applicable
 laws within their own jurisdiction, where appropriate.

Not necessarily.

The proper protection is to avoid cloud services and have our
own end systems fully under our own control.

Toward that goal, the IETF should shut down all the cloud-related
WGs and never develop any protocol that promotes cloud services.

 This is not, of course, an argument not to improve the security of our
 protocols for other reasons, but let's please motivate this work
 correctly. It will yield a greater probability of success.

Using DH could protect us, until the USG starts deploying active attacks.

So, it is important to develop technologies to detect attacks
against DH.
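To illustrate why detection matters: classic finite-field Diffie-Hellman defeats a passive wiretap but not an active man-in-the-middle. A toy sketch (the parameters are illustrative only; real DH needs a large safe prime such as an RFC 3526 group):

```python
import hashlib
import secrets

# Toy parameters for illustration only; real DH needs a large safe
# prime (e.g., an RFC 3526 group), not p = 23.
p, g = 23, 5

def dh_keypair():
    """Return a (private, public) finite-field DH pair."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob

# A passive wiretap sees only a_pub and b_pub on the wire; computing
# the shared secret requires one of the private exponents.
alice_secret = pow(b_pub, a_priv, p)
bob_secret = pow(a_pub, b_priv, p)
assert alice_secret == bob_secret

# An active attacker instead substitutes her own public value in both
# directions and shares one key with each victim. Nothing in the
# exchange itself reveals this; detection needs an independent check,
# e.g., comparing key fingerprints over another channel.
m_priv, m_pub = dh_keypair()   # Mallory, in the middle
fp_alice = hashlib.sha256(str(pow(m_pub, a_priv, p)).encode()).hexdigest()
fp_bob = hashlib.sha256(str(pow(m_pub, b_priv, p)).encode()).hexdigest()
# Under a MITM, Alice's and Bob's fingerprints (usually) disagree.
```

The out-of-band fingerprint comparison at the end is one of the detection techniques the text calls for.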

Masataka Ohta



Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-20 Thread Masataka Ohta
(2013/09/20 21:15), Jari Arkko wrote:
 Josh, Stephen,
 
 It is important to understand the limitations of technology in this
 discussion. We can improve communications security, and in some
 cases reduce the amount information communicated. But we cannot
 help a situation where you are communicating with a party that
 you cannot entirely trust with technology alone.

We can discourage people from communicating with parties that are
under the full control of the USG, which is why the use of cloud
services should be discouraged; that is a technical issue.

Masataka Ohta


Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-20 Thread Masataka Ohta
Hannes Tschofenig wrote:

 We can discourage people from communicating with parties that are
 under the full control of the USG, which is why the use of cloud
 services should be discouraged; that is a technical issue.
 
 An open standardization process means that everyone can participate,
 including people who work for the government (directly or indirectly).

As long as a standard being developed is within the scope of
the process, yes.

 Whether you like what someone is putting forward is a completely
 different story but I hope you would at least look at the content before
 judging it.

Developing protocols that promote antisocial activities is even worse
than developing Ethernet/Wi-Fi protocols in the IETF.

 So, I believe this attitude against people and companies who may have
 had, or still have relationships with governments is counterproductive.

Protection from governments is not very productive, indeed, which
does not mean we shouldn't do it.

 On your argument against cloud standardization in the IETF I have two
 remarks, namely :
 
 * Cloud services (with whatever definition you use) indeed presents
 challenges for privacy and security.
 
 * There is no standardization in the IETF on something like the cloud.
 On the other hand  I have to say that almost every protocol we
 standardize in the IETF could be used in a cloud service. For example,
 many cloud services use HTTP. Should we stop working on HTTP?

For example, the following RFC:

RFC 6208  Cloud Data Management Interface (CDMI) Media Types
K. Sankar, A. Jones [April 2011] (TXT = 23187) (Status:
INFORMATIONAL) (Stream: IETF, WG: NON WORKING GROUP)

is a product of IETF to promote cloud service.

Masataka Ohta


Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-20 Thread Masataka Ohta
Mark Nottingham wrote:

 Not necessarily.

 The proper protection is to avoid cloud services and have our
 own end systems fully under control of ourselves.

 Toward the goal, IETF should shutdown all the cloud related
 WGs and never develop any protocol to promote cloud service.
 
 I draw the opposite conclusion, actually. With good standards,
 we can encourage a larger number of services to exist,
 raising the cost of monitoring them all.

The cost of monitoring should be large?

Then protocols should not have any authoritative specification,
should never be standardized, and there should be no central
authority to manage different versions of them.

 I should be able to choose my own data sync server, whether
 it's one I run, or one run by my paranoid friend, or by a
 local company, or a US company that's in bed with the NSA.

The only secure way is to run your own.

 Good standards allow that to happen.

I'm afraid you want to increase the monitoring cost.

Masataka Ohta



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-19 Thread Masataka Ohta
Jim Gettys wrote:
 Radio clock receivers often don't work where these devices are deployed
 (like in my basement).  Not enough view of the sky (and multiple layers of
 floors).  Radios are nice to have, but can't be guaranteed to work.

No, the problem with radio clocks is not their availability but that
they can easily be faked, which means they are no better than relying
on some NTP service, such as time.nist.gov, which is as
(un)trustworthy as the USG.

Masataka Ohta


Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-14 Thread Masataka Ohta
robert bownes wrote:

 A 1pulse per second aligned to GPS is good to a few ns.

GPS time would be accurate, if it were assured to be secure.

Masataka Ohta



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-14 Thread Masataka Ohta
Dickson, Brian wrote:

 In order to subvert or redirect a delegation, the TLD operator (or
 registrar) would need to change the DNS server name/IP, and replace the DS
 record(s).

Only to the victims to be deceived; the faked data can be served selectively.

 This would be immediately evident to the domain owner, when they query the
 TLD authority (delegation) servers.

Wrong. The domain owners can't know that some victims are being
supplied faked data.

Masataka Ohta



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-14 Thread Masataka Ohta
Martin Rex wrote:

 There is no problem with the assumption that trusted third party
 _could_ exist.

It couldn't.

What organization in US can be trusted against attacks by USG?

Note that Snowden demonstrated that even the USG failed to keep its
top secrets.

 The reason where PKI breaks badly is whenever the third party that
 Bob selected as _his_ third party is not a third party that Alice
 has voluntarily chosen herself to trust.  Instead, PKI forces
 Alice to trust dozens of third parties, one or more per every
 Bob out there.

In short, PKI is against the end-to-end principle, because
CAs are intelligent intermediate systems.

But if CAs really were trusted third parties, it would mean both
Alice and Bob could safely trust them.

Masataka Ohta


Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Masataka Ohta
Phillip Hallam-Baker wrote:

 3) A relying party thus requires a demonstration that is secure against a
 replay attack from one or more trusted parties to be assured that the time
 assertion presented is current but this need not necessarily be the same as
 the source of the signed time assertion itself.

 The real design decision is who you decide you are going to rely on for
 (3). TLS is proof against replay attack due to the exchange of nonces.

How can you get a secure time with which to securely confirm that a
TLS certificate has not expired?

Use yet another PKI?

Masataka Ohta


Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Masataka Ohta
Arturo Servin wrote:

 3) A relying party thus requires a demonstration that is secure against a
 replay attack from one or more trusted parties to be assured that the time
 assertion presented is current but this need not necessarily be the same as
 the source of the signed time assertion itself.

 The real design decision is who you decide you are going to rely on for
 (3). TLS is proof against replay attack due to the exchange of nonces.

 How can you get secure time to securely confirm that a certificate
 of TLS has not expired?

 Use yet another PKI?

  No, you have your own clock.

No, you can't, because the original assumption by Jim is:

 1) DNSSEC needs to have the time within one hour.  But these
 devices do not have TOY clocks (and arguably, never will, nor
 even probably should ever have them).

Even if you could, you couldn't be sure that the clock is accurate
enough.

Thus, PKIs requiring timestamps for expiration or CRLs are
broken.

Masataka Ohta


Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Masataka Ohta
Theodore Ts'o wrote:

 More importantly, what problem do people think DNSSEC is going to
 solve?

Insufficient revenue of registries.

 It is still a hierarchical model of trust.  So at the top, if you
 don't trust Verisign for the .COM domain and PIR for the .ORG domain
 (and for people who are worried about the NSA, both of these are US
 corporations), the whole system falls apart.

Right. PKI is fundamentally broken, because its fundamental
assumption that trusted third parties could exist is a total
fallacy.

Masataka Ohta



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Masataka Ohta
Ted Lemon wrote:

 This isn't _quite_ true.   DNSSEC supports trust anchors at
 any point in the hierarchy, and indeed I think the right
  model for DNSSEC is that you would install trust anchors
 for things you really care about, and manage them in the
 same way that you manage your root trust anchor.   E.g.,
 you'd install a trust anchor for your employer, and your
 bank, and maybe your local town government.   This is
  all future UI work, of course.

Operationally, that's not practical. Users can't manage
their trust anchors securely.

 Furthermore, if the root key is compromised and that is then
 used to substitute a bogus key, it isn't that hard to notice
 that this has happened, and indeed we ought to be
 systematically noticing these things.   So hacking the root
 key is certainly a valid threat, but there is a great deal
 more transparency in the DNSSEC system than in the TLS PKI,
 and that should mean that the system is more robust in the
 face of this kind of attack.

According to your theory, we don't need DNSSEC, because
cache poisoning attacks on plain DNS are easily detectable.

Masataka Ohta



Re: [DNSOP] Practical issues deploying DNSSEC into the home.

2013-09-12 Thread Masataka Ohta
robert bownes wrote:
 A 1pulse per second aligned to GPS is good to a few ns. Fairly
 straightforward to plug into even a OpenWrt type of router. Turn on
 the pps in NTP on the router and you are good to go.

Faking GPS signals is trivially easy.

Iran reportedly captured a US unmanned plane, apparently by
supplying it a false location.

Masataka Ohta


Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-07 Thread Masataka Ohta
Noel Chiappa wrote:

 There was actually a proposal a couple of weeks back in the WG to encrypt all
 traffic on the inter-xTR stage.

Making intermediate systems more intelligent is against
the end-to-end principle and is assured to fail.

Considering that Google, Facebook, Yahoo, etc., which are
end systems that many victims rely upon, are socially
compromised by the USG, it cannot protect the victims.

Worse, considering that the services of Microsoft, Apple, etc. are
socially compromised by the USG, end systems manufactured by
Microsoft, Apple, etc. are totally unsafe.

As for secure end systems, PCs running open-source UNIX are much
safer, even though the USG can still use many approaches to
compromise them.

Masataka Ohta


Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-09-05 Thread Masataka Ohta
John C Klensin wrote:

 Still, the draft may assure new usages compatible with each
 other.
 
 That is the hope.

The problem is that an existing usage and a new one may not be
compatible.

 If we need subtypes because 16bit RRTYPE space is not enough
 (I don't think so), the issue should be addressed by itself
 by introducing a new RRTYPE (some considerations on subtype
 dependent caching may be helpful), not TXT, which can assure
 compatibilities between subtypes.
 
 Again, I completely agree.  But it isn't an issue for this
 proposed registry.

With a new RRTYPE, new usages are assured to be compatible with
existing TXT usages.

Thanks,

Masataka Ohta



Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-08-31 Thread Masataka Ohta
The draft does not assure that existing usages are compatible
with each other.

Still, the draft may assure that new usages are compatible with each
other.

However, people who want to have new (sub)types for the new usages
had better simply request new RRTYPEs.

If we need subtypes because 16bit RRTYPE space is not enough
(I don't think so), the issue should be addressed by itself
by introducing a new RRTYPE (some considerations on subtype
dependent caching may be helpful), not TXT, which can assure
compatibilities between subtypes.

For the existing usages, some informational RFC, describing
compatibilities (or lack of them) between the existing usages,
might help.

Masataka Ohta


Re: Not Listening to the Ops Customer

2013-06-04 Thread Masataka Ohta
Noel Chiappa wrote:

 I persist in thinking that those 32-bit names are continuing their evolution
 into local-scope names, with translation at naming region boundaries. How can
 we improve that - reduce the brittleness of the middleboxes you refer to, by
 making their data more visible (and thus replicable, etc)?

That is A+P, except that the original A+P proposal wrongly
assumed that a NAT box is required inside a site.

Still, surely, port numbers are local-scope names.

However, if the 16-bit port numbers are combined with the 32-bit IP
addresses, they become globally scoped names 48 bits long, which is
far more than enough to make the Internet fully end-to-end
transparent even at the current global scale.
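The combination described above can be sketched as follows; the helper names are illustrative, not from any A+P specification:

```python
import ipaddress

def ap_name(addr: str, port: int) -> int:
    """Pack a 32-bit IPv4 address and a 16-bit port into one 48-bit name."""
    return (int(ipaddress.IPv4Address(addr)) << 16) | port

def ap_split(name: int) -> tuple[str, int]:
    """Recover the (address, port) pair from a 48-bit name."""
    return str(ipaddress.IPv4Address(name >> 16)), name & 0xFFFF

n = ap_name("192.0.2.1", 8080)
assert n < 2 ** 48                        # fits in 48 bits
assert ap_split(n) == ("192.0.2.1", 8080)  # round-trips losslessly
```

2^48 such names dwarf the 2^32 plain IPv4 address space, which is the arithmetic behind the claim.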

 but I think in retrospect that's a non-starter - changing TCP
 (which is what that would require) is not really an option (see point 1).

Changing TCP is required not for address space extension but
for better handling of multiple addresses for better routing
table aggregation.


Masataka Ohta


Re: Not Listening to the Ops Customer

2013-06-01 Thread Masataka Ohta
Doug Barton wrote:

 Not picking on you here, in fact I'm agreeing with you regarding the 
 early days. In '94 SLAAC/RA was a good idea, and remains a good idea for 
 dumb devices that only need to know their network and gateway to be 
 happy.

Wrong.

Even at that time, and even on small end-user LANs, it was
better to let the gateway manage the address configuration
state in a centralized fashion than to have so-called SLAAC,
which is full of address configuration state maintained in
a fully distributed manner involving all the nodes.

Masataka Ohta



Re: Not Listening to the Ops Customer

2013-06-01 Thread Masataka Ohta
Arturo Servin wrote:

 Even at that time, and even on small end-user LANs, it was
 better to let the gateway manage the address configuration
 state in a centralized fashion than to have so-called SLAAC,
 which is full of address configuration state maintained in
 a fully distributed manner involving all the nodes.

 YMMV.
 
   I was working on TCP/IP, Novell and AppleTalk nets in the mid 90s and
 as network engineers we hated to maintain a database of static IP
 addresses for users,

I think you mean a database of static IP addresses like DNS, not
HOSTS.TXT. But neither Novell nor AppleTalk enables that.

Today, with DNS dynamic update, you don't have to maintain the database
manually.

The difference is that DHCP, with its centralized state, is much
better at automatic maintenance of the database than the very
stateful SLAAC with its fully distributed state.

 and we loved how AT for example was totally
 automatic (IPX was in the middle because we also hated the long addresses).

So are most NAT boxes acting as DHCP servers.

   But any how, I agree with Brian that it was a good idea at that time.

Not at all.

It was merely that those who had little operational insight
thought so.

The rest of the people developed DHCP, and now you can see which is
better.

Masataka Ohta


Re: Not Listening to the Ops Customer

2013-06-01 Thread Masataka Ohta
Arturo Servin wrote:

   No, I meant a table of static IP addresses (possibly it was in excel,
 db2, or any other old database) for each host so we did not configure
 the same IP for two or three different hosts.

So, it's like HOSTS.TXT.

 It was a nightmare.

Yes, it was.

   With IPX, AT address assignment was automatic. No DHCP in those old 
 times.

That something was automatic, only to produce results useless (to me),
does not mean it was a good idea.

Masataka Ohta



Re: Not Listening to the Ops Customer

2013-06-01 Thread Masataka Ohta
Arturo Servin wrote:

   Those were different times. At least we were not so preoccupied by
 tracking users, accounting, etc. So a central point to record IP addresses
 was not as important as a central point to give out IP addresses.

A merit of having the central server is that you don't have to
waste time and packets on DAD, with the additional overhead
of IGMP, to confirm that all the distributed nodes maintain the
state of SLAAC consistently, that is, that a newly configured address
is unique.

Masataka Ohta



Re: Not Listening to the Ops Customer

2013-06-01 Thread Masataka Ohta
cb.list6 wrote:

 I think there is something here that is interesting, and that is the
 interplay between paper design, evolution, and ultimately the emergent
 complex dynamical system we call the internet ... which is almost
 completely zero compliant to the e2e principle.  Not that e2e is the wrong
 principle, but ipv4 could not support it as of 10+ years ago.

As long as a NAT box is directly connected to the Internet
and is UPnP capable, which is the case at my home, hosts
behind it can enjoy full end-to-end transparency (the FTP
PORT command works), if the hosts' IP and transport stacks are
modified a little.

 Hence, nearly
 every internet node is behind a stateful device (cpe or cgn nat)

That is not a problem if the state of the device is known to
the end, because the end's IP and transport stack
can reverse the translation.

See draft-ohta-e2e-nat-00.txt for details.
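A minimal sketch of that reversal, assuming the NAT's mapping has been learned by the end host (e.g., via UPnP); all addresses and helper names are illustrative, not from the draft:

```python
# Internal -> external mapping the NAT applied; in practice this could
# be learned from the NAT box itself via UPnP. Addresses are illustrative.
mapping = {("10.0.0.2", 5000): ("203.0.113.7", 40000)}
reverse = {ext: internal for internal, ext in mapping.items()}

def externalize(src):
    """The name a host should advertise to peers (e.g., in FTP PORT)."""
    return mapping.get(src, src)

def internalize(dst):
    """Undo the NAT's translation on an inbound packet's destination."""
    return reverse.get(dst, dst)

# Because the end knows the mapping, the translation is transparent:
assert externalize(("10.0.0.2", 5000)) == ("203.0.113.7", 40000)
assert internalize(("203.0.113.7", 40000)) == ("10.0.0.2", 5000)
```

The point is that once the mapping state is visible to the end, the NAT stops being an opaque intelligent middlebox.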

 or server
 load balancer. There is also widespread disbelief in how dns operates in
 the real world (not everyone gets the same answer).

It has nothing to do with end-to-end transparency.

If operators of the servers think any server is fine, it is not
a problem.

Masataka Ohta



Re: [IETF] Not Listening to the Ops Customer (was Re: Issues in wider geographic participation)

2013-05-31 Thread Masataka Ohta

Warren Kumari wrote:


 Unfortunately there was a bad case of creeping featuritis and we got:
 A new, and unfortunately very complex, way of resolving L2 addresses.

You may use ARP (and DHCP) with IPv6.

 Extension headers that make it so you cannot actually forward
 packets in modern hardware
 ( http://tools.ietf.org/html/draft-wkumari-long-headers-00)

True.


 SLAAC, which then required privacy addressing and then the fun
 that entails / the DHCP debacle.

The problem of SLAAC is that it is stateful in a fully distributed
manner. That is, all the nodes have their own state on address
assignment.


 Most operators address ptp Ethernet links with a /30 or /31 in V4. Took a long time to
 get this in V6 (RFC 6164 - Using 127-Bit IPv6 Prefixes on Inter-Router Links)
 and it is still controversial.
 We ended up in a space where perceived elegance and shiny
 features overshadowed what folk actually wanted
 -- 96 more bits, no magic.

Maybe. But the folk actually needed 8 (or 16 at most) more bits.

 15 years later,
 dhcp is still a cf, i have to run a second server (why the hell does
 isc not merge them?), i can not use it for finding my gateway or vrrp
 exit, ...  at least we got rid of the tla/nla classful insanity.  but
 u/g?  puhleeze.

 Yup

TLA/NLA would have been a good thing, *IF* multiple addresses per
node and automatic renumbering, including routers and DNS, had been
properly supported. It would not have been very difficult to do so.

Masataka Ohta




Re: Not Listening to the Ops Customer (was Re: Issues in wider geographic participation)

2013-05-31 Thread Masataka Ohta
John C Klensin wrote:

 Similarly, various applications folks within the IETF have
 pointed out repeatedly that any approach that assigns multiple
 addresses, associated with different networks and different
 policies and properties, either requires the applications to
 understand those policies, properties, and associated routing
 (and blows up all of the historical application-layer APIs in
 the process) or requires request/response negotiation that TCP
 doesn't allow for (and blows up most of the historical
 application-layer APIs).  One of the original promises about

That is a very old problem of IPv4, except that DNS takes care
of multiple addresses of name servers at the IP level and SMTP
(and some TELNET implementations, though they are not very useful)
supports multiple addresses of mail servers at the TCP level.

Thus, the IPv4 API must change.

 IPv6 was no need for changes to TCP and consequent transparency
 to most applications.  Ha!

There is no need for IPv6-specific changes.

Masataka Ohta




Re: congestion control? - (was Re: Appointment of a Transport Area Director)

2013-03-06 Thread Masataka Ohta

Cameron Byrne wrote:


 In the 3GPP case of GSM/UMTS/LTE, the wireless network will never drop
 the packet, by design.

According to the end-to-end argument, that's simply impossible,
because intermediate equipment holding packets not yet confirmed
by the next hop may corrupt the packets or suddenly go down.

 It will just delay the packet as it gets
 resent through various checkpoints and goes through various rounds of
 FEC.  The result is delay,

Even with a moderate packet drop probability, that means *A LOT OF*
delay on connection-oriented communication, which makes 3GPP
mostly unusable.

Masataka Ohta



Re: congestion control? - (was Re: Appointment of a Transport Area Director)

2013-03-06 Thread Masataka Ohta
l.w...@surrey.ac.uk wrote:

 3GPP has to never drop a packet because it's doing zero-header
 compression.

"Has to never"? Even though it must drop packets when it goes down.

 Lose a bit, lose everything.

You totally deny FEC. Wow!!!

 And ROHC is an IETF product.
 
 I'm pretty sure the saving on headers is more than made up for in
 FEC, delay, etc. Not the engineering tradeoff one might want.

It has nothing to do with congestion, not at all.

Masataka Ohta



Re: congestion control? - (was Re: Appointment of a Transport AreaDirector)

2013-03-06 Thread Masataka Ohta
John E Drake wrote:

 See also:  
 http://www.akamai.com/html/about/press/releases/2012/press_091312.html

It seems to me that Akamai is doing things which should be
banned by the IETF.

Akamai IP Application Accelerator
http://www.atoll.gr/media/brosures/FS_IPA.pdf

   Packet Loss Reduction
   Application performance is also affected by packet loss, which may
   be particularly troublesome when traffic traverses international
   network paths. *IP Application Accelerator uses a variety of
   advanced packet loss reduction techniques, including forward error
   correction and optional packet replication* to eliminate packet loss.

Masataka Ohta




Re: Internet Draft Final Submission Cut-Off Today

2013-02-27 Thread Masataka Ohta
(2013/02/27 21:19), Tony Finch wrote:
 Paul E. Jones pau...@packetizer.com wrote:

 Seriously, what the heck is 24:00?
 
 See http://dotat.at/tmp/ISO_8601-2004_E.pdf section 4.2.3 midnight

The IS should clarify that a day is [00:00, 24:00), not [00:00, 24:00],
and everything would be fine.
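A small illustration of the half-open convention, using Python's datetime (which itself refuses hour 24):

```python
from datetime import datetime, timedelta

day_start = datetime(2013, 2, 27)          # 00:00 on the cut-off day

def in_day(t: datetime, start: datetime) -> bool:
    """Half-open membership test: start <= t < start + one day."""
    return start <= t < start + timedelta(days=1)

# 23:59:59.999999 still belongs to the day...
assert in_day(datetime(2013, 2, 27, 23, 59, 59, 999999), day_start)
# ...but the instant written "24:00" is exactly the next day's 00:00
# and, under the half-open rule, belongs only to the next day.
assert not in_day(day_start + timedelta(days=1), day_start)
```

With the closed interval [00:00, 24:00], that final instant would belong to both days, which is exactly the ambiguity a cut-off deadline cannot tolerate.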

Masataka Ohta



Re: Internet Draft Final Submission Cut-Off Today

2013-02-27 Thread Masataka Ohta
Tony Finch wrote:

 The IS should clarify that a day is [00:00, 24:00), not [00:00, 24:00],
 and everything would be fine.
 
 Its definition of a calendar day is consistent with its definition of a
 time interval. It doesn't make any distinction between including and
 excluding the end instants. And I don't think it would be sensible to do
 so: it would clutter the time interval notation with no clear benefit.

In Japan, at the beginning of 2004, the copyright protection period
of movies whose copyrights were unexpired at that time was
extended by 20 years.

But the copyrights of some movies expired at the end of 2003.

There were disputes in the courts, and the Supreme Court judged
that the copyrights of those movies had expired. That is, whether
the end of 2003 is the same instant as the beginning of 2004
mattered legally.
Masataka Ohta



Re: back by popular demand - a DNS calculator

2013-02-15 Thread Masataka Ohta
Joe Touch wrote:

 Seems clear to me:
 
 RFC1035:
 
 The labels must follow the rules for ARPANET host names.  They must
 start with a letter, end with a letter or digit, and have as interior
 characters only letters, digits, and hyphen.  There are also some
 restrictions on the length.  Labels must be 63 characters or less.

That is the syntax in section 2.3.1, "Preferred name syntax", i.e.,
the syntax preferred by applications that use host names.

   The following syntax will result in fewer problems with many
   applications that use domain names (e.g., mail, TELNET).

The basic rule of the RFC is:

   Although labels can contain any 8 bit values in octets that make up a
   label

A label may include the '.' character as-is, since the wire format
has no escape mechanism, though applications expecting host names
may fail to handle it.
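A sketch of why this is unambiguous on the wire: RFC 1035 (section 3.1) encodes each label with a length prefix, so a '.' octet inside a label is perfectly distinguishable there, and only the dotted presentation form loses the distinction. The helper name below is illustrative:

```python
def encode_name(labels: list[bytes]) -> bytes:
    """Encode a DNS name as length-prefixed labels (RFC 1035 s3.1)."""
    out = b""
    for label in labels:
        assert 1 <= len(label) <= 63      # RFC 1035 label length limit
        out += bytes([len(label)]) + label
    return out + b"\x00"                  # zero-length root label ends the name

# One label "a.b" vs two labels "a", "b": distinct on the wire...
one = encode_name([b"a.b", b"example"])
two = encode_name([b"a", b"b", b"example"])
assert one != two
# ...but both collapse to "a.b.example" in the dotted presentation form.
assert b".".join([b"a.b", b"example"]) == b".".join([b"a", b"b", b"example"])
```

So the ambiguity lives entirely in the textual form that host-name-oriented applications parse, not in DNS itself.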

Masataka Ohta


Re: WCIT outcome?

2013-01-03 Thread Masataka Ohta

Dearlove, Christopher (UK) wrote:


 Given the ever increasing number of mobile devices, one could argue
 that the world has never been more dependent on radio spectrum
 allocation.

If you don't insist on allocating fixed bandwidths, CSMA/CA takes care
of most of the issues.


Masataka Ohta


Re: WCIT outcome?

2013-01-03 Thread Masataka Ohta
Dearlove, Christopher (UK) wrote:

 It's really not that simple. If it were all the world would be doing it for 
 everything.

You should recognize that all smartphones work fine
(or even better than over LTE) with Wi-Fi, and that Wi-Fi
supports prioritized packets.

Masataka Ohta



Re: WCIT outcome?

2013-01-03 Thread Masataka Ohta
Dearlove, Christopher (UK) wrote:

 Yes, all smart phones support Wi-Fi. In some unregulated (not
 actually entirely so) frequency bands, and with regulated powers,
 over a short range.

CSMA/CA can work over a long range too, and can, in a sense,
fairly arbitrate packets from different countries at border towns.

 But the main long range 2G/3G signals are
 TDMA or CDMA in regulated bands.

Forget them.

Masataka Ohta


Re: WCIT outcome?

2013-01-03 Thread Masataka Ohta
Phillip Hallam-Baker wrote:

 But one of the reasons those auctions were originally proposed was to force
 various military interests to stop hogging 95% of the available bandwidth
 on the offchance they might have a use for it some day. Putting a price on
 the resource forced the Pentagon and GCHQ etc. to explain why they really
 needed the resource which in most cases they didn't.

If they are winning, they need the resource not inside but outside
of their borders.

If they really need it inside their borders, they are losing a
war, which authorizes them to preempt all the already-assigned
bandwidth within their borders.

Anyway, there can be no inter-border coordination by ITU-T
between fighting countries.

Masataka Ohta


Re: WCIT outcome?

2012-12-31 Thread Masataka Ohta
SM wrote:

 What people say and what they actually do or mean is often a very 
 different matter.  An individual may have principles (or beliefs).  A 
 stakeholder has interests.  There was an individual who mentioned on an 
 IETF mailing list that he/she disagreed with his/her company's stance.  
 It's unlikely that a stakeholder would say that.

That's how the global routing table has bloated so much: because
of requests from ISPs as stakeholders, even though the bloat is
harmful not only to end users but also to the ISPs themselves,
which is a fallacy of composition.

ITU did it better for phone numbers.

Masataka Ohta


Re: WCIT outcome?

2012-12-29 Thread Masataka Ohta
Phillip Hallam-Baker wrote:

 The stakeholders in the Internet don't even align to countries. My own
 employer is relatively small but was founded in the UK, moved its
 headquarters to the US and has operations in a dozen more countries and
 many times that number of affiliates. The same is even more true of the
 likes of Google, Cisco, Apple, IBM, Microsoft etc.

You miss the most important stakeholders: the end users, who do
align to countries.

Masataka Ohta

PS

Of course, countries acting as representatives for telephone
companies are just as bad as countries acting as representatives
for ISPs.


Re: Gen-ART telechat review of draft-ietf-6man-udpzero-06

2012-10-09 Thread Masataka Ohta
Richard Barnes wrote:

 Document: draft-ietf-6man-udpzero-06

It's interesting that the draft denies the following statement
in RFC 2765:

   Fragmented IPv4 UDP packets that do not contain a UDP checksum (i.e.
   the UDP checksum field is zero) are not of significant use over
   wide-areas in the Internet and will not be translated by the
   translator.

While the draft considers only tunnels and states:

   IPv6 middlebox
   deployment is not yet as prolific as it is in IPv4. Thus, relatively
   few current middleboxes may actually block IPv6 UDP with a zero
   checksum.

interactions between the draft and existing implementations
of RFC2765 will be quite exciting.

The following statement of the draft:

  The computed checksum
  value may be cached (before adding the Length field) for each flow
  /destination and subsequently combined with the Length of each
  packet to minimise per-packet processing.

is bogus, because a cache lookup takes a lot more effort than
recomputing the checksum.
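For what it's worth, the caching scheme being argued about can be sketched as follows. This is purely illustrative, assuming RFC 1071 one's-complement arithmetic; all function and variable names here are my own, not the draft's:

```python
# Illustrative sketch of the quoted caching scheme: the one's-complement
# sum over the per-flow constant fields is cached, and only the Length
# field is folded in per packet (RFC 1071 arithmetic; names are mine).

def oc_add(a, b):
    # One's-complement 16-bit addition with end-around carry.
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def oc_sum(words):
    s = 0
    for w in words:
        s = oc_add(s, w)
    return s

def cached_flow_sum(src_words, dst_words, next_header):
    # Partial checksum over fields that do not change per packet;
    # this is the value that would be cached per flow/destination.
    return oc_sum(src_words + dst_words + [next_header])

def per_packet_checksum(cached, length):
    # Per-packet work: fold the 32-bit Upper-Layer Packet Length
    # (upper and lower 16 bits) into the cached partial sum,
    # then complement to get the checksum field value.
    s = oc_add(cached, (length >> 16) & 0xFFFF)
    s = oc_add(s, length & 0xFFFF)
    return (~s) & 0xFFFF
```

Whether the lookup needed to find `cached` for a flow is actually cheaper than re-summing the constant fields is exactly the point in dispute.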

Masataka Ohta



Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-08-15 Thread Masataka Ohta
Joe Touch wrote:

 Again, this doc is about updating the IPv4 ID specification in RFC791 -
 which has not yet been updated.

 But, the way you update rfc791 requires updating rfc2460,
 rfc2765 and their implementations, for which there is no
 consensus.
 
 It certainly does not.

The following requirement in your draft:

Sources of non-atomic IPv4 datagrams MUST rate-limit their output
   to comply with the ID uniqueness requirements.

requires updating rfc2460 and/or rfc2765 to rate-limit IPv6
datagrams with Fragment headers below 6.4 Mbps (assuming an
unfragmented packet is 1500 bytes long).
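As a back-of-the-envelope check of that figure (using the RFC 793 MSL of 2 minutes and the 1500-byte packet size assumed above):

```python
# A 16-bit ID space allows at most 2**16 non-atomic datagrams per MSL
# for a given tuple; with 1500-byte packets that caps the rate.

MSL_S = 120          # seconds, the RFC 793 MSL
ID_SPACE = 2 ** 16   # distinct 16-bit IPv4 IDs
PKT_BYTES = 1500     # unfragmented packet size assumed in the text

max_bps = ID_SPACE * PKT_BYTES * 8 / MSL_S
print(round(max_bps / 1e6, 2))  # about 6.55 Mbps, the "6.4 Mbps" order of magnitude
```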

 That is, though your draft claims to more closely reflect
 current practice and more closely match IPv6, the way you
 update rfc791 does not reflect current practice nor match
 IPv6.
 
 It does -

The current practice, with both IPv4 and IPv6, is to have
loose uniqueness of IDs.

Or, by current practice do you mean the practice not of
real-world network operation but of the way of discussion
in the intarea WG?

 it doesn't reflect the errors in IPv6-IPv4 translation, which
 is not IPv6.

Most, if not all, implementers do not consider them errors.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-08-13 Thread Masataka Ohta
Joe Touch wrote:

 Again, this doc is about updating the IPv4 ID specification in RFC791 -
 which has not yet been updated.

But, the way you update rfc791 requires updating rfc2460,
rfc2765 and their implementations, for which there is no
consensus.

That is, though your draft claims to more closely reflect
current practice and more closely match IPv6, the way you
update rfc791 does not reflect current practice nor match
IPv6.

As your draft states:

   it is clear that
   existing systems violate the current specification

and rfc2026 states:

   4.1.3  Internet Standard
   A specification for which significant implementation and successful
   operational experience has been obtained may be elevated to the
   Internet Standard level.

there is no point in insisting on the ID uniqueness requirement of
rfc791 as a requirement of an Internet Standard.

 The IPv6-IPv4 translation that creates IPv4 IDs is currently
 non-compliant with RFC791 and does not override RFC791. This document's
 update might make that translation easier,

Wrong.

Insisting on rfc791 makes the translation a lot harder than just
closely reflecting current practice by loosening the uniqueness
requirement of rfc791, which is what almost all (if not all)
IPv4 and IPv6 implementations today are doing.

 If you disagree with that conclusion, please contact the INTAREA WG
 chairs directly.

As we are at the IETF last call stage, I can see no point in your
insisting on the original conclusion of the WG, which
overlooked the complication caused by 6-to-4 translation.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-08-12 Thread Masataka Ohta
Joe Touch wrote:

Hi,

 RFC2765 specifies that translators can merely copy the
 low-order bits of the field.

 Yes, but this is not compatible with RFC791.

 Then, which should we revise? RFC791, RFC2765 or both?
 
 2765.

Is that a consensus of the IETF?

Note that it also implies revising RFC2460, because rfc2765 specifies
a way of ID mapping, whereas rfc2460 specifies that the 32-bit to
16-bit mapping will be performed by translators.

Note also that there already are working implementations of
rfc2460 and rfc2765.

 There is no useful way to revise 791 to make the text in 2765 correct.

Revising rfc791 not to require very strong uniqueness would
reflect current practice better than your draft does, even though
your draft claims to closely reflect current practice.

Moreover, I can see no useful way to revise rfc2765. Do you
have any suggestion?

 Without such a decision, there is no point in publishing something
 that is based on RFC791 and is not compatible with RFC2765.
 
 Sure there is - when IPv6 is not involved.

But, your draft involves IPv6 and, for example, the following
statement in your draft:

   IPv6, even at typical MTUs, is capable of 18.7 Tbps with
   fragmentation between two IP endpoints as an aggregate across all
   protocols,

is not compatible with rfc2460 and rfc2765.

 An IPv6 source might never send packets larger than IPv4 can natively
 handle - i.e., it could send packets 576 bytes or smaller. In that case,
 the IPv6 source would never get an ICMP too big because they're not as
 far as IPv4 is concerned. In that case, the IPv6 source would never
 insert the Fragmentation Header.

I'm afraid the minimum MTU of IPv4 is 68 bytes, not 576.

Anyway, an IPv6 node having no idea of the PMTU will send a lot of
packets exactly 1280 bytes long, because that will make TCP
most efficient.

 That is the fundamental flaw in these IPv6 RFCs, but it is behavior that
 is out of scope for an IPv4 source. My doc focuses on the behavior of
 IPv4 sources.

While I think the IPv6 RFCs have many fundamental flaws, a problem
is that there are people in the IETF who insist on not admitting
that they are flaws.

Do you think you can make them admit a flaw?

 That is the problem. That is, if you insist on RFC791 as is, not
 enough is specified on how to generate IPv6 ID.
 
 Yes, but that does not affect an IPv4 source; it remains a problem, but
 out of scope for this doc.

It is out of scope only if rfc791 is not revised.


 Thus, there can be only one way (the one in RFC2765) to map IPv6
 ID to IPv4 ID

 Yes, this is a nice goal, but it would have required IPv6 hosts insert
 16-bit IDs into *every* packet and make them just as unique as IPv4 does.

No, not *every* packet. But, as you wrote:

 Further, the source might already be inserting the fragmentation header
 (e.g., on a 2KB packet).

every such packet is required to have a unique 32-bit ID, which must
also be unique as a 16-bit ID after translation.

 Then, as fixing RFC2460 is politically impossible, we must
 abandon IPv6 and live with IPv4 forever.
 
 I didn't say we couldn't fix - or at least try to fix - this situation.
 But it remains out of scope for this doc.

If only you can convince people we should fix rfc2460, not rfc791.

 but, the specification of the similar field in IPv6 is, in
 your opinion, incomplete; let's finish it first,
 
 IPv6 is fine when it talks to IPv6 only. The goal is to make IPv4 work
 in a similar way.

You are saying that dual stack is clean and the way to go, and that
all the other IPv6 deployment scenarios should be ignored.

 it doesn't make it completely correct, though - there remain
 problems that have nothing to do with the changes in this doc that need
 to be addressed separately.

The question is whether or not you can get a consensus that rfc2460
is not completely correct.

 For another example,

  Finally, the IPv6 ID field is
  32 bits, and required unique per source/destination address pair for
  IPv6,

 is, in your opinion, a violation of RFC791.
 
 No; the violation occurs only when the lower 16 bits are masked off and
 used by themselves by on-path translators. That has nothing to do with
 the quoted text above.

The problem is that, if rfc2460 is not completely correct, the
above text in your draft should be based not on the current
rfc2460 but on a completely corrected revision of rfc2460, such
as: the IPv6 ID field is required to be unique after translation
into the 16-bit IPv4 ID.

 But neither of the above requires that IPv6 IDs must be easily
 translatable into valid IPv4 addresses using the existing mechanism.

IPv4 addresses? What do you mean?

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-08-07 Thread Masataka Ohta
Joe Touch wrote:

 RFC2765 specifies that translators can merely copy the
 low-order bits of the field.
 
 Yes, but this is not compatible with RFC791.

Then, which should we revise? RFC791, RFC2765 or both?

Without such a decision, there is no point in publishing something
that is based on RFC791 and is not compatible with RFC2765.

 The case above occurs only when the source gets back a packet too big
 message with a desired MTU less than 1280.

which means RFC2460 expects it.

 Note that this might never
 happen, in which case there would never be any Fragment header.

If you can assume so, we had better assume that accidental ID
matches never happen, and we can all be very happy.

 However, even when it does happen, there is no instruction above about
 how to construct the header that is compliant with RFC791.

That is the problem. That is, if you insist on RFC791 as is, not
enough is specified on how to generate IPv6 ID.

 Further, the source might already be inserting the fragmentation header
 (e.g., on a 2KB packet).

Thus, there can be only one way (the one in RFC2765) to map the
IPv6 ID to the IPv4 ID.

Otherwise, if multiple IPv6 fragments with the same ID pass through
different IPv4 translators with different mapping algorithms, each
IPv6 fragment will have a different IPv4 ID.

 There's no instruction in how fragment headers
 are constructed in general that complies with RFC791.

Is it a problem of RFC791 or RFC2460/2765?

 Simply using the low 16 bits is not correct. In particular, RFC2460
 suggests that its 32-bit counter can wrap once a minute, and that only
 one such counter might be needed for an endpoint for all connections.

Never mind.

The IPv6 specification is not self-consistent at all.

 Or, are you saying RFC2460 and RFC2765 violate RFC791?
 
 Yes.

Then, as fixing RFC2460 is politically impossible, we must
abandon IPv6 and live with IPv4 forever.

 This document updates RFC791, but does not fix either RFC2460 or
 RFC2765. This document does not make any statements about how IPv6
 generates its IDs.

Your draft says:

   This document updates the specification of the IPv4 ID field to more
   closely reflect current practice, and to include considerations taken
   into account during the specification of the similar field in IPv6.

but the specification of the similar field in IPv6 is, in your
opinion, incomplete. Let's finish it first; only after that can
you revise your draft to include considerations taken into
account during the specification of the similar field in IPv6.

More specifically, for example, the following statement in your draft:

   The latter case is relevant only for
   IPv6 datagrams sent to IPv4 destinations to support subsequent
   fragmentation after translation to IPv4.

is incorrect because NAT646 refers to RFC2765.

For another example,

   Finally, the IPv6 ID field is
   32 bits, and required unique per source/destination address pair for
   IPv6,

is, in your opinion, a violation of RFC791.

 As I stated above, RFC2460 guarantees a suitable Identification
 value for IPv4 ID is there in IPv6 fragmentation ID.
 
 Not the way I interpret the text, especially because there are other
 ways to generate IDs in RFC2460 that could be translated to IPv4

As I stated above, there can be only one way common to all the
6-to-4 translators. Otherwise, different fragments with the same
IPv6 ID will have different IPv4 IDs.

 Or, if you think RFC2460 does not mind ID uniqueness (of IPv4,
 at least) so much, RFC791 should not either.
 
 I think there are a lot of IETF documents that are not reviewed in the
 correct context of existing standards. I don't think that applies to
 this draft, though.

So, you are lucky that I have noticed the problem at the last call.

Masataka Ohta



Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-08-03 Thread Masataka Ohta
Joe Touch wrote:

 Translators violate RFC791. They cannot merely copy the
 low-order bits of the field, since that is insufficiently
 unique, and isn't specified as being generated at the
 IPv6 source in compliance with IPv4 requirements.

RFC2765 specifies that translators can merely copy the
low-order bits of the field.

Moreover, RFC2460 specifies:

   In that case, the IPv6 node
   is not required to reduce the size of subsequent packets to less than
   1280, but must include a Fragment header in those packets so that the
   IPv6-to-IPv4 translating router can obtain a suitable Identification
   value to use in resulting IPv4 fragments.

That is, RFC2460 guarantees that translators can obtain a
suitable Identification value from IPv6 Fragment header.

Or, are you saying RFC2460 and RFC2765 violate RFC791?

I'm afraid you must say so, if you insist that existing systems
violate the current specification (a quote from the abstract of
your draft).

 It quotes IPv6 examples, but does not propose to change
 IPv6 processing. That may be needed, but that would be
 outside the scope of this doc.

It is inside the scope because RFC2765 specifies how the IPv4
ID is generated from the RFC2460 Fragment header, which is,
according to your draft, a violation of RFC791.

Finally, the IPv6 ID field is
32 bits, but lower 16 bits are required unique per
source/destination address pair for
IPv6,
 
 That's incorrect as per RFC2460. Other RFCs may violate that
 original spec, but that needs to be cleaned up separately.

As I stated above, RFC2460 guarantees that a suitable Identification
value for the IPv4 ID is there in the IPv6 fragmentation ID.

Or, if you think RFC2460 does not mind ID uniqueness (of IPv4,
at least) so much, RFC791 should not either.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-07-18 Thread Masataka Ohta
Joe Touch wrote:

 Or, are 6-to-4 translators required to rate limit and
 drop rate-violating packets, making the stateless
 translators full of state?

 I would expect that the translator would be responsible
 for this, though

Do you mean translators must rate limit, or translators
violate RFC2765:

Identification:
Copied from the low-order 16-bits in the
Identification field in the Fragment header.

and use some other number as an ID?

 there is the problem that multiple translators interfere
 with each other.

Yes, even rate-limiting translators may interfere with each other,
which means rate limiting must be done at the IPv6 source
node.

 Regardless, this is outside the scope of the ipv4-id-update doc.

In the draft, there are a lot of references to IPv6.

For example, the following statement of the draft:

   Finally, the IPv6 ID field is
   32 bits, and required unique per source/destination address pair for
   IPv6, whereas for IPv4 it is only 16 bits and required unique per
   source/destination/protocol triple.

must be modified as:

   Finally, the IPv6 ID field is
   32 bits, but lower 16 bits are required unique per
   source/destination address pair for
   IPv6, whereas for IPv4 it is only 16 bits and required unique per
   source/destination/protocol triple.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-20 Thread Masataka Ohta
Though the draft states:

   This
   document underscores the point that not only is reassembly (and
   possibly subsequent fragmentation) required for translation, it can
   be used to avoid issues with IPv4 ID uniqueness.

according to RFC2765, which does not need port numbers for
address mapping:

   If the IPv6 packet contains a Fragment header the header fields are
   set as above with the following exceptions:

 Identification:
 Copied from the low-order 16-bits in the
 Identification field in the Fragment header.

 Flags:
 The More Fragments flag is copied from the M flag in
 the Fragment header.  The Don't Fragments flag is set
 to zero allowing this packet to be fragmented by IPv4
 routers.

the translated IPv4 packets are not atomic and carry 16-bit IDs.

Note that the RFC is referenced by NAT646 etc.
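The RFC2765 field mapping quoted above can be sketched as follows; the function and parameter names are illustrative, not any stack's API:

```python
# Sketch of the RFC 2765 (SIIT) translation of the fragmentation fields
# when the IPv6 packet carries a Fragment header; names are illustrative.

def translate_frag_fields(ipv6_frag_id, ipv6_m_flag):
    ipv4_id = ipv6_frag_id & 0xFFFF  # low-order 16 bits of the 32-bit ID
    ipv4_mf = ipv6_m_flag            # More Fragments copied from the M flag
    ipv4_df = 0                      # DF cleared: IPv4 routers may fragment
    return ipv4_id, ipv4_mf, ipv4_df
```

Since only the low-order 16 bits survive, distinct IPv6 IDs such as 0x00015678 and 0x00025678 collide after translation, and DF=0 means the result is non-atomic.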

Then, aren't IPv6 nodes required to rate limit the packets
they generate with Fragment headers, assuming 16-bit IDs?

Or, are 6-to-4 translators required to rate limit and
drop rate-violating packets, making the stateless
translators full of state?

Or?

Masataka Ohta

PS

As the RFC specifies ID=0 when DF is set to 1, it should also
be updated to conform to the robustness principle.


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-18 Thread Masataka Ohta
Joe Touch wrote:

  draft-generic-v6ops-tunmtu-03.txt

 to fragment IPv6 packets by intermediate routers should be
 very interesting to you.
 
 It is aware of our IPv4-ID doc, and consistent with it.

What?

 When the DF is ignored, the ID field is rewritten - i.e.,

If the ID field could be rewritten by intermediate entities,
it is fine for intermediate routers to clear DF.

 The
 rewriting is hidden - happens only inside the tunnel, is
 controlled uniquely by the source, and does not need coordination
 by other sources.

The tunnels are often controlled uniquely only by the destinations.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-18 Thread Masataka Ohta
Joe Touch wrote:

 It is a fair action by innocent providers.

 It is a violation of standards. They may do it innocently, but it's
 still a violation.

 You misunderstand standardization processes.
 
 Standards remain so until revoked explicitly. Common use does
 not itself revoke a standard - it can also represent either
 operator error or ignorance. That decision has not yet been
 made for ignoring the DF bit - if you want to make that case,
 you need to take it through the IETF process to obsolete the
 existing standards.

Uh, I'm afraid standards can be ignored a lot more easily.

That operators think it fine to clear DF means the IETF must
admit it, especially because there is RFC 4821.

 Your draft reduces existing requirements to make RFC1191-style
 PMTUD more harmful.
 
 It does not change existing requirements that the DF bit should
 not be ignored.

So what?

That it does change existing requirements is the problem.

 If you have a specific example of how this draft makes 1191 PMTUD
 more harmful, please explain.

As I already wrote, your draft enables people insisting on 1191
PMTUD to now set the ID always to zero, which is harmful to the
people who, IN THE REAL WORLD, clear DF.

 Merely restating existing requirements on preservation of the
 DF bit

The reality is that you are merely loosening existing requirements
to make IPv4 PMTUD more harmful.

 Your draft has too much to do with RFC1191-style PMTUD and
 is narrowly scoped to make RFC1191-style PMTUD more harmful,
 which means it is in scope.
 
 The draft neither mentions nor discusses 1191. If you want to
 update existing standards regarding PMTUD, you should write
 that doc.

That you maliciously do not mention 1191 does not mean your
draft is not strongly tied to 1191.

 Once RFC1191 is obsoleted, your draft becomes almost useless
 because no one will follow the rate limitation requirement of
 your draft.
 
 You can make that case in the doc that obsoletes 1191 if you like.

My point is that that's what IETF must do.

Masataka Ohta

PS

While your draft is more harmful than useless, I'm fine
if the following point of the draft:

Originating sources MAY set the IPv4 ID field of atomic
   datagrams to any value.

is changed to:

Originating sources MUST set the IPv4 ID field of atomic
   datagrams to values as unique as possible.

which is what the current BSD implementations do.
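One illustrative reading of "as unique as possible" is a per-destination 16-bit counter, sketched below. This is an assumption for illustration only, not a description of the BSD code, which (per the reply later in this thread) actually uses a pseudorandom generator:

```python
import itertools
from collections import defaultdict

# Illustrative sketch only: a per-destination 16-bit counter, so IDs
# repeat only after 65536 datagrams to the same destination. All names
# are mine; this is not taken from any cited implementation.

class IdSource:
    def __init__(self):
        self.counters = defaultdict(itertools.count)  # dst -> counter

    def next_id(self, dst):
        return next(self.counters[dst]) & 0xFFFF
```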


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-18 Thread Masataka Ohta
Joe Touch wrote:

 While your draft is more harmful than useless, I'm fine
 if the following point of the draft:

 Originating sources MAY set the IPv4 ID field of atomic
  datagrams to any value.

 is changed to:

 Originating sources MUST set the IPv4 ID field of atomic
  datagrams to values as unique as possible.

 which is what the current BSD implementations do.
 
 There are implementations that set DF=1 and ID=0 (cellphones).

They are broken implementations violating RFC791.

To quote from RFC1122:

   1.2  General Considerations

  There are two important lessons that vendors of Internet host
  software have learned and which a new vendor should consider
  seriously.


  1.2.2  Robustness Principle

 At every layer of the protocols, there is a general rule whose
 application can lead to enormous benefits in robustness and
 interoperability [IP:1]:

Be liberal in what you accept, and
 conservative in what you send


 The second part of the principle is almost as important:
 software on other hosts may contain deficiencies that make it
 unwise to exploit legal but obscure protocol features.  It is
 unwise to stray far from the obvious and simple, lest untoward
 effects result elsewhere.  A corollary of this is watch out
 for misbehaving hosts; host software should be prepared, not
 just to survive other misbehaving hosts, but also to cooperate
 to limit the amount of disruption such hosts can cause to the
 shared communication facility.

it is obvious that your draft violates the robustness principle.

 BSD does not make IDs as unique as possible; it selects them according
 to a pseudorandom algorithm that does not take into account the
 datagram's source IP, destination IP, or protocol. I.e., BSD code
 repeats the IDs more frequently than necessary when a host concurrently
 sources datagrams with different (srcIP, dstIP, proto) tuples.

You merely mean that the BSD code does not try to keep IDs unique
for the longest possible MSL and the fastest possible packet rate,
which is not my point.

As the uniqueness is broken at some MSL and rate anyway, the
current code is enough to be as unique as possible.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-16 Thread Masataka Ohta
Joe Touch wrote:

 Again, this document doesn't change the current situation. Operators who
 clear the DF bit are not innocent - they need to override a default
 setting. They are active participants. They ARE guilty of violating
 existing standards.

While the IETF is not a protocol police, and clearing DF is not
considered an offense by the operator community, the following
draft:

draft-generic-v6ops-tunmtu-03.txt

to fragment IPv6 packets by intermediate routers should be
very interesting to you.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-15 Thread Masataka Ohta
After thinking more about the draft, I think it is
purposelessly hostile to innocent operators and
end users, who are suffering between people filtering
ICMP and people insisting on PMTUD.

Today, innocent operators often clear the DF bit and
end users are happy with it, because, today, the probability
of an accidental ID match is small enough.

However, as the draft specifies:

Originating sources MAY set the IPv4 ID field of atomic datagrams
   to any value.

IPv4 datagram transit devices MUST NOT clear the DF bit.

people insisting on PMTUD are now authorized to set the ID always
to zero, trying to discourage ICMP filtering and DF-bit clearing.

But, as people filtering ICMP won't stop doing so, and if
operators can do nothing other than clearing DF, it is the
end users who suffer.

Then, end users may actively act against PMTUD and/or IETF.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-15 Thread Masataka Ohta
Joe Touch wrote:

 Hi,

Hi,

 But, then,

 Sources emitting non-atomic datagrams MUST NOT repeat IPv4 ID
  values within one MSL for a given source address/destination
  address/protocol triple.

 makes most, if not all, IPv4 hosts non-compliant if MSL=2min.
 
 This is already noted throughout this document, however there is little
 impact to such non-compliance if datagrams don't persist that long.

So, the question to be asked is how long datagrams persist.

As you now say the MSL may be smaller than 2 minutes, the draft is
useless for getting implementers to implement rate limiting.

If you can't define a hard value of MSL, implementers can
assume anything.

 Worse, without a hard value of MSL, it is a meaningless
 requirement. Note that MSL=2min derived from RFC793 breaks
 150Mbps TCP.
 
 It breaks at 6.4 Mbps for 1500 byte packets, as is already noted in the doc.

With practically very low probability.

 The proper solution to ID uniqueness, IMHO, is to request that
 a destination host drop fragments from a source host after
 it receives tens (or hundreds) of packets with different IDs
 from the same source host.
 
 That doesn't help ID uniqueness; it helps avoid fragmentation
 overload.

It does help ID uniqueness, because fragments with accidental
matches are quickly discarded.

 FWIW, such issues were discussed at length in the INTAREA WG when this
 doc was developed.

I appreciate that the draft is terse and does not include the
lengthy discussions from the WG.

However, at the same time, don't expect me to read the entire log
of the discussions in the WG, especially because you and
the discussions misunderstand the problem as:

 That doesn't help ID uniqueness; it helps avoid fragmentation
 overload.

That does help ID uniqueness.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-15 Thread Masataka Ohta
Joe Touch wrote:

 After thinking more about the draft, I think it is
 purposelessly hostile to innocent operators and
 end users, who are suffering between people filtering
 ICMP and people insisting on PMTUD.

 Today, innocent operators often clear the DF bit and
 end users are happy with it, because, today, the probability
 of an accidental ID match is small enough.

 That is not an innocent action.

It is a fair action by innocent providers.

 It defeats PMTUD, which is a draft
 standard.

So, the proper thing for IETF to do is to obsolete RFC1191.

There is no reason for IETF to ignore operational feedback
from the real world that RFC1191 is a bad idea.

 It also violates RFC 791 and 1121.

To stop the fair violation, obsolete RFC1191.

 This document only restates existing requirements in this regard,

  Originating sources MAY set the IPv4 ID field of atomic datagrams
   to any value.

is not a restatement of existing requirements.

 stating them in 2119-language. It does not create any new requirement.
 Operators that clear the DF bit are already in violation of three
 standards-track RFCs.

That many operators are actively violating the standards-track
RFCs means the standards-track RFCs are defective.

 Then, end users may actively act against PMTUD and/or IETF.
 
 I disagree; if they wanted to do so, they already would have acted since
 the requirements already exist, albeit in pre-RFC2119 language.

As your draft actively tries to change the current situation
that:

 Today, innocent operators often clear the DF bit and
 end users are happy with it, because, today, the probability
 of an accidental ID match is small enough.

it is not surprising if end users think you are guilty.

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-15 Thread Masataka Ohta
Joe Touch wrote:

 That is not an innocent action.

 It is a fair action by innocent providers.

 It is a violation of standards. They may do it innocently, but it's
 still a violation.

You misunderstand standardization processes.

That innocent operators must violate some standard means
the standard must be changed.

Moreover, there already is a change: RFC4821-style PMTUD.

 This document doesn't make it more of a violation; it
 just explains the existing violation.

There is no point in just explaining standards when the real
world recognizes that the standards cannot be followed
and must be changed.

 So, the proper thing for IETF to do is to obsolete RFC1191.

 There is no reason for IETF to ignore operational feedback
 from the real world that RFC1191 is a bad idea.
 
 That is not the focus of this document. Again, we don't create a new
 requirement.

Your draft reduces existing requirements to make RFC1191-style
PMTUD more harmful.

 If you feel there is consensus to raise this change, that
 would be a separate issue.

Do you think there is explicit consensus on your draft that
we should make RFC1191-style PMTUD more harmful?

 It also violates RFC 791 and 1121.

 To stop the fair violation, obsolete RFC1191.
 
 The steps needed to allow DF clearing need to be determined; I don't
 know what they are, but that's outside the scope of this doc.

Your draft has too much to do with RFC1191-style PMTUD and
is narrowly scoped to make RFC1191-style PMTUD more harmful,
which means it is in scope.

 - regarding clearing the DF bit, this document only restates existing
 requirements
 
 Yes, there are other, new requirements that this document introduces.
 But the rules about not clearing the DF bit are not new.

The problem is that the real world requires standards must change.

Either we obsolete RFC1191, or we must permit DF clearing.

So, there is no point in saying DF clearing is forbidden without
obsoleting RFC1191.

 That many operators are actively violating the standards-track
 RFCs means the standards-track RFCs are defective.
 
 Or that they have defective equipment,

Yes, they often have equipment with RFC1191-style PMTUD.

 or that their logic in deciding
 to run their equipment this way is defective.

Don't ignore the real-world logic for filtering ICMP,
which requires changes to the standards.

 Just because everyone
 does it doesn't make it correct or warrant changing the specs (see RFC
 2525 for treatment of TCP implementation errors, e.g.).

That is a completely different example from your draft.

 it is not surprising if end users think you are guilty.
 
 Again, this document doesn't change the current situation.

That's not an excuse not to change the current situation.

Worse, your draft, by loosening a requirement, will change
the current operational situation for the worse.

 Operators who
 clear the DF bit are not innocent - they need to override a default
 setting. They are active participants. They ARE guilty of violating
 existing standards.

Comments are Requested For by the IETF from them, the active participants.

Their comments are that they can't follow the standard, because
*SOMEONE ELSE* filters ICMP. They are not guilty.

Now, it's time for IETF to modify RFCs.

 You seem to think that this is OK because they have good reasons. That
 may make their actions acceptable, but it will not make them compliant
 *until* someone updates the standards that require the DF not be
 cleared. This is not that document.

Once RFC1191 is obsoleted, your draft becomes almost useless
because no one will follow the rate limitation requirement of
your draft.

Masataka Ohta



Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-03 Thread Masataka Ohta
C. M. Heard wrote:

 Existing routers, which were relying on the ID uniqueness of atomic
 packets, are now broken when they fragment the atomic packets.
 
 Such routers were always broken.  An atomic packet has DF=0 and any
 router fragmenting such a packet was and is non-compliant with
 the relevant specifications (RFCs 791, 1122, 1812).

Thank you. I had overlooked that atomic implies DF=1.

But, then,

Sources emitting non-atomic datagrams MUST NOT repeat IPv4 ID
   values within one MSL for a given source address/destination
   address/protocol triple.

makes most, if not all, IPv4 hosts non-compliant if MSL=2min.

Worse, without a hard value for MSL, it is a meaningless
requirement. Note that an MSL of 2 minutes, derived from RFC793,
breaks TCP at around 150Mbps.
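As a rough sanity check, both limits can be computed directly. This is a sketch with assumed values (MSL = 120 s, 1500-byte datagrams, and a TCP wrap window taken as 2×MSL, which lands near the 150Mbps figure quoted above):

```python
# Back-of-the-envelope check of both limits (assumed values, not
# taken from the draft): MSL = 120 s, 1500-byte datagrams.
MSL = 120  # seconds, per RFC 793

# 16-bit IPv4 ID: at most 2**16 unique non-atomic datagrams per MSL
# per (source, destination, protocol) triple.
id_limited_bps = 2**16 * 1500 * 8 / MSL
print(f"ID-limited rate: {id_limited_bps / 1e6:.1f} Mb/s")    # ~6.6 Mb/s

# 32-bit TCP sequence space: must not wrap within the wrap window.
# Taking the window as 2*MSL gives a figure close to "150Mbps TCP".
seq_limited_bps = 2**32 * 8 / (2 * MSL)
print(f"seq-limited rate: {seq_limited_bps / 1e6:.0f} Mb/s")  # ~143 Mb/s
```

So the ID uniqueness requirement is far more restrictive than the TCP sequence-space limit it is often compared to.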

The proper solution to ID uniqueness, IMHO, is to require that a
destination host drop fragments from a source host once it has
received tens (or hundreds) of packets with different IDs from
that source host.

A source host, then, is only required to remember the
previous ID used for each destination.
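A minimal sketch of this receiver-side rule. The threshold value and all names here are hypothetical illustrations, not from any draft, and a real stack would also age the per-source state out:

```python
from collections import defaultdict

# Hypothetical threshold for "tens (or hundreds)" of distinct IDs.
FRAGMENT_ID_LIMIT = 64

seen_ids = defaultdict(set)  # distinct IPv4 IDs seen, per source host

def accept_fragment(src: str, ipv4_id: int) -> bool:
    """Accept a fragment for reassembly until the source has used
    more than FRAGMENT_ID_LIMIT distinct IDs; drop afterwards.

    Under this rule a source never needs a large pool of unique
    IDs: it merely must not repeat the previous ID it used for a
    given destination.
    """
    ids = seen_ids[src]
    ids.add(ipv4_id)
    return len(ids) <= FRAGMENT_ID_LIMIT
```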

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-01 Thread Masataka Ohta
C. M. Heard wrote:

 My one reservation is that I do not think it is strictly necessary
 to ban re-use of the IPv4 ID value in retransmitted non-atomic IPv4
 datagrams.

Do you mean

  Sources of non-atomic IPv4 datagrams MUST rate-limit their output
   to comply with the ID uniqueness requirements.

is too strict?

If so, I agree with you.

 On the other hand, the evidence available to me suggests
 that existing implementations overwhelmingly comply with this ban
 anyway, so it does not seem to do any harm.

I think most NAT boxes do not care about ID uniqueness.

But, it is a lot worse than that.

Existing routers, which were relying on the ID uniqueness of atomic
packets, are now broken when they fragment those atomic packets.

So, the requirement may be:

  Sources of non-atomic IPv4 datagrams SHOULD rate-limit their output
   to comply with the ID uniqueness requirements.

or:

  Sources of non-atomic IPv4 datagrams are encouraged to
   rate-limit their output to comply with the ID uniqueness
   requirements.

In addition, I have one question:

 Is there some document provided to obsolete the following:

   The IPv6 fragment header is present

   when the source has received
   a packet too big ICMPv6 error message when the path cannot support
   the required minimum 1280-byte IPv6 MTU and is thus subject to
   translation

 which is meaningless from the beginning, because the IPv6 ID is
 32 bits long, from which it is impossible to generate a unique
 IPv4 ID.

and one comment:

 As expired IDs are referenced, may I suggest to add

   draft-ohta-e2e-nat-00.txt

 along with [Bo11] and [De11].

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv4-id-update-05.txt (Updated Specification of the IPv4 ID Field) to Proposed Standard

2012-06-01 Thread Masataka Ohta
Joe Touch wrote:

 Existing routers, which was relying on ID uniqueness of atomic
 packets, are now broken when they fragment the atomic packets.

 The recommendation in this doc - that such sources MUST rate-limit - is
 to comply with the ID uniqueness requirements already in RFC791 that
 this doc does not deprecate - e.g., its use to support fragmentation.

It means that the uniqueness requirements must be loosened.

Another example: when a route changes, the routers fragmenting
atomic packets may change, which means rate limiting does not
guarantee ID uniqueness.

 We all recognize that there are plenty of non-compliant NAT boxes

The rest of my examples are plain routers, which have been fully
compliant.

Is there some document provided to obsolete the following:

  The IPv6 fragment header is present

  when the source has received
  a packet too big ICMPv6 error message when the path cannot support
  the required minimum 1280-byte IPv6 MTU and is thus subject to
  translation

which is meaningless from the beginning, because the IPv6 ID is
32 bits long, from which it is impossible to generate a unique
IPv4 ID.
 
 None of which I am aware.

There should be. May I volunteer?

Masataka Ohta


Re: IPv6 networking: Bad news for small biz

2012-04-05 Thread Masataka Ohta
Margaret Wasserman wrote:

 Many internet-connected enterprises have been willing to pay
 extra money to have fixed IP addresses, and, worse, independent
 global routing table entries for multihoming, to reliably
 maintain the end to end transparency to reach their servers.
 
 Earlier comments on this list indicated that there are ~40K
 enterprises that have chosen to incur these costs.

I'm afraid the number is too large and will make the Internet
collapse, regardless of whether it is IPv4- or IPv6-based.

 How many enterprises have chosen to use IPv4 NAT instead?

Most of them, if I don't underestimate how large the Internet is.

All of them should be happy with IPv4 NAT that preserves
end-to-end transparency.

Masataka Ohta


Re: [pcp] Last Call: draft-ietf-pcp-base-23.txt (Port Control Protocol (PCP)) to Proposed Standard

2012-02-26 Thread Masataka Ohta
The IESG wrote:

 The IESG has received a request from the Port Control Protocol WG (pcp)
 to consider the following document:
 - 'Port Control Protocol (PCP)'
draft-ietf-pcp-base-23.txt  as a Proposed Standard

 The IESG plans to make a decision in the next few weeks, and solicits
 final comments on this action. Please send substantive comments to the
 ietf@ietf.org mailing lists by 2012-02-27.

The protocol lacks transaction IDs and is fatally broken.

That is, the protocol is expected to generate refresh request
packets and to receive response packets.

However, as all the request packets are expected to be identical,
it is impossible to match responses to their requests.

So, if requests are sent at 0, 5, 240, 245, 480 and 485 seconds
and responses with lifetime of 300 are received at 10, 250 and
490 seconds, it is impossible to know which request corresponds
to the response at 490 seconds.

If the response was generated for the request at 485 seconds,
the remaining lifetime of the response is 295 seconds.

However, if the response was generated for the request at 0
seconds, the lifetime of the response expired 190 seconds ago.

But without a transaction ID to match responses to requests, the
worst case must be assumed: the response must be interpreted as
having a lifetime that expired 190 seconds ago, which makes all
subsequent responses meaningless, because their lifetimes must
likewise be assumed to have expired.
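The ambiguity can be illustrated with the numbers above (a sketch; times in seconds):

```python
# Requests are byte-identical, so a response cannot be matched to
# any particular one of them.
request_times = [0, 5, 240, 245, 480, 485]   # seconds
LIFETIME = 300                               # granted by each response
response_time = 490

# Remaining lifetime if the response answered each possible request:
remaining = [t + LIFETIME - response_time for t in request_times]
print(remaining)        # [-190, -185, 50, 55, 290, 295]

best = max(remaining)   # 295 s left, if it answered the t=485 request
worst = min(remaining)  # expired 190 s ago, if it answered t=0
```

A safe client has no choice but to assume `worst`, i.e. an already-expired mapping.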

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread Masataka Ohta
Mark Andrews wrote:

 Brian already covered unconditional prefer-IPv6 was a painful lesson
 learned, and I'm not saying that those older systems did it right.

You learned a wrong lesson, then.

The essential problem is that there is only half-hearted support
for handling multiple addresses.

It is not an operational problem but a fundamental defect of the
protocols.

 I contend that OS are IPv6 ready to exactly the same extent as they
 are IPv4 ready.  This isn't a IPv6 readiness issue.  It is a
 *application* multi-homing readiness issue.  The applications do
 not handle unreachable addresses, irrespective of their type, well.

In part, it is an application problem. However, it is also an
IP layer problem.

 The address selection rules just made this blinding obvious when
 you are on a badly configured network.

The half-hearted address selection rules will keep causing this
kind of problem until the IPv6 specification is fundamentally
fixed.

 No one expect a disconnected IPv4 network to work well when the
 applications are getting unreachable addresses.  Why do they expect
 a IPv6 network to work well under those conditions?

With proper IP-layer support, which is lacking (meaning the IPv6
specification, and hence IPv6 hosts, are not ready to handle
multiple addresses), we could expect applications to work well
when one address among many works and the rest are unreachable.

Masataka Ohta

PS

IPv4, of course, is not ready to handle multiple addresses
properly, which causes some problems for multihomed hosts.

But it is not a serious problem because IPv4 hosts do not
have to have IPv6 addresses.
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Issues with prefer IPv6 [Re: Variable length internet addresses in TCP/IP: history]

2012-02-23 Thread Masataka Ohta
ned+i...@mauve.mrochek.com wrote:

 This definition of ready is operationally meaningless in many cases.

The meaningful question is whether we have to modify code or not.

If we have to, a host is not ready. And, if it is not an
implementation problem, protocols must be fixed.

If we don't have to, it's an operational issue.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-20 Thread Masataka Ohta
Bob Hinden wrote:

 ID/locator split, which I've been
 a proponent of for very many years, works a lot better with more bits,
 because it allows topological addressing both within and outside an
 organization.

 To confirm what your are saying about an ID/locator split in
 IPv6, that the other reason why we went with 128-bit address
 with a 64/64 split as the common case and defining IIDs
 that indicate if they have global uniqueness.  This creates
 a framework that an ID/locator split could be implemented.

I actually implemented such a system about 10 years ago
and it worked fine.

It was an experiment for hosts to use multiple IPv4 and IPv6
addresses, some of which may have ID/loc separation. DNS
was used for ID-loc mapping. Mobility was also supported
with multiple home locators and multiple foreign locators.

We did something related to end-to-end multihoming and happy
eyeballs.

ID/locator separation was good because it requires about half the
space to store multiple addresses sharing an ID.

Moreover, rewriting destination locators enables elegant
forwarding from home agents to mobile nodes without tunneling
(and the associated MTU tax), if transport checksums do not
involve locators.

But that's all. It was not interesting enough, so I abandoned
further work on it after the government funding period expired.

 Opinions vary if ID/locator split is useful, but we have a
 framework that would allow it without having to roll
 out another version of IP.  A win IMHO.

Assuming addresses must be 16 bytes long, it may be good to have
ID/loc separation.

However, if we have a choice, 8-byte addresses are much better,
because they are more compact than 16-byte addresses with ID/loc
separation.

Even with 8-byte addresses, we can rewrite destination addresses
if there is a destination header option (or a mandatory field)
carrying the original destination address, to be used to confirm
transport identity and checksums.

So, together with the confused failure of SLAAC, we can conclude
that IPv6 addresses should have been 8 bytes long.

Maybe, even with the current IPv6 packet format, routers could
look only at the first 8 bytes of addresses and ignore the latter
8 bytes (used as the original source/destination address for
address rewriting).

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: SEARS - Search Engine Address Resolution Service (and Protocol)

2012-02-16 Thread Masataka Ohta
Todd Glassey wrote:

 So SEARS is a method of replacing the DNS roots with a well-known
 service portal providing a Google or other SE based access model.  The
 session can interface with traditional HTTP or DNS-Lookup Ports to
 deliver content or addresses to a browser in the form of a HTTP redirection.

It's no different from a plain search engine with "I'm Feeling
Lucky" turned on.

 The protocol specification is almost done and is intended to make
 threats of attacks against the DNS roots less of an issue.

I heard that Anonymous is planning to attack the root servers.

Was it Google who hired them to promote SEARS? :-)

Anyway, the root servers are (or will be) anycast, with much less
centralized management than Google's servers, so they are more
resistant to attacks.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-16 Thread Masataka Ohta
Steven Bellovin wrote:

 Thus, IPv6 was mortally wounded from the beginning.
 
 The history is vastly more complex than that.  However, this particular 
 decision
 was just about the last one the IPng directorate made before reporting back to
 the IETF -- virtually everything else in the basic IPv6 design had already
 been agreed-to.

I understand that, unlike 64 bits, 128 bits enable MAC-based
SLAAC, full of state, which is as fatal as addresses of 32
hexadecimal characters.

 I don't think this was the wrong decision.

Isn't it obvious that, with the Internet's penetration of the
world today at far more than 1%, we don't need an address length
much longer than 32 bits?

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-15 Thread Masataka Ohta
Steve Crocker wrote:

 The only way variable length address would have provided a
 smooth transition is if there had been room to increase the
 length of the address after some years of use.

The bottom-up approach of extending the address length into the
port numbers has thus worked, is working, and will keep working,
with or without NAT.

If necessary, we can have new TCP, UDP and other transport
protocols with 32- or 48-bit port numbers without changing core
routers, though it requires minor modifications to transport and
application code.

Anyway, the current 48-bit space of IPv4 addresses plus 16-bit
port numbers will suffice for a decade or two.
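A quick back-of-the-envelope on that combined space, with illustrative figures of my own choosing:

```python
# Combined IPv4 address (32 bits) + port (16 bits) space:
endpoints = 2**32 * 2**16          # 2**48 transport endpoints
print(f"{endpoints:.3e}")          # ~2.815e+14

# Even 10 billion devices using 1000 ports each would consume only
# a few percent of it (illustrative figures, not a forecast):
used = 10_000_000_000 * 1000
print(f"{used / endpoints:.1%}")   # ~3.6%
```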

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-15 Thread Masataka Ohta
Steven Bellovin wrote:

 Scott, if memory serves you and I wanted the high-order 2 bits of the IPng
 address to select between 64, 128, 192, and 256-bit addresses -- and when
 we couldn't get that we got folks to agree on 128-bit addresses instead of
 64-bit, which is what had been on the table.

Thus, IPv6 was mortally wounded from the beginning.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-14 Thread Masataka Ohta

The more serious problem is that IPv6 people in IETF do
not admit IPv6 broken, which makes it impossible to fix
IPv6.


Make a draft, gather your supporters and take that discussion on
6man wg. I'm sure there are people open to consider any arguments on
what's wrong/or not.


Now, I'm tired of speaking with those who decided that IPv6
must be perfect.

They were warned about 10 years ago.

Nowadays, I'm telling operators how IPv6 is too broken to be
operational.


Either way, we're way passed changing any of the important parts of
IPv6. That has to be IPv6 v2 WITH backward compability to IPv6 (as we
currently know it).


A problem of IPv6, maybe the worst operationally, is that an IPv6
address is 16 bytes, hopeless to remember, because of the failed
attempt to have SLAAC.

As most operators are human, forget backward compatibility.

It is a lot easier for all the players to make IPv4 with NAT
end-to-end transparent.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-14 Thread Masataka Ohta
Brian E Carpenter wrote:

 I'm sorry, but *any* coexistence between RFC791-IPv4-only hosts and
 hosts that are numbered out of an address space greater than 32 bits
 requires some form of address sharing,

Sure.

 address mapping, and translation.

Not at all.

Realm Specific IP [RFC3102] is such an example, without any
mapping or translation. The abstract of the RFC states:

   This document examines the general framework of Realm Specific IP
   (RSIP).  RSIP is intended as a alternative to NAT in which the end-
   to-end integrity of packets is maintained.  We focus on
   implementation issues, deployment scenarios, and interaction with
   other layer-three protocols.

 It doesn't matter what choice we made back in 1994. Once you get to the
 point where you've run out of 32 bit addresses and not every node can
 support 32 bit addresses, you have the problem.

The only problem is that some people misinterpret it to mean that
we needed IPv6.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-14 Thread Masataka Ohta
Mark Andrews wrote:

 Happy eyeballs just points out problems with multi-homing in general.
 IPv4 has the *same* problem and sites spend 1000's of dollars working
 around the issue which could have been addressed with a couple of
 extra lines of code on the client side in most cases.

It's Brian Carpenter who refused to discuss such issues in
MULTI6 WG.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Let ARIN decide (was Re: Another last call for draft-weil)

2012-02-14 Thread Masataka Ohta
Bradner, Scott wrote:

 the IAB advised ARIN that such assignments were in the purview of the IETF

Then, isn't it enough for the IETF to conclude "let ARIN decide"?

Are there any objections to conclude so?

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-14 Thread Masataka Ohta
Randy Bush wrote:

 what silliness.  it will be used as rfc 1918 space no matter what the document
 says.

The difference is on how future conflicts can be resolved.

 nine years ago i was in bologna and did a traceroute out.  i was surprised
 to find that the isp was using un-announced us military space as rfc 1918
 space internal to their network.  this turns out to be common.

What if the military space is sold to the public and announced?

Who is forced to renumber and why?

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Variable length internet addresses in TCP/IP: history

2012-02-14 Thread Masataka Ohta
Brian E Carpenter wrote:

 With a fully backwards compatible transparent addressing scheme,
 a much larger fraction of the nodes would have switched to actively
 use IPv6 many years ago.

 Why? They would have needed updated stacks. The routers would
 have need updated stacks. The servers would have needed updated
 stacks. The firewalls would have needed updated stacks. The load
 balancers would have needed updated stacks. Many MIBs would have
 needed to be updated. DHCP servers would have needed to be updated.
 ARP would have needed to be updated, and every routing protocol.

With Realm Specific IP [RFC3102]:

   This document examines the general framework of Realm Specific IP
   (RSIP).  RSIP is intended as a alternative to NAT in which the end-
   to-end integrity of packets is maintained.  We focus on
   implementation issues, deployment scenarios, and interaction with
   other layer-three protocols.

what is necessary is a minor modification to the IPv4/transport
stack of new (but not existing) hosts and a minor extension to
DHCP.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt

2012-02-13 Thread Masataka Ohta
Martin Rex wrote:

 The problem of ISP not newly shipping CPE that is not IPv6 capable
 needs to be addressed by regulatory power (legistation),

That's how OSI failed.

 rather than by ignorance of the part of the IETF.

So will IPv6 fail, with the IETF acting as a regulatory power to
prohibit address space allocation.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Backwards compatibility myth [Re: Last Call: draft-weil-shared-transition-space-request-14.txt]

2012-02-13 Thread Masataka Ohta
Brian E Carpenter wrote:

 There were very specific reasons why this was not done. And it doesn't
 change the fact that an old-IP-only host cannot talk to a new-IP-only host
 without a translator. It is that fact that causes our difficulties today.

The fact is that an old-IP-only host can talk to a new-IP-only
host without a translator, if the new IP uses port numbers for
addressing/routing.

Though NAT does a similar thing with translation, the translation
is not essential and can be reversed at the end systems using NAT
information obtained through UPnP/PCP.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-13 Thread Masataka Ohta
Brian E Carpenter wrote:

 Sure, that's very common, but these devices are consumer electronics and
 will get gradually replaced by IPv6-supporting boxes as time goes on.

The problem is that the IPv6 specification is still broken in
several ways and not operational, so existing boxes must be
replaced again after the specification is fixed.

The more serious problem is that IPv6 people in IETF do
not admit IPv6 broken, which makes it impossible to fix
IPv6.

For an example, see my recent presentation at APOPS:

http://meetings.apnic.net/__data/assets/file/0018/38214/pathMTU.pdf

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-11 Thread Masataka Ohta

Nilsson wrote:


For which there is better use than prolonging bad technical solutions.


A problem is that IPv6 is a bad technical solution.

For example, its bloated address space is bad, ND full of
bloated useless features is bad, and multicast PMTUD, which only
causes ICMP implosions, is bad.


Address translation has set the state of consumer computing back severely.


No, not at all.


Do keep in mind that the real driver in IP technology is the ability
for end-nodes to communicate in a manner they chose without prior
coordination with some kind of protocol gateway. NAT and more so CGN
explicitly disables this key feature.


Wrong. NAT can be end-to-end transparent if hosts recognize the
existence of NAT and cooperate with NAT gateways to reverse the
address (and port) translation.

See RFC3102 and draft-ohta-e2e-nat-00.txt. I have confirmed that
the PORT command of an unmodified ftp client just works.

Though my draft assumes a special NAT gateway, I recently noticed
that a similar thing is possible with existing UPnP-capable NAT
gateways.


And this is not what the IETF should be doing. The IETF should seek
to maximise the technical capabilities of the Internet protocol
suite so that it may continue to enable new uses of the key feature,
ie. end-node reachability.


Yes. So, let's abandon IPv6, a bad technical solution, and
deploy NAT with UPnP or PCP (if designed properly) capabilities.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-10 Thread Masataka Ohta
Pete Resnick wrote:

 and can be used by other 
 people who build sane equipment that understands shared addresses can 
 appear on two different interfaces.

With the complicated functionality of NAT today, the only
practical approach to building such equipment is to make it a
double NAT:

  +-+   +-+
(a private space)-| NAT |---| NAT |-(same private space)
  +-+   +-+

The only problem is that we need another private address space,
used only between the element NATs within the double NAT.

Can the IETF allocate such an address space?

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Last Call: draft-weil-shared-transition-space-request-14.txt (IANA Reserved IPv4 Prefix for Shared Address Space) to BCP

2012-02-10 Thread Masataka Ohta
Pete Resnick wrote:

 and can be used by other
 people who build sane equipment that understands shared addresses can
 appear on two different interfaces.
 With so complicated functionality of NAT today, the only
 practical approach to build such equipment is to make it
 a double NAT
 
 Correct.

Note that the double NAT, here, is a *single* piece of equipment.

 Can IETF allocate such address?
 
 That's what this document is doing: Allocating address space to go 
 between 2 NATs.
 
 Maybe I don't understand your question?

My point is that those who argue against the proposed allocation
because of the capability of double-NAT equipment should argue
for the allocation of another space to enable development of that
double-NAT equipment.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: IPv6 not operational (was Re: Consensus Call: draft-weil-shared-transition-space-request)

2011-12-12 Thread Masataka Ohta
Toerless Eckert wrote:

 Not sure why rfc1981 PMTUD was never fixed.

Because IPv6 people believe multicast PMTUD MUST work.

RFC1981 even states:

   The local
   representation of the path to a multicast destination must in fact
   represent a potentially large set of paths.

so they actively bless multicast PMTUD, causing implosions.

 We did manage to get section 11.1 into rfc 3542 though. It's a little
 bit long due to committee design, but i was hoping it was sufficient to avoid
 the problem by default.

It is not a protection against malicious attempts, nor against
determined attempts by PMTUD believers, which means cautious
operators must still filter ICMPv6.

Considering that the destination address field of the inner
packet in an ICMPv6 error is located at bytes 73 to 88 (ignoring
the MAC header), which may be too deep to be filtered by
hardware, it is likely that ICMPv6 packet-too-big against unicast
packets will also be filtered.
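The offset arithmetic behind those byte positions, as a sketch assuming an outer 40-byte IPv6 header and an 8-byte ICMPv6 header ahead of the quoted packet:

```python
# 1-based byte positions of the inner (quoted) packet's destination
# address in an ICMPv6 error message, ignoring the MAC header:
OUTER_IPV6_HDR = 40  # fixed IPv6 header of the ICMPv6 packet itself
ICMPV6_HDR = 8       # type, code, checksum, MTU (packet too big)
INNER_DST_OFF = 24   # destination address offset in the inner header
ADDR_LEN = 16

first = OUTER_IPV6_HDR + ICMPV6_HDR + INNER_DST_OFF + 1  # 73
last = first + ADDR_LEN - 1                              # 88
print(first, last)  # 73 88
```

Hardware filters that only inspect the first few tens of bytes cannot reach that deep, which is the point being made.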

 If folks see IPv6 multicast  1280 as a
 real problem in deployments,

It is merely that IPv6, as is, is not operational, and operators
MUST violate RFC2463 by not generating ICMPv6 packet-too-big and
by filtering ICMPv6 packet-too-big.

It is, of course, a real problem for PMTUD, including unicast
PMTUD.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Consensus Call (Update): draft-weil-shared-transition-space-request

2011-12-07 Thread Masataka Ohta
Noel Chiappa wrote:

 I was suggesting them purely for infrastucture use, in (probably _very_
 limited) usage domains where their visibility would be over a limited scope,
 one where all devices can be 'pre-cleared' for using them.

More generally, class E should be used for unicast only when
operators are sure, statically and without dynamic investigation,
that all the equipment supports class E.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


IPv6 not operational (was Re: Consensus Call: draft-weil-shared-transition-space-request)

2011-12-03 Thread Masataka Ohta

Daryl Tanner wrote:


The IPv6 chickens and eggs discussion could (and probably will) go on
forever:

service provider -  no content



IPv6 is the right answer,


Wrong.

IPv6 is not operational, which is partly why most service
providers refuse it.

For example, to purposelessly enable multicast PMTUD, RFC2463
(ICMPv6) mandates that routers generate ICMPv6 packet-too-big
against multicast packets, which causes ICMPv6 packet implosions
and is not operational.

For further details, see my presentation at the last APNIC:

How Path MTU Discovery Doesn't work
http://meetings.apnic.net/__data/assets/file/0018/38214/pathMTU.pdf

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Consensus Call: draft-weil-shared-transition-space-request

2011-12-03 Thread Masataka Ohta
Mark Andrews wrote:

 224/10 could be made to work with new equipement provided there was
 also signaling that the equipment supported it.  That doesn't help
 ISP that have new customers with old equipment and no addresses.

Yes, it takes time.

However, 224/4 (or most of it) and 240/4 (except for
255.255.255.255) should be released for unicast use to reduce the
market price of IPv4 addresses.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Consensus Call: draft-weil-shared-transition-space-request

2011-12-03 Thread Masataka Ohta
Victor Kuarsingh wrote:

 However, 224/4 (or most of it) and 240/4 (except for 255.255.255.255)
 should be released for unicast uses to reduce market price on
 IPv4 addresses.

 I have not objection to this.  But anything that requires replacement of
 equipment only will have longer term benefit. (Built it, Ship it, Stock
 it, Sell it, put it in).

Remember that the current IPv6 is not operational because of
implosions of ICMPv6 packet-too-big generated against multicast
packets, which means it takes time to make ICMPv6 operational
before "Build it, Ship it, Stock it, Sell it, put it in".

 I would also hope that when we have an
 opportunity to change out a box, it's with a IPv6 capable one (although it
 does not help that many boxes on the retail shelf are still IPv4-only -
 where we are).

Thus, it is easier to expand the IPv4 address space using port
numbers, as specified in RFC3102:

   RSIP is intended as a alternative to NAT in which the end-
   to-end integrity of packets is maintained.

or something like that, recently called port-restricted IP.

 I wish to spend most of our time on IPv6 deployment (and only do what is
 necessary for IPv4 to keep the lights on).  Activity working on getting
 equipment to utilize 240/4 seems like a lot of effort.

The problem is that IPv6 is broken, which means its
deployment is meaningless until it is fixed.

The multicast PMTUD example above is just an example and
there are other problems in IPv6 specification.

 but I am not sure if the IETF is the right place to attempt and
 control market forces and the IPv4 Futures Market.

Instead, from my experience trying to fix the obvious multicast
PMTUD problem within the IETF, I'm sure the IETF is not the right
place to attempt to fix IPv6, which means IPv6 is hopeless.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: the success of MIME types was Re: discouraged by .docx was Re: Plagued by PPTX again

2011-11-29 Thread Masataka Ohta
t.petch wrote:

 You will be aware of the recent threads on apps-discuss about MIME types

The threads are about PPTX and DOCX, that is, file name
extensions, not MIME types, which demonstrates that MIME was not
necessary and uuencode would have been enough.

 If this were not true, then I believe that something such as
 text/plain would indeed be the basis of our discussion here,

The MIME type of choice, other than text/plain, is
application/octet-stream.

Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: LISP is not a Loc-ID Separation protocol

2011-10-29 Thread Masataka Ohta
Robin Whittle wrote:

 Hi Luigi (and other LISP people),

As one of the LISP people (I wrote a LISP interpreter and a
compiler for the 8080), I find your statement above irritating.
Masataka Ohta
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: LISP is not a Loc-ID Separation protocol

2011-10-29 Thread Masataka Ohta
Robin Whittle wrote:

 I mean no offence.  I try to keep my messages brief, and it would be
 tortuous to write the LISP protocol at every point in this discussion.

 In the context of the IETF, I think LISP means the protocol.  I was
 not discussing the LISP programming language at all.

You have demonstrated that the problem is what "the LISP people"
means within the IT community, including, but not limited to, the
IETF.

Thank you and I acknowledge that your demonstration was brief enough.

Masataka Ohta


Re: 240/4 unreservation (was RE: Last Call: draft-weil-shared-transition-space-request-03.txt (IANA Reserved IPv4 Prefix for Shared Transition Space) to Informational RFC)

2011-09-26 Thread Masataka Ohta
Frank Ellermann wrote:

 Maybe the IETF could agree that it won't use the former class E for
 any past, present, or future experiments.  Updating RFC 1112 (STD 5)
 or maybe RFC 1166, and then RFC 5735 for 6.25% of all windmills minus
 one.

Given that NAT can expand the space 100 or 1000 times, while
maintaining end to end transparency if you so wish, it's
not 6.25% but 625% or 6250%.
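The arithmetic behind those percentages can be sketched as follows (the
100x and 1000x multipliers are the ones assumed above):

```python
# Back-of-the-envelope arithmetic for the 6.25% vs. 625%/6250% claim.
total_ipv4 = 2 ** 32
class_e = 2 ** 28          # 240.0.0.0/4 is 1/16 of the IPv4 space

plain_share = class_e / total_ipv4
assert plain_share == 0.0625   # 6.25%

# If port-aware NAT lets each address serve roughly 100 or 1000 hosts:
for multiplier in (100, 1000):
    effective = plain_share * multiplier
    print(f"x{multiplier}: {effective:.2%} of today's unicast space")
```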

That is, unreservation is worth doing, and as a first step it is
a good idea to use some of the space for shared transition, where
all the equipment given the unreserved addresses can be expected
to handle the unreservation.

However, as the unreserved space is still precious, only a
small part of it should be allocated for shared transition.

Masataka Ohta


Re: 240/4 unreservation (was RE: Last Call: draft-weil-shared-transition-space-request-03.txt (IANA Reserved IPv4 Prefix for Shared Transition Space) to Informational RFC)

2011-09-26 Thread Masataka Ohta
Frank Ellermann wrote:

Oops, I failed to send the following to the list.

 Updating RFC 1112 (STD 5)
 or maybe RFC 1166, and then RFC 5735 for 6.25% of all windmills minus
 one.

Updating RFC1112 is not necessary because, even though it says:

   * a datagram whose source address does not define a single
     host -- e.g., a zero address, a loopback address, a
     broadcast address, a multicast address, or a Class E
     address.

Class E is only an example there and, more interestingly, the rule is
silently ignored by ICMPs generated against private use addresses,
which do not define single hosts.

Updating the RFC can be as simple as follows:

   240.0.0.0/4 - This block, formerly known as the Class E address
   space [RFC1112] and reserved for future use [RFC5735], is
   now allocated for use in IPv4 unicast address assignments with
   a default netmask of 0x.

   The addresses in the block are no longer special use IPv4
   addresses.

   There are two exceptions to this.

   One exception is the limited broadcast
   destination address 255.255.255.255.  As described in [RFC0919]
   and [RFC0922], packets with this destination address are not
   forwarded at the IP layer.

   Another exception is the shared transition block
   of 240.0.0.0/12, which is set aside for use in private ISP
   (not end user) networks. Its intended use is to be documented
   in a future RFC. As will be described in that RFC, addresses
   within this block do not legitimately appear on the public
   Internet. These addresses can be used without any coordination
   with IANA or an Internet registry.
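A sketch of the proposed carve-up, using Python's ipaddress module; the
240.0.0.0/12 shared block and the classification labels are illustrative,
not normative:

```python
import ipaddress

# Illustrative model of the proposal: 240.0.0.0/4 unreserved for unicast,
# 240.0.0.0/12 set aside as a shared transition block, and
# 255.255.255.255 kept as limited broadcast.
FORMER_CLASS_E = ipaddress.ip_network("240.0.0.0/4")
SHARED_TRANSITION = ipaddress.ip_network("240.0.0.0/12")
LIMITED_BROADCAST = ipaddress.ip_address("255.255.255.255")

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if ip == LIMITED_BROADCAST:
        return "limited broadcast (never forwarded)"
    if ip in SHARED_TRANSITION:
        return "shared transition (private ISP use)"
    if ip in FORMER_CLASS_E:
        return "ordinary unicast (newly unreserved)"
    return "outside 240/4"

# 240.0.0.0/12 covers 240.0.0.0 through 240.15.255.255:
print(classify("240.1.2.3"))        # shared transition (private ISP use)
print(classify("241.0.0.1"))        # ordinary unicast (newly unreserved)
```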

Masataka Ohta


Re: TSVDIR review of draft-ietf-nfsv4-federated-dns-srv-namespace

2011-09-13 Thread Masataka Ohta
Keith Moore wrote:

 Why use SRV records at all if you also need TXT records to convey
 part of the information needed by apps (and thus, have to
 do multiple queries for the same level of information)?  Why not
 just encode all of the information in TXT records?

I fully agree with you.
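Moore's point can be illustrated with a toy parser: the SRV tuple plus
whatever extra attributes the draft carries in TXT could travel in a
single TXT record, needing only one query. The key=value encoding below
is hypothetical:

```python
# Toy illustration: one TXT record carrying both the SRV-style tuple
# (priority, weight, port, target) and an extra attribute ("fsid" here,
# a made-up name), instead of separate SRV and TXT lookups.
def parse_combined_txt(txt: str) -> dict:
    """Parse a hypothetical 'prio=0 weight=5 port=2049 target=...' record."""
    fields = dict(item.split("=", 1) for item in txt.split())
    for key in ("prio", "weight", "port"):
        fields[key] = int(fields[key])
    return fields

rec = parse_combined_txt("prio=0 weight=5 port=2049 target=nfs1.example.net fsid=abc")
print(rec["target"], rec["port"])  # nfs1.example.net 2049
```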

Masataka Ohta


Re: Last Call: draft-ietf-intarea-ipv6-required-01.txt (IPv6 Support Required for all IP-capable nodes) to Proposed Standard

2011-08-26 Thread Masataka Ohta
Before requiring IPv6 support, it is necessary to revise the obviously
broken parts of IPv6.

For example, ICMPv6 generated against multicast packets should be
forbidden, or ICMPv6 implosions will occur. That will lead ISPs to
filter ICMPv6, including but not limited to that against multicast,
which means PMTUD won't work.
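A toy model of the implosion: one too-big multicast datagram can draw a
Packet Too Big reply from the path toward each receiver, all converging
on the source:

```python
# Toy model of the multicast PMTUD implosion: a single too-big multicast
# datagram can trigger one ICMPv6 Packet Too Big reply per receiver whose
# path has a smaller MTU, all converging on the single source.
def ptb_messages(receivers: int, fraction_below_mtu: float = 1.0) -> int:
    """Upper bound on Packet Too Big replies for one multicast datagram."""
    return int(receivers * fraction_below_mtu)

for group_size in (10, 1_000, 100_000):
    print(f"{group_size} receivers -> up to {ptb_messages(group_size)} PTB replies")
```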

Another example is lack of guaranteed value for payload size.

RFC791 specifies:

The number 576 is selected to allow a reasonable sized data block to
be transmitted in addition to the required header information.  For
example, this size allows a data block of 512 octets plus 64 header
octets to fit in a datagram.  The maximal internet header is 60
octets, and a typical internet header is 20 octets, allowing a
margin for headers of higher level protocols.

and DNS just sends 512B messages (520B including the UDP header,
which should be a mistake but is safe, as no one uses IPv4
options).

However, no such size is specified for IPv6, because arbitrarily
long header options may be inserted. Note that some header options,
such as the mobility ones, are inserted by the IP layer without
application control.

Thus, applications like DNS cannot specify, as RFC1035 does:

   Messages carried by UDP are restricted to 512 bytes (not
   counting the IP or UDP headers).
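The numbers quoted above can be checked mechanically; note that with the
maximal 60B IPv4 header, the 512B DNS limit is in fact 4B too large,
which is the "mistake" mentioned:

```python
# Arithmetic behind the RFC 791 / RFC 1035 numbers quoted above.
IPV4_MIN_REASSEMBLY = 576
IPV4_HDR_TYPICAL, IPV4_HDR_MAX = 20, 60
UDP_HDR = 8
DNS_MAX_UDP = 512

# With a typical 20B IPv4 header there is comfortable room:
assert IPV4_MIN_REASSEMBLY - IPV4_HDR_TYPICAL - UDP_HDR >= DNS_MAX_UDP  # 548 >= 512

# With the maximal 60B header (IPv4 options in use) the guarantee is
# actually violated by 512B DNS messages -- safe only because no one
# uses IPv4 options:
print(IPV4_MIN_REASSEMBLY - IPV4_HDR_MAX - UDP_HDR)  # 508, i.e. 4B short of 512
```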

Masataka Ohta

PS

You have been warned.


Re: [precis] Path towards a multilingalization IUse referent

2011-08-22 Thread Masataka Ohta
Worley, Dale R (Dale) wrote:

 But in my social context, when someone argues for a universal
 communication system, they usually continue with a demand that
 everybody speaks English.  And I don't think that's what we intend.
 (-- this is a joke)

You are confusing characters and languages.

Plain DNS is already multilingual, because all the languages in the
world can be represented in ASCII characters.

Adding fancy characters means most international users can't
recognize them, which works against internationalization.

That is, plain DNS with ASCII is the most internationalized DNS.
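This is visible in how IDNA works: non-ASCII labels are mapped back to
ASCII (punycode) before they reach the DNS, so ASCII remains the wire
format either way. A minimal sketch using Python's built-in idna codec:

```python
# Non-ASCII DNS labels are encoded to ASCII ("punycode", IDNA) before
# they ever reach the DNS protocol, so the wire format stays ASCII.
label = "bücher"
wire = label.encode("idna")
print(wire)  # b'xn--bcher-kva'
assert wire.decode("ascii") == "xn--bcher-kva"
```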

Masataka Ohta


Re: 6to4 damages the Internet (was Re:

2011-07-31 Thread Masataka Ohta
Mark Atwood wrote:

 Moreover, unlike nat64, nat44 can be fully transparent end to end.

 Some people may consider it a feature that only incumbents with power
 and money can usefully call listen(), and that useful user to user
 activities requires the cooperation of an intrusive 3rd party, while
 mere users and upstarts can only call connect().

 It is, in fact, getting really hard to avoid assuming malice on the
 part of people who want to nail the world to the nat44 cross.

Isn't port forwarding over legacy nat44 enough for you?

For me, port forwarding is not enough because it lacks
end to end transparency.

For example, you can't use the PORT command of your ftp
client behind legacy NAT.
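The PORT failure can be sketched concretely: the client encodes its own
address into the command payload, which a NAT without an FTP ALG never
rewrites. The helper below is illustrative:

```python
# Why ftp's PORT command breaks behind a legacy NAT: the client writes
# its own (private) address into the application payload, and the NAT
# does not rewrite payloads unless it has an FTP ALG.
def port_command(ip: str, port: int) -> str:
    """Build an RFC 959 PORT command: h1,h2,h3,h4,p1,p2."""
    p_hi, p_lo = divmod(port, 256)
    return "PORT " + ",".join(ip.split(".") + [str(p_hi), str(p_lo)])

# Behind legacy NAT, the client only knows its private address:
print(port_command("192.168.0.5", 50000))
# -> PORT 192,168,0,5,195,80
# The server on the public side cannot connect back to 192.168.0.5,
# so active-mode ftp fails.
```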

See draft-ohta-e2e-nat-00.txt for how nat44 can be
transparent end to end.

OTOH, nat64 can not have end to end transparency.

Masataka Ohta


Re: 6to4 damages the Internet (was Re:

2011-07-29 Thread Masataka Ohta
Philip Homburg wrote:

 I think that would have been a much better use of these bits than simply
 storing the ethernet address there.

The IPv6 address was 8B (when the proposal was SIP) and should have
stayed 8B, but was extended to 16B to store an Ethernet address, with
the wrong reasoning of RFC1715, only to make IPv6 inoperable.

At that time it was 10B+6B, but after I pointed out that the IEEE1394
MAC is 8B long, it became 8B+8B.

 But anyhow, it should be possible to do that with a destination option which is
 then inspected by some middle box that makes a routing decision. But
 that would require a lot of changes to retrofit it in today's operating
 systems.

That's no different from legacy NAT, which violates the end to end
principle. For example, just as with legacy NAT, the transport checksum
depends on the address family and address actually used. Thus, the
transport checksum must be recalculated by the middle box that changes
the address family.

 That would require the dns64 box to do destination selection. Possible, but
 maybe also tricky to keep a dns resolver informed about the current state of
 the network.

That's guaranteed to be impossible by the end to end argument:

   The function in question can completely and correctly be
   implemented only with the knowledge and help of the
   application standing at the end points of the communication
   system. Therefore, providing that questioned function as a
   feature of the communication system itself is not possible.

Destination address selection is the function in question and
complete and correct implementation by middle boxes is just
impossible.

The only approach for the address selection function is to do
it at the end systems using knowledge and help of the end
systems, which requires end systems have knowledge on
default free global routing tables.

Masataka Ohta


Re: 6to4 damages the Internet (was Re: draft-ietf-v6ops-6to4-to-historic (yet again))

2011-07-28 Thread Masataka Ohta
Philip Homburg wrote:

 I think the problem is that we don't know how to do 'proper' address
 selection.

I know and it's trivially easy.

 It would be nice if 5 or 10 years ago there would have been a good
 standard to do address selection.

11 years ago in draft-ohta-e2e-multihoming-00.txt, I wrote:

   End systems (hosts) are end systems. To make the end to end principle
   effectively work, the end systems must have all the available
   knowledge to make decisions by the end systems themselves.

   With regard to multihoming, when an end system want to communicate
   with a multihomed end system, the end system must be able to select
   most appropriate (based on the local information) destination address
   of the multihomed end system.

which means an end system should have a full routing table, whose IGP
metrics tell the end system which address of its multihomed peer is
best. The full routing table should, and can, of course, be small.
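The selection described in the draft can be sketched as longest-prefix
match plus metric comparison over a (small) full routing table; the
table contents below are made up for illustration:

```python
import ipaddress

# Sketch of end-system destination address selection from a full routing
# table: longest-prefix match each candidate address of the multihomed
# peer, then pick the address with the best IGP-style metric.
# The routes and metrics are invented for illustration.
ROUTES = {
    ipaddress.ip_network("192.0.2.0/24"): 10,    # cost via provider A
    ipaddress.ip_network("198.51.100.0/24"): 3,  # cost via provider B
}

def metric(addr: str):
    ip = ipaddress.ip_address(addr)
    matches = [(net.prefixlen, m) for net, m in ROUTES.items() if ip in net]
    # longest prefix wins; its metric is the route cost
    return max(matches)[1] if matches else float("inf")

def best_destination(candidates):
    """Pick the peer address with the lowest routing metric."""
    return min(candidates, key=metric)

print(best_destination(["192.0.2.7", "198.51.100.7"]))  # 198.51.100.7
```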

The approach is totally against node/router separation of IPv6 to
make routers, the intermediate systems, a lot more intelligent
than nodes, the end systems, which is partly why IPv6 is hopeless.

Masataka Ohta


Re: 6to4 damages the Internet (was Re: draft-ietf-v6ops-6to4-to-historic (yet again))

2011-07-28 Thread Masataka Ohta
Philip Homburg wrote:

 which means an end system should have a full routing table, IGP
 metrics in which tell the end system what is the best address of
 its multihomed peer. Full routing table should and can, of course,
 be small.
 
 Even in the unlikely case that it would be feasible to give every host a
 complete copy of the DFZ routing table...

With RFC2374, DFZ of IPv6 has at most 8192 entries.
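The figure follows from the 13-bit Top-Level Aggregator field in
RFC 2374's aggregatable address format:

```python
# Where the "at most 8192 entries" figure comes from: RFC 2374's
# aggregatable global unicast format gives the Top-Level Aggregator
# (TLA) field 13 bits, bounding the default-free zone.
TLA_BITS = 13
print(2 ** TLA_BITS)  # 8192 possible top-level routes
```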

 That still would leave a lot of issues open...
 1) End-to-end latency. Maybe some future generation BGP provides that, but
 that doesn't help now.

Your requirement can be fair only with a routing protocol
supporting latency based routing for *an* address with
*multiple* paths to its destination.

There is no point in having latency based selection among
multiple paths to the destination only when the destination
has multiple addresses.

 2) For 6to4, the use of anycast. You probably need a link-state routing
 protocol to allow a host to figure out which relays are going to be used on
 a given path.

With anycast, you can use only a single relay. Instead, you can
compare metrics between IPv4 and IPv6 addresses of a host.

 3) Filters in firewalls. I'd love to see a routing protocol that reports the
 settings of all firewalls in the world :-)

Are you saying filtering of firewalls can be disabled by proper
address selection?

 4) Other performance metrics, like jitter, packets loss, etc.

See 1).

 Maybe you can do some experiments and report on how well your draft works for
 deciding when to prefer a 6to4 address over IPv4.

A problem is that there is no point in sticking to IPv6,
which is so badly broken.

But, it's not my problem.

Masataka Ohta



Re: 6to4 damages the Internet (was Re:

2011-07-28 Thread Masataka Ohta
Cameron Byrne wrote:

 The primary reason why the IPv4 - IPv6 transition is so painful
 is that it requires everyone and everything to become multi-homed
 and every software to perform multi-connect, even though most
 devices actually just have a single interface.

It was not a problem, because IPv6 was designed to be able to
handle multiple addresses properly with a single interface.

 It would be so much easier if hosts on the public internet could
 use one single IPv6 address that contains both, the IPv6 network prefix
 and the IPv4 host address, and then let the network figure out whether
 the connect goes through as IPv4 or IPv6 (for IPv6 clients).

 This is largely (not entirely)  achieved with nat64 / dns64.

Assuming NAT, we don't need IPv6.

Moreover, unlike nat64, nat44 can be fully transparent end to end.

So, why do we have to be bothered by 6to4?

Masataka Ohta


Re: 6to4 damages the Internet (was Re: draft-ietf-v6ops-6to4-to-historic (yet again))

2011-07-27 Thread Masataka Ohta
Keith Moore wrote:

 I see clear evidence that 6to4 is damaging the Internet and although there are
 those who can use it without causing damage, I believe that the principle is
 'First, do no harm'

 I put the word feature in quotes because this can be a pain in the a** 

It means it's IPv6 which is damaging the Internet.

 Introducing IPv6 - any kind of IPv6 - into a world of hosts that 

A revised IPv6 and transport which can properly handle multiple
addresses can happily coexist with IPv4.

However, as IPv6 is totally broken, it is better to start with
a clean slate than to revise it. Or, more practically, just stick
to IPv4.

 Nothing in IPv4 prohibited hosts from having multiple addresses

IPv4 with revised transport is far better than the current IPv6
and transport.

 And this is an explicitly chosen architectural feature of IPv6.

which is partly why IPv6 is broken and harmful. Proper support
of multihoming can be done only at the transport layer and above.

Masataka Ohta


Re: 6to4 damages the Internet (was Re: draft-ietf-v6ops-6to4-to-historic (yet again))

2011-07-27 Thread Masataka Ohta
Keith Moore wrote:

 It means it's IPv6 which is damaging the Internet.
 
 Except that the (v4) Internet is already doing its best to damage
 itself.   So the choice is between a thoroughly brain-damaged
 v4 Internet that is continually getting worse, and a somewhat
 less brain-damaged v6 Internet that has at least some chance
  of having a few things fixed

You are wrong. While IPv4 is demonstrably usable, IPv6 along
with ND is so bloated that it is totally unusable.

For example, packet too big for multicast PMTUD causes ICMPv6
implosions which may be used for DoS.

While IPv4 guarantees that a 516B payload is receivable by any host,
IPv6 can't. Because of indefinitely long header options, some of
which (e.g. options for mobility) are inserted without
transport/application control, there is no room, not even 1B,
reserved for payload.

And the worst problem is that the 16B address, thanks to the poor
estimation of RFC1715, is impossible to remember.

Masataka Ohta


Re: 6to4v2 (as in ripv2)?

2011-07-26 Thread Masataka Ohta
Brian E Carpenter wrote:

 Since 6to4 is a transition mechanism it has no long term future
 *by definition*.

By observation, the transition has been continuing for 16 years,
ever since RFC1883.

Masataka Ohta



Re: [hybi] Last Call: draft-ietf-hybi-thewebsocketprotocol-10.txt (The WebSocket protocol) to Proposed Standard

2011-07-25 Thread Masataka Ohta
Willy Tarreau wrote:

 What are you saying ? Your browser embeds the resolver as a library, so when
 I'm saying that the browser has to retry, I mean the resolver part of the
 browser has to retry.

You should mean that the browser can control the behavior of the
resolver.

If the resolver in the browser retries within a hundred ms:

 So yes, it can take several seconds to just resolve a host and then
 only a few hundreds of ms to retrieve
 the objects. I've observed it.

won't happen.
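The retry policy under discussion can be sketched as follows;
resolve_once stands in for a real DNS query, and the 100 ms per-try
timeout is the figure from the text:

```python
# Sketch of the retry policy under discussion: if the stub resolver in
# the browser retransmits after ~100 ms instead of a multi-second
# timeout, one lost UDP query no longer costs whole seconds.
# resolve_once is a stand-in for a real DNS query function.
def resolve_with_retry(resolve_once, attempts=5, per_try_timeout=0.1):
    for _ in range(attempts):
        result = resolve_once(timeout=per_try_timeout)
        if result is not None:
            return result
        # real code would retransmit the query here
    raise TimeoutError(f"no answer after {attempts} tries")

# Simulate a resolver whose first two queries are lost:
calls = {"n": 0}
def flaky(timeout):
    calls["n"] += 1
    return "192.0.2.1" if calls["n"] >= 3 else None

print(resolve_with_retry(flaky))  # 192.0.2.1, on the third try
```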

Masataka Ohta


Re: [hybi] Last Call: draft-ietf-hybi-thewebsocketprotocol-10.txt (The WebSocket protocol) to Proposed Standard

2011-07-24 Thread Masataka Ohta
Hector Santos wrote:

 A Major Application will offer all services necessary for the customer 
 to leverage. They are not going to eliminate ftp just because the 
 developer likes http better or wants customers to switch to http. Even 
 then, where I have seen a history of people using a http link, I have 
 also seen many change back, if only to help balance or spread loads.

I'm afraid you are not distinguishing between providers of intermediate
infrastructure and competing developers of various software used
on end systems. The distinction between them is essential to
understanding the Internet.

Roy T. Fielding wrote:

 HTTP would not, cannot,
 and never will benefit from SRV even if we had a magic wand that
 could deploy it on all browsers.  SRV simply doesn't fit the Web
 architecture.

SRV is a tool for port based real/virtual hosting, which is why its
usefulness has increased with IPv4 address exhaustion.
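Port based hosting via SRV rests on RFC 2782's priority/weight server
selection, which can be sketched as follows (the records are made up):

```python
import random

# Sketch of RFC 2782-style SRV server selection: lowest priority wins,
# and targets of equal priority are chosen with probability proportional
# to weight. This is what lets one service name map to many host:port
# targets, i.e. port based virtual hosting. Record data is invented.
RECORDS = [
    # (priority, weight, port, target)
    (10, 60, 8080, "a.example.net"),
    (10, 40, 8081, "b.example.net"),
    (20, 0, 80, "backup.example.net"),
]

def pick(records):
    lowest = min(r[0] for r in records)
    pool = [r for r in records if r[0] == lowest]
    total = sum(r[1] for r in pool) or 1
    roll = random.uniform(0, total)
    for rec in pool:
        roll -= rec[1]
        if roll <= 0:
            return rec
    return pool[-1]

prio, weight, port, target = pick(RECORDS)
print(f"connect to {target}:{port}")
```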

Masataka Ohta

