Re[6]: IPv6 addressing limitations (was national security)

2003-12-04 Thread Anthony G. Atkielski
[EMAIL PROTECTED] writes:

 If you know of a better way than BGP, feel free to suggest it ...

I've described variable-length addresses in the past.  Essentially a
system like that of the telephone network, with addresses that can be
extended as required at either end.  Such addressing allows unlimited ad
hoc extensibility at any time without upsetting any routing already in
place.
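The telephone-style scheme described above can be sketched as longest-prefix matching over variable-length digit strings (an editor's toy illustration, not from the post; all route names are made up). Note how adding a more specific prefix later leaves existing routes untouched:

```python
# Toy sketch: longest-prefix routing over variable-length,
# telephone-style addresses. Routes are digit-string prefixes;
# extending the plan at either end just adds new prefixes.

def lookup(routes, address):
    """Return the next hop for the longest matching prefix, or None."""
    best = None
    for prefix, next_hop in routes.items():
        if address.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, next_hop)
    return best[1] if best else None

routes = {
    "1": "gw-north",     # coarse route
    "1212": "gw-nyc",    # more specific route added later, ad hoc
    "44": "gw-uk",
}

assert lookup(routes, "12125551234") == "gw-nyc"
assert lookup(routes, "15105551234") == "gw-north"
assert lookup(routes, "81312345") is None
```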




Re: Re[6]: IPv6 addressing limitations (was national security)

2003-12-04 Thread Johnny Eriksson
Anthony G. Atkielski [EMAIL PROTECTED]:

 [EMAIL PROTECTED] writes:
 
  If you know of a better way than BGP, feel free to suggest it ...
 
 I've described variable-length addresses in the past.  Essentially a
 system like that of the telephone network, with addresses that can be
 extended as required at either end.  Such addressing allows unlimited ad
 hoc extensibility at any time without upsetting any routing already in
 place.

You can start designing the ASICs now.  It won't be easy.

--Johnny



Re: IPv6 addressing limitations (was national security)

2003-12-04 Thread Masataka Ohta
Anthony G. Atkielski;

I've described variable-length addresses in the past.  Essentially a
system like that of the telephone network, with addresses that can be
extended as required at either end.  Such addressing allows unlimited ad
hoc extensibility at any time without upsetting any routing already in
place.
Unlimited? The limitation on the public part is 20 digits.

Ad hoc extension beyond the hardware-supported length of the time will
fatally hurt performance.
Masataka Ohta





Re[8]: IPv6 addressing limitations (was national security)

2003-12-04 Thread Anthony G. Atkielski
Johnny Eriksson writes:

 You can start designing the ASICs now.  It won't be easy.

It worked with Strowger switches and crossbar mechanical exchanges; why
would it be more difficult with ASICs?




Re[2]: IPv6 addressing limitations (was national security)

2003-12-04 Thread Anthony G. Atkielski
Masataka Ohta writes:

 Unlimited? The limitation on the public part is 20 digits.

That's just a matter of programming these days.

 Ad hoc extension beyond the hardware-supported length
 at that time will fatally hurt performance.

What hardware limits numbers to 20 digits today?




Re: Ietf ITU DNS stuff III

2003-12-04 Thread Dan Kolis
Franck said:
Well to come back to my original comment, is that IETF, IANA and ICANN
by being individual members organisations do not have the front of
ITU, which is unfortunate as the Internet is not being done in ITU.
Governments have to understand that and for that dissociate themselves
from the old telco concept...

Interesting point. IETF, IANA and even (maybe) ICANN should have a banner
advertising program, so many/most/nearly all websites have an anchor/link to
a constituency web presence explaining where the Internet came from.

You people on the list that represent big money... Cisco, Motorola, Juniper,
etc.: if the ITU gets into this, the pace of innovation will cease. I mean, they
like H.323, not SIP, and X.400 email. So this will materially hurt your business.

Here's what they will do if they're allowed to: make pacts with federal
governments (like the GSA and the European Union) to only buy stuff conforming
to their standards, which evolve as slowly as possible and are designed to
make only incremental investments in hardware likely.

So... the big contracts are pulled. Nowadays, the civilian pull is pretty
big, so this isn't a full stop. I mean, Linksys cares far more about what the
buyer thinks at Wal-Mart than about the D.O.D.

But at some level, this (proposed) string-pulling will hurt network advancement.

So it's worth developing a paid ad campaign, but hopefully most if not all
of the media should be on the web itself. Of course, a paper sack of unmarked
bills always helps when dealing with professional politicians.

This is totally a hardcore "I told you so" issue. I hope I'm wrong, but if
it plays out badly you will think: "Dad-burn-it! He was right back in 2003!"

Regards,
Dan




An apology of sorts

2003-12-04 Thread Dan Kolis
Hi

One paragraph to apologize for being aggressive about the ITU. So much
comes out of them as a group that is necessary and excellent that I'm sorry to
be critical of their proposed increased role in the Internet. Stuff like AC-3
sound, or the WARC process, is good work. It's not the people that slow it all
down, it's the process of just too much decision making by
consensus. Did you ever hear why ATM got 53-byte cells? They just
averaged a bunch of competing proposals. Too much consensus makes things
less functional. The RFC process is odd but seems to do the job.




Re: Ietf ITU DNS stuff

2003-12-04 Thread Mike S
At 07:30 PM 12/3/2003, Dean Anderson wrote...
There are, though, good reasons to have some government controls on
telecom.  Whether these controls are too excessive or too lax is not up to
ICANN or the ITU.  I can think of cases where some good has come of it.
E911, for example. Radio, TV, cellphone allocations. Ham Radio licences.
If license-free wireless operation weren't restricted in power, few people
would be able to use 802.11 because one company would be broadcasting at
hundreds of watts, etc.

None of what you mention is even remotely comparable to the Internet. RF spectrum is a 
naturally shared, limited medium. Because physical law cannot be changed, manmade laws 
must be used to regulate it for efficient use.

No such case can be made for the Internet, which is not bounded in either bandwidth or 
number of connections in any practical sense. It is also not something which can be 
subjected to any sort of control, as it is not a thing. The Internet is strictly an 
intellectual construct, nothing more. There is nothing physical or real to control. 
It's a bunch of network operators who have agreed to interconnect using agreed-upon 
protocols. 

Sure, some governments can try to control some of the physical media which the 
Internet makes use of, but getting around that is simply a matter of reconfiguration. 




Re: Ietf ITU DNS stuff III

2003-12-04 Thread jfcm
On 06:27 04/12/03, Paul Vixie said:
there's plenty to worry about wrt the big boys controlling things, but the 
internet is definitionally and constitutionally uncontrollable.  yay!
This seems untrue in operational terms, if I consider the USG's relations
with the networks.
It sounds to me like talking about a serial killer, if you consider the
impact of the Internet on real people's lives.

I am afraid it is also technically extremely confusing. The missing subjects
and missing URLs in
http://www.iab.org/documents/resources/architectural-issues.html say a lot
to non-internauts trying to understand it. Unless you might have a better
focal portal?

Right now, many Governments use http://whitehouse.gov/pcipb as an entry
point into Internet issues.

thank you.
jfc





Re: national security

2003-12-04 Thread jfcm
Dear Mr. Lindqvist,
I am afraid I do not understand some of the points you try to make. I will 
give basic responses, please do not hesitate to elaborate.

On 21:27 02/12/03, Kurt Erik Lindqvist said:
 The post-KPNQwest updates are a good example of what Govs do not want
 anymore.
I can't make this sentence out. Do you mean the demise of KPNQwest?
In that case, please explain. And before you do: I probably know more
about KPNQwest than anyone else on this list, with a handful of exceptions
that were all my colleagues doing the IP Engineering part with me. Please
go on...
I am referring (post-KPNQwest) to the reference management lesson ICANN
gave concerning root management when the 66 ccTLD secondaries supported by
KPNQwest were to be updated. No one at many ccTLDs, and Govs, will forget it.

 Consider the French (original) meaning of gouvernance. For networks
 it would be net keeping. Many ICANN relational problem would
 disappear.
Ok, enough of references to France/French/Europe. I was born and grew
up in Finland, I have lived in Germany and the Netherlands
for 6-36 months at a time, I have lived in Sweden for 9 years and I have a
residence in Switzerland. I have worked on building some of the largest Internet
projects in Europe and the largest pan-European networks. Even with
governments trying to meet their needs. So I should be the perfect
match for what you are trying to represent. And I just don't buy any of
your arguments. Sorry.
I suppose that you are living in the French-speaking part of Switzerland, then.
Maybe people there do not have a common command of the XIIIth-century French
of the North of France (where the word comes from) or of the current Senegalese
administration (where the word is in current legal use)?

 What would be the difference if the ccNSO resulted from an MoU? It
 would make it possible to help/join with ccTLDs, and RIRs, over a far more
 interesting ITU-I preparation. I suppose RIRs would not be afraid an
 ITU-I would not be here 2 years from now.
As someone who is somewhat involved in the policy work of the RIRs, I
really, really, really want you to elaborate on this.
Glad you do. I keep your entries to simplify the reading.

I just fail to see this. What is it with the ITU that will give us

   a) More openness? How do I as an individual impact the ITU process?
This is not the topic (I come initially from a national point of view), and
not to discuss but to listen.

But this is also a separate IETF issue. As someone deeply involved for years in
@large issues (ICANN), and far longer in political, public, corporate
technology-development network issues, and having shared for some years
in the ITU process (at that time CCITT), I think I will say "Yes".

1. As a user I have no impact on IETF or ICANN. I do not even get heard.
2. But (and with a big "but": until ITU adapts and creates an "I" sector for
us) ITU has the structures and procedures (Focus Groups and Members-called
meetings) to do just that.

You may have studied/shared in the WSIS and observed the way it works?

   b) More effectiveness and a faster adoption rate?
Probably yes. For a simple reason: the Internet is just another technology to
support users' data communications needs. I may find faster, better,
parallel solutions elsewhere. Competition fosters speed and quality, or
death. As a user I am Darwinian about the processes I use.

   c) A better representation of end-user needs?
Certainly. This is a recurring issue. Quote me the way IETF listens to
end-users' needs. I have been flamed enough as a user representative to know
it. And don't tell me "who do you represent?" or I will bore everyone
responding. This thread shows it. As a user I raised a question. Responses:

- the question is disputed. I learned long ago that questions are never
stupid, but responses may be.
- a question asked back to me: who are you? I appreciate that you may warn me
about KPNQwest to spare us a troll's response. But I wonder why the author
would have any impact on a new question.


 The lack of users networks. The multiorganization TLDs Jerry
 introduced as a reality we started experiencing. Just consider that
 the large user networks (SWIFT, SITA, VISA, Amadeus, Minitel, etc.)
 started before 85. OSI brought X.400. CERN brought the Web. But ICANN
 - and unreliable technology - blocks ULDs (User Level Domains).
To be honest, none of those networks are really large compared to the
Internet, either in terms of users or, especially, of bandwidth compared
to some of the large providers.
I agree. But I fail to see how it relates to the point?

My point is that SWIFT should have been able to become .swift for a very
long time. That .bank was denied to the World Bank Association, and that SITA
was given a try with .aero.

So we can technically compare the capacity of the Internet to support the needs
of a very, very old network like SITA. It does not seem to be very appealing to
the air transportation community. I never saw any ad for aerolinas.aero yet,
despite the mnemonic interest.

And, yes, OSI brought 

Re: national security

2003-12-04 Thread jfcm
At 09:21 03/12/03, Kurt Erik Lindqvist wrote:
I agree and realize this. However, let's take that argument out in the
open and not hide it behind national security.
I regret such aggressiveness. I simply listed suggestions I collected, to
ask for warnings, advice, and alternatives to problems identified not from
inside the Internet but from outside. It was labelled a topic of national
security because it was to prepare a meeting on national vulnerability to the
Internet. If it had been about Web Information and Services Providers, or User
Networks demands, it would have been the same.

I expected warnings, advice, alternative propositions. If you need a long
discussion among specialists to come up with that, please do. I am only
interested in an authoritative outcome. And we will all thank you for that.
jfc









Re: IPv6 addressing limitations (was national security)

2003-12-04 Thread jfcm
Dear Masataka,
my interest in this is national security. I see a problem with IPv6 in two
areas.

1. The 001 numbering plan is inadequate to national interests, for reasons
related to digital sovereignty, e-territory organization, law enforcement,
security and safety, etc. (I do not discuss their degree of relevance,
just their existence).

2. The Y2K syndrome. IPv6 has 6 potential numbering plans. Launching it into
real production without certifying it, and the equipment, for multiple-plan
support is unacceptable. Once there are millions on IPv6
without large-scale testing of multiple numbering plan support, and
many ways of use, applications etc. developed with one single actually used
plan in mind, no one will be able to seriously propose an additional plan.

Comments welcome.
jfc


At 05:20 04/12/03, Masataka Ohta wrote:

Iljitsch;

We need to keep the size of the global routing table in check, which 
means wasting a good deal of address space.
That's not untrue. However, as the size of the global routing table
is limited, we don't need so many bits for routing.
61 bits, allowing 4 layers of routing each with 32K entries, is
a lot more than enough.
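As a rough check of the bit-budget arithmetic quoted above (an editor's sketch, not part of the original post; the 4-layer and 32K-entries-per-layer figures come from the text):

```python
import math

# 4 layers of routing, each with a 32K-entry table (figures from the
# post above). Each layer then needs ceil(log2(32768)) = 15 bits.
entries_per_layer = 32 * 1024
layers = 4

bits_per_layer = math.ceil(math.log2(entries_per_layer))
routing_bits = layers * bits_per_layer

assert bits_per_layer == 15
assert routing_bits == 60  # in the same ballpark as the 61 bits claimed
```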
Even in IPv4, where addresses are considered at least somewhat scarce, a 
significant part of all possible addresses is lost because of this.
Only 20 bits or so for routing is, certainly, no good.

If we want to keep stateless autoconfig and be modestly future-proof we 
need at least a little over 80 bits. 96 would have been a good number, 
but I have no idea what the tradeoffs are in using a broken power of two. 
If we assume at least 96 bits are necessary, IPv6 only wastes  2 x 32 
bits = 8 bytes per packet, or about 0,5% of a maximum size packet. Not a 
huge deal. And there's always header compression.
Stateless autoconfig is a mostly useless feature, applicable only
to hosts within a private IP network, for which 64 bits would have
worked.
128 bits are here to enable the separation of a 64-bit structured ID
and a 64-bit locator.
Masataka Ohta
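The 64-bit locator / 64-bit ID split mentioned above can be illustrated with a small sketch (an editor's illustration, not from the post; the helper name is made up):

```python
import ipaddress

def split_locator_id(addr: str):
    """Split a 128-bit IPv6 address into its high 64 bits (the
    locator, i.e. the routing prefix) and its low 64 bits (the
    structured identifier / interface ID)."""
    n = int(ipaddress.IPv6Address(addr))
    locator = n >> 64
    identifier = n & ((1 << 64) - 1)
    return locator, identifier

loc, ident = split_locator_id("2001:db8::1")
assert ident == 1
assert loc == int(ipaddress.IPv6Address("2001:db8::")) >> 64
```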








RE: Ietf ITU DNS stuff

2003-12-04 Thread Mike S
At 10:45 AM 12/4/2003, Steve Silverman wrote...
The Internet is _in part_ an intellectual construction but so is
the telephone network. 

I disagree.

It doesn't do much without a physical implementation.

Cognitive thought doesn't exist without a brain. That doesn't mean that thought is 
only _in part_ an intellectual construction. :-)

Whatever rights you, I, or anyone else may think are
inalienable, in many parts of the world, the only rights anyone has
are what the
government allows. I'm not saying I like this, but as a practical
matter,
if the government controls the switches and can throw people in jail
(or simply shoot them), it can
also restrict what is implemented on the network equipment.

Many governments have over time attempted to control thought and personal speech, and 
none has been successful for any extended period of time. The Internet is no 
different, as it is easily re-configured and is by design self-healing. 






RE: Ietf ITU DNS stuff

2003-12-04 Thread Steve Silverman
The Internet is _in part_ an intellectual construction but so is
the telephone network. It doesn't do much without a physical
implementation.
Whatever rights you, I, or anyone else may think are
inalienable, in many parts of the world, the only rights anyone has
are what the
government allows. I'm not saying I like this, but as a practical
matter,
if the government controls the switches and can throw people in jail
(or simply shoot them), it can
also restrict what is implemented on the network equipment.

Steve Silverman

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]
 Behalf Of Mike S
 Sent: Thursday, December 04, 2003 9:18 AM
 To: [EMAIL PROTECTED]; Dean Anderson
 Subject: Re: Ietf ITU DNS stuff


 At 07:30 PM 12/3/2003, Dean Anderson wrote...
 There are, though, good reasons to have some government controls on
 telecom.  Whether these controls are too excessive or too
 lax is not up to
 ICANN or the ITU.  I can think of cases where some good has
 come of it.
 E911, for example. Radio, TV, cellphone allocations. Ham
 Radio licences.
 If license-free wireless operation weren't restricted in
 power, few people
 would be able to use 802.11 because one company would be
 broadcasting at
 hundreds of watts, etc.

 None of what you mention is even remotely comparable to the
 Internet. RF spectrum is a naturally shared, limited
 medium. Because physical law cannot be changed, manmade
 laws must be used to regulate it for efficient use.

 No such case can be made for the Internet, which is not
 bounded in either bandwidth or number of connections in any
 practical sense. It is also not something which can be
 subjected to any sort of control, as it is not a thing.
 The Internet is strictly an intellectual construct, nothing
 more. There is nothing physical or real to control. It's a
 bunch of network operators who have agreed to interconnect
 using agreed-upon protocols.

 Sure, some governments can try to control some of the
 physical media which the Internet makes use of, but getting
 around that is simply a matter of reconfiguration.









Re: Ietf ITU DNS stuff

2003-12-04 Thread jfcm
At 15:17 04/12/03, Mike S wrote:
Sure, some governments can try to control some of the physical media which 
the Internet makes use of, but getting around that is simply a matter of 
reconfiguration.
Dear Mike,
I am only interested in technical issues here. You may realize that the
very possibility of what you say is a concern for every user. Who is to
carry out that reconfiguration? Under which authority? With which responsibility
for the consequences? For the people who may be hurt or die from it? For the
economies which can be hurt? Will that reconfiguration at least be signed?

You may realize that the hacking you describe is an act of terrorism if you
perform it as a private citizen, and an act of war if you are ordered to
carry it out.

My interest is in hearing about the ways:
- to prevent, address and correct such unconsidered moves through practical
patches of any nature (people to protect);
- to study the solutions to prevent them, to make them impossible.

Why? Because no one can take seriously a technology/a system whose
designers may write what you write and be technically approved by its
technical community. And we all know you are right.

I am neither in favor of the ITU nor against the ITU. I am for my secure, safe,
stable, innovative use of my own network resources, in a consensual way,
with you and others. And I am strongly in favor of an ITU-I, to be shaped
by us in common, should the ITU be involved. But the first role of the ...
1867 ITU is to make sure you can use your phone and TV even with
countries at war. The same as the Postal Union's is to make sure that you can
send mail to countries at war. When I see the size of my spam junker in
peace time, I am not sure the Internet currently has the same kind of
solution. I am not sure the Internet continued operating in Iraq.

Is there a technical way against spam, for example? All I see here is
"please, call in the law"... But law is not the USG outside of the USA. Law
is necessarily ITU. Because law is States, and for 136 years States have used
the ITU to address/fight their communications-related issues. The 190 of them.
That helps and serves the citizens of every one of them. You would be surprised
to learn that the ITU (Ambassador's Lounge) is probably the place in the world
where the most open or undercover military actions were decided ... to free
the telegraph, the telex, the telephone during wars and revolutions, for 136
years. For you to be able to use them as much as possible, 24/365.

Again, I am not interested in political comments, but in technical
responses about technical or architectural ideas to make sure the Internet
cannot be used as the terrorist bomb you describe. Every idea welcome.
jfc




Re: national security

2003-12-04 Thread Kurt Erik Lindqvist

 I agree and realize this. However, let's take that argument out
 in the open and not hide it behind national security.

 I regret such aggressiveness. I simply listed suggestions I
 collected, to ask for warnings, advice, and alternatives to problems
 identified not from inside the Internet but from outside.

Why don't you simply go inside and find out? There is nothing like 
first hand knowledge!

 I was labelled a topic of national security because it was to
 prepare a meeting on national vulnerability to the Internet. If it had
 been about Web Information and Services Providers, or User Networks
 demands, it would have been the same.

I know a number of countries that have looked at this from a national 
perspective. None of them have argued that the ITU is the solution. On 
the contrary, the distributed control of the Internet is a good value.

 I expected warnings, advice, alternative propositions. If you need a
 long discussion among specialists to come up with that, please do. I am
 only interested in an authoritative outcome. And we will all thank you
 for that.

What the collective Internet thinks is documented largely through the
IETF process, or related organizations. I think that the issues you are
trying to raise have already been answered, at any point in history, as a
reflection of the then-current set of standards.

- kurtis -





Re: Ietf ITU DNS stuff

2003-12-04 Thread John C Klensin


--On Thursday, 04 December, 2003 18:29 +0100 jfcm 
[EMAIL PROTECTED] wrote:

...
Is there a technical way against spam for example? All I see
here is please, call in the lawBut law is not the USG
outside of the USA. Law is necessarily ITU. Because Law is
States and for 136 years States use ITU to address/fight their
communications related issues. The 190 of them.
Jefsey,

ITU-T is quite insistent that they make _Recommendations_ only.
Interpretation and enforcement are up to each individual
government.  They also insist that they have never tried, and do not
intend to try, to extend or interpret the provisions of the
radio frequency treaties that permit the Radio Bureau (and
WARC) to make binding regulations about, e.g., frequency use, so
as to cover telephony, the Internet, or similar topics.

That is in sharp contrast to your "Law is necessarily ITU"
assertion... sharp enough that, if logic is applied, there are
only two possibilities:

(1) The senior ITU personnel who fairly regularly make those 
statements are trying to obscure their real power and plans, if 
not outright lying about them.  If that were true --and, for the 
record, I don't believe it is-- it would be irrational to trust 
them with the Internet or anything else.

(2) You are speaking nonsense, to the extent that it is probably 
irrational for any of us to continue reading or responding to 
your messages.

   john






Re: national security

2003-12-04 Thread Kurt Erik Lindqvist

  The post-KPNQwest updates are a good example of what Govs do not
 want
  anymore.
 I can't make this sentence out. Do you mean the demise of KPNQwest?
 In that case, please explain. And before you do: I probably know more
 about KPNQwest than anyone else on this list, with a handful of
 exceptions that were all my colleagues doing the IP Engineering part
 with me. Please go on...

 I am referring (post-KPNQwest) to the reference management lesson
 ICANN gave concerning root management when the 66 ccTLD secondaries
 supported by KPNQwest were to be updated. No one at many
 ccTLDs, and Govs, will forget it.

I was there when KPNQwest went down. I think I have concluded that what
you are referring to was a machine called ns.eu.net. That machine has a
history that goes back to the beginning of the Internet in Europe.
Through mergers and acquisitions it ended up on the KPNQwest network.
It was secondary for a large number of domains, including ccTLDs. When
KPNQwest went down, the zone content and address block were transferred to
the RIPE NCC. As far as I can tell it is still there. TLDs were asked to
move away from the machine over time.

As a matter of fact, several studies the year before KPNQwest went
down pointed out the problem with having all the world's TLDs using
just a few machines as slave servers. However, the DNS is designed to
work fine even with one slave not reachable. So even if ns.eu.net had
gone off-line abruptly, which it never did, people had, and
apparently still have, plenty of time to move. I think this incident
clearly shows the robustness of the current system, more than anything
else.
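The point above, that the DNS survives one slave being unreachable, can be sketched as simple failover across a nameserver list (an editor's toy illustration; the server names and the query function are made up):

```python
# Toy sketch of resolver failover: try each listed nameserver in turn,
# skipping any that are dead. Losing one slave is survivable as long
# as at least one other server still answers.

def resolve(nameservers, query_fn):
    """Try each nameserver until one answers; raise if all fail.
    query_fn(server) returns an answer or raises OSError."""
    last_error = None
    for server in nameservers:
        try:
            return query_fn(server)
        except OSError as e:
            last_error = e  # dead slave: fall through to the next one
    raise last_error or OSError("no nameservers configured")

# Simulate ns.eu.net being unreachable while a second slave answers.
def fake_query(server):
    if server == "ns.eu.net":
        raise OSError("timeout")
    return "192.0.2.1"

assert resolve(["ns.eu.net", "ns2.example.net"], fake_query) == "192.0.2.1"
```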


 I just fail to see this. What is it with the ITU that will give us

a) More openness? How do I as an individual impact the ITU process?

 This is not the topic (I come initially from a national point of view),
 and not to discuss but to listen.

 But this is also a separate IETF issue. As someone deeply involved for years
 in @large issues (ICANN), and far longer in political, public, corporate
 technology-development network issues, and having shared for some
 years in the ITU process (at that time CCITT), I think I will say
 "Yes".

 1. As a user I have no impact on IETF or ICANN. I do not even get heard.

IETF and ICANN in this prospect are two completely different 
organizations and processes. In IETF, you are making yourself heard. 
Quite a lot actually.

 2. But (and with a big "but": until ITU adapts and creates an "I"
 sector for us) ITU has the structures and procedures (Focus Groups and
 Members-called meetings) to do just that.

 You may have studied/shared in the WSIS and observed the way it works?

It certainly doesn't strike me as open, at least. I have read the
following: http://www.itu.int/wsis/participation/accreditation.html.
An organization where I have to apply for accreditation doesn't sound
open to me. Actually I am not even sure what WSIS expects as input. To
me it seems a forum for governments to be seen, with the hope that
they will have a forum where they can raise issues to other governments.

What I am missing is a) the input of the professionals and b) how they
expect to use any eventual output.

Again, I fail to see what the ITU process gives that has a clear
advantage over the current IETF process. And as said, there are also
governments who have come to understand this and learnt how to deal
with the IETF process at the same time as making contingency plans.

b) More effectiveness and a faster adoption rate?

 Probably yes. For a simple reason: the Internet is just another technology
 to support users' data communications needs. I may find faster, better,
 parallel solutions elsewhere. Competition fosters speed and quality,
 or death. As a user I am Darwinian about the processes I use.

So you are saying that the ITU will provide better standards at faster
speed? That has most certainly not been the case before...

c) A better representation of end-user needs?

 Certainly. This is a recurring issue. Quote me the way IETF listens to
 end-users' needs. I have been flamed enough as a user representative to
 know it. And don't tell me "who do you represent?" or I will bore
 everyone responding. This thread shows it. As a user I raised a question.
 Responses:

The IETF makes decisions by rough consensus. If you have a point that 
is valid enough, you will get enough people to support you. If not, 
life goes on.

 - the question is disputed. I learned long ago that questions are never
 stupid, but responses may be.

No, but the question might tell a lot about who you are and what your 
motives are.

 - a question asked back to me: who are you? I appreciate that you may
 warn me about KPNQwest to spare us a troll's response. But I wonder why
 the author would have any impact on a new question.

Knowing people's backgrounds is always helpful in understanding a
discussion.

 I agree. But I fail to see how it

Re: Ietf ITU DNS stuff III

2003-12-04 Thread Franck Martin




It has always struck me that a programme as popular as BBC Click Online never showed up at an ISOC (INET) or IETF meeting, but went instead to the meetings where the Internet is "made" (Internet World, CeBIT, ...)

Cheers

On Fri, 2003-12-05 at 01:14, Dan Kolis wrote:



So... the big contracts are pulled. Nowadays, the civilian pull is pretty
big, so this isn't a full stop. I mean, Linksys cares far more about what the
buyer thinks at Wal-Mart than about the D.O.D.

But at some level, this (proposed) string-pulling will hurt network advancement.

So it's worth developing a paid ad campaign, but hopefully most if not all
of the media should be on the web itself. Of course, a paper sack of unmarked
bills always helps when dealing with professional politicians.

This is totally a hardcore "I told you so" issue. I hope I'm wrong, but if
it plays out badly you will think: "Dad-burn-it! He was right back in 2003!"

Regards,
Dan





Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
Toute connaissance est une reponse a une question G.Bachelard








Re: IPv6 addressing limitations (was national security)

2003-12-04 Thread Masataka Ohta
Anthony G. Atkielski;

Unlimited? The limitation on the public part is 20 digits.

That's just a matter of programming these days.
On the Internet these days, it is a matter of hardware.

Ad hoc extension beyond the hardware-supported length
at that time will fatally hurt performance.

What hardware limits numbers to 20 digits today?
On pseudo packet networks, such as X.25 or ATM, which are full of
connections, packets are forwarded by hardware using short
connection IDs, while E.164 numbers are used only at the time of
complex signalling, processed by software.
		Masataka Ohta
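The hardware/software split described above can be sketched as label swapping (an editor's toy illustration, not from the post; function and port names are made up): signalling software parses the variable-length E.164 number once and installs a fixed-width connection ID, so the per-packet fast path is a single exact-match lookup.

```python
import itertools

# Global counter handing out short, fixed-width connection IDs (labels).
_labels = itertools.count()

def setup_connection(label_table, e164_number, route_fn):
    """Slow path (software signalling): parse the variable-length
    number, pick an output port, install a fixed-width label."""
    port = route_fn(e164_number)
    label = next(_labels)
    label_table[label] = port
    return label

def forward(label_table, label):
    """Fast path (hardware analogue): one exact-match table lookup
    per packet, independent of the number's length."""
    return label_table[label]

table = {}
label = setup_connection(table, "442079460000", lambda num: "port-uk")
assert forward(table, label) == "port-uk"
```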





Re: Ietf ITU DNS stuff III

2003-12-04 Thread Franck Martin




On Fri, 2003-12-05 at 01:05, jfcm wrote:

On 06:27 04/12/03, Paul Vixie said:
there's plenty to worry about wrt the big boys controlling things, but the 
internet is definitionally and constitutionally uncontrollable.  yay!

This seems untrue in operational terms, if I consider the USG's relations
with the networks.
It sounds to me like talking about a serial killer, if you consider the
impact of the Internet on real people's lives.

I am afraid it is also technically extremely confusing. The missing subjects
and missing URLs in
http://www.iab.org/documents/resources/architectural-issues.html say a lot
to non-internauts trying to understand it. Unless you might have a better
focal portal?

Right now, many Governments use http://whitehouse.gov/pcipb as an entry
point into Internet issues.


And they are WRONG!

Once again, they deal with the Internet in the wrong forum. They are trying to deal with issues with people who have no power, or at best proxy power, over how the Internet is made. It is a waste of resources.

The old schema of political organisation and telecommunication is being challenged by the Internet, and people try to hold onto it, the way they hold onto their monopolies or their offices.

If there were in the Internet a sense of alter-globalisation, I would not be surprised. Surprisingly too, it is the most communist project ("the Internet is for Everyone") that ever came out of the USA.

So we had better spend our energies explaining to traditional structures where the decisions are made, by whom, and why... Get them a plane ticket to the next IETF, INET, IAB, ICANN meeting, and stop moving this discussion to places where decisions cannot be implemented... (Don't ask a bus driver to change the traffic lights...)

Cheers







Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
Toute connaissance est une reponse a une question ("All knowledge is an answer to a question") G.Bachelard








Re: national security

2003-12-04 Thread Franck Martin




On Fri, 2003-12-05 at 09:00, Kurt Erik Lindqvist wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

  The post KPNQwest updates are a good example of what Govs do not 
 want
  anymore.
 I can't make this sentence out. Do you mean the demise of KPNQwest?
 In that case, please explain. And before you do: I probably know more 
 about KPNQwest than anyone else on this list, with a handful of 
 exceptions that were all my colleagues doing the IP Engineering part 
 with me. Please go on...

 I am referring (post KPNQwest) to the reference management lesson 
 ICANN gave concerning root management when the 66 ccTLD secondaries 
 supported by KPNQwest were to be updated. No one at many ccTLDs, 
 and Govs, will forget.

I was there when KPNQwest went down. I think I have concluded that what 
you are referring to was a machine called ns.eu.net. That machine has a 
history that goes back to the beginning of the Internet in Europe. 
Through mergers and acquisitions it ended up on the KPNQwest network. 
It was secondary for a large number of domains, including ccTLDs. When 
KPNQwest went down, the zone content and address block were transferred 
to the RIPE NCC. As far as I can tell it is still there. TLDs were asked 
to move away from the machine over time.

As a matter of fact, several studies in the year before KPNQwest went 
down pointed out the problem with having all the world's TLDs using 
just a few machines as slave servers. However, the DNS is designed to 
work fine even with one slave not reachable. So even if ns.eu.net had 
gone off-line abruptly, which it never did, people got, and 
apparently still have, plenty of time to move. I think this incident 
clearly shows the robustness of the current system, more than anything 
else.
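The failover behaviour described above, where a resolver simply walks the NS list until some slave answers, can be sketched as follows. This is illustrative only, not real resolver code; the names and the query function are made up.

```python
# A stub resolver that tries each listed nameserver in turn, so one dead
# slave (like ns.eu.net going silent) only costs a retry, not an outage.

def resolve(name, nameservers, query_fn):
    """Return the first answer any nameserver gives; query_fn returns
    an answer string or None for an unreachable/silent server."""
    for ns in nameservers:
        answer = query_fn(ns, name)
        if answer is not None:
            return answer
    raise RuntimeError("all nameservers failed for " + name)

# Simulated zone with one unreachable slave:
def fake_query(ns, name):
    if ns == "ns.eu.net":        # the dead server
        return None
    return "192.0.2.1"           # any live slave answers

print(resolve("example.tld", ["ns.eu.net", "ns2.example.net"], fake_query))
```

Service only fails when every listed slave is down at once, which is the redundancy argument Kurt is making.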


There are now organisations installing root servers in all countries that want one. If you are operating a ccTLD, you may want to have a root server sitting next to your machines, so that if the national Internet link goes down (something major, but not impossible, when many countries have only one link to the Internet) the system still works for all the national domain names...

This is not a very well known fact, and I stumbled upon it recently after wanting to complain that root servers were only in developed countries.

Oh, by the way, to install a root server any PC will do; it is not something difficult, as it carries only a couple of hundred records (200 countries and a few gTLDs), not the millions of a .com.

Cheers




Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
Toute connaissance est une reponse a une question ("All knowledge is an answer to a question") G.Bachelard








Re: Ietf ITU DNS stuff

2003-12-04 Thread Masataka Ohta
John C Klensin;

ITU-T is quite insistent that they make _Recommendations_ only. 
W.r.t. enforcement, ITU-T makes standards, regardless of whether
they are called recommendations or requests for comments.
Interpretation and enforcement is up to each individual government.
No. The WTO agreement helps a lot for governments to enforce ITU standards.

			Masataka Ohta





Re: IPv6 addressing limitations (was national security)

2003-12-04 Thread Masataka Ohta
jfcm;

Dear Masataka,
my interest in this is national security. I see a problem with IPv6 in 
two areas.
Only two?

IPv6 contains a lot of unnecessary features, such as stateless
autoconfiguration, and is too complex to be deployable.
Comments welcome.
As it is too complex, it naturally has a lot of security problems.

I'm not surprised some of them are national ones.

			Masataka Ohta





RE: An apology of sorts

2003-12-04 Thread Tomson Eric \(Yahoo.fr\)
Almost perfect: they averaged the US and the EU propositions (32 and 64
bytes) for the data, giving 48 bytes, and then added 5 bytes for the
header: 48 + 5 = 53 bytes.
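The arithmetic behind the 53-byte cell, as described above, in a trivially checkable form (the variable names are mine; the source does not say which side proposed which size, so the two proposals are kept generic):

```python
# The two competing ATM payload proposals were 32 and 64 bytes.
proposal_a, proposal_b = 32, 64
payload = (proposal_a + proposal_b) // 2   # split the difference: 48 bytes
header = 5                                 # cell header
cell = payload + header
print(cell)   # 53
```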

E.T.

=-Original Message-
=From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
=Behalf Of Dan Kolis
=Sent: Thursday, 4 December 2003 14:20
=To: [EMAIL PROTECTED]
=Subject: An apology of sorts
=
=
=Hi
=
=Did you ever hear why ATM got 53-byte cells? They just 
=averaged a bunch of competing proposals.





Re: Ietf ITU DNS stuff

2003-12-04 Thread grenville armitage

Mike S wrote:
[..]
 Many governments have over time attempted to control thought and personal speech,
 and none has been successful for any extended period of time.

OT, but in my more cynical moments i'm inclined to think govt (societal) control of
thought and speech has been far more common throughout human history than the 
alternative.

(insert ob. in the long run we're all dead)

gja



Re: national security

2003-12-04 Thread Franck Martin




On Fri, 2003-12-05 at 12:16, Suzanne Woolf wrote:

On Fri, Dec 05, 2003 at 10:44:00AM +1200, Franck Martin wrote:
 There are now organisations installing root servers in all countries
 that want one. If you are operating a ccTLD, you may want have sitting
 next to your machines a root server, so if the national Internet link
 goes down (something major but not impossible when many countries have
 only one link to the Internet) the system still works for all the
 national domain names...

We (ISC) are widely anycasting f.root-servers.net. Several of the
other operators of root nameservers have begun to anycast their
servers as well, or announced plans to do so.

Is this what you meant? If not, could you elaborate?


Yes, this is what I meant.


 This is a not a very well known fact, and I stumbled upon it recently
 after wanting to complain that root servers where only in developed
 countries.

It's hard to quantify what developed means in this context. Our
anycast f-root systems, for example, do need some infrastructure
around them in order to be useful, but we have anycast clusters in
over a dozen locations, most outside of the G8. See
f.root-servers.org.

Well, just use the UN's LDC index if you are in doubt, but we are not here in any contest... Outside the G8 is something. Yes, they do need some infrastructure that you may not find in a developing country... but then see my last point...


 Oh, btw to install a root server, any PC will do, it is not something
 difficult as it carries only a couple of hundred records (200 countries
 and a few gTLDs), not the millions of a .com.

Operationally, this is a dangerous half-truth. It may be the case that
you can run a nameserver that believes it is authoritative for the
root zone and will answer for it in this way. But under real world
conditions (significant numbers of queries, possibility of DDoS or
other attack, etc.) this is far from adequate.


This is not a dangerous half-truth; it has to be demystified. Let's take the example of a country like Tonga. A simple PC will do for them, because the number of Internet users there is maybe about 1000 people. With anycast properly set up, only the packets of that country will reach the local root-server (proximity), so it is unlikely to be under heavy load with 1000 people on the Internet there...
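The proximity claim above can be sketched as a toy model: with anycast, several sites announce the same address, and each client's routing delivers to whichever announcing site is "closest". The site names and hop counts here are invented for illustration; real anycast selection is done by BGP path selection, not by code like this.

```python
# Toy model of anycast proximity: the same service address is announced
# from several sites; routing hands each query to the nearest announcer,
# so a Tongan client's root queries stay in Tonga.

SITES = {"tonga": 1, "fiji": 3, "palo-alto": 9}   # hops from a Tongan client

def anycast_target(hops_to_sites: dict) -> str:
    """Routing delivers to the announcing site with the shortest path."""
    return min(hops_to_sites, key=hops_to_sites.get)

print(anycast_target(SITES))   # tonga
```

This is also why Paul Vixie's caveat below matters: if routing leaks, the "closest" site for far-away clients can suddenly become the small local one.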

Finally, before a root-server is installed somewhere, someone will do an assessment of the local conditions and tailor it adequately. I want countries to request installation of root servers, and I know about 20 Pacific Islands countries that need root-servers in case their Internet link goes dead.

cf www.picisoc.org if you want to join us...


thanks,
Suzanne



Suzanne Woolf+1-650-423-1333
Senior Programme Manager, ISC		

		** Fortune favors the prepared mind **





Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9 D9C6 BE79 9E60 81D9 1320
Toute connaissance est une reponse a une question ("All knowledge is an answer to a question") G.Bachelard








Re: national security

2003-12-04 Thread Dean Anderson
On 5 Dec 2003, Paul Vixie wrote:

 my experience differs.  when a root name server is present it has to be
 fully fleshed out, because if it isn't working properly or it falls over
 due to a ddos or if it's advertising a route but not answering queries,
 then any other problem will be magnified a hundredfold. 

It depends on the problem profile you are working to avoid.

 doing root name service in a half-baked way is much worse than not doing
 it at all, since over time the costs of transit to a good server will be
 less than the costs of error handling and cleanup from having a bad one.

This could be true, but irrelevant: it depends on the costs of transit and 
cleanup. Transit to a remote island could be very expensive, while labor 
to clean up any problems might be very cheap.

 moreover, your statement only the packet(s) of that country will reach the
 local root server is presumptive.  

Not necessarily.  If the country operates a root server that is only
accessible from that country, that is, preloaded only into the caches
of that country's nameservers, then the 'presumption' is true.  The list
of root nameservers is determined by the lists that are pre-loaded into
other nameservers, not by the 'dig . ns' query on a real root.  You could
have hundreds of root slaves, but only a small number of truly global root
servers, without any problems at all.  This would probably be a good thing 
for the global servers.  

 under error conditions where transit is leaked, such a server could end
 up receiving global-scope query loads. in our current
 belt-and-suspenders model, we (f-root) closely monitor our peer routing,
 AND we are massively overprovisioned for expected loads, since a ddos
 or a transit-leak can give us dramatically unexpected loads.

This is a feature that is specific to your anycast setup.  Simpler,
non-anycast setups wouldn't have this problem.

 if you know someone who is willing to provision a root name server without
 a similar belt and similar suspenders, then please tell them to stop.

Are all the roots doing anycast?  I've run private roots without any
problems, and have experienced significant improvements for doing so. (see
below)

 on a connectivity island (which might be in the ocean or it might just be
 a campus or dwelling), the way to ensure that local service is not disrupted
 by connectivity breaks is to make your on-island recursive name servers
 stealth slaves of your locally-interesting zones.  in new zealand for
 example it was the custom for many years to use a forwarding hierarchy
 where the nodes near the top were stealth slaves of .NZ, .CO.NZ, etc.
 that way if they lost a pacific cable system they could still reach the
 parts of the internet which were on the new zealand side of the break.

This assumes that you are mixing authoritative and caching nameservers, 
something that many people (including you) advise against.

Operating a root nameserver is much easier.  Obviously, in the case of an 
island or small country that has only one connection, or perhaps one 
network center, a DDoS that affects the local root is going to affect all 
connectivity. Their only option may be to drop connectivity.  Actual war 
could have the same impact, due to a broken communications line. A local 
root in each country is probably a good idea.

I've also found that, when setting up non-connected laboratory networks, it
is better to have a 'lab root' server that acts like a root, since
machines in the lab can't access the real root servers.  This greatly enhances
performance in the case where a wrong, or just non-lab, domain name is typed
in, since you can get an NXDOMAIN back right away instead of waiting for a
timeout as the root servers are tried.




Re: national security

2003-12-04 Thread jfcm
Paul,
1. All this presumes that the root file is in good shape and has not been 
tampered with.
How do you know the data in the file you disseminate have not been 
polluted or changed?
2. Where is the best documentation - from your own point of view - of a 
root server organization?
thank you
jfc

At 02:53 05/12/03, Paul Vixie wrote:
On Fri, Dec 05, 2003 at 10:44:00AM +1200, Franck Martin wrote:
 Oh, btw to install a root server, any PC will do, it is not something
 difficult as it carries only a couple of hundred records (200 countries
 and a few gTLDs), not the millions of a .com.
On Fri, 2003-12-05 at 12:16, Suzanne Woolf wrote:
 Operationally, this is a dangerous half-truth. It may be the case that
 you can run a nameserver that believes it is authoritative for the
 root zone and will answer for it in this way. But under real world
 conditions (significant numbers of queries, possibility of DDoS or
 other attack, etc.) this is far from adequate.
[EMAIL PROTECTED] (Franck Martin) writes:
 This is not a dangerous half-truth, It has to be demystified. Let's take
 the example of a country like Tonga. A simple PC will do for them because
 the number of Internet Users there is may be about a 1000 people. With
 anycast properly set up only the packet of that country will reach the
 local root-server (proximity), so it is unlikely to be under heavy load
 with a 1000 of people on the Internet there...
my experience differs.  when a root name server is present it has to be
fully fleshed out, because if it isn't working properly or it falls over
due to a ddos or if it's advertising a route but not answering queries,
then any other problem will be magnified a hundredfold.  doing root name
service in a half-baked way is much worse than not doing it at all, since
over time the costs of transit to a good server will be less than the costs
of error handling and cleanup from having a bad one.
moreover, your statement only the packet(s) of that country will reach the
local root server is presumptive.  under error conditions where transit
is leaked, such a server could end up receiving global-scope query loads.
in our current belt-and-suspenders model, we (f-root) closely monitor our
peer routing, AND we are massively overprovisioned for expected loads,
since a ddos or a transit-leak can give us dramatically unexpected loads.
if you know someone who is willing to provision a root name server without
a similar belt and similar suspenders, then please tell them to stop.
 Finally before a root-server is installed somewhere, someone will do an
 assessment of the local conditions and taylor it adequately. I want
 countries to request installation of root servers, and I know about 20
 Pacific Islands countries that need root-servers in case their Internet
 link goes dead.

 cf www.picisoc.org if you want to join us...
on a connectivity island (which might be in the ocean or it might just be
a campus or dwelling), the way to ensure that local service is not disrupted
by connectivity breaks is to make your on-island recursive name servers
stealth slaves of your locally-interesting zones.  in new zealand for
example it was the custom for many years to use a forwarding hierarchy
where the nodes near the top were stealth slaves of .NZ, .CO.NZ, etc.
that way if they lost a pacific cable system they could still reach the
parts of the internet which were on the new zealand side of the break.
using a half-baked root-like server to do the same thing would be grossly
irresponsible, both to the local and the global populations.
note that f-root, i-root, j-root, k-root, and m-root are all doing anycast
now, and it's likely that even tonga would find that one or more of these
rootops could find a way to do a local install.  (c-root is also doing
anycast but only inside the cogent/psi backbone; b-root has announced an
intention to anycast, but has not formally launched the programme yet.)




Re: national security

2003-12-04 Thread Franck Martin
On Fri, 2003-12-05 at 15:32, jfcm wrote:
 Paul,
 1. all this presumes that the root file is in good shape and has not been 
 tampered.
  How do you know the data in the file you disseminate are not polluted 
 or changed?
Because somebody will complain... ;)



Franck Martin
[EMAIL PROTECTED]
SOPAC, Fiji
GPG Key fingerprint = 44A4 8AE4 392A 3B92 FDF9  D9C6 BE79 9E60 81D9 1320
Toute connaissance est une reponse a une question ("All knowledge is an answer to a question") G.Bachelard



Re[2]: IPv6 addressing limitations (was national security)

2003-12-04 Thread Anthony G. Atkielski
Masataka Ohta writes:

 On the Internet these days, it is a matter of hardware.

And the hardware is a matter of firmware.






Re: SMTP compressed protocol...

2003-12-04 Thread John C Klensin
--On Friday, 05 December, 2003 15:29 +1200 Franck Martin 
[EMAIL PROTECTED] wrote:

While talking about HTML in e-mail messages that consume a lot
of bandwidth...
Why do SMTP servers not negotiate to send an 8bit compressed
stream between themselves, the same way HTTP negotiates a
compressed stream between client and server if the client has
the capability?
When the relevant WG looked briefly at the question a _long_ 
time ago (in Internet years), the conclusion was that it wasn't, 
in general, worth the trouble.  Or, if you prefer, worth the 
trouble often enough to justify the effort.   That conclusion 
was conditioned, if I recall, by the combination of several 
things:

(1) At the time, there was no obvious compression algorithm 
(given IPR encumbrances, etc.) and standards-conforming 8bit 
transport, having just been defined, was obviously not widely 
deployed.   That second condition has obviously changed.

(2) Relaying complicates everything with SMTP, since one could 
not guarantee that a negotiation was going on between sending 
and recipient machines.  If one had to compress between sender 
and initial relay, then have the relay decompress (and maybe 
recompress using a different algorithm) in order to pass the 
message along, it might cancel out any possible efficiencies.

(3) Some message body parts (usually attachments), especially 
large ones, are already compressed (e.g., zip files or 
equivalent) or not effectively compressible (e.g., almost 
anything encrypted), reducing the value of message transport 
compression techniques.  My very subjective and anecdotal 
impression is that fewer people are routinely compressing 
attachments these days, possibly because MUAs have gotten better 
at building attachments on a one-click basis that doesn't 
provide for compression at attachment time.  Similarly, I 
suspect that the amount of material that is being mailed that 
has very low information density per number of bits transmitted 
(HTML being only one example) is on the rise.   But others may 
have different experience and impressions.
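Klensin's point (3) is easy to demonstrate: highly redundant text compresses well, but data that has already been compressed barely shrinks a second time. A minimal experiment using Python's standard zlib (the sample text is made up):

```python
# Text compresses well; already-compressed data does not compress again.
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 200
compressed_once = zlib.compress(text)

ratio_text = len(compressed_once) / len(text)
ratio_again = len(zlib.compress(compressed_once)) / len(compressed_once)

print(round(ratio_text, 2))    # well under 1.0: big win
print(round(ratio_again, 2))   # near (or above) 1.0: no win at all
```

Encrypted payloads behave like the second case, which is why transport-level compression of messages full of zip files or ciphertext buys little.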

(4) Operationally, the most important requirement for 
compression arises between the endpoints of a slow and/or 
expensive and/or intermittent point to point link (at SOPAC, you 
are probably very familiar with those).  For those situations, 
usually the right thing to do is to (i) set up MX records that 
force mail to end up on the top end of such a link, (ii) have 
that machine aggregate an entire data stream, presumably 
consisting of several messages, (iii) compress that stream and 
send it via some sort of batch SMTP or local protocol, (iv) 
decompress and disaggregate at the far end and either go back to 
SMTP or just distribute the results into a mail store, as 
appropriate.  That model compresses not only the message content 
but also the headers and envelope and, more important in many 
cases, eliminates all of the per-message command-reply 
turnarounds (which Pipelining merely reduces).   Of course, that 
approach has been in use over such links for years and years, 
starting long before the first discussions that led to MIME and 
8bit email transport.
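The batching advantage in point (4), that many small messages share header boilerplate which only compresses away when the stream is compressed as a whole, can be shown with a small experiment. The message contents below are invented; only the size comparison matters:

```python
# Compressing a batch of messages as one stream beats compressing each
# message alone, because the repeated Received:/From:/To: boilerplate
# compresses across message boundaries.
import zlib

messages = [
    (
        "Received: from relay.example (relay.example [192.0.2.%d])\r\n"
        "From: user%d@example.org\r\nTo: list@example.org\r\n\r\n"
        "message body %d\r\n" % (i, i, i)
    ).encode()
    for i in range(50)
]

per_message = sum(len(zlib.compress(m)) for m in messages)
batched = len(zlib.compress(b"".join(messages)))

print(batched < per_message)   # True
```

This is the quantitative reason the batch-SMTP-over-a-slow-link approach compresses "not only the message content but also the headers and envelope".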

(5) Finally, any scheme for compressing entire message bodies 
for transport purposes that doesn't also batch the messages 
themselves needs to deal with the inconveniences created by the 
incomplete separation of envelope and headers in SMTP, i.e., the 
fact that MTAs are required to insert trace (Received: and 
ultimately Return-Path:) fields into message bodies without any 
knowledge of what else is going on with the message body, 
including with previously-applied Received: fields.  RFC 1869 
and its predecessors, and then 2821, raised the bar, but RFC 821 
does not explicitly require that message bodies have headers, or 
that those headers be in 822 format, as long as it is possible 
to prepend those trace fields.

Today, I would also worry a bit that compression might turn out 
to be the enemy of various strategies for early interception and 
repelling of spam.  One should at least think about that issue 
when contemplating compression schemes.

All of that said, if you think it would be worthwhile, 
especially after thinking about (4), I'd recommend proposing it. 
You would need to think carefully through the model and 
practical implications of (5) but, otherwise, an appropriate 
ESMTP extension wouldn't be hard to design and write up.  If you 
had such a proposal written up, even in outline, the right place 
to discuss it is probably the ietf-smtp list, hosted at imc.org. 
Only by starting from a specific writeup would you be likely to 
get a good handle on whether the idea would get enough traction 
to be worth pursuing further.

regards,
   john
p.s. Don't you know you aren't supposed to raise technical 
issues on the IETF list?  It might drop the noise to signal 
ratio below infinity, which many of those who seem to post the 
most messages to the list might find very disappointing.   :-(