Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-04 Thread Marc Schneiders
On Mon, 2 Dec 2002, at 10:36 [=GMT-0700], Vernon Schryver wrote:

  From: [EMAIL PROTECTED]
  On Sat, 30 Nov 2002 16:57:20 +0100, Marc Schneiders said:
 
   It would make long domain names of the type
   domainnamebargainscheaper.com obsolete.
 
  Why? unless you manage to get 'cheaper.' as a TLD, and create the name
  as domain.name.bargains.cheaper. - or am I missing something?

 And what's wrong with DomainNameBargainsCheaper.com or
 domain-name-bargains-cheaper.com?
 How would replacing '-' with '.' affect anything?

It would mean you were dependent on the enduring goodwill of the
registrant of cheaper.com to delegate the subdomain to you.
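
To make the dependency concrete, here is a minimal sketch (assuming a
recent version of the third-party dnspython library; the child name
bargains.cheaper.com is purely hypothetical) of how anything under
cheaper.com only resolves through whatever its registrant publishes:

# Illustration only: whoever operates the parent zone (here cheaper.com)
# decides which names underneath it are delegated or answered at all.
# Assumes a recent version of the third-party dnspython package; the
# child name bargains.cheaper.com is hypothetical.
import dns.resolver

parent = "cheaper.com"
child = "bargains.cheaper.com"

# The parent's NS set is whatever its registrant chooses to publish ...
for ns in dns.resolver.resolve(parent, "NS"):
    print(parent, "is served by", ns.target)

# ... so the child only resolves if that registrant keeps a delegation
# (or the records themselves) for it in the parent zone.
try:
    answer = dns.resolver.resolve(child, "A")
    print(child, "->", [rr.address for rr in answer])
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(child, "does not resolve: the parent has not set it up")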

 I've noticed an odd thing while draining my spam traps.  When I see an
 advertised domain name that consists of two or more concatenated English
 words, it's usually Oriental.  I don't necessarily mean hosted in Asia,
 but with non-ASCII content.  It's as if Oriental spammers are smarter
 about creating memorable English domain names and avoiding the squatters.

My observations give a different impression. People from Asia (esp.
Korea) register any two- or three-word .com domain that expires because
a speculator is no longer willing to put money into it. I would say it
proves that they are good scripters.

 Using domains will become
   easier.
 
  Empirical evidence indicates the biggest problem is finding the 1 out of 41M
  .com domains and avoiding all the typosquatters...

 and neither of those has anything to do with the last 4 characters of the
 name.

Well, if there are .shop, .bargain, .free and .web in addition to .biz and
.com, more people can just do business from NAME.TLD and do not have
to go for NAME-corp.TLD, NAME-easy.TLD, NAME-bargain.TLD etc. Try the
following at some dull party perhaps? Test which of the following two
sets can more easily be memorized:

domainbargain.com
domainnamebargain.com
namebargain.com
easydomain.com
easydomains.com

domain.bargain
domain.shop
domain.biz
domain.easy
domain.reg
domains.com (this one to make it not too easy)

-- 
[01] All ideas are vintage not new or perfect.
http://logoff.org/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-04 Thread Marc Schneiders
On Tue, 3 Dec 2002, at 13:49 [=GMT-0600], Stephen Sprunk wrote:
 Thus spake Eric A. Hall [EMAIL PROTECTED]
  on 12/2/2002 11:13 AM Stephen Sprunk wrote:

  4/ high entry fees

Who will get the money?

 Well, that'll certainly be needed, since the root registrar will need a few
 hundred DNS servers to handle the volume of new queries in the root now that
 you've made a flat namespace.

Not long ago it was mentioned here that 98% of all queries to the
root servers are typos etc. (NXDOMAIN replies). If there is a maximum
length of 3 characters, the traffic would not grow that much at all,
esp. since an NXDOMAIN is usually cached for a shorter time than a
positive answer. Whether a root server has to answer ".cim doesn't exist"
or ".cim? go to a, b or c" doesn't make much difference in load.
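
Anyone who wants to check the negative-caching point can do so with a
minimal sketch like the following (assuming the third-party dnspython
library; a.root-servers.net and the .cim label are just examples). It asks
a root server for a nonexistent TLD and prints the SOA from the authority
section, which per RFC 2308 caps how long the NXDOMAIN may be cached:

# Toy check of the claim above: query a root server for a TLD that does
# not exist (".cim") and look at the negative-caching hint it returns.
# Assumes the third-party dnspython package (pip install dnspython);
# a.root-servers.net is just one example of a root server.
import socket

import dns.message
import dns.query
import dns.rcode
import dns.rdatatype

root = socket.gethostbyname("a.root-servers.net")

query = dns.message.make_query("cim.", dns.rdatatype.NS)
response = dns.query.udp(query, root, timeout=5)

if response.rcode() == dns.rcode.NXDOMAIN:
    print(".cim does not exist (NXDOMAIN)")
    # Per RFC 2308, the SOA in the authority section bounds how long a
    # resolver may cache this negative answer.
    for rrset in response.authority:
        if rrset.rdtype == dns.rdatatype.SOA:
            print("negative answer cacheable for at most",
                  min(rrset.ttl, rrset[0].minimum), "seconds")
else:
    print("unexpected rcode:", dns.rcode.to_text(response.rcode()))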






Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-04 Thread Einar Stefferud
Well, I for one am not ready to retreat from a more global view
to what essentially amounts to balkanizing the Internet Name Space.

I see nothing wrong with having a .IBM or a .AOL, or .MSN, or a .NMA for that matter.  
All these companies operate in International Cyberspace in terms of name recognition, 
and WIPO, in due course, is going to have to yield to market demands, regardless of 
how long it takes.

To accept WIPO rule and control of global commerce language (and I do believe that DNS 
is a language which uses names to communicate concepts) is just not going to work, as 
WIPO is working to harness global control of naming systems in general.  Give them an 
inch and they will take a mile.

So, I see no point in retrograde suggestions such as your proposal that we all just 
lie down in our WIPO moulds, let them tighten the screws and turn up the heat, and 
soon we will all be just fine.

No thank you for your kind offer of supposed comfort;-)...\Stef


At 1:42 PM -0600 12/3/02, Stephen Sprunk wrote:
Thus spake Einar Stefferud [EMAIL PROTECTED]
  In case you have not noticed, one possible solution is to eliminate all
  TLDs other than .COM, which is the only one that you say so many people
  believe exists.
 
  At which point someone will notice that all addresses have a
  redundant .COM (because all the other TLDs have been removed, and
  so the browsers and mail systems will offer to append (or just assume)
  the redundant .COM suffix for you, and voilà!...

No, keep the ccTLDs and let each country do with them as they wish.  Most
countries have a hierarchical namespace within their ccTLD, though a few are
flat.

Either way, I'll take 250+ flat namespaces (ccTLDs) over one flat namespace
(the root).

COM is a failed experiment and needs to be closed and/or eliminated.

  All solved for the minor cost of forcing all non-.COM domain name
  owners to find and register a new non-colliding domain name under
  .COM!

While international trademark law is a joke at best, each country does have
a framework in place which can be used to resolve conflicts within their own
ccTLD.  This is a lot easier than trying to manage a single global namespace
using the WTO's trademark rules.

S




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/2/2002 11:13 AM Stephen Sprunk wrote:

 Okay, so when every foo.com. applies to become a foo., how will you
 control the growth?

1/ no trademarks allowed

2/ competitive rebidding every two years

3/ mandatory open downstream registrations (no exclusions)

4/ high entry fees

 IMHO, the only solution to this problem is the elimination of gTLDs
 entirely.

There isn't enough demand to support more than a few dozen popular TLDs.
Generic TLDs are user-driven, with the market deciding which ones they
want to use. Geographic TLDs are completely arbitrary and favor the
functionary instead of the user.

-- 
Eric A. Hall    http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/2/2002 11:53 AM Måns Nilsson wrote:

 I hope it would shut the nutcases arguing about new TLDs up, because they
 have been given what they so hotly desire (why escapes me, but I suppose
 they believe they'll make a big bag of money selling domain names. Good
 luck.) 
 
 Technically, it is no problem to keep 500 delegations in sync -- even with
 higher demands on correctness than are made today, both for the root and
 most TLDs. 
 
 However, there can only be one root. That is not up for discussion. (in
 case somebody thought I think so.)

This is also my position entirely.

-- 
Eric A. Hall    http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-03 Thread Eric A. Hall

on 12/3/2002 1:49 PM Stephen Sprunk wrote:
 Thus spake Eric A. Hall [EMAIL PROTECTED]

 1/ no trademarks allowed
 
 Every combination of characters is trademarked for some purpose in some
  jurisdiction.  If you find some exceptions, I'll find some VC money
 and take care of that; problem solved.

Let's not get carried away. Trademark didn't stop .info and it won't stop
.car or .auto either.

 2/ competitive rebidding every two years
 
 IBM is not going to like potentially losing IBM.

see item 1.

 3/ mandatory open downstream registrations (no exclusions)
 
 A hierarchy without any kind of classification?

Nobody has been able to make any kind of classification work in the
generalized sense. Every classification scheme eventually proves to be
derived and arbitrary. Markets are chaotic, but the ordering that makes
sense to the customers does eventually emerge.

 COM. vs NET. today, most SLDs from one exist in the other, and VeriSign
 even offers a package where they'll register your SLD in every single
 TLD that exists for one price.

This is completely irrelevant.

 4/ high entry fees
 
 Well, that'll certainly be needed, since the root registrar will need a
 few hundred DNS servers to handle the volume of new queries in the root
 now that you've made a flat namespace.

I don't see anybody arguing for a flat root. That may be the argument you
want to have but I haven't seen it suggested.

-- 
Eric A. Hall    http://www.ehsco.com/
Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Marc Schneiders
On Fri, 29 Nov 2002, at 14:08 [=GMT-0500], Keith Moore wrote:

   Well, it also matters that the set be constrained to some degree.
   A large flat root would not be very managable, and caches wouldn't
   be very effective with large numbers of TLDs.
 
  That's old fiction.  If it works for .com it will work for ..

 well, it's not clear that it works well for .com.  try measuring
 delay and reliability of queries for a large number of samples
 sometime, and also cache effectiveness.

I guess the burden of proof is on those who argue that it does _not_
work well.

 let's put it another way.  under the current organization if .com breaks
 the other TLDs will still work.   if we break the root, everything fails.

Since .com was until recently running _on_ the root-servers.net hosts
without problems, what are we talking about?

Naturally there won't be 1 million TLDs all at once. We could start
with a couple of hundred. That would merely double the size of the
root.




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Marc Schneiders
On Fri, 29 Nov 2002, at 14:37 [=GMT-0500], Keith Moore wrote:

   let's put it another way.  under the current organization if .com breaks
   the other TLDs will still work.   if we break the root, everything fails.

  Naturally there won't be 1 million TLDs all at once. We could start
  with a couple of hundred. That would merely double the size of the
  root.

 It's not just the size of the root that matters - the distribution
 of usage (and thus locality of reference) also matters.

For those who work with databases: what runs more smoothly, a few
subgroups in a main group with millions of records, or a few thousand
subgroups with thousands of records each?
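
As a toy model only (all numbers are made up and it measures nothing
about real resolvers), the point about locality of reference can be
sketched like this: a small LRU cache of TLD delegations does well when
queries concentrate on a popular head of TLDs and badly when the same
number of queries is spread evenly over a huge flat set:

# Toy model only: an LRU cache of TLD delegations with made-up numbers.
# It says nothing about real resolvers; it just shows that the hit rate
# depends on how queries are distributed, not only on how many TLDs exist.
import random
from collections import OrderedDict

NUM_TLDS = 1_000_000
CACHE_SLOTS = 500
LOOKUPS = 200_000

def hit_rate(draw, seed=1):
    rng = random.Random(seed)
    cache = OrderedDict()          # tiny LRU cache of TLD delegations
    hits = 0
    for _ in range(LOOKUPS):
        tld = draw(rng)
        if tld in cache:
            hits += 1
            cache.move_to_end(tld)
        else:
            cache[tld] = True
            if len(cache) > CACHE_SLOTS:
                cache.popitem(last=False)
    return hits / LOOKUPS

skewed = lambda rng: min(int(rng.paretovariate(1.2)), NUM_TLDS)  # popular head
uniform = lambda rng: rng.randrange(NUM_TLDS)                    # no locality

print("skewed usage : hit rate %.3f" % hit_rate(skewed))
print("uniform usage: hit rate %.3f" % hit_rate(uniform))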

 The point is that if removing constraints on the root causes problems
 (and there are reasons to believe that it will) we can't easily go back
 to the way things were before.

Sure, call it a testbed, like the IDN-testbed of VeriSign.




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Marc Schneiders
On Fri, 29 Nov 2002, at 17:13 [=GMT-0500], Keith Moore wrote:

  If when .com breaks, the other TLDs still work...
  then, isn't that a good reason to have more TLDs?

 it's a good reason to not put all of your eggs in one basket.

 also by limiting the size of the root we make it somewhat easier
 to verify that the root is working correctly.

So this means not millions of TLDs. I agree with that. Not even
thousands, I would say. Not everyone who now has a .com needs a TLD of
their own; that would completely flatten a namespace that is already
flattened to the second level. First target: twice as many as now. And
these 300 or so will also include a lot that will be small, like so many
ccTLDs now are.

-- 
[05] Round the clock here on the internet.
http://logoff.org/




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Marc Schneiders
On Fri, 29 Nov 2002, at 17:24 [=GMT-0500], Keith Moore wrote:

  First target: twice as many as now.

 why?  how will that improve life on the internet?

It would make long domain names of the type
domainnamebargainscheaper.com obsolete. Using domains will become
easier. Less load on nameservers (incl. TLD servers) because of
typos. That is just on the practical level. Other improvements
(probably off topic here) include lower prices and the breaking of a cartel.




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Måns Nilsson



--On Friday, November 29, 2002 17:24:43 -0500 Keith Moore
[EMAIL PROTECTED] wrote:

 First target: twice as many as now.
 
 why?  how will that improve life on the internet?

Basically, it will take some of the exclusiveness out of the TLD concept.
That is a good thing for peace and quiet on several mailing lists and on
the Internet name debate in general. 
I hope it would shut the nutcases arguing about new TLDs up, because they
have been given what they so hotly desire (why escapes me, but I suppose
they believe they'll make a big bag of money selling domain names. Good
luck.) 

Technically, it is no problem to keep 500 delegations in sync -- even with
higher demands on correctness than are made today, both for the root and
most TLDs. 

However, there can only be one root. That is not up for discussion. (in
case somebody thought I think so.)
-- 
Måns Nilsson    http://vvv.besserwisser.org




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Rick Wesson

[ cc list trimmed ]

On Mon, 2 Dec 2002, Stephen Sprunk wrote:


 Okay, so when every foo.com. applies to become a foo., how will you control
 the growth?  What is to keep the root from becoming a flat namespace within
 a few weeks?  It won't take long for the masses to realize that an SLD is not
 as prestigious as their own personal TLD...


I know... a naming hierarchy like in Usenet, but it will only be
controlling at the top -- then an organization will be CHARTERED to be the
caretaker of each of the top-level names. Maybe we'll start off with just
3 and see how it goes...


-rick





Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-02 Thread Michael Froomkin - U.Miami School of Law
I dispute the accuracy of the assertion below (unless registrars is a
typo for registries, in which case we agree totally and you can ignore
what follows):

On Mon, 2 Dec 2002 [EMAIL PROTECTED] wrote:

 Notice that you don't get the lower prices and cartel breaking by increasing
 the number of domains, you get it by increasing the number of registrars.

Please explain your reasoning.  In particular, note whether you consider
registrars and registries to be separate vertical markets.  If so, please
explain how competition in a downstream market affects prices upstream.

Also, please note the vital distinction between number of domains (which
I agree increasing does not increase competition if the number of registry
operators remains constant) and number of registry operators (which I
submit *will* increase competition if this increases as the number of
domains increases -- at least if the new operators are allowed to pick
their character string and given substantial freedom to set their policies
as opposed to the ICANN model of picking strings and setting highly
restrictive policies to discourage wide use [e.g. .coop]). 

-- 
Please visit http://www.icannwatch.org
A. Michael Froomkin   |Professor of Law|   [EMAIL PROTECTED]
U. Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124 USA
+1 (305) 284-4285  |  +1 (305) 284-6506 (fax)  |  http://www.law.tm
--It's warm here.--





Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-12-01 Thread Michael Froomkin - U.Miami School of Law
Competition for registry services is quite likely to lower prices if they
get to pick their own TLD strings and policies.

Look how much prices went down due to registrar competition.

On Fri, 29 Nov 2002, Keith Moore wrote:

  First target: twice as many as now.
 
 why?  how will that improve life on the internet?
 
 

-- 
Please visit http://www.icannwatch.org
A. Michael Froomkin   |Professor of Law|   [EMAIL PROTECTED]
U. Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124 USA
+1 (305) 284-4285  |  +1 (305) 284-6506 (fax)  |  http://www.law.tm
--It's warm here.--




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-29 Thread Joe Baptista

On Fri, 29 Nov 2002, Keith Moore wrote:

  It doesn't matter who selects the TLDs;
  all that matters is that there be a consistent set.

 Well, it also matters that the set be constrained to some degree.
 A large flat root would not be very managable, and caches wouldn't
 be very effective with large numbers of TLDs.

That's old fiction.  If it works for .com it will work for ..

I don't see much in the way of difficulties here.

regards
joe baptista




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-29 Thread Joe Baptista

On Fri, 29 Nov 2002, Keith Moore wrote:

   Well, it also matters that the set be constrained to some degree.
   A large flat root would not be very managable, and caches wouldn't
   be very effective with large numbers of TLDs.
 
  That's old fiction.  If it works for .com it will work for ..

 well, it's not clear that it works well for .com.  try measuring
 delay and reliability of queries for a large number of samples
 sometime, and also cache effectiveness.

 let's put it another way.  under the current organization if .com breaks
 the other TLDs will still work.   if we break the root, everything fails.

I just can't buy the argument.  The root won't break.  .com works fine -
so would the root.  The only issue would be vulnerability - if the roots
were under attack and the . file was as large as the .com zone - then I
would imagine there would be a significant problem.  These same
vulnerability issues exist for the .com zone every day.  It's a very
vulnerable namespace to attack.

That's about the only significant problem I see with a . file being as
large as .com.

regards
joe baptista




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-29 Thread Einar Stefferud
OK, we now have several words used for supposedly the same thing.

1)  ONE MONOPOLY ROOT OWNED and CONTROLLED BY ICANN; making all decisions and 
leasing out TLD and lower domain name holder-ships, which supposedly yields ONE 
SINGLE ROOT controlled by ICANN.  Also provides a pseudo-legal court system (UDRP) for 
adjudicating holder disputes below the ICANNIC root.  Any domain names in use 
outside this construct are declared to be operated by PIRATE and Dishonest parties, 
whether they existed before ICANN came into existence or not, and even when created by 
Jon Postel pre-ICANN.

2)  A Consistent Set of TLDs which do not include any collisions, and hopefully also 
do not endure any colliding domain names outside this Consistent Set.  How the 
collisions are avoided apparently assumes some kind of communications system that is 
used for coordinating the introductions of new domain names to avoid introducing any 
and all collisions.

3)  A Centrally Coordinated Root that entails some kind of communications system 
that is used for coordinating the introductions of new domain names to avoid 
introducing any and all collisions.

I can see some equivalence between 2 and 3, both of which can be seen to achieve the 
desired result of a collision free root and thus a collision free DNS name tree, if 
this same coordination responsibility is attached to all delegations under the root.

But I see no justification for creation of a monopolistic single point of failure 
with unquestioned power to unilaterally set many kinds of policies 
regarding registration business models and use rules for DNS names.

Please explain how you see these relationships.

Cheers...\Stef


At 12:09 PM -0500 11/29/02, Steven M. Bellovin wrote:
In message [EMAIL PROTECTED], Valdis.Kletni
[EMAIL PROTECTED] writes:
 
 
 
  On Wed, 27 Nov 2002 12:45:23 PST, Einar Stefferud [EMAIL PROTECTED] said:
 
   ICANN stands alone in its EXCLUSIVENESS, while arguing 
  that there must only be one root.  All others must die!
 
 Think .BIZ.
 
 Now go back and *CAREFULLY* re-read RFC 2826.  Note that nowhere
 does it say that ICANN has to be the root.  What it says is either you
 have one centrally coordinated root, or you have Balkanization.
 

This is precisely the point.  It doesn't matter who selects the TLDs; 
all that matters is that there be a consistent set.

   --Steve Bellovin, http://www.research.att.com/~smb (me)
   http://www.wilyhacker.com (Firewalls book)




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-27 Thread Rick Wesson


Steve,


On Wed, 27 Nov 2002, Steve Hotz wrote:


 H?
 At the risk of feeding a diversionary thread, it does
 seem appropriate to address the question of the number
 of Internet users who can see New.net's domain names.

[ many lines of self gratification trimmed ]

I don't believe the topic to be relevant to this list, but I do have a
suggestion for you...

Turn off the new.net root servers and see if any press gets written
about the event. When you get some press, then you'll know some folks can
see your servers; until then they [the new.net servers] probably don't
matter.


best,

-rick





Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-27 Thread Einar Stefferud

Steve most Internet users who
Steve can see/access New.net domains do so either (a) via recursives
Steve that rely on USG-root but augment with NN domains, or (b) user
Steve machines that have the NN client plugin (which still relies
Steve on the USG-root).  So, New.net does not contribute significantly
Steve to the use of a non-USG root.


Of course;-)...

It should be patently obvious that most people want to see 
The Whole Internet when they decide they want to see some 
sites that are not included in the ICANNIC root.

So, any rational arrangement to make non-ICANNIC TLDs visible to 
users will arrange to be additive, as compared to exclusionary.

So, the situation is very simple:

The ICANNIC ROOT is EXCLUSIONARY, so
The other roots must be INCLUSIONARY.

So, the ICANNIC root will always have more users, 
no matter what anyone does, short of converting the 
ICANNIC ROOT to be INCLUSIONARY.

Thus this whole discussion thread is just plain silly, 
as long as the ICANNIC root remains EXCLUSIONARY.

But, maybe some good will come from it because 
it puts the blurred ICANNIC game in plain sight.

ICANN stands alone in its EXCLUSIVENESS, while arguing 
that there must only be one root.  All others must die!
So, now the real issue becomes totally clear.

Enjoy;-)...\Stef




Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

2002-11-27 Thread Joe Baptista

On Wed, 27 Nov 2002, Dave Crocker wrote:

 if new.net were so sure of the efficacy of their approach, why do they
 (redundantly) use new.net in the ICANN/IANA root?

they want to be backwards compatible with the old legacy internet.