Re: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Eliot Lear
Joe,
 SRV records are not equivalent to either assigned or mutually-negotiated
 ports; they would require extra messages, extra round-trip times, and/or
 extra services (DNS) beyond what is currently required.
   
Just to be clear, I am not suggesting that no assignments be done, but
that SRV records be used where appropriate.  If setup time or circular
dependencies are a concern, SRV records may not be appropriate.

Eliot

___
Ietf mailing list
Ietf@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf


Re: Best practice for data encoding?

2006-06-06 Thread Eliot Lear
Iljitsch van Beijnum wrote:
 I was wondering:

 What is considered best practice for encoding data in protocols within
 the IETF's purview?
One should always think about what one needs and choose the appropriate
solution to the task.  Of course sometimes it's hard to take into
account what level of performance one would need out of a protocol
implementation.  RAM is considerably cheaper now than it was twenty
years ago, and so one approach in protocol design would be to define
multiple encodings as they are required.  So, if you don't think
performance is crucial but toolset reuse is for an RPC-based approach,
perhaps XML is a good start, and if you need to optimize later, perhaps
consider something more compact like XDR.

As to whether ASN.1 was a good choice or a bad choice for SNMP, there
never was an argument.  It was THE ONLY CHOICE.  All three protocols
(CMIP, SGMP, HEMP)  under consideration made use of it.  Nobody
seriously considered anything else due to the practical limits of the
time.  Is it still a reasonable approach?  I think a strong argument
could be made that some sort of textual representation is necessary in
order to satisfy more casual uses and to accommodate tool sets that are
more broadly utilized, but that doesn't mean that we should do away with
ASN.1, archaic as it may seem.

Eliot



RE: Best practice for data encoding?

2006-06-06 Thread Tony Finch
On Mon, 5 Jun 2006, David Harrington wrote:

 CERT Advisory CA-2001-18 Multiple Vulnerabilities in Several
 Implementations of the Lightweight Directory Access Protocol (LDAP)

 Vulnerability Note VU#428230 Multiple vulnerabilities in S/MIME
 implementations

Oh yes, I forgot those were ASN.1 too.

Tony.
-- 
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
FORTIES CROMARTY FORTH TYNE DOGGER: VARIABLE 3 OR 4. MAINLY FAIR. MODERATE OR
GOOD.



RE: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Hallam-Baker, Phillip
 From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED] 

 (2) As I understand it, for ports above 1024, the IANA does 
 _not_ assign
 values - it just registers uses claimed by others.  Eliminating
 well-known ports eliminates any assignment role, and 
 leaves us with
 just a registry of what people have claimed.  Note that this means
 there is no mechanism which prevents the same number from being
 registered by more than one registry.

So how is a server to support two services that happen to have chosen the same 
port number?

I think that what is indicated here is that service discovery by port number is 
broken and no longer scalable. 

There are only 65536 possible port numbers, yet we expect to see rather more Web 
Services become established. We have 10,000 registrations already. This is a 
failed discovery strategy.

The scalable discovery strategy for the Internet is to use SRV records. For 
this to be possible it has to become as easy to register an SRV code point as 
it is currently to register a port. It makes no sense for there to be more 
restrictions on issuing the unlimited resource than on the limited one.
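To make the SRV mechanism concrete: a client looks up _service._proto.name and orders the returned records by priority, with weighted random selection within each priority, before connecting (RFC 2782). A minimal selection sketch in Python, with hypothetical record data not taken from any real zone:

```python
import random
from typing import List, Tuple

# An SRV record's relevant fields: (priority, weight, port, target).
SRV = Tuple[int, int, int, str]

def order_srv(records: List[SRV], rng: random.Random) -> List[SRV]:
    """Order SRV records for connection attempts per RFC 2782:
    lower priority values first; within equal priority, pick targets
    by weighted random selection."""
    ordered = []
    for prio in sorted({r[0] for r in records}):
        group = [r for r in records if r[0] == prio]
        while group:
            total = sum(r[1] for r in group)
            pick = rng.randint(0, total) if total else 0
            running = 0
            for r in group:
                running += r[1]
                if running >= pick:
                    ordered.append(r)
                    group.remove(r)
                    break
    return ordered

records = [
    (10, 60, 5060, "big.example.com."),
    (10, 20, 5060, "small.example.com."),
    (20, 0, 5061, "backup.example.com."),
]
print(order_srv(records, random.Random(42)))
```

The point relevant to this thread: the port number comes out of the record itself, so nothing forces it to match a centrally registered value.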

Getting an SRV code point registered is not a trivial task and there is in fact 
a parallel non-IANA registry already operating because most people cannot be 
bothered to deal with the IETF process. It should not be necessary to write an 
RFC or go through the IESG to register a code point. The implicit assumption 
here is that the IESG controls the Internet through control of discovery 
apparatus, a silly notion that the other Internet standards bodies are not going 
to accept.

If the W3C or OASIS develops a spec for a Web service it makes no sense for 
them to be required to write an RFC, to grovel to the IESG, and, worse, to be 
held captive by the IESG work schedule. Not going to happen, nor does it: 
people in those groups who want SRV records cut simply do it.


 I do _not_ support the introduction of a charging model, for 
 a couple of 
 reasons.  First, I don't want to see port numbers become a 
 politicized 
 commodity, like IP address space and domain names have.

I agree that this would be a very bad idea at this stage. Introducing charging 
now is more likely to lead to speculation and premature exhaustion of the 
supply.


 (*) Some years ago, there was a period of time lasting 
 several months when 
 users of a particular large network provider were unable to 
 communicate 
 with CMU, because that provider had usurped 128.2/16 for some 
 private use 
 within its network. 

This particular weakness in the allocation of IPv4 addresses is likely to be 
exercised with increasing frequency as the IPv4 address pool nears 
exhaustion.

One can well imagine that a large ISP operating in Asia might decide that, 
rather than pay an exorbitant amount to buy another 4 million addresses, it 
might simply make a private agreement with its neighboring ISPs to divvy up 
net 18 (18.0.0.0/8, assigned to MIT).

The bad effects resulting from such practices hardly need to be stated. If we 
are lucky people will go for the Class D and Class E space first. But that is 
going to upset some people (Mbone users for instance).


The governance mechanisms of the Internet assume a degree of authoritarian 
control that simply does not exist. It is goodwill rather than authority that 
keeps the Internet together.

My theory (which I make no apologies for acting on) is that Vint Cerf and Jon 
Postel intended the mechanisms set up to control and allocate to act as the 
Gordian knot.



Re: Pre-IPV6 maintenance of one of the www.ietf.org servers - 2006/06/03 - 12:00am EST

2006-06-06 Thread Tim Chown
On Fri, Jun 02, 2006 at 09:29:21PM -0400, [EMAIL PROTECTED] wrote:
 
 Hi All,
 
 Tomorrow Saturday June 3 at 12:00am EST, we will be taking down one of
 the round robin www servers for the IETF (209.173.53.180) for
 maintenance in preparation for supporting IPV6.  The outage should be
 less than 1 hour.  This system also serves as the primary site for...
 
  noc.ietf.org
  www.iab.org
  www.iesg.org
 
 so those sites will also be down.

Congratulations!

$ telnet www.ietf.org 80
Trying 2001:503:c779:b::d1ad:35b4...
Connected to www.ietf.org.

-- 
Tim/::1



Re: Pre-IPV6 maintenance of one of the www.ietf.org servers - 2006/06/03 - 12:00am EST

2006-06-06 Thread Pekka Savola

On Tue, 6 Jun 2006, Tim Chown wrote:

On Fri, Jun 02, 2006 at 09:29:21PM -0400, [EMAIL PROTECTED] wrote:
Congratulations!

$ telnet www.ietf.org 80
Trying 2001:503:c779:b::d1ad:35b4...
Connected to www.ietf.org.


Umm.  That's ARIN's Critical Infrastructure Allocation block.  I 
doubt that IETF web servers fall into that category (heck, ICANN 
qualifies! :-).  Neustar seems to be (re-?)using a block assigned to it 
for other reasons...


--
Pekka Savola                 "You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings



Re: Best practice for data encoding?

2006-06-06 Thread Tony Finch
On Mon, 5 Jun 2006, Steven M. Bellovin wrote:
 On Mon, 5 Jun 2006 16:06:28 -0700, Randy Presuhn
 [EMAIL PROTECTED] wrote:
 
  I'm curious, too, about the claim that this has resulted in security
  problems.  Could someone elaborate?

 See http://www.cert.org/advisories/CA-2002-03.html

ASN.1 implementation bugs have also caused security problems for SSL,
Kerberos, ISAKMP, and probably others. These bugs are also not due to
shared code history: they turn up again and again.

Are there any other binary protocols that can be usefully compared with
ASN.1's security history?

Tony.
-- 
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
THE MULL OF GALLOWAY TO MULL OF KINTYRE INCLUDING THE FIRTH OF CLYDE AND THE
NORTH CHANNEL: VARIABLE 2 OR 3 WITH AFTERNOON ONSHORE SEA BREEZES. FAIR
VISIBILITY: MODERATE OR GOOD WITH MIST OR FOG PATCHES SEA STATE: SMOOTH OR
SLIGHT.



IETF Sites Support IPv6

2006-06-06 Thread Ray Pelletier
I am pleased to report this 6th day of June 2006 that IETF FTP, Mail &
Web support IPv6.


I want to thank NeuStar Secretariat Services for their efforts and Jordi 
Palet Martinez for his assistance.


Ray Pelletier
IAD




Re: Wasting address space (was: Re: Last Call: 'Considerations on the IPv6 Host density Metric' to Informational RFC (draft-huston-hd-metric))

2006-06-06 Thread Tim Chown
On Mon, Jun 05, 2006 at 08:12:28PM +0200, Iljitsch van Beijnum wrote:
 
 Having to choose between /60 and /48 would be much better than having
 to choose between /64 and bigger in general, as it removes the "will
 I ever need a second subnet" consideration, the average allocation
 size goes down and moving to a /48 after having grown out of a /60
 isn't too painful.

There's a certain appeal to this, to have to renumber before your
network grows too big.  Interesting suggestion.
 
 It's also really helpful if all ISPs use the same subnet sizes. For  
 instance, I can set up my routes as DHCPv6 prefix delegation clients,  
 so they can be reconfigured with new address prefixes automatically  
 when changing ISPs, but I still need to put the subnet bits (and  
 therefore the subnet size) in the configuration by hand, so having to  
 change that defeats the purpose of the exercise.

Which was the point of /48 pervasively?
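For reference, the arithmetic behind the /60-versus-/48 choice, assuming /64 subnets throughout (a sketch, not from the thread):

```python
def subnets_in(prefix_len: int, subnet_len: int = 64) -> int:
    """Number of /subnet_len subnets that fit in a /prefix_len allocation."""
    if prefix_len > subnet_len:
        raise ValueError("allocation is smaller than one subnet")
    return 2 ** (subnet_len - prefix_len)

print(subnets_in(60))  # 16 subnets in a /60
print(subnets_in(48))  # 65536 subnets in a /48
```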

-- 
Tim/::1





Re: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Joe Touch


Eliot Lear wrote:
 Joe,
 SRV records are not equivalent to either assigned or mutually-negotiated
 ports; they would require extra messages, extra round-trip times, and/or
 extra services (DNS) beyond what is currently required.
   
 Just to be clear, I am not suggesting that no assignments be done, but
 that SRV records be used where appropriate.  If setup time or circular
 dependencies are a concern, SRV records may not be appropriate.

Right - I agree that assignments should not differentiate based on privilege.

SRV records serve two purposes: to unload the fixed list from IANA (like
moving hosts.txt to the DNS did) and to allow local control over the map
between service name and port (which can allow more than 65,000 services
total).

The first use is fine, but overkill IMO for a list with 65,000 entries
at most. The second is a problem, for reasons explained in my I-D,
because it puts control over host service offerings in the hands of
whoever controls its DNS (e.g., another thing for ISPs to claim makes
you a commercial customer at commercial prices) and because it's
inefficient.

Joe





Re: IETF Sites Support IPv6

2006-06-06 Thread JORDI PALET MARTINEZ
Hi Ray, all,

I must say that most of the job has been done by NeuStar, so my
congratulations to them for achieving this, and my continuing offer of
similar assistance to anyone who needs it. I was very happy to record the
first ever pings and traceroutes to IETF servers :-)))

I take the opportunity to provide a couple of URLs that are related somehow
to all this:

http://www.ipv6day.org
http://www.ipv6-to-standard.org

Regards,
Jordi




 De: Ray Pelletier [EMAIL PROTECTED]
 Responder a: [EMAIL PROTECTED]
 Fecha: Tue, 06 Jun 2006 10:31:58 -0400
 Para: IETF Discussion ietf@ietf.org
 Asunto: IETF Sites Support IPv6
 
 I am pleased to report this 6th day of June 2006 that IETF FTP, Mail &
 Web support IPv6.
 
 I want to thank NeuStar Secretariat Services for their efforts and Jordi
 Palet Martinez for his assistance.
 
 Ray Pelletier
 IAD
 
 




**
The IPv6 Portal: http://www.ipv6tf.org

Barcelona 2005 Global IPv6 Summit
Slides available at:
http://www.ipv6-es.com

This electronic message contains information which may be privileged or 
confidential. The information is intended to be for the use of the 
individual(s) named above. If you are not the intended recipient be aware that 
any disclosure, copying, distribution or use of the contents of this 
information, including attached files, is prohibited.






RE: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Hallam-Baker, Phillip
 From: Joe Touch [mailto:[EMAIL PROTECTED] 

 The second is a problem, for reasons 
 explained in my I-D, because it puts control over host 
 service offerings in the hands of whomever controls its DNS 
 (e.g., another thing for ISPs to claim makes you a commercial 
 customer at commercial prices) and because it's inefficient.

This is an irrelevant issue based on a premise that is absolutely and totally 
wrong.

There is NO CHANGE OF CONTROL due to SRV, none, zip, nada.

If a party controls the DNS information for a host it controls all name based 
inbound connections to that host absolutely and irrevocably.

Devolving additional functions to the DNS does not entail any change of control 
because that control is already lost.


If I control example.com I control the inbound email, web, ftp services. If you 
are binding to a raw IP address then SRV is not exactly going to be very 
relevant in any case is it?


The Internet is the DNS; the IP-based packet transport is mere plumbing. 


If someone wants to be a first class citizen on the Internet they have to own 
and control their own DNS service. Otherwise they can have no meaningful 
control or security. 

DNS names are not free but they are exceptionally cheap. If you want to put up 
some service and your ISP refuses to allow you control of the DNS there are 
plenty of DNS service providers who will be happy to help. 




Re: IETF Sites Support IPv6

2006-06-06 Thread Iljitsch van Beijnum

On 6-jun-2006, at 16:31, Ray Pelletier wrote:

I am pleased to report this 6th day of June 2006 that IETF FTP,
Mail & Web support IPv6.


Congratulations!

Web works. I'll be checking the headers for this message as soon as  
it comes back, they should be entirely IPv4-free...


Iljitsch



RE: Best practice for data encoding?

2006-06-06 Thread Hallam-Baker, Phillip

 From: Steven M. Bellovin [mailto:[EMAIL PROTECTED] 

 More precisely -- when something is sufficiently complex, 
 it's inherently bug-prone.  That is indeed a good reason to 
 push back on a design.  The question to ask is whether the 
 *problem* is inherently complex -- when the complexity of the 
 solution significantly exceeds the inherent complexity of the 
 problem, you've probably made a mistake.  When the problem 
 itself is sufficiently complex, it's fair to ask if it should 
 be solved.  Remember point (3) of RFC 1925.

I think that the term 'too complex' is probably meaningless and is in any case 
an inaccurate explanation for the miseries of ASN.1, which are rather different 
from the ones normally given.

People push back on protocols all the time for a range of reasons. Too complex 
is a typically vague and unhelpful pushback. I note that all too often the 
complexity of deployed protocols is the result of efforts of people to reduce 
the complexity of the system to the point where it was insufficient for the 
intended task.

Having had Tony Hoare as my college tutor at Oxford, I have experienced a 
particularly uncompromising approach to complexity. However, the point Hoare 
makes repeatedly is "as simple as possible, but no simpler."


In the case of ASN.1 I think the real problem is not the 'complexity' of the 
encoding, it's the mismatch between the encoding used and the data types 
supported in the languages that are used to implement ASN.1 systems.

DER encoding is most certainly a painful disaster, and it is completely 
unnecessary in X.509, as is empirically demonstrated by the fact that the 
Internet worked just fine without anyone noticing (ok, one person noticed) in 
the days when CAs issued BER encoded certs. 

The real pain in ASN.1 comes from having to deal with piles of unnecessary 
serialization/deserialization code.


The real power of S-Expressions is not the simplicity of the S-Expression. 
Dealing with large structures in S-Expressions is a tedious pain to put it 
mildly. The code to deal with serialization/deserialization is avoided because 
the data structures are introspective (at least in Symbolics LISP which is the 
only one I ever used).

If ASN.1 had been done right it would have been possible to generate the 
serialization/deserialization code automatically from native data structures in 
the way that .NET allows XML serialization classes to be generated 
automatically.


Unfortunately ASN.1 went into committee as a good idea and came out a camel. 
And all of the attempts to remove the hump since have merely created new humps.


At this point XML is not a bad choice for data encoding. I would like to see 
the baroque SGML legacy abandoned (in particular, eliminate DTDs entirely). XML 
is not a perfect choice, but it is not a bad one, and done right it can be 
efficient.

The problem in XML is that XML Schema was botched and in particular namespaces 
and composition are botched. I think this could be fixed, perhaps.



Re: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Joe Touch


Hallam-Baker, Phillip wrote:
 From: Joe Touch [mailto:[EMAIL PROTECTED] 
 
 The second is a problem, for reasons 
 explained in my I-D, because it puts control over host 
 service offerings in the hands of whomever controls its DNS 
 (e.g., another thing for ISPs to claim makes you a commercial 
 customer at commercial prices) and because it's inefficient.
 
 This is an irrelevant issue based on a premise that is absolutely and totally 
 wrong.
 
 There is NO CHANGE OF CONTROL due to SRV, none, zip, nadda.
 
 If a party controls the DNS information for a host it controls
 all name based inbound connections to that host absolutely and
irrevocably.

The DNS controls the IP address; ISPs aren't reluctant to control the
forward DNS lookup for an IP address, even when transient.

Were the DNS to control the services available, customers would be at
the mercy of their ISP to make new services widely available. ISPs
already want to control that using port filtering.

...
 If someone wants to be a first class citizen on the Internet they
 have to own and control their own DNS service.

How so? What defines first-class?

All they really need is:
- stable IP addresses
- stable matching forward and reverse DNS entries
- a lack of port filtering

If they want control over their DNS name, they also need:
- control over their IP address's reverse DNS entry

Relying on SRV records puts more control in the DNS. While that may not
matter much for users managing their own DNS*, it does matter a LOT for
the five 9's of the rest of us who don't.

 DNS names are not free but they are exceptionaly cheap. 
 If you want to put up some service and your ISP refuses to
 allow you control of the DNS there are plenty of DNS service
 providers who will be happy to help.

That assumes the applications lookup the service name on the DNS name,
rather than the IP address. The former may have multiple IP addresses
with different service name:port bindings; the latter is more
appropriate, IMO. That then results in dependence on the DNS under the
control of the ISP - since they're unlikely to delegate the control of a
single reverse entry to you.

And 5 9's of users may want or need services (e.g., some OS diagnostics
rely on web servers running on your host), but they're not about to
set up a DNS server, regardless of how inexpensive.

Joe






RE: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Hallam-Baker, Phillip

 From: Joe Touch [mailto:[EMAIL PROTECTED] 

 Hallam-Baker, Phillip wrote:
  From: Joe Touch [mailto:[EMAIL PROTECTED]
  
  The second is a problem, for reasons explained in my I-D, 
 because it 
  puts control over host service offerings in the hands of whomever 
  controls its DNS (e.g., another thing for ISPs to claim 
 makes you a 
  commercial customer at commercial prices) and because it's 
  inefficient.
  
  This is an irrelevant issue based on a premise that is 
 absolutely and totally wrong.
  
  There is NO CHANGE OF CONTROL due to SRV, none, zip, nadda.
  
  If a party controls the DNS information for a host it controls all 
  name based inbound connections to that host absolutely and
 irrevocably.
 
 The DNS controls the IP address; ISPs aren't reluctant to 
 control the forward DNS lookup for an IP address, even when transient.

Mine is; I have no forward DNS pointing to my machine at all from my bandwidth 
provider.

You do not have to use the DNS service provided by your ISP, if you do they 
control you.

 Were the DNS to control the services available, customers 
 would be at the mercy of their ISP to make new services 
 widely available. ISPs already want to control that using 
 port filtering.

You are confusing politics with technology and making a hash of both.

You do not have to use the DNS service provided by your ISP.

Regardless of whether you do or not their ability to filter services is far 
greater under the port allocation scheme you champion than under a DNS centric 
model.

If the evil service is on port 666 it is a trivial matter to block it; not so 
if the evil service is being managed by an independent DNS service provider who 
maps the SRV record to a port that the ISP has not blocked.

 ...
  If someone wants to be a first class citizen on the 
 Internet they have 
  to own and control their own DNS service.
 
 How so? What defines first-class?


 All they really need is:
   - stable IP addresses
   - stable matching forward and reverse DNS entries
   - a lack of port filtering

No, you need to control your own name. Unless you can do that you are a serf.

That is why it is better to be hallam-baker.com rather than 
hallam-baker.blogspot.com. Unless you own the DNS name you are permanently at 
the mercy of the owner of blogspot.com. If their conditions of service change 
in ways that are unfavorable to you, you have no recourse.




Re: Best practice for data encoding?

2006-06-06 Thread Steven M. Bellovin
On Tue, 6 Jun 2006 09:50:22 -0700, Hallam-Baker, Phillip
[EMAIL PROTECTED] wrote:


 
 Having had Tony Hoare as my college tutor at Oxford I have experienced a
 particularly uncompromising approach to complexity. However the point
 Hoare makes repeatedly is as simple as possible but no simpler.

Hoare has been a great influence on my thinking, too.  I particularly
recall his Turing Award lecture, where he noted:

There are two ways of constructing a software design: One way is
to make it so simple that there are obviously no deficiencies, and
the other way is to make it so complicated that there are no
obvious deficiencies. The first method is far more difficult.

(In that same lecture, he warned of security issues from not checking
array bounds at run-time, but that's a separate rant.)

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb



RE: Best practice for data encoding?

2006-06-06 Thread Christian Huitema
 ASN.1 implementation bugs have also caused security problems for SSL,
 Kerberos, ISAKMP, and probably others. These bugs are also not due to
 shared code history: they turn up again and again.
 
 Are there any other binary protocols that can be usefully compared
with
 ASN.1's security history?

There is indeed a lot of complexity in ASN.1. At the root, ASN.1 is a
basic T-L-V encoding format, similar to what we see in multiple IETF
protocols. However, for various reasons, ASN.1 includes a number of
encoding choices, each of which is an occasion for programming errors:

* In most TLV applications, the type field is a simple number varying
from 0 to 254, with the number 255 reserved for extension. In ASN.1, the
type field is structured as a combination of scope and number, and the
number itself can be encoded on a variable number of bytes.
* In most TLV applications, the length field is a simple number. In
ASN.1, the length field is variable length.
* In most TLV applications, structures are delineated by the length
field. In ASN.1, structures can be delineated either by the length field
or by an end of structure mark.
* In most TLV applications, a string is encoded as just a string of
bytes. In ASN.1, it can be encoded either that way, or as a sequence of
chunks, which conceivably could themselves be encoded as chunks.
* Most applications tolerate some variations in component ordering and
deal with optional components, but ASN.1 pushes that to an art form.
* I don't remember exactly how many character sets ASN.1 supports,
but it is way more than your average application.
* Most applications encode integer values by reference to classic
computer encodings, e.g. signed/unsigned char, short, long, long-long.
ASN.1 introduces its own encoding, which is variable length.
* One can argue that SNMP makes a creative use of the Object
Identifier data type of ASN.1, but one also has to wonder why this data
type is specified in the language in the first place.

Then there are MACRO definitions, VALUE specifications, and an even more
complex definition of extension capabilities. In short, ASN.1 is vastly
more complex than the average TLV encoding. The higher rate of errors is
thus not entirely surprising.
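As a rough illustration of the variable-length tag and length fields described above, here is a minimal BER header parser in Python (a sketch only: it drops the class and constructed bits and assumes well-formed input):

```python
def parse_ber_header(buf: bytes, off: int = 0):
    """Parse one BER tag+length header starting at off.
    Returns (tag_number, content_length, header_size); content_length is
    None for the indefinite form."""
    tag = buf[off] & 0x1F                 # low 5 bits; class/constructed bits dropped
    i = off + 1
    if tag == 0x1F:                       # high-tag-number form: base-128 octets
        tag = 0
        while True:
            b = buf[i]
            i += 1
            tag = (tag << 7) | (b & 0x7F)
            if not b & 0x80:
                break
    first = buf[i]
    i += 1
    if first < 0x80:                      # short form: length in one octet
        length = first
    elif first == 0x80:                   # indefinite form: ends with end-of-contents
        length = None
    else:                                 # long form: next (first & 0x7F) octets
        n = first & 0x7F
        length = int.from_bytes(buf[i:i + n], "big")
        i += n
    return tag, length, i - off

# INTEGER 5; SEQUENCE with long-form length 256; SEQUENCE with indefinite length:
print(parse_ber_header(bytes([0x02, 0x01, 0x05])))        # (2, 1, 2)
print(parse_ber_header(bytes([0x30, 0x82, 0x01, 0x00])))  # (16, 256, 4)
print(parse_ber_header(bytes([0x30, 0x80])))              # (16, None, 2)
```

Even this toy version needs four distinct code paths before a single content octet has been read, which is the point being made above.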

-- Christian Huitema




Re: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Joe Touch
Hallam-Baker, Phillip wrote:
...
 You are confusing politics with technology and making a hash of both.

I would encourage you to review the doc; it discusses the details of the
differences in technical terms. I'll refrain from repeating them here.

Joe





RE: Best practice for data encoding?

2006-06-06 Thread Jeffrey Hutzelman



On Tuesday, June 06, 2006 10:33:30 AM -0700 Christian Huitema 
[EMAIL PROTECTED] wrote:



ASN.1 implementation bugs have also caused security problems for SSL,
Kerberos, ISAKMP, and probably others. These bugs are also not due to
shared code history: they turn up again and again.

Are there any other binary protocols that can be usefully compared

with

ASN.1's security history?


There is indeed a lot of complexity in ASN.1. At the root, ASN.1 is a
basic T-L-V encoding format, similar to what we see in multiple IETF
protocols. However, for various reasons, ASN.1 includes a number of
encoding choices that are as many occasions for programming errors:


To be pedantic, ASN.1 is what its name says it is - a notation.
The properties you go on to describe are those of BER; other encodings have 
other properties.  For example, DER adds constraints such that there are no 
longer multiple ways to encode the same thing.  Besides simplifying 
implementations, this also makes it possible to compare cryptographic 
hashes of DER-encoded data; X.509 and Kerberos both take advantage of this 
property.  PER eliminates many of the tags and lengths, and my 
understanding is that there is a set of rules for encoding ASN.1 data in 
XML.
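The canonical-encoding point can be shown in a few lines: BER admits several encodings of one abstract value, so hashing BER octets is ambiguous, while DER's single permitted form makes the hash well defined. A small illustration, taking as given that both byte strings below are legal BER for the INTEGER 5:

```python
import hashlib

# Two legal BER encodings of the abstract value INTEGER 5:
ber_short = bytes([0x02, 0x01, 0x05])       # length 1 in short form
ber_long = bytes([0x02, 0x81, 0x01, 0x05])  # same length in (redundant) long form

# Same value, different octets, hence different hashes:
h_short = hashlib.sha256(ber_short).hexdigest()
h_long = hashlib.sha256(ber_long).hexdigest()
print(h_short == h_long)  # False

# DER forbids the long form here (shortest possible length encoding),
# so a value's DER encoding, and therefore its hash, is unique.
```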




* One can argue that SNMP makes a creative use of the Object
Identifier data type of ASN.1, but one also has to wonder why this data
type is specified in the language in the first place.


Well, I can't speak to the original motivation, but under BER, encoding the 
same sort of hierarchical name as a SEQUENCE OF INTEGER takes about three 
times the space the primitive type does, assuming most of the values are 
small.
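That ratio is easy to check. Under BER, an OBJECT IDENTIFIER packs the first two arcs into one octet and base-128-encodes the rest, while SEQUENCE OF INTEGER pays tag and length octets per component. A sketch handling small non-negative arcs only (not a full encoder):

```python
def encode_oid(arcs):
    """BER encoding (tag, length, content) of an OBJECT IDENTIFIER."""
    body = bytearray([40 * arcs[0] + arcs[1]])  # first two arcs share one octet
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                    # base-128, high bit = continuation
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        body.extend(reversed(chunk))
    return bytes([0x06, len(body)]) + bytes(body)

def encode_seq_of_int(arcs):
    """The same arcs encoded as SEQUENCE OF INTEGER (single-octet values only)."""
    body = b"".join(bytes([0x02, 0x01, a]) for a in arcs)
    return bytes([0x30, len(body)]) + body

oid = encode_oid([1, 3, 6, 1, 2, 1])          # the mib-2 prefix: 7 octets
seq = encode_seq_of_int([1, 3, 6, 1, 2, 1])   # the same arcs: 20 octets
```

For the mib-2 prefix 1.3.6.1.2.1, the primitive type needs 7 octets against 20, close to the factor of three mentioned above.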




Then there are MACRO definitions, VALUE specifications, and an even more
complex definition of extension capabilities. In short, ASN.1 is vastly
more complex than the average TLV encoding. The higher rate of errors is
thus not entirely surprising.


There certainly is a rich set of features (read: complexity) in both the 
ASN.1 syntax and its commonly-used encodings.  However, I don't think 
that's the real source of the problem.  There seem to be a lot of ad-hoc 
ASN.1 decoders out there that people have written as part of some other 
protocol, instead of using an off-the-shelf compiler/encoder/decoder; this 
duplication of effort and code is bound to lead to errors, especially when 
it is done with insufficient attention to the details of what is indeed a 
fairly complex encoding.


I also suspect that a number of the problems found have nothing to do with 
decoding ASN.1 specifically, and would have come up had other approaches 
been used.  For example, several of the problems cited earlier were buffer 
overflows found in code written well before the true impact of that problem 
was well understood.  These problems are more likely to be noticed and/or 
create vulnerabilities when they occur in things like ASN.1 decoders, or 
XDR decoders, or XML parsers, because that code tends to deal directly with 
untrusted input.


-- Jeffrey T. Hutzelman (N3NHS) [EMAIL PROTECTED]
  Sr. Research Systems Programmer
  School of Computer Science - Research Computing Facility
  Carnegie Mellon University - Pittsburgh, PA




RE: Best practice for data encoding?

2006-06-06 Thread Hallam-Baker, Phillip

 From: Steven M. Bellovin [mailto:[EMAIL PROTECTED] 

  Having had Tony Hoare as my college tutor at Oxford I have 
 experienced 
  a particularly uncompromising approach to complexity. However the 
  point Hoare makes repeatedly is as simple as possible but 
 no simpler.
 
 Hoare has been a great influence on my thinking, too.  I 
 particularly recall his Turing Award lecture, where he noted:
 
   There are two ways of constructing a software design: One way is
   to make it so simple that there are obviously no 
 deficiencies, and
   the other way is to make it so complicated that there are no
   obvious deficiencies. The first method is far more difficult.
 
 (In that same lecture, he warned of security issues from not 
 checking array bounds at run-time, but that's a separate rant.)

I think it is a useful illustration of my point.

Dennis Ritchie:
Bounds checking is too complex to put in the runtime library.

Tony Hoare:
Bounds checking is too complex to attempt to perform by hand.


I think that time has proved Hoare and Algol 60 right on this point. It is much 
better to have a single point of control in a system and a single place where 
checking can take place than to make it the responsibility of the programmer to 
hand-code checking throughout their code.
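The Algol-style alternative is easy to see in any checked runtime. A minimal sketch in Python (illustrative only, not anyone's proposed API), where the bounds check lives in exactly one place, the runtime, rather than at every access site:

```python
# Runtime bounds checking as a single point of control: the check is
# implemented once, in the interpreter, rather than hand-coded by every
# programmer at every access site.
def read_element(buf, i):
    try:
        return buf[i]
    except IndexError:
        return None  # an out-of-range access is caught centrally

data = [10, 20, 30]
print(read_element(data, 1))   # -> 20, in range
print(read_element(data, 99))  # -> None, caught rather than silent corruption
```

The point is not the try/except itself but where the check lives: one audited implementation in the runtime versus thousands of hand-written ones scattered through application code.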

Equally the idea of unifying control and discovery information in the DNS may 
sound complex but the result has the potential to be considerably simpler than 
the numerous ad hoc management schemes that have grown up as a result of the 
lack of a coherent infrastructure.



RE: Best practice for data encoding?

2006-06-06 Thread Hallam-Baker, Phillip

 From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED] 

 It's a subset, in fact.  All DER is valid BER.

It is an illogical subset defined in a throwaway comment in an obscure part of 
the spec.

A subset is not necessarily a reduction in complexity. Let us imagine that we 
have a spec that allows you to choose between three modes of transport to get 
to school: walk, bicycle or unicycle.

The unicycle option does not create any real difficulty for you since you 
simply ignore it and use one of the sensible options. And it is no more complex 
to support since a bicycle track can also be used by unicyclists.

Now the same deranged loons who wrote the DER encoding decide that your 
Distinguished transport option is going to be unicycle; that is all you are 
going to be allowed to do.

Suddenly the option which you could ignore as illogical and irrelevant has 
become an obligation. And that is what DER encoding does. 

Since you don't appear to have coded DER encoding I suggest you try it before 
further pontification. If you have coded it and don't understand how so many 
people get it wrong then you are beyond hope.

BTW it's not just the use of definite length tags; there is also a requirement 
to sort the content of sets, which is a real fun thing to do, particularly when 
the spec fails to explain what is actually to be sorted.
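For the record, the sort X.690 asks for is usually read as ascending lexicographic order of the elements' complete encodings, compared as octet strings. A sketch under that reading (assuming the elements are already encoded):

```python
# Hedged sketch: DER SET OF ordering under the common reading of X.690,
# i.e. sort the complete TLV encodings of the elements as octet strings.
def der_sort_set_of(encoded_elements):
    # encoded_elements: list of bytes, each one a full TLV encoding
    return sorted(encoded_elements)

# INTEGERs 5, 1, 3 as DER TLVs, deliberately out of order
elems = [b"\x02\x01\x05", b"\x02\x01\x01", b"\x02\x01\x03"]
print(der_sort_set_of(elems))  # 1, 3, 5 by encoded bytes
```

The sort itself is trivial; the complaint above is that the spec leaves it unclear whether the tag, length, and value octets are all included in the comparison.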



RE: Best practice for data encoding?

2006-06-06 Thread Jeffrey Hutzelman



On Tuesday, June 06, 2006 11:55:15 AM -0700 Hallam-Baker, Phillip 
[EMAIL PROTECTED] wrote:





From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED]



To be pedantic, ASN.1 is what its name says it is - a notation.
The properties you go on to describe are those of BER; other
encodings have other properties.  For example, DER adds
constraints such that there are no longer multiple ways to
encode the same thing.  Besides simplifying implementations,


Hate to bust your bubble here but DER encoding is vastly more complex
than any other encoding. It is certainly not simpler than the BER
encoding.


It's a subset, in fact.  All DER is valid BER.




RE: Best practice for data encoding?

2006-06-06 Thread Hallam-Baker, Phillip

 From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED] 

 To be pedantic, ASN.1 is what its name says it is - a notation.
 The properties you go on to describe are those of BER; other 
 encodings have other properties.  For example, DER adds 
 constraints such that there are no longer multiple ways to 
 encode the same thing.  Besides simplifying implementations, 

Hate to bust your bubble here but DER encoding is vastly more complex than any 
other encoding. It is certainly not simpler than the BER encoding.

The reason for this is that in DER encoding each chunk of data is encoded 
using the definite length encoding, in which each data structure is preceded by 
a length descriptor. In addition to being much more troublesome to decode than 
a simple end-of-structure marker such as ')', '}', or '/', it is considerably more 
complex to code because the length descriptor is itself a variable-length 
integer.

The upshot of this is that it is impossible to write an LR(1) encoder for DER 
encoding. In order to encode the structure you have to recursively size each 
substructure before the first byte of the enclosing structure can be emitted.
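The length-descriptor mechanics look roughly like this; a hedged Python sketch of the short-form/long-form length encoding (tag classes and indefinite lengths omitted):

```python
def der_length(n):
    """Encode a content length as a DER length descriptor.

    Short form: one byte for lengths 0..127.  Long form: a lead byte
    0x80 | k followed by k big-endian bytes.  This variable-length
    descriptor is why the encoder must size every substructure before
    it can emit the first byte of the enclosing structure.
    """
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_tlv(tag, content):
    # The whole content must exist (and be sized) before the header can
    # be written -- the non-streaming property described above.
    return bytes([tag]) + der_length(len(content)) + content

print(der_tlv(0x04, b"hi").hex())  # OCTET STRING "hi" -> "04026869"
```

Note that `der_tlv` can only run once its `content` argument is fully built, so nested structures force bottom-up construction rather than streaming output.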


 this also makes it possible to compare cryptographic hashes 
 of DER-encoded data; X.509 and Kerberos both take advantage 
 of this property. 

I am not aware of any X.509 system that relies on this property. If there is 
such a system they certainly are not making use of the ability to reduce a DER 
encoded structure to X.500 data and reassemble it. Almost none of the PKIX 
applications have done this properly until recently.

X.509 certs are exchanged as opaque binary blobs by all rational applications. 

  Then there are MACRO definitions, VALUE specifications, and an even
  more complex definition of extension capabilities. In short, ASN.1 is
  vastly more complex than the average TLV encoding. The higher rate of
  errors is thus not entirely surprising.
 
 There certainly is a rich set of features (read: complexity) in both
 the ASN.1 syntax and its commonly-used encodings.  However, I don't
 think that's the real source of the problem.  There seem to be a lot
 of ad-hoc ASN.1 decoders out there that people have written as part of
 some other protocol, instead of using an off-the-shelf
 compiler/encoder/decoder;

That's because most of the off-the-shelf compilers/encoders have historically 
been trash.

Where do you think all the bungled DER implementations came from?

 I also suspect that a number of the problems found have 
 nothing to do with decoding ASN.1 specifically, and would 
 have come up had other approaches been used.  For example, 
 several of the problems cited earlier were buffer overflows 
 found in code written well before the true impact of that 
 problem was well understood.  

Before the 1960s? I very much doubt it.



Offpath BoF at Montreal IETF

2006-06-06 Thread Paul Francis
 

Gang,

The offpath BoF, titled "Path-decoupled Signaling for Data", has been approved
for the next IETF.  The proposed topic for the BoF is copied below, and is
also available at http://www.cs.cornell.edu/people/francis/offpath/.

This message is being sent (separately) to the nsis, behave, p2psip,
e2e-interest, and ietf mailing lists.

The purpose of this message is to solicit feedback about the topics and
agenda for the BoF.  Note that the immediate goal of the BoF is to create an
IRTF research group.  Nevertheless, possible alternative goals (i.e. IETF
activity) can be discussed.

A BoF mailing list has been created (see below).

Thanks,

PF


-

Path-decoupled Signaling for Data (offpath)
 

To be held at the 66th (Montreal) IETF, under the auspices of the Transport
Area (Lars Eggert), and coordinated with the IRTF (Aaron Falk).

 

BoF Chair:  TBD

  

BoF Initiators: 
Paul Francis and Saikat Guha (both of Cornell University)
{francis,saikat}@cs.cornell.edu

 

General Discussion: [EMAIL PROTECTED] To Subscribe:
[EMAIL PROTECTED] In Body: (un)subscribe
Archive: http://www.ietf.org/mail-archive/web/off-path-bof/index.html

This page at:  http://www.cs.cornell.edu/people/francis/offpath

 

Purpose of BoF:
Gauge interest in creating an IRTF research group on this topic.

  

Synopsis: 

Path-decoupled (or off-path) signaling, in the form of SIP, has proven to be
a very powerful mechanism for facilitating media connection establishment
between hosts.  It provides friendly naming, discovery, user mobility,
authentication, transport and application negotiation, and even NAT/FW
traversal for UDP and, more recently, TCP.  Furthermore, it is network
independent, working equally well for IPv4 and IPv6.  This set of features,
however, would be attractive to all types of data sessions, not just media.
The purpose of this BoF is to gauge interest in the design of off-path
signaling (probably but not necessarily using SIP) for establishing all kinds
of non-public-server data sessions.  A positive outcome of this BoF would be
the formation of an IRTF group, though other outcomes will of course be
considered.

We are particularly interested in models of off-path signaling that improve
security beyond today's address/port/deep-inspection model.  The use of
path-decoupled signaling gives both the middle and the ends the opportunity
to assert policy and negotiate an acceptable session.  We would like to
explore cases where the off-path signaling operates alone (i.e. NAT/FW
traversal with legacy NAT/FW's), as well as where it operates in conjunction
with subsequent path-coupled signaling (either in-band or out-of-band).
Considering that an application can always lie about what it is, we would
also like to explore how to couple the signaling primitives with emerging OS
security features (i.e. trusted computing platforms).  Beyond security, we
would like to explore the use of off-path signaling for such features as user
and host mobility, transport negotiation (i.e. TCP versus SCTP), anycast and
multicast, billing, and time-delayed communications (messaging). 

The ultimate goal here is that some or all of these features become part of
the standard sockets API of typical OS's, and that infrastructure support for
the signaling becomes ubiquitous (in the same sense that DNS is ubiquitous).
This would allow application developers, security vendors (middlebox and
endhost), users, and network administrators to converge on a unified method
of naming and connection establishment over the Internet.  (By contrast,
naming and connection establishment through NATs and firewalls today is ad
hoc and usually application specific, variously involving email, IM services,
dynamic DNS services, manual configuration of ports, and so on.) 

Of the IETF working groups, this BoF is most closely aligned with the nsis WG
in the Transport Area, especially the NAT/FW NSLP.  The following gives an
example of how an off-path signaling protocol would work in conjunction with
the NAT/FW NSLP.   This is only an example...there are other approaches and
variants on this example. 

Say an application wishes to establish a TCP connection with a peer, where
both peers are behind NAT/FW's.  The initiating peer off-path signals a
connection request.  The request contains the application name, user names,
authentication information (Certs or Diameter), and information about the
preferred transport (TCP, SSL, IPSec, etc.).  The off-path request is checked
by the initiating endhost's policy, and then flows through policy boxes
representing the initiator's and the recipient's networks.  This gives both
sides an opportunity to reject the request, or to request different transport
or security characteristics, or to accept and pass on the request as-is.
Note that the policy boxes could both be far from either ends' physical
network, thus not revealing the IP addresses of either end until the

Re: Questions about draft-lear-iana-no-more-well-known-ports-00.txt

2006-06-06 Thread Mark Andrews

  From: Jeffrey Hutzelman [mailto:[EMAIL PROTECTED] 
 
  (2) As I understand it, for ports above 1024, the IANA does 
  _not_ assign
  values - it just registers uses claimed by others.  Eliminating
  well-known ports eliminates any assignment role, and 
  leaves us with
  just a registry of what people have claimed.  Note that this means
  there is no mechanism which prevents the same number from being
  registered by more than one registry.
 
 So how is a server to support two services that happen to have chosen the
 same port number?
 
 I think that what is indicated here is that service discovery by port number
 is broken and no longer scalable.
 
 There are only 65536 possible port numbers, and we expect to see rather more
 Web Services become established. We have 10,000 registrations already. This
 is a failed discovery strategy.
 
 The scalable discovery strategy for the Internet is to use SRV records. For
 this to be possible it has to become as easy to register an SRV code point as
 it is currently to register a port. It makes no sense for there to be more
 restrictions on issue of the unlimited resource than on the limited one.
 
 Getting an SRV code point registered is not a trivial task, and there is in
 fact a parallel non-IANA registry already operating because most people
 cannot be bothered to deal with the IETF process. It should not be necessary
 to write an RFC or go through the IESG to register a code point. The implicit
 assumption here is that the IESG controls the Internet through control of
 discovery apparatus, a silly notion that the other Internet standards bodies
 are not going to accept.

There was never any intention of making getting SRV labels hard.

The reason for the RFC was to handle *existing* protocols and to
handle protocols which wished to use the fields in a non standard
manner.

Retrofitting SRV usage into an existing protocol is not straightforward.
http://www.watersprings.org/pub/id/draft-andrews-http-srv-01.txt
is an attempt to retrofit SRV into HTTP.

Designing a new protocol to use SRV should be straightforward.
I would expect it to be about one paragraph in the new protocol's
description.
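For illustration, the DNS side of that paragraph is a single RFC 2782 record; the service name, port, and hostnames below are placeholders, not a registered service:

```
; _service._proto.name          TTL    class  SRV  prio weight port  target
_newproto._tcp.example.com.     86400  IN     SRV  10   5      8765  host.example.com.
```

A client looks up the SRV owner name, picks a target by priority and weight, and connects to the advertised port, so no fixed port assignment is needed.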

 If the W3C or OASIS develops a spec for a Web service it makes no sense for
 them to then be required to write an RFC, and for the group to be required to
 grovel to the IESG and, worse, be held captive by the IESG work schedule. Not
 going to happen, nor does it. People who want SRVs cut in those groups just
 do it.
 
 
  I do _not_ support the introduction of a charging model, for a couple of
  reasons.  First, I don't want to see port numbers become a politicized
  commodity, like IP address space and domain names have.
 
 I think this is a very bad idea at this stage. At this point introducing
 charging is more likely to lead to speculation and premature exhaustion of
 the supply.
 
 
  (*) Some years ago, there was a period of time lasting several months when
  users of a particular large network provider were unable to communicate
  with CMU, because that provider had usurped 128.2/16 for some private use
  within its network.
 
 This particular weakness with the allocation of IPv4 addresses is likely to
 be exercised with increasing frequency when the IPv4 address store begins to
 reach exhaustion.
 
 One can well imagine that a large ISP operating in Asia might decide that
 rather than pay an exorbitant amount to buy another 4 million addresses it
 might just divvy up net 18 (18... = MIT) by private agreement with its
 neighboring ISPs.
 
 The bad effects resulting from such practices hardly need to be stated. If
 we are lucky people will go for the Class D and Class E space first. But
 that is going to upset some people (Mbone users for instance).
 
 
 The governance mechanisms of the Internet assume a degree of authoritarian
 control that simply does not exist. It is goodwill rather than authority
 that keeps the Internet together.
 
 My theory (which I make no apologies for acting on) is that Vint Cerf and
 Jon Postel intended the mechanisms set up to control and allocate to act as
 the Gordian knot.
 
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: [EMAIL PROTECTED]



Re: Best practice for data encoding?

2006-06-06 Thread Robert Sayre

On 6/6/06, Hallam-Baker, Phillip [EMAIL PROTECTED] wrote:


At this point XML is not a bad choice for data encoding. I would like to see
the baroque SGML legacy abandoned (in particular, eliminate DTDs entirely).
XML is not a perfect choice, but it is not a bad one, and done right can be
efficient.


JSON (http://www.json.org) seems like a better fit for the use cases
discussed here. You get better data types, retain convenient ASCII
notation for Unicode characters, and lose lots of XML baggage.

draft-crockford-jsonorg-json-04.txt is in the RFC queue, headed for
informational status.
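The data-type point is visible in a few lines; a small sketch using Python's stdlib json module (nothing here is part of the draft itself):

```python
import json

# JSON carries numbers, booleans, null, arrays, and objects natively;
# a plain XML encoding reduces all of these to element text plus
# out-of-band schema information.
record = {"port": 8765, "secure": True, "aliases": ["a", "b"], "note": None}
wire = json.dumps(record)
print(wire)
assert json.loads(wire) == record  # lossless round-trip of typed data
```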

--

Robert Sayre



Re: Last Call: 'Matching of Language Tags' to BCP (draft-ietf-ltru-matching)

2006-06-06 Thread JFC (Jefsey) Morfin

I noted the following typos:
- there is no Figure 1
- Part 4.3 - typo in "private agreement"
- Appendix A - typo in "Acknowledgments"
- Appendix A - some names seem to be missing. I could quote a small 
score of them?

jfc




Re: Best practice for data encoding?

2006-06-06 Thread Dean Anderson
Some ASN.1 compilers have had bugs; however, this does not indicate that
ASN.1 is bug-prone. Just the opposite: once you have a secure compiler, you can
be assured that certain kinds of bugs don't exist.

Further, in the few cases where bugs were found, once the bug is fixed in
the ASN.1 compiler, the application just needs to be relinked (or given a new
shared library) with the new generated runtime.  And any other application which
used a vulnerable runtime, but for which the vulnerability was unknown, is also
fixed.  So users of a compiled runtime benefit from the usage experience of the
entire group.

Building tools that make trustable runtimes is a good approach to certain
classes of security problems. You can't get this with hand-written protocol
encode/decode layers.

--Dean

On Mon, 5 Jun 2006, Iljitsch van Beijnum wrote:

 I was wondering:
 
 What is considered best practice for encoding data in protocols  
 within the IETF's purview?
 
  Traditionally, many protocols use text but obviously this doesn't
  really work for protocols that carry a lot of data, because text
  lacks structure so it's hard to parse. XML and the like are
  text-based and structured, but take huge amounts of code and
  processing time to parse (especially on embedded CPUs that lack the
  more advanced branch prediction available in the fastest desktop and
  server CPUs). Then there is the ASN.1 route, but as we can see with
  SNMP, this also requires lots of code and is very (security) bug
  prone. Many protocols use hand-crafted binary formats, which has
  the advantage that the format can be tailored to the application but
  it requires custom code for every protocol and it's hard to get
  right, especially the simplicity/extensibility tradeoff.
 
 The ideal way to encode data would be a standard that requires  
 relatively little code to implement, makes for small files/packets  
 that are fast to process but remains reasonably extensible.
 
 So, any thoughts? Binary XML, maybe?
 

-- 
Av8 Internet   Prepared to pay a premium for better service?
www.av8.net faster, more reliable, better service
617 344 9000   





66th IETF - Visa Requirements for Canada

2006-06-06 Thread IETF Secretariat
66th IETF Meeting
Montreal, Quebec, Canada
July 9-14, 2006
Hosted by Ericsson (http://www.ericsson.com)


Don’t forget to check visa requirements for entering Canada. You can get more
information at:
 
http://www.ietf.org/meetings/visa_requirements.html or
http://www.cic.gc.ca/english/visit/visas.html

For more hotel information and meeting registration: 
http://www.ietf.org/meetings/IETF-66.html 


___
IETF-Announce mailing list
IETF-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf-announce


Last Call: 'Matching of Language Tags' to BCP (draft-ietf-ltru-matching)

2006-06-06 Thread The IESG
Note:  there was a previous last call request sent for a status of Proposed
Standard; this document is, however, intended for BCP.

The IESG has received a request from the Language Tag Registry Update WG to 
consider the following document:

- 'Matching of Language Tags'
   draft-ietf-ltru-matching-14.txt as a BCP

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action.  Please send any comments to the
iesg@ietf.org or ietf@ietf.org mailing lists by 2006-06-20.

The file can be obtained via
http://www.ietf.org/internet-drafts/draft-ietf-ltru-matching-14.txt

