Re: re the plenary discussion on partial checksums

2003-07-21 Thread Fred Baker
At 02:05 PM 7/16/2003 -0700, Karl Auerbach wrote:
The last time I saw a comparison of checksum algorithm strengths was back
in the OSI days, when the IP checksum was compared to the OSI Fletcher
checksum (my memory is that the IP checksum came in second).
um, well, it was certainly behind the Fletcher checksum; the IP and TCP 
checksums are trivial to beat - you can literally swap any two 16-bit words 
without fear. The Fletcher checksum isn't as strong as a CRC, but I did see 
its strength compared favorably with a CRC. The XNS checksum (one's 
complement sum, like IP, but with rotation) fell somewhere in between.

As I understood the discussion, it wasn't so much about getting a better 
error check as it was about adding FEC, however. Even a 32-bit CRC loses its 
value above 10^5 bits, and we're talking about 10^5 *bytes*.
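To make the word-swap weakness concrete, here is a quick sketch; the Fletcher variant below is a simplified word-oriented version for illustration, not the byte-oriented OSI algorithm:

```python
def internet_checksum(words):
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)  # fold the carry back in
    return (~s) & 0xFFFF

def fletcher_like(words):
    """Simplified Fletcher-style sum over 16-bit words; the running
    second sum makes the result position-sensitive."""
    a = b = 0
    for w in words:
        a = (a + w) % 0xFFFF
        b = (b + a) % 0xFFFF
    return (b << 16) | a

data    = [0x1234, 0xABCD, 0x0F0F, 0x5555]
swapped = [0xABCD, 0x1234, 0x0F0F, 0x5555]  # first two words exchanged

# Addition is commutative, so the Internet checksum can't see the swap...
assert internet_checksum(data) == internet_checksum(swapped)
# ...but the position-sensitive Fletcher-style sum can.
assert fletcher_like(data) != fletcher_like(swapped)
```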

At 04:34 PM 7/16/2003 -0400, Bill Strahm wrote:
Why, oh WHY would I want to receive a known corrupted packet ?
usually it is something about data taking multiple seconds to arrive over 
a link with a very high BER, where including FEC in the application data 
would allow the system to recover enough useful information to make it 
worthwhile. Think interplanetary space. There is also discussion of 64K 
MTUs in some sectors, which I tend to think is misguided (I understand 
the reasons for larger packets, or at least I think I do, but I think the 
trade-offs don't justify them).  




Re: re the plenary discussion on partial checksums

2003-07-17 Thread Randy Bush
 Why, oh WHY would I want to receive a known corrupted packet ?
 why oh why would you ever want to talk with someone over a phone that
 occasionally clicked or popped?

and why would i mind cheese with holes in it?  i don't care about
cheese or voice phones.  i care about internet data packets.

randy




Re: re the plenary discussion on partial checksums

2003-07-17 Thread Carsten Bormann
The biggest questions I have are:

- where to put this bit?
Right now, the *only* way an L2 with varied service levels can derive 
what service levels to use for best-effort traffic is to perform a 
layer violation.  Continuing this tradition, the bit would be:

less_than_perfect_L2_error_detection = (ip.protocol == IPPROTO_UDPLITE)

Additionally, the L3-L2 mapping can layer-violate up into the UDP-Lite 
header and extract information about unequal error protection it may 
need, i.e., apply more protection to the data covered by the UDP-Lite 
checksum than to the uncovered data.  Of course, the same thing can be 
done for unequal error detection (apply good L2 error detection to 
the part covered by the UDP-Lite checksum).
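For concreteness, the layer violation might be sketched like this (a sketch only, assuming IPv4 and the draft's header layout, where the UDP-Lite length field is reinterpreted as checksum coverage; the function name is mine):

```python
import struct

IPPROTO_UDPLITE = 136  # the UDP-Lite protocol number

def bytes_needing_strong_l2_protection(ip_header, transport):
    """How much of this packet the L2 mapping should cover with good
    error detection -- a sketch of the layer violation, not a spec."""
    if ip_header[9] != IPPROTO_UDPLITE:         # IPv4 protocol field
        return len(ip_header) + len(transport)  # protect everything
    # In UDP-Lite, the 16-bit field at offset 4 is the checksum coverage.
    coverage = struct.unpack('!H', transport[4:6])[0]
    if coverage == 0:  # zero means the whole datagram is covered
        coverage = len(transport)
    return len(ip_header) + coverage
```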

Ugly?  Yes.
Unfortunately, the alternative is to make L3-L2 mappings a first-class 
citizen in the IP architecture and provide a way for an application to 
provide information that could help/influence this mapping.  As RSVP 
has demonstrated, this is hairy stuff (even ignoring multicast).

- are there unintended consequences of doing this that we can foresee?
Some issues raised at the plenary were:

-- the old NFS-on-UDP-without-checksums story.  Folks using UDP-lite 
for non-error-tolerant data get what they deserve.

-- a general feeling that nobody knows what will blow up when L2s do 
less error detection.  Allowing this only for clearly defined cases 
(like ip.protocol == IPPROTO_UDPLITE) might help to allay these 
well-founded fears.

Gruesse, Carsten

PS.: those who don't know what all this is good for might want to read 
up on voice codecs like GSM 06.10 that have data fields for excitation 
energy -- generally, it is better to have a slightly corrupted packet 
with new values for this arrive than to error-conceal an erasure.
Of course, once this starts to get implemented, there are other 
possible applications, e.g. application-layer FEC.




Re: re the plenary discussion on partial checksums

2003-07-17 Thread Carsten Bormann
How would an app know to set this bit? The problem is that different 
L2s will have different likelihoods of corruption; you may decide that 
it's safe to set the bit on Ethernet, but not on 802.11*.
Aah, there's the confusion.  The apps we have in mind would think that 
it is pointless (but harmless) to set the bit on Ethernet, but would be 
quite interested in setting it on 802.11*.

This is not "is my data safe even if I give up some L2 error 
detection?", this is "L2 error detection destroys data that I could 
salvage".
Don't think "NFS on bad Ethernet cards" (yes, I'm old enough to have 
been hit by this, too!), think "conversational media data on wireless 
links that give you 35 % more bandwidth if you give up some error 
protection" (35 % was Pekka Pessi's figure on the plenary jabber, IIRC).

Gruesse, Carsten

*) e.g., in order to salvage half of a video packet that got an error 
in the middle.  (Reducing packet sizes to achieve a similar effect is 
highly counterproductive on media with a very high per-packet overhead 
such as 802.11.)  Of course, 802.11 has retransmissions, so maybe this 
is a bad example, but it does illustrate the point.




Re: re the plenary discussion on partial checksums

2003-07-17 Thread Jonathan Hogg
On 17/7/03 8:30, bill wrote:

 I would have a hard time taking an IP header bit and making it the "Do
 not drop this packet in the presence of a bit error somewhere in the
 frame" from layer 2 - layer 3.  Don't think it is a good idea.

What if that bit got corrupted?

Jonathan





Re: re the plenary discussion on partial checksums

2003-07-17 Thread Iljitsch van Beijnum
On woensdag, jul 16, 2003, at 21:59 Europe/Amsterdam, Keith Moore wrote:

I'm not sure what the problem is here:

- UDP checksums are optional
Not in IPv6. If this is the only thing we need at the transport layer 
then we might want to change this back to the IPv4 behavior.

- IPv6 could define an option for IP header checksum
  (could be applicable to IPv4 also, if you want a stronger checksum
   for the header)
It would make sense to make this more general. IIRC, GSM uses a scheme 
where some bits must be transferred correctly, some bits may have errors 
but are important enough to protect with forward error correction, and 
some other bits are not important, so they don't get any protection at 
all. Now obviously we don't want to carry FEC overhead over our 1 in 
10^8 bit error links, so there must be some way to map application 
requirements to different link layers for the same packet: some links 
will add the necessary FEC for the important part of the data on the 
link and then strip it off again, while other links either simply turn 
off the CRC or do nothing (as the bit error rate is so minimal that 
turning off the CRC on a per-packet basis isn't worth implementing). 
This is what I was getting at (way too cryptically) at the microphone 
yesterday.

Then an application-specific CRC over the part that must not be 
corrupted can be checked at the receiver. Such a mechanism could also 
address the problem that our current checksums, or even link CRCs, 
aren't strong enough for much larger packets.

- whether L2 drops packets whose checksum fails is an L2 matter;
  the ability to turn this on or off is an L2 feature
Obviously not. If I'm doing A/V over wifi I would probably want the 
802.11b MAC to adopt multicast behavior, where it sends the packet at a 
fixed rate without link layer retransmits, and then have packets 
delivered to the other side even if there are errors; but if I'm also 
doing TCP over this link I would want that traffic to be treated as 
usual. So this must be a per-packet mechanism.

so it seems like what we need is a bit in the IP header to indicate 
that
L2 integrity checks are optional, and to specify for various kinds of
IP-over-FOO how to implement that bit in FOO.  and maybe that bit could
go in the IP option to provide a stronger checksum than normally exists
in the IP header (so that the header, at least, is protected)
I think the best way to do this is using diffserv to enable/disable the 
special link handling. If special link handling is required then the 
packet can be further inspected to see how much of it should receive 
additional FEC protection.

Yes, the IP header (but certainly not always just that) would have to 
arrive uncorrupted. E2e dictates that the receiving host checks this, 
but we probably want to do it after special links too.

Interesting aspect: it should be possible to make this work with IPsec 
encryption but not authentication, but not so well with ciphers in CBC 
mode. A stream cipher would be better here.




Re: re the plenary discussion on partial checksums

2003-07-17 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Iljitsch van Beijnum writes:


Interesting aspect: it should be possible to make this work with IPsec 
encryption but not authentication, but not so well with ciphers in CBC 
mode. A stream cipher would be better here.


Here is the Security Considerations text that Gorry Fairhurst has
inserted into draft-ietf-tsvwg-udp-lite-01.txt to satisfy my DISCUSS:

---

Security Considerations

The security impact of UDP-Lite is related to its interaction with
authentication and encryption mechanisms. When the partial checksum option
of UDP-Lite is enabled, the insensitive portion of a packet may change in
transit. This is contrary to the idea behind most authentication mechanisms:
authentication succeeds if the packet has not changed in transit. Unless
authentication mechanisms that operate only on the sensitive part of packets
are developed and used, authentication will always fail for UDP-Lite packets
where the insensitive part has been damaged.

The IPsec integrity check (Encapsulating Security Payload, ESP, or
Authentication Header, AH) is applied (at least) to the entire IP packet
payload. Corruption of any bit within the protected area will then result
in the IP receiver discarding the UDP-Lite packet.

Encryption (e.g. IPSEC ESP with payload, but no integrity check)
may be used.  Note that omitting an integrity check can, under
certain circumstances, compromise confidentiality [Bell98].

If a few bits of an encrypted packet are damaged, the decryption
transform will typically spread errors so that the packet becomes
too damaged to be of use.  Many encryption transforms today exhibit
this behavior.  There exist encryption transforms, stream ciphers,
which do not cause error propagation.  Proper use of stream ciphers
can be quite difficult, especially when authentication-checking is
omitted [BB01].  In particular, an attacker can cause predictable
changes to the ultimate plaintext, even without being able to
decrypt the ciphertext.
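[That last point, that an attacker can cause predictable plaintext changes without ever seeing the key, is easy to illustrate with a toy XOR keystream; this is an illustration only, not a real cipher:]

```python
import hashlib

def keystream(key, n):
    """Toy keystream from iterated SHA-256 (illustration only, NOT a
    real stream cipher)."""
    out, block = b'', key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b'shared secret'
plaintext = b'amount=0001'
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# The attacker guesses the plaintext layout but never sees the key:
# XORing a difference into the ciphertext flips the same plaintext bits.
delta = xor(b'amount=0001', b'amount=9999')
forged = xor(ciphertext, delta)
assert xor(forged, keystream(key, len(forged))) == b'amount=9999'
```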


--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)





Re: re the plenary discussion on partial checksums

2003-07-17 Thread Keith Moore

] What bit is needed ?
] 
] Again, this is a layer 2 property.  If you want to receive layer 2
] frames with errors in them, just get a Layer 2 device and tell it not
] to do the checksum calculation (much like you put an Ethernet NIC into
] Promiscuous mode so it doesn't drop all of the frames not destined for
] it).

a) some apps can deal with lossage, some not.  so you want to control
   this on a per-packet basis
b) the L2 link in question may not be directly attached to the host
   that wants to control this - so you need a bit in the header that
   lets the sending host communicate to the link.
] 
] As for asking for end-end characterization of this, I think it is
] crazy... The bit can be in your IP header address fields(and on small
] ACK packets, statistically it WILL be there) so the packet wouldn't have
] even made it to your host, 

You have to make sure there's adequate protection for the IP header
(and the packet length) even when the payload isn't protected.  If the IP 
header is corrupted or the packet is a different length than when transmitted
on the link, it still has to be dropped.
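One way to picture this: an L2 that honors the bit could still compute its check over just the covered prefix, dropping the frame only when that prefix is damaged. A sketch, with an invented trailer layout and a fixed coverage length (real coverage would come from L3):

```python
import zlib

COVERED = 20  # e.g. the IPv4 header; an assumption for this sketch

def make_frame(packet):
    """Append a CRC computed over only the covered prefix."""
    return packet + zlib.crc32(packet[:COVERED]).to_bytes(4, 'big')

def accept(frame):
    """Accept when the covered prefix survived; uncovered payload
    bytes may be corrupted without causing a drop."""
    body, crc = frame[:-4], int.from_bytes(frame[-4:], 'big')
    return zlib.crc32(body[:COVERED]) == crc

frame = make_frame(bytes(range(64)))
damaged_payload = frame[:40] + b'\xff' + frame[41:]  # error past byte 20
damaged_header  = b'\xff' + frame[1:]                # error in the header

assert accept(frame)
assert accept(damaged_payload)     # delivered despite the bit error
assert not accept(damaged_header)  # header damage still drops the frame
```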

] it can be in the TCP port fields so it won't
] be received on the correct connection... 

I don't think this will ever be used with TCP.   This is for apps that don't
want to wait for retransmission of damaged packets, so they probably don't
want to wait for retransmission of lost packets either.

] I would have a hard time taking an IP header bit and making it the "Do
] not drop this packet in the presence of a bit error somewhere in the
] frame" from layer 2 - layer 3.  Don't think it is a good idea.

I don't know where the bit comes from - there certainly aren't many bits left
in IPv4.  It could be a new protocol number, but that's ugly in a different way.
(and dare I say it, it won't work through existing NATs)



Re: re the plenary discussion on partial checksums

2003-07-17 Thread Masataka Ohta
Gruesse, Carsten;

  How would an app know to set this bit? The problem is that different 
  L2s will have different likelihoods of corruption; you may decide that 
  it's safe to set the bit on Ethernet, but not on 802.11*.
 
 Aah, there's the confusion.  The apps we have in mind would think that 
 it is pointless (but harmless) to set the bit on Ethernet, but would be 
 quite interested in setting it on 802.11*.

If the application is voice and the datalink is SONET, it
may be fine to ignore some errors.

However, the problem, in general, is that packets are often
corrupted so badly that they become almost meaningless, even
if the content is voice.

Still, it may be possible to design a datalink protocol that
ignores a few bits of error in packets, but no more.

But then it is so easy to add ECC to fix those errors that no
sane person would design such a protocol.

 in the middle.  (Reducing packet sizes to achieve a similar effect is 
 highly counterproductive on media with a very high per-packet overhead 
 such as 802.11.)  Of course, 802.11 has retransmissions, so maybe this 
 is a bad example, but it does illustrate the point.

The problem with 802.11 is that packets are often badly corrupted
by collisions (from hidden terminals).

 *) e.g., in order to salvage half of a video packet that got an error 

You need an extra checksum, which consumes bandwidth.

Note that, in the multicast case, you can't adapt to each receiver
individually, so all the receivers suffer the bandwidth loss.

Masataka Ohta



Re: re the plenary discussion on partial checksums

2003-07-17 Thread Iljitsch van Beijnum
On donderdag, jul 17, 2003, at 14:24 Europe/Amsterdam, Keith Moore 
wrote:

] I would have a hard time taking an IP header bit and making it the "Do
] not drop this packet in the presence of a bit error somewhere in the
] frame" from layer 2 - layer 3.  Don't think it is a good idea.

I don't know where the bit comes from - there certainly aren't many
bits left in IPv4.  It could be a new protocol number, but that's ugly
in a different way. (and dare I say it, it won't work through existing
NATs)
Again: this seems like a perfect job for diffserv.




Re: re the plenary discussion on partial checksums

2003-07-17 Thread John Stracke
Jonathan Hogg wrote:

On 17/7/03 8:30, bill wrote:
 

I would have a hard time taking an IP header bit and making it the "Do
not drop this packet in the presence of a bit error somewhere in the
frame" from layer 2 - layer 3.  Don't think it is a good idea.
   

What if that bit got corrupted?
 

This is a good point.  If an L2 error can make a normal "discard on 
errors" packet come through marked as "tolerate errors", then 
implementing this feature can introduce errors in existing applications.

--
/==\
|John Stracke  |[EMAIL PROTECTED]   |
|Principal Engineer|http://www.centive.com |
|Centive   |My opinions are my own.|
|==|
|Where's your sense of adventure? Hiding under the bed.|
\==/




Re: re the plenary discussion on partial checksums

2003-07-16 Thread John Stracke
Keith Moore wrote:

so it seems like what we need is a bit in the IP header to indicate that
L2 integrity checks are optional, and to specify for various kinds of
IP-over-FOO how to implement that bit in FOO.
 

How would an app know to set this bit? The problem is that different L2s 
will have different likelihoods of corruption; you may decide that it's 
safe to set the bit on Ethernet, but not on 802.11*.  And, in general, 
the app doesn't know all of the L2s that may be involved when it sends a 
packet.

--
/==\
|John Stracke  |[EMAIL PROTECTED]   |
|Principal Engineer|http://www.centive.com |
|Centive   |My opinions are my own.|
|==|
|Linux: the Unix defragmentation tool. |
\==/




Re: re the plenary discussion on partial checksums

2003-07-16 Thread Bill Strahm
Ok, I have to ask a silly question (not like that would be a first on this list)

Why, oh WHY would I want to receive a known corrupted packet ?

Are we talking about someone thinking they can eke out 1% more performance
because their phy/mac can cut over immediately rather than wait for the packet
and verify the checksum ??? (or compute it on the sending side)

I guess I don't see the benefit; rather than a hardware L2 check, you 
rely on something higher up the stack (including an L7 protocol) to fail
a check and drop the frame there ???

I wish I had been there to see the discussion

Bill


On Wed, Jul 16, 2003 at 04:21:47PM -0400, John Stracke wrote:
 Keith Moore wrote:
 
 so it seems like what we need is a bit in the IP header to indicate that
 L2 integrity checks are optional, and to specify for various kinds of
 IP-over-FOO how to implement that bit in FOO.
   
 
 How would an app know to set this bit? The problem is that different L2s 
 will have different likelihoods of corruption; you may decide that it's 
 safe to set the bit on Ethernet, but not on 802.11*.  And, in general, 
 the app doesn't know all of the L2s that may be involved when it sends a 
 packet.
 
 -- 
 /==\
 |John Stracke  |[EMAIL PROTECTED]   |
 |Principal Engineer|http://www.centive.com |
 |Centive   |My opinions are my own.|
 |==|
 |Linux: the Unix defragmentation tool. |
 \==/
 
 



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Karl Auerbach
On Wed, 16 Jul 2003, Keith Moore wrote:

 so it seems like what we need is a bit in the IP header to indicate that
 L2 integrity checks are optional

A lot of folks seem to forget that from the point of view of IP L2
includes the busses between memory and the L2 network interface.  There
have been more than a few recorded cases where packet errors were
introduced as the packet flowed in or out of memory, unprotected by link
CRCs.

To my way of thinking we don't need a bit in the IP header, we need a bit
in the heads of implementors to remind them that relying on link-by-link
protection can be dangerous even if the links have strong CRCs.

 ... IP option to provide a stronger checksum than normally exists

The last time I saw a comparison of checksum algorithm strengths was back 
in the OSI days, when the IP checksum was compared to the OSI Fletcher 
checksum (my memory is that the IP checksum came in second).

--karl--





Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
  so it seems like what we need is a bit in the IP header to indicate
  that L2 integrity checks are optional
 
 A lot of folks seem to forget that from the point of view of IP L2
 includes the busses between memory and the L2 network interface. 
 There have been more than a few recorded cases where packet errors
 were introduced as the packet flowed in or out of memory, unprotected
 by link CRCs.

For apps that tolerate lossage (or more precisely, for apps that work
better in the face of some transmission errors than they do if all
packets with transmission errors are dropped), it doesn't matter whether
the errors occur in the memory-to-interface link or somewhere else -
they'll deal with the errors no matter where they occur.  

Of course, the apps that can't tolerate lossage won't set the "lossage
is okay" bit, and they'll continue to expect that the packets that do
arrive, arrive intact.   For one particular L2 technology X this might
simply mean that packets that don't have that bit set, but do have
errors, are dropped.   For another L2 technology Y it might mean that if
that bit is not set then the IP-over-Y spec will require FEC or
link-level retry or both to make sure that those packets have a
reasonable probability of getting there intact.

 To my way of thinking we don't need a bit in the IP header, we need a
 bit in the heads of implementors to remind them that relying on
 link-by-link protection can be dangerous even if the links have strong
 CRCs.

Actually, it seems like we need a bit in the heads of people who don't
understand that 

- some kinds of links have inherently high error rates,
- some apps are capable of dealing with less-than-perfect data,
- adding FEC and/or link-level retry to get error rates down to the
  level we're accustomed to from wire or fiber carries with it a
  substantial penalty in bandwidth and/or delay,
- we'd like to be able to use those kinds of links with IP,
- we'd like to be able to run those apps over IP, and over those
  links, without paying the bandwidth or delay penalty for
  apps that don't need it,
- we'd like a stable spec for this so we can carve it in stone
  (er, silicon), and
- since it's going to be carved in stone (silicon) we would do well
  to get it right.

Yes, this is a change to IP, and to the IP architecture.
But it's not rocket science, and it doesn't have to affect things that
don't use it explicitly.



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
 so it seems like what we need is a bit in the IP header to indicate
 that L2 integrity checks are optional, and to specify for various
 kinds of IP-over-FOO how to implement that bit in FOO.
 
 How would an app know to set this bit? The problem is that different
 L2s will have different likelihoods of corruption; you may decide that
 it's safe to set the bit on Ethernet, but not on 802.11*.  And, in
 general, the app doesn't know all of the L2s that may be involved when
 it sends a packet.

I'm not sure that the app needs to know how much lossage to expect, or
to specify how much lossage is too much.  It just wants the bits, errors
included.  Depending on the app's needs it might dynamically adapt to
varying degrees of error by adding its own FEC, e2e retransmission,
and/or interleaving, and this probably works better than trying to have
the app either determine statically how much error it can expect or by
having the app specify to the network how much error is acceptable.

I suppose we could define a maximum error rate (say 15%) that
IP-over-FOO should be designed to provide if the "lossage okay" bit is
set.  But practically speaking I doubt it's necessary to do that:
links that are designed to support lossy traffic will already have
enough FEC or whatever to suit that kind of traffic.

The biggest questions I have are:

- where to put this bit? 

- are there unintended consequences of doing this that we can foresee?


Keith



Re: re the plenary discussion on partial checksums

2003-07-16 Thread Keith Moore
 Why, oh WHY would I want to receive a known corrupted packet ?

why oh why would you ever want to talk with someone over a phone that
occasionally clicked or popped?

why oh why would you ever want to watch a video with snow, or an
occasional missing pixel, or even an occasional missing frame?

sometimes getting a lossy packet is better than not getting one at all,
or having to wait for a retransmission.