At Thu, 7 Feb 2008 10:34:42 -0500 (EST),
Leichter, Jerry wrote:
> | Since (by definition) you don't have a copy of the packet you've lost,
> | you need a MAC that survives that--and is still compact. This makes
> | life rather more complicated. I'm not up on the most recent lossy
> | MACing literature, but I'm unaware of any computationally efficient
> | technique which has a MAC of the same size with a similar security
> | level. (There's an inefficient technique of having the MAC cover all
> | 2^50 combinations of packet loss, but that's both prohibitively
> | expensive and loses you significant security.)
> My suggestion for a quick fix:  There's some bound on the packet loss
> rate beyond which your protocol will fail for other reasons.  If you
> maintain separate MAC's for each k'th packet sent, and then deliver k
> checksums periodically - with the collection of checksums itself MAC'ed,
> a receiver should be able to check most of the checksums, and can reset
> itself for the others (assuming you use a checksum with some kind of
> prefix-extension property; you may have to send redundant information
> to allow that, or allow the receiver to ask for more info to recover).

So, this issue has been addressed in the broadcast signature context,
where you do a two-stage hash-and-sign reduction (cf. [PG01]), but
this only really works because hashes are a lot more efficient than
signatures. I don't see why it helps with MACs.
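For reference, the two-stage reduction looks roughly like this (a sketch only; HMAC stands in for the public-key signature, and the key and function names are invented):

```python
import hashlib
import hmac

SIGN_KEY = b"demo-signing-key"  # stand-in: HMAC in place of a real signature

def sign_block(packets):
    # Stage 1: hash every packet individually (cheap, per packet).
    digests = [hashlib.sha256(p).digest() for p in packets]
    # Stage 2: one "signature" over the hash table (expensive, per block).
    table = b"".join(digests)
    sig = hmac.new(SIGN_KEY, table, hashlib.sha256).digest()
    return digests, sig

def verify_packet(pkt, idx, digests, sig):
    # As long as the hash table and signature arrive, any surviving
    # data packet can be verified; lost packets cost nothing.
    table = b"".join(digests)
    if not hmac.compare_digest(sig,
            hmac.new(SIGN_KEY, table, hashlib.sha256).digest()):
        return False
    return hashlib.sha256(pkt).digest() == digests[idx]
```

The point is the amortization: one signature covers a whole block of packets, and packet loss only costs you the lost data, not verifiability of the rest. With MACs there is no comparable asymmetry to exploit.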

> Obviously, if you *really* use every k'th packet to define what is in
> fact a substream, an attacker can arrange to knock out the substream he
> has chosen to attack.  So you use your encryptor to permute the
> substreams, so there's no way to tell from the outside which packet is
> part of which substream.  Also, you want to make sure that a packet
> containing checksums is externally indistinguishable from one containing
> data.  Finally, the checksum packet inherently has higher - and much
> longer-lived - semantic value, so you want to be able to request that
> *it* be resent.  Presumably protocols that are willing to survive data
> loss still have some mechanism for control information and such that
> *must* be delivered, even if delayed.
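For concreteness, the substream bookkeeping described above might be sketched as follows (all keys, names, and parameters are invented; this ignores the checksum-resend machinery):

```python
import hashlib
import hmac

MAC_KEY = b"demo-mac-key"    # per-session MAC key (invented for the sketch)
PERM_KEY = b"demo-perm-key"  # keyed PRF hiding substream membership
K = 4                        # number of substreams

def substream(seq):
    # Pseudorandomly assign packet `seq` to one of K substreams, so an
    # outside attacker can't tell which packets share a MAC and knock
    # out a chosen substream.
    prf = hmac.new(PERM_KEY, seq.to_bytes(8, "big"), hashlib.sha256).digest()
    return prf[0] % K

class Sender:
    def __init__(self):
        # One running MAC per substream, domain-separated by index.
        self.macs = [hmac.new(MAC_KEY, str(i).encode(), hashlib.sha256)
                     for i in range(K)]

    def send(self, seq, payload):
        self.macs[substream(seq)].update(payload)
        return payload  # the data packet itself goes out unchanged

    def checksum_packet(self):
        # Periodically emit all K tags, themselves MAC'ed as one unit.
        tags = b"".join(m.digest() for m in self.macs)
        outer = hmac.new(MAC_KEY, tags, hashlib.sha256).digest()
        return tags, outer
```

A receiver mirroring this state can verify every substream for which it received all packets, and losing a packet invalidates only that packet's substream tag, not the other K-1.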

This basically doesn't work for VoIP, where latency is a real issue.


[PG01] Philippe Golle, Nagendra Modadugu: Authenticating Streamed Data
in the Presence of Random Packet Loss. NDSS 2001.

The Cryptography Mailing List