Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Bill Stewart
 =Step 1:
 Exchange ID messages. An ID message contains the name of the tinc
 daemon which sends it, the protocol version it uses, and various
 options (like which cipher and digest algorithm it wants to use).

By name of the tinc daemon, do you mean identification information?
That data should be encrypted, and therefore in step 2.
(Alternatively, if you just mean "tincd version 1.2.3.4", that's fine.)

 Step 2:
 Exchange METAKEY messages. The METAKEY message contains the public part
 of a key used in a Diffie-Hellman key exchange.  This message is
 encrypted using RSA with OAEP padding, using the public key of the
 intended recipient.

You can't encrypt the DH keyparts using RSA unless you first exchange
RSA public key information, which the server can't do without knowing
who the client is (the client presumably knows who the server is,
so you _could_ have the client send the key encrypted to annoy MITMs.)
To make the protocol generally useful for privacy protection,
you shouldn't exchange this information unencrypted.
So do a Diffie-Hellman exchange first, then exchange any other information,
including RSA signatures on the DH keyparts.
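
A minimal sketch of that ordering -- ephemeral DH first, long-term RSA
signatures over the keyparts afterwards -- written in Python against the
third-party 'cryptography' package. This is an illustration of the suggestion
above, not tinc's actual code, and every name in it is made up:

# Sketch of "DH first, signatures after"; illustrative only.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import dh, padding, rsa

params = dh.generate_parameters(generator=2, key_size=2048)

def spki(key):
    # DER-encoded public share, as it would travel on the wire
    return key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

# 1. Each side sends only an ephemeral DH public share, in the clear.
a_dh, b_dh = params.generate_private_key(), params.generate_private_key()
a_pub, b_pub = spki(a_dh), spki(b_dh)

# 2. Both sides derive the shared secret before revealing who they are.
assert a_dh.exchange(serialization.load_der_public_key(b_pub)) == \
       b_dh.exchange(serialization.load_der_public_key(a_pub))

# 3. Only then exchange identities and RSA signatures binding both keyparts
#    (these messages would travel under a key derived in step 2).
a_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
sig = a_rsa.sign(a_pub + b_pub, pss, hashes.SHA256())
a_rsa.public_key().verify(sig, a_pub + b_pub, pss, hashes.SHA256())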






Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Matt Blaze
I wrote:
 For some recent relevant papers, see the ACM-CCS '02 paper my colleagues
 and I wrote on our JFK protocol (http://www.crypto.com/papers/jfk-ccs.ppt),
...
But of course I meant the url to be
http://www.crypto.com/papers/jfk-ccs.pdf

I don't know what I could have been thinking; I don't use the
program that produces files with that extension unless a gun is
pointed to my head.

-matt





Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Bill Stewart

 If we use RSA encryption, then both sides know their message can only
 be received by the intended recipient. If we use RSA signing, then
 both sides know the message they receive can only come from the assumed
 sender. For the purpose of tinc's authentication protocol, I don't see
 the difference, but...

  Now, the attacker chooses 0 as his DH public. This makes ZZ always
  equal to zero, no matter what the peer's DH key is.

You need to validate the DH keyparts even if you're
corresponding with the person you thought you were.
This is true whether you're using signatures, encryption, or neither.
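
For what it's worth, the check in question is tiny. A toy sketch in Python
(toy numbers and made-up names; a real exchange would use a large
standardized group):

# Reject degenerate DH public values (0, 1, p-1) and, for a safe prime
# p = 2q+1, require membership in the order-q subgroup.
def dh_public_is_valid(y, p, q):
    if not 2 <= y <= p - 2:        # throws out 0, 1 and p-1
        return False
    return pow(y, q, p) == 1       # subgroup membership check

p, q, g = 23, 11, 4                # toy safe-prime group: p = 2q + 1
assert dh_public_is_valid(pow(g, 7, p), p, q)   # an honest share passes
assert not dh_public_is_valid(0, p, q)          # the zero trick above fails
assert not dh_public_is_valid(p - 1, p, q)      # order-2 element fails
assert not dh_public_is_valid(5, p, q)          # outside the q-subgroup fails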




Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:

 On Mon, Sep 29, 2003 at 09:35:56AM -0700, Eric Rescorla wrote:
 
  Was there any technical reason why the existing cryptographic
  skeletons wouldn't have been just as good?
 
 Well, all existing authentication schemes do what they are supposed to do,
 that's not the problem. We just want one that is as simple as possible
 (so we can understand it better and implement it more easily), and which
 is efficient (both speed and bandwidth).

In what way is your protocol either simpler or more efficient
than, say, JFK or the TLS skeleton?


   And I just ripped TLS from the list.
  
  Define ripped. This certainly is not the same as TLS.
 
 Used as a skeleton. Don't ask me to define that as well.

It doesn't appear to me that you've used the TLS skeleton.
The protocol you described really isn't much more like 
TLS than it is like STS or JFK. On the other hand,
all these back and forth DH-based protocols look more
or less the same, except for some important details.


  That's not the same as doing a thorough analysis, which can take
  years, as Steve Bellovin has pointed out about Needham-Schroeder.
 
 True, but we can learn even from the bullet holes.

Again, it's important to distinguish between learning experiences
and deployed protocols. I agree that it's worthwhile to try
to do new protocols and let other people analyze them as
a learning experience. But that's different from putting
a not fully analyzed protocol into a deployed system.


  Look, there's nothing wrong with trying to invent new protocols,
  especially as a learning experience. What I'm trying to figure
  out is why you would put them in a piece of software rather 
  than using one that has undergone substantial analysis unless
  your new protocol has some actual advantages. Does it?
 
 We're trying to find that out. If we figure out it doesn't, we'll use
 one of the standard protocols.

Well, I'd start by doing a back of the envelope performance
analysis. If that doesn't show that your approach is better,
then I'm not sure why you would wish to pursue it as a
deployed solution.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Eric Rescorla
Bill Stewart [EMAIL PROTECTED] writes:

  If we use RSA encryption, then both sides know their message can only
  be received by the intended recipient. If we use RSA signing, then
  both sides know the message they receive can only come from the assumed
  sender. For the purpose of tinc's authentication protocol, I don't see
  the difference, but...
 
   Now, the attacker chooses 0 as his DH public. This makes ZZ always
   equal to zero, no matter what the peer's DH key is.
 
 You need to validate the DH keyparts even if you're
 corresponding with the person you thought you were.
 This is true whether you're using signatures, encryption, or neither.

Not necessarily.

If you're using fully ephemeral DH keys and a properly designed
key, then you shouldn't need to validate the other public share.

-Ekr


-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Monoculture

2003-09-30 Thread Richard Schroeppel
Matt Blaze:
It is probably no longer acceptable, as it was just a few years ago,
to throw together an ad-hoc authentication or key agreement protocol
based on informal obvious security properties, without a strong
proof of security and a clear statement of the model under which the
security holds.

For some recent relevant papers, see the ACM-CCS '02 paper my colleagues
and I wrote on our JFK protocol (http://www.crypto.com/papers/jfk-ccs.ppt),
and Ran Canetti and Hugo Krawczyk's several recent papers on the design
and analysis of various IPSEC key exchange protocols (especially their
CRYPTO'02 paper).

Eric Rescorla:
And I'm trying to understand why. This answer sounds a lot like NIH.

Look, there's nothing wrong with trying to invent new protocols,
especially as a learning experience. What I'm trying to figure
out is why you would put them in a piece of software rather 
than using one that has undergone substantial analysis unless
your new protocol has some actual advantages. Does it?

I imagine the Plumbers & Electricians Union must have used similar
arguments to enclose the business to themselves, and keep out unlicensed
newcomers.  No longer acceptable indeed.  Too much competition boys?

Who on this list just wrote a report on the dangers of Monoculture?

Rich Schroeppel   [EMAIL PROTECTED]
(Who still likes new things.)



Re: Can Eve repeat?

2003-09-30 Thread Dan Riley
I'm not an expert on this stuff, but I'm interested enough to chase
a few references...

Ivan Krstic [EMAIL PROTECTED] writes:
 The idea that observing modifies state is something to be approached with 
 caution. Read-only does make sense in quantum world; implementations of 
 early theoretical work by Elitzur and Vaidman achieved roughly 50% success 
 on interaction-free measurements.

Careful there--EV interaction-free measurements do *not* read the
internal state of the system measured.  The trick to the EV IFM is
that it determines the location (or existence) of a system without
interacting with the internal state of that system; a corollary is
that it derives no information about the internal state.

  The meaning of the EV IFM is that if an object changes its internal
   state [...] due to the radiation, then the method allows detection
   of the location of the object without any change in its internal
   state.
   [...]
   We should mention that the interaction-free measurements do not
   have vanishing interaction Hamiltonian. [...] the IFM can change
   very significantly the quantum state of the observed object and we
   still name it interaction free.
Lev Vaidman, Are Interaction-free Measurements Interaction
Free?, http://arxiv.org/abs/quant-ph/0006077

Intercepting QC is all about determining the internal state (e.g.
photon polarization), and AFAIK that requires becoming entangled with
the state of the particle.  EV IFM doesn't appear to provide a way
around this.

and later...
 On Fri, 26 Sep 2003 09:10:05 -0400, Greg Troxel [EMAIL PROTECTED] wrote:
  The current canonical
  paper on how to calculate the number of bits that must be hashed away
  due to detected eavesdropping and the inferred amount of undetected
  eavesdropping is "Defense frontier analysis of quantum cryptographic
  systems" by Slutsky et al:
 
http://topaz.ucsd.edu/papers/defense.pdf
 
 Up-front disclaimer: I haven't had time to study this paper with the
 level of attention it likely deserves, so I apologize if the following
 contains incorrect logic. However, from glancing over it, it appears
 the assumptions on which the entire paper rests are undermined by work
 such as that of Elitzur and Vaidman (see the article I linked
 previously). Specifically, note the following:
[...]
 If we do away with the idea that there are no interaction-free
 measurements (which was, at least to me, convincingly shown by the
 "Quantum seeing in the dark" article), this paper becomes considerably
 less useful; the first claim's validity is completely nullified (no
 longer does interference with particles necessarily introduce
 transmission errors),

If Eve can measure the state of a particle without altering its state
at all, 100% of the time, then QC is dead--the defense function
becomes infinite.  But AFAICT the EV IFM techniques do not provide
this ability.

 while the effect on the second statement is
 evil: employing the proposed key distillation techniques, the user
 might be given a (very) false sense of security, as only a small
 percentage of the particles that Eve observes register as transmission
 errors (=15%, according to the LANL figure).

Err...I think you've missed the point of the paper.  What they're
doing is deriving how many extra bits Alice and Bob have to transmit
given that Eve is intercepting their transmission, and only some
fraction (dependent on the interception technique) of those
interceptions are detectable.  They do not assume that all
interceptions appear as errors; they (initially) assume that all
errors are due to interceptions (they deal with the case of noisy
channels later in the paper).

-dan



Stephenson recycles cryptic 'Quicksilver'

2003-09-30 Thread R. A. Hettinga
http://usatoday.printthis.clickability.com/pt/cpt?action=cptexpire=urlID=7729492fb=YpartnerID=1663

USA Today


 

Stephenson recycles cryptic 'Quicksilver' 
By Elizabeth Wiese, USA TODAY 
Quicksilver is the first book in author Neal Stephenson's Baroque Cycle trilogy and a 
tangential prequel to his best-selling cult classic Cryptonomicon .

The 927-page novel contains three separate books, numerous plays, a thorough education 
in the science and history of 17th-century Europe, a short course in cryptography 
(Stephenson's literary calling card) and more interesting footnotes than found in many 
academic papers. 

Cyberpunk Stephenson is a man not afraid to follow his interests, wherever they may 
take him. He began his career writing an action novel, moved into science fiction, 
then created his own genre with Cryptonomicon in 1999.( Related item: Read an excerpt 
from Quicksilver )

Cryptonomicon included a fast-paced story of the machinations of Silicon Valley at the 
height of the boom, a most enjoyable World War II adventure, a treasure hunt and an 
excellent primer on the science of cryptography, the designing of codes used to 
scramble messages. 

More about the book 

Quicksilver 
By Neal Stephenson 
William Morrow, 927 pp., $27.95 

Quicksilver picks up the story, if going back 400 years in time can be called 
following. 

Book One examines the literal beginnings of scientific inquiry in England through the 
eyes of one Daniel Waterhouse, a fictional contemporary of the historical leading 
lights of that era. Book Two begins with Jack Shaftoe, a fictional relative of the 
equally fictional, if more folkloric, Bobby Shaftoe. Book Three covers the exploits of 
a young woman named Eliza from the curiously nonexistent European island kingdom of 
Qwghlm, whose adroitness with financial markets brings her to the French court and 
numerous transnational intrigues. Each character is, if surnames are a clue, a direct 
ancestor to the main characters in Cryptonomicon .

Along the way, the reader is either enthralled or bored silly by the numerous 
discursive digressions Stephenson gives to provide background that most readers (in 
fact, all readers who don't happen to be university professors of the history of 
science) might require. 

If the story requires a three-page digression to explain how Isaac Newton first began 
thinking about gravity, then three pages it is. And if two pages on mid-17th-century 
mining techniques or a description of the beginnings of money markets in Amsterdam is 
required, so be it, and damn the torpedoes - which I believe were discussed at length 
in Cryptonomicon .

Stephenson dotes on intricate word puzzles and re-creations of historically accurate 
literary contrivances. Quicksilver abounds with plays, genealogies and an entertaining 
series of letters - a popular literary form of the era - that both describes court 
life at Versailles and provides interested readers with a series of encrypted messages 
that will no doubt keep them happily busy for hours reverse-engineering them to 
discover the algorithms used. 

What Quicksilver lacks, sadly, is the momentum of Cryptonomicon . Though the novel is 
intriguing, there's precious little plot. It's more like real history in that lots of 
things happen; sometimes they're connected and sometimes they're not, and it's only a 
century or so later that we emphasize the connections and forget all the miscellaneous 
stuff in between. 

I doubt Quicksilver will captivate the audience that its predecessor did, but it's 
still an enjoyable read. And Stephenson, who clearly loves to pass along the fruits of 
his studies, isn't anywhere near stopping. 

Quicksilver was originally part of Cryptonomicon but it was chopped off when the first 
book got too long. Now, Quicksilver is volume one of a trilogy called The Baroque 
Cycle . Volume Two, The Confusion , is scheduled to appear in August 2004 and Volume 
Three, The System of the World , in October 2004. Don't get in line at your bookstore 
yet. In 1999, Stephenson said Quicksilver would be out in 2000. 


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 11:54:20AM -0700, Eric Rescorla wrote:

  Well, all existing authentication schemes do what they are supposed to do,
  that's not the problem. We just want one that is as simple as possible
  (so we can understand it better and implement it more easily), and which
  is efficient (both speed and bandwidth).
 
 In what way is your protocol either simpler or more efficient
 than, say, JFK or the TLS skeleton?

Compared with JFK: http://www.crypto.com/papers/jfk-ccs.pdf section 2.2
shows a lot of keys, IDs, derivatives of keys, random numbers and hashes
of various combinations of the previous, 3 public key encryptions and 2
symmetric cipher encryptions and HMACs. I do not consider that simple.

Compared with the entire TLS protocol it is much simpler, compared with
just the handshake protocol it is about as simple and probably just as
efficient, but as I said earlier, I want to get rid of the client/server
distinction.

 Again, it's important to distinguish between learning experiences
 and deployed protocols. I agree that it's worthwhile to try
 to do new protocols and let other people analyze them as
 a learning experience. But that's different from putting
 a not fully analyzed protocol into a deployed system.
[...]
 Well, I'd start by doing a back of the envelope performance
 analysis. If that doesn't show that your approach is better,
 then I'm not sure why you would wish to pursue it as a
 deployed solution.

I will not repeat our motivations again. Please don't bother arguing
about this.

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: New authentication protocol, was Re: Tinc's response to 'Linux's answer to MS-PPTP'

2003-09-30 Thread Guus Sliepen
On Mon, Sep 29, 2003 at 09:51:20AM -0700, Bill Stewart wrote:

  =Step 1:
  Exchange ID messages. An ID message contains the name of the tinc
  daemon which sends it, the protocol version it uses, and various
  options (like which cipher and digest algorithm it wants to use).
 
 By name of the tinc daemon, do you mean identification information?
 That data should be encrypted, and therefore in step 2.
 (Alternatively, if you just mean "tincd version 1.2.3.4", that's fine.)

No, identification information. But still, it's just a name, not a
public key or certificate. It is only used by the receiver to choose
which public key (or certificate etc) to use in Step 2. This information
does not have to be encrypted, it has just as much meaning as the IP
address the sender has.

  Step 2:
  Exchange METAKEY messages. The METAKEY message contains the public part
  of a key used in a Diffie-Hellman key exchange.  This message is
  encrypted using RSA with OAEP padding, using the public key of the
  intended recipient.
 
 You can't encrypt the DH keyparts using RSA unless you first exchange
 RSA public key information, which the server can't do without knowing
 who the client is (the client presumably knows who the server is,
 so you _could_ have the client send the key encrypted to annoy MITMs.)

With tinc, public keys are never exchanged during authentication, they
are known beforehand. And again, there is no distinction between a
client and a server, it is peer to peer.
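
As an aside on the Step 2 construction quoted above: a back-of-the-envelope
check of how much a single RSA-OAEP message can carry (my own numbers,
assuming 2048-bit RSA; the same argument holds whenever the DH group is
comparable in size to the RSA modulus) shows that a DH public value alone
does not fit -- a point that also comes up later in this digest about the DH
key plus nonce not fitting in one RSA message:

# PKCS#1 v2.1: an OAEP plaintext is at most k - 2*hLen - 2 bytes, where k is
# the RSA modulus size in bytes and hLen the hash output size.
def oaep_capacity(modulus_bits, hash_bytes):
    return modulus_bits // 8 - 2 * hash_bytes - 2

print(oaep_capacity(2048, 20))   # SHA-1 OAEP:   214 bytes
print(oaep_capacity(2048, 32))   # SHA-256 OAEP: 190 bytes
print(2048 // 8)                 # a 2048-bit DH public value: 256 bytes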

-- 
Met vriendelijke groet / with kind regards,
Guus Sliepen [EMAIL PROTECTED]




Re: Johns Hopkins Physics Lab System Detects Digital Video Tampering

2003-09-30 Thread Sunder

And what stops an attacker from taking that digital video, stripping off
the RSA(?) signatures (I'll assume it's just signed), editing it, creating
another, random, one time private key, destroying that private key after
resigning it, and offering it up as unedited?!?!?!?!

They've either obviously not released all the details about this method,
since you have no way to validate that the presented public key was
created by their camcorder.  So how would you prove that something came
from a particular camera?  Do you cripple the private key somehow to be
able to identify it?  Do you sign it twice? If you do, then a more
permanent private key lives in the camcorder and can be extracted to also
produce fake keys, etc...

Either that, or this gets a nice wonderful SNAKE OIL INSIDE sticker
slapped on it. :)



Even more obvious: What stops an attacker from taking the camcorder apart,
disconnecting the CCD output, then hooking up an unsigned edited video
signal to it, and recording as a signed video?


IMHO, it has an aroma rich with viperidae lipids.


--Kaos-Keraunos-Kybernetos---
 + ^ + :25Kliters anthrax, 38K liters botulinum toxin, 500 tons of   /|\
  \|/  :sarin, mustard and VX gas, mobile bio-weapons labs, nukular /\|/\
--*--:weapons.. Reasons for war on Iraq - GWB 2003-01-28 speech.  \/|\/
  /|\  :Found to date: 0.  Cost of war: $800,000,000,000 USD.\|/
 + v + :   The look on Sadam's face - priceless!   
[EMAIL PROTECTED] http://www.sunder.net 

On Mon, 29 Sep 2003, R. A. Hettinga wrote:

 Of course, if it's just signed-frame video, prior art doesn't begin to describe
 this.
 
 Cheers,
 RAH
 --
 
 http://www.sciencedaily.com/releases/2003/09/030929054614.htm
 
 Science Daily
 
 Source: Johns Hopkins University
 
 Date: 2003-09-29
 

SNIP
 
 One key, called a private key, is used to generate the signatures and is destroyed 
 when the recording is complete. The second, a public key, is used for 
 verification. To provide additional accountability, a second set of keys is 
 generated that identifies the postal inspector who made the recording. This set of 
 keys is embedded in a secure physical token that the inspector inserts into the 
 system to activate the taping session. The token also signs the Digital Video 
 Authenticator's public key, ensuring that the public key released with the video 
 signatures was created by the inspector and can be trusted. 

SNIP
 



Cringley: I Have Seen the Future and We Are It

2003-09-30 Thread R. A. Hettinga
http://craphound.com/cringely_toorcon_2003.txt

Robert Cringely's Keynote:

I Have Seen the Future and We Are It: The Past, Present and
Future of Information Security

From ToorCon 2003, www.toorcon.org

San Diego, CA

Impressionistic transcript by Cory Doctorow

[EMAIL PROTECTED]

Sept 27, 2003

--

I built, by hand, the first 25 Apple ][s, worked on the Lisa's
GUI. I invented the Trashcan Icon.

I had spent the summer of 1979 working for the Fed, debugging
3-Mile Island (I'd been a physicist). Then I wrote a book about
it on a 300-baud modem terminal connected to an IBM mainframe
using a line-editor. I hit the wrong key one night and trashed
70K words. Hell, Lawrence of Arabia lost a handwritten ms for a
350k-word manuscript.

When I went to work on the Lisa, I was determined that deleting a
file would be a two-step process. On some systems, the trashcan
bulges (defies physics); on others, the lid goes off (defies my
mother). In my version, a fly circled the trashcan. The focus
groups thought it was fuckin' awesome. But by turning off the
fly, the computer could be made to run twice as fast. They fired
me.

I went back to Apple in 84 and did the last lick of work I'd ever
do: designed a global comms system called AppleLink -- chat,
boards, etc. Run on a mainframe. Ran for a few years, then
decided that it wasn't worth it for Apple. They sold the code to
a company called Quantum Data Systems, which changed its name to
AOL, and the rest is history. I wrote AOL 1.0.

My mailer let you retract your mail -- you could send mail and
then take it back. I'd demo it by sending mail to Sculley that
said, "Sculley, you idiot" and then retract it. One day, Sculley
retracted his mail and they fired me.

When I was working on the Three-Mile Island cleanup, we had a lot
of leaks to WashPo. The white-belt/white-shoe consultant stopped
by my cubicle. The PHB asked the consultant, "Who are we afraid
of sneaking in here?" The consultant said, "Why, WashPo, of
course." Except that the WashPo guys were getting everything by
socially engineering us when we drank at the bar down the street.
He had no ability to control what employees did in the bar, so he
invented a bogeyman in the drop-ceiling.

Intel once had a counsel called Al the Shredder who would drive
a golf-cart up and down the document-retention center aisles. He
discovered two boxes on the floor, not filed. He asked, "What are
these?" No one knew, so he said, "Shred 'em." Turns out these
were the documents specifically requested by the IRS. They were
left saying to the IRS, "We have any doc you want, except the
ones you asked for." To this day, the IRS doesn't believe a word
Intel says.

Al had access and authority, but he didn't know what he was
doing.

I was at a company called The Prediction Company in Santa Fe, and
they manage $1BB worth of stock-trading for the Swiss UBS bank,
earning $1MM/day using computers.

I asked, what do you do about data-security? They have firewalls
and stuff, but the building used to be a whorehouse (in a sense
it still is) [Laughs]. I asked if they'd heard of Tempest? No,
they hadn't. Nothing was RF-hardened. The public street is 12'
away -- your competitor could park a van there all day long and
scrape all your screens and put you out of business. This was
just two months ago!

We worry about logical security, we forget about physical
security.

Think about JetBlue: Who wants to be the guy who said, "Oh, sure,
by all means, have 1,000,000 customer names!"?

I keep a letter on my wall that I got from a student at Uni of
Akron, explaining in vast detail what an idiot I am. At the end
of the letter, he says, "I eat people like you for lunch."

[Ed: huh?]

I don't know as much as you know, so I have to look at the big
pic from a 30-year perspective.

We once had a dream of ubiquitous infosec: perfect secrecy,
anonymity, untraceable e-cash -- protect ourselves from
censorship, etc. It hasn't worked. I don't know that it can ever
work. I was the only reporter at the first DefCon -- and that's
what people were talking about then.

By contrast, today's news is a cypherpunk nightmare. Information
turns out not to be power, after all: Power is power. Joe user
doesn't want to encrypt email. Anonymity is overwritten by
court-order. The Great Firewall of China keeps a billion people
from communicating, from knowing what's going on. In 1997, in
Hong Kong, I spoke to the China-Internet people and said, "How do
you proxy an entire Internet?" They said, "Well, it might not
work, but we'll just throw all our resources at it until it
does."

E-commerce is credit-card numbers in SSL. Hides nothing from
anyone. Except it provides a certain sense of comfort. Our
fallback is "The most you'll lose is $50." Information is
protected by companies who bring lawsuits against people who
figure out how to read it.

It's wrong to brand figuring out how to decode information as
evil.

The closest thing to strong security that we are likely to have
as a society is Palladium.

That's 

Re: Monoculture

2003-09-30 Thread Matt Blaze
 I imagine the Plumbers & Electricians Union must have used similar
 arguments to enclose the business to themselves, and keep out unlicensed
 newcomers.  No longer acceptable indeed.  Too much competition boys?


Rich,

Oh come on.  Are you willfully misinterpreting what I wrote, or
did you honestly believe that that was my intent?

No one - at least certainly not I - suggests that people shouldn't
be allowed to invent whatever new protocols they want or that some
union card be required in order to do so.  However, we've learned
a lot in recent years about how to design such protocols, and we've
seen intuitively obviously secure protocols turn out to be badly
flawed when more advanced analysis techniques and security models
are applied against them.

Yes, the standards against which newly proposed protocols are measured
have increased in recent years: we've reached a point where it is
practical for the potential users of many types of security protocols
to demand solid analysis of their properties against rather stringent
security models.  It is no longer sufficient, if one hopes to have
a new protocol taken seriously, for designers to simply throw a proposal
over the wall to users and analysts and hope that if the analysts
don't find something wrong with it the users will adopt it.  Now
it is possible - and necessary - to be both a protocol designer and
analyst at the same time.  This is a good thing - it means we've made
progress.  Finally we can now look at practical protocols more
systematically and mathematically instead of just hoping that we
didn't miss certain big classes of attack.  (We're not done, of course,
and we're a long way from discovering a generally useful way to look
at an arbitrary protocol and tell if it's secure).

Fortunately, there's no dark art being protected here.  The literature
is open and freely available, and it's taught in schools.  And unlike
the guilds you allude to, anyone is free to participate.  But if they
expect to be taken seriously, they should learn the field first.

I'd encourage the designer of the protocol who asked the original question
to learn the field.  Unfortunately, he's going about it sub-optimally.
Instead of hoping to just design a protocol and getting others to throw
darts at it (or bless it), he might have better luck (and learn far
more) by looking at the recent literature of protocol design and analysis
and trying to emulate the analysis and design process of other protocols
when designing his own.  Then when he throws it over the wall to the rest
of the world, the question would be not "is my protocol any good" but
rather "are my arguments convincing and sufficient?"

I suppose some people will always take an anti-intellectual attitude
toward this and congratulate themselves about how those eggheads who
write those papers with the funny math in them don't know everything to
excuse their own ignorance of the subject.  People like that with
an interest in physics and engineering tend to invent a lot of
perpetual motion machines, and spend a lot of effort fending off
the vast establishment conspiracy that seeks to suppress their
brilliant work.  (We've long seen such people in cipher design, but
they seem to have ignored protocols for the most part, I guess
because protocols are less visible and sexy).

Rich, I know you're a smart guy with great familiarity with (and
contributions to) the field, and I know you're not a kook, but
your comment sure would have set off my kook alarm if I didn't
know you personally.

 
 Who on this list just wrote a report on the dangers of Monoculture?
 
 Rich Schroeppel   [EMAIL PROTECTED]
 (Who still likes new things.)

Me too.

-matt





Re: Monoculture

2003-09-30 Thread Perry E. Metzger

Richard Schroeppel [EMAIL PROTECTED] writes:
(Responding to the chorus of protocol professionals saying please do
 not roll your own)
 I imagine the Plumbers & Electricians Union must have used similar
 arguments to enclose the business to themselves, and keep out unlicensed
 newcomers.  No longer acceptable indeed.  Too much competition boys?

TLS, IPSec, JFK, etc. are all intellectual property free. No one gets
money if people use them. There is no union here with an incentive to
eliminate competition. No one's pay changes if someone uses TLS
instead of a roll-your-own-protocol.

 Who on this list just wrote a report on the dangers of Monoculture?

I did. Dependence on a single system is indeed a problem. However, one
must understand the nature of the problem, not diversify blindly.

Some companies are said to require that multiple high level executives
cannot ride on the same plane flight, for fear of losing too many of
them simultaneously. That is a way of avoiding certain kinds of
risk. However, I know of no company that suggests that some of those
executives fly in rickety planes that have never been safety tested
and were built by squirrels using only pine cones. That does not reduce
risk.

I have to agree with Matt Blaze, Eric Rescorla, and numerous others
who have said this before. Cryptographic algorithms and protocols are
exceptionally difficult to design properly, and you should not go
around designing something on a whim and throwing it into your
software, any more than you would invent a new drug one morning and
inject it into patients that afternoon.

There is nothing whatsoever wrong with people proposing a new protocol
or algorithm, publishing it, discussing it, etc. Indeed, TLS, AES and
all the rest started as published documents that were then subjected
to prolonged attempts to break them. If, after something has been
reviewed for some years, it then appears to have unique advantages and
no one has succeeded in attacking the protocol, it might even be fit
for use in products.

This is very very different, however, from subjecting your users to
seat-of-the-pants designed protocols and algorithms that have had no
review whatsoever. Given that even the professionals generally screw
it up the first few times around, it is hardly surprising that the
roll your own attempts are almost always stunningly bad. This is
doubly so given that the protocols and algorithms used in many of
these systems don't even have a pretense of superiority over the
existing ones.

The protocols Peter Gutmann was complaining about in the message that
started this thread are, for the most part, childishly bad in spite of
the protestations of their creators. Are you arguing that it is in the
interest of most people to be using such incompetently designed
security software?

By the way, none of this contradicts what a number of us said in our
monoculture paper.

Perry



Re: Monoculture

2003-09-30 Thread Matt Blaze
Perry writes:
 
 Richard Schroeppel [EMAIL PROTECTED] writes:
 (Responding to the chorus of protocol professionals saying please do
  not roll your own)
  I imagine the Plumbers & Electricians Union must have used similar
  arguments to enclose the business to themselves, and keep out unlicensed
  newcomers.  No longer acceptable indeed.  Too much competition boys?
 
...
 
  Who on this list just wrote a report on the dangers of Monoculture?
 
 I did. Dependence on a single system is indeed a problem. However, one
 must understand the nature of the problem, not diversify blindly.
 
 Some companies are said to require that multiple high level executives
 cannot ride on the same plane flight, for fear of losing too many of
 them simultaneously. That is a way of avoiding certain kinds of
 risk. However, I know of no company that suggests that some of those
 executives fly in rickety planes that have never been safety tested
 and were built by squirrels using only pine cones. That does not reduce
 risk.
 

Speaking of plumbers and electricians, it occurs to me that while
it would be very difficult to find pipe fittings designed without
taking into account static and dynamic analysis or electric wiring
designed without benefit of resistance or insulation breakdown tests
(basic requirements for pipes and wires that nonetheless require
fairly advanced knowledge to understand properly), equipping a house
with such materials might actually end up being safe.  The inevitable
fire might be extinguished by the equally inevitable flood.

-matt





Literature about Merkle hash tries?

2003-09-30 Thread Benja Fallenstein
Hi all,

Does anybody on this list know literature about cryptographic hash 
tries? (I hit on this idea when mulling about a different problem, and 
was wondering what people have written about it.) I.e., a data structure 
for keeping sets of pieces of data, by:

- computing a cryptographic hash of each piece, treating each hash as a 
bitstring;
- organizing these in a trie ("A tree for storing strings in which there
is one node for every common prefix. The strings are stored in extra
leaf nodes", http://www.nist.gov/dads/HTML/trie.html);
- treating this trie as a Merkle hash tree.

For example, if we have four hashes starting [0001], [0010], [1110] and
[1111] respectively, ::

             [root]
             /    \
          [0]      [1]
           |        |
         [00]     [11]
         /  \       |
   0001..  0010.. [111]
                  /    \
             1110..  1111..

The nodes with one child can also be omitted for efficiency, i.e.::

            [root]
           /      \
        [00]      [111]
        /  \       /  \
  0001.. 0010.. 1110.. 1111..

This could easily be extended to provide a mapping, by treating the keys
as above, and putting each value in the extra leaf node of its
corresponding key.
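
A rough sketch of the structure described above, in Python with only the
standard library; the node layout, the hashing of labels, and all names are
illustrative choices of mine, not taken from any existing implementation:

import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def bits(digest):
    return ''.join(format(b, '08b') for b in digest)

class Node:
    def __init__(self):
        self.children = {}   # '0' / '1' -> Node
        self.leaf = None     # full hash bitstring stored at this leaf

def build(keys, depth=0):
    # One node per common prefix, items stored in leaf nodes.
    # (The single-child compression mentioned above is left out.)
    node = Node()
    if len(keys) == 1:
        node.leaf = keys[0]
        return node
    for bit in '01':
        group = [k for k in keys if k[depth] == bit]
        if group:
            node.children[bit] = build(group, depth + 1)
    return node

def label(node):
    # The Merkle part: every node's label commits to its whole subtree,
    # so the root label authenticates the entire set.
    if node.leaf is not None:
        return H(b'L' + node.leaf.encode())
    return H(b'N' + b''.join(bit.encode() + label(child)
                             for bit, child in sorted(node.children.items())))

items = [b'apple', b'pear', b'plum', b'quince']
root = build(sorted(bits(H(i)) for i in items))
print(label(root).hex())   # one short value commits to the whole set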

It seems to me that this data structure would--

- allow giving efficient proofs not only of membership, but also 
non-membership, by giving the path through the tree that would end up at 
that item, but show that it ends up at a different item. E.g., to prove 
that a hash starting [0011] is not in the above set, give the path to 
0010... (This could be used to implement CRTs.)
- be automatically approximately balanced (I haven't attempted a proof, 
but since all prefixes are conjectured to be equally likely...)
- allow you to maintain a history of such trees with only O(log n) 
additional storage cost per addition or removal-- i.e., if you already 
have a whole tree with n items, and want to additionally store a tree 
that has one item added, you only need O(log n) additional storage 
space-- and you don't need to implement some complicated re-balancing 
algorithm (if the previous conjecture holds).

It's functionally equivalent to having a binary search tree that stores 
a value at each internal node, but it seems potentially simpler to 
implement, particularly when you want to store a versioned history (e.g. 
of a CRT), because you don't need to implement re-balancing.

So, anyway, anybody know references? I've not come across any yet.

Thanks,
- Benja


Johns Hopkins Physics Lab System Detects Digital Video Tampering

2003-09-30 Thread R. A. Hettinga
Of course, if it's just signed-frame video, prior art doesn't begin to describe
this.

Cheers,
RAH
--

http://www.sciencedaily.com/releases/2003/09/030929054614.htm

Science Daily

Source: Johns Hopkins University

Date: 2003-09-29

Johns Hopkins APL Creates System To Detect Digital Video Tampering 

The Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md., has 
opened the door to using reliable digital video as evidence in court by developing a 
system that identifies an attempt to alter digital video evidence. 

"It's not too hard to make changes to digital video," says Tom Duerr, APL's project
manager. "But our system quickly and conclusively detects any alterations made to the
original tape." For the past two years, Duerr has led development of the project for
the United States Postal Inspection Service.

"We're satisfied that our system can accurately detect tampering and now we're
building a working prototype that can be attached to a camcorder," says Nick Beser,
lead engineer for the project. "Our authenticator provides proof of tampering when the
human eye can't detect it. You might theorize that a change has been made, but this
system takes the theory out of that determination."

The U.S. Postal Inspection Service, the federal law enforcement agency that safeguards 
the U.S. Postal Service, its employees and assets, and ensures the integrity of the 
mail, uses video surveillance and cutting edge technology as investigative tools in 
many of its cases. "We are looking forward to field testing the prototype developed by
APL," says Dennis Jones, assistant postal inspector in charge of the agency's Forensic
& Technical Services Division. "Being able to present a certifiable digital recording
in court in support of our investigative efforts will minimize court challenges over
the admissibility of such evidence. This system could reinforce the public's
confidence in the work of law enforcement professionals."

Securing the System 

The authentication system computes secure computer-generated digital signatures for 
information recorded by a standard off-the-shelf digital video camcorder. While 
recording, compressed digital video is simultaneously written to digital tape in the 
camcorder and broadcast from the camera into the Digital Video Authenticator 
(currently a laptop PC). There the video is separated into individual frames and three 
digital signatures are generated per frame -- one each for video, audio, and 
camcorder/DVA control data -- at the camcorder frame rate. 

Public-key cryptography is used to create unique signatures for each frame. The keys 
are actually parameters from mathematical algorithms embedded in the system. Duerr 
says, "The keys, signature, and original data are mathematically related in such a way
that if any one of the three is modified, the fact that a change took place will be
revealed in the verification process."

One key, called a private key, is used to generate the signatures and is destroyed 
when the recording is complete. The second, a public key, is used for verification. 
To provide additional accountability, a second set of keys is generated that 
identifies the postal inspector who made the recording. This set of keys is embedded 
in a secure physical token that the inspector inserts into the system to activate the 
taping session. The token also signs the Digital Video Authenticator's public key, 
ensuring that the public key released with the video signatures was created by the 
inspector and can be trusted. 

The signatures that are generated for the recording make it easy to recognize 
tampering. If a frame has been added it won't have a signature and will be instantly 
detected. If an original frame is altered, the signature won't match the new data and 
the frame will fail verification. The method is so perceptive that tampering with even 
a single bit (an eighth of a byte) of a 120,000-byte video frame is enough to trigger 
an alert. After an event is recorded, the signatures and the signed public key are 
transferred to a removable storage device and secured along with the original tape in 
case the authenticity of a tape is challenged. 
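
Reading between the lines, the general pattern seems to be: a per-session
signing key signs every frame, a longer-lived token key certifies that
session key, and the session private key is discarded when recording stops.
The sketch below is a reconstruction of that pattern from the description
above, not APL's actual design; it uses the third-party 'cryptography'
package and signs one blob per frame rather than the three per-frame
signatures described:

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

token_key = ec.generate_private_key(ec.SECP256R1())    # inspector's long-lived token key
session_key = ec.generate_private_key(ec.SECP256R1())  # per-recording key

# The token certifies the session public key, so the per-frame signatures can
# be traced back to the inspector who activated the taping session.
session_pub = session_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
token_sig = token_key.sign(session_pub, ec.ECDSA(hashes.SHA256()))

frames = [b'frame 0: video+audio+control', b'frame 1: video+audio+control']
frame_sigs = [session_key.sign(f, ec.ECDSA(hashes.SHA256())) for f in frames]
del session_key   # "destroyed when the recording is complete"

# Verification: check the token's signature on the session public key, then
# check each frame; any altered or inserted frame raises InvalidSignature.
token_key.public_key().verify(token_sig, session_pub, ec.ECDSA(hashes.SHA256()))
pub = serialization.load_der_public_key(session_pub)
for f, s in zip(frames, frame_sigs):
    pub.verify(s, f, ec.ECDSA(hashes.SHA256()))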

When finished, the Digital Video Authenticator is expected to be within the size and 
cost range of consumer-grade digital camcorders. It will be attached to, rather than 
embedded in, a video camera, which allows it to be transferred to different cameras 
when current ones become obsolete. Comparison of signatures with recorded video and 
analysis of the results will be accomplished in separate software that will run on a 
desktop PC. 

Prototype development will include peer review by other researchers and potential 
users and is expected to be completed by 2005. In addition to Postal Inspection 
Service use, the system could serve state and local law enforcement needs and possibly 
corporate and other business venues. 

### 

The Applied Physics Laboratory, a division 

Re: New authentication protocol, was Re: Tinc's response to Linux's answer to MS-PPTP

2003-09-30 Thread Eric Rescorla
Guus Sliepen [EMAIL PROTECTED] writes:

 On Mon, Sep 29, 2003 at 02:07:04PM +0200, Guus Sliepen wrote:
 
  Step 2:
  Exchange METAKEY messages. The METAKEY message contains the public part
  of a key used in a Diffie-Hellman key exchange.  This message is
  encrypted using RSA with OAEP padding, using the public key of the
  intended recipient.
 
 After comments and reading up on suggested key exchange schemes, I think
 this step should be changed to send the Diffie-Hellman public key in
 plaintext, along with a nonce (large random number) to prevent replays
 and the effects of bad DH public keys. Instead of encrypting both with
 RSA, they should instead be signed using the private key of the sender
 (the DH public key and nonce wouldn't fit in a single RSA message
 anyway). 
 
 IKEv2 (as described in draft-ietf-ipsec-ikev2-10.txt) does almost the
 same. However, IKEv2 does not send the signature directly, but first
 computes the shared key, and uses that to encrypt (using a symmetric
 cipher) the signature. I do not see why they do it that way; the
 signature has to be checked anyway, if it can be done before computing
 the shared key it saves CPU time. Encrypting it does not prevent a man
 in the middle from reading or altering it, since a MITM can first
 exchange his own DH public key with both sides (and hence he can know
 the shared keys). So actually, I don't see the point in encrypting
 message 3 and 4 as described at page 8 of that draft at all.
In order to hide the identities of the communicating peers.

Personally, I don't have much use for identity protection,
but this is the reason as I understand it.

-Ekr

-- 
[Eric Rescorla   [EMAIL PROTECTED]
http://www.rtfm.com/



Re: Monoculture

2003-09-30 Thread Rich Salz
 I imagine the Plumbers & Electricians Union must have used similar
 arguments to enclose the business to themselves, and keep out unlicensed
 newcomers.  No longer acceptable indeed.  Too much competition boys?

The world might be better off if you couldn't call something
"secure" unless it came from a certificated security programmer.
Just like you want your house wired by a Master Electrician, who has
been proven to have experience and knowledge of the wiring code -- i.e.,
both theory and practice.

Yes, it sometimes sucks to be a newcomer and treated with derision unless you
can prove that you understand the current body of knowledge.  We should
all try to be nicer.  But surely you can understand a cryptographer's
frustration when a VPN -- what does that P stand for? -- shows flaws
that are equivalent to a syntax error in a Java class.

Perhaps it would help to think of it as defending the field.  When
crap and snake-oil get out, even well-meaning crap and snake-oil,
the whole profession ends up stinking.
/r$

PS:  As for wanting to avoid the client-server distinction in SSL/TLS,
 just require certs on both sides and do mutual authentication.
 The bytestream above is already bidirectional.
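
For the record, that is only a few lines with a stock TLS stack. A sketch
with Python's standard ssl module (the certificate and key file names are
placeholders):

import socket, ssl

# One peer accepts connections and demands a certificate from the other side.
srv = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
srv.load_cert_chain('peer_a_cert.pem', 'peer_a_key.pem')
srv.load_verify_locations('trusted_peers.pem')
srv.verify_mode = ssl.CERT_REQUIRED      # the mutual-authentication switch

# The connecting peer presents its own certificate and verifies the other's.
cli = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
cli.load_cert_chain('peer_b_cert.pem', 'peer_b_key.pem')
cli.load_verify_locations('trusted_peers.pem')

sock = cli.wrap_socket(socket.socket(), server_hostname='peer-a.example')
# sock.connect(...) then runs the handshake; both ends are authenticated, and
# the byte stream on top is symmetric -- only the handshake keeps the roles.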

--
Rich Salz  Chief Security Architect
DataPower Technology   http://www.datapower.com
XS40 XML Security Gateway  http://www.datapower.com/products/xs40.html
XML Security Overview  http://www.datapower.com/xmldev/xmlsecurity.html



Re: Literature about Merkle hash tries?

2003-09-30 Thread Greg Rose
At 01:14 AM 10/1/2003 +0300, Benja Fallenstein wrote:
So, anyway, anybody know references? I've not come across any yet.
I know that the technique dates back (at least) to IBM in the 60s. I used 
to know the name of the inventor but can't bring it to mind at the moment. 
The Berkeley UNIX library dbm uses essentially this philosophy, but the 
tree is not binary; rather each node stores up to one disk block's worth of 
pointers. Nodes split when they get too full. When the point is to handle a 
lot of data, this makes much more sense.

Hope that helps,
Greg.
Greg Rose   INTERNET: [EMAIL PROTECTED]
Qualcomm Australia  VOICE:  +61-2-9817 4188   FAX: +61-2-9817 5199
Level 3, 230 Victoria Road,http://people.qualcomm.com/ggr/
Gladesville NSW 2111232B EC8F 44C6 C853 D68F  E107 E6BF CD2F 1081 A37C


Re: Literature about Merkle hash tries?

2003-09-30 Thread Benja Fallenstein
Hi Greg--

Greg Rose wrote:
At 01:14 AM 10/1/2003 +0300, Benja Fallenstein wrote:
So, anyway, anybody know references? I've not come across any yet.
I know that the technique dates back (at least) to IBM in the 60s.
Cool-- but--

On second thoughts, do you mean *cryptographic* hash tries or hash tries 
or plain tries? I know literature on both tries and hash tries (Knuth 
claimed to have invented the latter in a Literate Programming exercise)
but not on using cryptographic hash functions & a Merkle hash tree.

Reason for my second thoughts is that Merkle's patent on hash trees 
dates in the 80s ;-)

I used to know the name of the inventor but can't bring it to mind at the 
moment. The Berkeley UNIX library dbm uses essentially this philosophy, 
but the tree is not binary; rather each node stores up to one disk 
block's worth of pointers. Nodes split when they get too full. When the 
point is to handle a lot of data, this makes much more sense.
(In Merkle hash trees, on the other hand, signature size is minimized 
when using a binary tree, at least if I'm not confused right now. :) )

Thanks,
- Benja