Truecrypt Encryption (WAS: Fwd: [IP] Re: Encrypted laptop poses legal dilemma)

2008-02-09 Thread David Chessler
I forwarded a couple of messages about US Customs seizing computers, 
sometimes failing to return them, and demanding passwords. Cellphones 
are also sometimes seized. The TSA claims it does not do this. This 
can cause problems for people who travel with company-sensitive or 
other private information. Some companies avoid the problem by wiping 
all data from the laptop and having the user access it by SSL or 
other secure method over the network. Other solutions are possible.


The following may be a solution for individual travelers without 
access to high-speed internet connections when in the field, or who 
lack access to secure connections to a secure server.


From: David Farber <[EMAIL PROTECTED]>




From:
Sent: Friday, February 08, 2008 11:27 PM
To: David Farber
Subject: Re: [IP] Encrypted laptop poses legal dilemma

Dave,

Check this out as the perfect technological answer to the problem presented below.

Given my position, however, please do not use my name or my company's name
if you post this.  Like anything, it has as many legitimate as illegitimate
uses; this is public information and, ironically, was brought to my
attention by some of the top security experts in the industry.


http://www.truecrypt.org/

Creates a virtual encrypted drive inside any file of your choosing.  But it
goes one better: you can nest encryption within the encryption in ways that
are undetectable.  Thus you can give up a password and allow others to open
the volume and inspect it.  Those looking will never know that within the
encrypted space there is another, deeper layer of encryption.  That said,
I'd really hate to see the gov't or someone else shut this down.  At the
same time, for people traveling who are doing legitimate things that
overreaching gov't officials have no right to see (and for which it is too
late once compromised), this presents a valid solution.  It is also
incredibly useful for anyone carrying sensitive information because it
gives you two layers of protection if your storage device or laptop is
stolen.  Know that if you create a volume directly on a flash drive, it
formats the entire drive; most people create a container file and mount
that instead.  Also, never, ever forget your password (did that once, and
lost 50 megs' worth of data; you might want to use RoboForm, which encrypts
and protects your passwords).  There's no getting inside of this.  Ever.
It's about as rock solid as it gets.


Thanks,

---
Archives: http://v2.listbox.com/member/archive/247/=now
RSS Feed: http://v2.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread Victor Duchovni
On Sat, Feb 09, 2008 at 05:04:28PM -0800, David Wagner wrote:

> By the way, it seems like one thing that might help with client certs
> is if they were treated a bit like cookies.  Today, a website can set
> a cookie in your browser, and that cookie will be returned every time
> you later visit that website.  This all happens automatically.  Imagine
> if a website could instruct your browser to transparently generate a
> public/private keypair for use with that website only and send the
> public key to that website.  Then, any time that the user returns to
> that website, the browser would automatically use that private key to
> authenticate itself.  For instance, boa.com might instruct my browser
> to create one private key for use with *.boa.com; later,
> citibank.com could instruct my browser to create a private key for
> use with *.citibank.com.

Microsoft broke this in IE7... It is no longer possible to generate and
enroll a client cert from a CA not on the trusted root list. So private
label CAs can no longer enroll client certs. We have requested a fix,
so this may come in the future, but the damage is already done...

Also the IE7 browser APIs for this are completely different and rather
minimally documented. The interfaces are not portable between browsers,
... It's a mess.

-- 

Victor Duchovni
IT Security, Morgan Stanley
(ASCII ribbon campaign against HTML mail)

NOTICE: If received in error, please destroy and notify sender. Sender does
not waive confidentiality or privilege, and use is prohibited.



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread Taral
On 2/9/08, David Wagner <[EMAIL PROTECTED]> wrote:
> By the way, it seems like one thing that might help with client certs
> is if they were treated a bit like cookies.

I don't see how this helps with phishing. Phishers will just go after
the password or other secrets used to authenticate a new system or a
system that has lost its cert.

-- 
Taral <[EMAIL PROTECTED]>
"Please let me know if there's any further trouble I can give you."
-- Unknown



Re: Toshiba shows 2Mbps hardware RNG

2008-02-09 Thread Peter Gutmann
"Perry E. Metzger" <[EMAIL PROTECTED]> writes:

>EE Times: Toshiba tips random-number generator IC
>
>   SAN FRANCISCO -- Toshiba Corp. has claimed a major breakthrough in
>   the field of security technology: It has devised the world's
>   highest-performance physical random-number generator (RNG)
>   circuit.
>
>   The device generates random numbers at a data rate of 2.0 megabits
>   a second, according to Toshiba in a paper presented at the
>   International Solid-State Circuits Conference (ISSCC) here.

I've always wondered why RNG speed is such a big deal for anything but a few
highly specialised applications.  For security use you've got two options:

1. Use it with standard security protocols, in which case you need all of 128
   or so bits every now and then (and very rarely a few thousand bits for
   asymmetric keygen).

2. Use it at its full data rate, in which case you can only communicate with
   someone else who has the same device, but more importantly you need to
   design and build your own custom security infrastructure to take advantage
   of the high-data-rate randomness, which is much harder than simply
   designing an RNG device and declaring victory.

   (In any case if you really need high-data-rate randomness, you just take
   your initial 128 bits and use it to seed AES in CTR mode).
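That seed-and-expand step is easy to sketch. The following illustrative
Python uses SHA-256 in counter mode as a stand-in for AES-CTR (the standard
library has no AES); the function name and sizes are my own, not from any
particular specification:

```python
import hashlib
import os

def expand_seed(seed: bytes, nbytes: int) -> bytes:
    """Expand a short truly-random seed into a long pseudorandom
    stream by hashing seed || counter, CTR-style.  SHA-256 stands
    in for AES-CTR here purely for stdlib convenience."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

seed = os.urandom(16)                 # the "initial 128 bits" from the HWRNG
stream = expand_seed(seed, 1 << 20)   # a megabyte of keystream from one seed
```

Once seeded, a construction like this produces pseudorandomness far faster
than 2 Mbit/s, which is the point of the parenthetical in option 2.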

So your potential market for this is people running Monte Carlo simulations
who don't like PRNGs.  Seems a bit of a limited market...

Peter.



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread Peter Gutmann
David Wagner <[EMAIL PROTECTED]> writes:
>Tim Dierks writes:
>>(there are totally different reasons that client certs aren't being
>>widely adopted, but that's beside the point).
>
>I'd be interested in hearing your take on why SSL client certs aren't widely
>adopted.

Because they're essentially unworkable.  At the risk of spamming this
reference a bit too often here:

http://www.cs.auckland.ac.nz/~pgut001/pubs/usability.pdf

There's detailed discussion there of results of user studies, conference
papers, references, (hopefully) all the information you need.

Peter.



Re: Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread Anne & Lynn Wheeler

David Wagner wrote:

> I'd be interested in hearing your take on why SSL client certs aren't
> widely adopted.  It seems like they could potentially help with the
> phishing problem (at least, the problem of theft of web authenticators
> -- it obviously won't help with theft of SSNs).  If users don't know
> the authentication secret, they can't reveal it.  The nice thing about
> using client certs instead of passwords is that users don't know the
> private key -- only the browser knows the secret key.
>
> The standard concerns I've heard are: (a) SSL client certs aren't
> supported very well by some browsers; (b) this doesn't handle the
> mobility problem, where the user wants to log in from multiple different
> browsers.  So you'd need a different mechanism for initially registering
> the user's browser.
>
> By the way, it seems like one thing that might help with client certs
> is if they were treated a bit like cookies.  Today, a website can set
> a cookie in your browser, and that cookie will be returned every time
> you later visit that website.  This all happens automatically.  Imagine
> if a website could instruct your browser to transparently generate a
> public/private keypair for use with that website only and send the
> public key to that website.  Then, any time that the user returns to
> that website, the browser would automatically use that private key to
> authenticate itself.  For instance, boa.com might instruct my browser
> to create one private key for use with *.boa.com; later,
> citibank.com could instruct my browser to create a private key for
> use with *.citibank.com.  By associating the private key with a specific
> DNS domain (just as cookies are), this means that the privacy
> implications of client authentication would be comparable to the
> privacy implications of cookies.  Also, in this scheme, there wouldn't
> be any need to have your public key signed by a CA; the site only needs
> to know your public key (e.g., your browser could send self-signed
> certs), which eliminates the dependence upon the third-party CAs.
> Any thoughts on this?


in AADS
http://www.garlic.com/~lynn/x959.html#aads
and certificateless public key
http://www.garlic.com/~lynn/subpubkey.html#certless

we referred to the scenario as person-centric ... as a contrast
to institutional-centric oriented implementations.

past posts in this thread:
http://www.garlic.com/~lynn/aadsm28.htm#20 Fixing SSL (was Re: Dutch 
Transport Card Broken)
http://www.garlic.com/~lynn/aadsm28.htm#24 Fixing SSL (was Re: Dutch 
Transport Card Broken)
http://www.garlic.com/~lynn/aadsm28.htm#26 Fixing SSL (was Re: Dutch 
Transport Card Broken)




Fixing SSL (was Re: Dutch Transport Card Broken)

2008-02-09 Thread David Wagner
Tim Dierks writes:
>(there are totally different reasons that client certs aren't being
>widely adopted, but that's beside the point).

I'd be interested in hearing your take on why SSL client certs aren't
widely adopted.  It seems like they could potentially help with the
phishing problem (at least, the problem of theft of web authenticators
-- it obviously won't help with theft of SSNs).  If users don't know
the authentication secret, they can't reveal it.  The nice thing about
using client certs instead of passwords is that users don't know the
private key -- only the browser knows the secret key.

The standard concerns I've heard are: (a) SSL client certs aren't
supported very well by some browsers; (b) this doesn't handle the
mobility problem, where the user wants to log in from multiple different
browsers.  So you'd need a different mechanism for initially registering
the user's browser.

By the way, it seems like one thing that might help with client certs
is if they were treated a bit like cookies.  Today, a website can set
a cookie in your browser, and that cookie will be returned every time
you later visit that website.  This all happens automatically.  Imagine
if a website could instruct your browser to transparently generate a
public/private keypair for use with that website only and send the
public key to that website.  Then, any time that the user returns to
that website, the browser would automatically use that private key to
authenticate itself.  For instance, boa.com might instruct my browser
to create one private key for use with *.boa.com; later,
citibank.com could instruct my browser to create a private key for
use with *.citibank.com.  By associating the private key with a specific
DNS domain (just as cookies are), this means that the privacy
implications of client authentication would be comparable to the
privacy implications of cookies.  Also, in this scheme, there wouldn't
be any need to have your public key signed by a CA; the site only needs
to know your public key (e.g., your browser could send self-signed
certs), which eliminates the dependence upon the third-party CAs.
Any thoughts on this?
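The scheme is straightforward to prototype. Here is a toy sketch in Python:
a per-domain key store plus a Schnorr-style signature over a deliberately
tiny group. The group, parameter sizes, and function names are all
illustrative inventions, not a real browser API:

```python
import hashlib
import secrets

# Toy Schnorr group (illustrative only -- real deployments use
# standardized groups and far larger parameters).
p, q, g = 2039, 1019, 4   # q divides p-1; g has order q mod p

def H(*parts: bytes) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1     # private key
    return x, pow(g, x, p)               # (private, public)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = H(str(r).encode(), msg)
    return e, (k + x * e) % q

def verify(y: int, msg: bytes, sig) -> bool:
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p   # g^s * y^-e = g^k
    return H(str(r).encode(), msg) == e

# Per-origin key store, scoped the way cookies are: one keypair per domain.
keys = {}

def register(domain: str) -> int:
    """Site instructs the browser to create a keypair; the browser keeps
    the private key and returns only the public key."""
    x, y = keygen()
    keys[domain] = x
    return y

def authenticate(domain: str, challenge: bytes):
    """On a return visit the browser signs the site's challenge."""
    return sign(keys[domain], challenge)

pub = register("boa.com")
assert verify(pub, b"nonce-123", authenticate("boa.com", b"nonce-123"))
```

Binding the keypair to a domain, as cookies are bound, is what gives the
cookie-like privacy scope described above; no CA is involved because the
site simply remembers the public key it saw at registration.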



Toshiba shows 2Mbps hardware RNG

2008-02-09 Thread Perry E. Metzger

EE Times: Toshiba tips random-number generator IC

SAN FRANCISCO -- Toshiba Corp. has claimed a major breakthrough in
the field of security technology: It has devised the world's
highest-performance physical random-number generator (RNG)
circuit.

The device generates random numbers at a data rate of 2.0 megabits
a second, according to Toshiba in a paper presented at the
International Solid-State Circuits Conference (ISSCC) here.

http://www.eetimes.com/rss/showArticle.jhtml?articleID=206106199

-- 
Perry E. Metzger[EMAIL PROTECTED]



Fwd: [IP] U.S. Agents Seize Travelers' Devices

2008-02-09 Thread David Chessler



From: David Farber <[EMAIL PROTECTED]>
To: "ip" <[EMAIL PROTECTED]>



From: Sashikumar N [sashikumar.n@ ]
Sent: Thursday, February 07, 2008 1:46 PM
To: David Farber
Subject: U.S. Agents Seize Travelers' Devices

Dear Prof Dave,
I happened to read this link on Slashdot. This is disturbing news, a
direct assault on privacy... shocking that this could be real.

regards
sashi

http://www.washingtonpost.com/wp-dyn/content/article/2008/02/06/AR2008020604763.html
Clarity Sought on Electronics Searches
U.S. Agents Seize Travelers' Devices

By Ellen Nakashima
Washington Post Staff Writer
Thursday, February 7, 2008; Page A01

Nabila Mango, a therapist and a U.S. citizen who has lived in the
country since 1965, had just flown in from Jordan last December when,
she said, she was detained at customs and her cellphone was taken from
her purse. Her daughter, waiting outside San Francisco International
Airport, tried repeatedly to call her during the hour and a half she
was questioned. But after her phone was returned, Mango saw that
records of her daughter's calls had been erased.

A few months earlier in the same airport, a tech engineer returning
from a business trip to London objected when a federal agent asked him
to type his password into his laptop computer. "This laptop doesn't
belong to me," he remembers protesting. "It belongs to my company."
Eventually, he agreed to log on and stood by as the officer copied the
Web sites he had visited, said the engineer, a U.S. citizen who spoke
on the condition of anonymity for fear of calling attention to
himself.

Maria Udy, a marketing executive with a global travel management firm
in Bethesda, said her company laptop was seized by a federal agent as
she was flying from Dulles International Airport to London in December
2006. Udy, a British citizen, said the agent told her he had "a
security concern" with her. "I was basically given the option of
handing over my laptop or not getting on that flight," she said.


The seizure of electronics at U.S. borders has prompted protests from 
travelers who say they now weigh the risk of traveling with sensitive 
or personal information on their laptops, cameras or cellphones. In 
some cases, companies have altered their policies to require 
employees to safeguard corporate secrets by clearing laptop hard 
drives before international travel.


Today, the Electronic Frontier Foundation and Asian Law Caucus, two civil
liberties groups in San Francisco, plan to file a lawsuit to force the
government to disclose its policies on border searches, including which
rules govern the seizing and copying of the contents of electronic
devices. They also want to know the boundaries for asking travelers about
their political views, religious practices and other activities
potentially protected by the First Amendment. The question of whether
border agents have a right to search electronic devices at all without
suspicion of a crime is already under review in the federal courts.


The lawsuit was inspired by two dozen cases, 15 of which involved 
searches of cellphones, laptops, MP3 players and other electronics. 
Almost all involved travelers of Muslim, Middle Eastern or South 
Asian background, many of whom, including Mango and the tech 
engineer, said they are concerned they were singled out because of 
racial or religious profiling.


A U.S. Customs and Border Protection spokeswoman, Lynn Hollinger, said
officers do not engage in racial profiling "in any way, shape or form."
She said that "it is not CBP's intent to subject travelers to unwarranted
scrutiny" and that a laptop may be seized if it contains information
possibly tied to terrorism, narcotics smuggling, child pornography or
other criminal activity.


The reason for a search is not always made clear. The Association of 
Corporate Travel Executives, which represents 2,500 business 
executives in the United States and abroad, said it has tracked 
complaints from several members, including Udy, whose laptops have 
been seized and their contents copied before usually being returned 
days later, said Susan Gurley, executive director of ACTE. Gurley 
said none of the travelers who have complained to the ACTE raised 
concerns about racial or ethnic profiling. Gurley said none of the 
travelers were charged with a crime.


"I was assured that my laptop would be given back to me in 10 or 15 
days," said Udy, who continues to fly into and out of the United 
States. She said the federal agent copied her log-on and password, 
and asked her to show him a recent document and how she gains access 
to 


Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread ' =JeffH '
> > E.g., here's such a specification excerpt, and it is absolutely everything
> > said in the spec wrt obtaining said signature keys:
> >
> >   When generating MAC keys, the recommendations in [RFC1750] SHOULD be 
> > followed.
> 
> One point, RFC1750 has been superseded by RFC4086.

I'll point that out, thanks.


> >   ...
> >   The quality of the protection provided by the MAC depends on the 
> > randomness of
> >   the shared MAC key, so it is important that an unguessable value be used.
> >
> > How (un)wise is this, in a real-world sense? 
> 
> It seems pretty reasonable to me. They are referring to an RFC with
> lots of good advice about random number generators, and they emphasize
> that the key value should be unguessable. It's probably out of scope to
> go into a lot more detail than that. Referring to other standards like
> RFC1750/4086 is the right way to handle this kind of issue.

agreed (thx for the ptr to RFC4880) after doing some further reading and such. 
RFC4086 covers the notion of "mixing functions" etc, so the above-quoted 
SHOULD statement covers those bases.
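Concretely, "an unguessable value used directly as the MAC key" is as
simple as the sketch below (Python; `os.urandom` stands in for whatever
RFC 4086-style entropy source an implementation actually has):

```python
import hashlib
import hmac
import os

# Draw the MAC key straight from the OS CSPRNG -- no further
# massaging, exactly as the quoted spec language permits.
mac_key = os.urandom(32)

msg = b"some protocol message"
tag = hmac.new(mac_key, msg, hashlib.sha256).digest()

# The receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(mac_key, msg, hashlib.sha256).digest())
assert ok
```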



> I am the co-author of the OpenPGP Standard, RFC4880. All we say is:
> 
>The sending OpenPGP generates a random number to be used as a
>session key for this message only.
> 
> and
> 
>* Certain operations in this specification involve the use of random
>  numbers.  An appropriate entropy source should be used to generate
>  these numbers (see [RFC4086]).
> 
> Not all that different in thrust than the spec you are looking at.


agreed, thanks again.



=JeffH




fyi: Encrypted laptop poses legal dilemma

2008-02-09 Thread ' =JeffH '
From: [EMAIL PROTECTED] (Dewayne Hendricks)
Subject: [Dewayne-Net] Encrypted laptop poses legal dilemma
To: Dewayne-Net Technology List <[EMAIL PROTECTED]>
Date: Thu, 07 Feb 2008 15:38:22 -0800


[Note:  This item comes from reader Randall.  DLH]

From: Randall <[EMAIL PROTECTED]>
Date: February 7, 2008 1:53:24 PM PST
To: David Farber <[EMAIL PROTECTED]>, Dewayne Hendricks <[EMAIL PROTECTED]>






Encrypted laptop poses legal dilemma

By JOHN CURRAN, Associated Press Writer

When Sebastien Boucher stopped at the U.S.-Canadian border, agents who  
inspected his laptop said they found files containing child pornography.

But when they tried to examine the images after his arrest,  
authorities were stymied by a password-protected encryption program.

Now Boucher is caught in a cyber-age quandary: The government wants  
him to give up the password, but doing so could violate his Fifth  
Amendment right against self-incrimination by revealing the contents  
of the files.

Experts say the case could have broad computer privacy implications  
for people who cross borders with computers, PDAs and other devices  
that are subject to inspection.

"It's a very, very interesting and novel question, and the courts have  
never really dealt with it," said Lee Tien, an attorney with the  
Electronic Frontier Foundation, a San Francisco-based group focused on  
civil liberties in the digital world.

For now, the law's on Boucher's side: A federal magistrate here has  
ruled that forcing Boucher to surrender the password would be  
unconstitutional.

The case began Dec. 17, 2006, when Boucher and his father were stopped  
at a Derby Line, Vt., checkpoint as they entered the U.S.

Boucher, a 30-year-old drywall installer in Derry, N.H., waived his  
Miranda rights and cooperated with agents, telling them he downloads  
pornography from news groups and sometimes unknowingly acquires images  
that contain child pornography.

Boucher said he deletes those images when he realizes it, according to  
an affidavit filed by Immigration and Customs Enforcement.

At the border, he helped an agent access the computer for an initial  
inspection, which revealed files with names such as "Two year old  
being raped during diaper change" and "pre teen bondage," according to  
the affidavit.

Boucher, a Canadian with U.S. residency, was accused of transporting  
child pornography in interstate or foreign commerce, which carries up  
to 20 years in prison. He is free on his own recognizance.

The laptop was seized, but when an investigator later tried to access  
a particular drive, he was thwarted by encryption software from a  
company called Pretty Good Privacy, or PGP.

A grand jury subpoena to force Boucher to reveal the password was  
quashed by federal Magistrate Jerome Niedermeier on Nov. 29.

"Producing the password, as if it were a key to a locked container,  
forces Boucher to produce the contents of his laptop," Niedermeier  
wrote. "The password is not a physical thing. If Boucher knows the  
password, it only exists in his mind."

Niedermeier said a Secret Service computer expert testified that the  
only way to access Boucher's computer without knowing the password  
would be to use an automated system that guesses passwords, but that  
process could take years.

The government has appealed the ruling.

Neither defense attorney James Budreau nor Vermont U.S. Attorney  
Thomas Anderson would discuss the charge.

"This has been the case we've all been expecting," said Michael  
Froomkin, a professor at the University of Miami School of Law. "As  
encryption grows, it was inevitable there'd be a case where the  
government wants someone's keys."

[snip]


--



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Leichter, Jerry
| >All of this ignores a significant issue:  Are keying and encryption
| >(and authentication) mechanisms really independent of each other? I'm
| >not aware of much work in this direction.
| 
| Is there much work to be done here?  If you view the keyex mechanism
| as a producer of an authenticated blob of shared secrecy and the
| post-keyex portions (data transfer or whatever you're doing) as a
| consumer of said blob, with a PRF as impedance-matcher (as is done by
| SSL/TLS, SSH, IPsec, ..., with varying degrees of aplomb, and in a
| more limited store-and-forward context PGP, S/MIME, ...), is there
| much more to consider?
I don't know.  Can you prove that your way of looking at it is valid?
After all, I can look at encryption as applying a PRF to a data
stream, and authentication as computing a keyed one-way function (or
something) - so is there anything to prove about whether I can choose
and combine them independently?  About whether Encrypt-then-MAC and
MAC-then-Encrypt are equivalent?

I should think by now that we've learned how delicate our cryptographic
primitives can be - and how difficult it can be to compose them in a
way that retains all their individual guarantees.

-- Jerry



Open source FDE for Win32

2008-02-09 Thread Hagai Bar-El

List,

Finally, an open source FDE (Full Disk Encryption) for Win32. It is the 
first one I am aware of:


www.truecrypt.org

TC is not a new player, but starting February 5th (version 5) it also 
provides FDE.


Didn't get to try it yet.

Hagai.




Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread Joseph Ashwood

[to and CC trimmed]
- Original Message - 
From: "' =JeffH '" <[EMAIL PROTECTED]>
To: "Hal Finney" <[EMAIL PROTECTED]>; "Eric Rescorla"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; "Joseph Ashwood"
<[EMAIL PROTECTED]>

Cc: <[EMAIL PROTECTED]>; 
Sent: Thursday, February 07, 2008 2:17 PM
Subject: Re: questions on RFC2631 and DH key agreement


> I think I already know the answer to this question, but I just want to
> test my understanding...
>
> How wise (in a real-world sense) is it, in a protocol specification, to
> specify that one simply obtain an ostensibly random value, and then use
> that value directly as the signature key in, say, an HMAC-based signature,
> without any further stipulated checking and/or massaging of the value
> before such use?

With any authentication the biggest consideration is to examine what the
intention is. Using a single-use one-time key for a symmetric MAC works for
local authenticity, but not for remote authenticity. That is to say that the
local process knows that it didn't generate the MAC, and the MAC is shared
with only one other, so the authenticity is known, but any external source
can only say that an entity knowing the key generated it. This may or may
not be the intended condition, in that auditing this is very, very
difficult.

> E.g., here's such a specification excerpt, and it is absolutely everything
> said in the spec wrt obtaining said signature keys:
>
>   When generating MAC keys, the recommendations in [RFC1750] SHOULD be
>   followed.
>   ...
>   The quality of the protection provided by the MAC depends on the
>   randomness

This should be entropy.

>   of the shared MAC key, so it is important that an unguessable value be
>   used.
>
> How (un)wise is this, in a real-world sense?

It all depends on the intended meaning. If this is intended to authenticate
to a third party, it fails completely. If it is specifically intended NOT to
authenticate to a third party it may be exactly what is needed.
    Joe
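The local-vs-remote distinction can be made concrete: with a shared
symmetric key, either holder can produce a valid tag, so the MAC
authenticates between the two parties but proves nothing to an outside
auditor. A small Python sketch (the key and messages are invented for
illustration):

```python
import hashlib
import hmac

KEY = b"shared between alice and bob"   # both endpoints hold this key

def tag(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

# Bob can verify a message from Alice (local authenticity)...
alice_msg = b"transfer $100"
assert hmac.compare_digest(tag(alice_msg), tag(alice_msg))

# ...but Bob can mint an equally valid tag on a message Alice never
# sent, so a third party cannot tell who authored either message
# (no remote authenticity, and hence no non-repudiation to audit).
bobs_forgery_tag = tag(b"transfer $9999")
```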




Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread Joseph Ashwood
- Original Message - 
From: "Hal Finney" <[EMAIL PROTECTED]>

To: <[EMAIL PROTECTED]>; 
Sent: Wednesday, February 06, 2008 8:54 AM
Subject: Re: questions on RFC2631 and DH key agreement



> Joseph Ashwood writes, regarding unauthenticated DH:
> > I would actually recommend sending all the public data. This does not
> > take significant additional space and allows more verification to be
> > performed. I would also suggest looking at what exactly the goal is. As
> > written this provides no authentication, just privacy, and if b uses the
> > same private key to generate multiple yb the value of b will slowly leak.
>
> I'm not familiar with this last claim, that the value of b's private key
> (presuming that is what you mean) would slowly leak if it were reused for
> many DH exchanges. Can you explain what you mean? Are you talking about
> Lim&Lee style attacks where the recipient does not check the parameters
> for validity? In that case I would say the private exponent would leak
> quickly rather than slowly. But if the parameters are checked, I don't
> see how that would leak a reused exponent.

I am not immediately aware of any known attacks that have been published
about it, but it is fairly obvious that Eve has more information about the
private key by having a second key set with the same unknown. With only a
single pair Eve's information set is:

g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1

By adding the second key set Eve now has

g_1,p_1,q_1,y_1 where y_1 = g_1^x mod p_1
g_2,p_2,q_2,y_2 where y_2 = g_2^x mod p_2

This is obviously additional information, and with each additional key set
_i Eve eventually has the information to guess x with improved probability.





> > You can then use the gpq trio for DSA, leveraging the key set for more
> > capabilities.
>
> Presuming here you mean (g,p,q) as suitable for reuse. This raises the
> question, is the same set of (g,p,q) parameters suitable for use in both
> DH exchange and DSA signatures?
>
> From the security engineering perspective, I'd suggest that the goals and
> threat models for encryption vs signatures are different enough that one
> would prefer different parameters for the two.

I agree with that; presuming that the private key values are different,
there is at least no harm in using different parameters, and it avoids some
possible avenues of attack.

> For DSA signatures, we'd like small subgroups, since the subgroup size
> determines the signature size. This constraint is not present with DH
> encryption, where a large subgroup will work as well as a small one.
> Large subgroups can then support larger private exponents in the DH
> exchange.

Actually there is nothing stopping parameters for DSA from being prime(160
bit)*prime(5 bit)*2+1, which would have a large enough subgroup as to be
effectively unbreakable. Now obviously 5 bits is excessive, but my point
is that finding p with a moderately sized subgroup q and a large additional
subgroup is entirely possible, even though it is arguably unnecessary.





> Now it may be argued that large subgroups do not actually increase
> security in the DH exchange, because index calculus methods are
> independent of subgroup size. In fact, parameters for DSA signatures
> are typically chosen so that subgroup based methods such as Shanks that
> take sqrt(q) cost are balanced against estimates of index calculus
> work to break p. However, this balancing is inherently uncertain and
> it's possible that p-based attacks will turn out to be harder than ones
> based on q. Hence one would prefer to use a larger q to provide a margin
> of safety if the costs are not too high.

I would consider that, except for (semi)ephemeral parameters, the cost of
finding an appropriate prime is minor relative to the other considerations.
This is especially true with signature parameters, where a signing pair can
be worth more than all the data authenticated by it.



> While there is a computational cost to using a larger subgroup for DH
> exchange, there is no data cost, while for DSA there are both
> computational and data costs. Therefore the tradeoffs for DH would tend
> to be different than for DSA, and a larger q would be preferred for DH,
> all else equal. In fact it is rather common in DH parameter sets to use
> Sophie-Germain primes for q.

I don't know if they are "common" but they are definitely a good idea, or at
the very least using parameters with very large factors of p-1. Primes of
the form q*k+1 for small k are certainly a good idea.



We may also consider that breaking encryption keys is a passive
attack which can be mounted over a larger period of time (potentially
providing useful information even years after the keys were retired)
and is largely undetectable; while breaking signatures, to be useful,
must be performed actively, carries risks of detection, and must be
completed within a limited time frame. All these considerations motivate
using larger parameter sets for DH encryption than for DSA signatures.


I'm not as certain about that last point. My experie

Re: Gutmann Soundwave Therapy

2008-02-09 Thread Peter Gutmann
"Leichter, Jerry" <[EMAIL PROTECTED]> writes:

>All of this ignores a significant issue:  Are keying and encryption (and
>authentication) mechanisms really independent of each other? I'm not aware of
>much work in this direction.

Is there much work to be done here?  If you view the keyex mechanism as a
producer of an authenticated blob of shared secrecy and the post-keyex
portions (data transfer or whatever you're doing) as a consumer of said blob,
with a PRF as impedance-matcher (as is done by SSL/TLS, SSH, IPsec, ..., with
varying degrees of aplomb, and in a more limited store-and-forward context
PGP, S/MIME, ...), is there much more to consider?

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread "Hal Finney"
Hi Jeff -
> How wise (in a real-world sense) is it, in a protocol specification, to 
> specify that one simply obtain an ostensibly random value, and then use that 
> value directly as the signature key in, say, an HMAC-based signature, without 
> any further stipulated checking and/or massaging of the value before such use?

I think it's OK, as long as it is understood that the random number
generator should be of good quality. If it is not, I don't know that
checking and/or massaging will help much.
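A minimal sketch of the point being made, using only Python standard-library calls (the message text is hypothetical): a key drawn straight from a CSPRNG needs no further massaging, because HMAC treats its key as an opaque byte string, so uniformly random bytes are already as good as it gets.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # 256-bit unguessable key from the OS CSPRNG
tag = hmac.new(key, b"example message", hashlib.sha256).digest()

# Verification recomputes the tag and compares in constant time.
tag2 = hmac.new(key, b"example message", hashlib.sha256).digest()
assert hmac.compare_digest(tag, tag2)
```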

> E.g., here's such a specification excerpt, and it is absolutely everything said 
> in the spec wrt obtaining said signature keys:
>
>   When generating MAC keys, the recommendations in [RFC1750] SHOULD be 
> followed.

One point: RFC1750 has been superseded by RFC4086.

>   ...
>   The quality of the protection provided by the MAC depends on the randomness 
> of
>   the shared MAC key, so it is important that an unguessable value be used.
>
> How (un)wise is this, in a real-world sense? 

It seems pretty reasonable to me. They are referring to an RFC with
lots of good advice about random number generators, and they emphasize
that the key value should be unguessable. It's probably out of scope to
go into a lot more detail than that. Referring to other standards like
RFC1750/4086 is the right way to handle this kind of issue.

> [yes, I'm aware that using only a SHOULD here leaves a huge door open 
> compliance-wise, but that's a different issue]

I am the co-author of the OpenPGP Standard, RFC4880. All we say is:

   The sending OpenPGP generates a random number to be used as a
   session key for this message only.

and

   * Certain operations in this specification involve the use of random
 numbers.  An appropriate entropy source should be used to generate
 these numbers (see [RFC4086]).

Not all that different in thrust than the spec you are looking at.

Hal



Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread "Hal Finney"
Jeff Hodges wrote:
> Thanks for your thoughts on this Hal. Quite educational. 
>
> > Jeff Hodges wrote:
> > > It turns out the supplied default for p is 1024 bit -- I'd previously 
> > > goofed 
> > > when using wc on it..
> > >
> > > DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057
> > > F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7
> > > 4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0
> > > 5BDFF16F2FB22C583AB
> > 
> > This p is a "strong" prime, one where (p-1)/2 is also a prime, a good
> > property for a DH modulus.
>
> Ok, so what tools did you use to ascertain that? I'm curious. 

I copied and pasted it into Python as p, set p1 = (p-1)/2, and did
pow(2L,p1-1,p1), pow(3L,p1-1,p1) and a few such Fermat tests, always
getting 1 as the result, to confirm that p1 is prime. I then did
pow(2L,p1,p) and got p-1 rather than 1, which tells me that 2 generates
the whole group rather than the subgroup of order p1.
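For reference, the procedure described above translates directly into modern Python 3 (where the `2L`/`3L` long-integer suffixes are gone and `/` must become `//`); the assertions encode the results reported, assuming the hex above survived mail-wrapping intact.

```python
# The 1024-bit default p quoted earlier in the thread, re-joined
# across the wrapped mail lines.
hexp = (
    "DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057"
    "F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7"
    "4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0"
    "5BDFF16F2FB22C583AB"
)
p = int(hexp, 16)
p1 = (p - 1) // 2

# Fermat tests: a^(p1-1) == 1 (mod p1) for several bases strongly
# suggests p1 is prime, i.e. p is a "strong" (safe) prime.
for a in (2, 3, 5, 7):
    assert pow(a, p1 - 1, p1) == 1

# 2^p1 mod p == p-1 means 2 generates the whole group; if 2 lay in
# the order-p1 subgroup, this power would be 1 instead.
assert pow(2, p1, p) == p - 1
```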

Hal



Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread ' =JeffH '
I think I already know the answer to this question, but I just want to test my 
understanding...

How wise (in a real-world sense) is it, in a protocol specification, to 
specify that one simply obtain an ostensibly random value, and then use that 
value directly as the signature key in, say, an HMAC-based signature, without 
any further stipulated checking and/or massaging of the value before such use?

E.g., here's such a specification excerpt, and it is absolutely everything said 
in the spec wrt obtaining said signature keys:

  When generating MAC keys, the recommendations in [RFC1750] SHOULD be 
followed.
  ...
  The quality of the protection provided by the MAC depends on the randomness 
of
  the shared MAC key, so it is important that an unguessable value be used.

How (un)wise is this, in a real-world sense? 


[yes, I'm aware that using only a SHOULD here leaves a huge door open 
compliance-wise, but that's a different issue]

thanks,

=JeffH




Re: Gutmann Soundwave Therapy

2008-02-09 Thread Eric Rescorla
At Thu, 7 Feb 2008 14:42:36 -0500 (EST),
Leichter, Jerry wrote:
> | > Obviously, if you *really* use every k'th packet to define what is in
> | > fact a substream, an attacker can arrange to knock out the substream he
> | > has chosen to attack.  So you use your encryptor to permute the
> | > substreams, so there's no way to tell from the outside which packet is
> | > part of which substream.  Also, you want to make sure that a packet
> | > containing checksums is externally indistinguishable from one containing
> | > data.  Finally, the checksum packet inherently has higher - and much
> | > longer-lived - semantic value, so you want to be able to request that
> | > *it* be resent.  Presumably protocols that are willing to survive data
> | > loss still have some mechanism for control information and such that
> | > *must* be delivered, even if delayed.
> | 
> | This basically doesn't work for VoIP, where latency is a real issue.
> It lets the receiver make a choice:  Deliver the data immediately,
> avoiding the latency at the cost of possibly releasing bogus data (which
> we'll find out about, and report, later); or hold off on releasing the
> data until you know it's good, at the cost of introducing audible
> artifacts.  In non-latency-sensitive designs, the prudent approach is to
> never allow data out of the cryptographic envelope until you've
> authenticated it.  Here, you should probably be willing to do that, on
> the assumption that the "application layer" - a human being - will know
> how to react if you tell him "authentication has failed, please
> disregard what you heard in the last 10 seconds".

Well, since there's a much simpler procedure that accepts ~5-10% overhead, this 
doesn't seem like a particularly attractive design.

-Ekr



Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread ' =JeffH '
Thanks for your thoughts on this Hal. Quite educational. 

> Jeff Hodges wrote:
> > It turns out the supplied default for p is 1024 bit -- I'd previously 
> > goofed 
> > when using wc on it..
> >
> > DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057
> > F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7
> > 4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0
> > 5BDFF16F2FB22C583AB
> 
> This p is a "strong" prime, one where (p-1)/2 is also a prime, a good
> property for a DH modulus.

Ok, so what tools did you use to ascertain that? I'm curious. 


> The generator g=2 generates the entire group,
> which is an OK choice. 

Same here.


> But that shouldn't matter,
> the shared secret should be hashed and/or used as the seed of a PRNG to
> generate further keys.

It is hashed, but isn't used to generate further keys. I'm crafting a review of the 
full DH exchange in the target spec that I'll post to the list today.


> The main problem as I said is that 1024 bit moduli are no longer
> considered sufficiently safe for more than casual purposes.

That's what I thought. 


> Particularly
> with discrete logs that use a widely-shared modulus like the one above,
> it would not be surprising to see it publicly broken in the next 5-10
> years, or perhaps even sooner. And if a public effort can accomplish it
> in a few years, conservatively we should assume that well funded secret
> efforts could already succeed today.

Yep. 


thanks again,

=JeffH




RE: Gutmann Soundwave Therapy

2008-02-09 Thread Crawford Nathan-HMGT87
>> humans are not going to carry around large strong secrets every time
>> either end of the connection restarts

Which is what makes good crypto challenging.  I think, though, that
because people can understand the concept of physical locks and keys,
that this should be carried forward...

Good security is based on something you have, something you know, and
something you are.  While the third case would be rather difficult to
reliably implement on a mass-market scale, the former two are not
difficult at all.  Especially now that USB drives and CDROMs are the
de facto media standard.

Passwords do have known weaknesses - people tend to pick easily
remembered (and easily guessed) passwords.  However, when used in
combination with an external key, the security damage is at least
partially mitigated.  A system which relied on both would probably be
more secure than one which simply relied on the user entering their
password.

I've been floating the idea of selling keys on removable media.  The
core idea would be that if you get the user to use crypto keys in a
manner similar to the way they use physical keys, that you could avoid a
substantial amount of confusion, and would keep them from doing insecure
things.  You know, the typical weak password kind of thing.  If the user
has to physically plug in a USB stick for every secure session:

1.) It mitigates to a small degree the danger of key leakage because the
keys are only present on the system for small periods of time.
2.) It makes it easier for the user to use crypto - a weak password to
access the key database does not carry forward to a weak password for
session purposes.  That is, the user can have a weak key database
password without compromising the security of the underlying crypto used
for the session.

The problem is not that strong crypto is elusive, but rather, that using
it is often non-intuitive for the average user.  An unusable, or
error-prone crypto system is often worse than having none at all.
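The "key on removable media" scheme above could be sketched as follows, with entirely hypothetical names: the session key is derived from both a high-entropy keyfile on the USB stick and the user's (possibly weak) password, so neither factor alone suffices.

```python
import hashlib
import os

def derive_session_key(keyfile_bytes: bytes, password: str) -> bytes:
    # PBKDF2 stretches the password; the keyfile acts as a secret,
    # high-entropy salt, so a weak password does not by itself
    # compromise the derived session key.
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), keyfile_bytes, 100_000
    )

keyfile = os.urandom(32)  # stands in for the key stored on the stick

k1 = derive_session_key(keyfile, "weak password")
k2 = derive_session_key(keyfile, "weak password")
k3 = derive_session_key(keyfile, "other password")
assert k1 == k2          # deterministic for the same two factors
assert k1 != k3          # either factor changing changes the key
```

This is only a sketch of the two-factor idea, not a description of any shipping product; a real design would also worry about keyfile backup, revocation, and what happens when the stick is lost.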




Re: Gutmann Soundwave Therapy

2008-02-09 Thread Leichter, Jerry
| So, this issue has been addressed in the broadcast signature context
| where you do a two-stage hash-and-sign reduction (cf. [PG01]), but
| when this only really works because hashes are a lot more efficient
| than signatures. I don't see why it helps with MACs.
Thanks for the reference.

| > Obviously, if you *really* use every k'th packet to define what is in
| > fact a substream, an attacker can arrange to knock out the substream he
| > has chosen to attack.  So you use your encryptor to permute the
| > substreams, so there's no way to tell from the outside which packet is
| > part of which substream.  Also, you want to make sure that a packet
| > containing checksums is externally indistinguishable from one containing
| > data.  Finally, the checksum packet inherently has higher - and much
| > longer-lived - semantic value, so you want to be able to request that
| > *it* be resent.  Presumably protocols that are willing to survive data
| > loss still have some mechanism for control information and such that
| > *must* be delivered, even if delayed.
| 
| This basically doesn't work for VoIP, where latency is a real issue.
It lets the receiver make a choice:  Deliver the data immediately,
avoiding the latency at the cost of possibly releasing bogus data (which
we'll find out about, and report, later); or hold off on releasing the
data until you know it's good, at the cost of introducing audible
artifacts.  In non-latency-sensitive designs, the prudent approach is to
never allow data out of the cryptographic envelope until you've
authenticated it.  Here, you should probably be willing to do that, on
the assumption that the "application layer" - a human being - will know
how to react if you tell him "authentication has failed, please
disregard what you heard in the last 10 seconds".  (If you record the
data, the human being doesn't have to rely on memory - you can tell him
exactly where things went south.)  There are certainly situations where
this isn't good enough - e.g., if you're telling a fighter pilot to fire
a missile, a fake command may be impossible to countermand in time to
avoid damage - but that's pretty rare.
-- Jerry



Re: questions on RFC2631 and DH key agreement

2008-02-09 Thread "Hal Finney"
Jeff Hodges wrote:
> It turns out the supplied default for p is 1024 bit -- I'd previously goofed 
> when using wc on it..
>
> DCF93A0B883972EC0E19989AC5A2CE310E1D37717E8D9571BB7623731866E61EF75A2E27898B057
> F9891C2E27A639C3F29B60814581CD3B2CA3986D2683705577D45C2E7E52DC81C7A171876E5CEA7
> 4B1448BFDFAF18828EFD2519F14E45E3826634AF1949E5B535CC829A483B8A76223E5D490A257F0
> 5BDFF16F2FB22C583AB

This p is a "strong" prime, one where (p-1)/2 is also a prime, a good
property for a DH modulus. The generator g=2 generates the entire group,
which is an OK choice. It means that one bit of the shared secret is
leaked (whether or not it is a quadratic residue, i.e. whether the
discrete log of the number is even or odd). But that shouldn't matter,
the shared secret should be hashed and/or used as the seed of a PRNG to
generate further keys.
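The "hash and/or seed a PRNG" step mentioned above can be sketched as a minimal counter-mode KDF (the labels and sizes here are hypothetical, not from any particular standard): hash the DH shared secret together with a counter to spin out as many key bytes as the session needs.

```python
import hashlib

def kdf(shared_secret: int, n_bytes: int) -> bytes:
    # Encode the shared secret as big-endian bytes, then hash it with
    # an incrementing counter until enough output has accumulated.
    z = shared_secret.to_bytes((shared_secret.bit_length() + 7) // 8, "big")
    out = b""
    counter = 1
    while len(out) < n_bytes:
        out += hashlib.sha256(z + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n_bytes]

# e.g. 32 bytes of cipher key plus 16 bytes of IV from one DH secret
keys = kdf(0x1234567890ABCDEF, 48)
assert len(keys) == 48
```

Among other things, this launders away the one leaked bit (quadratic residuosity) discussed above, since the derived keys are computationally indistinguishable from random regardless.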

The main problem as I said is that 1024 bit moduli are no longer
considered sufficiently safe for more than casual purposes. Particularly
with discrete logs that use a widely-shared modulus like the one above,
it would not be surprising to see it publicly broken in the next 5-10
years, or perhaps even sooner. And if a public effort can accomplish it
in a few years, conservatively we should assume that well funded secret
efforts could already succeed today.

Hal Finney



customs searching laptops, demanding passwords

2008-02-09 Thread John Denker
I quote from
  
http://www.washingtonpost.com/wp-dyn/content/article/2008/02/06/AR2008020604763_pf.html
  By Ellen Nakashima
  Washington Post Staff Writer  
  Thursday, February 7, 2008; A01

> The seizure of electronics at U.S. borders has prompted protests from
> travelers who say they now weigh the risk of traveling with sensitive
> or personal information on their laptops, cameras or cellphones. In
> some cases, companies have altered their policies to require
> employees to safeguard corporate secrets by clearing laptop hard
> drives before international travel.
> 
> Today, the Electronic Frontier Foundation and Asian Law Caucus, two
> civil liberties groups in San Francisco, plan to file a lawsuit to
> force the government to disclose its policies on border searches,
> including which rules govern the seizing and copying of the contents
> of electronic devices.

=

Most of the underlying issue is not new;  a Joe Sharkey article
about customs seizures of laptops appeared in the NY Times back 
on October 24, 2006.  And it has been discussed on this list.
(The news "hook" here is the filing of the lawsuit.)

One wrinkle that was not previously reported is the bit about
customs officers demanding passwords.  That is something I
have thought about, off and on, and the more I think about it 
the more worrisome it seems.

A) Here's one particularly nasty scenario:  Long ago, the traveler
experimented with using an encrypting filesystem, perhaps the 
dm-crypt feature of Linux.  However, he decided it wasn't worth 
the trouble and forgot about it.  This includes forgetting the 
passphrase.  Now he's at the border, and customs is demanding 
the passphrase.   
 -- Just tell us the password.
 -- I forgot.
 -- No you didn't.
 -- Yes I did.
 -- You're lying.
 -- No I'm not.
 -- Yes you are.
 -- No I'm not.
 -- Just tell us the password.
 -- et cetera.

B) Another scenario:  Your employer adopts a policy requiring
you to use a "blank" laptop when traveling, as mentioned in 
the news article.  They also require you to use an encrypting
filesystem, even when not traveling.  They discover that the
easiest way to "blankify" your laptop is to overwrite the IVs
of the encrypting filesystem.  Now any and all passphrases
will fail in the same way:  they all look like "wrong"
passphrases.  Now we are back to scenario (A), because customs might 
assume you're just lying about the passphrase.

C) Another scenario:  Customs confiscates the laptop.  They 
say that you won't get it back unless/until you give up the 
passphrase.

D) Tangential observation:  If they were being reasonable, they 
would confiscate at most the disk drive, and let you keep the 
rest of the hardware.  But they're under no obligation to be 
reasonable.

E) Remark:  The fundamental problem underlying this whole 
discussion is that the traveler is in a position where he has 
to prove his innocence ... which may not be possible, even if 
he is innocent.

The doctrine of innocent-until-proven-guilty does *not* apply
to customs searches.  Ditto for the doctrine of requiring
probable cause, search warrants, et cetera.

F) A good way (not the easiest way) to "blankify" a laptop
is to remove the hard disk and replace it with a brand-new
obviously-innocuous disk.  (Small, slow disks are very cheap.)
When you get home from your travels, you can undo the switch.

G) It is fun to think about a steganographic filesystem, with
the property that if you mount it with one passphrase you see
one set of files, while if you mount it with another passphrase
you see another set of files.  

The point here is that you give up one passphrase, they never
know if there is a second;  if you give up two passphrases,
they never know if there is a third, et cetera.

Note that we are talking about cryptologically-strong stego
here (as opposed to weak stego which falls into the category
of security-by-obscurity).

From an information-theory point of view this is perfectly 
straightforward;  solutions have been worked out in connection 
with code division multiplexing.  However, I reckon it would
have serious performance problems when applied to a hard disk.  
If anybody knows how to do this in practice, please speak up!



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Eric Rescorla
At Thu, 7 Feb 2008 10:34:42 -0500 (EST),
Leichter, Jerry wrote:
> | Since (by definition) you don't have a copy of the packet you've lost,
> | you need a MAC that survives that--and is still compact. This makes
> | life rather more complicated. I'm not up on the most recent lossy
> | MACing literature, but I'm unaware of any computationally efficient
> | technique which has a MAC of the same size with a similar security
> | level. (There's an inefficient technique of having the MAC cover all
> | 2^50 combinations of packet loss, but that's both prohibitively
> | expensive and loses you significant security.)
> My suggestion for a quick fix:  There's some bound on the packet loss
> rate beyond which your protocol will fail for other reasons.  If you
> maintain separate MAC's for each k'th packet sent, and then deliver k
> checksums periodically - with the collection of checksums itself MAC'ed,
> a receiver should be able to check most of the checksums, and can reset
> itself for the others (assuming you use a checksum with some kind of
> prefix-extension property; you may have to send redundant information
> to allow that, or allow the receiver to ask for more info to recover).

So, this issue has been addressed in the broadcast signature context
where you do a two-stage hash-and-sign reduction (cf. [PG01]), but
this only really works because hashes are a lot more efficient
than signatures. I don't see why it helps with MACs.


> Obviously, if you *really* use every k'th packet to define what is in
> fact a substream, an attacker can arrange to knock out the substream he
> has chosen to attack.  So you use your encryptor to permute the
> substreams, so there's no way to tell from the outside which packet is
> part of which substream.  Also, you want to make sure that a packet
> containing checksums is externally indistinguishable from one containing
> data.  Finally, the checksum packet inherently has higher - and much
> longer-lived - semantic value, so you want to be able to request that
> *it* be resent.  Presumably protocols that are willing to survive data
> loss still have some mechanism for control information and such that
> *must* be delivered, even if delayed.

This basically doesn't work for VoIP, where latency is a real issue.


-Ekr

[PG01] Philippe Golle, Nagendra Modadugu: Authenticating Streamed Data in the 
Presence of
Random Packet Loss. NDSS 2001



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Leichter, Jerry
| >I don't propose to get into an extended debate about whether it is
| >better to use SRTP or to use generic DTLS. That debate has already
| >happened in IETF and SRTP is what the VoIP vendors are
| >doing. However, the good news here is that you can use DTLS to key
| >SRTP (draft-ietf-avt-dtls-srtp), so there's no need to invent a new
| >key management scheme.
| 
| Hmm, given this X-to-key-Y pattern (your DTLS-for-SRTP example, as
| well as OpenVPN using ESP with TLS keying), I wonder if it's worth
| unbundling the key exchange from the transport?
A system I designed has this property:  You can choose the key exchange
mechanism separately from the encryption mechanism.  In fact, the
end user can select this (though generally he chooses one of a number
of pre-defined options, which internally are just macros).  The
encryption mechanism is able to enforce a quality constraint on which
keying mechanisms it's willing to deal with - e.g., only the NULL
encryption mechanism is willing to accept the "NO_KEY" key exchange.

I did make a simplifying assumption that there is a linear ranking
of quality for keying mechanisms, so that what an encryptor actually
specifies is "at least this strength".  There's a similar assumed
ranking for encryption mechanisms.  Negotiation is done by having
each end specify which keying and encryption mechanisms it is
willing to use (those it implements, filtered by user-specified
constraints), and then choosing the "strongest" in the intersection
of the mechanisms common to both.  In principle, one could similarly
choose an authentication mechanism.

The linear ranking worked in the particular situation where I designed
this but isn't generalizable.  Without that, things get much more
complex - you lose the nice property of the current implementation
that the two ends need merely exchange what they implement, and then
proceed independently to choose the "best" among the available
choices (and always come to the same conclusions).
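A toy version of that negotiation, with hypothetical mechanism names and ranks: each side offers what it implements (filtered by user constraints), and both independently pick the strongest mechanism in the intersection, so they reach the same answer without a further round trip.

```python
# Linear quality ranking of encryption mechanisms (illustrative only).
RANK = {"NULL": 0, "DES": 1, "3DES": 2, "AES128": 3, "AES256": 4}

def negotiate(ours, theirs):
    # Both ends run this same deterministic choice over the exchanged
    # offer lists, so no extra agreement message is needed.
    common = set(ours) & set(theirs)
    if not common:
        raise ValueError("no mechanism in common")
    return max(common, key=RANK.get)

# The strongest mechanism both sides implement wins.
assert negotiate(["AES256", "3DES", "NULL"], ["AES128", "3DES"]) == "3DES"
```

As the text notes, this only works cleanly because the ranking is assumed linear; with a partial order, the two ends could legitimately disagree about which common mechanism is "best".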

All of this ignores a significant issue:  Are keying and encryption
(and authentication) mechanisms really independent of each other?
I'm not aware of much work in this direction.  Most of what's out
there is negative results that, on the one hand, tell you that
general independence theorems are impossible; but on the other,
they tend to be based on clearly pathological combinations, which
hints that independence theorems *might* be possible, if we knew
how to constrain the different components to avoid the pathologies.

-- Jerry



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Richard Salz
> Thus unlike with bridges, you fundamentally can't 
> evaluate the quality of a security system you built if you're unfamiliar 
> with the state of the art of _attacks_ against security systems, and you 
> can't become familiar with those unless you realize that these attacks 
> have each brought down a system previously considered impregnable.

I don't see how this invalidates my analogy.

In 1940 they didn't understand wind-induced vibration, and yet 
it brought down the Tacoma Narrows bridge.  A few years ago we didn't know 
much about hash collisions, yet since then the field has brought down MD5.

If the field isn't codified, all the more reason to spread knowledge 
rather than encourage a priesthood.

/r$

--
STSM, DataPower Chief Programmer
WebSphere DataPower SOA Appliances
http://www.ibm.com/software/integration/datapower/



Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-09 Thread Victor Duchovni
On Thu, Feb 07, 2008 at 08:47:20PM +1300, Peter Gutmann wrote:

> Victor Duchovni <[EMAIL PROTECTED]> writes:
> 
> >While Firefox should ideally be developing and testing PSK now, without
> >stable libraries to use in servers and browsers, we can't yet expect anything
> >to be released.
> 
> Is that the FF developers' reason for holding back?  Just wondering... why not
> release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the beta
> stage, it'd be the perfect time to test new features), tested against existing
> implementations, then at least it's ready for when server support appears.  At
> the moment we seem to be in a catch-22, servers don't support it because
> browsers don't, and browsers don't support it because servers don't.

I don't have any idea why or why not, but all they can release now is
source code with #ifdef openssl >= 0.9.9  ... do PSK stuff ... #endif,
with binaries (dynamically) linked against the default OpenSSL on the
oldest supported release of each platform... For RedHat 4.x systems,
for example, that means that binary packages use 0.9.7...

Distributions that build their own Firefox from source may at some point
have PSK (once they ship OpenSSL 0.9.9). I don't think we will see this
available in many users' hands for 2-3 years after the code is written
(fielding new systems to the masses takes a long time...).

-- 

 /"\ ASCII RIBBON  NOTICE: If received in error,
 \ / CAMPAIGN Victor Duchovni  please destroy and notify
  X AGAINST   IT Security, sender. Sender does not waive
 / \ HTML MAILMorgan Stanley   confidentiality or privilege,
   and use is prohibited.



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Leichter, Jerry
| > - Truncate the MAC to, say, 4 bytes.  Yes, a simple brute
| > force attack lets one forge so short a MAC - but
| > is such an attack practically mountable in real
| > time by attackers who concern you?
| 
| In fact, 32-bit authentication tags are a feature of
| SRTP (RFC 3711). 
Great minds run in the same ruts.  :-)

| > - Even simpler, send only one MAC every second - i.e.,
| > every 50 packets, for the assumed parameters.
| > Yes, an attacker can insert a second's worth
| > of false audio - after which he's caught.  I
| > suppose one could come up with scenarios in
| > which that matters - but they are very specialized.
| > VOIP is for talking to human beings, and for
| > human beings in all but extraordinary circumstances
| > a second is a very short time.
| 
| Not sending a MAC on every packet has difficult interactions with
| packet loss. If you do the naive thing and every N packets send a MAC
| covering the previous N packets, then if you lose even one of those
| packets you can't verify the MAC. But since some packet loss is
| normal, an attacker can cover their tracks simply by removing one out
| of every N packets.
*Blush*.  Talk about running in the same ruts.  I was specifically
talking about dealing with lossy datagram connections, but when I came
to making a suggestion, suggested one I'd previously considered for
non-lossy stream connections.  Streams are so much easier to reason
about - it's easy to get caught.  (It's also all too easy to forget
that no stream implementation really implements the abstract semantics
of a reliable stream - which is irrelevant in some cases, but very
significant in others.)

| Since (by definition) you don't have a copy of the packet you've lost,
| you need a MAC that survives that--and is still compact. This makes
| life rather more complicated. I'm not up on the most recent lossy
| MACing literature, but I'm unaware of any computationally efficient
| technique which has a MAC of the same size with a similar security
| level. (There's an inefficient technique of having the MAC cover all
| 2^50 combinations of packet loss, but that's both prohibitively
| expensive and loses you significant security.)
My suggestion for a quick fix:  There's some bound on the packet loss
rate beyond which your protocol will fail for other reasons.  If you
maintain separate MAC's for each k'th packet sent, and then deliver k
checksums periodically - with the collection of checksums itself MAC'ed,
a receiver should be able to check most of the checksums, and can reset
itself for the others (assuming you use a checksum with some kind of
prefix-extension property; you may have to send redundant information
to allow that, or allow the receiver to ask for more info to recover).

Obviously, if you *really* use every k'th packet to define what is in
fact a substream, an attacker can arrange to knock out the substream he
has chosen to attack.  So you use your encryptor to permute the
substreams, so there's no way to tell from the outside which packet is
part of which substream.  Also, you want to make sure that a packet
containing checksums is externally indistinguishable from one containing
data.  Finally, the checksum packet inherently has higher - and much
longer-lived - semantic value, so you want to be able to request that
*it* be resent.  Presumably protocols that are willing to survive data
loss still have some mechanism for control information and such that
*must* be delivered, even if delayed.

Tons of hand-waving there; at the least, you have to adjust k and
perhaps other parameters to trade off security and overhead.  I'm
pretty sure something along these lines could be done, but it's
certainly not off-the-shelf.
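A minimal sketch of the interleaved-MAC idea above (all names and the
parameter K are illustrative, not from the original post): HMAC's running
state supplies the prefix-extension property mentioned, and a keyed PRF
hides from an outside attacker which packet feeds which substream.

```python
import hmac, hashlib

K = 4  # number of interleaved substreams (illustrative parameter)

def substream_of(perm_key: bytes, seq: int) -> int:
    # Keyed permutation of packets into substreams, so an attacker
    # can't tell from the outside which substream a packet belongs to.
    d = hmac.new(perm_key, seq.to_bytes(8, "big"), hashlib.sha256).digest()
    return d[0] % K

class InterleavedMACs:
    """One running MAC per substream; all K tags emitted at a checkpoint."""
    def __init__(self, mac_key: bytes, perm_key: bytes):
        self.mac_key = mac_key
        self.perm_key = perm_key
        # Domain-separate the K substream MACs by their index.
        self.macs = [hmac.new(mac_key, b"substream-%d" % i, hashlib.sha256)
                     for i in range(K)]

    def absorb(self, seq: int, packet: bytes) -> None:
        self.macs[substream_of(self.perm_key, seq)].update(packet)

    def checkpoint(self) -> bytes:
        # K per-substream tags, with the collection itself MACed so the
        # checksum packet can't be tampered with as a whole.
        tags = b"".join(m.copy().digest() for m in self.macs)
        outer = hmac.new(self.mac_key, tags, hashlib.sha256).digest()
        return tags + outer
```

A receiver that lost a packet can still verify the K-1 tags for the
substreams it received intact, and resynchronise only the affected one.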
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Want to drive a Jaguar?

2008-02-09 Thread Peter Gutmann
  http://eprint.iacr.org/2008/058
  
  Physical Cryptanalysis of KeeLoq Code Hopping Applications

  Recently, some mathematical weaknesses of the KeeLoq algorithm have been
  reported. All of the proposed attacks need at least 2^16 known or chosen
  plaintexts. In real-world applications of KeeLoq, especially in remote
  keyless entry systems using a so-called code hopping mechanism, obtaining
  this amount of plaintext-ciphertext pairs is rather impractical. We present
  the first successful DPA attacks on numerous commercially available products
  employing KeeLoq code hopping. Using our proposed techniques we are able to
  reveal not only the secret key of remote transmitters in less than one hour,
  but also the manufacturer key of receivers in less than one day. Knowing the
  manufacturer key allows for creating an arbitrary number of valid
  transmitter keys.

KeeLoq is used in large numbers of car keyless-entry systems.  Ouch.

Peter.



Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-09 Thread Peter Gutmann
Victor Duchovni <[EMAIL PROTECTED]> writes:

>While Firefox should ideally be developing and testing PSK now, without
>stable libraries to use in servers and browsers, we can't yet expect anything
>to be released.

Is that the FF developers' reason for holding back?  Just wondering... why not
release it with TLS-PSK/SRP anyway (particularly with 3.0 being in the beta
stage, it'd be the perfect time to test new features), tested against existing
implementations, then at least it's ready for when server support appears.  At
the moment we seem to be in a catch-22, servers don't support it because
browsers don't, and browsers don't support it because servers don't.

Peter.



Re: Dutch Transport Card Broken

2008-02-09 Thread Peter Gutmann
"Steven M. Bellovin" <[EMAIL PROTECTED]> writes:

>There's another issue: initial account setup.  People will still need to rely
>on certificate-checking for that.  It's a real problem at some hotspots,
>where Evil Twin attacks are easy and lots of casual users are signing up for
>the first time.

It really depends on the value of the account, for high-value ones I would
hope it's done out-of-band (so you can't just sign up for online banking by
going to a bank's purported web page and saying "Hi, I'm Bob, give me access
to my account"), and for low-value stuff like Facebook I'm not sure how much
effort your password is worth to an attacker when they can get a million
others from the same site.  I agree that it's still a problem, but switching
to failsafe auth is a major attack surface reduction since now an attacker has
to be there at the initial signup rather than at any arbitrary time of their
choosing.  It's turning an open channel into a time- and location-limited
channel.

Peter.



Re: Dutch Transport Card Broken

2008-02-09 Thread Steven M. Bellovin
On Thu, 07 Feb 2008 17:37:02 +1300
[EMAIL PROTECTED] (Peter Gutmann) wrote:

> The real issues occur in two locations:
> 
> 1. In the browser UI.
> 2. In the server processing, which no longer gets the password via an
> HTTP POST but as a side-effect of the TLS connect.
> 
> (1) is a one-off cost for the browser developers, (2) is a bit more
> complex to estimate because it's on a per-site basis, but in general
> since the raw data (username+pw) is already present it's mostly a
> case of redoing the data flow a bit, and not necessarily rebuilding
> the whole system from scratch.  To give one example, a healthcare
> provider, they currently trigger an SQL query from an HTTP POST that
> looks up the password with the username as key, and the change would
> be to do the same thing at the TLS stage rather than the post-TLS
> HTTP stage.

There's another issue: initial account setup.  People will still need
to rely on certificate-checking for that.  It's a real problem at some
hotspots, where Evil Twin attacks are easy and lots of casual users are
signing up for the first time.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: TLS-SRP & TLS-PSK support in browsers (Re: Dutch Transport Card Broken)

2008-02-09 Thread Peter Gutmann
Frank Siebenlist <[EMAIL PROTECTED]> writes:

>With the big browser war still going strong, wouldn't that provide fantastic
>marketing opportunities for Firefox?

There's always the problem of politics.  You'd think that support for a free
CA like CAcert would also provide fantastic marketing opportunities for a
free browser like Firefox, but this seems to be stalled pretty much
indefinitely: since CAcert doesn't charge for certificates, including it in
Firefox would upset the commercial CAs that do (there's actually a lot more
to it than this; see the interminable flamewars on this topic on blogs and
whatnot for more information).

>If Firefox would support these secure password protocols, and the banks would
>openly recommend their customers to use Firefox because it's safer and
>protects them better from phishing, that would be great publicity for
>Firefox, draw more users, and force M$ to support it too in the long run...

Here's a suggestion to list members:

- If you know a Firefox developer, go to them and tell them that TLS-PSK and
  TLS-SRP support would be a fantastic selling point and would allow Firefox
  to trump IE in terms of resisting phishing, which might encourage banks to
  recommend it to users in place of IE.

- If you know anyone with some clout at Microsoft, tell them that your
  organisation is thinking of mandating a switch to Firefox because IE doesn't
  support phish-resistant authentication like TLS-PSK/TLS-SRP, and since you
  have x million paying customers this won't look good for MS.

- If you work for any banking regulators (for example the FFIEC), require
  failsafe authentication (in which the remote site doesn't get a copy of your
  credentials if the authentication fails) rather than the current two-factor
  auth (which has led to farcical "two-factor" mechanisms like SiteKey).

Oh, and don't tell them I put you up to this :-).

Peter.



Re: Dutch Transport Card Broken

2008-02-09 Thread Peter Gutmann
"James A. Donald" <[EMAIL PROTECTED]> writes:

>However, it seems to me that logging into the website using SRP is a
>non-trivial refactoring, and not just a matter of dropping in TLS-SRP as a
>simple replacement of TLS-DSA-X509

I've discussed this with (so far) a small sample of assorted corporate TLS
users to get at least a general idea of what'd be involved.  At a very
abstract level all they see is "username + password + TLS" ->
"permitted/denied", the only change is that by moving the verification into
TLS this process happens a bit earlier than when it's done in HTML (and
obviously the failsafe nature means the other side never gets the password if
the auth fails).

At an implementation level it's also fairly simple: it's maybe 2-3 pages of
code added to my SSL implementation, and another SSL developer I spoke to
gave similar figures.  All you're doing is mixing a little extra keying
material into the premaster secret; it's not a major piece of programming.
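For the plain PSK ciphersuites the "mixing" really is this small; here is a
sketch of the premaster-secret construction from RFC 4279 section 2, where
for pure PSK key exchange the other_secret field is just zero bytes of the
same length as the PSK (DHE_PSK/RSA_PSK would carry the negotiated secret
there instead):

```python
import struct

def psk_premaster_secret(psk: bytes) -> bytes:
    # RFC 4279, section 2:
    #   premaster = uint16 len || other_secret || uint16 len || psk
    # For plain PSK key exchange, other_secret is len(psk) zero bytes.
    other_secret = b"\x00" * len(psk)
    return (struct.pack("!H", len(other_secret)) + other_secret +
            struct.pack("!H", len(psk)) + psk)
```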

The real issues occur in two locations:

1. In the browser UI.
2. In the server processing, which no longer gets the password via an HTTP
   POST but as a side-effect of the TLS connect.

(1) is a one-off cost for the browser developers, (2) is a bit more complex to
estimate because it's on a per-site basis, but in general since the raw data
(username+pw) is already present it's mostly a case of redoing the data flow a
bit, and not necessarily rebuilding the whole system from scratch.  To give
one example, a healthcare provider currently triggers an SQL query from an
HTTP POST that looks up the password with the username as key; the change
would be to do the same lookup at the TLS stage rather than at the post-TLS
HTTP stage.
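The server-side change for that healthcare example might amount to little
more than moving the existing query into a PSK identity callback.  A
hypothetical sketch (the table, column names, and callback shape are all
invented for illustration):

```python
import sqlite3

# Hypothetical credential store, standing in for the provider's database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT PRIMARY KEY, secret BLOB)")
db.execute("INSERT INTO users VALUES (?, ?)", ("alice", b"correct horse"))

def psk_lookup(identity: str):
    # The same username-keyed query that used to run on the HTTP POST,
    # now invoked by the TLS layer during the handshake.  Returning None
    # makes the handshake (and hence the auth) fail without the client's
    # credentials ever reaching the application layer.
    row = db.execute("SELECT secret FROM users WHERE username = ?",
                     (identity,)).fetchone()
    return row[0] if row else None
```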

Among the folks I've discussed this with, the concern has been far more "We
want this yesterday, why isn't it here yet" than "We can't integrate this
with our existing back-ends".

Peter.



Re: Gutmann Soundwave Therapy

2008-02-09 Thread Daniel Carosone
Others have made similar points and suggestions, not picking on this
instance in particular:

On Mon, Feb 04, 2008 at 02:48:08PM -0700, Martin James Cochran wrote:
> Additionally, in order to conserve bandwidth you might want to make a 
> trade-off where some packets may be forged with small probability (in the 
> VOIP case, that means an attacker gets to select a fraction of a second of 
> sound, which is probably harmless)

This is ok, if you consider the only threat to be against the final
endpoint: a human listening to a short-term, disposable conversation.
I can think of some counter-examples where these assumptions don't
hold:

 - A data-driven exploit against an implementation vulnerability in
   your codec of choice.  Always a possibility, but a risk you might
   rate differently (or a patch you might deploy on a different
   schedule) for conversations with known and trusted peers than you
   would for arbitrary peers, let alone maliciously-inserted traffic.
   How many image decoding vulnerabilities have we seen lately, again?

 - People have invented and do use such horribly-wrong things as
   fax-over-voip; while they seem to have some belief in their own
   business case, I may not have as much faith in their implementation
   robustness.
   
 - Where it's audio, but the audience is different such that the
   impact of short bursts of malicious sound is different: larger
   teleconferences, live interviews or reporting by journalists, and
   other occasions, particularly where the credibility of the speaker
   is important.  Fractions of a second of sound is all I might need to
   insert to .. er .. emulate Tourette's syndrome.  Fractions of a
   second of soundwave therapy could still be highly unpleasant or
   embarrassing.

Particularly for the first point, early validation for packet
integrity in general can be a useful defensive tool against unknown
potential implementation vulnerabilities.  I've used similar arguments
before around the use of keyed authentication of other protocols, such
as SNMPv3 and NTP.

It also reminds me of examples where cryptographic protections have
only covered certain fields in a header or message.  Attackers may
find novel ways to use the unprotected space, plus it just makes the
whole job of risk analysis at deployment orders of magnitude more
complex.

Without dismissing the rest of the economic arguments, when it comes
to these kinds of vulnerabilities, be very wary of giving an attacker
this inch; they may take a mile.

--
Dan.




Re: Gutmann Soundwave Therapy

2008-02-09 Thread dan

 > Amateurs talk about algorithms.  Professionals talk about economics.


That would be


  Amateurs study cryptography; professionals study economics.
  -- Allan Schiffman, 2 July 04


Quotationally yours,

--dan
