Re: [Cryptography] About those fingerprints ...

2013-09-11 Thread Tim Dierks
On Wed, Sep 11, 2013 at 1:13 PM, Jerry Leichter leich...@lrw.com wrote:

 On Sep 11, 2013, at 9:16 AM, Andrew W. Donoho a...@ddg.com wrote:
  Yesterday, Apple made the bold, unaudited claim that it will never save
 the fingerprint data outside of the A7 chip.
 By announcing it publicly, they put themselves on the line for lawsuits
 and regulatory actions all over the world if they've lied.

 Realistically, what would you audit?  All the hardware?  All the software,
 including all subsequent versions?

 This is about as strong an assurance as you could get from anything short
 of hardware and software you build yourself from very simple parts.


When it comes to litigation or actual examination, it's been demonstrated
again and again that people can hide behind their own definitions of terms
that you thought were self-evident. For example, the NSA's definitions of
"target," "collect," etc., which fly in the face of common understanding
and exploit the loopholes in English discourse. People can lie to you
without actually uttering a demonstrable falsehood or exposing themselves
to liability, unless you have the ability to cross-examine the assertions.

I don't have a precise cite for the Apple claim, but let's take two
summaries. First, from Andrew: "Apple made the bold, unaudited claim that it
will never save the fingerprint data outside of the A7 chip." Initial
questions: does this mean they won't send the data to third parties? How
about give third parties the ability to extract the data themselves? Does
the phrase "fingerprint data" include all data derived from the
fingerprint, such as minutiae?

Second, from Macworld
(http://www.macworld.com/article/2048520/fingerprint-sensor-in-iphone-5s-is-no-silver-bullet-researchers-say.html):
"the fingerprint data is encrypted and locked in the device’s new A7 chip,
that it’s never directly accessible to software and that it’s not stored on
Apple’s servers or backed up to iCloud." Similar questions: is the data
indirectly accessible? Is it stored on non-Apple servers? Etc.

Unless you can cross-examine the assertions with some kind of penalty for
dissembling, you can't be sure that an assertion means what you think or
hope it means, regardless of how straightforward and direct it sounds.

 - Tim
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread Tim Dierks
On Fri, Sep 6, 2013 at 3:03 AM, Kristian Gjøsteen 
kristian.gjost...@math.ntnu.no wrote:

 Has anyone, anywhere ever seen someone use Dual-EC-DRBG?

 I mean, who on earth would be daft enough to use the slowest possible
 DRBG? If this is the best NSA can do, they are over-hyped.


It's implemented in Windows and in a number of other libraries*; I can't
find any documentation on which points these implementations use. But I
agree that there's little technical reason to use it—however, who is to
know that a vendor couldn't be influenced to choose it?

In perusing the list of NIST validations, there are a number of cases where
Dual_EC_DRBG is the only listed mode, but all of them (with one exception)
are issued to companies that have other validations, generally on
similar products, so it just looks like they got multiple validations for
different modes. The one exception is Lancope, validation #288, which
validated their use of Dual_EC_DRBG, but no other modes. So it looks like
there's at least one implementation in use in the wild.

 - Tim

* The implementors that NIST lists
(http://csrc.nist.gov/groups/STM/cavp/documents/drbg/drbgval.html)
are:
RSA, Certicom, Cisco, Juniper, BlackBerry, OpenPeak, OpenSSL, Microsoft,
Mocana, ARX, Cummings Engineering Consultants, Catbird, Thales e-Security,
SafeLogic, Panzura, SafeNet, Kony, Riverbed, and Symantec. (I excluded
validations where the implementation clearly appears to be licensed, but
people can name it anything they want, and some of the above are probably
just OpenSSL forks, etc.)
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-05 Thread Tim Dierks
On Thu, Sep 5, 2013 at 4:57 PM, Perry E. Metzger pe...@piermont.com wrote:

 On Thu, 5 Sep 2013 16:53:15 -0400 Perry E. Metzger
 pe...@piermont.com wrote:
   Anyone recognize the standard?
 
  Please say it aloud. (I personally don't recognize the standard
  offhand, but my memory is poor that way.)

 There is now some speculation in places like twitter that this refers
 to Dual_EC_DRBG though I was not aware that was widely enough deployed
 to make a huge difference here, and am not sure which international
 group is being mentioned. I would be interested in confirmation.


I believe it is Dual_EC_DRBG. The ProPublica story
(http://www.propublica.org/article/the-nsas-secret-campaign-to-crack-undermine-internet-encryption)
says:

"Classified N.S.A. memos appear to confirm that the fatal weakness,
discovered by two Microsoft cryptographers in 2007, was engineered by the
agency. The N.S.A. wrote the standard and aggressively pushed it on the
international group, privately calling the effort ‘a challenge in finesse.’"

This appears to describe the NIST SP 800-90 situation pretty precisely. I
found Schneier's contemporaneous article to be good at refreshing my
memory:
http://www.wired.com/politics/security/commentary/securitymatters/2007/11/securitymatters_1115

 - Tim
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Today's XKCD is on password strength.

2011-08-10 Thread Tim Dierks
On Wed, Aug 10, 2011 at 10:12 AM, Perry E. Metzger pe...@piermont.com wrote:

 Today's XKCD is on password strength. The advice it gives is pretty
 good in principle...

 http://xkcd.com/936/


FWIW,
http://tim.dierks.org/2007/03/secure-in-browser-javascript-password.html

 - Tim
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: OpenID/Debian PRNG/DNS Cache poisoning advisory

2008-08-08 Thread Tim Dierks
[Sorry for duplicates, but I got multiple requests for a non-HTML
version, and I didn't want to fork the thread. Also sorry for
initially sending HTML; I didn't realize it was so abhorrent these
days. ]

On Fri, Aug 8, 2008 at 1:43 PM, Dan Kaminsky [EMAIL PROTECTED] wrote:

 It's easy to compute all the public keys that will be generated
 by the broken PRNG. The clients could embed that list and refuse
 to accept any certificate containing one of them. So, this
 is distinct from CRLs in that it doesn't require knowing which servers have 
 which cert...

 Funnily enough I was just working on this -- and found that we'd end up 
 adding a couple megabytes to every browser.  #DEFINE NONSTARTER.  I am 
 curious about the feasibility of a large bloom filter that fails back to 
 online checking though.  This has side effects but perhaps they can be made 
 statistically very unlikely, without blowing out the size of a browser.

Using this Bloom filter calculator:
http://www.cc.gatech.edu/~manolios/bloom-filters/calculator.html ,
plus the fact that there are 32,768 weak keys for every key type &
size, I get various sizes of necessary Bloom filter, based on how many
key type / sizes you want to check and various false positive rates:
 * 3 key types/sizes with 1e-6 false positive rate: 2826759 bits = 353 KB
 * 3 key types/sizes with 1e-9 false positive rate: 4240139 bits = 530 KB
 * 7 key types/sizes with 1e-6 false positive rate: 6595771 bits = 824 KB
 * 7 key types/sizes with 1e-9 false positive rate: 9893657 bits = 1237 KB

I presume that the first 3 & first 7 key type/sizes in this list
http://metasploit.com/users/hdm/tools/debian-openssl/ are the best to
incorporate into the filter.
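The sizes above follow the standard optimal Bloom filter formula, m = -n * ln(p) / (ln 2)^2. A short sketch (assuming, as above, 32,768 weak keys per key type/size; the function name is mine) reproduces the table:

```python
import math

WEAK_KEYS_PER_TYPE = 32768  # one weak key per 15-bit PID, per key type/size

def bloom_bits(n_items: int, fp_rate: float) -> int:
    # Optimal number of bits: m = -n * ln(p) / (ln 2)^2
    return math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)

for types in (3, 7):
    for p in (1e-6, 1e-9):
        bits = bloom_bits(types * WEAK_KEYS_PER_TYPE, p)
        print(f"{types} key types/sizes, p={p:g}: {bits} bits ~ {bits / 8000:.0f} KB")
```

(KB here is bits / 8000, matching the rounding in the table above.)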

Is there any chance it would be feasible to get a list of all the weak
keys that were actually certified by browser-installed CAs, or those
weak certificates? Presumably, this list would be much smaller and
would be more effectively distributed in Bloom filter form.

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Open-source PAL

2007-11-30 Thread Tim Dierks
A random thought that's been kicking around in my head: if someone were
looking for a project, an open-source permissive action link (
http://www.cs.columbia.edu/~smb/nsam-160/pal.html is a good link, thank you
Mr. Bellovin) seems like it might be a great public resource: I suspect it's
something that some nuclear states could use some education on, but even if
the US is willing to share technology, the recipient may not really trust
the source.

As such, an open-source PAL technology might substantially improve global
safety.

Thoughts?

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Exponent 3 damage spreads...

2006-09-14 Thread Tim Dierks

On 9/14/06, James A. Donald [EMAIL PROTECTED] wrote:

It seems to me that the evil here is ASN.1, or perhaps standards that
use ASN.1 carelessly and badly.

It is difficult to write code that conforms to ASN.1, easy to get it
wrong, and difficult to say what in fact constitutes conforming to ASN.1
or at least difficult to say what in fact constitutes conforming to
standard written in ASN.1

ASN.1 does the same job as XML, but whereas XML is painfully verbose and
redundant, ASN.1 is cryptically concise.

People do not seem to get XML wrong all that often, while they endlessly
get ASN.1 wrong, and endlessly disagree over what constitutes being right.


This problem would be just as likely, or more likely, if we were using
XML to encode the hash inside the RSA-encrypted blob (signature). The
equivalents would be:

Appended garbage:
 <signed-hash>Valid-looking-hash</signed-hash>Garbage here
Or
 <signed-hash>Valid-looking-hash</signed-hash>[null byte]Garbage here

Interior garbage:
 <signed-hash legal-but-unparsed-attribute="Garbage
here">Valid-looking-hash</signed-hash>

or similar attacks. The problem is not XML or ASN.1: the problem is
that it's very demanding and tricky to write parsers that are
invulnerable to all the different kinds of malicious attack that are
out there.

If anything, I think XML is more vulnerable to such attacks because
its less-structured format makes it harder to write very strict
parsers. The actual way to design a system that was less vulnerable to
this attack would have been to use a much simpler data structure:
e.g., one could have said that the hashing algorithm must already be
known, so the size of the hash is known to be n bytes, and that the
data block should be a byte of value 0, followed by bytes of value FF,
with the last n bytes equal to the hash. Then it would have been a
no-brainer for anyone to write a precisely accurate parser and
validator, and we'd be less vulnerable to such oversights.
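Such a rigid format is trivial to validate exactly. A sketch of the check described above (the function name is mine, and the 20-byte hash length in the test data is just an example):

```python
def strict_sig_check(block: bytes, expected_hash: bytes) -> bool:
    # The rigid format described above: one 0x00 byte, then 0xFF
    # filler, then exactly the expected hash in the last n bytes.
    n = len(expected_hash)
    if len(block) < n + 2:  # require at least one filler byte
        return False
    if block[0] != 0x00:
        return False
    if any(b != 0xFF for b in block[1:len(block) - n]):
        return False
    return block[-n:] == expected_hash
```

Appended garbage fails automatically, because it shifts the hash out of its fixed trailing position; there is nothing variable-length left for a lax parser to skip over.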

- Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Another entry in the internet security hall of shame....

2005-08-24 Thread Tim Dierks
[resending due to e-mail address / cryptography list membership issue]

On 8/24/05, Ian G [EMAIL PROTECTED] wrote:
 Once you've configured iChat to connect to the Google Talk service, you may
 receive a warning message that states your username and password will be
 transferred insecurely. This error message is incorrect; your username and
 password will be safely transferred.

iChat pops up the warning dialog whenever the password is sent to the
server, rather than used in a hash-based authentication protocol.
However, it warns even if the password is transmitted over an
authenticated SSL connection.

I'll leave it to you to decide if this is:
 - an iChat bug
 - a Google security problem
 - in need of better documentation
 - all of the above
 - none of the above

 - Tim



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Number of rounds needed for perfect Feistel?

2005-08-12 Thread Tim Dierks
I'm attempting to design a block cipher with an odd block size (34
bits). I'm planning to use a balanced Feistel structure with AES as the
function f(), padding the 17-bit input blocks to 128 bits with a pad
dependent on the round number, encrypting with a key, and extracting the
low 17 bits as the output of f().

If I use this structure, how many rounds do I need to use to be secure (or
can this structure be secure at all, aside from the obvious insecurity
issues of the small block size itself)? I've been told that a small number
of rounds is insecure (despite the fact that f() can be regarded as
perfect) due to collisions in the output of f(). However, I don't
understand this attack precisely, so a reference would be appreciated.
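For concreteness, the structure I have in mind looks like this (a sketch only: HMAC-SHA-256 stands in for the AES-based f() described above, since only the Feistel structure matters here, and all names are mine):

```python
import hashlib
import hmac

MASK17 = (1 << 17) - 1  # one 17-bit half of the 34-bit block

def f(key: bytes, half: int, rnd: int) -> int:
    # Round function: keyed PRF over (round number, 17-bit half),
    # truncated to 17 bits. HMAC-SHA-256 is a stand-in for the
    # AES-with-round-dependent-padding construction in the post.
    msg = bytes([rnd]) + half.to_bytes(3, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:3], "big") & MASK17

def feistel_encrypt(key: bytes, block: int, rounds: int) -> int:
    left, right = block >> 17, block & MASK17
    for rnd in range(rounds):
        left, right = right, left ^ f(key, right, rnd)
    return (left << 17) | right

def feistel_decrypt(key: bytes, block: int, rounds: int) -> int:
    left, right = block >> 17, block & MASK17
    for rnd in reversed(range(rounds)):
        left, right = right ^ f(key, left, rnd), left
    return (left << 17) | right
```

Decryption just runs the rounds in reverse; the network is invertible regardless of whether f() collides, which is why the security question is about the number of rounds rather than correctness.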

Thanks,
 - Tim


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Number of rounds needed for perfect Feistel?

2005-08-12 Thread Tim Dierks
Barney Wolff wrote:
 On Fri, Aug 12, 2005 at 11:47:26AM -0400, Tim Dierks wrote:
 I'm attempting to design a block cipher with an odd block size (34
 bits). I'm planning to use a balanced Feistel structure with AES as the
 function f(), padding the 17-bit input blocks to 128 bits with a pad
 dependent on the round number, encrypting with a key, and extracting the
 low 17 bits as the output of f().

 Pardon a dumb question, but how do you plan on avoiding collisions in
 the encrypted values, independent of the number of rounds?  Seems to me
 that even if the 128-bit encryption is guaranteed to be 1-to-1 with the
 plaintext, there is no such guarantee on any subset of the 128 bits.

A Feistel network doesn't depend on lack of collision in f(). The Handbook
of Applied Cryptography,
http://www.cacr.math.uwaterloo.ca/hac/about/chap7.pdf describes it pretty
well.

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: MD5 collisions?

2004-08-18 Thread Tim Dierks
On Thu, 19 Aug 2004 00:49:17 +1000, Greg Rose [EMAIL PROTECTED] wrote:
   It seems to be a straightforward differential cryptanalysis attack, so
  one wonders why no-one else came up with it.

 With further hindsight, and Phil Hawkes' help, I understand now. The
 technique needs to alternate between single bit (xor) differences and
 subtractive differences, with careful conditioning of the bits affected to
 make sure the right carries do, or don't, propagate. This is explained in
 Phil's upcoming paper about SHA-2 cancellations, which was presented later
 in the rump session. That should be on e-print in the next couple of days.
 The Chinese team is also writing a much more detailed paper, but it will
 take longer.

 There has been criticism of the Wang et al. paper that it doesn't
 explain how they get the collisions. That isn't right. Note that from the
 incorrect paper to the corrected one, the delta values didn't change.
 Basically, if you throw random numbers in as inputs, in pairs with the
 specified deltas, you should eventually be able to create your own MD5
 collisions for fun or profit. How they got this insight, we'll have to wait
 for... but the method is already there.

How likely is it that this attack could be extended to creating
common-prefix collisions? This is equivalent to specifying two IVs for
the two hashes. Looking at X.509 ASN.1, it seems like it would be
pretty easy to construct a certificate request such that, if you could
predict the certificate serial number you were to be issued, you could
create a hash collision in an X.509 certificate:

You would construct two strings, each the same length, an even
multiple of 512 bits long, which contained the prefix of the
certificate you were going to be issued and the prefix of the
certificate you would like to claim to have been issued (P1 =
authentic prefix, P2 = forged prefix). The trailing part of each
prefix would be the first few bits of the value of n in the RSA key.
These first few bits can be different in P1 and P2, and randomly
selected, as long as they begin with a 1. Call the first few bits of n
in P1 p1, and the first few bits of n in P2 p2.

You would take IV1 = internal state of MD5 after hashing P1 and IV2 =
internal state of MD5 after hashing P2. You can tweak the first few
bits of n if you want IV1 and IV2 to have a particular relation, as
long as it's not infeasible to find that relation. Then find two
1024-bit 2-block messages, M1, and M2, such that they create a
collision. Define v1 = p1 concatenated with M1, and similarly for v2.
Calculate D = abs(v1-v2). Factor D, finding a 1024-bit prime i that
evenly divides D. If you can't easily factor D, or it doesn't yield a
1024-bit prime factor, find another collision pair. Now construct the
last bits of n (call them t), which will be used in both requests, such
that n1 = v1 concatenated with t and n2 = v2 concatenated with t are
equal to i times j1 and i times j2 respectively (j1 and j2 each
1024-bit primes). I think it may be possible to accelerate this, but
even if you just have to calculate collisions until it's true, you
should be able to find a collision D and a value t that gives primes i,
j1, and j2 once in every ~30 million collisions. (I'm taking a stab in
the dark that values of D that have a single 1024-bit factor have 1/300
density, but this is demonstrably worst-case.) If you can find such a
collision with 5 minutes of CPU time, the attack will take, at worst,
275 CPU-years. This is university-feasible.

Of course, I don't have any idea how hard it is to extend the known
attack with common IVs to one in which the IVs are not common.

I'd recommend that anyone operating a CA choose serial numbers which are
long & unpredictable; if you want some structure, append a random 128-bit
value to your structure.

- Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: XML-proof UIDs

2003-11-16 Thread Tim Dierks
At 05:52 AM 11/14/2003, Eugen Leitl wrote:

Does anyone have robust code to generate globally unique IDs which won't 
break XML parsing,
and work on several platforms?

I was thinking of using an entropy pool to seed a cryptographic PRNG, used to
generate a sequence of SHA-1 hashes, dumped to an XML-armored representation.
This is what GUIDs/UUIDs were designed for, and they're used broadly. 
They're standardized in ISO 11578 [1], although there's a very similar 
public description in an expired Internet Draft [2]. Microsoft also 
publishes a description of how they generate their GUIDs, but I can't find 
it right now.
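For example, Python's standard library uuid module implements the UUIDs described in [2] (a sketch; the element name is arbitrary). Note that an XML attribute of type ID must be an NCName, which cannot start with a digit, so a prefix like urn:uuid: keeps it safe:

```python
import uuid

# uuid4() draws from the OS CSPRNG; the canonical string form is
# plain hex-and-hyphen ASCII, so it cannot break XML parsing.
uid = uuid.uuid4()
print(f'<record id="urn:uuid:{uid}"/>')
```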

 - Tim

[1]
http://www.iso.ch/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=2229&ICS1=35&ICS2=100&ICS3=70
[2]
http://www.ics.uci.edu/~ejw/authoring/uuid-guid/draft-leach-uuids-guids-01.txt
PS - I'm looking for a full-time job. My resume is at 
http://www.dierks.org/tim/resume.html . Looking for architecture or 
technical management jobs; I'm in New York, NY, but I am willing to relocate.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Clipper for luggage

2003-11-13 Thread Tim Dierks
From the New York Times. Any guesses on how long it'll take before your 
local hacker will have a key which will open any piece of your luggage?

 - Tim

A Baggage Lock for You and the Federal Screeners

By JOE SHARKEY
Published: November 11, 2003
AIRLINE passengers will be able to lock checked bags confidently again 
starting tomorrow, thanks to a new customer-service initiative between 
private enterprise and the Transportation Security Administration.

Here's how the plan will work: Several major luggage and lock retailers in 
the United States will announce tomorrow the availability of new locks, 
made by various manufacturers, that T.S.A. inspectors will be able to 
readily identify and open on checked bags selected for hand searches at 
airports.

T.S.A. screeners in airports around the country have already been trained 
in using secure procedures to open the new certified locks when necessary, 
and relock them after inspecting bags.

"Literally since we began the process of screening every checked bag for 
explosives in December, one of the challenges has been the ability to get 
into bags without doing damage to them," said Brian Turmail, a spokesman 
for the T.S.A.

The system, developed in cooperation with the T.S.A. and the Travel Goods 
Association, a trade group, "was designed around a common set of standards 
that any company that manufactures, or is interested in manufacturing, 
luggage or luggage locks could follow that would allow T.S.A. screeners to 
open the bag without doing damage to the bag, in a manner that would allow 
the bag to stay secured afterwards," Mr. Turmail said. "In other words, we 
can open it, but no one else can."

The locks will be available in various manufacturers' designs. All will be 
geared around a uniform technology allowing them to be opened by T.S.A. 
inspectors using a combination of secure codes and special tools, according 
to John W. Vermilye, a former airline baggage-systems executive who 
developed the system through Travel Sentry, a company he set up for that 
purpose.

All the locks will carry a red diamond-shaped logo to certify to screeners 
that they meet the Travel Sentry standards. Mr. Vermilye said his company 
would receive royalties from manufacturers.

The system will ensure that passengers using the locks will not have to 
worry about a lock being broken or a locked bag being damaged if it is 
selected for hand inspection. It will also mean more peace of mind for 
passengers worried about reports of increased pilferage from unlocked bags.

"The general feeling of airline passengers is, 'I don't like to have to 
keep my bags unlocked,'" added Mr. Vermilye, who once worked as a baggage 
handler. "As somebody in the business for 30 years, I don't like it either, 
because I know what goes on in some baggage-handling areas," he said.

An industry study showed that 90 percent of air travelers are now leaving 
checked bags unlocked, whereas before this year about 66 percent of them 
said they always locked their bags.

"I travel all the time, and I always used to lock my bags until this year," 
said Michael F. Anthony, the chairman and chief executive of Brookstone, a 
specialty retailer with 266 shops, including 30 in airports. Besides the 
worry about theft within the airline baggage-handling systems, Mr. Anthony 
said he was concerned on business trips about unlocked bags in the hands of 
cab and airport shuttle drivers, bellhops and others.

Brookstone airport shops are planning to introduce the chain's own brand of 
new locks with in-store promotions tomorrow, Mr. Anthony said. A package of 
two four-digit Brookstone combination locks costs $20. Luggage and other 
accessories with the lock standards incorporated also will begin moving 
soon onto shelves at Brookstone and other retailers.

Mr. Anthony said that the locks represented a needed air-travel 
customer-service breakthrough, helping people reclaim a sense of security 
they had in the past with their checked possessions.

The T.S.A. mandated screening of all checked bags starting last Dec. 31. 
Since then, most of the estimated 1.5 million bags checked daily in 
domestic airports have been inspected by bomb-detecting machinery - but 
about 10 percent of checked bags are opened and inspected by hand.

Initially, the T.S.A. planned to issue a blanket prohibition against 
locking bags, but the agency ultimately decided instead to merely suggest 
that passengers not lock them. The T.S.A. public directive on the subject 
says: "In some cases screeners will have to open your baggage as part of 
the screening process. If your bag is unlocked, then T.S.A. will simply 
open the bag and screen the bag. However, if the bag is locked and T.S.A. 
needs to open your bag, then locks may have to be broken. You may keep your 
bag locked if you choose, but T.S.A. is not liable for damage caused to 
locked bags that must be opened."

With bags unlocked, many travelers, including business travelers who pack 

Re: WYTM?

2003-10-13 Thread Tim Dierks
At 12:28 AM 10/13/2003, Ian Grigg wrote:
Problem is, it's also wrong.  The end systems
are not secure, and the comms in the middle is
actually remarkably safe.
I think this is an interesting, insightful analysis, but I also think it's 
drawing a stronger contrast between the real world and the Internet threat 
model than is warranted.

It's true that a large number of machines are compromised, but they were 
generally compromised by malicious communications that came over the 
network. If correctly implemented systems had protected these machines from 
untrustworthy Internet data, they wouldn't have been compromised.

Similarly, the statement is true at large (many systems are compromised), 
but not necessarily true in the small (I'm fairly confident that my SSL 
endpoints are not compromised). This means that the threat model is valid 
for individuals who take care to make sure that they comply with its 
assumptions, even if it may be less valid for the Internet at large.

And it's true that we define the threat model to be as large as the problem 
we know how to solve: we protect against the things we know how to protect 
against, and don't address problems at this level that we don't know how to 
protect against at this level. This is no more incorrect than my buying 
clothes which will protect me from rain, but failing to consider shopping 
for clothes which will do a good job of protecting me from a nuclear blast: 
we don't know how to make such clothes, so we don't bother thinking about 
that risk in that environment. Similarly, we have no idea how to design a 
networking protocol to protect us from the endpoints having already been 
compromised, so we don't worry about that part of the problem in that 
space. Perhaps we worry about it in another space (firewalls, better OS 
coding, TCPA, passing laws).

So, I disagree: I don't think that the SSL model is wrong: it's the right 
model for the component of the full problem it looks to address. And I 
don't think that the Internet threat model has failed to address the 
problem of host compromise: the fact is that these host compromises 
resulted, in part, from the failure of operating systems and other software 
to adequately protect against threats described in the Internet threat 
model: namely, that data coming in over the network cannot be trusted.

That doesn't change the fact that we should worry about the risk in 
practice that those assumptions of endpoint security will not hold.

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-04 Thread Tim Dierks
I'm lost in a twisty page of MITM passages, all alike.

My point was that in an anonymous protocol, for Alice to communicate with 
Mallet is equivalent to communicating with Bob, since the protocol is 
anonymous: there is no distinction. All the concept of MITM is intended to 
convey is that in an anonymous protocol, you don't know who you're talking 
to, period. Mallet having two conversations with Alice  Bob is equivalent 
to Mallet intermediating himself into a conversation between Alice  Bob.

If you have some unintermediated channel to speak with a known someone 
once, you can exchange a value or values which will allow you to 
authenticate each other forevermore and detect any intermediations in the 
past. But the fundamental truth is that there's no way to bootstrap a 
secure communication between two authenticated parties if all direct &
indirect communications between those parties may be intermediated. (Call
this the 'brain in a jar' hypothesis.)

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-01 Thread Tim Dierks
At 07:06 PM 10/1/2003, M Taylor wrote:
Stupid question I'm sure, but does TLS's anonymous DH protect against
man-in-the-middle attacks? If so, how? I cannot figure out how it would,
and it would seem TLS would be wide open to abuse without MITM protection so
I cannot imagine it would be acceptable practice without some form of
security.
It does not, and most SSL/TLS implementations/installations do not support 
anonymous DH in order to avoid this attack. Many wish that anon DH was more 
broadly used as an intermediate security level between bare, insecure TCP &
authenticated TLS, but this is not common at this time.

(Of course, it's not even clear what "MITM" means for an anonymous
protocol, given that the layer in question makes no distinction between
Bob & Mallet.)

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: anonymous DH MITM

2003-10-01 Thread Tim Dierks
At 10:37 PM 10/1/2003, Peter Gutmann wrote:
Tim Dierks [EMAIL PROTECTED] writes:
It does not, and most SSL/TLS implementations/installations do not support
anonymous DH in order to avoid this attack.
Uhh, I think that implementations don't support DH because the de facto
standard is RSA, not because of any concern about MITM (see below).  You can
talk to everything using RSA, you can talk to virtually nothing using DH,
therefore...
Sure, although it's a chicken & egg thing: it's not the standard because
the initial adopters & designers of SSL didn't have any use for it (not to
mention the political strength of RSADSI in the era).

Many wish that anon DH was more broadly used as an intermediate security
level between bare, insecure TCP & authenticated TLS, but this is not common
at this time.
RSA is already used as anon-DH (via self-signed, snake-oil CA, expired,
invalid, etc etc certs), indicating that MITM isn't much of a concern for most
users.
There are so many different categories of users that it's probably 
impossible to make any blanket statements about "most users." It's 
certainly true that a web e-commerce vendor doesn't have much use for 
self-signed certificates, since she knows that dialogs popping up warning 
customers that they have some problem they don't understand is going to 
lead to the loss of some small fraction of sales. (Not that she necessarily 
has any concern about the security implications: it's almost entirely a 
customer comfort and UI issue.)

 - Tim

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Beware of /dev/random on Mac OS X

2003-08-29 Thread Tim Dierks
At 05:01 PM 8/28/2003, Peter Hendrickson wrote:
First, the entropy pool in Yarrow is only 160 bits.  From Section 6,
"Open Questions and Plans for the Future," of the Yarrow paper
referenced above:
 Yarrow-160, our current construction, is limited to at most 160 bits
 of security by the size of its entropy accumulation pools.
If the program needs more than 160 bits, it can seed it with more than
that amount of entropy.  (Strictly, it could seed it with 160 bits,
read it, seed it, read it, but this isn't mentioned on the man
page.)
Can anyone who believes that only having 160 bits of entropy available is 
an interesting weakness tell me why? I'm currently of the belief that 
there's far too much entropy paranoia out there. Barring disclosure of the 
entropy pool, I'm not aware of any plausible attack that could occur if I 
(for example) generate a bunch of keys from a single 160-bit entropy seed, 
given that I believe a 160-bit value to be invulnerable to brute force for 
quite a long time. I can't imagine any situation in which the lack of 
reseeding is going to be the weakness in this scenario, but maybe I'm 
insufficiently imaginative.

 - Tim



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Fwd: [IP] A Simpler, More Personal Key to Protect Online Messages

2003-07-08 Thread Tim Dierks
At 05:30 PM 7/8/2003, Nomen Nescio wrote:
One difference is that with the identity-based crypto, once a sender
has acquired the software and the CA's public key, he doesn't have to
contact the CA to get anyone's certificate.  He can encrypt to anyone
without having to contact the CA, just based on the email address.
Your proposed substitute doesn't allow for this.
True, but how valuable is that, given that you can't send the actual 
message without contacting a server? I suppose one can construct 
theoretical scenarios where that's a benefit, but it seems to be a pretty 
narrow niche to me.

 but you don't need goofy new crypto to accomplish it.

The Weil pairing hardly constitutes goofy new crypto.  They are
doing all kinds of cool stuff with pairings these days, including
privacy-enhancing technology such as public keys with built-in forward
secrecy.
I retract the "goofy". My point was that the market is incredibly reluctant 
to adopt new technology: if you can solve a problem with components known 
to the marketplace, you're much more likely to be successful than if you 
invent something new. This is above and beyond any reluctance to adopt new 
cryptographic technology based on concerns about security.

Even if the Weil pairing is known to be 100% secure and tested, any new 
solution has to, as a practical matter, leap a huge hurdle to overcome 
available, well known alternatives. I've spent years attempting to get the 
market to accept alternative security solutions, and I can testify to how 
high that hurdle is. In my opinion, identity-based cryptography has 
insufficient upside to overcome that hurdle, especially given that it is 
not without its downsides (escrowed private keys, no protection against key 
compromise).

 - Tim





Re: Fwd: [IP] A Simpler, More Personal Key to Protect Online Messages

2003-07-07 Thread Tim Dierks

A Simpler, More Personal Key to Protect Online Messages

By JOHN MARKOFF
The New York Times
I wrote this for another list I'm on:

This system is based on an identity-based cryptography scheme developed by 
Dan Boneh with Matt Franklin. You can find a link to his paper "Identity 
based encryption from the Weil pairing" on Dr. Boneh's website, 
http://crypto.stanford.edu/~dabo/pubs.html .

The system allows any predetermined public value (e.g., an e-mail address) 
to be a public key. To encrypt a message, you do a mathematical operation 
as follows:

  EncM = E(M, pubKey, p)

Where:
  EncM is the encrypted message
  E is the encryption operation
  M is the message
  pubKey is the public key (e-mail address)
  p is a set of public domain parameters
The parameters p are a set of values which any subset of people can use to 
communicate with each other, but which must be predetermined by a trusted 
party and shared with all communicants. When the trusted third party 
creates the public domain parameters, there is a matched set of secret 
domain parameters (call them sp) which allow the trusted party to determine 
the matching secret key for any public key. Namely, in this system, for 
every pubKey there is a matching secKey which can be used to decrypt an 
encrypted message. The secret domain parameters are needed to be able to 
calculate secKey from pubKey:

  secKey = KD(pubKey, sp)

Where KD is the key derivation algorithm.

So, it all boils down to a system that's not dissimilar to a traditional 
CA-based public key system. In order for you to participate, you go to the 
trusted third party, they verify that you own the e-mail address you're 
claiming to possess (with whatever level of verification they insist upon), 
and if you do, they generate your secret key for you and send it to you. 
You can now decrypt messages which other people encrypt with that public key.
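As a data-flow sketch only (HMAC stands in for the Weil-pairing math, so 
the crucial property that a sender can encrypt with just pubKey and p is 
NOT reproduced here), the relationships between sp, pubKey, and secKey 
look like this:

```python
import hmac, hashlib, os

def kd(pub_key: str, sp: bytes) -> bytes:
    # secKey = KD(pubKey, sp): the trusted party can derive the secret
    # key for ANY identity from its secret domain parameters sp --
    # which is exactly the escrow property discussed below.
    return hmac.new(sp, pub_key.encode(), hashlib.sha256).digest()

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Simple hash-counter keystream, a placeholder for real encryption.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(msg: bytes, key: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)
    return nonce, bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

def decrypt(nonce: bytes, ct: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

sp = os.urandom(32)                    # trusted party's secret domain parameters
sec_key = kd("alice@example.com", sp)  # escrowed: the TTP can recompute this at will
nonce, enc_m = encrypt(b"hello", sec_key)
```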

I don't think it's an interesting solution. I don't see any interesting 
application that's possible with this system which you couldn't do with 
existing public-key cryptography: for example, I could write a protocol and 
software where you could request a public key from a server for any e-mail 
address; if the user didn't already have an enrolled key, my trusted server 
would generate one and enroll it on their behalf. When they got an 
encrypted message, they could contact me, authenticate themselves, and I'd 
send them their secret key. The functionality ends up being pretty much the 
same, but you don't need goofy new crypto to accomplish it. Furthermore, 
no-one's bothered to deploy the system I describe (although it's obvious) 
which implies that market demand for such a system hasn't been held back by 
the fact that no one had figured out the math yet. All of this, on top of 
the fact that the private key is, in essence, escrowed by the trusted 
third party, causes me to believe that this system doesn't fill an 
important unmet need.
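The alternative sketched above might look like the following; KeyServer and 
the random-bytes "keypairs" are hypothetical stand-ins for illustration, 
not a real CA protocol:

```python
import os

class KeyServer:
    """Toy trusted server that lazily enrolls a keypair for any e-mail
    address on first request (opaque byte strings, not real keys)."""

    def __init__(self):
        self.enrolled = {}  # email -> (public_key, secret_key)

    def get_public_key(self, email: str) -> bytes:
        # Any sender can ask for a public key; if the address has no
        # enrolled key yet, the server generates one on its behalf.
        if email not in self.enrolled:
            self.enrolled[email] = (os.urandom(32), os.urandom(32))
        return self.enrolled[email][0]

    def get_secret_key(self, email: str, authenticated: bool) -> bytes:
        # The recipient authenticates (verification elided here) and
        # fetches the escrowed secret key -- the same escrow property
        # the identity-based scheme has.
        if not authenticated:
            raise PermissionError("prove ownership of the address first")
        return self.enrolled[email][1]

server = KeyServer()
pub = server.get_public_key("alice@example.com")        # sender side
sec = server.get_secret_key("alice@example.com", True)  # recipient side
```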

 - Tim





Re: An attack on paypal

2003-06-08 Thread Tim Dierks
At 02:55 PM 6/8/2003, James A. Donald wrote:
Attached is a spam mail that constitutes an attack on paypal similar
in effect and method to man in the middle.
The bottom line is that https just is not working.  It's broken.

The fact that people keep using shared secrets is a symptom of https
not working.
The flaw in https is that you cannot operate the business and trust
model using https that you can with shared secrets.
I don't think it's https that's broken, since https wasn't intended to 
solve the customer authentication / authorization problem (you could try to 
use SSL's client certificates for that, but no one ever intended client 
certificate authentication to serve as a generalized transaction solution).

When I responded to this before, I thought you were talking about the 
server auth problem, not the password problem. I continue to feel that the 
server authentication problem is a very hard problem to solve, since 
there are few hints available to the browser as to what the user's intent is.

The password problem does need to be solved, but complaining that HTTPS or 
SSL doesn't solve it isn't any more relevant than complaining that it's not 
solved by HTML, HTTP, and/or browser or server implementations, since any 
and all of these are needed in producing a new solution which can function 
with real businesses and real users. Let's face it, passwords are so deeply 
ingrained into people's lives that nothing which is more complex in any way 
than passwords is going to have broad acceptance, and any consumer-driven 
company is going to consider "easy" to be more important than "secure".

Right now, my best idea for solving this problem is to:
 - Standardize an HTML input method for FORM which does a SPEKE (or 
similar) mutual authentication.
 - Get browser makers to design better ways to communicate to users that 
UI elements can be trusted. For example, I saw a proposal recently that 
would have the OS decorate the borders of trusted windows with facts or 
images an attacker wouldn't be able to predict: the name of your dog, 
or whatever. (Sorry, I can't locate a link right now, but I'd appreciate one.)
 - Combine the two to allow sites to provide a user-trustable UI to enter 
a password which cannot be sucked down.
 - Evangelize to users that this is better and that they should be 
suspicious of any situation where they used such interface once, but now 
it's gone.
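For reference, the SPEKE message flow in the first step looks roughly like 
this. The parameters are toy-sized (a 5-bit prime!) purely to show the 
flow; a real deployment needs a large safe prime plus key confirmation, 
both omitted here:

```python
import hashlib, secrets

# Toy safe prime p = 2q + 1 with q = 11. INSECURE; illustration only.
P = 23

def speke_generator(password: str, p: int = P) -> int:
    # g = H(password)^2 mod p; squaring confines g to the prime-order
    # subgroup, so the password never crosses the wire in any form.
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h, 2, p)

def exchange(password: str) -> tuple[int, int]:
    g = speke_generator(password)
    a = secrets.randbelow(P - 2) + 1    # client's ephemeral exponent
    b = secrets.randbelow(P - 2) + 1    # server's ephemeral exponent
    A, B = pow(g, a, P), pow(g, b, P)   # the only values sent over the wire
    k_client = pow(B, a, P)             # both equal g^(ab) mod p, so the
    k_server = pow(A, b, P)             # sides mutually authenticate
    return k_client, k_server

kc, ks = exchange("hunter2")
assert kc == ks  # shared key derived without revealing the password
```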

I agree that the overall architecture is broken; the problem is that it's 
broken in more ways than can just be fixed with any change to TLS/SSL or HTTPS.

 - Tim


