Re: [Cryptography] Crypto being blamed in the London riots.

2011-08-10 Thread Steven Bellovin
On Aug 10, 2011, at 12:19:53 PM, Perry E. Metzger wrote:

> On Wed, 10 Aug 2011 11:59:53 -0400 John Ioannidis  wrote:
>> On Tue, Aug 9, 2011 at 8:02 PM, Sampo Syreeni  wrote:
>>> 
>>> Thus, why not turn the Trusted Computing idea on its head? Simply
>>> make P2P public key cryptography available to your customers, and
>>> then bind your hands behind your back in an Odysseian fashion,
>>> using hardware means? Simply make it impossible for even yourself
>>> to circumvent the best cryptographic protocol you can invent,
>>> which you embed in your device before ever unveiling it, and then
>>> just live with it?
>>> 
>> 
>> "Customers"? There is no profit in any manufacturer or provider to
>> build that kind of functionality.
> 
> Blackberry already more or less has that functionality, which
> disproves your hypothesis.
> 
More precisely, Blackberry email is encrypted from the recipient's
Exchange server to the mobile device.

The scenario is corporate email; the business case is that RIM could
claim that they *couldn't* read the email; they never had it in the
clear.  However, that's only true for that service.  For personal
Blackberries, there is no corporate-owned server doing the encryption.

The service in question here, though, is Blackberry Messenger.  There
seems to be some confusion about whether or not such messages are
encrypted, and if so under what circumstances.  One link
(http://www.berryreview.com/2010/08/06/faq-blackberry-messenger-pin-messages-are-not-encrypted/)
 says that they're not, in any meaningful form.  More
authoritatively, 
http://web.archive.org/web/20101221211610/http://www.cse-cst.gc.ca/its-sti/publications/itsb-bsti/itsb57a-eng.html
says that they aren't.

The most authoritative source is RIM itself.  Page 27 of
http://docs.blackberry.com/16650/ confirms the CSE document.

Looking at things more abstractly, there's a very difficult key 
management problem for a decentralized, many-to-one encryption service.
Here, you're either in CA territory or web of trust territory.  In
this case, are the alleged perpetrators of the riots careful enough
about which keys they're sending the organizing messages to?  If
the pattern is anything like Facebook friending, I sincerely doubt
it.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: Photos of an FBI tracking device found by a suspect

2010-10-08 Thread Steven Bellovin

On Oct 8, 2010, at 11:21:16 AM, Perry E. Metzger wrote:

> My question: if someone plants something in your car, isn't it your
> property afterwards?
> 
> http://gawker.com/5658671/dont-post-pictures-of-an-fbi-tracking-device-you-find-on-a-car-to-the-internet

See http://www.wired.com/threatlevel/2010/10/fbi-tracking-device/ for even more 
disturbing aspects of the story -- they operated by intimidation (to say 
nothing of apparent ethnic and religious profiling).

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Anyone know anything about the new AT&T encrypted voice service?

2010-10-06 Thread Steven Bellovin

On Oct 6, 2010, at 6:19:01 PM, Perry E. Metzger wrote:

> AT&T debuts a new encrypted voice service. Anyone know anything about
> it?
> 
> http://news.cnet.com/8301-13506_3-20018761-17.html
> 
> (Hat tip to Jacob Applebaum's twitter feed.)
> 

http://www.att.com/gen/press-room?pid=18624&cdvn=news&newsarticleid=31260&mapcode=enterprise
 says a bit more.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Steven Bellovin

On Sep 30, 2010, at 11:41:18 AM, Thor Lancelot Simon wrote:

> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
>> Thor Lancelot Simon writes:
>> 
>>> a significant net loss of security, since the huge increase in computation
>>> required will delay or prevent the deployment of "SSL everywhere".
>> 
>> That would only happen if we (as security experts) allowed web developers to
>> believe that the speed of RSA is the limiting factor for web application
>> performance.
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.  At that point,
> the bottleneck for many applications shifts, particularly those which are
> served by offload engines specifically to move the bottleneck so it's not
> RSA in the first place.
> 
> Also, consider devices such as deep-inspection firewalls or application
> traffic managers which must by their nature offload SSL processing in
> order to inspect and possibly modify data before application servers see 
> it.  The inspection or modification function often does not parallelize
> nearly as well as the web application logic itself, and so it is often
> not practical to handle it in a distributed way and "just add more CPU".
> 
> At present, these devices use the highest performance modular-math ASICs
> available and can just about keep up with current web applications'
> transaction rates.  Make the modular math an order of magnitude slower
> and suddenly you will find you can't put these devices in front of some
> applications at all.
> 
> This too will hinder the deployment of "SSL everywhere", and handwaving
> about how for some particular application, the bottleneck won't be at
> the front-end server even if it is an order of magnitude slower for it
> to do the RSA operation itself will not make that problem go away.
> 
While I'm not convinced you're correct, I think that many posters here
underestimate the total cost of SSL.  A friend of mine -- a very competent
friend -- was working on a design for a somewhat sensitive website.  He
really wanted to use SSL -- but the *system* would have cost at least 12x
as much.  There were many issues, but one of them is that the average visit
to a web site lasts only a few pages, which means that you have to amortize
the cost of the SSL negotiation over very little actual activity.
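
Thor's "factor of *9*" above is easy to sanity-check: the RSA private-key
operation scales roughly with the cube of the modulus length, so doubling the
key size should cost on the order of 8x.  A rough timing sketch (plain Python,
no external packages; the modulus and exponent here are random stand-ins for a
real key, and the absolute numbers are meaningless -- only the ratio matters):

    import secrets, time

    def modexp_rate(bits, trials=50):
        # Full-length modular exponentiation as a stand-in for the RSA
        # private-key operation (no CRT, so absolute speeds are pessimistic).
        n = secrets.randbits(bits) | (1 << (bits - 1)) | 1   # odd, full-length "modulus"
        d = secrets.randbits(bits)                           # "private exponent"
        m = secrets.randbits(bits - 1)                       # "message representative"
        start = time.perf_counter()
        for _ in range(trials):
            pow(m, d, n)
        return trials / (time.perf_counter() - start)

    r1, r2 = modexp_rate(1024), modexp_rate(2048)
    print(f"1024-bit: {r1:.0f}/s  2048-bit: {r2:.0f}/s  slowdown: {r1 / r2:.1f}x")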


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Certificate-stealing Trojan

2010-09-27 Thread Steven Bellovin
Per 
http://news.softpedia.com/news/New-Trojan-Steals-Digital-Certificates-157442.shtml
 there's a new Trojan out there that looks for and steals Cert_*.p12 files -- 
certificates with private keys.  Since the private keys are password-protected, 
it thoughtfully installs a keystroke logger as well.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


ciphers with keys modifying control flow?

2010-09-27 Thread Steven Bellovin
Does anyone know of any ciphers where bits of keys modify the control path, 
rather than just data operations?  Yes, I know that that's a slippery concept, 
since ultimately things like addition and multiplication can be implemented 
with loops in the hardware or firmware.  I also suspect that it's potentially 
dangerous, since it might create very hard-to-spot classes of weak keys.  The 
closest I can think of is SIGABA, where some of the keying controlled the 
stepping of the other rotors.
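
To make the idea concrete, here is a toy sketch -- emphatically not a real
cipher, just an illustration of key bits choosing which operation each round
performs rather than feeding the data path:

    def toy_keyed_rounds(block, key_bits):
        # Toy only: each key bit selects the transformation for that round,
        # so the control path itself is key-dependent.
        for i, bit in enumerate(key_bits):
            if bit:
                block = ((block << 3) | (block >> 29)) & 0xFFFFFFFF  # rotate left by 3
            else:
                block = (block + 0x9E3779B9 + i) & 0xFFFFFFFF        # add a round constant
        return block

    print(hex(toy_keyed_rounds(0x01234567, [1, 0, 1, 1, 0, 0, 1, 0])))

Even this toy shows the hazards: an all-zero or all-one key collapses every
round into the same kind of operation -- exactly the sort of hard-to-spot
weak-key class mentioned above -- and key-dependent branching invites timing
side channels.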

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Something you have, something else you have, and, uh, something else you have

2010-09-17 Thread Steven Bellovin

On Sep 17, 2010, at 4:53:51 AM, Peter Gutmann wrote:

> From the ukcrypto mailing list:
> 
>  Just had a new Lloyds credit card delivered, it had a sticker saying I have
>  to call a number to activate it. I call, it's an automated system.
> 
>  It asks for the card number, fair enough. It asks for the expiry date, well
>  maybe, It asks for my DOB, the only information that isn't actually on the
>  card, but no big secret. And then it asks for the three-digit-security-code-
>  on-the-back, well wtf?
> 
>  AIUI, and I may be wrong, the purpose of activation is to prevent lost-in-
>  the-post theft/fraud - so what do they need details which a thief who has
>  the card in his hot sweaty hand already knows for?
> 
> Looks like it's not just US banks whose interpretation of n-factor auth is "n
> times as much 1-factor auth".
> 
I don't know how NZ banks do it; in the US, they use the phone number you're 
calling from.  Yes, it's spoofable, but most folks (a) don't know it, and (b) 
don't know how.

Of course, in many newer houses here there's a phone junction box *outside* the 
house.  So -- steal the envelope, and plug your own phone into the junction 
box, and away you go...


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Intel plans crypto-walled-garden for x86

2010-09-14 Thread Steven Bellovin

On Sep 13, 2010, at 11:58:57 PM, John Gilmore wrote:

> http://arstechnica.com/business/news/2010/09/intels-walled-garden-plan-to-put-av-vendors-out-of-business.ars
> 
> "In describing the motivation behind Intel's recent purchase of McAfee
> for a packed-out audience at the Intel Developer Forum, Intel's Paul
> Otellini framed it as an effort to move the way the company approaches
> security "from a known-bad model to a known-good model." Otellini went
> on to briefly describe the shift in a way that sounded innocuous
> enough--current A/V efforts focus on building up a library of known
> threats against which they protect a user, but Intel would like to
> move to a world where only code from known and trusted parties runs on
> x86 systems."
> 
> Let me guess -- to run anything but Windows, you'll soon have to 
> jailbreak even laptops and desktop PC's?
> 

I've written a long blog post on this issue for the Concurring Opinions legal 
blog; see 
http://www.concurringopinions.com/archives/2010/09/a-new-threat-to-generativity.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


HDCP master key supposedly leaked

2010-09-14 Thread Steven Bellovin
http://arstechnica.com/tech-policy/news/2010/09/claimed-hdcp-master-key-leak-could-be-fatal-to-drm-scheme.ars

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Fwd: [ PRIVACY Forum ] 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-13 Thread Steven Bellovin
Here's what happens when you get your integrity checks wrong.

Begin forwarded message:

> From: priv...@vortex.com
> Date: September 13, 2010 1:26:10 PM EDT
> To: privacy-l...@vortex.com
> Subject: [ PRIVACY Forum ] 'Padding Oracle' Crypto Attack Affects Millions of 
> ASP.NET Apps
> Reply-To: PRIVACY Forum Digest mailing list 
> 
> 
> 
> 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps
> 
> http://bit.ly/cnpzux  (threatpost)
> 
> --Lauren--
> Lauren Weinstein (lau...@vortex.com)
> http://www.vortex.com/lauren
> Tel: +1 (818) 225-2800
> Co-Founder, PFIR (People For Internet Responsibility): http://www.pfir.org
> Co-Founder, NNSquad (Network Neutrality Squad): http://www.nnsquad.org
> Founder, GCTIP (Global Coalition for Transparent Internet Performance): 
>   http://www.gctip.org
> Founder, PRIVACY Forum: http://www.vortex.com
> Member, ACM Committee on Computers and Public Policy
> Lauren's Blog: http://lauren.vortex.com
> Twitter: https://twitter.com/laurenweinstein
> Google Buzz: http://bit.ly/lauren-buzz
> 
> 
> 
> ___
> privacy mailing list
> http://lists.vortex.com/mailman/listinfo/privacy
> 


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


more digitally-signed malware

2010-09-13 Thread Steven Bellovin
http://www.computerworld.com/s/article/9184700/Newest_Adobe_zero_day_PDF_exploit_scary_says_researcher

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: RSA question

2010-09-04 Thread Steven Bellovin

On Sep 4, 2010, at 9:07:37 AM, Victor Duchovni wrote:

> On Fri, Sep 03, 2010 at 07:16:00PM +0300, Sampo Syreeni wrote:
> 
>> On 2010-09-02, travis+ml-cryptogra...@subspacefield.org wrote:
>> 
>>> I hear that NIST Key Mgmt guideline (SP 800-57) suggests that the RSA key 
>>> size equivalent to a 256 bit symmetric key is roughly 15360 bits. I 
>>> haven't actually checked this reference, so I don't know how they got such 
>>> a big number; caveat emptor.
>> 
>> I would imagine it'd be the result of fitting some reasonable exponential 
>> to both keylengths and extrapolating, which then of course blows up...for 
>> once *literally* exponentially. ;)
> 
> Instead of imagining, one could look-up the brute-force cost of RSA
> vs. (ideal) symmetric algorithms, and discover that while brute forcing
> an ideal symmetric algorithm doubles in cost for every additional key
> bit, GNFS factoring cost is approximately proportional to
> 
>   exp(n^(1/3)*log(n)^(2/3))
> 
> where "n" is the number of key bits.
> 
> So compared to 1k RSA bits, 16k RSA bits has a GNFS cost that is
> (16*1.96)^(1/3) ~ 3.15 times higher. If 1k RSA bits is comparable to 80
> symmetric bits, then 16k RSA bits is comparable to 80*3.15 or 252 bits.
> 
> The mystery of the NIST numbers goes away, and one learns that the
> "blowing-up" of RSA key sizes relative to symmetric key sizes is less
> than cubic, and so definitely not "exponential".
> 
>    \lim_{n \to \infty} \frac{\mathrm{RSA}(n)}{n^3} = 0
> 
> where RSA(n) is the number of RSA bits to match an n-bit symmetric key.

Also see RFC 3766, which comes up with comparable numbers and several types of 
analysis.
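
To make Duchovni's arithmetic easy to reproduce, here is the same calculation
in a few lines of Python, using his simplified exponent (constants dropped)
and the conventional 80-bit symmetric rating for 1024-bit RSA:

    import math

    def gnfs_exponent(bits):
        # Simplified GNFS work exponent with constants dropped:
        # n^(1/3) * ln(n)^(2/3), where n is the RSA key size in bits.
        return bits ** (1 / 3) * math.log(bits) ** (2 / 3)

    ratio = gnfs_exponent(16384) / gnfs_exponent(1024)
    print(round(ratio, 2))     # ~3.15
    print(round(80 * ratio))   # ~252 symmetric-equivalent bits, as computed above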

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: questions about RNGs and FIPS 140

2010-08-26 Thread Steven Bellovin

On Aug 25, 2010, at 4:37:16 PM, travis+ml-cryptogra...@subspacefield.org wrote:

> 
> 3) Is determinism a good idea?
> See Debian OpenSSL fiasco.  I have heard Nevada gaming commission
> regulations require non-determinism for obvious reasons.

It's worth noting that the issue of determinism vs. non-determinism is by no 
means clearcut.  You yourself state that FIPS 140-2 requires deterministic 
PRNGs; I think one can rest assured that the NSA had a lot of input into that 
spec.  The Clipper chip programming facility used a PRNG to set the unit key -- 
and for good reasons, not bad ones.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)

2010-08-25 Thread Steven Bellovin

On Aug 25, 2010, at 9:04:20 AM, Richard Salz wrote:

>> Also, note that HSTS is presently specific to HTTP. One could imagine 
>> expressing a more generic "STS" policy for an entire site
> 
> A really knowledgeable net-head told me the other day that the problem 
> with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs 
> are now more prohibitive than the crypto costs.  I was quite surprised to 
> hear this; he was stunned to find it out.

This statement is quite correct.  I know of at least one major player that was 
very reluctant to use SSL because of this issue; the round trips, especially on 
intercontinental connections, led to considerable latency, which in turn hurt 
the perceived responsiveness of their service.
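
Back-of-the-envelope arithmetic shows why: a classic full TLS handshake needs
two round trips on top of TCP's one before any application data flows.
Assuming a 150 ms intercontinental RTT (the figure is an assumption, not a
measurement):

    rtt_ms = 150                              # assumed intercontinental round-trip time
    tcp_rtts, tls_full_handshake_rtts = 1, 2  # connection setup plus full TLS handshake
    added_latency_ms = (tcp_rtts + tls_full_handshake_rtts) * rtt_ms
    print(added_latency_ms, "ms before the first application byte")  # 450 ms here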

We need to do something about the speed of light.  Is anyone working on 
hyperwave or subether technologies?


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [IP] Malware kills 154

2010-08-24 Thread Steven Bellovin
With his permission, here is a summary of the Spanish-language article by Ivan 
Arce, a native speaker of Spanish.  (I misspoke when I said he read the actual 
report.)  Note the lack of any assertion of causality between the crash and the 
malware.

--

- The malware-infected computer was located at the HQs in Palma de
Mallorca. The plane crashed on take off from Madrid.
- The fact that the computer was infected was revealed in an internal
memo on the same day of the incident.
- The computer hosted the application used to log maintenance failure
reports. It was configured to trigger an on-screen alarm (maybe a dialog
with an "OK" button?) when it detected 3 failures of a similar kind on
the same plane.
- Spanair was known to take up to 24 hours to update the system with
maintenance reports, as admitted by two mechanics (I don't know the proper
English term for this) from the maintenance team.
- This isn't a minor issue given that the same plane had two failures on
the prior day and another failure on the same day. The maintenance crew
was responsible for reporting failures immediately when they were
discovered.
- That last failure on the same day had prompted the pilots to abort
the takeoff at the head of the runway and get back to the gate when an
overheated valve was detected.
- Then the pilots forgot to activate flaps and slats.
- The plane had an onboard audible alarm to signal that condition, but the
alarm did not go off.

Reading this full account is quite saddening.

So, in sum, it seems that a set of failures and errors were combined and
led to terrible consequences. In this overall picture, malware had a
very limited and small impact.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [IP] Malware kills 154

2010-08-24 Thread Steven Bellovin

On Aug 24, 2010, at 12:32:19 PM, Chad Perrin wrote:

> On Mon, Aug 23, 2010 at 03:35:45PM -0400, Steven Bellovin wrote:
>> 
>> And the articles I've seen do not say that the problem caused the
>> crash.  Rather, they say that a particular, important computer was
>> infected with malware; I saw no language (including in the Google
>> translation of the original article at
>> http://www.elpais.com/articulo/espana/ordenador/Spanair/anotaba/fallos/aviones/tenia/virus/elpepiesp/20100820elpepinac_11/Tes,
>> though the translation has some crucial infelicities) that said
>> "because of the malware, bad things happened.  It may be like the
>> reactor computer with a virus during a large blackout -- yes, the
>> computer was infected, but that wasn't what caused the problem.
> 
> The problem was evidently a couple of maintenance technicians who didn't
> do their jobs correctly.  The computer comes into the matter because one
> of its jobs was to activate an alarm if a critical system whose failure
> *was* the proximate cause of the crash was not working properly.  It
> didn't activate the alarm, which would have led to the aircraft being
> prohibited from taking off, because of the malware.
> 

What I have not seen are any statements attributed to the investigating agency 
that support your last conclusion: that the malware is what caused the alarm 
failure.  

I saw a very good summary of the official findings; I'll ask permission to 
repost them.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [IP] Malware kills 154

2010-08-23 Thread Steven Bellovin

On Aug 23, 2010, at 11:11:13 AM, Peter Gutmann wrote:

> "Perry E. Metzger"  forwards:
> 
>> "Authorities investigating the 2008 crash of Spanair flight 5022
>> have discovered a central computer system used to monitor technical
>> problems in the aircraft was infected with malware"
>> 
>> http://www.msnbc.msn.com/id/38790670/ns/technology_and_science-security/?gt1=43001
> 
> Sigh, yet another attempt to use the "dog ate my homework" of computer
> problems, if their fly-by-wire was Windows XP then they had bigger things to
> worry about than malware.
> 
To say nothing of what happens when you run a nuclear power plant on Windows: 
http://www.upi.com/News_Photos/Features/Irans-Bushehr-nuclear-power-plant/3693/2/
 (slightly OT, I realize, but too good to pass up).


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [IP] Malware kills 154

2010-08-23 Thread Steven Bellovin

On Aug 23, 2010, at 11:50:30 AM, John Levine wrote:

>>> "Authorities investigating the 2008 crash of Spanair flight 5022
>>> have discovered a central computer system used to monitor technical
>>> problems in the aircraft was infected with malware"
>>> 
>>> http://www.msnbc.msn.com/id/38790670/ns/technology_and_science-security/?gt1=43001
> 
> This was very poorly reported.  The malware was on a ground system that
> wouldn't have provided realtime warnings of the configuration problem
> that caused the plane to crash anyway.
> 

And the articles I've seen do not say that the problem caused the crash.  
Rather, they say that a particular, important computer was infected with 
malware; I saw no language (including in the Google translation of the original 
article at 
http://www.elpais.com/articulo/espana/ordenador/Spanair/anotaba/fallos/aviones/tenia/virus/elpepiesp/20100820elpepinac_11/Tes,
 though the translation has some crucial infelicities) that said "because of 
the malware, bad things happened."  It may be like the reactor computer with a 
virus during a large blackout -- yes, the computer was infected, but that 
wasn't what caused the problem.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048-bit RSA keys

2010-08-17 Thread Steven Bellovin

On Aug 17, 2010, at 5:19:10 PM, Samuel Neves wrote:

> On 17-08-2010 21:42, Perry E. Metzger wrote:
>> On Tue, 17 Aug 2010 22:32:52 +0200 Simon Josefsson
>>  wrote:
>>> Bill Stewart  writes:
>>> 
 Basically, 2048's safe with current hardware
 until we get some radical breakthrough
 like P==NP or useful quantum computers,
 and if we develop hardware radical enough to
 use a significant fraction of the solar output,
 we'll probably find it much easier to eavesdrop
 on the computers we're trying to attack than to
 crack the crypto.
>>> Another breakthrough in integer factoring could be sufficient for an
>>> attack on RSA-2048.  Given the number of increasingly efficient
>>> integer factorization algorithms that have been discovered
>>> throughout history, another breakthrough here seems more natural
>>> than unlikely to me.
>> A breakthrough could also render 10kbit keys broken, or might never
>> happen at all. A breakthrough could make short ECC keys vulnerable.
>> A breakthrough could make AES vulnerable. One can't operate on this
>> basis -- it makes it impossible to use anything other than one-time
>> pads.
>> 
> 
> A breakthrough is a rather strong term. But it's not unreasonable to
> believe that the number field sieve's complexity could be lowered on the
> near future by an *incremental* improvement --- it would only require
> lowering the complexity from L[1/3, ~1.92] to L[1/3, ~1.2] to make 2048
> bit factorization roughly as easy as 768 bits today.

It's worth quoting from the paper at CRYPTO '10 on the factorization of a 768-bit 
number:

    The new NFS record required the following effort. We spent half a year on
    80 processors on polynomial selection. This was about 3% of the main task,
    the sieving, which took almost two years on many hundreds of machines. On
    a single core 2.2 GHz AMD Opteron processor with 2 GB RAM, sieving would
    have taken about fifteen hundred years. We did about twice the sieving
    strictly necessary, to make the most cumbersome step, the matrix step, more
    manageable. Preparing the sieving data for the matrix step took a couple of
    weeks on a few processors. The final step after the matrix step took less
    than half a day of computing.

They conclude with

    at this point factoring a 1024-bit RSA modulus looks more than five times
    easier than a 768-bit RSA modulus looked back in 1999, when we achieved
    the first public factorization of a 512-bit RSA modulus. Nevertheless, a
    1024-bit RSA modulus is still about a thousand times harder to factor than
    a 768-bit one. It may be possible to factor a 1024-bit RSA modulus within
    the next decade by means of an academic effort on the same scale as the
    effort presented here. Recent standards recommend phasing out such moduli
    by the end of the year 2010 [28]. See also [21]. Another conclusion from
    our work is that we can confidently say that if we restrict ourselves to
    an open community, academic effort such as ours and unless something
    dramatic happens in factoring, we will not be able to factor a 1024-bit
    RSA modulus within the next five years [27]. After that, all bets are off.

They also suggest that a 3-4 year phase-out of 1024-bit moduli is the proper course.
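
Their "about a thousand times harder" figure is consistent with the standard
GNFS asymptotic; a quick check in Python, using the usual L[1/3, (64/9)^(1/3)]
heuristic and ignoring the o(1) term:

    from math import exp, log

    def gnfs_cost(bits):
        # Heuristic GNFS running-time estimate; ln_n is the natural log of the modulus.
        ln_n = bits * log(2)
        return exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

    print(f"{gnfs_cost(1024) / gnfs_cost(768):.0f}x")   # on the order of a thousand
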
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Has there been a change in US banking regulations recently?

2010-08-17 Thread Steven Bellovin

On Aug 16, 2010, at 9:19:49 PM, John Gilmore wrote:

>> who's your enemy?  The NSA?  The SVR?  Or garden-variety cybercrooks?
> 
> "Enemy"?  We don't have to be the enemy for someone to crack our
> security.  We merely have to be in the way of something they want;
> or to be a convenient tool or foil in executing a strategy.
> 

John, as you yourself have said, "cryptography is a matter of economics".  
Other than a few academics, people don't factor large numbers for fun; rather, 
they want the plaintext or the ability to forge signatures.  Is factoring the 
best way to do that?  Your own numbers suggest that it is not.  You wrote 
"After they've built 50, which perhaps only take six months to crack a key, 
will YOUR key be one of the 100 keys that they crack this year?"  100 keys, 
perhaps multiplied by 10 for the number of countries that will share the 
effort, means 1000 keys/year.  How many *banks* have SSL keys?  If you want to 
attack one of those banks, which is *cheaper*, getting time on a rare factoring 
machine, or finding some other way in, such as hacking an endpoint?  For that 
matter, don't forget Morris' "three Bs: burglary, bribery, and blackmail".  
(Aside: I was once discussing TWIRL with someone who has ties to the classified 
community.  When I quoted solution speeds of the sort we're discussing, he chortled, 
saying that the political fight over whose solutions were more valuable would 
paralyze things.)

If the threat is factoring, there are cheaper defenses than going to 2048-bit 
keys.  For example, everyone under a given CA can issue themselves 
subcertificates.  For communication keys, use D-H; it's a separate solution 
effort for each session.  (Yes, it's cheaper if the modulus is held constant.)  
Cracking the signing key won't do any good, because of perfect forward secrecy.
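
A minimal illustration of the forward-secrecy point, using X25519 from the
pyca/cryptography package purely as a stand-in for whatever D-H group one
actually deploys: both ephemeral private keys are discarded after the session,
so a later compromise of the long-term signing key reveals nothing about past
traffic.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    a_priv = X25519PrivateKey.generate()   # ephemeral; thrown away when the session ends
    b_priv = X25519PrivateKey.generate()
    a_shared = a_priv.exchange(b_priv.public_key())
    b_shared = b_priv.exchange(a_priv.public_key())
    assert a_shared == b_shared            # both sides derive the same session secret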

You don't need long keys when they're used solely for short-lived 
authentication -- DNSSEC comes to mind.

Now -- all that said, I agree that 2048-bit keys are a safer choice.  However, 
defenders have to consider economics, too, and depending on what they're 
protecting it may not be a smart choice.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Has there been a change in US banking regulations recently?

2010-08-16 Thread Steven Bellovin

On Aug 15, 2010, at 1:17:30 PM, Peter Gutmann wrote:

> Ray Dillinger  writes:
>> On Fri, 2010-08-13 at 14:55 -0500, eric.lengve...@wellsfargo.com wrote:
>> 
>>> The big drawback is that those who want to follow NIST's recommendations
>>> to migrate to 2048-bit keys will be returning to the 2005-era overhead.
>>> Either way, that's back in line with the above stated 90-95% overhead.
>>> Meaning, in Dan's words "2048 ain't happening."
>> 
>> I'm under the impression that <2048 keys are now insecure mostly due to
>> advances in factoring algorithms 
> 
> Insecure against what?

Right -- who's your enemy?  The NSA?  The SVR?  Or garden-variety cybercrooks?

>  Given the million [0] easier attack vectors against
> web sites, which typically range from "trivial" all the way up to "relatively
> easy", why would any rational attacker bother with factoring even a 1024-bit
> key, with a difficulty level of "quite hard"?  It's not as if these keys have
> to remain secure for decades, since the 12-month CA billing cycle means that
> you have to refresh them every year anyway.

That depends on what you're protecting.  If it's the 4-digit PIN to 
billion-zorkmid bank accounts, the key needs to remain secure for many years, 
given how seldom PINs are changed.

>  Given both the state of PKI and
> the practical nonexistence of attacks on crypto of any strength because it's
> not worth the bother, would the attackers even notice if you used a 32-bit RSA
> key?  How would an adversary effectively scale and monetise an attack based on
> being able to break an RSA key, even if it was at close to zero cost?
> 
> The unfortunate effect of such fashion-statement crypto recommendations as
> "you must use 2K bit keys, regardless of the threat environment" is that what
> it actually says is "you must not use SSL on your web site".  "Le mieux est
> l'ennemi du bien" strikes again.
> 
> 
Yup.
> 
> [0] Figure exaggerated slightly for effect.

But only slightly exaggerated...



--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: new tech report on easy-to-use IPsec

2010-08-14 Thread Steven Bellovin

On Aug 11, 2010, at 12:21:47 PM, Adam Aviv wrote:

> I think the list may get a kick out of this.
> 
> The tech-report was actually posted on the list previously, which is
> where I found it. Link included for completeness.
> 
> http://mice.cs.columbia.edu/getTechreport.php?techreportID=1433

Thanks.  I'll add that the code is now up on SourceForge under a BSD license:
http://sourceforge.net/projects/simple-vpn/


> 
> 
> 
>  Original Message 
> Subject: Re: new tech report on easy-to-use IPsec
> Date: Wed, 28 Jul 2010 21:36:47 -0400
> From: Steven Bellovin 
> To: Adam Aviv 
> 
> 
> On Jul 28, 2010, at 9:29:51 PM, Adam Aviv wrote:
>> I couldn't help but notice this nugget of wisdom in your report:
>> 
>> [quote]
>> 
>> Public key infrastructures (PKIs) are surrounded by a great
>> mystique. Organizations are regularly told that they are complex,
>> require ultra-high security, and perhaps are best outsourced to
>> competent parties. Setting up a certificate authority (CA) requires a
>> "ceremony", a term with a technical meaning [13] but nevertheless
>> redolent of high priests in robes, acolytes with censers, and
>> more. This may or may not be true in general; for most IPsec uses,
>> however, little of this is accurate. (High priests and censers are
>> definitely not needed; we are uncertain about the need for acolytes
>> ...)
> 
> Peter Gutmann told me privately that he thinks the alternate model
> involves human sacrifices and perhaps a goat...
> 
> 
>   --Steve Bellovin, http://www.cs.columbia.edu/~smb
> 
> 
> 
> 
> 
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com
> 


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Venona

2010-07-31 Thread Steven Bellovin
I'm currently reading "Defend the Realm", an authorized history of MI-5 by a 
historian who had access to their secret files.  The chapter on Venona has the 
following fascinating footnote: "The method of decryption is summarized in a 
number of NSA publications, among them the account by Cecil James Phillips of 
NSA, 'What Made Venona Possible?'  References to this account does not imply 
that it is corroborated by HMG or any British intelligence agency."

I've always been a bit skeptical of NSA's story of how the system was broken, 
which makes me wonder if there's some other reason for that disclaimer...  
(Grumpy aside: I have the full text of the ebook (borrowed from the NY Public 
Library) on my laptop.  However, I couldn't copy and paste the quotation, even 
though it's obviously fair use -- the DRM software wouldn't let me...)

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Is this the first ever practically-deployed use of a threshold scheme?

2010-07-31 Thread Steven Bellovin

On Jul 31, 2010, at 8:44:12 AM, Peter Gutmann wrote:

> Apparently the DNS root key is protected by what sounds like a five-of-seven
> threshold scheme, but the description is a bit unclear.  Does anyone know
> more?
> 
> (Oh, and for people who want to quibble over "practically-deployed", I'm not
> aware of any real usage of threshold schemes for anything, at best you have
> combine-two-key-components (usually via XOR), but no serious use of real n-
> of-m that I've heard of.  Mind you, one single use doesn't necessarily count
> as "practically deployed" either).

There is circumstantial evidence that such schemes were deployed for U.S. 
nuclear weapons command and control.  I also wonder if it's used for some of 
the NSA's root keys -- they run very large PKIs.
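
For anyone who hasn't seen one, a k-of-n scheme in the Shamir style fits in a
few lines.  This toy sketch works over a prime field and is meant only to show
the mechanism, not for operational use:

    import random

    P = 2 ** 127 - 1   # a Mersenne prime; the secret must be smaller than this

    def split(secret, n, k):
        # The secret is the constant term of a random degree-(k-1) polynomial;
        # each share is a point on that polynomial.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, n=7, k=5)
    print(combine(random.sample(shares, 5)))   # any five of the seven shares suffice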


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Obama administration seeks warrantless access to email headers.

2010-07-30 Thread Steven Bellovin

On Jul 30, 2010, at 3:58:08 PM, Perry E. Metzger wrote:

> On Fri, 30 Jul 2010 09:38:44 +0200 Stefan Kelm  wrote:
>> Perry,
>> 
>>>  The administration wants to add just four words -- "electronic
>>>  communication transactional records" -- to a list of items that
>>> the law says the FBI may demand without a judge's approval.
>>> Government
>> 
>> Would that really make that much of a difference? In Germany,
>> at least, the so-called "judge's approval" often isn't worth
>> a penny, esp. wrt. phone surveillance. It simply is way too
>> easy to get such an approval, even afterwards.
> 
> It is significantly harder here in the US.

Actually, no, it isn't.  Transaction record access is not afforded the same 
protection as content.  I'll skip the detailed legal citations; the standard 
now for transactional records is "if the governmental entity offers specific 
and articulable facts showing that there are reasonable grounds to believe that 
the contents of a wire or electronic communication, or the records or other 
information sought, are relevant and material to an ongoing criminal 
investigation."  This is much less than the "probably cause" and specificity 
standards for full-content wiretaps, which do enjoy very strong protection.

> Equally importantly, it is
> much simpler to determine what warrants were issued after the fact.
> 

Not in this case.  Since the target of such an order is not necessarily the 
suspect, the fact of the information transfer may never be introduced in open 
court.  Nor is there a disclosure requirement here, the way there is for 
full-content wiretaps.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Steven Bellovin

On Jul 28, 2010, at 9:22:29 AM, Peter Gutmann wrote:

> Steven Bellovin  writes:
> 
>> For the last issue, I'd note that using pki instead of PKI (i.e., many 
>> different per-realm roots, authorization certificates rather than identity 
>> certificates, etc.) doesn't help: Realtek et al. still have no better way or 
>> better incentive to revoke their own widely-used keys.
> 
> I think the problems go a bit further than just Realtek's motivation, if you 
> look at the way it's supposed to work in all the PKI textbooks it's:
> 
>  Time t: Malware appears signed with a stolen key.
>  Shortly after t: Realtek requests that the issuing CA revoke the cert.
>  Shortly after t': CA revokes the cert.
>  Shortly after t'': Signature is no longer regarded as valid.
> 
> What actually happened was:
> 
>  Time t: Malware appears signed with a stolen key.
>  Shortly after t: Widespread (well, relatively) news coverage of the issue.
> 
>  Time t + 2-3 days: The issuing CA reads about the cert problem in the news.
>  Time t + 4-5 days: The certificate is revoked by the CA.
>  Time t + 2 weeks and counting: The certificate is regarded as still valid by
>the sig-checking software.
> 
> That's pretty much what you'd expect if you're familiar with the realities of 
> PKI, but definitely not PKI's finest hour.  In addition you have:
> 
>  Time t - lots: Stuxnet malware appears (i.e. is noticed by people other than
>the victims)
>  Shortly after t - lots: AV vendors add it to their AV databases and push out
>updates
> 
> (I don't know what "lots" is here, it seems to be anything from weeks to
> months depending on which news reports you go with).
> 
> So if I'm looking for a defence against signed malware, it's not going to be 
> PKI.  That was the point of my previous exchange with Ben, assume that PKI 
> doesn't work and you won't be disappointed, and more importantly, you now 
> have 
> the freedom to design around it to try and find mechanisms that do work.

When I look at this, though, little of the problem is inherent to PKI.  Rather, 
there are faulty communications paths.

You note that at t+2-3 days, the CA read the news.  Apart from the question of 
whether or not "2-3 days" is "shortly after" -- the time you suggest the next 
step takes place -- how should the CA or Realtek know about the problem?  Did 
the folks who found the offending key contact either party?  Should they have?  
The AV companies are in the business of looking for malware or reports thereof; 
I think (though I'm not certain) that they have a sharing agreement for new 
samples.  (Btw -- I'm confused by your definition of "t" vs. "t-lots".  The 
first two scenarios appear to be "t == the published report appearing"; the 
third is confusing, but if you change the timeline to "t+lots" it works for "t 
== initial, unnoticed appearance in the wild".  Did the AV companies push 
something out long before the analysis showed the stolen key?)

Suppose, though, that Realtek has some Google profile set up to send them 
reports of malware affecting anything of theirs.  Even leaving aside false 
positives, once they get the alert they should do something.  What should that 
something be?  Immediately revoke the key?  The initial reports I saw were not 
nearly specific enough to identify which key was involved.  Besides, maybe the 
report was not just bogus but malicious -- a DoS attack on their key.  They 
really need to investigate it; I don't regard 2-3 days as unreasonable to 
establish communications with a malware analysis company you've never heard of 
and which has to verify your bona fides, check it out, and verify that the 
allegedly malsigned code isn't something you actually released N years ago as 
release 5.6.7.9.9.a.b for a minor product line you've since discontinued.  At 
that point, a revocation request should go out; delays past that point are not 
justifiable.  The issue of software still accepting it, CRLs notwithstanding, 
is more a sign of buggy code.

The point about the communications delay is that it's inherent to anything 
involving the source company canceling anything -- whether it's a PKI cert, a 
pki cert, a self-validating URL, a KDC, or magic fairies who warn sysadmins not 
to trust certain software.  

What's interesting here is the claim that AV companies could respond much 
faster.  They have three inherent advantages: they're in the business of 
looking for malware; they don't have to complete the analysis to see if a 
stolen key is involved; and they can detect problems after installation, 
whereas certs are checked only at installation time.  Of course, speedy act

Re: A mighty fortress is our PKI, Part II

2010-07-28 Thread Steven Bellovin

On Jul 28, 2010, at 8:21:33 AM, Ben Laurie wrote:

> On 28/07/2010 13:18, Peter Gutmann wrote:
>> Ben Laurie  writes:
>> 
>>> I find your response strange. You ask how we might fix the problems, then 
>>> you 
>>> respond that since the world doesn't work that way right now, the fixes 
>>> won't 
>>> work. Is this just an exercise in one-upmanship? You know more ways the 
>>> world 
>>> is broken than I do?
>> 
>> It's not just that the world doesn't work that way now, it's quite likely 
>> that 
>> it'll never work that way (for the case of PKI/revocations mentioned in the 
>> message, not the original SNI).  We've been waiting for between 20 and 30 
>> years (depending on what you define as the start date) for PKI to start 
>> working, and your response seems to indicate that we should wait even harder. 
>>  
>> If I look at the mechanisms we've got now, I can identify that commercial 
>> PKI 
>> isn't helping, and revocations aren't helping, and work around that.  I'm 
>> after effective practical solutions, not just "a solution exists, QED" 
>> solutions.
> 
> The core problem appears to be a lack of will to fix the problems, not a
> lack of feasible technical solutions.
> 
> I don't know why it should help that we find different solutions for the
> world to ignore?

There seem to be at least three different questions here: bad code (i.e., that 
Windows doesn't check the revocation status properly), the UI issue, and the 
conceptual question of what should replace the current PKI+{CRL,OCSP} model.  
For the last issue, I'd note that using pki instead of PKI (i.e., many 
different per-realm roots, authorization certificates rather than identity 
certificates, etc.) doesn't help: Realtek et al. still have no better way or 
better incentive to revoke their own widely-used keys.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: MITM attack against WPA2-Enterprise?

2010-07-26 Thread Steven Bellovin

On Jul 26, 2010, at 10:30:19 PM, Perry E. Metzger wrote:

> On Mon, 26 Jul 2010 21:42:53 -0400 Steven Bellovin
>  wrote:
>>> 
>>> I don't know, if it is truly only a ten line change to a common
>>> WPA2 driver to read, intercept and alter practically any traffic
>>> on the network even in enterprise mode, that would seem like a
>>> serious issue to me. Setting up the enterprise mode stuff to work
>>> is a lot of time and effort. If it provides essentially no
>>> security over WPA2 in shared key mode, one wonders what the point
>>> of doing that work is. This doesn't seem like a mere engineering
>>> compromise.
>> 
>> If I understand the problem correctly, it doesn't strike me as
>> particularly serious.  Fundamentally, it's a way for people in the
>> same enterprise and on the same LAN to see each other's traffic.  A
>> simple ARP-spoofing attack will do the same thing; no crypto
>> needed.  Yes, that's a more active attack, and in theory is
>> somewhat more noticeable.  In practice, I suspect the actual risk
>> is about the same.
> 
> I think the issue is that people have been given the impression that
> WPA2 provides enough security that people can feel reasonably secure
> that others will not be reading their traffic over the air the way
> that they might in a pure shared key scenario, and that this justified
> the extra complexity of deployment. While what you say is perfectly
> true, it does lead one to ask if WPA2 enterprise has not been
> significantly oversold.
> 
Probably...  To me, access link crypto is about access control.  WEP --
apart from the failings in RC4 and how it was used -- got that badly
wrong, because it was impossible to change keys in any rational way.
WPA2 was supposed to fix that; I'd have been happy if that were all
it did.  As others have noted, end-to-end crypto is the proper approach.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: MITM attack against WPA2-Enterprise?

2010-07-26 Thread Steven Bellovin
> 
> I don't know, if it is truly only a ten line change to a common WPA2
> driver to read, intercept and alter practically any traffic on the
> network even in enterprise mode, that would seem like a serious issue
> to me. Setting up the enterprise mode stuff to work is a lot of time
> and effort. If it provides essentially no security over WPA2 in shared
> key mode, one wonders what the point of doing that work is. This
> doesn't seem like a mere engineering compromise.

If I understand the problem correctly, it doesn't strike me as particularly 
serious.  Fundamentally, it's a way for people in the same enterprise and on 
the same LAN to see each other's traffic.  A simple ARP-spoofing attack will do 
the same thing; no crypto needed.  Yes, that's a more active attack, and in 
theory is somewhat more noticeable.  In practice, I suspect the actual risk is 
about the same.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


MITM attack against WPA2-Enterprise?

2010-07-25 Thread Steven Bellovin
There is a claim of a flaw in WPA2-Enterprise -- see 
http://wifinetnews.com/archives/2010/07/researchers_hints_8021x_wpa2_flaw.html

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Root Zone DNSSEC Deployment Technical Status Update

2010-07-18 Thread Steven Bellovin

On Jul 17, 2010, at 3:30:05 PM, Taral wrote:

> On Sat, Jul 17, 2010 at 7:41 AM, Paul Wouters  wrote:
>>> Several are using old SHA-1 hashes...
>> 
>> "old" ?
> 
> "old" in that they are explicitly not recommended by the latest specs
> I was looking at.

DNSSEC signatures do not need to have a long lifetime; no one cares if, in 10 
years, someone can find a preimage attack against today's signed zones.  This 
is unlike many other uses of digital signatures, where you may have to present 
evidence in court about what some did or did not sign.

It's also unclear to me what the actual deployment is of stronger algorithms, 
or of code that will do the right thing if multiple signatures are present.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


new tech report on easy-to-use IPsec

2010-07-14 Thread Steven Bellovin
Folks on this list may be interested in a new tech report:

Shreyas Srivatsan, Maritza Johnson, and Steven M. Bellovin. Simple-VPN: 
Simple IPsec configuration. Technical Report CUCS-020-10, Department of 
Computer Science, Columbia University, July 2010. 
http://mice.cs.columbia.edu/getTechreport.php?techreportID=1433

The IPsec protocol promised easy, ubiquitous encryption. That has never 
happened. For the most part, IPsec usage is confined to VPNs for road warriors, 
largely due to needless configuration complexity and incompatible 
implementations.  We have designed a simple VPN configuration language that 
hides the unwanted complexities. Virtually no options are necessary or 
possible. The administrator specifies the absolute minimum of information: the 
authorized hosts, their operating systems, and a little about the network 
topology; everything else, including certificate generation, is automatic. Our 
implementation includes a multitarget compiler, which generates 
implementation-specific configuration files for three different platforms; 
others are easy to add.

We hope to have the code up on Sourceforge soon.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Question w.r.t. AES-CBC IV

2010-07-09 Thread Steven Bellovin

On Jul 9, 2010, at 1:55:12 PM, Jonathan Katz wrote:

> CTR mode seems a better choice here. Without getting too technical, security 
> of CTR mode holds as long as the IVs used are "fresh" whereas security of CBC 
> mode requires IVs to be random.
> 
> In either case, a problem with a short IV (no matter what you do) is the 
> possibility of IVs repeating. If you are picking 32-bit IVs at random, you 
> expect a repeat after only (roughly) 2^16 encryptions (which is not very 
> many).
> 

Unless I misunderstand your point, I think that in the real world there's a 
very real difference in the insecurity of CBC vs CTR if the IV selection is 
faulty.  With CBC, there is semantic insecurity, in that one can tell if two 
messages have a common prefix if the IV is the same.  Furthermore, if the IV is 
predictable to the adversary under certain circumstances some plaintext can be 
recovered.

With CTR, however, there are very devastating two-message attacks if the IVs 
are the same; all that's necessary is some decent knowledge of some probable 
plaintext.  
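
A concrete sketch of the two-message CTR failure, assuming the pyca/cryptography
package is available: with the counter block reused, the XOR of the two
ciphertexts equals the XOR of the two plaintexts, so known plaintext in one
message immediately exposes the other.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)   # the same counter block reused: the bug

    def ctr_encrypt(pt):
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        return enc.update(pt) + enc.finalize()

    p1 = b"wire $1,000,000 to account 1234567"
    p2 = b"meeting is moved to 3pm, same room"
    c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)

    xor = bytes(a ^ b for a, b in zip(c1, c2))    # equals p1 XOR p2
    print(bytes(a ^ b for a, b in zip(xor, p1)))  # recovers p2 without touching the key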


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


A real case of malicious steganography in the wild?

2010-07-09 Thread Steven Bellovin
For years, there have been unverifiable statements in the press about assorted 
hostile parties using steganography.  There may now be a real incident -- or at 
least, the FBI has stated in court documents that it happened.

According to the Justice Department 
(http://www.justice.gov/opa/pr/2010/June/10-nsd-753.html), 11 Russian nationals 
have been operating as deep cover agents in the US for many years.  According 
to Complaint #2 (link at the bottom of the page), a steganographic program was 
used.  Other Internet-era techniques allegedly employed include ad hoc WiFi 
networks.  (To be sure, the FBI could be wrong.  In the past, I've seen them 
make mistakes about that, but they're *much* more experienced now.)

It will be interesting to see how this develops in court.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Commercial quantum cryptography system broken

2010-07-09 Thread Steven Bellovin
http://www.technologyreview.com/blog/arxiv/25189/

Not at all to my surprise, they broke it by exploiting a difference between a 
theoretical system and a real-world implementation.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Quantum Key Distribution: the bad idea that won't die...

2010-04-22 Thread Steven Bellovin
While I'm quite skeptical that QKD will prove of practical use, I do think it's 
worth investigating.  The physics are nice, and it provides an interesting and 
different way of thinking about cryptography.  I think that there's a 
non-trivial chance that it will some day give us some very different abilities, 
ones we haven't even thought of.  My analog is all of the strange and wondrous 
things our cryptographic protocols can do -- blind signatures, zero knowledge 
proofs, secure multiparty computation, and more -- things that weren't on the 
horizon just 35 years ago.  I'm reminded of a story about a comment Whit Diffie 
once heard from someone in the spook community about public key crypto.  "We 
had it first -- but we never knew what we had.  You guys have done much more 
with it than we ever did."  All they knew to do with public key was key 
distribution or key exchange; they didn't even invent digital signatures.  They 
had "non-secret encryption"; we had public key cryptography.

Might the same be true for QKD?  I have no idea.  I do suggest that it's worth 
thinking in those terms, rather than how to use it to replace conventional key 
distribution.  Remember that RSA's essential property is not that you can use 
it to set up a session key; rather, it's that you can use it to send a session 
key to someone with whom you don't share a secret.  

Beyond Perry's other points -- and QKD is inherently point-to-point; you need 
n^2 connections, since you can't terminate the link-layer crypto at a router 
without losing your security guarantees -- it's worth reminding people that the 
security guarantees apply to ideal quantum systems.  If your emitter isn't 
ideal -- and of course it isn't -- it can (will?) emit more photons; I can play 
my interception games with the ones your detector doesn't need.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: "Against Rekeying"

2010-03-25 Thread Steven Bellovin

On Mar 23, 2010, at 11:21 AM, Perry E. Metzger wrote:

> 
> Ekr has an interesting blog post up on the question of whether protocol
> support for periodic rekeying is a good or a bad thing:
> 
> http://www.educatedguesswork.org/2010/03/against_rekeying.html
> 
> I'd be interested in hearing what people think on the topic. I'm a bit
> skeptical of his position, partially because I think we have too little
> experience with real world attacks on cryptographic protocols, but I'm
> fairly open-minded at this point.

I'm a bit skeptical -- I think that ekr is throwing the baby out with the bath 
water.  Nobody expects the Spanish Inquisition, and nobody expects linear 
cryptanalysis, differential cryptanalysis, hypertesseract cryptanalysis, etc.  
A certain degree of skepticism about the strength of our ciphers is always a 
good thing -- no one has ever deployed a cipher they think their adversaries 
can read, but we know that lots of adversaries have read lots of "unbreakable" 
ciphers.

Now -- it is certainly possible to go overboard on this, and I think the IETF 
often has.  (Some of the advice given during the design of IPsec was quite 
preposterous; I even thought so then...)  But one can calculate rekeying 
intervals based on some fairly simple assumptions about the amount of 
{chosen,known,unknown} plaintext/ciphertext pairs needed and the work factor for 
the attack, multiplied by the probability of someone developing an attack of 
that complexity, and everything multiplied by Finagle's Constant.  The trick, 
of course, is to make the right assumptions.  But as Bruce Schneier is fond of 
quoting, attacks never get worse; they only get better.  Given recent research 
results, does anyone want to bet on the lifetime of AES?  Sure, the NSA has 
rated it for Top Secret traffic, but I know a fair number of people who no 
longer agree with that judgment.  It's safe today -- but will it be safe in 20 
years?  Will my plaintext still be sensitive then?
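
As a toy version of that calculation -- every number below is an assumption
pulled out of the air, which is rather the point -- one might budget the
traffic allowed under a single key at well below what a feared attack would
need:

    pairs_needed_by_attack = 2 ** 44   # plaintext/ciphertext pairs the feared attack needs
    blocks_per_day         = 2 ** 30   # traffic volume encrypted under this key
    finagle_constant       = 16        # margin for the attack nobody has thought of yet

    rekey_after_days = pairs_needed_by_attack / finagle_constant / blocks_per_day
    print(f"rekey at least every {rekey_after_days:.0f} days")   # 1024 days with these numbers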

All of that is beside the point.  The real challenge is often to design a 
system -- note, a *system*, not just a protocol -- that can be rekeyed *if* the 
long-term keys are compromised.  Once you have that, setting the time interval 
is a much simpler question, and a question that can be revisited over time as 
attacks improve.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Security of Mac Keychain, Filevault

2009-11-02 Thread Steven Bellovin


On Oct 29, 2009, at 11:25 PM, Jerry Leichter wrote:

> A couple of days ago, I pointed to an article claiming that these
> were easy to break, and asked if anyone knew of security analyses of
> these facilities.
> 
> I must say, I'm very disappointed with the responses.  Almost
> everyone attacked the person quoted in the article.  The attacks
> they assumed he had in mind were unproven or unimportant or
> insignificant.  Gee ... sounds *exactly* like the response you get
> from companies when someone finds a vulnerability in their
> products:  It's not proven; who is this person anyway; even if there
> is an attack, it isn't of any practical importance.


Unfortunately, there's no better response here.

At time T, someone will assert that "X is insecure", and that products  
exist -- commercial and freeware -- to crack it.  This person supplies  
no evidence except for an incomplete list of products to support the  
assertion.  What do I now know that I didn't know before?


One way to judge is by reputation.  If, say, Adi Shamir says it, I'm  
very inclined to believe it, even without wading through the technical  
details.  If the posting comes from a notorious crank, I'll likely  
discard the message unread because cranks tend to misread technical  
papers.  If it's someone I've never heard of, I have to make the  
decision based on the evidence presented and what I already know.   
What was the evidence here?


The article made no verifiable or falsifiable technical statements, so  
there's nothing to evaluate in that respect.  I've never heard of any  
freeware to crack Filevault; given the familiarity of the readership  
of this list in the aggregate with the free software world, it seems  
unlikely that such software exists.  He did point to some commercial  
software to attack Filevault, but it works by password guessing.  For  
his business -- forensic analysis -- I suspect that that technique is  
extremely useful; I doubt that anyone on this list would disagree.   
But that's not the same as a flaw in MacOS.


Beyond that, we're left with *no* new information.  What basis does  
this article give us to conclude that Filevault is -- or is not --  
insecure?  I have no more reason to trust it or distrust it than I had  
before reading that article.


A proper evaluation of Filevault would, of course, be a good idea.   
But that statement is equally true after the article as before.



--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Security of Mac Keychain, File Vault

2009-10-26 Thread Steven Bellovin


On Oct 24, 2009, at 5:31 PM, Jerry Leichter wrote:

The article at http://www.net-security.org/article.php?id=1322  
claims that both are easily broken.  I haven't been able to find any  
public analyses of Keychain, even though the software is open-source  
so it's relatively easy to check.  I ran across an analysis of File  
Vault not long ago which pointed out some fairly minor nits, but  
basically claimed it did what it set out to do.


The article makes a bunch of other claims which aren't obviously  
unreasonable.


Anyone know of more recent analysis of Mac encryption stuff?   
(OS bugs/security holes are a whole other story)


The article specifically mentions Mac Marshall for attacking  
FileVault, but from the descriptions of it I can find it's just doing  
password guessing.



--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Possibly questionable security decisions in DNS root management

2009-10-20 Thread Steven Bellovin


On Oct 17, 2009, at 5:23 AM, John Gilmore wrote:


Even plain DSA would be much more space efficient on the signature
side - a DSA key with p=2048 bits, q=256 bits is much stronger than a
1024 bit RSA key, and the signatures would be half the size. And NIST
allows (2048,224) DSA parameters as well, if saving an extra 8 bytes
is really that important.


DSA was (designed to be) full of covert channels.


The evidence that it was an intentional design feature is, to my  
knowledge, slim.  More relevant to this case is why it matters: what  
information is someone trying to smuggle out via the DNS?  Remember  
that DNS records are (in principle) signed offline; servers are  
signing *records*, not responses.  In other words, it's more like a  
certificate model than the TLS model.
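
To make the offline-signing point concrete, here is a minimal sketch
in Python.  It borrows Ed25519 from the "cryptography" package purely
for illustration; DNSSEC's actual algorithms, record formats, and key
handling are different.

    # Illustration of the offline-signing model only; NOT DNSSEC's
    # record format or algorithm suite.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Done offline, on a machine that never sees a query:
    zone_key = Ed25519PrivateKey.generate()
    records = [
        b"example.com. 86400 IN NS a.iana-servers.net.",
        b"example.com. 86400 IN NS b.iana-servers.net.",
    ]
    signed_zone = [(rr, zone_key.sign(rr)) for rr in records]
    zone_pubkey = zone_key.public_key()

    # The name server just serves (record, signature) pairs; it holds
    # no secret, so compromising it doesn't let anyone forge records.
    def serve(zone):
        for rr, sig in zone:
            yield rr, sig

    # A resolver verifies against the zone's public key.
    for rr, sig in serve(signed_zone):
        try:
            zone_pubkey.verify(sig, rr)
        except InvalidSignature:
            raise SystemExit("bogus record")
    print("all records verified")

The point is only the shape: the signing key lives offline, and a
resolver checks signatures against the zone's public key, much as it
would check a certificate.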


Given that they attempted to optimize for minimal packet size, the
choice of RSA for signatures actually seems quite bizarre.


It's more bizarre than you think.  But packet size just isn't that big
a deal.  The root only has to sign a small number of records -- just
two or three for each top level domain -- and the average client is
going to use .com, .org, their own country, and a few others.  Each
of these records is cached on the client side, with a very long
timeout (e.g. at least a day).  So the total extra data transfer for
RSA (versus other) keys won't be either huge or frequent.  DNS traffic
is still a tiny fraction of overall Internet traffic.  We now have
many dozens of root servers, scattered all over the world, and if the
traffic rises, we can easily make more by linear replication.  DNS
*scales*, which is why we're still using it, relatively unchanged,
after more than 30 years.


It's rather more complicated than that.  The issue isn't bandwidth per  
se, at least not as compared with total Internet bandwidth.  Bandwidth  
out of a root server site may be another matter.  Btw, the DNS as  
designed 25 years ago would not scale to today's load.  There was a  
crucial design mistake: DNS packets were limited to 512 bytes.  As a  
result, there are 10s or 100s of millions of machines that read *only*  
512 bytes.  That in turn means that there can be at most 13 root  
servers.  More precisely, there can be at most 13 root names and IP  
addresses.  (We could possibly have one or two more if there was just  
one name that pointed to many addresses, but that would complicate  
debugging the DNS.)  The DNS is working today because of anycasting;  
many -- most?  all? -- of the 13 IP addresses exist at many points in  
the Internet, and depend on routing system magic to avoid problems.   
At that, anycasting works much better for UDP than for TCP, because it  
will fail utterly if some packets in a conversation go to one  
instantiation and others go elsewhere.


It is possible to have larger packets, but only if there is prior  
negotiation via something called EDNS0.  At that, you still *really*  
want to stay below 1500 bytes, the Ethernet MTU.  If you exceed that,  
you get fragmentation, which hurts reliability.  But whatever the  
negotiated maximum DNS response size, if the data exceeds that value  
the server will say "response truncated; ask me via TCP".  That, in  
turn, will cause massive problems.  Many hosts won't do TCP properly  
and many firewalls are incorrectly configured to reject DNS over TCP.   
Those problems could, in principle, be fixed.  But TCP requires a  
3-way handshake to set up the connection, then a 2-packet exchange for  
the data and response (more if the response won't fit in a single  
packet), plus another 3 packets to tear down the connection.  It also  
requires a lot of state -- and hence kernel memory -- on the server.   
There are also reclamation issues if the TCP connection stops -- but  
isn't torn down -- in just the proper way (where the server is in  
FIN-WAIT-2 state), which in turn might happen if the routing system  
happens to direct some anycast packets elsewhere.
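
Schematically, the server-side size decision looks something like the
sketch below -- Python pseudocode, not taken from any real name
server; real servers drop whole records rather than slicing bytes:

    CLASSIC_LIMIT = 512    # pre-EDNS0 ceiling for UDP responses
    ETHERNET_SAFE = 1472   # 1500-byte MTU minus IPv4 and UDP headers
    TC_BIT = 0x0200        # "truncated" flag in the DNS header

    def udp_limit(has_edns0, advertised_payload):
        """Largest UDP response we should send for this query."""
        if has_edns0:
            # The client advertised a bigger buffer, but staying under
            # the Ethernet MTU avoids IP fragmentation.
            return min(advertised_payload, ETHERNET_SAFE)
        return CLASSIC_LIMIT

    def finalize(response, flags, has_edns0, advertised_payload=512):
        limit = udp_limit(has_edns0, advertised_payload)
        if len(response) <= limit:
            return response, flags
        # Too big: set TC and let the client retry over TCP -- which,
        # as noted above, often fails in the real world.
        return response[:limit], flags | TC_BIT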


To sum up: there really are reasons why it's important to keep DNS  
responses small.  I suspect we'll have to move towards elliptic curve  
at some point, though there are patent issues (or perhaps patent FUD;  
I have no idea) there.
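
For a sense of scale, the usual textbook figures for the signature
alone -- ignoring the rest of the RRSIG record, and treating them as
approximate -- run roughly as follows:

    # Approximate signature sizes in bytes (raw, not DER, for
    # DSA/ECDSA), versus the classic 512-byte UDP budget.
    SIG_BYTES = {
        "RSA-1024":       128,
        "RSA-2048":       256,
        "DSA (2048,256)":  64,   # r and s, 32 bytes each
        "ECDSA P-256":     64,
        "ECDSA P-384":     96,
    }
    for alg, size in sorted(SIG_BYTES.items(), key=lambda kv: kv[1]):
        print("%-15s %4d bytes  (%d%% of 512)" % (alg, size, 100 * size // 512))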


The bizarre part is that the DNS Security standards had gotten pretty
well defined a decade ago,


Actually, no; the design then was wrong.  It looked ok from the crypto  
side, but there were subtle points in the DNS design that weren't  
handled properly.  I'll skip the whole saga, but it wasn't until RFC  
4033-4035 came out, in March 2005, that the specs were correct.  There  
are still privacy concerns about parts of DNSSEC.



when one or more high-up people in the IETF
decided that "no standard that requires the use of Jim Bidzos's
monopoly crypto algorithm is ever going to be approved on my watch".
Jim had just pissed off one too many people, in his role as CEO of RSA
Data Security and the second most hated guy in crypto.  (NSA export
controls was the first r

Review of new book on the NSA

2009-10-14 Thread Steven Bellovin
There's a new book on the NSA, based largely on documents received via  
Freedom of Information Act requests.  Bamford's review is at  
http://www.nybooks.com/articles/23231 .


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [Barker, Elaine B.] NIST Publication Announcements

2009-09-30 Thread Steven Bellovin


On Sep 29, 2009, at 10:31 AM, Perry E. Metzger wrote:



Stephan Neuhaus  writes:

For business reasons,
Alice can't force Bob to use a particular TTA, and it's also
impossible to stipulate a particular TTA as part of the job
description (the reason is that Alice and the Bobs---great band name
BTW---won't agree to trust any particular TTA and also don't want to
operate their own).


You don't need such a complicated description -- you're just asking "can
I do secure timestamping without requiring significant trust in the
timestamping authority."

The Haber & Stornetta scheme provides a timestamping service that
doesn't require terribly much trust, since hard to forge widely
witnessed events delimit particular sets of timestamps. The only issue
is getting sufficient granularity.



I don't know if their scheme was patented in Germany.  It was in the  
U.S., though I think that at least some of the patents expire within  
the year.
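
For readers who haven't seen it, here is a minimal sketch of the
linked-timestamping idea -- a toy in Python; the real Haber &
Stornetta scheme adds Merkle trees, rounds, and renewal:

    import hashlib, time

    def h(data):
        return hashlib.sha256(data).digest()

    class LinkedTimestamper:
        """Toy linked timestamping: each receipt's hash commits to the
        previous one, so back-dating an entry means recomputing -- and
        republishing -- everything after it."""

        def __init__(self):
            self.head = b"\x00" * 32   # genesis value
            self.log = []

        def stamp(self, document_hash):
            t = time.time()
            link = h(self.head + document_hash + repr(t).encode())
            self.log.append((t, document_hash, self.head, link))
            self.head = link
            return {"time": t, "prev_head": self.log[-1][2], "link": link}

        def publish_head(self):
            """The value to put somewhere widely witnessed (a newspaper
            ad, in the original papers)."""
            return self.head.hex()

    ts = LinkedTimestamper()
    receipt = ts.stamp(h(b"contract draft 7"))
    print(receipt["time"], ts.publish_head())

The trust needed in the service is correspondingly small: it can
refuse to stamp, but it cannot quietly back-date anything without
redoing and republishing the whole chain after that point.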


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


FileVault on other than home directories on MacOS?

2009-09-21 Thread Steven Bellovin
Is there any way to use FileVault on MacOS except on home  
directories?  I don't much want to use it on my home directory; it  
doesn't play well with Time Machine (remember that availability is  
also a security property); besides, different directories of mine have  
different sensitivity levels.


I suppose I could install TrueCrypt (other suggestions or comments on  
TrueCrypt?), but I prefer to minimize the amount of extra software I  
have to maintain.
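
One obvious alternative is a per-directory encrypted disk image.  A
hedged sketch, driving hdiutil from Python -- the flag names are from
memory, so check "man hdiutil" before trusting them:

    import getpass, subprocess

    # Hedged sketch: create and mount an AES-256 encrypted sparse
    # bundle as a per-directory alternative to FileVault.  hdiutil
    # options are from memory; verify against `man hdiutil`.
    passphrase = getpass.getpass("image passphrase: ").encode() + b"\0"

    subprocess.run(
        ["hdiutil", "create", "-size", "2g", "-type", "SPARSEBUNDLE",
         "-fs", "HFS+J", "-encryption", "AES-256", "-volname", "Sensitive",
         "-stdinpass", "Sensitive.sparsebundle"],
        input=passphrase, check=True)

    # Mounts under /Volumes/Sensitive; -stdinpass reads a
    # null-terminated passphrase from standard input.
    subprocess.run(
        ["hdiutil", "attach", "-stdinpass", "Sensitive.sparsebundle"],
        input=passphrase, check=True)

Sparse bundles also coexist with Time Machine better than monolithic
images, since only the changed band files get copied on each backup.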



--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


NSA intercepts led to a terrorist conviction

2009-09-09 Thread Steven Bellovin

"Threat Level Privacy, Crime and Security Online
NSA-Intercepted E-Mails Helped Convict Would-Be Bombers
The three men convicted in the United Kingdom on Monday of a plot to  
bomb several transcontinental flights were prosecuted in part using  
crucial e-mail correspondences intercepted by the U.S. National  
Security Agency, according to Britain’s Channel 4.


The e-mails, several of which have been reprinted by the BBC and other  
publications, contained coded messages, according to prosecutors. They  
were intercepted by the NSA in 2006 but were not included in evidence  
introduced in a first trial against the three last year."



http://www.wired.com/threatlevel/2009/09/nsa-email/ has more.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-09-08 Thread Steven Bellovin


On Sep 3, 2009, at 12:26 AM, Peter Gutmann wrote:


Steven Bellovin  writes:

This returns us to the previously-unsolved UI problem: how -- with
today's users, and with something more or less like today's browsers
since that's what today's users know -- can a spoof-proof password
prompt be presented?


Good enough to satisfy security geeks, no, because no measure you take
will ever be good enough.  However if you want something that's good
enough for most purposes then Camino has been doing something pretty
close to this since it was first released (I'm not aware of any other
browser that's even tried).  When you're asked for credentials, the
dialog rolls down out of the browser title bar in a hard-to-describe
scrolling motion a bit like a supermarket till printout.  In other
words instead of a random popup appearing in front of you from who
knows what source and asking for a password, you've got a direct
visual link to the thing that the credentials are being requested for.
You can obviously pepper and salt this as required (and I wouldn't
dream of deploying something like this without getting UI folks to
comment and test it on real users first), but doing this is a
tractable UI design issue and not an intractable
business-model/political/social/etc problem.



Several other people made similar suggestions.  They all boil down to  
the same thing, IMO -- assume that the user will recognize something  
distinctive or know to do something special for special sites like  
banks.  Both, to me, are unproven assumptions.  Worse yet, both the  
security literature and what I've seen of user behavior strongly  
suggest to me that neither scenario is true.


Peter, I'm not sure what you mean by "good enough to satisfy security  
geeks" vs. "good enough for most purposes".  I'm not looking for  
theoretically good enough, for any value of "theory"; my metric -- as  
a card-carrying security geek -- is precisely "good enough for most  
purposes".  A review of user studies of many different distinctive  
markers, from yellow URL bars to green partial-URL bars to special  
pictures to you-name-it shows that users either never notice the  
*absence* of the distinctive feature or are fooled by a tailored  
attack (see, e.g., the paper on picture-in-picture attacks).  Maybe  
Camino is really better -- or maybe it just hasn't been properly  
attacked yet, say by a clever flash animation or some AJAX weirdness.   
Given the failure of all previous attempts -- who, amongst the  
proponents of EV certificates, realized that attackers could and would  
use all-green favicon.ico files to fool users -- I think the burden of  
proof is on the proponents.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-09-04 Thread Steven Bellovin


On Aug 26, 2009, at 6:26 AM, Ben Laurie wrote:

On Mon, Aug 10, 2009 at 6:35 PM, Peter Gutmann wrote:
More generally, I can't see that implementing client-side certs gives
you much of anything in return for the massive amount of effort
required because the problem is a lack of server auth, not of client
auth.  If I'm a phisher then I set up my bogus web site, get the
user's certificate-based client auth message, throw it away, and
report successful auth to the client.  The browser then displays some
sort of indicator that the high-security certificate auth was
successful, and the user can feel more confident than usual in
entering their credit card details.  All you're doing is building
even more substrate for phishing attacks.

Without simultaneous mutual auth, which TLS-SRP/-PSK provide but PKI
doesn't, you're not getting any improvement, and potentially just
making things worse by giving users a false sense of security.


I certainly agree that if the problem you are trying to solve is
server authentication, then client certs don't get you very far. I
find it hard to feel very surprised by this conclusion.

If the problem you are trying to solve is client authentication then
client certs have some obvious value.

That said, I do tend to agree that mutual auth is also a good avenue
to pursue, and the UI you describe fits right in with Chrome's UI in
other areas. Perhaps I'll give it a try.



This returns us to the previously-unsolved UI problem: how -- with  
today's users, and with something more or less like today's browsers  
since that's what today's users know -- can a spoof-proof password  
prompt be presented?


--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Kahn's "Seizing the Enigma" back in print -- with a catch

2009-08-13 Thread Steven Bellovin
David Kahn's "Seizing the Enigma" is back in print.  However, it's  
only available from Barnes and Noble -- their publishing arm is doing  
the reprint.  According to the preface, the new edition corrects minor  
errors, but didn't give any details.


http://search.barnesandnoble.com/Seizing-the-Enigma/David-Kahn/e/9781435107915/?itm=1

--Steve Bellovin, http://www.cs.columbia.edu/~smb





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com