Re: Client Certificate UI for Chrome?

2009-09-09 Thread Steven M. Bellovin
On Wed, 09 Sep 2009 15:42:34 +1000
James A. Donald jam...@echeque.com wrote:

 Steven Bellovin wrote:
  Several other people made similar suggestions.  They all boil down
  to the same thing, IMO -- assume that the user will recognize
  something distinctive or know to do something special for special
  sites like banks. 
 
 Not if he only does it for special sites like banks, but if
 something special is pretty widely used, he will notice when things
 are different.

We conducted a small-scale controlled user study -- it didn't work.
 
  Peter, I'm not sure what you mean by "good enough to satisfy
  security geeks" vs. "good enough for most purposes."  I'm not
  looking for "theoretically good enough," for any value of theory;
  my metric -- as a card-carrying security geek -- is precisely "good
  enough for most purposes."  A review of user studies of many
  different distinctive markers, from yellow URL bars to green
  partial-URL bars to special pictures to you-name-it shows that
  users either never notice the *absence* of the distinctive feature
 
 I never thought that funny colored url bars for banks would help, and 
 ridiculed that suggestion when it was first made, and said it was
 merely an effort to get more money for CAs, and not a serious
 security proposal
 
 The fact that obviously stupid and ineffectual methods have failed is 
 not evidence that better methods would also fail.
 
 Seems to me that you are making the argument "We have tried
 everything that might increase CA revenues, and none of it has
 improved user security, so obviously user security cannot be
 improved."
 
Not quite.  I'm not saying it cannot be improved.  I'm saying that
controlled studies thus far have demonstrated none of the proposed
methods have worked, against fairly straightforward new attacks.  And
if we've learned one thing over the last ten years, it's that the
attackers are as good as we are at what they do.  There's money to be
made and the market has worked its wonders: there is a demand for
capable hackers, and they're making enough money to attract good people.

What I am saying is twofold.  First -- when you invent a new scheme,
do a scientific test: does it actually help?  Don't assume that because
pure reason tells you it's a good idea, it actually is in the real
world.  Second -- you may very well be right that tinkering with the
password entry mechanisms cannot succeed, because users are habituated
to many different mechanisms and to login screens that regularly change
because some VP in charge of publicity has decided that the site's web
presence looks old-fashioned and needs to be freshened.  In that case,
we have to look at entirely different approaches.  (How many different
experiments will it take to convince people that you can't make gold by
mixing chemicals together?)


--Steve Bellovin, http://www.cs.columbia.edu/~smb

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


spyware on Blackberries

2009-07-16 Thread Steven M. Bellovin
http://feeds.wired.com/~r/wired27b/~3/CFV8MEwH_rM/

A BlackBerry update that a United Arab Emirates service provider pushed
out to its customers contains U.S.-made spyware that would allow the
company or others to siphon and read their e-mail and text messages,
according to a researcher who examined it.

The update was billed as a “performance enhancement patch” by the
UAE-based phone and internet service provider Etisalat, which issued
the patch for its 100,000 subscribers.

...



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: MD6 withdrawn from SHA-3 competition

2009-07-04 Thread Steven M. Bellovin
On Thu, 2 Jul 2009 20:51:47 -0700
Joseph Ashwood ashw...@msn.com wrote:

 --
 Sent: Wednesday, July 01, 2009 4:05 PM
 Subject: MD6 withdrawn from SHA-3 competition
 
  Also from Bruce Schneier, a report that MD6 was withdrawn from the
  SHA-3 competition because of performance considerations.
 
 I find this disappointing. With the rate of destruction of primitives
 in any such competition I would've liked to see them let it stay
 until it is either broken or at least until the second round. A quick
 glance at the SHA-3 zoo and you won't see much left with no attacks.
 It would be different if it was yet another M-D, using AES as a
 foundation, blah, blah, blah, but MD6 is a truly unique and
 interesting design.
 
 I hope the report is wrong, and in keeping that hope alive, the MD6
 page has no statement about the withdrawal.

The report is quite correct.  Rivest sent a note to NIST's hash forum
mailing list (http://csrc.nist.gov/groups/ST/hash/email_list.html)
announcing the withdrawal.  Since a password is necessary to access the
archives (anti-spam?), I don't want to post the whole note, but Rivest
said that they couldn't improve MD6's performance to meet NIST's
criteria (at least as fast as SHA-2); the designers of MD6 felt that
they could not manage that and still achieve provable resistance to
differential attacks, and they regard the latter as very important.
Here's the essential paragraph:

Thus, while MD6 appears to be a robust and secure cryptographic
hash algorithm, and has much merit for multi-core processors,
our inability to provide a proof of security for a
reduced-round (and possibly tweaked) version of MD6 against
differential attacks suggests that MD6 is not ready for
consideration for the next SHA-3 round.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



visualizing modes of operation

2009-05-21 Thread Steven M. Bellovin
http://www.cryptosmith.com/archives/621


--Steve Bellovin, http://www.cs.columbia.edu/~smb



80-bit security? (Was: Re: SHA-1 collisions now at 2^{52}?)

2009-05-07 Thread Steven M. Bellovin
On Thu, 30 Apr 2009 17:44:53 -0700
Jon Callas j...@callas.org wrote:

 The accepted wisdom
 on 80-bit security (which includes SHA-1, 1024-bit RSA and DSA keys,
 and other things) is that it is to be retired by the end of 2010. 

That's an interesting statement from a historical perspective -- is it
true?  And what does that say about our ability to predict the future,
and hence to make reasonable decisions on key length?

See, for example, the 1996 report on key lengths, by Blaze, Diffie,
Rivest, Schneier, Shimomura, Thompson, and Wiener, available at
http://www.schneier.com/paper-keylength.html -- was it right?

In 1993, Brickell, Denning, Kent, Maher, and Tuchman's interim report
on Skipjack (I don't believe there was ever a final report) stated that
Skipjack (an 80-bit cipher) was likely to be secure for 30-40 years.
Was it right?

The problem with SHA-1 is not its 80-bit security, but rather that it's
not that strong.
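
As a sanity check on such predictions, here is a back-of-the-envelope sketch; the starting rate of 2^40 keys/second and the 18-month doubling period are illustrative assumptions, not figures from the cited reports:

```python
# Back-of-the-envelope estimate: years to exhaust an 80-bit keyspace by
# brute force, assuming the attacker's throughput doubles every 18 months
# (Moore's law).  The starting rate is an illustrative assumption.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(key_bits, start_rate, doubling_years=1.5):
    keys = 2.0 ** key_bits
    rate = float(start_rate)
    tried = 0.0
    years = 0.0
    step = 0.1  # simulate in tenth-of-a-year increments
    while tried < keys:
        tried += rate * step * SECONDS_PER_YEAR
        rate *= 2.0 ** (step / doubling_years)  # capability grows over time
        years += step
    return years

# Under these (generous-to-the-defender) assumptions, 2^80 falls in
# roughly two decades -- which shows why "secure for 30-40 years" was a
# bold claim to make in 1993.
print(f"{years_to_exhaust(80, 2**40):.1f} years")
```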

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Some old works

2009-04-30 Thread Steven M. Bellovin
While poking around Google Books, I stumbled on the following two
references that might be of interest to this list.  The first is cited
by Kahn.

\emph{The Military Telegraph During the Civil War in the United States:
With an Exposition of Ancient and Modern Means of Communication,
and of the Federal and Confederate Cipher Systems ; Also a Running
Account of the War Between the States}
By William Rattle Plum
Published by Jansen, McClurg & Company, 1882
http://books.google.com/books?id=trpBIAAJ

Secret Writing
The Century
Published by The Century Co., 1913
http://books.google.com/books?id=LbIul9mwYtsC&printsec=titlepage#PPA83,M1



--Steve Bellovin, http://www.cs.columbia.edu/~smb



A reunion at Bletchley Park

2009-04-30 Thread Steven M. Bellovin
http://www.google.com/hostednews/ap/article/ALeqM5jFmxwZmt8V4URihSIugJroZE4yKgD974J72O0


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Legalities: NSA outsourcing spying on Americans?

2009-04-30 Thread Steven M. Bellovin
The assertion occasionally comes up that since the NSA cannot legally
eavesdrop on Americans, it outsources to the UK or one of the other
Echelon countries.  It turns out that that's forbidden, too -- see
Section 2.12 of Executive Order 12333
(http://www.archives.gov/federal-register/codification/executive-order/12333.html).

Now, I'm not saying that the government or the NSA always follows the
rules; I'm simply saying that that loophole is pretty obvious and is
(officially, at least) closed.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Judge orders defendant to decrypt PGP-protected laptop

2009-03-04 Thread Steven M. Bellovin
On Tue, 03 Mar 2009 17:05:32 -0800
John Gilmore g...@toad.com wrote:

  I would not read too much into this ruling -- I think that this is a
  special situation, and does not address the more important general
  issue.  
  In other cases, where alternative evidence is not available to the
  government, and where government agents have not already had a look
  at the contents, the facts (and hence perhaps the ruling) would be
  different.
 
 Balls.  This is a straight end-run attempt around the Fifth Amendment.
 The cops initially demanded a court order making him reveal his
 password -- then modified their stance on appeal after they lost.  So
 he can't be forced to reveal it, but on a technicality he can be
 forced to produce the same effect as revealing it?  Just how broad is
 this technicality, and how does it get to override a personal
 constitutional right?

Courts very rarely issue broader rulings than they absolutely have to.
*Given the facts of this particular case* -- where Federal agents have
already seen the putatively-illegal images -- it strikes me as unlikely
there will be a definitive ruling in either direction.

Let me refer folks to Orin Kerr's blog on the original ruling:
http://volokh.com/posts/chain_1197670606.shtml .  I rarely agree with
Kerr; this time, after thinking about it a *lot*, I concluded he was
likely correct.  I suggest that people read his post (including all the
'click here to see more' links, which seem to require (alas)
Javascript) and the precedents cited.  It doesn't mean I agree with all
of those rulings (I don't), or that I think the courts should rule
against Boucher.  What I'm saying is that based on precedent and the
facts of this case, I think they will.

Here's a crucial factual excerpt from Kerr's blog:

The agent came across several files with truly revolting titles
that strongly suggested the files themselves were child
pornography. The files had been opened a few days earlier, but
the agent found that he could not open the file when he tried
to do so. Agents asked Boucher if there was child pornography
in the computer, and Boucher said he wasn't sure; he downloaded
a lot of pornography on to his computer, he said, but he
deleted child pornography when he came across it.

In response to the agents' request, Boucher waived his Miranda
rights and agreed to show the agents where the pornography on
the computer was stored. The agents gave the computer to
Boucher, who navigated through the machine to a part of the
hard drive named "drive Z." The agents then asked Boucher to
step aside and started to look through the computer themselves.
They came across several videos and pictures of child
pornography. Boucher was then arrested, and the agents powered
down the laptop.

Also note this text from the original ruling (at
http://www.volokh.com/files/Boucher.pdf) supporting Boucher:

Both parties agree that the contents of the laptop do
not enjoy Fifth Amendment protection as the contents
were voluntarily prepared and are not testimonial. See
id. at 409-10 (holding previously created work
documents not privileged under the Fifth Amendment).
Also, the government concedes that it cannot compel
Boucher to disclose the password to the grand jury
because the disclosure would be testimonial. The
question remains whether entry of the password, giving
the government access to drive Z, would be testimonial
and therefore privileged.

The legal issue is very narrow: is entering the password testimonial,
and thus protected?  Again: "both parties agree that the contents of the
laptop do not enjoy Fifth Amendment protection as the contents were
voluntarily prepared and are not testimonial."

Beyond that, Boucher waived his Miranda rights in writing and showed the
agent the (I assume) relevant folders.  That, coupled with the
precedents from Fisher, Hubbell, etc., make it likely, in my
non-lawyerly opinion, that the government will prevail. *But* -- I
predict that the ruling will be narrow.  It will not (I suspect and
hope) result in a ruling that the government can always compel the
production of keys.

(Philosophical aside: I've never been happy with the way the Fifth
Amendment has been interpreted.  To me, it's about freedom of
conscience, rather than freedom from bringing punishment upon oneself.
The law supports that in other situations -- the spousal exemption, the
priest-penitent privilege, etc.  This is why grants of immunity and
especially use immunity have always troubled me.  I recognize, though,
that this is not the way the law works.)

So -- I suspect that Boucher is going to lose.  The real question is
whether the ruling will be narrow, based on these facts, or whether
some judge will issue a broad ruling on withholding keys.

--Steve 

Re: Judge orders defendant to decrypt PGP-protected laptop

2009-03-03 Thread Steven M. Bellovin
On Tue, 03 Mar 2009 12:26:32 -0500
Perry E. Metzger pe...@piermont.com wrote:

 
 Quoting:
 
A federal judge has ordered a criminal defendant to decrypt his
hard drive by typing in his PGP passphrase so prosecutors can view
the unencrypted files, a ruling that raises serious concerns about
self-incrimination in an electronic age.
 
 http://news.cnet.com/8301-13578_3-10172866-38.html
 
I would not read too much into this ruling -- I think that this is a
special situation, and does not address the more important general
issue.  To me, this part is crucial:

Judge Sessions reached his conclusion by citing a Second
Circuit case, U.S. v. Fox, that said the act of producing
documents in response to a subpoena may communicate
incriminating facts in two ways: first, if the government
doesn't know where the incriminating files are, or second, if
turning them over would implicitly authenticate them.

Because the Justice Department believes it can link Boucher
with the files through another method, it's agreed not to
formally use the fact of his typing in the passphrase against
him. (The other method appears to be having the ICE agent
testify that certain images were on the laptop when viewed at
the border.)

Sessions wrote: Boucher's act of producing an unencrypted
version of the Z drive likewise is not necessary to
authenticate it. He has already admitted to possession of the
computer, and provided the government with access to the Z
drive. The government has submitted that it can link Boucher
with the files on his computer without making use of his
production of an unencrypted version of the Z drive, and that
it will not use his act of production as evidence of
authentication. 

In other cases, where alternative evidence is not available to the
government, and where government agents have not already had a look at
the contents, the facts (and hence perhaps the ruling) would be
different.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Judge orders defendant to decrypt PGP-protected laptop

2009-03-03 Thread Steven M. Bellovin
On Tue, 03 Mar 2009 13:53:50 -0500
Perry E. Metzger pe...@piermont.com wrote:

 
 Adam Fields cryptography23094...@aquick.org writes:
  Well, it should be clear that any such scheme necessarily will
  produce encrypted partitions with less storage capacity than one
  with only one set of cleartext. You can't magically store 2N bytes
  in an N byte drive -- something has to give. It should therefore
  be reasonably obvious from partition sizes that there is something
  hidden.
 
  I don't see how you could tell the difference between a virtual 40GB
  encrypted padded partition and 2 virtual 20GB ones.
 
 The judge doesn't need to know the difference to beyond any
 doubt. If the judge thinks you're holding out, you go to jail for
 contempt.
 
 Geeks expect, far too frequently, that courts operate like Turing
 machines, literally interpreting the laws and accepting the slightest
 legal hack unconditionally without human consideration of the impact
 of the interpretation. This is not remotely the case.
 
 I'll repeat: the law is not like a computer program. Courts operate on
 reasonableness standards and such, not on literal interpretation of
 the law. If it is obvious to you and me that a disk has multiple
 encrypted views, then you can't expect that a court will not be able
 to understand this and take appropriate action, like putting you in a
 cage.
 
Indeed.  Let me point folks at
http://www.freedom-to-tinker.com/blog/paul/being-acquitted-versus-being-searched-yanal
-- which was in fact written by a real lawyer, a former prosecutor who
is now a law professor.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Solving password problems one at a time, Re: The password-reset paradox

2009-03-02 Thread Steven M. Bellovin
On Sat, 21 Feb 2009 11:33:32 -0800
Ed Gerck edge...@nma.com wrote:

 I submit that the most important password problem is not that someone 
 may find it written somewhere. The most important password problem is 
 that people forget it. So, writing it down and taking the easy 
 precaution of not keeping it next to the computer solves the most
 important problem with not even a comparably significant downside.

Up to a point.  The most important password problem is very much
context-dependent.  I'm not going to forget the unlock password to my
laptop, because I use it many times/day.  I regularly forget my login
password to the CS department's servers because I use it so rarely --
as best I can tell, I haven't used it in at least 15 months, because I
use public key authentication for most functions.  They've installed
some new service that will require it, though, so I suppose I need to
learn it.

However -- if you're talking about garden-variety web passwords, you're
probably correct.  

For your last sentence, see my next response...

 Having automatic, secure, and self-managed password recovery and
 password reset (in case the password cannot be recovered) apps is
 also part of this solution.

Define "automatic" and "secure."  "Self-managed" is context-dependent.
It's true for generic web authentication; it most certainly is not for
more serious ones.  The generic recovery/reset mechanisms have their
own security issues -- how secure are the back-up authentication
systems?  In most cases, the answer is much less secure than the base
mechanism.
 
 I see the second most important problem in passwords to be that they 
 usually have low entropy -- ie, passwords are usually easily
 guessable or easy to find in a quick search.

So -- why does that matter?

We've become prisoners of dogma here.  In 1979, Bob Morris and Ken
Thompson showed that passwords were guessable.  In 1979, that was
really novel.  There was a lot of good work done in the next 15 years
on that problem -- Spaf's empirical observations, Klein's '90 paper on
improving password security, Lamport's algorithm that gave rise to
S/Key, my and Mike Merritt's EKE, many others.  Guess what -- we're not
living in that world now.  We have shadow password files on Unix
systems; we have Kerberos; we have SecurID; we have SSL which rules out
the network attacks and eavesdropping that EKE was intended to counter;
etc.  We also have web-based systems whose failure modes are not nearly
the same.  Why do we think that the solutions are the same?  There was
a marvelous paper at Hotsec '07 that I resent simply because the
authors got there before me; I had (somewhat after them) come to the
same conclusions: the defenses we've built up against password failure
since '79 don't fit the problems of today's world.  We have to recognize
the new problems before we can solve them.  (I *think* that the paper
is at
http://www.usenix.org/events/hotsec07/tech/full_papers/florencio/florencio.pdf
but I'm on an airplane now and can't check...)
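
For readers who haven't seen it, Lamport's scheme (the basis of S/Key, mentioned above) fits in a few lines. This is a toy sketch of the idea, not the S/Key specification; the hash choice and secret are illustrative:

```python
import hashlib

def h(data):
    """One application of the hash function (illustrative choice)."""
    return hashlib.sha256(data).digest()

def chain(secret, n):
    """Hash the secret n times: h^n(secret)."""
    x = secret
    for _ in range(n):
        x = h(x)
    return x

class LamportServer:
    """Stores only h^n(secret); each login consumes one link of the chain."""
    def __init__(self, final_value):
        self.stored = final_value

    def login(self, otp):
        if h(otp) == self.stored:
            self.stored = otp  # replaying this otp later will fail
            return True
        return False

secret = b"correct horse"  # illustrative user secret
server = LamportServer(chain(secret, 100))
assert server.login(chain(secret, 99))      # first one-time password
assert not server.login(chain(secret, 99))  # an eavesdropper's replay fails
assert server.login(chain(secret, 98))      # next password in the chain
```

Note the design point: the server never stores anything an eavesdropper could reuse, which is exactly the network-attack model that today's web-password failures no longer match.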

 The next two important problems in passwords are absence of mutual 
 authentication (anti-phishing)

Personally, I think this is the biggest problem when it comes to
phishing attacks.

 and absence of two-factor
 authentication.

What problem does two-factor solve?  I agree that it's helpful, but
until we know the threat we can't solve it.
 
 To solve these three problems, at the same time, we have been 
 experimenting since 2000 with a scheme where the Username/Password
 login is divided in two phases. In different applications in several
 countries over nine years, this has been tested with many hundreds of
 thousands of users and further improved. (you can also test it if you
 want). It has just recently been applied for TLS SMTP authentication
 where both the email address and the user's common name are also
 authenticated (as with X.509/PKI but without the certificates).
 
 This is how it works, both for the UI and the engine behind it.
 
 (UI in use since 2000, for web access control and authorization)
 After you enter a usercode in the first screen, you are presented
 with a second screen to enter your password. The usercode is a
 mnemonic 6-character code such as HB75RC (randomly generated, which
 you receive from the server upon registration). Your password is freely
 chosen by you upon registration. That second screen also has
 something that you and the correct server know but that you did not
 disclose in the first screen -- we can use a simple three-letter
 combination ABC, for example. You use this to visually authenticate
 the server above the SSL layer. A rogue server would not know this
 combination, which allays spoofing considerations -- if you do not
 see the correct three-letter combination, do not enter your password.
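
The two-phase flow described above can be modeled in a few lines. Every identifier, usercode, and value below is an invented placeholder for illustration; this is a sketch of the idea, not the deployed system (which would hash passwords, not keep a plaintext table):

```python
# Toy model of the two-phase login: phase 1 takes only the usercode and
# returns the user's pre-registered recognition string; the user enters
# the password (phase 2) only after verifying that string on screen.

USERS = {
    "HB75RC": {"recognition": "ABC", "password": "hunter2"},  # placeholders
}

def phase1(usercode):
    """Phase 1: return the recognition string to display, or None."""
    entry = USERS.get(usercode)
    return entry["recognition"] if entry else None

def phase2(usercode, password):
    """Phase 2: checked only after the user accepted the phase-1 string."""
    entry = USERS.get(usercode)
    return entry is not None and entry["password"] == password

# A rogue server fails at phase 1: it cannot display "ABC", so the user
# should refuse to type the password at all.
assert phase1("HB75RC") == "ABC"
assert phase2("HB75RC", "hunter2")
assert phase1("ZZZZZZ") is None  # unknown usercode: nothing to show
```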

As Peter Gutmann has pointed out, that has succeeded only because it
hasn't been seriously attacked.  Research results show that users are
very easily fooled by changes to the server.  In the scenario you
cite, all it 

Re: Security through kittens, was Solving password problems

2009-02-25 Thread Steven M. Bellovin
On Wed, 25 Feb 2009 10:04:40 -0800
Ray Dillinger b...@sonic.net wrote:

 On Wed, 2009-02-25 at 14:53 +, John Levine wrote:
 
  You're right, but it's not obvious to me how a site can tell an evil
  MITM proxy from a benign shared web cache.  The sequence of page
  accesses would be pretty similar.
 
 There is no such thing as a benign web cache for secure pages.
 If you detect something doing caching of secure pages, you need 
 to shut them off just as much as you need to shut off any other 
 MITM.

It's not caching such pages; it is acting as a TCP relay for the
requests, without access to the keys.  These are utterly necessary for
some firewall architectures, for example, and generally do not represent
a security threat beyond traffic analysis.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



stripping https from pages

2009-02-20 Thread Steven M. Bellovin
http://www.theregister.co.uk/2009/02/19/ssl_busting_demo/ -- we've
talked about this attack for quite a while; someone has now implemented
it.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: The password-reset paradox

2009-02-20 Thread Steven M. Bellovin
On Fri, 20 Feb 2009 02:36:17 +1300
pgut...@cs.auckland.ac.nz (Peter Gutmann) wrote:

 There are a variety of password cost-estimation surveys floating
 around that put the cost of password resets at $100-200 per user per
 year, depending on which survey you use (Gartner says so, it must be
 true).
 
 You can get OTP tokens for as little as $5.  Barely anyone uses them.
 
 Can anyone explain why, if the cost of password resets is so high,
 banks and the like don't want to spend $5 (plus one-off background
 infrastructure costs and whatnot) on a token like this?
 
Because then you need PIN resets, lost-token handling, and "my token
doesn't work and I'm on a trip and my boss will kill me if I don't get
this done" resets.  I've personally had to deal with two of the three,
and it was just as insecure as password resets.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



NSA offering 'billions' for Skype eavesdrop solution

2009-02-13 Thread Steven M. Bellovin
Counter Terror Expo: News of a possible viable business model for P2P
VoIP network Skype emerged today, at the Counter Terror Expo in London.
An industry source disclosed that America's supersecret National
Security Agency (NSA) is offering "billions" to any firm which can
offer reliable eavesdropping on Skype IM and voice traffic.



http://www.theregister.co.uk/2009/02/12/nsa_offers_billions_for_skype_pwnage/


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Property Rights in Keys

2009-02-12 Thread Steven M. Bellovin
I was reading a CPS from GeoTrust -- 91 pages of legalese! -- and came
across the following statement:

Without limiting the generality of the foregoing, GeoTrust's
root public keys and the root Certificates containing them,
including all self-signed certificates, are the property of
GeoTrust.  GeoTrust licenses software and hardware
manufacturers to reproduce such root Certificates to place
copies in trustworthy hardware devices or software.

Under what legal theory might a certificate -- or a key! -- be
considered property?  There wouldn't seem to be enough creativity in
a certificate, let alone a key, to qualify for copyright protection.

I won't even comment on the rest of the CPS, not even such gems as
"Subscribers warrant that ... their private key is protected and that
no unauthorized person has ever had access to the Subscriber's private
key."  And just how can I tell that?


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Proof of Work - atmospheric carbon

2009-01-31 Thread Steven M. Bellovin
On Fri, 30 Jan 2009 11:40:12 -0700
Thomas Coppi thisnuke...@gmail.com wrote:

 On Wed, Jan 28, 2009 at 2:19 PM, John Levine jo...@iecc.com wrote:
  Indeed.  And don't forget that through the magic of botnets, the bad
  guys have vastly more compute power available than the good guys.
 
  Just out of curiosity, does anyone happen to know of any documented
 examples of a botnet being used for something more interesting than
 just sending spam or DDoS?

I asked Rob Thomas of Team Cymru this question (he and they study the
underground).  Here is his answer, posted with permission:


Botnets are routinely used as:

1. Proxies (IRC, HTTP & HTTPS)

2. To recover financial credentials, e.g. paypal, citibank, et al.
   This was the original purpose of the PSNIFF code in some of the early
bots.

Here's a code snippet from the now venerable
rBot_rxbot_041504-dcom-priv-OPTIX_MASTERPASSWORD dating back several
years:

[ ... ]

// Scaled down distributed network raw packet sniffer (ala Carnivore)
//
// When activated, watches for botnet login strings, and
// reports them when found.
//
// The bot's NIC must be configured for promiscuous mode (receive
// all). Chances are this is already done; if not, you can enable it
// by passing the SIO_RCVALL* DWORD option with a value of 1, to
// disable promiscuous mode pass with value 0.
//
// This won't work on Win9x bots since SIO_RCVALL needs raw
// socket support which only WinNT+ has.

[ ... ]

PSWORDS pswords[]={
{":.login",BOTP},
{":,login",BOTP},
{":!login",BOTP},
[ ... ]
{"paypal",HTTPP},
{"PAYPAL",HTTPP},
{"paypal.com",HTTPP},
{"PAYPAL.COM",HTTPP},
{"Set-Cookie:",HTTPP},
{NULL,0}
};

[ ... ]


3. Remember they're called bots now, so anything is possible.  Screen
captures are becoming increasingly popular.



full-disk encryption standards released

2009-01-28 Thread Steven M. Bellovin
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9126869&intsrc=hm_ts_head


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Obama's secure PDA

2009-01-27 Thread Steven M. Bellovin
On Mon, 26 Jan 2009 02:49:31 -0500
Ivan Krstić krs...@solarsail.hcs.harvard.edu wrote:

 Finally, any idea why the Sectéra is certified up to Top Secret for  
 voice but only up to Secret for e-mail? (That is, what are the  
 differing requirements?)
 
I actually explained (my take on) that question to my class last week.
Quite simply, voice offers one service -- voice.  Data offers many
services, and hence many venues for data-driven attacks: email (which
includes many MIME types) and probably clicking on URLs, web (which
includes HMTL, gif, jpeg, perhaps png, and almost certainly
Javascript), and perhaps data files including pdf, Word, Powerpoint,
and Excel.  Any one of those data formats is far more complex than even
compressed voice; the union of them makes me surprised it can handle
even Secret data... Note especially that HTML involves IFRAMEs and
third-party images, which means inherent cross-domain issues.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-20 Thread Steven M. Bellovin
On Mon, 19 Jan 2009 10:45:55 +0100
Bodo Moeller bmoel...@acm.org wrote:

 On Sat, Jan 17, 2009 at 5:24 PM, Steven M. Bellovin
 s...@cs.columbia.edu wrote:
 
  I've mentioned it before, but I'll point to the paper Eric Rescorla
  wrote a few years ago:
  http://www.cs.columbia.edu/~smb/papers/new-hash.ps or
  http://www.cs.columbia.edu/~smb/papers/new-hash.pdf .  The bottom
  line: if you're running a public-facing web server, you *can't*
  offer a SHA-2 certificate because you have no way of knowing if the
  client supports SHA-2. Fixing that requires a TLS fix; see the
  above timeline for that.
 
 The RFC does exist (TLS 1.2 in RFC 5246 from August 2008 makes SHA-256
 mandatory), so you can send a SHA-256 certificate to clients that
 indicate they support TLS 1.2 or later.  You'd still need some other
 certificate for interoperability with clients that don't support
 SHA-256, of course, and you'd be sending that one to clients that do
 support SHA-256 but not TLS 1.2.  (So you'd fall back to SHA-1, which
 is not really a problem when CAs make sure to use the hash algorithm
 in a way that doesn't rely on hash collisions being hard to find,
 which probably is a good idea for *any* hash algorithm.)
 
So -- who supports TLS 1.2?  (Btw -- note the date of that RFC: August
2008.  That's almost exactly 3 years after ekr and I published our
paper.  Since ekr is co-chair of the TLS working group, we can assume
that that group was aware of the problem.  See what Peter and I said
about how long it takes to get any changes deployed.)
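[Editorial note: a minimal sketch of the server-side dispatch Bodo describes -- serve the SHA-256 certificate only to clients that advertise TLS 1.2 or later, and fall back to SHA-1 otherwise. The certificate filenames and the version-tuple convention are assumptions for illustration, not any real server's API.]

```python
# Hypothetical certificate selection following Bodo's description: a client
# advertising TLS 1.2 or later (RFC 5246) can be assumed to support SHA-256,
# so it gets the SHA-2 certificate; everyone else gets the SHA-1 fallback.
# Filenames are placeholders.
SHA256_CERT = "server-sha256.pem"  # assumed filename
SHA1_CERT = "server-sha1.pem"      # assumed filename

def pick_certificate(client_version: tuple) -> str:
    # client_version is (major, minor); TLS 1.2 is (3, 3) on the wire.
    return SHA256_CERT if client_version >= (3, 3) else SHA1_CERT
```

Note the asymmetry Bodo points out: a TLS 1.1 client that happens to support SHA-256 still gets the SHA-1 certificate, because the handshake gives the server no way to know better.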



Re: MD5 considered harmful today, SHA-1 considered harmful tomorrow

2009-01-17 Thread Steven M. Bellovin
On Mon, 12 Jan 2009 16:05:08 +1300
pgut...@cs.auckland.ac.nz (Peter Gutmann) wrote:

 Weger, B.M.M. de b.m.m.d.we...@tue.nl writes:
 
  Bottom line, anyone fielding a SHA-2 cert today is not going
  to be happy with their costly pile of bits.
 
 Will this situation have changed by the end of 2010 (that's next
 year, by the way), when everybody who takes NIST seriously will have
 to switch to SHA-2?
 
 I have a general outline of a timeline for adoption of new crypto
 mechanisms (e.g. OAEP, PSS, that sort of thing, and not specifically
 algorithms) in my Crypto Gardening Guide and Planting Tips,
 http://www.cs.auckland.ac.nz/~pgut001/pubs/crypto_guide.txt, see
 Question J about 2/3 of the way down.  It's not meant to be
 definitively accurate for all cases but was created as a rough
 guideline for people proposing to introduce new crypto mechanisms to
 give an idea of how long they should expect to wait to see them
 adopted.
 
My analysis is similar to Peter's: 2-3 years for an RFC, 2-3 years for
design/code/test, 2 years average delay for the next major release of
Windows which will include it, 5 years for most of the older machines to
die off.  

I've mentioned it before, but I'll point to the paper Eric Rescorla
wrote a few years ago:
http://www.cs.columbia.edu/~smb/papers/new-hash.ps or
http://www.cs.columbia.edu/~smb/papers/new-hash.pdf .  The bottom line:
if you're running a public-facing web server, you *can't* offer a SHA-2
certificate because you have no way of knowing if the client supports
SHA-2. Fixing that requires a TLS fix; see the above timeline for that.

-- 
--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: feds try to argue touch tone content needs no wiretap order

2009-01-11 Thread Steven M. Bellovin
On Fri, 09 Jan 2009 20:12:16 -0500
Perry E. Metzger pe...@piermont.com wrote:

 
 Just about everyone knows that the FBI must obtain a formal
 wiretap order from a judge to listen in on your phone calls
 legally. But the U.S. Department of Justice believes that police
 don't need one if they want to eavesdrop on what touch tones you
 press during the call.
 
 Those touch tones can be innocuous (press 0 for an operator). Or
 they can include personal information including bank account
 numbers, passwords, prescription identification numbers, Social
 Security numbers, credit card numbers, and so on--all of which
 most of us would reasonably view as private and confidential.
 
 That brings us to New York state, where federal prosecutors have
 been arguing that no wiretap order is necessary. They insist that
 touch tones cannot be content, a term of art that triggers legal
 protections under the Fourth Amendment.
 
 http://news.cnet.com/8301-13578_3-10138074-38.html?part=rss&tag=feed&subj=News-PoliticsandLaw
 
It's very much worth reading the whole article; the author, Declan
McCullagh, does a good job with the historical background.  I'll add
one more historical tidbit: in the late 1980s, New York courts outlawed
pen register taps, because the same equipment was used to detect touch
tones as was used to record full content, and thus there was no
protection against law enforcement agents exceeding the court's
authority.

If I may wax US-legal for a moment...  According to a (U.S.) Supreme
Court decision (Katz v U.S. 389 US 347 (1967)), phone call content is
private, which therefore brings into play the full protection of the
Fourth Amendment -- judges, warrants, probable cause, etc.  However,
under a later ruling (Smith v Maryland 442 US 735 (1979)), the numbers
you call are information that is given to the phone company, and
hence is no longer private.  Accordingly, the Fourth Amendment does not
apply, and a much easier-to-get court order is all that's needed,
according to statute.  (I personally regard the reasoning in Smith as
convoluted and tortuous, but there have been several other, similar
rulings: data you voluntarily give to another party is no longer
considered private, so the Fourth Amendment doesn't apply.)

The legitimate (under current law) problem that law enforcement would
like to solve involves things like prepaid calling cards.  Suppose I
use one to call a terrorist friend, via some telco.  The number of the
calling card provider is available to law enforcement, under a pen
register order, per Smith and 18 USC 3121, the relevant legislation.
The telco will help law enforcement get that number.  I next dial my
account number; this is in effect a conversation between me and the
calling card provider.  Getting that number requires yet a different
kind of court order, I believe, but I'll skip that one for now.  I next
dial the number of my terrorist friend.  That's the number they now
want -- and per Smith, they're entitled to it, since it's a dialed
number via a telecommunications provider.  There is no doubt they could
go to that provider and ask for such a number.  However, they want to
ask the telco for it -- but the telco doesn't know what is a phone
number, what is an account number, what is a password for an online
bank account, and what is a password for an adult conference bridge.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



FBI code-cracking contest

2008-12-30 Thread Steven M. Bellovin
http://www.networkworld.com/community/node/36704


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: [saag] Further MD5 breaks: Creating a rogue CA certificate

2008-12-30 Thread Steven M. Bellovin


Begin forwarded message:

Date: Tue, 30 Dec 2008 11:05:28 -0500
From: Russ Housley hous...@vigilsec.com
To: ietf-p...@imc.org, ietf-sm...@imc.org, s...@ietf.org, c...@irtf.org
Subject: [saag] Further MD5 breaks: Creating a rogue CA certificate


http://www.win.tue.nl/hashclash/rogue-ca/

MD5 considered harmful today
Creating a rogue CA certificate

December 30, 2008

Alexander Sotirov, Marc Stevens,
Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, Benne de
Weger

We have identified a vulnerability in the Internet Public Key 
Infrastructure (PKI) used to issue digital certificates for secure 
websites. As a proof of concept we executed a practical attack 
scenario and successfully created a rogue Certification Authority 
(CA) certificate trusted by all common web browsers. This certificate 
allows us to impersonate any website on the Internet, including 
banking and e-commerce sites secured using the HTTPS protocol.

Our attack takes advantage of a weakness in the MD5 cryptographic 
hash function that allows the construction of different messages with 
the same MD5 hash. This is known as an MD5 collision. Previous work 
on MD5 collisions between 2004 and 2007 showed that the use of this 
hash function in digital signatures can lead to theoretical attack 
scenarios. Our current work proves that at least one attack scenario 
can be exploited in practice, thus exposing the security 
infrastructure of the web to realistic threats.

___
saag mailing list
s...@ietf.org
https://www.ietf.org/mailman/listinfo/saag




--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: two bits of light holiday reading

2008-12-27 Thread Steven M. Bellovin
On Fri, 26 Dec 2008 01:35:43 -0500
Ivan Krstić krs...@solarsail.hcs.harvard.edu wrote:


 2.
 
 The DC-based Center for Strategic and International Studies recently  
 released a report titled 'Securing Cyberspace for the 44th
 Presidency' written by a number of influential authors:
 
 http://www.csis.org/media/csis/pubs/081208_securingcyberspace_44.pdf
 
 Of most interest to this list, the report suggests going on the  
 offensive with regard to identity management, proposing to restrict  
 bonuses and awards of US federal agencies not using strong digital  
 credentials for employees in sufficient numbers (logical pp. 61-65).  
 Maybe, uh, it'll work this time around?

I disagree with a number of recommendations in that report; some of the
ones about identity management are high on my list.  See
http://www.cs.columbia.edu/~smb/blog/2008-12/2008-12-15.html for my
comments.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: CPRNGs are still an issue.

2008-12-17 Thread Steven M. Bellovin
On Wed, 17 Dec 2008 13:02:58 -0500
Jerry Leichter leich...@lrw.com wrote:

 On Dec 16, 2008, at 4:22 PM, Charles Jackson wrote:
  I probably should not be commenting, not being a real device guy.   
  But,
  variations in temperature and time could be expected to change SSD  
  timing.
  Temperature changes will probably change the power supply voltages  
  and shift
  some of the thresholds in the devices.  Oscillators will drift
  with changes
  in temperature and voltage.  Battery voltages tend to go down over  
  time and
  up with temperature.  In addition, in some systems the clock  
  frequency is
  purposely swept over something like a 0.1% range in order to
  smooth out the
  RF emissions from the device.  (This can give a 20 or 30 dB  
  reduction in
  peak emissions at a given frequency.  There is, of course, no
  change in
  total emissions.)
 
  Combine all of these factors, and one can envision the SSD cycles  
  taking
  varying numbers of system clock ticks and consequently the low
  order bits of
  a counter driven by a system clock would be random.  However,
  one would
  have to test this kind of entropy source carefully and would have
  to keep
  track of any changes in the manufacturing processes for both the
  SSD and the
  processor chip.
 
  Is there anyone out there who knows about device timing that can
  say more?
 I'm not a device guy either, but I've had reason to learn a bit more  
 about SSD's than is widely understood.
 
 SSD's are complicated devices.  Deep down, the characteristics of
 the underlying storage are very, very different from those of a
 disk. Layers of sophisticated hardware/firmware intervene to make a
 solid-state memory look like a disk.  To take a very simple
 example:  The smallest unit you can read from/write to solid state
 memory is several times the size of a disk block.  So to allow
 software to continue to read and write individual disk blocks, you
 have to do a layer of buffering and blocking/deblocking.  A much more
 obscure one is that the throughput of the memory is maximum when you
 are doing either all reads or all writes; anywhere in between slows
 it down.  So higher-performance SSD's play games with what is
 essentially double buffering:  Do all reads against a segment of
 memory, while sending writes to a separate copy as well as a
 look-aside buffer to satisfy reads to data that was recently
 written.  Switch the roles of the two segments at some point.
 
But what is the *physical basis* for the randomness?
http://www.springerlink.com/content/gkbmm9nuy07kerww/ (full text
at http://world.std.com/~dtd/random/forward.pdf) explains why hard drive
timings are considered random; are there comparable phenomena for SSDs?
(Of course -- that's a '94 paper; hard drive technology has changed a
lot.  Would they still get the same results?)
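[Editorial note: the double-buffering scheme Jerry describes can be sketched as a toy model. The class and method names are invented for illustration; real SSD firmware is far more involved.]

```python
# Toy model of the read/write segment-switching Jerry describes: reads are
# served from an active segment, writes go to a shadow segment plus a
# look-aside buffer so recently written blocks remain readable, and
# switch() swaps the roles of the two segments.
class DoubleBufferedStore:
    def __init__(self):
        self.active = {}     # segment currently serving reads
        self.shadow = {}     # segment absorbing writes
        self.lookaside = {}  # recent writes, readable before the switch

    def write(self, block, data):
        self.shadow[block] = data
        self.lookaside[block] = data

    def read(self, block):
        # Recently written data is served from the look-aside buffer,
        # so a read never has to touch the write-side segment.
        if block in self.lookaside:
            return self.lookaside[block]
        return self.active.get(block)

    def switch(self):
        # Swap roles: writes accumulated in the shadow become readable.
        self.active.update(self.shadow)
        self.shadow = {}
        self.lookaside = {}
```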


--Steve Bellovin, http://www.cs.columbia.edu/~smb



HavenCo and Sealand

2008-11-26 Thread Steven M. Bellovin
Slightly off-topic, but a cause celebre on cypherpunks some years ago
-- but HavenCo, which ran a datacenter on the nation of Sealand, is
no longer operating there:
http://www.theregister.co.uk/2008/11/25/havenco/ (pointer via Spaf's
blog).


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Comment Period for FIPS 186-3: Digital Signature Standard

2008-11-12 Thread Steven M. Bellovin
From: Sara Caswell [EMAIL PROTECTED]
To: undisclosed-recipients:;
Subject: Comment Period for FIPS 186-3: Digital Signature Standard
Date: Wed, 12 Nov 2008 14:52:17 -0500
User-Agent: Thunderbird 2.0.0.14 (Windows/20080421)

As stated in the Federal Register of November 12, 2008, NIST requests
final comments on FIPS 186-3, the proposed revision of FIPS 186-2, the
Digital Signature Standard. The draft defines methods for digital
signature generation that can be used for the protection of messages,
and for the verification and validation of those digital signatures
using DSA, RSA and ECDSA.

Please submit comments to [EMAIL PROTECTED] with Comments on Draft
186-3 in the subject line. The comment period closes on December 12,
2008.



NIST Special Publication 800-108 Recommendation for Key Derivation Using Pseudorandom Functions

2008-11-08 Thread Steven M. Bellovin
From: Sara Caswell [EMAIL PROTECTED]
To: undisclosed-recipients:;
Subject: NIST Special Publication 800-108 Recommendation for Key
   Derivation Using Pseudorandom Functions
Date: Fri, 07 Nov 2008 08:57:40 -0500

 Dear Colleagues:
NIST Special Publication 800-108 Recommendation for Key Derivation
 Using Pseudorandom Functions is published at
 http://csrc.nist.gov/publications/nistpubs/800-108/sp800-108.pdf

Thank you very much for your valuable comments during public comments
period.



Rubber-hose cryptanalysis?

2008-10-27 Thread Steven M. Bellovin
http://news.cnet.com/8301-13739_3-10069776-46.html?tag=mncol


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Cryptologic History Symposium: Call for Papers

2008-10-27 Thread Steven M. Bellovin
Forwarded with permission.


---
From: Sieg, Kent G [EMAIL PROTECTED]
Subject: Symposium Call for Papers
Date: Mon, 27 Oct 2008 10:23:50 -0400

Just sending notice of our upcoming Symposium, especially if you can
present or know of a colleague who would like to do so.  Dr. Kent Sieg  

The Center for Cryptologic History announces a call for papers for its
biennial Symposium on Cryptologic History.  The Symposium will occur on
15-16 October 2009 in Laurel, Maryland, at the Johns-Hopkins Applied
Physics Laboratory located in the Baltimore-Washington corridor.  The
theme for the Symposium will be Global Perspectives on Cryptologic
History.  We will consider all proposals relating to any aspect of
cryptologic history.  The deadline for submission of proposals, to
include a minimum two-page topic prospectus, a brief source list, and a
biography, is 10 January 2009.  Selected presenters will receive
notification by 1 March 2009.  For further information, contact Dr.
Kent Sieg, Symposium coordinator, at 301-688-2336 or [EMAIL PROTECTED]

--Steve Bellovin, http://www.cs.columbia.edu/~smb



unbreakable quantum crypto cracked by a laser

2008-10-24 Thread Steven M. Bellovin
http://technology.newscientist.com/channel/tech/dn14866-laser-cracks-unbreakable-quantum-communications.html?feedId=online-news_rss20

Not surprisingly, it's attacking the implementation, not the physics --
but of course we use implementations to communicate, rather than
theories.



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Using GPUs to crack crypto

2008-10-24 Thread Steven M. Bellovin
Elcomsoft has a product that uses GPUs to do password-cracking on a
variety of media.  They claim a speed-up of up to 67x, depending on the
application being attacked.

http://www.elcomsoft.com/edpr.html?r1=pr&r2=wpa

(This has led to a variety of stories (see, for example,
http://www.scmagazineuk.com/WiFi-is-no-longer-a-viable-secure-connection/article/119294/)
claiming that WPA is dead. The correct answer, though, is that
passwords are dead, especially bad ones.)


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Fake popup study

2008-09-24 Thread Steven M. Bellovin
On Wed, 24 Sep 2008 20:43:53 -0400
Perry E. Metzger [EMAIL PROTECTED] wrote:

 
 Steven M. Bellovin [EMAIL PROTECTED] writes:
  Human factors haven't received nearly enough attention, and as
  long as human factors failings are dismissed as the fault of
  idiot users, they never will.
  
  Strong agreement.
 
 I don't disagree that much more needs to be done on human factors. I
 just don't see it as a panacea. 

There are no panaceas in this business.  As I told my class yesterday,
if they learn nothing else they should remember that security is a
systems property, and everything interacts.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: once more, with feeling.

2008-09-21 Thread Steven M. Bellovin
On Thu, 18 Sep 2008 17:18:00 +1200
[EMAIL PROTECTED] (Peter Gutmann) wrote:

 - Use TLS-PSK, which performs mutual auth of client and server
 without ever communicating the password.  This vastly complicated
 phishing since the phisher has to prove advance knowledge of your
 credentials in order to obtain your credentials (there are a pile of
 nitpicks that people will come up with for this, I can send you a
 link to a longer writeup that addresses them if you insist, I just
 don't want to type in pages of stuff here).
 
Once upon a time, this would have been possible, I think.  Today,
though, the problem is the user entering their key in a box that is (a)
not remotely forgeable by a web site that isn't using the browser's
TLS-PSK mechanism; and (b) will *always* be recognized by users, even
dumb ones.  Today, sites want *pretty* login screens, with *friendly*
ways to recover your (or Palin's) password, and not just generic grey
boxes.  Then imagine the phishing page that displays an artistic but
purely imaginary login screen, with a message about NEW!  Better
navigation on our login page!

If this had been done in the beginning, before users -- and web site
designers, and browser vendors -- were mistrained, it might have
worked.  Now, though?  I'm skeptical.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Origin of the nomenclature red-black?

2008-08-30 Thread Steven M. Bellovin
Does anyone know where and when the use of red (for inside, plaintext
networks) and black (for outside, encrypted networks) in crypto gear
started?  I'm
especially intrigued by the use of red, since in other military
nomenclature (in the US) blue is the usual color for US and friendly
forces and red is (for obvious geopolitical reasons) the enemy.

One hypothesis I've come up with is that the color was chosen by the
British from the so-called all-red route -- the web of underseas
telegraph links that touched only Britain and its colonies.  It was
named for the usual map color of the time (~100 years ago) for the
British empire.  The all-red route gave the British protection against
(some) foreign eavesdropping; it was also useful offensively, since the
1920 Official Secrets Act contained a provision requiring cable
companies to turn over copies of all telegrams to the government.
(Source: The Invisible Weapon: Telecommunications and International
Politics, 1851-1945, by Daniel R. Headrick, Oxford University Press,
1991.)


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: road toll transponder hacked

2008-08-28 Thread Steven M. Bellovin
On Thu, 28 Aug 2008 10:49:20 +0200
Eugen Leitl [EMAIL PROTECTED] wrote:

 On Wed, Aug 27, 2008 at 12:16:23PM -0400, Steven M. Bellovin wrote:
 
  Finally, the transponders may not matter much longer; OCR on license
  plates is getting that good.  As has already been mentioned, the 407
  ETR road in Toronto already relies on this to some extent; it won't
  be too much longer before the human assist is all but unneeded.
 
 http://en.wikipedia.org/wiki/Toll_Collect is in operation in entire
 Germany. It does OCR on all license plates (also used for police
 purposes in realtime, despite initial vigorous denial) but currently 
 is only used for truck toll.
 
How well does that actually work?  There were many articles in RISKS
Digest about problems with the early deployment.

And -- turning the topic back to crypto -- is there a cryptographic
solution to license plates?  Put another way, what are the legitimate
needs of various parties, and can these be satisfied in a
privacy-preserving way?  (Note: I do not regard put a digital cash
wallet in the transponder as a solution to the license plate problem,
since it doesn't handle the problem of toll evaders, people who aren't
members of the system, and many other things that license plates are
used for.)


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: road toll transponder hacked

2008-08-28 Thread Steven M. Bellovin
On Thu, 28 Aug 2008 17:55:57 +0200
Stefan Kelm [EMAIL PROTECTED] wrote:

  http://en.wikipedia.org/wiki/Toll_Collect is in operation in entire
  Germany. It does OCR on all license plates (also used for police
  purposes in realtime, despite initial vigorous denial) but
  currently is only used for truck toll.
 
  How well does that actually work?  There were many articles in RISKS
  Digest about problems with the early deployment.
 
 That's true wrt to early deployment. Given that the Toll Collect
 system has been up and running since January 2005 it (technically)
 runs surprisingly well. They have improved tremendously and are
 likely to sell their technology to other european countries.
 
I confess that from a privacy perspective, I'd prefer if it didn't work
that well...

Thanks.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Decimal encryption

2008-08-27 Thread Steven M. Bellovin
On Wed, 27 Aug 2008 17:05:44 +0200
Philipp Gühring [EMAIL PROTECTED] wrote:

 Hi,
 
 I am searching for symmetric encryption algorithms for decimal
 strings.
 
 Let's say we have various 40-digit decimal numbers:
 2349823966232362361233845734628834823823
 3250920019325023523623692235235728239462
 0198230198519248209721383748374928601923
 
 As far as I calculated, a decimal digit has the equivalent of about
 3.3219 bits, so with 40 digits, we have about 132.877 bits.
 
 Now I would like to encrypt those numbers in a way that the result is
 a decimal number again (that's one of the basic rules of symmetric
 encryption algorithms as far as I remember).
 
 Since 132.877 bits is similar to 128-bit encryption (like e.g.
 AES), I would like to use an algorithm with a somewhat comparable
 strength to AES. But the problem is that I have 132.877 bits, not 128
 bits. And I can't cut it off or extend it, since the result has to
 be a 40-digit decimal number again.
 
 Does anyone know an algorithm that has reasonable strength and is
 able to operate on non-binary data? Preferably on any chosen
 number-base?
 
Do you want a stream cipher or a block cipher?  For the former, it's
easy.  Use something like rc4, which produces a sequence of keystream
bytes.  Retrieve the low-order N bits from each key stream byte, where N
is large enough for the base you're using.  If the value is greater
than or equal to the base you're using, discard that byte and try
again.  For your example, you'd use the low-order 4 bits, but discard
any bytes whose value is >= 10.  Add this value, discarding the carry,
to the digit to be encrypted.

You're running RC4 at 5/8 efficiency; unless you have a *lot* of data,
that almost certainly doesn't matter.
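[Editorial note: the scheme described above can be sketched as follows. RC4 is implemented inline since it is not in the Python standard library; this is illustrative only -- RC4 is deprecated, and key/nonce management is omitted.]

```python
# Sketch of the stream-cipher approach described above: take RC4 keystream
# bytes, keep the low 4 bits, reject values >= 10 (rejection sampling keeps
# the digits uniform, at the 5/8 efficiency noted), and add digit-wise mod 10.

def rc4_keystream(key: bytes):
    # Standard RC4 key scheduling followed by the keystream generator.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def digit_stream(key: bytes, base: int = 10):
    # Low 4 bits of each keystream byte; discard values >= base.
    for b in rc4_keystream(key):
        v = b & 0x0F
        if v < base:
            yield v

def encrypt(plain: str, key: bytes) -> str:
    ks = digit_stream(key)
    return "".join(str((int(d) + next(ks)) % 10) for d in plain)

def decrypt(cipher: str, key: bytes) -> str:
    # Same keystream; subtract instead of add.
    ks = digit_stream(key)
    return "".join(str((int(d) - next(ks)) % 10) for d in cipher)
```

The ciphertext is a decimal string of the same length as the plaintext, and decrypting with the same key recovers the original number.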


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: road toll transponder hacked

2008-08-27 Thread Steven M. Bellovin
On Wed, 27 Aug 2008 07:10:51 -0400
[EMAIL PROTECTED] wrote:

 
 Bill Frantz writes, in part:
 -+--
  | In the San Francisco Bay Area, they are using the transponder codes
  | to measure how fast traffic is moving from place to place. They
  | post the times to various destinations on the electric signs when
  | there are no Amber alerts or other more important things to
  | display. It is quite convenient, and they promise they don't use it
  | to track people's trips.
  |
 
 
 Look for general tracking to appear everywhere.
 Fast declining gasoline tax revenues will be
 replaced with per-mile usage fees, i.e., every
 major road becomes a toll road.  Most likely
 first in will be California and/or Oregon.
 
 The relationship to this list may then be thin
 excepting that the collection and handling of
 such data remains of substantial interest.  Of
 course, everyone who carries a cell phone has
 already decided that convenience trumps security,
 at least the kind of security that says they
 can't misuse what they ain't got.
 
There's a limit to how far they can go with that, because of the fear
of people abandoning the transponders.  For example -- they absolutely
will not use it for automated speeding tickets on, say, the NJ
Turnpike, because if they did people would stop using their EZPasses.
Given what a high percentage of drivers use them, especially at rush
hour, they make a significant improvement in throughput and safety at
toll plazas.  On congested roads, throughput is *extremely* important.

As for usage-based driving -- the first question is the political will
to do so.  In NYC, there's been tremendous resistance to things like
tolls over the East River bridges or congestion charges for driving
into much of Manhattan during the business day -- the Mayor tried very
hard, but was unable to push it through the state legislature.  That
said, I've seen some papers on how use of these transponders has
desensitized people towards the actual tolls they pay, and hence to
toll increases.

Finally, the transponders may not matter much longer; OCR on license
plates is getting that good.  As has already been mentioned, the 407
ETR road in Toronto already relies on this to some extent; it won't be
too much longer before the human assist is all but unneeded.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Decimal encryption

2008-08-27 Thread Steven M. Bellovin
On Wed, 27 Aug 2008 09:34:15 -0700
Greg Rose [EMAIL PROTECTED] wrote:
 
 So, you don't have a 133-bit block cipher lying around? No worries,
 I'll sell you one ;-). 

Also see Debra Cook's PhD dissertation on Elastic Block Ciphers at
http://www1.cs.columbia.edu/~dcook/thesis_ab.shtml



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Cube cryptanalysis?

2008-08-19 Thread Steven M. Bellovin
Greg, assorted folks noted, way back when, that Skipjack looked a lot
like a stream cipher.  Might it be vulnerable?


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: NIST Documents Available for Review

2008-08-18 Thread Steven M. Bellovin


Begin forwarded message:

Date: Mon, 18 Aug 2008 10:56:16 -0400
From: Sara Caswell [EMAIL PROTECTED]
To: undisclosed-recipients:;
Subject: NIST Documents Available for Review


NIST revised the first drafts of Special Publication(SP) 800-106,
Randomized Hashing for Digital Signatures, and SP 800-107,
Recommendation for Applications Using Approved Hash Algorithms after
receiving great comments from many public and private individuals and
organizations. The second drafts of these two SPs have been posted at
http://csrc.nist.gov/publications/PubsDrafts.html. The deadlines for
public comments and the point-of-contact are listed with the documents. 

NIST also would like to announce that FIPS 198-1 has already been
approved and it is posted at
http://csrc.nist.gov/publications/fips/fips198-1/FIPS-198-1_final.pdf.





--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Judge approves TRO to stop DEFCON presentation

2008-08-10 Thread Steven M. Bellovin
On Sat, 09 Aug 2008 19:38:45 -0400
Ivan Krstić [EMAIL PROTECTED] wrote:

 On Sat, 09 Aug 2008 17:11:11 -0400, Perry E. Metzger
 [EMAIL PROTECTED] wrote:
  Las Vegas - Three students at the Massachusetts Institute of
  Technology (MIT) were ordered this morning by a federal court
  judge to cancel their scheduled presentation about
  vulnerabilities in Boston's transit fare payment system, violating
  their First Amendment right to discuss their important research.
 
 http://www-tech.mit.edu/V128/N30/subway/Defcon_Presentation.pdf
 
And the vulnerability assessment they prepared -- filed by the MBTA in
court, and hence a matter of public record -- is at
http://blog.wired.com/27bstroke6/files/vulnerability_assessment_of_the_mtba_system.pdf


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: FIPS 198-1 announcement

2008-07-30 Thread Steven M. Bellovin


Begin forwarded message:

Date: Wed, 30 Jul 2008 12:36:36 -0400
From: Sara Caswell [EMAIL PROTECTED]
To: undisclosed-recipients:;
Subject: FIPS 198-1 announcement


The National Institute of Standards and Technology (NIST) is pleased to 
announce approval of Federal Information Processing Standard(FIPS) 
Publication 198-1, The Keyed-Hash Message Authentication Code (HMAC), a 
revision of FIPS 198. The Federal Register Notice (FRN) of the approval 
is available here. The FIPS specifies a mechanism for message 
authentication using cryptographic hash functions in Federal
information systems.
 

URL to the Federal Register Notice:  
http://csrc.nist.gov/publications/fips/fips198-1/FIPS198-1_FRN.pdf

URL to the FIPS Publication 198-1:   
http://csrc.nist.gov/publications/PubsFIPS.html#FIPS%20198-1

 





--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: how to check if your ISP's DNS servers are safe

2008-07-23 Thread Steven M. Bellovin
On Tue, 22 Jul 2008 10:21:14 -0400
Perry E. Metzger [EMAIL PROTECTED] wrote:

 
 Niels Provos has a web page up with some javascript that automatically
 checks if your DNS caching server has been properly patched or not.
 
 http://www.provos.org/index.php?/pages/dnstest.html
 
 It is worth telling people to try.
 
Those who prefer command lines can try 

dig +short porttest.dns-oarc.net TXT



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Kaminsky finds DNS exploit

2008-07-14 Thread Steven M. Bellovin
On Mon, 14 Jul 2008 16:27:58 +0200
Florian Weimer [EMAIL PROTECTED] wrote:
 
 On top of that, some operators decided not to offer TCP service at
 all.

Right.  There's a common misconception, on both security and network
operator mailing lists, that DNS servers use TCP only for zone
transfers, and that all such connection requests should be blocked.
See, for example, the NANOG thread starting at
http://mailman.nanog.org/pipermail/nanog/2008-June/001240.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Kaminsky finds DNS exploit

2008-07-09 Thread Steven M. Bellovin
On Wed, 09 Jul 2008 11:22:58 +0530
Udhay Shankar N [EMAIL PROTECTED] wrote:

 I think Dan Kaminsky is on this list. Any other tidbits you can add 
 prior to Black Hat?
 
 Udhay
 
 http://www.liquidmatrix.org/blog/2008/07/08/kaminsky-breaks-dns/
 
I'm curious about the details of the attack.  Paul Vixie published the
basic idea in 1995 at Usenix Security
(http://www.usenix.org/publications/library/proceedings/security95/vixie.html)
-- in a section titled What We Cannot Fix, he wrote:

With only 16 bits worth of query ID and 16 bits worth of UDP port
number, it's hard not to be predictable.  A determined attacker
can try all the numbers in a very short time and can use patterns
derived from examination of the freely available BIND code.  Even
if we had a white noise generator to help randomize our numbers,
it's just too easy to try them all.
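The arithmetic behind "easy to try them all" sketches out as follows (a back-of-the-envelope estimate, not from Vixie's paper; the 100-packet burst is an assumed attacker budget per query window):

```python
# Odds of a blind DNS response-spoofing attack succeeding.
# N = number of values the attacker must guess (query ID, and
# possibly the source port); k = forged replies the attacker can
# land before the genuine answer arrives.

def spoof_success(N: int, k: int) -> float:
    """P(at least one of k independent guesses hits) = 1 - (1 - 1/N)^k."""
    return 1.0 - (1.0 - 1.0 / N) ** k

ids_only = 2 ** 16                  # predictable source port: guess the 16-bit ID
ids_and_ports = 2 ** 16 * 2 ** 16   # fully randomized 16-bit port as well

print(spoof_success(ids_only, 100))       # ~0.0015 per query window
print(spoof_success(ids_and_ports, 100))  # ~2.3e-8 -- vastly harder
```

Port randomization multiplies the search space but, as the quote notes, does not change the fundamental problem: the numbers are small enough to enumerate.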

Obligatory crypto: the ISC web page on the attack notes DNSSEC is the
only definitive solution for this issue. Understanding that immediate
DNSSEC deployment is not a realistic expectation...

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Upper limit?

2008-07-05 Thread Steven M. Bellovin
On Fri, 04 Jul 2008 20:46:13 -0700
Allen [EMAIL PROTECTED] wrote:

 Is there an upper limit on the number of RSA Public/Private 1024 bit 
 key pairs possible? If so what is the relationship of the number of 
 1024 bit to the number of 2048 and 4096 bit key pairs?
 
There are limits, but they're not particularly important.

I'll oversimplify.  Roughly speaking, a 1024-bit RSA public key is the
product of two 512-bit primes.  According to the Prime Number Theorem,
the number of primes less than n is approximately n/log(n).  Actually,
what we need is the number of primes between 2^511 and 2^512, but that
correction doesn't make much difference -- work through the math
yourself to see that.  Call the number of such primes P.

Now, we need two such primes.  There are therefore P^2 pairs, more than
2^1000.  The numbers are very much larger for 2048- and 4096-bit keys,
but I'll leave those as an exercise for the reader.
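The estimate works out like this (same Prime Number Theorem approximation; floating point is plenty accurate at this scale):

```python
from math import log

def primes_below(n: float) -> float:
    """Prime Number Theorem estimate: pi(n) ~ n / log(n)."""
    return n / log(n)

# 512-bit primes lie between 2^511 and 2^512.
P = primes_below(2.0 ** 512) - primes_below(2.0 ** 511)

# Each ordered pair of such primes gives a candidate 1024-bit modulus.
pairs = P * P

print(log(P, 2))      # ~502.5: roughly 2^502 primes to choose from
print(log(pairs, 2))  # ~1005: comfortably more than 2^1000 pairs
```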

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Mystery on Fifth Avenue

2008-06-13 Thread Steven M. Bellovin
Off-topic, but (a) some crypto stuff, and (b) I think this group will
appreciate it: http://www.nytimes.com/2008/06/12/garden/12puzzle.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: A call for aid in cracking a 1024-bit malware key

2008-06-11 Thread Steven M. Bellovin
On Wed, 11 Jun 2008 15:58:26 -0400
Jeffrey I. Schiller [EMAIL PROTECTED] wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 I bet the malware authors can change keys faster then we can factor
 them...
 
To put it mildly.  They can even set up sophisticated structures to
have lots of keys.

Let's put it like this: suppose you wanted to use all of your
cryptographic skills to do such a thing.  Do you think it could be
cracked?  I don't...

Btw -- see http://blogs.zdnet.com/security/?p=1259 for more details.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



A call for aid in cracking a 1024-bit malware key

2008-06-09 Thread Steven M. Bellovin
According to
http://www.computerworld.com/action/article.do?command=viewArticleBasicarticleId=9094818intsrc=hm_list%3E%20articleId=9094818intsrc=hm_list
some new malware is encrypting files with a 1024-bit RSA key.  Victims
are asked to pay a ransom to get their files decrypted.  So -- can
the key be factored?


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: The perils of security tools

2008-05-25 Thread Steven M. Bellovin
On Sat, 24 May 2008 20:29:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

 Of course, we have now persuaded even the most stubborn OS that 
 randomness matters, and most of them make it available, so perhaps
 this concern is moot.
 
 Though I would be interested to know how well they do it! I did have 
 some input into the design for FreeBSD's, so I know it isn't
 completely awful, but how do other OSes stack up?
 
I believe that all open source Unix-like systems have /dev/random
and /dev/urandom; Solaris does as well.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: [ROS] The perils of security tools

2008-05-22 Thread Steven M. Bellovin
On Tue, 13 May 2008 12:10:16 -0400
Jonathan S. Shapiro [EMAIL PROTECTED] wrote:

 Ben's points are well taken, but there is one *small* piece of this
 where I have some sympathy for the Debian folks:
 
  What can we learn from this? Firstly, vendors should not be fixing 
  problems (or, really, anything) in open source packages by patching
  them locally - they should contribute their patches upstream to the
  package maintainers.
 
 The response times from package maintainers -- even the good ones like
 the OpenSSL team -- are not always fast enough. Sometimes, vendors
 don't have a choice. There is a catch-22 on both sides of this coin.
 
I was going to post something similar.  I maintain several pkgsrc
packages (http://www.pkgsrc.org); while most upstream maintainers are
happy to receive bug fixes, others range from indifferent to downright
hostile.  For example, I once reported a portability bug to a
developer: POSIX standards *require* that a certain system call reject
out-of-range arguments, and NetBSD enforces that check.  The Linux
kernel (or rather, the kernel of that time; I haven't rechecked lately)
did not.  Fine -- a minor standards issue with Linux.  But the
application I was adding to pkgsrc relied on the Linux behavior and the
developer angrily rejected my fix -- the standard was stupid, and he
saw no reason to change his code to conform.

Usually, though, indifference is a bigger problem.  The NetBSD internal
developers' mailing list has seen numerous complaints about *major*
package developers ignoring portability and correctness fixes.  If it
isn't Linux and it isn't Windows, it doesn't matter, it seems.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: [ROS] The perils of security tools

2008-05-22 Thread Steven M. Bellovin
On Tue, 13 May 2008 23:00:57 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

 Steven M. Bellovin wrote:
  On Tue, 13 May 2008 14:10:45 +0100
  Ben Laurie [EMAIL PROTECTED] wrote:
  
  Debian have a stunning example of how blindly fixing problems
  pointed out by security tools can be disastrous.
 
  I've blogged about it here: http://www.links.org/?p=327
 
  Vendors Are Bad For Security
 
  I've ranted about this at length before, I'm sure - even in print,
  in O'Reilly's Open Sources 2. But now Debian have proved me right
  (again) beyond my wildest expectations. Two years ago,
  they "fixed" a "problem" in OpenSSL reported by valgrind[1] by
  removing any possibility of adding any entropy to OpenSSL's pool
  of randomness[2].
 
  The result of this is that for the last two years (from Debian's
  "Edgy" release until now), anyone doing pretty much any crypto on
  Debian (and hence Ubuntu) has been using easily guessable keys.
  This includes SSH keys, SSL keys and OpenVPN keys.
 
  
  [2] Valgrind tracks the use of uninitialised memory. Usually it is
  bad to have any kind of dependency on uninitialised memory, but
  OpenSSL happens to include a rare case when it's OK, or even a good
  idea: its randomness pool. Adding uninitialised memory to it can do
  no harm and might do some good, which is why we do it. It does
  cause irritating errors from some kinds of debugging tools, though,
  including valgrind and Purify. For that reason, we do have a flag
  (PURIFY) that removes the offending code. However, the Debian
  maintainers, instead of tracking down the source of the
  uninitialised memory instead chose to remove any possibility of
  adding memory to the pool at all. Clearly they had not understood
  the bug before fixing it.
 
  Ben: I haven't looked at the actual code in question -- are you
  saying that the *only* way to add more entropy is via this pool of
  uninitialized memory?
 
 No. That would be fantastically stupid.
 
So why are the keys so guessable?  Or did they delete other code?


--Steve Bellovin, http://www.cs.columbia.edu/~smb



blacklisting the bad ssh keys?

2008-05-22 Thread Steven M. Bellovin
Given the published list of bad ssh keys due to the Debian mistake (see
http://metasploit.com/users/hdm/tools/debian-openssl/), should sshd be
updated to contain a blacklist of those keys?  I suspect that a Bloom
filter would be quite compact and efficient.
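A sketch of what that could look like (the sizing formulas are the textbook Bloom-filter ones; the 300,000-key count and the fingerprint value are illustrative assumptions, not from the Metasploit page):

```python
import hashlib
from math import ceil, log

class BloomFilter:
    """Minimal Bloom filter keyed by SHA-256; illustrative only."""
    def __init__(self, n_items: int, fp_rate: float):
        # Standard sizing: m = -n ln(p) / (ln 2)^2, k = (m/n) ln 2.
        self.m = ceil(-n_items * log(fp_rate) / (log(2) ** 2))
        self.k = max(1, round((self.m / n_items) * log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from k independent salted hashes.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p >> 3] |= 1 << (p & 7)

    def __contains__(self, item: bytes):
        return all(self.bits[p >> 3] & (1 << (p & 7))
                   for p in self._positions(item))

# ~300,000 weak-key fingerprints at a one-in-a-million false-positive
# rate fits in roughly 1 MB -- small enough to ship with sshd.
bf = BloomFilter(300_000, 1e-6)
bf.add(b"example-weak-key-fingerprint")
assert b"example-weak-key-fingerprint" in bf
```

A false positive here only means sshd rejects a key that happens not to be on the blacklist, which is a tolerable failure mode; false negatives cannot occur.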


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: User interface, security, and simplicity

2008-05-06 Thread Steven M. Bellovin
On Sun, 04 May 2008 11:22:51 +0100
Ben Laurie [EMAIL PROTECTED] wrote:

 Steven M. Bellovin wrote:
  On Sat, 03 May 2008 17:00:48 -0400
  Perry E. Metzger [EMAIL PROTECTED] wrote:
  
  [EMAIL PROTECTED] (Peter Gutmann) writes:
  I am left with the strong suspicion that SSL VPNs are easier to
  configure and use because a large percentage of their user
  population simply is not very sensitive to how much security is
  actually provided.
  They're easier to configure and use because most users don't
  want to have to rebuild their entire world around PKI just to set
  up a tunnel from A to B.
  I'm one of those people who uses OpenVPN instead of IPSEC, and I'm
  one of the people who helped create IPSEC.
 
  Right now, to use SSH to remotely connect to a machine using public
  keys, all I have to do is type ssh-keygen and copy the locally
  generated public key to a remote machine's authorized keys file.
  When there is an IPSEC system that is equally easy to use I'll
  switch to it.
 
  Until then, OpenVPN let me get started in about five minutes, and
  the fact that it is less than completely secure doesn't matter
  much to me as I'm running SSH under it anyway.
 
  There's a technical/philosophical issue lurking here.  We tried to
  solve it in IPsec; not only do I think we didn't succeed, I'm not at
  all clear we could or should have succeeded.
  
  IPsec operates at layer 3, where there are (generally) no user
  contexts.  This makes it difficult to bind IPsec credentials to a
  user, which means that it inherently can't be as simple to
  configure as ssh.
  
  Put another way, when you tell an sshd whom you wish to log in as,
  it consults that user's home directory and finds an authorized_keys
  file. How can IPsec -- or rather, any key management daemon for
  IPsec -- do that?  Per-user SPDs?  Is this packet for port 80 for
  user pat or user chris?
  
  I can envision ways around this (especially if we have an IP address
  per user of a system -- I've been writing about fine-grained IP
  address assignment for years), but they're inherently a lot more
  complex than ssh.
 
 I don't see why.
 
 The ssh server determines who the packets are for from information
 sent to it by the ssh client.
 
 The ssh client knows on whose behalf it is acting by virtue of being 
 invoked by that user (I'll admit this is a simplification of the most 
 general case, but I assert my argument is unaffected), and thus is
 able to include the information when it talks to the server.
 
 Similarly, the client end of an IPSEC connection knows who opened the 
 connection and could, similarly, convey that information. That data
 may not be available in some OSes by the time it gets to the IPSEC
 stack, but that's a deficiency of the OS, not a fundamental problem.
 
The problem is more on the server end.




--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: User interface, security, and simplicity

2008-05-06 Thread Steven M. Bellovin
On Sat, 03 May 2008 19:50:01 -0400
Perry E. Metzger [EMAIL PROTECTED] wrote:
 
 Almost exclusively the use for such things is nailing up a tunnel to
 bring someone inside a private network. For that, there is no need for
 per user auth -- the general assumption is that the remote box is a
 single user laptop or something similar anyway. You really just want
 to verify that the remote host has a particular private key, and if it
 does, you nail up a tunnel to it (possibly allocating it a local IP
 address in the process). That solves about 95% of the usage scenarios
 and it requires very little configuration. It also covers virtually
 all use of IPSec I see in the field.
 
 Again, there are more complex usage scenarios, and it may be more
 complicated to set one of *those* up, but it is a shame that it is
 difficult to do the simple stuff.
 
So here's an interesting experiment.  Part one: Take a common IPsec
implementation -- Linux, *BSD, Windows, what have you.  Assume this
common scenario: laptop connecting to a corporate server.  Assume a
user authentication credential.  (I'd prefer that that be a public/
private key pair, for many reasons, not the least of which is the bug
in IKEv1 with main mode and shared secrets.)  Do not assume a 1:1 ratio
between laptops and internal IP address, because such servers are
frequently underprovisioned.  Challenge: design -- and implement -- a
*simple* mechanism by which the client user can set up the VPN
connection, both on the client and on the server.  This part can
happen while the client is physically on the corporate net.  Variant A:
the VPN server is a similar box to which the client has login-grade
access. Variant B: the VPN server is something like a restricted-access
Cisco box, in which case a trusted proxy is probably needed.  User
setup should be something like 'configvpn cs.columbia.edu', where I
supply my username and authenticator.  User connection should be
'startvpn cs.columbia.edu' (or, of course, the GUI equivalent); all I
supply is some sort of authenticator.  Administrator setup should be a
list of authorized users, and probably an IP address range to use
(though having the VPN server look like a DHCP relay would be cool).

Experiment part two: implement remote login (or remote IMAP, or remote
Web with per-user privileges, etc.) under similar conditions.  Recall
that being able to do this was a goal of the IPsec working group.

I think that part one is doable, though possibly the existing APIs are
incomplete.  I don't think that part two is doable, and certainly not
with high assurance.  In particular, with TLS the session key can be
negotiated between two user contexts; with IPsec/IKE, it's negotiated
between a user and a system.  (Yes, I'm oversimplifying here.)

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL and Malicious Hardware/Software

2008-05-03 Thread Steven M. Bellovin
On Fri, 2 May 2008 08:33:19 +0100
Arcane Jill [EMAIL PROTECTED] wrote:

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Ryan Phillips
 Sent: 28 April 2008 23:13
 To: Cryptography
 Subject: SSL and Malicious Hardware/Software
 
  I can't think of a great way of alerting the user,
 
 I would be alerted immediately, because I'm using the Petname Tool
 Firefox plugin.
 
 For an unproxied site, I get a small green window with my own choice
 of text in it (e.g. Gmail if I'm visiting https://mail.google.com).
 If a proxy were to insert itself in the middle, that window would
 turn yellow, and the message would change to (untrusted).
 
Assorted user studies suggest that most users do not notice the color
of random little windows in their browsers...


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: User interface, security, and simplicity

2008-05-03 Thread Steven M. Bellovin
On Sat, 03 May 2008 17:00:48 -0400
Perry E. Metzger [EMAIL PROTECTED] wrote:

 
 [EMAIL PROTECTED] (Peter Gutmann) writes:
 I am left with the strong suspicion that SSL VPNs are easier to
 configure and use because a large percentage of their user
 population simply is not very sensitive to how much security is
 actually provided.
 
  They're easier to configure and use because most users don't want
  to have to rebuild their entire world around PKI just to set up a
  tunnel from A to B.
 
 I'm one of those people who uses OpenVPN instead of IPSEC, and I'm one
 of the people who helped create IPSEC.
 
 Right now, to use SSH to remotely connect to a machine using public
 keys, all I have to do is type ssh-keygen and copy the locally
 generated public key to a remote machine's authorized keys file.
 When there is an IPSEC system that is equally easy to use I'll switch
 to it.
 
 Until then, OpenVPN let me get started in about five minutes, and the
 fact that it is less than completely secure doesn't matter much to me
 as I'm running SSH under it anyway.
 
There's a technical/philosophical issue lurking here.  We tried to
solve it in IPsec; not only do I think we didn't succeed, I'm not at
all clear we could or should have succeeded.

IPsec operates at layer 3, where there are (generally) no user
contexts.  This makes it difficult to bind IPsec credentials to a user,
which means that it inherently can't be as simple to configure as ssh.

Put another way, when you tell an sshd whom you wish to log in as, it
consults that user's home directory and finds an authorized_keys file.
How can IPsec -- or rather, any key management daemon for IPsec -- do
that?  Per-user SPDs?  Is this packet for port 80 for user pat or user
chris?

I can envision ways around this (especially if we have an IP address
per user of a system -- I've been writing about fine-grained IP address
assignment for years), but they're inherently a lot more complex than
ssh.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: privacy expectations Was: SSL and Malicious Hardware/Software

2008-04-30 Thread Steven M. Bellovin
On Wed, 30 Apr 2008 12:49:12 +0300 (IDT)
Alexander Klimov [EMAIL PROTECTED] wrote:

 
 http://www.securityfocus.com/columnists/421/2:
 
   Lance Corporal Jennifer Long was issued a government computer
   to use on a government military network. When she was
   suspected of violations of the military drug use policies (and
   of criminal laws related to drug use), Marine Corps criminal
   investigators reviewed the contents of email messages she sent
   to another military employee who was likewise using
   a government issued computer over the same government network.
   The messages were retrieved from the government mail server
   and later used against Long. On September 27, 2006, the United
   States Court of Appeals for the Armed forces had to decide
   whether Long had any expectation of privacy in these e-mails.
 
   The starting point for any analysis is, of course, the DoD
   policy expressed on its warning banner, which stated quite
   explicitly:
 
 [...] All information, including personal information,
 placed on or sent over this system may be monitored. Use of
 this DoD computer system, authorized or unauthorized,
 constitutes consent to monitoring of this system. [...]
 
   However, the military court, [...] found that Long did, in
   fact have some privacy interests in the contents of her
   communications. It noted that while the government said it
   could monitor, it rarely did.
 
The actual opinion is much more nuanced and case-specific.  In the
first place, it demonstrated that the actual culture at that site was
very different.  In particular, the administrator testified that it
was general policy to avoid examining e-mails and their content
because it was a 'privacy issue'.  The court might well have ruled
differently were that not the case.

Second, the court noted that the suspected misconduct was (a) for
evidence of illegal behavior, and (b) unrelated to workplace misconduct.
And the banner wasn't specific enough: The banner in the instant case
did not provide Appellee with notice that she had no right of privacy.
Instead, the banner focused on the idea that her use of the system may
be monitored for limited purposes.

In addition, because the employer in this case was the government,
constitutional protections come into play, in a way that would not
apply to a private sector employer.  The reasoning there is complex,
especially since we're talking about the military (and soldiers have
many fewer rights than do civilians), so I won't try to summarize it;
let it suffice to say that generalizing from that case to an ordinary
workplace environment is not simple.

To sum up -- the court ruling in this particular case was very specific
to the facts of the case.  It's far from clear that it's generally
applicable.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Declassified NSA publications

2008-04-24 Thread Steven M. Bellovin
http://www.nsa.gov/public/crypt_spectrum.cfm


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: 2factor

2008-04-21 Thread Steven M. Bellovin
On Wed, 16 Apr 2008 14:07:49 -0400
[EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


 Which seem to be aimed at a drop in replacement for SSL (with a
 working example using Firefox and Apache). They seem to rest on a key
 exchange or agreement based on  a shared secret. 

As opposed to, say, RFC 4279, which is TLS-based.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Still locked up Shannon crypto work?

2008-04-18 Thread Steven M. Bellovin
On Mon, 07 Apr 2008 08:53:44 -0700
Ed Gerck [EMAIL PROTECTED] wrote:

 Consider Shannon. He didn't do just information theory. Several
 years before, he did some other good things and some which are still
 locked up in the security of cryptography.
 
 Shannon's crypto work that is still [1986] locked up? This was
 said (*) by Richard W. Hamming on March 7, 1986. Hamming,
 who died when he was almost 83 years old in 1998, was then a
 Professor at the Naval Postgraduate School in Monterey, California.
 He was also a retired Bell Labs scientist.
 
 Does anyone know about this or what it could be? Or if Hamming was
 incorrect?
 
I've heard that there were some patent applications with secrecy
orders, though I thought those were released by the late 1980s.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Hagelin cipher machine for sale on Ebay

2008-03-30 Thread Steven M. Bellovin
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItemih=005viewitem=item=150231089624rd=1


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: How is DNSSEC

2008-03-27 Thread Steven M. Bellovin
On Fri, 21 Mar 2008 08:52:07 +1000
James A. Donald [EMAIL PROTECTED] wrote:

  From time to time I hear that DNSSEC is working fine, and on
 examining the matter I find it is working fine except that 
 
 Seems to me that if DNSSEC is actually working fine, I should be able
 to provide an authoritative public key for any domain name I control,
 and should be able to obtain such keys for other domain names, and
 use such keys for any purpose, not just those purposes envisaged in
 the DNSSEC specification.  Can I?  It is not apparent to me that I
 can.
 
You might want to look at RFC 3445 and draft-iab-dns-choices-05.txt.

As for DNSSEC keys -- DNSSEC is for securing the DNS.  Once you've done
that, you can put other records in the DNS, but there are some subtle
points in DNS RR design that should be heeded.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Protection for quasi-offline memory nabbing

2008-03-21 Thread Steven M. Bellovin
I've been thinking about similar issues.  It seems to me that just
destroying the key schedule is a big help -- enough bits will change in
the key that data recovery using just the damaged key is hard, per
comments in the paper itself.



NSA approves secure smart phone

2008-03-19 Thread Steven M. Bellovin
http://www.gcn.com/online/vol1_no1/45946-1.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: cold boot attacks on disk encryption

2008-03-15 Thread Steven M. Bellovin
On Thu, 21 Feb 2008 13:37:20 -0800
Ali, Saqib [EMAIL PROTECTED] wrote:

   Umm, pardon my bluntness, but what do you think the FDE stores the
  key in, if not DRAM? The encrypting device controller is a computer
  system with a CPU and memory. I can easily imagine what you'd need
  to build to do this to a disk drive. This attack works on anything
  that has RAM.
 
 How about TPM? Would this type of attack work on a tamper-resistant
 ver1.2 TPM?

See
http://technet2.microsoft.com/windowsserver2008/en/library/d2ff5c4e-4a68-4fd3-81d1-665e95a59dd91033.mspx?mfr=true

Briefly, there's a bit in the TPM that means there are keys present;
zero RAM when booting.  This does nothing against the guy with the
Dewar flask of liquid nitrogen, of course.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: RNG for Padding

2008-03-15 Thread Steven M. Bellovin
On Fri, 7 Mar 2008 15:04:49 +0100
COMINT [EMAIL PROTECTED] wrote:

 Hi,
 
 This may be out of the remit of the list, if so a pointer to a more
 appropriate forum would be welcome.
 
 In Applied Crypto, the use of padding for CBC encryption is suggested
 to be met by ending the data block with a 1 and then all 0s to the end
 of the block size.
 
 Is this not introducing a risk as you are essentially introducing a
 large amount of guessable plaintext into the ciphertext.
 
 Is it not wiser to use RNG data as the padding, and using some kind of
 embedded packet size header to tell the system what is padding?
 
Maybe -- but you probably have enough guessable plaintext elsewhere
that a bit more simply doesn't matter much.  See, for example, my 1997
paper Probable Plaintext Cryptanalysis of the IP Security Protocols,
http://www.cs.columbia.edu/~smb/papers/probtxt.pdf
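The two padding schemes under discussion can be sketched side by side (a minimal illustration; the two-byte length trailer in the random-padding variant is an assumed framing choice, not from any particular standard):

```python
import os

BLOCK = 16  # AES block size in bytes

def pad_bit(data: bytes) -> bytes:
    """'1 bit then zeros' padding, bytewise: append 0x80, then 0x00s."""
    n = BLOCK - (len(data) % BLOCK)
    return data + b"\x80" + b"\x00" * (n - 1)

def unpad_bit(padded: bytes) -> bytes:
    # Strip trailing zeros; the last remaining byte must be the marker.
    stripped = padded.rstrip(b"\x00")
    assert stripped.endswith(b"\x80"), "malformed padding"
    return stripped[:-1]

def pad_random(data: bytes) -> bytes:
    """Random padding; the real length rides in a 2-byte trailer."""
    n = (-(len(data) + 2)) % BLOCK
    return data + os.urandom(n) + len(data).to_bytes(2, "big")

msg = b"attack at dawn"
assert unpad_bit(pad_bit(msg)) == msg
assert len(pad_bit(msg)) % BLOCK == 0
assert len(pad_random(msg)) % BLOCK == 0
```

Both produce full blocks; the difference is only whether the filler bytes are guessable, which, per the reply above, matters little when so much other plaintext is already probable.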


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Toshiba shows 2Mbps hardware RNG

2008-02-15 Thread Steven M. Bellovin
On Wed, 13 Feb 2008 20:38:49 -0800
[EMAIL PROTECTED] wrote:

 
  - Original Message -
  From: Pat Farrell [EMAIL PROTECTED]
  To: 
  Subject: Re: Toshiba shows 2Mbps hardware RNG
  Date: Sun, 10 Feb 2008 17:40:19 -0500
  
  
  Perry E. Metzger wrote:
   [EMAIL PROTECTED] (Peter Gutmann) writes:
   I've always wondered why RNG speed is such a big deal for
   anything but a few highly specialised applications.
  
   Perhaps it isn't, but any hardware RNG is probably better than
   none for many apps, and they've managed to put the whole thing in
   a quite small bit of silicon. The speed is probably icing on the
   cake.
  
  One of the benefits of speed is that you can use cleanup code to 
  control bias. Carl Ellison put some out on his website last century.
  
  
 
 It is a HUGE win for designing a crypto system to have a really 
 fast (and good) HW RNG. Being able to generate 10-20,000 AES keys
 per second means that you can engineer things that were impossible
 to do otherwise.  You can generate as many keys as you like, throw
 away keys after one time use, treat them as ephemeral authentication
 keys (say give a few million or so to a user), etc. Or you could 
 hand a sender 10 MBytes (less than a minute to generate), which then
 can be used to create billions of keys (say using Ueli Maurer's 
 Bounded Storage Model).  The sender could then use each key to 
 uniquely encrypt (AES CTR) each message of a series of messages or
 packets to a receiver (AES key setup is fast). No need for an IV or 
 worrying about message ordering (each one has a key id), or even the
 compromise of a key or two.
 
 Randomness is the most fundamental underpinning of a crypto system
 and having lots of it on demand is really fabulous to have in our 
 system security design tool box.
 
Leaving aside whether or not your scenarios make sense, why must this
be done via a hardware RNG?

I ran 'openssl speed aes' on a 3.4 Ghz single-core Pentium.  On 16-byte
blocks with AES-128 -- i.e., running AES in counter mode to generate
128-bit keys -- it ran at about 3.4M encryptions/second.  That's more
than two orders of magnitude better than you say is needed.  Why do I
need hardware?
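That measurement amounts to running a block cipher in counter mode as a key generator. A minimal sketch of the pattern (SHA-256 stands in for AES here because Python's standard library has no AES; the construction, not the primitive, is the point, and the seed value is illustrative):

```python
import hashlib

def derive_keys(seed: bytes, n_keys: int, key_len: int = 16):
    """Mass-produce keys by running a PRF over an incrementing counter.
    The text's measurement used AES-128 in the same counter pattern;
    SHA-256 is substituted only for stdlib self-containment."""
    out = []
    for ctr in range(n_keys):
        block = hashlib.sha256(seed + ctr.to_bytes(16, "big")).digest()
        out.append(block[:key_len])
    return out

# Tens of thousands of distinct 128-bit keys from one good seed,
# generated in well under a second on commodity hardware.
keys = derive_keys(b"high-entropy-seed-from-hw-rng", 20_000)
assert len(set(keys)) == 20_000
```

The seed still needs real entropy, which is exactly the division of labor described below: hardware for seeding, deterministic software for bulk generation.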

Hardware RNGs are great for producing initial seeds.  They're also
great for producing new randomness to stir into the pot (i.e., via
something like Yarrow).  But they're lousy for ongoing work because
they're relatively low assurance.

As others have noted, software has a big advantage: it's
deterministic.  Once you know its working, you have much higher
assurance that it will continue to work the same way.  (Aside: I know
quite a bit about the problem of certifying complex software.  A
cryptographically strong PRNG doesn't fall into that category if you
have confidence in the algorithm.)  Remember the Clipper chip?
According to Dorothy Denning, the escrowed keys -- that is, the entire
security of the basic scheme -- were generated by several applications
of Skipjack, the underlying block cipher -- see
http://catless.ncl.ac.uk/Risks/14.52.html#subj1 for details.  (Note:
that statement was later disavowed.  I'm not sure I believe the
disavowal; it looked secure to me.)

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Dutch Transport Card Broken

2008-02-09 Thread Steven M. Bellovin
On Thu, 07 Feb 2008 17:37:02 +1300
[EMAIL PROTECTED] (Peter Gutmann) wrote:

 The real issues occur in two locations:
 
 1. In the browser UI.
 2. In the server processing, which no longer gets the password via an
 HTTP POST but as a side-effect of the TLS connect.
 
 (1) is a one-off cost for the browser developers, (2) is a bit more
 complex to estimate because it's on a per-site basis, but in general
 since the raw data (username+pw) is already present it's mostly a
 case of redoing the data flow a bit, and not necessarily rebuilding
 the whole system from scratch.  To give one example, a healthcare
 provider, they currently trigger an SQL query from an HTTP POST that
 looks up the password with the username as key, and the change would
 be to do the same thing at the TLS stage rather than the post-TLS
 HTTP stage.

There's another issue: initial account setup.  People will still need
to rely on certificate-checking for that.  It's a real problem at some
hotspots, where Evil Twin attacks are easy and lots of casual users are
signing up for the first time.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Gutmann Soundwave Therapy

2008-02-06 Thread Steven M. Bellovin
On Mon, 4 Feb 2008 09:33:37 -0500 (EST)
Leichter, Jerry [EMAIL PROTECTED] wrote:

 The NSA quote someone - Steve Bellovin? - has repeated comes to mind:
 "Amateurs talk about algorithms.  Professionals talk about economics."
 Using DTLS for VOIP provides you with an extremely high level of
 security, but costs you 50% packet overhead.  Is that worth it to you?
 It really depends - and making an intelligent choice requires that
 various alternatives along the cost/safety curve actually be
 available.

Precisely.

Some years ago, I did a crypto design for a potential product.  As best
we could figure it, the extra overhead for a standard mechanism versus
a custom one was greater than the profit margin for this product.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Dutch Transport Card Broken

2008-01-30 Thread Steven M. Bellovin

 Why require contactless in the first place?
 
 Is swiping one's card, credit-card style too difficult for the average
 user?  I'm thinking two parallel copper traces on the card could be
 used to power it for the duration of the swipe, with power provided
 by the reader.  Why, in a billion-dollar project, one must use COTS
 RFIDs - with their attendant privacy and security problems - is
 beyond me. 
 
 A little ingenuity would have gone a long way.
 
OPs deliberately elided.

This posting (and several others in this thread) disturb me.  Folks on
this list and its progenitors have long noted that cryptography is a
matter of economics.  That is, cryptography and security aren't
absolute goals; rather, they're tools for achieving something else.
The obvious answers in this case are "prevent fare fraud" or "make
money", and even those would suffice.  However, there are other issues
less easily monetized, such as "make the trams and buses run
efficiently."

A security system doesn't have to be perfect.  Rather, it has to be
good enough that you save more than you lose via the holes, including
the holes you know about up front.  Spending more than you have to is
simply bad engineering.  Speaking as an engineer, rather than as a
scientist, the real failure mode is too high a net loss.  As a
cryptographer and security guy, I'd rather there were no loss -- but
that's not real.

A transit system has to move people.  For all that the New York City
Metrocard works, it's slower than a contactless wireless system.  How
much longer will it take people to board trams with a stripe reader
than with a contactless smart card?  What is your power budget (which
affects range)?  Even leaving out the effect that delays have on
ridership, a transit system that wants to move N people needs more
units if the latency per rider is above a certain threshold.

Let's take a closer look at the New York system, since it was touted as
superior.  It's optimized for subways, not buses, which has several
implications.  (Subway ridership in New York is twice bus
ridership -- see
http://www.crainsnewyork.com/apps/pbcs.dll/article?AID=/20070223/FREE/70223008/1066)
First, subway turnstiles are much more easily used as part of an online
system than are bus fare card readers.  The deployment started in 1994,
when cellular data simply wasn't an option, based on cost, bandwidth,
availability, and much more.  Second, on a subway you use your fare
card well in advance of boarding; there is thus little latency effect
on the system.  Third, wireless is *still* faster -- according to some
reports (http://www.dslreports.com/forum/r19222677-The-Next-MetroCard),
the MTA is considering replacing the current system with a wireless one.

Online systems have another issue: they require constant communication
to a high-availability server.  When that's not an option (i.e., New
York buses, or subway turnstiles when the server is down), the system
has to fall back to some other scheme.  This scheme is more restrictive,
precisely because of the fraud issue.   Back when I was in high school,
some students got bus passes.  I recall a frequent sight: those who had
boarded early moving to the back of the bus and handing their passes to
other students still waiting to board the bus.  Replay worked well
against an overloaded driver...  Metrocards don't have that failure
mode -- but the failure mode they do have is a limitation on how many
times they can be used in a short time interval.  This affects, for
example, a family of five or more trying to travel on a single card,
even on subways.  

How much of this applies to the Dutch farecards?  I have no idea.  But
this group is trying to *engineer* a system without looking at costs
and other constraints.  That leads to security by checklist, an
all-too-common failing.

Systems like this have two primary failure modes -- failure in the
sense of losing more money (or time, or what have you) than
anticipated.  First, the designers may not have understood the
available technology and its limitations.  That was certainly the case
with WEP; I suspect it's the case here, but I don't know.  Even so, it
is far from clear that exploitation of the hole will have an economic
impact; that's as much a sociological question as a technical one.
(Maybe the incremental cost per card of better crypto is €0.01.  One
web site I found put tram ridership in Amsterdam at 1,000,000/year
(http://blog.wired.com/cars/2007/10/trams-dominate-.html), which means
that the cost might be €10,000/year.  How many riders will try to cheat
the system?  Enough to be an issue?  I don't know -- but that's
precisely my point; I don't know and I doubt very much that most other
posters here know.  That said, I do suspect that stronger crypto would
be economical.)

The second failure mode comes from misunderstanding the threat model.
That's why the old American AMPS cellular phones were subject to
cloning attacks.  It was *not* that the designers didn't anticipate 

US reforming export controls

2008-01-29 Thread Steven M. Bellovin
The Bush administration is reforming the way export controls are
administered; see
http://www.fas.org/blog/ssp/2008/01/bush_administration_unveils_ne.php
It's too soon to know if crypto will be affected; certainly, it's
something to watch.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Typex

2008-01-24 Thread Steven M. Bellovin
A knowledgeable colleague (but who is nevertheless not a crypto expert)
thinks he's seen something about Typex (the WW II British rotor
machine) having been cracked.  Does anyone know anything about that?  A
quick Google found nothing of the sort, but did find references showing
that it was used as late as 1970.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL/TLS and port 587

2008-01-23 Thread Steven M. Bellovin
On Tue, 22 Jan 2008 21:49:32 -0800
Ed Gerck [EMAIL PROTECTED] wrote:

 As I commented in the
 second paragraph, an attack at the ISP (where SSL/TLS is
 of no help) has been the dominant threat -- and that is
 why one of the main problems is called warrantless
 wiretapping. Further, because US law does /not/ protect
 data at rest, anyone claiming authorized process (which
 the ISP itself may) can eavesdrop without any required
 formality.
 
Please justify this.  Email stored at the ISP is protected in the U.S.
by the Stored Communications Act, 18 USC 2701
(http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
well-drafted piece of legislation and has been the subject of much
litigation, from the Steve Jackson Games case
(http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
(http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I don't
see how you can say stored email isn't protected at all.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: SSL/TLS and port 587

2008-01-23 Thread Steven M. Bellovin
On Wed, 23 Jan 2008 08:10:01 -0800
Ed Gerck [EMAIL PROTECTED] wrote:

 Steven M. Bellovin wrote:
  On Tue, 22 Jan 2008 21:49:32 -0800
  Ed Gerck [EMAIL PROTECTED] wrote:
   As I commented in the
  second paragraph, an attack at the ISP (where SSL/TLS is
  of no help) has been the dominant threat -- and that is
  why one of the main problems is called warrantless
  wiretapping. Further, because US law does /not/ protect
  data at rest, anyone claiming authorized process (which
  the ISP itself may) can eavesdrop without any required
  formality.
 
  Please justify this.  Email stored at the ISP is protected in the
  U.S. by the Stored Communications Act, 18 USC 2701
  (http://www4.law.cornell.edu/uscode/18/2701.html).  While it's not a
  well-drafted piece of legislation and has been the subject of much
  litigation, from the Steve Jackson Games case
  (http://w2.eff.org/legal/cases/SJG/) to Warshak v. United States
  (http://www.cs.columbia.edu/~smb/blog/2007-06/2007-06-19.html), I
  don't see how you can say stored email isn't protected at all.
 
 As you wrote in your blog, users really need to read those boring
 [ISP] licenses carefully.
 
 ISP service terms grant the disclosure right on the basis of
 something broadly called valid legal process or any such
 term as defined /by the ISP/. Management access to the account
 (including email data) is a valid legal process (authorized by the
 service terms as a private contract) that can be used without
 any required formality, for example to verify compliance to the
 service terms or something else [1].
 
 Frequently, common sense and standard use are used to
 justify such access but, technically, no justification is
 actually needed.
 
 Further, when an ISP such as google says Google does not share
 or reveal email content or personal information with third
 parties. one usually forgets that (1) third parties may actually
 mean everyone on the planet but you; (2) third parties also
 have third parties; and (3) #2 is recursive.

You're confusing two concepts.  Warrants apply to government
behavior; terming something a warrantless wiretap carries the clear
implication of government action.  Private action may or may not
violate the wiretap act or the Stored Communications Act, but it has
nothing to do with warrants.
 
 Mr. Councilman's case and his lawyer's declaration that Congress
 recognized that any time you store communication, there is an
 inherent loss of privacy was not in your blog, though. Did I
 miss something?

Since the Councilman case took place several years before I started my
blog, it's hardly surprising that I didn't blog on it.  And it turns out
that Councilman -- see http://epic.org/privacy/councilman/ for a
summary -- isn't very interesting any more.  The original district
court ruling, upheld by three judges of the Court of Appeals,
significantly weakened privacy protections for email.  It was indeed an
important and controversial ruling.  However, the case was reheard en banc;
the full court ruled that the earlier decisions were incorrect, which
left previous interpretations of the wiretap law intact.  As far as I
can tell, it was never appealed to the Supreme Court.  (The ultimate
outcome, which isn't very interesting to this list, is discussed in
http://pacer.mad.uscourts.gov/dc/opinions/ponsor/pdf/councilman%20mo.pdf)

You are, of course, quite correct that ISP terms of service need to be
read carefully.

 
 Cheers,
 Ed Gerck
 
 [1] in http://mail.google.com/mail/help/about_privacy.html :
 Of course, the law and common sense dictate some exceptions. These
 exceptions include requests by users that Google's support staff
 access their email messages in order to diagnose problems; when
 Google is required by law to do so; and when we are compelled to
 disclose personal information because we reasonably believe it's
 necessary in order to protect the rights, property or safety of
 Google, its users and the public. For full details, please refer to
 the When we may disclose your personal information section of our
 privacy policy. These exceptions are standard across the industry and
 are necessary for email providers to assist their users and to meet
 legal requirements.



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Emissions security

2008-01-18 Thread Steven M. Bellovin
http://www.technologynewsdaily.com/node/8965 (for those of you who
don't take TEMPEST seriously)


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: US drafting plan to allow government access to any email or Web search

2008-01-15 Thread Steven M. Bellovin
On Tue, 15 Jan 2008 08:19:11 -0500
Perry E. Metzger [EMAIL PROTECTED] wrote:
 
 The PDF link points to:
 
 http://online.wsj.com/public/resources/documents/WashWire.pdf
 
 which I'm unable to access at the moment.


I believe the proper URL is
http://blogs.wsj.com/washwire/2008/01/13/dancing-spychief-wants-to-tap-into-cyberspace/
(and as best I can tell, it doesn't require a WSJ subscription for
access).



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Death of antivirus software imminent

2008-01-14 Thread Steven M. Bellovin
On Fri, 11 Jan 2008 17:32:04 -0800
Alex Alten [EMAIL PROTECTED] wrote:


 
 Generally any standard encrypted protocols will probably eventually
 have to support some sort of CALEA capability. For example, using a
 Verisign ICA certificate to do MITM of SSL, or possibly requiring
 Ebay to provide some sort of legal access to Skype private keys.  

...
 
 This train left the station a *long* time ago.
 
 So it's not so clear that the train has even left the station.
 
You've given a wish list but you haven't explained why you think it
will happen.  The US government walked away from the issue years ago,
when the Clipper chip effort failed.  Even post-9/11, the Bush
administration chose not to revisit the question.

The real issue, though, is technical rather than political will.  CALEA
is a mandate for service providers; key escrow is a requirement on the
targets of the surveillance.  The bad guys won't co-operate...

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: SHA-3 API

2008-01-06 Thread Steven M. Bellovin
Forwarded with permission.

This is part of a discussion of the proposed SHA-3 API for the NIST
competition.  Those interested in discussing it should subscribe to the
list; see http://csrc.nist.gov/groups/ST/hash/email_list.html for
instructions.

Begin forwarded message:

Date: Fri, 4 Jan 2008 10:21:24 -0500
From: Ronald L. Rivest [EMAIL PROTECTED]
To: Multiple recipients of list [EMAIL PROTECTED]
Subject: SHA-3 API



Dear Larry Bassham --

Since you indicated that you might be producing a revised
API for the SHA-3 submissions, here are some suggestions and
thoughts for your consideration:

(1) Make hashState totally opaque.

 In other words, eliminate the requirement to include
 a field hashbitlen.  While an implementation presumably
 includes such a field, there is no need that I can see
 for standardizing its name and making it a requirement.

(2) Measure all input to be hashed in bytes, not bits.

 While the theoretical literature on hashing measures
 lengths in bits, in practice all data is an integral
 number of bytes.  That is, theory uses base-2, practice
 uses base-256.  I have never seen an application that
 cared about hashing an input that was not an integral
 number of bytes.

 An application that really needs bit-lengths for hashing
 can apply the standard transformation to the data first:
 always append a 1-bit, then enough 0-bits to make the data
 an integral number of bytes.

 I think that using a bit-length convention for the standard
 input will cause errors, as callers are likely to forget
 multiplying the input chunk length by 8.  This will cause
 the wrong result, but it will be undetectable---only 1/8 of
 the data will be hashed.  A security vulnerability will be
 created, as it will no longer be collision-resistant...

 I think the risk of application-level mistakes in this manner
 outweighs the (non-existent) need for bit-lengths on inputs.
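
The standard transformation mentioned above can be sketched precisely
(a '0'/'1' string stands in for a bit string here, purely for clarity):

```python
def byte_align(bits: str) -> bytes:
    """Append a 1-bit, then 0-bits, until the length is a multiple
    of 8, so a bit-oriented input fits a byte-oriented hash API.
    The padding is invertible: strip trailing 0s, then the final 1."""
    padded = bits + "1"
    padded += "0" * (-len(padded) % 8)
    return bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

assert byte_align("10100") == bytes([0b10100100])
assert len(byte_align("1" * 16)) == 3  # 17 bits pad out to 24 bits = 3 bytes
```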

(3) Eliminate the offset input to the Update function.

 First of all, it is too short, if you are going to admit
 inputs of 2**64 bits.

 But more importantly, there is no understandable need for
 such an input.

 I don't think you are contemplating giving the inputs
 out-of-order.  If this is to support parallel implementations
 somehow, you would need other functions, beyond Update, to
 combine the hash results for various portions of the input.

 Thus, the offset is merely the sum of the previous datalen
 values, and can be kept by the hash function implementation
 internally in hashState.

 Best to eliminate it.

(4) Make datalen a 64-bit input to Update.

 I think you need to bite the 64-bit bullet and insist that
 all C implementations support 64-bit data values, particularly
 when you have inputs that may often be larger than 2**32 bits
 (or 2**32 bytes, even).  Your SHA-1 example on page 4 of the
 proposed API breaks for long inputs.

 Having an int parameter here is another place where users may
 have errors, when they don't realize that their inputs may be
 exceeding the int length bound.  We shouldn't build in hazards
 for the unwary into the API.

(5) Make it clear what kinds of endian-ness should be supported.

 While the inputs are supplied as byte-strings, implementations
 may immediately copy these over into words for processing.
 What are the possibilities that an implementation needs to
 handle for endian-ness during this copying?  Big/little endian-ness
 within 16/32/64 bit words?

(6) Make it clear that threads are not allowed in reference
 implementation.

 You stated that the standard implementation should not
 make use of available parallelism on the reference platform.


Cheers,
Ron Rivest






-- 
 Ronald L. Rivest
 Room 32-G692, Stata Center, MIT, Cambridge MA 02139
 Tel 617-253-5880, Email [EMAIL PROTECTED]




--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: DRM for batteries

2008-01-06 Thread Steven M. Bellovin
On Sat, 5 Jan 2008 15:28:50 -0800
Stephan Somogyi [EMAIL PROTECTED] wrote:

 At 16:38 +1300 04.01.2008, Peter Gutmann wrote:
 
 At $1.40 each (at least in sub-1K quantities) you wonder whether
 it's costing them more to add the DRM (spread over all battery
 sales) than any marginal gain in preventing use of third-party
 batteries by a small subset of users.
 
 I don't think I agree with the DRM for batteries characterization.
 It's not my data in that battery that they're preventing me from
 getting at.

Correct.  In a similar case, Lexmark sued a maker of print cartridges
under the DMCA.  Lexmark lost in the Court of Appeals and the Supreme
Court declined to hear the case.  See
http://www.eff.org/cases/lexmark-v-static-control-case-archive and
http://www.scc-inc.com/SccVsLexmark/




--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Death of antivirus software imminent

2008-01-03 Thread Steven M. Bellovin
On Thu, 03 Jan 2008 11:52:21 -0500
[EMAIL PROTECTED] wrote:

 The aspect of this that is directly relevant to this
 list is that while we have labored to make network
 comms safe in an unsafe transmission medium, the
 world has now reached the point where the odds favor
 the hypothesis that whomever you are talking to is
 themselves already 0wned, i.e., it does not matter if
 the comms are clean when the opponent already owns
 your counterparty.

Right -- remember Spaf's famous line about how using strong crypto on
the Internet is like using an armored car to carry money between
someone living in a cardboard shack and someone living on a park bench?

Crypto solves certain problems very well.  Against others, it's worse
than useless -- worse, because it blocks out friendly IDSs as well as
hostile parties.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Flaws in OpenSSL FIPS Object Module

2007-12-11 Thread Steven M. Bellovin
On Mon, 10 Dec 2007 11:27:10 -0500
Vin McLellan [EMAIL PROTECTED] wrote:

 
 What does it say about the integrity of the FIPS program, and its
 CMTL evaluation process, when it is left to competitors to point out
 non-compliance of evaluated products -- proprietary or open source --
 to basic architectural requirements of the standard?
 
Integrity or ability?  We all know that finding problems in code or
architecture is *very* hard.  


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Intercepting Microsoft wireless keyboard communications

2007-12-11 Thread Steven M. Bellovin
On Tue, 11 Dec 2007 13:49:19 +1000
James A. Donald [EMAIL PROTECTED] wrote:

 Steven M. Bellovin wrote:
  It's moderately complex if you're trying to conserve bandwidth
  (which translates to power) and preserve a datagram model.  The
  latter constraint generally rules out stream ciphers; the former
  rules out things like encrypting the keystroke plus seven random
  bytes with a 64-bit block cipher.  Power is also an issue if your
  cipher uses very much CPU time or custom hardware.
   Im sure most readers of this list can propose *some* solution.
   It's
  instructive, though, to consider everything that needs to go into a
  full system solution, including the ability to resynchronize cipher
  states and the need to avoid confusing naive users if the cat
  happened to fall asleep on the space bar while the CPU was turned
  off.
 
 Use CFB mode.  That takes care of all the above problems.  You can
 transmit any small bunch of bits, don't need to transmit a complete
 block, and if the keyboard and the receiver get out sync, the
 keyboard's signal will be decrypted as garbage for the first 128
 bits.  If one has the keyboard regularly transmit no key's pressed
 from time to time, and if valid key press representations have a
 couple of check bits redundancy, with several keypresses being
 ignored after any invalid key signal, keyboard and receiver will
 synchronize with no fuss.
 

Believe it or not, I thought of CFB...

Sending keep-alives will do nasties to battery lifetime, I suspect;
most of the time, you're not typing.  As for CFB -- with a 64-bit block
cipher (you want them to use DES? they're not going to think of anything
different), it will take 9 keypresses to flush the buffer.  With AES
(apparently your assumption), it will take 17 keypresses.  This isn't
exactly muggle-friendly.  Just think of the text in the instructions...
Redundancy?  I wonder how much is needed to avoid problems.  It has to
be a divisor of the cipher block size, which more or less means 8 extra
bits.  How much will that cost in battery life?
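
One way to read the 9- and 17-keypress figures, assuming one byte of
ciphertext per keypress: the CFB shift register must be refilled with a
full cipher block of good ciphertext before output is correct again,
and the keypress in flight when sync was lost is garbled too.

```python
def resync_keypresses(block_bits: int, bits_per_press: int = 8) -> int:
    """Keypresses lost before an out-of-sync CFB receiver decrypts
    correctly again: one block's worth to refill the shift register,
    plus the garbled keypress that revealed the loss of sync."""
    return block_bits // bits_per_press + 1

assert resync_keypresses(64) == 9    # 64-bit block cipher, e.g. DES
assert resync_keypresses(128) == 17  # 128-bit block cipher, e.g. AES
```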


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Open-source PAL

2007-12-03 Thread Steven M. Bellovin
On Thu, 29 Nov 2007 16:05:00 -0500
Tim Dierks [EMAIL PROTECTED] wrote:

 A random thought that's been kicking around in my head: if someone
 were looking for a project, an open-source permissive action link (
 http://www.cs.columbia.edu/~smb/nsam-160/pal.html is a good link,
 thank you Mr. Bellovin) seems like it might be a great public
 resource: I suspect it's something that some nuclear states could use
 some education on, but even if the US is willing to share technology,
 the recipient may not really trust the source.
 
 As such, an open-source PAL technology might substantially improve
 global safety.
 
I don't think it would be fruitful.  Have a look at page 2 of
http://www.nytimes.com/2007/11/18/washington/18nuke.html -- it notes
that "The system hinges on what is essentially a switch in the firing
circuit that requires the would-be user to enter a numeric code that
starts a timer for the weapon's arming and detonation."  I don't think
that that's quite correct -- it permits arming; PALs are not in the
firing circuit, I believe -- but this section is more interesting:
"Delicate design details involve how to bury the link deep inside a
weapon to keep terrorists or enemies from disabling the safeguard."
In other words, it's easy to have a circuit that keeps the bomb from
arming; the hard part is doing so with high assurance against attacks,
and that's very design-dependent.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: NIST announces approval of SP 800-38D specifying GCM

2007-11-28 Thread Steven M. Bellovin


Begin forwarded message:

Date: Tue, 27 Nov 2007 16:22:51 -0500
From: Morris Dworkin [EMAIL PROTECTED]
To: undisclosed-recipients:;
Subject: NIST announces approval of SP 800-38D specifying GCM


FYI, yesterday NIST announced the approval of Special Publication
800-38D, which specifies Galois/Counter Mode (GCM), an AES mode of
operation for authenticated encryption with associated data.  GCM was
submitted to NIST by David McGrew and John Viega.  The announcement
appears on the NIST website, at http://csrc.nist.gov/ , and the URL for
the document is http://csrc.nist.gov/publications/PubsSPs.html#800-38D .



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: refactoring crypto handshakes (SSL in 3 easy steps)

2007-11-16 Thread Steven M. Bellovin
On Wed, 14 Nov 2007 13:45:37 -0600
[EMAIL PROTECTED] wrote:

 
 I wonder if we here could develop a handshake that was
 cryptographically secure, resistant to CPU DoS now, and would be
 possible to adjust as we get faster at doing crypto operations to
 reduce latency even further.  Basically an easy knob for balancing
 high latency and DoS resistance vs. crypto overhead and low latency.
 It should be adjustable on either end without altering the other.
 
Depending on your goals, JFK has some of those properties; see
http://www1.cs.columbia.edu/~angelos/Papers/jfk-tissec.pdf


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: refactoring crypto handshakes (SSL in 3 easy steps)

2007-11-15 Thread Steven M. Bellovin
There was a paper by Li Gong at an early CCS -- '93, I think, though it
might have been '94 -- on the number of messages different types of
authentication protocol took.  It would be a good starting point.



Re: Password hashing

2007-10-12 Thread Steven M. Bellovin
On Thu, 11 Oct 2007 22:19:18 -0700
james hughes [EMAIL PROTECTED] wrote:

 A proposal for a new password hashing based on SHA-256 or SHA-512 has
 been proposed by RedHat but to my knowledge has not had any rigorous
 analysis. The motivation for this is to replace MD-5 based password
 hashing at banks where MD-5 is on the list of do not use
 algorithms. I would prefer not to have the discussion MD-5 is good
 enough for this algorithm since it is not an argument that the
 customers requesting these changes are going to accept.
 
NetBSD uses iterated HMAC-SHA1, where the password is the key and the
salt is the initial plaintext.  (This is my design but not my
implementation.)
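
The structure described above is simple enough to sketch: the password
keys the HMAC, the salt seeds the chain, and each digest is fed back as
the next message.  This illustrates the construction only; the round
count and other details of NetBSD's actual implementation differ:

```python
import hashlib
import hmac

def iterated_hmac_sha1(password: bytes, salt: bytes,
                       rounds: int = 10000) -> bytes:
    """Iterated HMAC-SHA1 password hash: the password is the HMAC
    key, the salt is the initial plaintext, and each round hashes
    the previous round's digest."""
    digest = salt
    for _ in range(rounds):
        digest = hmac.new(password, digest, hashlib.sha1).digest()
    return digest

h = iterated_hmac_sha1(b"hunter2", b"per-user-salt")
assert len(h) == 20  # SHA-1 digest length
```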


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Seagate announces hardware FDE for laptop and desktop machines

2007-10-02 Thread Steven M. Bellovin
On Tue, 02 Oct 2007 15:50:27 +0200
Simon Josefsson [EMAIL PROTECTED] wrote:

 
 It sounds to me as if they are storing the AES key used for bulk
 encryption somewhere on the disk, and that it can be unlocked via the
 password.

I'd say "decrypted by the password" rather than "unlocked", but that's
the right way to do it, since it permits easy password changes.  It
also lets you do things like use different AES keys for different parts
of the disk (necessary with 3DES, probably not with AES).
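
A sketch of why key wrapping makes password changes cheap: only the
small encrypted key blob is rewritten, never the bulk data.  PBKDF2 and
an XOR pad stand in here for whatever KDF and key-wrap cipher a real
drive uses -- both are assumptions of this illustration:

```python
import hashlib
import secrets

def wrap_key(disk_key: bytes, password: str, salt: bytes) -> bytes:
    """XOR the bulk key with a password-derived pad.  XORing with
    the same pad inverts the operation, so this doubles as unwrap."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              100_000, dklen=len(disk_key))
    return bytes(a ^ b for a, b in zip(disk_key, kek))

unwrap_key = wrap_key  # self-inverse

salt = secrets.token_bytes(16)
disk_key = secrets.token_bytes(32)  # random bulk encryption key
blob = wrap_key(disk_key, "old password", salt)
assert unwrap_key(blob, "old password", salt) == disk_key
blob = wrap_key(disk_key, "new password", salt)  # change: re-wrap 32 bytes only
assert unwrap_key(blob, "new password", salt) == disk_key
```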

 So it may be that the bulk data encryption AES key is
 randomized by the device (using what entropy?) or possibly generated
 in the factory, rather than derived from the password.
 
There was this paper on using air turbulence-induced disk timing
variations for entropy...

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: OK, shall we savage another security solution?

2007-09-19 Thread Steven M. Bellovin
On Wed, 19 Sep 2007 09:29:53 +0100
Dave Korn [EMAIL PROTECTED] wrote:

 On 18 September 2007 23:22, Leichter, Jerry wrote:
 
  Anyone know anything about the Yoggie Pico (www.yoggie.com)?  It
  claims to do much more than the Ironkey, though the language is a
  bit less marketing-speak.  On the other hand, once I got through
  the marketing stuff to the technical discussions at Ironkey, I ended
  up with much more in the way of warm fuzzies than I do with Yoggie.
  
  -- Jerry
 
   Effectively, it's just an offload processor in fancy dress.
 
   It relies on diverting all your network traffic out to the USB and
 back just before/after the NIC, which it presumably has to do with
 some sort of filter driver, so it's subject to all the same problems
 vs. malware as any desktop pfw.
 
   Unless your box is so overloaded that the pfw is starved of cpu
 cycles, I can't see the use of it myself.
 
If done properly -- i.e., with cryptographic protection against new
firmware or policy uploads to it -- it's immune to host or user
compromise as a way to disable the filter.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



open source digital cash packages

2007-09-17 Thread Steven M. Bellovin
Are there any open source digital cash packages available?  I need one
as part of another research project.


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: using SRAM state as a source of randomness

2007-09-17 Thread Steven M. Bellovin
On Mon, 17 Sep 2007 11:20:32 -0700
Netsecurity [EMAIL PROTECTED] wrote:

 Back in the late 60's I was playing with audio, and a magazine I
 subscribed to had a circuit for creating warble tones for standing
 wave and room resonance testing.
 
 The relevance of this is that they were using a random noise
 generating chip that they acknowledged was not random enough for good
 measurements. The suggested fix was to run a number of them in
 parallel, six as I recall, mixing the signals to achieve better
 randomness. I don't recall the math, but the approach improved the
 randomness by more than an order of magnitude.
 
 I have also seen the same effect with reverse-biased zener diodes used
 as random noise generators, and that seemed -- no real hard
 measurements that I can recall -- to work quite well. Mind you, these
 were not zeners all fabricated on a single chip, but rather
 individuals soldered together, so the characteristics of each were
 more random because of the semi-randomness of the manufacturing
 process.
 
This is an old technique.  We could even go back to von Neumann's
scheme: look at two successive bits.  If they're equal, discard them.
Otherwise, map 0,1 to 0 and 1,0 to 1.
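Von Neumann's scheme above fits in a few lines.  A minimal sketch, assuming `bits` is any sequence of 0/1 samples from the raw (possibly biased but independent) source:

```python
def von_neumann_extract(bits):
    """Debias a bit stream: examine successive non-overlapping pairs,
    discard equal pairs, and map (0,1) -> 0 and (1,0) -> 1."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # (0,1) -> 0, (1,0) -> 1: first bit of pair
    return out
```

Note the scheme assumes the raw bits are independent; it removes bias but not correlation, and it discards at least half the input.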

See the section on Software whitening in
http://en.wikipedia.org/wiki/Hardware_random_number_generator (which
was correct as of when I looked at it, a few minutes before the
timestamp on this email; check the Wiki history to be sure).


--Steve Bellovin, http://www.cs.columbia.edu/~smb



NSA crypto modernization program

2007-08-28 Thread Steven M. Bellovin
http://www.fcw.com/article103563-08-27-07-Print


--Steve Bellovin, http://www.cs.columbia.edu/~smb



more reports of terrorist steganography

2007-08-20 Thread Steven M. Bellovin
http://www.esecurityplanet.com/prevention/article.php/3694711

I'd sure like technical details...


--Steve Bellovin, http://www.cs.columbia.edu/~smb



interesting paper on the economics of security

2007-08-20 Thread Steven M. Bellovin
http://www.cl.cam.ac.uk/~rja14/Papers/econ_crypto.pdf 


--Steve Bellovin, http://www.cs.columbia.edu/~smb



a new way to build quantum computers?

2007-08-18 Thread Steven M. Bellovin
http://www.tgdaily.com/content/view/33425/118/

Ann Arbor (MI) - University of Michigan scientists have discovered a
breakthrough way to utilize light in cryptography. The new technique
can crack even complex codes in a matter of seconds. Scientists believe
this technique offers much advancement over current solutions and could
serve to foil national and personal security threats if employed

I'll let those who know more physics comment in detail; from reading
the article, it appears to lead to a way to construct quantum computers.


--Steve Bellovin, http://www.cs.columbia.edu/~smb


