Re: [cryptography] ICIJ's project - comment on cryptography tools

2013-04-04 Thread Steven Bellovin

On Apr 4, 2013, at 4:51 PM, ianG i...@iang.org wrote:

 On 4/04/13 21:43 PM, Jon Callas wrote:
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
 
 
 On Apr 4, 2013, at 6:27 AM, ianG i...@iang.org wrote:
 
 In a project similar to Wikileaks, ICIJ comments on tools it used to secure 
 its team-based project work:
 
 ICIJ’s team of 86 investigative journalists from 46 countries 
 represents one of the biggest cross-border investigative partnerships in 
 journalism history. Unique digital systems supported private document and 
 information sharing, as well as collaborative research. These included a 
 message center hosted in Europe and a U.S.-based secure online search 
 system.  Team members also used a secure, private online bulletin board 
 system to share stories and tips.
 
 The project team’s attempts to use encrypted e-mail systems such 
 as PGP (“Pretty Good Privacy”) were abandoned because of complexity and 
 unreliability that slowed down information sharing. Studies have shown that 
 police and government agents – and even terrorists – also struggle to use 
 secure e-mail systems effectively.  Other complex cryptographic systems 
 popular with computer hackers were not considered for the same reasons.  
 While many team members had sophisticated computer knowledge and could use 
 such tools well, many more did not.
 
 
 http://www.icij.org/offshore/how-icijs-project-team-analyzed-offshore-files
 
 
 Thanks!
 
 This is great. It just drives home that usability is all.
 
 
 Just to underline Jon's message for y'all, they should have waited for 
 iMessage:
 
 
 
  Encryption used in Apple's iMessage chat service has stymied attempts 
 by federal drug enforcement agents to eavesdrop on suspects' conversations, 
 an internal government document reveals.
 
  An internal Drug Enforcement Administration document seen by CNET 
 discusses a February 2013 criminal investigation and warns that because of 
 the use of encryption, it is impossible to intercept iMessages between two 
 Apple devices even with a court order approved by a federal judge.
 
  The DEA's warning, marked "law enforcement sensitive," is the most 
 detailed example to date of the technological obstacles -- FBI director 
 Robert Mueller has called it the "Going Dark" problem -- that police face 
 when attempting to conduct court-authorized surveillance on non-traditional 
 forms of communication.
 
  When Apple's iMessage was announced in mid-2011, Cupertino said it 
 would use secure end-to-end encryption. It quickly became the most popular 
 encrypted chat program in history: Apple CEO Tim Cook said last fall that 300 
 billion messages have been sent so far, which are transmitted through the 
 Internet rather than as more costly SMS messages carried by wireless 
 providers.
 
 http://news.cnet.com/8301-13578_3-57577887-38/apples-imessage-encryption-trips-up-feds-surveillance/
 
 
There's a long thread on Twitter (look for Julian Sanchez, @normative) on this, 
with comments from me, Matt Blaze, Nick Weaver, and others.  Also see Julian's 
blog post at http://www.cato.org/blog/untappable-apple-or-dea-disinformation



--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Steven Bellovin
See Matt Blaze's "Protocol Failure in the Escrowed Encryption Standard", 
http://www.crypto.com/papers/eesproto.pdf

On Mar 28, 2013, at 10:16 AM, Ethan Heilman eth...@gmail.com wrote:

 Peter,
 
 Do I understand you correctly? The checksum is calculated using a key, or the 
 checksum algorithm is secret, so that they can't generate checksums for new 
 keys?  Are they using a one-way function? Do you have any documentation about 
 this?
 
 Thanks,
 Ethan
 
 
 On Wed, Mar 27, 2013 at 11:50 PM, Peter Gutmann pgut...@cs.auckland.ac.nz 
 wrote:
 Jeffrey Walton noloa...@gmail.com writes:
 
 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an authentication
 tag? Are symmetric keys pre-tested for resilience against, for example,
 chosen ciphertext and related key attacks?
 
 For Type I ciphers the checksumming goes beyond the simple DES-style error
 control, it's also to ensure that if someone captures the equipment they can't
 load their own, arbitrary keys into it.
 
 Peter.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Steven Bellovin

On Mar 28, 2013, at 4:21 PM, ianG i...@iang.org wrote:

 On 27/03/13 22:13 PM, Ben Laurie wrote:
 On 27 March 2013 17:20, Steven Bellovin s...@cs.columbia.edu wrote:
 On Mar 27, 2013, at 3:50 AM, Jeffrey Walton noloa...@gmail.com wrote:
 
 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an
 authentication tag? Are symmetric keys pre-tested for resilience
 against, for example, chosen ciphertext and related key attacks?
 
 The parity bits in DES were explicitly intended to guard against
 ordinary transmission and memory errors.
 
 
 Correct me if I'm wrong, but the parity bits in DES guard the key, which 
 doesn't need correcting?  And the block which does need correcting has no 
 space for parity bits?
 
If a block is garbled in transmission, you either accept it (look at all the
verbiage on error propagation properties of different block cipher modes)
or retransmit at a higher layer.  If a key is garbled, you lose everything.
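
To make that asymmetry concrete, here is a minimal sketch (mine, not from
the thread) of the DES-style parity check: each key byte carries one parity
bit, so a single flipped bit in a stored or transmitted key is caught before
it silently garbles all traffic:

def fix_des_parity(key: bytes) -> bytes:
    # Force each byte of an 8-byte DES key to odd parity (an odd number
    # of 1 bits), as the DES specification requires.
    out = bytearray()
    for b in key:
        ones = bin(b >> 1).count("1")              # the seven key bits
        out.append((b & 0xFE) | ((ones + 1) & 1))  # parity bit makes the total odd
    return bytes(out)

def parity_ok(key: bytes) -> bool:
    return all(bin(b).count("1") % 2 == 1 for b in key)

key = fix_des_parity(bytes(range(8)))
assert parity_ok(key)
corrupted = bytes([key[0] ^ 0x10]) + key[1:]       # one flipped bit
assert not parity_ok(corrupted)                    # caught before the key is used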

Error detection in communications is a very old idea; I can show you telegraph
examples from the 1910s involving technical mechanisms, and the realization
that this was a potential problem goes back further than that, at least as
early as the 1870s, when telegraph companies offered a transmit back facility
to let the sender ensure that the message received at the far end was the one
intended to be sent.

The mental model for DES was computer -> crypto box -> {phone,leased} line,
or sometimes {phone,leased} line -> crypto box -> {phone,leased} line.  Much
of it was aimed at asynchronous (generally) teletype links (hence CFB-8),
bisync (https://en.wikipedia.org/wiki/Bisync) using CBC, or (just introduced
around the time DES was) IBM's SNA, which relied on HDLC and was well-suited
to CFB-1.  OFB was intended for fax machines.  Async and fax links didn't
need protection as long as error propagation of received data was very limited;
bisync and HDLC include error detection and retransmission by what we'd now
think of as the end-to-end link layer.  (On the IBM gear I worked with in the
late 1960s/early 1970s, the controller took care of generating the bisync
check bytes.  I no longer remember whether it did the retransmissions or
not; it's been a *long* time, and I was worrying more about the higher layers.)
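
The limited error propagation is easy to demonstrate; here is a small sketch
using the PyCryptodome package (my illustration -- single DES in CFB-8,
purely for historical flavor).  Flipping one ciphertext bit garbles that
plaintext byte plus roughly the next eight, after which decryption self-heals:

from Crypto.Cipher import DES   # PyCryptodome

key = b"8bytekey"
iv  = bytes(8)
pt  = bytes(32)                 # all-zero plaintext makes the damage easy to see

ct  = DES.new(key, DES.MODE_CFB, iv=iv, segment_size=8).encrypt(pt)
bad = bytearray(ct)
bad[4] ^= 0x01                  # one flipped ciphertext bit
out = DES.new(key, DES.MODE_CFB, iv=iv, segment_size=8).decrypt(bytes(bad))

print([i for i in range(len(pt)) if out[i] != pt[i]])
# byte 4 plus (typically) the next 8 bytes; bytes 13 onward decrypt cleanly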

In the second mental model for bisync and SNA, the sending host would have
generated a complete frame, including error detection bytes.  These bytes
would be checked after decryption; if the ciphertext was garbled, the error
check would fail and the messages would be NAKed (bisync, at least, used
ACK and NAK) and hence resent.  If the keying was garbled, though, nothing
would flow.  

It is not entirely clear to me what keying model IBM, NIST, or the NSA had
in mind back then -- remember that the original Needham-Schroeder paper didn't
come out until late 1978, several years after DES.  One commonly-described model
of operation involved loading master keys into devices; one end would pick
a session key, encrypt it (possibly with 2DES or 3DES) with the master key, 
and send that along.  From what I've read, I think that the NSA did have KDCs 
before that, but I don't have my references handy.  Multipoint networks
were not common then (though they did exist in some sense); you couldn't
go out to a KDC in real-time.  (I'll skip describing the IBM multipoint
protocol for the 2740 terminal; I never used them in that mode.  Let it
suffice to say that given the hardware of the time, if you had a roomful
of 2740s using multipoint, you'd have a single encryptor and single key 
for the lot.)

Anyway -- for most of the intended uses, error correction of the data was
either done at a different layer or wasn't important.  Keying was a
different matter.  While you could posit that it, too, should have been
wrapped in a higher layer, it is quite plausible that NSA wanted to guard
against system designers who would omit that step.  Or maybe it just
wasn't seen as the right way to go; as noted, layering wasn't a strong
architectural principle then (though it certainly did exist in lesser
forms).

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Key Checksums (BATON, et al)

2013-03-27 Thread Steven Bellovin

On Mar 27, 2013, at 3:13 PM, Ben Laurie b...@links.org wrote:

 On 27 March 2013 17:20, Steven Bellovin s...@cs.columbia.edu wrote:
 On Mar 27, 2013, at 3:50 AM, Jeffrey Walton noloa...@gmail.com wrote:
 
 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an
 authentication tag? Are symmetric keys pre-tested for resilience
 against, for example, chosen ciphertext and related key attacks?
 
 The parity bits in DES were explicitly intended to guard against
 ordinary transmission and memory errors.  Note, though, that this
 was in 1976, when such precautions were common.  DES was intended
 to be implemented in dedicated hardware, so a communications path
 was needed, and hence error-checking was a really good idea.
 
 And in those days they hadn't quite wrapped their heads around the
 concept of layering?

That's partly though not completely true.
 
 That said, I used to work for a guy with a long history in comms. His
 take was that the designers of each layer didn't trust the designers
 of the layer below, so they added in their own error correction.
 
It's more that errors can occur at any layer -- even today, we have
link layer checksums, TCP checksums, and sometimes more.  This is the 
e2e error check, shortly before Saltzer, Reed, and Clark wrote their paper...
And yes, hardware was a *lot* less reliable then.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] msft skype IM snooping stats PGP/X509 in IM?? (Re: why did OTR succeed in IM?)

2013-03-24 Thread Steven Bellovin

On Mar 23, 2013, at 10:04 AM, Adam Back a...@cypherspace.org wrote:

 btw is anyone noticing that apparently skype is both able to eavesdrop on
 skype calls, now that microsoft coded themselves in a central backdoor, this
 was initially rumoured, then confirmed somewhat by a Russian police
 statement [1], then confirmed by microsoft itself in its law enforcement
 requests report.  Now publicly disclosed law enforcement requests reports
 are a good thing, started by google, but clearly those requests are getting
 info or they wouldn't be submitting them by the 10s of thousands.
 
 http://www.microsoft.com/about/corporatecitizenship/en-us/reporting/transparency/
 
 75,000 skype related law enforcement requests, 137,000 accounts affected (each
 call involving two or more parties).


Two words about this purported confirmation: pen register.  There's a
lot of very useful information that doesn't include content, and under US
law a pen register warrant is a *lot* easier to get than a wiretap warrant:
the latter requires a lot of internal paperwork, is restricted to a certain
set of crimes (though that list has been increasing over the years), and
requires law enforcement to show that other means of investigation won't
work.  A pen register order, by contrast, simply requires certification
by the applicant that the information likely to be obtained is relevant
to an ongoing criminal investigation.

For more information on modern surveillance, see
http://www.forbes.com/sites/andygreenberg/2012/07/02/as-reports-of-wiretaps-drop-the-governments-real-surveillance-goes-unaccounted/
Skype leaks: 
https://krebsonsecurity.com/2013/03/privacy-101-skype-leaks-your-location/

Besides that, Skype Out calls are tappable even without any back doors, and
always have been.

And that Russian assertion -- maybe it's credible, maybe it's not.  Tass is 
certainly more reliable now than it was 25 years ago, but that's a very low
bar.  I can certainly see the Russian government wanting their citizens to
believe they can listen to Skype, even if they can't.  I'll chalk this one
up as unproven.  

Ever since Microsoft bought the company, these rumors have been floating around.
I have yet to see any real evidence.  Here are the two best articles I've seen:
https://www.nytimes.com/2013/02/25/technology/microsoft-inherits-sticky-data-collection-issues-from-skype.html
http://paranoia.dubfire.net/2012/07/the-known-unknows-of-skype-interception.html
Both point out reasons for concern, but there's still no *evidence*.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Tigerspike claims world first with Karacell for mobile security

2012-12-24 Thread Steven Bellovin

On Dec 24, 2012, at 8:19 AM, Jeffrey Walton noloa...@gmail.com wrote:

 On Mon, Dec 24, 2012 at 8:03 AM, Ben Laurie b...@links.org wrote:
 On Mon, Dec 24, 2012 at 12:22 PM, Jeffrey Walton noloa...@gmail.com wrote:
 Has anyone had the privilege of looking at the "stronger than military
 grade" [encryption] scheme?
 
 http://innovblogdotcom.files.wordpress.com/2012/06/the-karacell-encryption-system-tech-paper1.pdf
 Thanks Ben. Based on the opening paragraph, I think I'm going to read
 some of it.
 
 The Karacell symmetric encryption system was specifically designed to
 counter the anticipated threat of quantum computing,

My understanding was that there is a general quantum algorithm (Grover's)
for brute force in 2^(keylen/2) steps -- a square-root speedup over
exhaustive search.  The real threat is to public key algorithms.  The
white paper just says "well known" and goes on from there.

 whilst at the
 same time address other issues with existing cryptosystems such as
 slow computational performance, nonoptimal power consumption,

These are both plausible.

 nonuniform cryptographic strength over various bits of a file,

??  I've never heard that allegation against AES.  I am confident that
had it been known way back when, Rijndael never would have been selected.

 and
 ciphertext that depends upon the plaintext for pseudo-randomness.

??  Is this supposed to be a garbled reference to things like CBC and
CFB?

 It
 is based upon a non-polynomial-time computation problem (also known as
 an NP problem whose optimal algorithm has not been improved since
 1972). This final point is critical, as new cryptosystems are always
 treated with great scepticism; however, by demonstrating a linkage to
 a known mathematical problem, “new” cryptosystems are sometimes more
 accurately considered as derivatives of previously well-studied math
 problems.
 
Remember trapdoor knapsacks?  The issue isn't the *worst case* complexity
for solution, it's what a cryptanalyst would typically encounter.

These claims do not instill a great feeling of confidence in me.  Maybe
this is a good algorithm, but I'm not holding my breath.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Client certificate crypto with a twist

2012-10-10 Thread Steven Bellovin

On Oct 10, 2012, at 9:09 AM, Ben Laurie b...@links.org wrote:

 On Wed, Oct 10, 2012 at 1:44 PM, Guido Witmond gu...@wtmnd.nl wrote:
 Hello Everyone,
 
 I'm proposing to revitalise an old idea. With a twist.
 
 The TL;DR:
 
 1. Ditch password based authentication over the net;
 
 2. Use SSL client certificates instead;
 
 Here comes the twist:
 
 3. Don't use the few hundred global certificate authorities to sign
   the client certificates. These CA's require extensive identity
   validations before signing a certificate. These certificates are
   only useful when the real identity is needed.
   Currently, passwords provide better privacy but lousy security;
 
 4. Instead: install a CA-signer at every website that signs
   certificates that are only valid for that site. Validation
   requirement before signing: CN must be unique.
 
 http://tools.ietf.org/html/draft-balfanz-tls-obc-01

Or a very old, long-expired draft with the same theme:
https://www.cs.columbia.edu/~smb/papers/draft-ietf-ipsra-getcert-00.txt




--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Data breach at IEEE.org: 100k plaintext passwords.

2012-09-25 Thread Steven Bellovin

On Sep 25, 2012, at 1:47 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:

 
 -kevin
 Sent from my Droid; please excuse typos.
 On Sep 25, 2012 1:39 PM, Jeffrey Walton noloa...@gmail.com wrote:
 
  In case anyone on the list might be affected... [Please note: I am not
  the I' in the text below]
 
  http://ieeelog.com
 
 For shame. This should make for a nice article in a future _IEEE Security &
 Privacy_.

I'm on the editorial board; I passed along the message along with this
suggestion...

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Intel RNG

2012-06-18 Thread Steven Bellovin
On Jun 18, 2012, at 11:21:52 PM, ianG wrote:
 
 
 Then there are RNGs.  They start from a theoretical absurdity that we cannot 
 predict their output, which leads to an apparent impossibility of 
 black-boxing.
 
 NIST recently switched gears and decided to push the case for deterministic 
 PRNGs.  According to original thinking, a perfect RNG was perfectly 
 untestable.  Whereas a perfectly deterministic RNG was also perfectly 
 predictable.  This was a battle of two not-goods.
 
 Hence the second epiphany:  NIST were apparently reasoning that the 
 testability of the deterministic PRNG was the lesser of the two evils. They 
 wanted to black-box the PRNG, because black-boxing was the critical 
 determinant of success.
 
 After a lot of thinking about the way the real world works, I think they have 
 it right.  Use a deterministic PRNG, and leave the problem of securing good 
 seed material to the user.  The latter is untestable anyway, so the right 
 approach is to shrink the problem and punt it up-stack.
 

There's evidence, dating back to the Clipper chip days, that NSA feels the same 
way.  Given the difficulty of proving there are no weird environmental impacts 
on hardware RNGs, they're quite correct.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Master Password

2012-06-07 Thread Steven Bellovin

On May 31, 2012, at 3:03 PM, Marsh Ray wrote:

 On 05/31/2012 11:28 AM, Nico Williams wrote:
 
 Yes, but note that one could address that with some assumptions, and
 with some techniques that one would reject when making a better hash
 -- the point is to be slow,
 
 More precisely, the point is to take a tunable amount of time with strong 
 assurance that an attacker will be unable to perform the computation with 
 significantly less computational resources.
 
 The deliberate consumption of computational resources is a price that the 
 defender has to pay in order to impose costs on the attacker. This ought to 
 be an advantageous strategy for the defender as long as the attacker is 
 expected to need to invoke the function many times more.
 
 But the defender's and attacker's cost structure is usually very different. 
 The defender (say a website with a farm of PHP servers) doesn't get to choose 
 when to begin the computation (legitimate users can log in at any time) and 
 he pays a cost for noticeable latency and server resources.
 
 The attacker costs are proportional to the number of guesses he needs to make 
 to reverse the password. Hopefully this is dominated by wrong guesses. But 
 the attacker is free to parallelize the computation across whatever 
 specialized hardware he can assemble in the time that the credentials are 
 valid (sometimes years). Some attackers could be using stolen resources (e.g. 
 botnets for which they do not pay the power bill).


There's another, completely different issue: does the attacker want a 
particular password, or will any password from a large set suffice?  

Given the availability of cheap cloud computing, botnets, GPUs, and botnets 
with GPUs, Aa * Ah * Ap can be very, very high, i.e., the attacker has a strong 
advantage when attacking a particular password.  Some say that it's so high 
that increasing Ad is essentially meaningless.  On the other hand, if there are 
many passwords in the set being attacked, a large Ad translates into a 
reduction in the fraction that can be attacked in any given time frame.
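
The tunable cost Marsh describes is just the iteration count in a standard
password-hashing KDF.  A minimal sketch with Python's built-in PBKDF2
(iteration counts are illustrative, not recommendations):

import hashlib, os, time

def hash_password(password: bytes, salt: bytes, iterations: int) -> bytes:
    # Cost scales linearly with the iteration count; the defender tunes it
    # so one login takes (say) ~100 ms on his hardware, and the attacker
    # pays the same price for every single guess.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

salt = os.urandom(16)
for iters in (1000, 10000, 100000):
    t0 = time.perf_counter()
    hash_password(b"correct horse", salt, iters)
    print(iters, round(time.perf_counter() - t0, 4), "seconds")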

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Master Password

2012-05-30 Thread Steven Bellovin

On May 29, 2012, at 7:01:22 PM, Maarten Billemont wrote:

 Dear readers,
 
 I've written an iOS / Mac application whose goal it is to produce passwords 
 for any purpose.  I was really hoping for the opportunity to receive some 
 critical feedback or review of the algorithm used[1].
 
 --
 ABOUT
 
 With an increasing trend of web applications requiring users to register 
 accounts, we find ourselves with countless accounts.  Ideally, each should 
 have a different password, so that authenticating yourself for one account 
 doesn't reveal your credentials of other accounts.  That becomes really hard 
 when you've got tens or hundreds of passwords to remember.
 
 Solutions exist, mostly in the form of password vaults that list your 
 passwords and get stored in an encrypted form.  Other solutions send your 
 passwords off to be stored on some company's cloud service.
 
 Master Password is different in that it generates passwords based purely off 
 of a user's master password and the name of the site.  That means you need no 
 storage and have a fully offline algorithm that needs nothing more than what 
 you can remember easily.
 --
 
 I'm rather a novice in the field of security (certainly in comparison to some 
 of you), and I was hoping that some of you might find the time to have a look 
 at the algorithm and see if there are any obvious flaws or risks to the 
 security and integrity of the solution.
 
 As a side-note, the iOS application, Master Password, is fully open-source[2] 
 under the GPLv3.  If any of you speak fluent Objective-C, it would be awesome 
 if they could have a peek at the source code as well.


From a very quick glance, it looks to be about the same as

@inproceedings{web-pw-gen,
  Author = {J. Alex Halderman and Brent Waters and Edward W. Felten},
  Booktitle = {Proc. 14th Intl. World Wide Web Conference},
  Month = {May},
  Title = {A Convenient Method for Securely Managing Passwords},
  Url = {http://userweb.cs.utexas.edu/~bwaters/publications/papers/www2005.pdf},
  Year = 2005,
}

As someone else has noted, a crucial issue is that every site receives a
function of your master password, the site name, and a counter that
defaults to zero.  If they launch a password-guessing attack -- and I
know you've made it expensive, but you can't go too far in that
direction without making user password retrieval too time-consuming, and
the attackers have GPUs, botnets, and things like EC2 to parallelize
their work -- they can retrieve the master password and hence all of
your others.  You can strengthen your scheme significantly by making the
counter 8 bytes and starting it with some random value.
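
A sketch of the strengthened derivation (my reading of the suggestion, with
hypothetical parameters -- not Master Password's actual algorithm):

import hashlib, os

def site_password(master: bytes, site: bytes, counter: bytes) -> bytes:
    # scrypt makes each guess expensive; the counter doubles as a salt.
    return hashlib.scrypt(master, salt=site + counter,
                          n=2**14, r=8, p=1, dklen=32)

counter = os.urandom(8)   # random starting value, per the suggestion above
pw = site_password(b"my master password", b"example.com", counter)

Note the trade-off: a random counter can't be re-derived from memory alone,
so it must be stored somewhere, giving up part of the storage-free property.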


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] can the German government read PGP and ssh traffic?

2012-05-28 Thread Steven Bellovin

On May 26, 2012, at 8:15:34 AM, Eugen Leitl wrote:

 On Fri, May 25, 2012 at 11:19:33AM -0700, Jon Callas wrote:
 
 My money would be on a combination of traffic analysis and targeted
 malware. We know that the Germans have been pioneering using targeted malware
 against Skype. Once you've done that, you can pick apart anything else. Just
 a simple matter of coding.
 
 Unrelated, IIRC Microsoft changed the architecture of supernodes to allow
 for lawful interception with Skype. It would be interesting to see inasmuch
 an open source version of Skype would want to evade that infrastructure,
 while asserting interoperability with legacy users.

I've seen news stories about Microsoft deploying its own supernodes, rather
than relying on the kindness of strangers.  I haven't seen any stories
about making lawful intercept possible -- do you have a source?


--Steve Bellovin, https://www.cs.columbia.edu/~smb







[cryptography] can the German government read PGP and ssh traffic?

2012-05-25 Thread Steven Bellovin
Here's a Google Translate link to the article (I can't read German).  My money is 
on a protocol or implementation flaw, or possibly just hacks to the end system.

http://translate.google.com/translate?sl=de&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&u=http://www.golem.de/news/bundesregierung-deutsche-geheimdienste-koennen-pgp-entschluesseln-1205-92031.html

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] NIST and other organisations that set up standards in information security cryptography. (was: Doubts over necessity of SHA-3 cryptography standard)

2012-04-23 Thread Steven Bellovin

On Apr 23, 2012, at 12:51:14 PM, David Adamson wrote:

 On 4/23/12, Samuel Neves sne...@dei.uc.pt wrote:
 
 On big hardware, the fastest SHA-3 candidates (BLAKE, Skein) are very
 much closer to MD5 in performance (~5.5 cpb) than SHA-2. Plus, I don't
 see any platform where CubeHash16/32 wins over either of them in speed.
 
 The place where SHA-2 shines is the very low end. Performance there,
 however, is usually measured in gates, not cpb.
 
 
 The latest performance of Skein and BLAKE that you are mentioning is
 due to the continuous efforts of designers and independent programmers to
 improve their implementation. As I can see, measurements of SHA-2 are
 mostly from an OpenSSL implementation that is not as much optimized as
 the implementations of the 5 SHA-3 finalists. But once SHA-2 starts
 to be as aggressively optimized as the SHA-3 finalists, we will
 see reports like the one in [1]: Furthermore, even the fastest
 finalists will probably  offer only a small performance advantage over
 the current SHA-256 and SHA-512 implementations.
 
 Unfortunately, also I do not see any more improvements of the
 implementations of other SHA-3 candidates that did not enter 2-nd and
 final round (especially CubeHash, Shabal, BMW, Edon-R, Echo and SIMD).
 
And the MD6 team withdrew their submission because they couldn't make
it fast enough and still have enough security.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Looking for an unusual AKE protocol

2012-04-10 Thread Steven Bellovin
The station-to-station protocol -- a digitally-signed Diffie-Hellman exchange 
-- should do what you want.
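
A minimal sketch of the signed-DH core of STS (my illustration, using modern
primitives from the Python cryptography package for brevity; real STS also
encrypts the signatures under the fresh session key).  Each side's long-term
key is used only for signing, which matches the constraint in the question:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def raw(pub):                        # wire form of an ephemeral DH public key
    return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

sign_a = ed25519.Ed25519PrivateKey.generate()   # long-term signing keys,
sign_b = ed25519.Ed25519PrivateKey.generate()   # distributed out of band

eph_a = x25519.X25519PrivateKey.generate()      # fresh per-session DH keys
eph_b = x25519.X25519PrivateKey.generate()

transcript = raw(eph_a.public_key()) + raw(eph_b.public_key())

sig_a = sign_a.sign(transcript)      # each side signs *both* exponentials,
sig_b = sign_b.sign(transcript)      # binding the exchange against a MITM

sign_b.public_key().verify(sig_b, transcript)   # A checks B's signature;
sign_a.public_key().verify(sig_a, transcript)   # B checks A's (raises on failure)

shared = eph_a.exchange(eph_b.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"sts-demo").derive(shared)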

On Apr 10, 2012, at 7:59 PM, King Of Fun wrote:

 I am looking for a protocol that will provide mutual authentication and key 
 exchange with a minor twist: the client and server have RSA key pairs, but 
 they cannot use them in the same way. In particular, the server has full use 
 of its keys, but the only use the clients can make of their private keys is 
 for signing. I would rather not roll my own protocol, given the amount of 
 rope available for self-hanging. And seeing as how there are some pretty 
 obscure protocols out there, chances are someone has already published one 
 that would cover this case.
 
 All clients have the public key of the server, and the server has all of the 
 public keys of the clients.
 The client can only use its private key for signing. In particular, the 
 client cannot decrypt data that has been encrypted with that client's public 
 key.
 
 Is there a protocol out there already that provides AKE, or are the clients 
 too underpowered, or...?
 
 Thanks and regards,
 Brian
 
 


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] MS PPTP MPPE only as secure as *single* DES

2012-04-08 Thread Steven Bellovin

On Apr 8, 2012, at 7:30:43 AM, ianG wrote:

 On 6/04/12 10:57 AM, Steven Bellovin wrote:
 
 On Apr 5, 2012, at 5:51:10 PM, James A. Donald wrote:
 
 So I think that pretty much everyone has already heard that MS PPTP is 
 insecure.  Every time I set up a vpn, I am re-reminded, just in case.
 
 
  "Don't use cryptographic overkill.  Even bad crypto is usually the strong 
  part of the system."  -- Adi Shamir, 1995.  
 (http://www.ieee-security.org/Cipher/ConfReports/conf-rep-Crypto95.html)
 
 
 All hail the great A5/1 and lesser spawn.
 
 Seriously though, we suffer tremendously in this industry from overkill.  
 Studying the biases in the field would make a great cross-over PhD in 
 psych-CS-crypto-business.  Is there anyone amongst us who hasn't chortled 
 with glibbity and glee when some despised crypto system falls to a pernickity 
 academic attack?


Sure -- and I (and many others on this list) have worked hard for good, secure 
crypto standards. But things like PPTP, even when flawed, have survived for a 
reason.  Often, the reason is that they're far more *usable* than the stronger 
alternatives.  Let's take openvpn, which some others have spoken favorably of 
in this thread.  Consider 
http://openvpn.net/index.php/open-source/documentation/howto.html (and 
especially 
http://openvpn.net/index.php/open-source/documentation/howto.html#examples), 
the official starting points.  Then contrast that with what a typical 
sysadmin has to know to set up PPTP.  Yes, I understand why openvpn has a 
harder job, though I do think that a fair amount of the complexity could be 
hidden by (a) a bit more management software, and (b) the developers making 
certain decisions (and hence taking them away from the sysadmin).  Both of 
those take a great deal of taste to do correctly, of course.

IPsec is often worse.  Take a look at, say, 
http://www.freebsd.org/doc/en_US.ISO8859-1/articles/checkpoint/racoon.html, or 
the man page at http://www.linuxmanpages.com/man5/racoon.conf.5.php .  There's 
a fearsome amount you have to wade through just to decide that you don't need 
to touch, say, the nonce_size option.  More substantively, how many hours 
will it take the typical sysadmin to understand the description of the 
generate_policy option?

So -- you're the typical sysadmin.  You can spend many hours trying to 
understand all that stuff, or you can click through a very few screens and get 
crypto that will certainly deter the casual adversary at the local hotspot, 
will block even the NSA's vacuum cleaners -- and if you're targeted, might not 
be the weak point after all, since exploiting bad crypto depends at a minimum 
on actually picking up the traffic of interest, while a host exploit is 
always there.

Yes, the algorithms and protocols can be very important, especially if you have 
serious enemies. They're also more fun for many folks (myself included) than 
the really hard engineering and development work to make the thing usable.  
They're orders of magnitude more fun than the arguments in standards bodies to 
agree on what is really necessary as an option, as opposed to something that 
most people don't want but some vendor insists has to be there for 2.71828% of 
their customer base.



--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] MS PPTP MPPE only as secure as *single* DES

2012-04-08 Thread Steven Bellovin

On Apr 8, 2012, at 7:49:04 PM, James A. Donald wrote:

 On 2012-04-09 9:15 AM, Steven Bellovin wrote:
  Yes, the algorithms and protocols can be very important,
  especially if you have serious enemies. They're also more
  fun for many folks (myself included) than the really hard
  engineering and development work to make the thing usable.
  They're orders of magnitude more fun than the arguments in
  standards bodies to agree on what is really necessary as an
  option, as opposed to something that most people don't want
  but some vendor insists has to be there for 2.71828% of
  their customer base.
 
 Seems to me that most crypto failure is usability failure.
 The only massive protocol and algorithm failure is wifi.

Yup.  Even there, the problem that got most of the attention
-- the fact that RC4 (as used in WEP) can be cryptanalyzed --
wasn't knowable at the time.  The avoidable errors -- the
misuse of a stream cipher, and the lack of a standardized
key management layer -- were not enough to prompt a change
in the standard.
 
 Also, anything that comes out of a committee, particularly a
 large committee containing conflicting agendas, evil people,
 stupid people, and crazy people, is apt to be a massive
 usability fail, and the only reason why it is usually not
 also a massive algorithm and protocol fail is that the
 stupid, the crazy, and the evil have difficulty following the
 protocol and algorithm discussion.

I'd put most of it down to conflicting agendas -- even people
you regard as evil don't see themselves that way; they
simply have a different definition -- agenda -- for good.
Craziness doesn't generally survive, nor stupidity.  Granted,
some folks with different agendas may (or may not) understand
certain details, but if they don't it's because that isn't
important to their employers' agendas.

One more thing: algorithm and protocol failures are often a
matter of fact, not opinion, and most people are reluctant
to argue for something that everyone else can see is factually
incorrect.  I recall one incident when I was Security Area Director
in the IETF when I blocked some SIP documents because of a
cut-and-paste attack.  I had a very hostile meeting with a fair
number of the proponents of those documents -- until I pulled
out my laptop and showed exactly how the attack worked.  End
of discussion, period.  One can disagree on the likelihood or
impact of a vulnerability, but generally not its existence,
until the audience is politicians.  (The disagreements, circa
the late 1970s, on the susceptibility of DES to an economically
feasible brute force attack come to mind.)  The trouble comes
when it gets to matters of taste and judgment, and what adding
17.3 new features to the protocol will do to the software's
correctness and comprehensibility.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] RSA Moduli (NetLock Minositett Kozjegyzoi Certificate)

2012-03-25 Thread Steven Bellovin

On Mar 25, 2012, at 1:16 PM, Florian Weimer wrote:

 * Thierry Moreau:
 
 The unusual public RSA exponent may well be an indication that the
 signature key pair was generated by a software implementation not
 encompassing the commonly-agreed (among number-theoreticians having
 surveyed the field) desirable strategies.
 
 I don't think this conclusion is warranted.  Most textbooks covering
 RSA do not address key generation in much detail.  Even the Menezes et
 al. (1996) is a bit sketchy, but it mentions e=3 and e=2**16+1 as
 used in practice.  Knuth (1981) fixes e=3.  On the other side, two
 popular cryptography textbooks, Schneier (1996) and Stinson (2002),
 recommend to choose e randomly.  None of these sources gives precise
 guidance on how to generate the key material, although Menezes et al.
 gives several examples of what you should not do.

2^16+1 (or numbers of that pattern) give good performance for encryption
or for signature verification.  NIST's standards require that public
exponents be odd, positive [sic] integers between 65537 and 2^256-1
(http://csrc.nist.gov/publications/nistpubs/800-78-3/sp800-78-3.pdf).
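
The performance point is easy to quantify: left-to-right square-and-multiply
costs one squaring per bit of the exponent plus one multiplication per set
bit, so e = 2^16+1 needs just 16 squarings and 1 multiplication.  A
back-of-the-envelope sketch:

def modexp_cost(e):
    # (squarings, multiplications) for naive square-and-multiply
    return e.bit_length() - 1, bin(e).count("1") - 1

print(modexp_cost(3))                 # (1, 1)
print(modexp_cost(65537))             # (16, 1)
print(modexp_cost((1 << 2048) - 1))   # (2047, 2047): an extreme
                                      # private-exponent-sized case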


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

2012-03-25 Thread Steven Bellovin

On Mar 25, 2012, at 10:43 PM, Jon Callas wrote:

 
 On Mar 25, 2012, at 1:22 PM, coderman wrote:
 
 now they pay to side step crypto entirely:
 
 iOS up to $250,000
 Chrome or IE up to $200,000
 Firefox or Safari up to $150,000
 Windows up to $120,000
 MS Word up to $100,000
 Flash or Java up to $100,000
 Android up to $60,000
 OSX up to $50,000
 
 via 
 http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/
 
 plenty of weak links between you and privacy...
 
 This is precisely the point I've made: the budget way to break crypto is to 
 buy a zero-day. And if you're going to build a huge computer center, you'd be 
 better off building fuzzers than key crackers.


Bingo.  To quote myself, you don't go through strong security, you go around it.

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] The NSA and secure VoIP

2012-03-02 Thread Steven Bellovin

On Mar 2, 2012, at 2:59 AM, Marsh Ray wrote:

 On 03/01/2012 09:31 PM, Jeffrey Walton wrote:
 Interesting. I seem to recall that cascading ciphers is frowned upon
 on sci.crypt. I wonder if this is mis-information
 
 Not mis-information. You could easily end up enabling a meet-in-the-middle 
 attack just like double DES.
 
 https://en.wikipedia.org/wiki/Meet-in-the-middle_attack

Meet-in-the-middle attacks don't weaken things; they merely don't give you as 
much advantage as one might suppose.  Note, though, that you need 2^n storage.  
This is Suite B/Top Secret, which means 256-bit AES, which means that you would 
need 2^260 bytes of storage.  That's too much, even for NSA, so those attacks 
aren't even relevant.
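
For anyone who hasn't seen the attack: it trades memory for time -- tabulate
the known plaintext encrypted under every possible first key, then decrypt
the ciphertext under every possible second key and look for a collision,
costing about 2*2^n work instead of 2^(2n).  A toy sketch with 16-bit keys
and a deliberately insecure toy cipher (illustrative only):

from collections import defaultdict

MUL = 0x9E37
INV = pow(MUL, -1, 1 << 16)          # modular inverse of MUL mod 2^16

def enc(k, x):                       # a toy invertible 16-bit "cipher"
    for _ in range(3):
        x = (x * MUL + k) & 0xFFFF
        x = ((x << 5) | (x >> 11)) & 0xFFFF
    return x

def dec(k, x):
    for _ in range(3):
        x = ((x >> 5) | (x << 11)) & 0xFFFF
        x = ((x - k) * INV) & 0xFFFF
    return x

k1, k2, pt = 0x1234, 0xBEEF, 0x0042
ct = enc(k2, enc(k1, pt))            # "double encryption"

table = defaultdict(list)            # 2^16 storage: the meet-in-the-middle table
for ka in range(1 << 16):
    table[enc(ka, pt)].append(ka)

hits = [(ka, kb) for kb in range(1 << 16)
        for ka in table.get(dec(kb, ct), ())]
print((k1, k2) in hits)              # True; a second pt/ct pair prunes the
                                     # ~2^16 chance collisions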

Where NSA has a strong edge over most civilian crypto folks is that they 
understand that they're dealing with a *system* -- not just a cipher, but key 
exchange, key storage, timing attacks and other side channels, buggy 
implementations, very fallible (or corrupt[ed]) people, etc.  Maybe SRTP is 
weak in a way they haven't found.  Maybe IPsec is.  They've looked at both and 
don't think so, but they can't rule it out.  But if you combine both *and* you 
do it in a way you think actually buys you something, you've protected yourself 
against a lot of those failures.  Both would have to fail, and in a compatible 
way, for there to be a weakness.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Constitutional Showdown Voided as Feds Decrypt Laptop

2012-03-01 Thread Steven Bellovin

On Mar 1, 2012, at 8:18:32 PM, Jeffrey I. Schiller wrote:

 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
 
 On 03/01/2012 06:09 PM, Nico Williams wrote:
 I let mailman generate passwords.  And I never use them, much less
 re-use them.  Well, I do use them when I need to change e-mail
 addresses, which happens very rarely, and then I start by asking
 mailman to send my my passwords because I don't remember them -- I've
 done this like once in the past decade.
 
 Perhaps mailman should be changed to require you to use its generated
 passwords, or better yet, to only generate a password when you ask it
 to send you your password, and then invalidate it after a few days. So
 it isn't really a password but a thunk of limited value.
 
 In this fashion we can be more assured that people aren't re-using
 passwords with mailman.
 
 Because... you and I may know better... the manager at the bank where
 our money is stored (or the doctor's office where our medical records
 are located) may not know better...   ;-)

(typo corrected above.)

Not a bad idea, though I'm not certain it's worth it.  Fortunately,
since the default is for it to auto-generate its passwords, they're
not likely to be used elsewhere.  I'd wager long odds that most people
never even use that password.  (And the bank or the doctor's office?
They're not using mailman, because it would take a sysadmin to install
it for them...)

In an ideal world, perhaps this isn't necessary.  Mailman would somehow
learn everyone's email public key, to send passwords encrypted.  
Alternatively, it could somehow learn your web public key -- an in
particular, the one you use for this mailing list -- and use it to
verify the client-side cert you use to log in to this particular
mailing list.  (It can't be just any cert you have, since of course you
have many of them to avoid being tracked.)

Better yet, it could do a remote read on your /dev/brain and *know*
when you wanted to log in, weren't under duress, etc.  I regard that
as about as likely as the public key alternatives, at least if we're
sticking to the real world.  

Turning back to your specific suggestion: that sets the security of
your mailman account to the security of your email account.  Of course,
that's what the current scheme does.  The secret is valid for longer,
but I'm not convinced that that matters all that much.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







[cryptography] The NSA and secure VoIP

2012-03-01 Thread Steven Bellovin
http://www.scmagazine.com.au/News/292189,nsa-builds-android-phone-for-top-secret-calls.aspx
makes for interesting reading.  I was particularly intrigued by this:

Voice calls are encrypted twice in accordance with NSA policy, 
using IPSEC and SRTP, meaning a failure requires “two independent 
bad things to happen,” Salter said.

Margaret Salter is the head of the Information Assurance Directorate
of the NSA.

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] US Appeals Court upholds right not to decrypt a drive

2012-02-24 Thread Steven Bellovin

On Feb 24, 2012, at 2:30:57 PM, James A. Donald wrote:

 Bottom line is that the suspect was OK because kept his mouth zippered, 
 neither admitting nor denying any knowledge of the encrypted partition.
 
 Had he admitted control of the partition, *then* they would have been able to 
 compel production of the key.
 
 The court did not concede any right to refuse to decrypt a drive if you admit 
 possession of the contents.
 
 So:  Don't talk to police about the contents of your drive, or indeed 
 anything of which they might potentially disapprove.

No, I don't think that that's quite what the ruling said.  It's a long, complex 
opinion; what you said is close to one aspect of it, but not (in my non-lawyer 
opinion) precisely what the court said.

The first point, not addressed in your note but quite important to the ruling, 
is that the key has to be something you know, not something you have.  If the 
keying material is on a smart card, you have to turn that over and you're not 
protected.  If a PIN plus smart card is needed, you still have to turn over the 
smart card but not disclose the PIN.

Second, and going to the heart of your point, what's essential is whether or 
not they already know in reasonable detail what's on the encrypted drive; 
depending on the circumstances, they may already have that knowledge regardless 
of what you've said.  The issue of admitting possession is not what this case 
focused on; in fact, the prosecution tried to finesse that point by granting 
limited immunity on that point.  Quoting from the opinion:

'The U.S. Attorney requested that the court grant Doe immunity limited 
to “the use [of Doe’s] act of production of the unencrypted contents” of the 
hard drives. That is, Doe’s immunity would not extend to the Government’s 
derivative use of contents of the drives as evidence against him in a criminal 
prosecution. The court accepted the U.S. Attorney’s position regarding the 
scope of the immunity to give Doe and granted the requested order. The order 
“convey[ed] immunity for the act of production of the unencrypted drives, but 
[did] not convey immunity regarding the United States’ [derivative] use” of the 
decrypted contents of the drives.'

In other words, the fact of control of the encrypted data -- aka knowledge of 
the key -- was not at issue; the prosecution had agreed not to use that.  What 
was important was the files on the drive.  This is what distinguishes this case 
from Boucher (a case discussed in the opinion).  

The other current case is Fricosu, where a trial judge has ordered her to 
decrypt her laptop.  The Court of Appeals for that circuit -- the 10th; the 
opinion I cited is from the 11th, and hence not binding on this court -- 
declined to hear her appeal, not on the merits but because as a matter of 
procedure they won't intervene at this point in a trial.  If she's convicted, 
she can appeal on the grounds that her Fifth Amendment rights were violated, 
but not until then.  It's worth noting that the trial judge made his ruling on 
the same basis as the 11th Circuit Court of Appeals: did the government have 
enough prior knowledge of the contents that her rights were not infringed?  An 
appellate court may find that he didn't rule correctly on that point, or it may 
decline to adopt the 11th Circuit's reasoning -- but the fundamental legal 
reasoning is the same; what's different is the facts of the case.  (Btw, 
Fricosu did not talk to the police; however, she made injudicious statements to 
her husband in a monitored jailhouse call...)


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-18 Thread Steven Bellovin
 Mozilla has issued a statement about MITM certs: 
https://blog.mozilla.com/security/2012/02/17/message-to-certificate-authorities-about-subordinate-cas/

(Ack: Paul Hoffman posted this link to g+)


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Steven Bellovin

On Feb 14, 2012, at 10:02 PM, Jon Callas wrote:

 
 On 14 Feb, 2012, at 5:58 PM, Steven Bellovin wrote:
 
 The practical import is unclear, since there's (as far as is known) no
 way to predict or control who has a bad key.
 
 To me, the interesting question is how to distribute the results.  That
 is, how can you safely tell people you have a bad key, without letting
 bad guys probe your oracle.  I suspect that the right way to do it is to
 require someone to sign a hash of a random challenge, thereby proving
 ownership of the private key, before you'll tell them if the
 corresponding public key is in your database.
 
 Yeah, but if you're a bad guy, you can download the EFF's SSL Observatory and 
 just construct your own oracle. It's a lot like rainbow tables in that once 
 you learn the utility of the trick, you just replicate the results. If you 
 implement something like the Certificate Transparency, you have an 
 authenticated database of authoritative data to replicate the oracle with.
 
 Waving my hand and making software magically appear, I'd combine Certificate 
 Transparency and such an oracle, and compute the status of the 
 key as part of the certificate logs and proofs.


Note that they very carefully didn't say how they did it.  I have my own ideas 
-- but they're just that, ideas; I haven't analyzed them carefully, let alone 
coded them.

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Steven Bellovin

On Feb 14, 2012, at 1:16:23 PM, Jon Callas wrote:

 
 On Feb 14, 2012, at 7:42 AM, ianG wrote:
 
 On 14/02/12 21:40 PM, Ralph Holz wrote:
 Ian,
 
 Actually, we thought about asking Mozilla directly and in public: how
 many such CAs are known to them?
 
 It appears their thoughts were none.
 
 Of course there have been many claims in the past.   But the Mozilla CA desk 
 is frequently surrounded by buzzing small black helicopters so it all 
 becomes noise.
 
 I've asked about this, too, and the *documented* evidence of this happening 
 is exactly that -- zero.
 
 I believe it happens. People I trust have told me, whispered in my ear, and 
 assured me that someone they know has told them about it, but there's 
 documented evidence of it zero times.
 
 I'd accept a screen shot of a cert display or other things as evidence, 
 myself, despite those being quite forgeable, at this point.
 
 Their thoughts of it being none are reasonably agnostic on it.
 
 Those who have evidence need to start sharing.
 

A related question...

Sub-CAs for a single company are obviously not a problem.  Thus, if a major CA 
were to issue WhizzBangWidgets a CA cert capable of issuing certificates for 
anything in *.WhizzBangWidgets.com, it would be seen as entirely proper.  The 
issue is whether or not that sub-CA can issue certificates for, say, 
google.com.  The restriction is enforced by the Name Constraints field in the 
CA's cert.  However, this is seldom-enough seen that I have no idea if it's 
actually usable.  So -- do major cert-accepting programs examine and honor this 
field, and do it correctly?  I know that OpenSSL has some code to support it; 
does it work?  What about Firefox's?  The certificate-handling code in various 
versions of Windows?  Of MacOS?
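
For anyone who wants to test their own stack, minting a constrained sub-CA is
straightforward; here is a sketch with the Python cryptography package (names
and validity period are hypothetical).  Issue a leaf for google.com under it
and see whether your verifier rejects the chain:

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sub_key  = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.utcnow()

sub_ca = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                "WhizzBangWidgets Sub-CA")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                               "Example Root CA")]))
    .public_key(sub_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # The extension at issue: this sub-CA may only certify names under
    # whizzbangwidgets.com -- if the verifier actually honors it.
    .add_extension(x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("whizzbangwidgets.com")],
        excluded_subtrees=None), critical=True)
    .sign(private_key=root_key, algorithm=hashes.SHA256())
)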


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-14 Thread Steven Bellovin

On Feb 14, 2012, at 7:50:14 PM, Michael Nelson wrote:

 Paper by Lenstra, Hughes, Augier, Bos, Kleinjung, and Wachter finds that two 
 out of every one thousand RSA moduli that they collected from the web offer 
 no security.  An astonishing number of generated pairs of primes have a prime 
 in common.  Once again, it shows the importance of proper randomness (my 
 remark).
 
 http://www.nytimes.com/2012/02/15/technology/researchers-find-flaw-in-an-online-encryption-method.html?_r=1&hp
 
 
 The paper:
 
 http://eprint.iacr.org/2012/064.pdf


The practical import is unclear, since there's (as far as is known) no
way to predict or control who has a bad key.
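
The underlying observation is elementary number theory: if two moduli happen
to share one prime, a single GCD recovers it and factors both (the paper
scales this to millions of keys with a product/remainder tree).  A toy sketch:

import math

p  = 65537                   # the shared prime (think: bad RNG on both devices)
n1 = p * 999983              # two "RSA" moduli with a common factor
n2 = p * 1000003

g = math.gcd(n1, n2)
print(g, n1 // g, n2 // g)   # 65537 999983 1000003 -- both moduli factored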

To me, the interesting question is how to distribute the results.  That
is, how can you safely tell people you have a bad key, without letting
bad guys probe your oracle.  I suspect that the right way to do it is to
require someone to sign a hash of a random challenge, thereby proving
ownership of the private key, before you'll tell them if the
corresponding public key is in your database.
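
One plausible shape for that oracle (a sketch with the Python cryptography
package; the protocol is my assumption, not what the researchers actually
deployed):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
bad_moduli = set()                    # the oracle's database of weak keys

def query_oracle(public_key, challenge, signature):
    # Only the private-key holder can produce this signature, so "weak or
    # not" leaks only to the key's owner; verify() raises InvalidSignature
    # for anyone merely probing with a harvested public key.
    public_key.verify(signature, challenge, PSS, hashes.SHA256())
    return public_key.public_numbers().n in bad_moduli

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
challenge = os.urandom(32)            # issued fresh by the oracle, used once
sig = key.sign(challenge, PSS, hashes.SHA256())
print(query_oracle(key.public_key(), challenge, sig))   # False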


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] trustwave admits issuing corporate mitm certs

2012-02-12 Thread Steven Bellovin

On Feb 12, 2012, at 10:26:46 PM, Nico Williams wrote:

 On Sun, Feb 12, 2012 at 9:13 PM, Krassimir Tzvetanov
 mailli...@krassi.biz wrote:
 I agree, I'm just reflecting on the reality... :(
 
 Reality is actually as I described, at least for some shops that I'm
 familiar with.
 
The trend is the other way, towards allowing (and even encouraging)
employee-owned devices.  If nothing else, it saves the company money.
It also lets you get more work out of employees if they can deal
with management requests from their personal iToys or Andtoys.

The trick is to manage this behavior; banning it tends to be
futile.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







[cryptography] Chrome to drop CRL checking

2012-02-06 Thread Steven Bellovin
http://arstechnica.com/business/guides/2012/02/google-strips-chrome-of-ssl-revocation-checking.ars

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Well, that's depressing. Now what?

2012-01-27 Thread Steven Bellovin
 
 Or at least that's what everyone thought. More recently, various groups have 
 begun to focus on a fly in the ointment: the practical implementation of this 
 process. While quantum key distribution offers perfect security in practice, 
 the devices used to send quantum messages are inevitably imperfect.

This is only surprising if you assume large values of everyone.  Anyone in 
the real world has long since worried about implementations.  Remember Bob 
Morris' Rule 1 of cryptanalysis: check for plaintext.  
(http://www.ieee-security.org/Cipher/ConfReports/conf-rep-Crypto95.html)


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] airgaps in CAs

2012-01-09 Thread Steven Bellovin

On Jan 8, 2012, at 11:48:52 PM, Alistair Crooks wrote:

 On Sun, Jan 08, 2012 at 09:10:56PM -0500, Steven Bellovin wrote:
 
 On Jan 8, 2012, at 6:29:26 AM, Florian Weimer wrote:
 
 * Eugen Leitl:
 
 Is anyone aware of a CA that actually maintains its signing
 secrets on secured, airgapped machines, with transfers batched and
 done purely by sneakernet?
 
 Does airgapping provide significant security benefits these days,
 compared to its costs?
 
 File systems are generally less robust than network stacks.  USB
 auto-detection is somewhat difficult to control on COTS systems.  So
 unless you build your own transfer mechanism, a single TCP port
 exposes less code, and code which has received more scrutiny.
 
 While I'm uncertain about your precise conclusion -- I know of no
 attempts to write a USB+file system+OS behavior security sanitizer,
 so I don't know how easy it is to do -- you're definitely asking
 the right question.
 
 Taken from:
 
   http://www.netbsd.org/docs/rump/
 
 about Antti Kantee's Runnable Userspace Metaprograms (RUMP) in NetBSD,
 and while (again) this isn't what was asked for, it moves the attack
 point from the kernel to userspace.
 
   Use cases for rump include:
 
   [...]
 
   + security:  rump runs in its own instance in a userspace
   process.  For example, it is well-known that all operating
   systems are vulnerable to untrusted file system images. 
   Unlike on other operating systems, on NetBSD it is possible to
   mount untrusted ones, such as those on a USB stick, with an
   isolated server.  This isolates attacks and prevents kernel
   compromises.
 

Up to a point.  For one thing, some attacks are easier to launch in
userspace, because it's easier to do things like invoke shells.  More
important, many of the problems are due to higher-level semantics, e.g.,
what happens when you mount the file system -- autorun.inf comes to
mind.  


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] reports of T-Mobile actively blocking crypto

2012-01-09 Thread Steven Bellovin
https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile

I know nothing more of this, including whether or not it's accurate.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] folded SHA1 vs HMAC for entropy extraction

2012-01-05 Thread Steven Bellovin

On Jan 5, 2012, at 4:46 PM, Thor Lancelot Simon wrote:

 On Fri, Jan 06, 2012 at 07:59:30AM +1100, ianG wrote:
 
 The way I treat this problem is that it is analogous to inventing
 one's own algorithm.  From that perspective, one can ask:
 
 What is?  The folded SHA, or the use of HMAC?
 
 You do understand why it's important to obscure what's mixed back in,
 I assume.  If not, read the paper I referenced on the Linux RNG;
 by insufficently obscuring what went back into the pool, the
 implementors made an attack with only 2^64 complexity possible.
 
 With the constraint that you can't just output exactly what you
 mix back in, a plain hash function without some further transformation
 won't suffice, whether it's MD4 or SHA512.  I am asking whether the
 use of HMAC with two different, well known keys, one for each purpose,
 is better or worse than using the folded output of a single SHA
 invocation for one purpose and the unfolded output of that same
 invocation for the other.


It bears a lot of thought.  By having the keys known, you're using
HMAC in a non-traditional way; the question is which security properties
still hold.  For example: suppose there was a preimage attack on
whichever hash function you use.  Since part of the input -- the keys -- to
the HMAC invocations is known, the preimage attack means that the attacker
can find the (or a) rest-of-input that went into the hashes.  Since
you're hashing 4K bits down to 160(?), there is loss of information in
the hash, which is good -- but we don't know what this hypothetical
preimage attack is.  By contrast, the Linux scheme loses information
via the folding.  Are the two equivalent?  Again, I don't know.  But
you can't just assume that the HMAC properties transfer.  (I'd be
happier with your scheme were the keys secret, though admittedly then
I'd ask what happens if they leak.)
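
To make the two constructions under discussion concrete, here's a rough
Python sketch of what's being compared (the key labels are invented for
illustration; this is nobody's actual implementation):

import hmac, hashlib

# Two well-known, fixed keys -- really just domain-separation labels.
OUTPUT_KEY   = b"pool output v1"    # hypothetical label
FEEDBACK_KEY = b"pool feedback v1"  # hypothetical label

def hmac_extract(pool):
    # Emit RNG output and pool feedback from the same pool contents,
    # under distinct (public but different) HMAC keys.
    out  = hmac.new(OUTPUT_KEY, pool, hashlib.sha1).digest()
    back = hmac.new(FEEDBACK_KEY, pool, hashlib.sha1).digest()
    return out, back

def folded_sha_extract(pool):
    # The Linux-style alternative: output the 80-bit folded hash,
    # mix the unfolded 160-bit hash back into the pool.
    h = hashlib.sha1(pool).digest()
    folded = bytes(a ^ b for a, b in zip(h[:10], h[10:]))
    return folded, h

Either way, what the attacker sees (the first value) differs from what goes
back into the pool (the second); the question above is which derivation
degrades more gracefully if the underlying hash turns out to be weak.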

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Steven Bellovin

On Dec 31, 2011, at 12:32:06 PM, John Levine wrote:

 You can't force people to invent and memorize an endless stream of
 unrelated strong passwords.
 
 I'm not sure I agree with this phrasing.  It is easy to memorize a strong 
 password -- it just has to be long enough. 
 
 Don't forget endless stream of unrelated.  I have some strong
 passwords for the accounts that matter, but I don't have to start over
 every month.
 
 
 So what problem _is_ being addressed by requiring passwords to be changed 
 so often [and so inconveniently]?
 
 Compliance with standards written by people who created the standard
 by copying standards they saw other places.  I suspect a lot of them
 still trace back to attacks on /etc/passwd on PDP-11 Unix.
 

That's about it.  It all derives from the Morris and Thompson paper and
from http://csrc.nist.gov/publications/secpubs//rainbow/std002.txt .
Both were written at a time when a power user would have about 3 passwords.

Yes, ideally people would have a separate, strong password, changed
regularly for every site.  The difference between theory and practice,
though...  By actual count, I have more than 100 web site passwords.
The odds on me remembering all of them are exactly 0.  So -- I use a
password manager program, and store everything in an encrypted, 
cloud-resident place.  Nothing else would work.  The most sensitive
sites, though, aren't in the file; those, I can and will memorize.

Changing passwords?  Unless you're changing from one random string to
another, it doesn't help.  I posted a link a few days ago to a paper
that described an algorithm for finding ~40% of new passwords from the
previous one -- people follow patterns.

And if your machine is infected by a keystroke logger -- one of the
bigger threats these days -- none of that matters.  (See some of Cormac
Herley's papers.)

Passwords aren't dead, and despite what IBM says I don't think they're
going away any time soon.  But we need new rules and new guidelines
for managing them; the ones from the 1980s don't work anymore.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Steven Bellovin

On Dec 31, 2011, at 4:36:00 PM, Bernie Cosell wrote:

 On 31 Dec 2011 at 15:30, Steven Bellovin wrote:
 
 Yes, ideally people would have a separate, strong password, changed
 regularly for every site.
 
 This is the very question I was asking: *WHY* changed regularly?  What 
 threat/vulnerability is addressed by regularly changing your password?  I 
 know that that's the standard party line [has been for decades and is 
 even written into Virginia's laws!], but AFAICT it doesn't do much of 
 anything other than encourage users to be *LESS* secure with their 
 passwords.


The standard rationale is that for any given time interval, there's a
non-zero probability that a given password has been compromised.  At
some point, the probability is high enough that it's a real risk.  By
changing passwords frequently enough, you never reach that point.  The
reference I posted previously 
(http://csrc.nist.gov/publications/secpubs//rainbow/std002.txt)
makes this very explicit, complete with equations; see Appendix F.
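
For those who don't want to chase the reference: the Appendix F calculation
is essentially P = L*R/S -- lifetime times guess rate over password-space
size -- solved for the maximum lifetime given an acceptable probability.
A back-of-the-envelope version (my paraphrase, not the standard's exact
notation):

def max_lifetime(space, guess_rate, p_acceptable):
    # Longest password lifetime (in whatever time unit guess_rate
    # uses) that keeps the compromise probability at or below
    # p_acceptable, assuming guesses accumulate linearly.
    return p_acceptable * space / guess_rate

# Example: 8 random lowercase letters (26**8, about 2.1e11 candidates),
# an attacker trying 1000 guesses/second, acceptable risk 1e-6:
# max_lifetime(26**8, 1e3, 1e-6) is about 209 seconds.

The numbers make the real lesson obvious: limiting the guess rate or
growing the password space dwarfs anything you can get by fiddling with
the change interval.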

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Password non-similarity?

2011-12-31 Thread Steven Bellovin

On Dec 31, 2011, at 5:09:08 PM, John Levine wrote:

 The standard rationale is that for any given time interval, there's a
 non-zero probability that a given password has been compromised.  At
 some point, the probability is high enough that it's a real risk.
 
 Sure, but where does that probability come from?  (Various tactless
 anatomical guesses elided here.)  If the probability is low enough the
 replacement interval could be greater than the lifetime of the system.
 I see they relate it to the guess rate, so I'd rather limit that than
 push costs on users and force them to rotate passwords.

Yup.  I'm not saying it makes sense now, or even made sense at the time.
But that was the rationale.  (Aside: this could have descended from NSA's
experience with cryptographic keys and especially codebooks.  The difference,
of course, is that in crypto having more traffic to cryptanalyze makes
the attacker's job easier.)
 
 R's,
 John
 
 PS: Masking passwords as they're typed made a lot of sense on an
 ASR-33.  Is this another throwback?
 
ASR-33s could run in full-duplex mode, i.e., without the password
being echoed; the issue was the host OS.  IBM mainframes were not
really capable of that at the time, so masking was necessary.  Or
it could have been an IBM 2740 or 2741 terminal, based on the Selectric
typewriter; these devices weren't even capable of running full
duplex, as best I can recall, so either the OS had to provide masking
or you had to trust the user to remove the typeball (see
http://upload.wikimedia.org/wikipedia/commons/6/60/SelectricII_Hadar.jpg
if you don't know what I'm talking about).

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Password non-similarity?

2011-12-27 Thread Steven Bellovin

On Dec 27, 2011, at 5:48 PM, Solar Designer wrote:

 On Tue, Dec 27, 2011 at 03:54:35PM -0500, Jeffrey Walton wrote:
 We're bouncing around ways to enforce non-similarity in passwords over
 time: password1 is too similar to password2 (and similar to
 password3, etc).
 
 I'm not sure it's possible with one way functions and block cipher residues.
 
 Has anyone ever implemented a system to enforce non-similarity business 
 rules?
 
 In passwdqc, we opted to only do it for the current vs. previous
 password, not maintaining a password history.  (The previous password is
 normally entered by the user at password change time.)
 
 Password histories are controversial.  They do not obviously improve
 security; they may well make things a lot worse (even if you're just
 storing hashes).
 
 Also, you shouldn't declare two passwords too similar just because they
 contain e.g. an N-character substring in common; rather, you should see
 if the remainder of the new password (with the too-similar portion
 removed or partially discounted) would still meet the policy.  This is
 what passwdqc does.
 
 KoreLogic ran a password hash cracking contest at DEFCON 2010 (with many
 remote participants as well) focused on effects of password histories -
 that is, they tried to simulate users' behavior patterns that they
 observed in corporate environments with password histories.  After the
 contest, they released John the Ripper rules that try to match users'
 typical approaches at bypassing password histories - appending the
 current year, month name, etc.  Apparently, this is what actually
 happens when there's a password history and regular password changes are
 enforced.
 
 The DEFCON 2010 contest (including related data files):
 http://contest-2010.korelogic.com
 
 passwdqc tested on the contest passwords:
 http://www.openwall.com/lists/john-users/2011/02/20/2


Also see http://www.cs.unc.edu/~reiter/papers/2010/CCS.pdf -- they
describe an algorithm to guess new passwords from old.

Here's a heretical thought: require people to change their passwords --
and publish the old ones.  That might even be a good idea...
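
For the curious, the "discount the common part" rule Solar Designer
describes can be sketched in a few lines of Python (an illustration of the
idea only, not passwdqc's actual algorithm; min_residual is an invented
policy knob):

def longest_common_substring(a, b):
    # Straightforward O(len(a)*len(b)) dynamic programming.
    best = 0
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def new_password_ok(old, new, min_residual=8):
    # Discount the longest run the new password shares with the old
    # one, then ask whether enough novel material remains.
    shared = longest_common_substring(old.lower(), new.lower())
    return (len(new) - shared) >= min_residual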

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Steven Bellovin

On Dec 9, 2011, at 3:46:18 PM, Jon Callas wrote:

 
 On 8 Dec, 2011, at 8:27 PM, Peter Gutmann wrote:
 
 In any case getting signing certs really isn't hard at all.  I once managed
 it in under a minute (knowing which Google search term to enter to find
 caches of Zeus stolen keys helps :-).  That's as an outsider; if you're
 working inside the malware ecosystem you'd probably get them in bulk from
 whoever's dealing in them (single botnets have been reported with thousands
 of stolen keys and certs in their data stores, so it's not like the bad
 guys are going to run out of them in a hurry).
 
 Unlike credit cards and bank accounts and whatnot we don't have price
 figures for stolen certs, but I suspect it's not that much.
 
 If it were hard to get signing certs, then we as a community of developers 
 would demonize the practice as having to get a license to code.
 
Peter is talking about stolen certs, which for most parts of the development
community aren't a prerequisite...  But there's an interesting dilemma here
if we insist on all code being signed.

Assume that a code-signing cert costs {$,€,£,zorkmid}1/year.  Everyone but
large companies would scream.  Now assume the cost is {$,€,£,zorkmid}.01/year
or even free.  At that price, it's a nuisance factor, and would be issued via
a simple web interface.  Simple web interfaces are scriptable (and we all know
the limits of captchas), which means that malware could include a get a cert
routine for the next, mutated generation of itself.  In fact, they're largely
price-insensitive, since they'd be programmed with a stash of stolen credit
cards.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-09 Thread Steven Bellovin

On Dec 9, 2011, at 5:41:04 PM, Randall Webmail wrote:

 From: Nico Williams n...@cryptonector.com
 
 What should matter is that malware should not be able to gain control
 of the device or other user/app data on that device, and, perhaps,
 that the user not even get a chance to install said malware, not
 because the malware's signatures don't chain up to a trusted CA but
 because the app store doesn't publish it and the user uses only
 trusted app stores.  Neither of the last two is easy to ensure though
 
 And yet we see things like someone (apparently) sneakernetting a thumb-drive 
 from an infected Internet Cafe to the SIPR network: 
 http://www.washingtonpost.com/national/national-security/cyber-intruder-sparks-response-debate/2011/12/06/gIQAxLuFgO_story.html
 
 If the USG can't even keep thumb drives off of SIPR, isn't the whole game 
 doomed to failure?   (What genius thought it would be a good idea to put USB 
 ports on SIPR-connected boxes, anyway?)

How do you import new intelligence data to it?  New software?  Updates?
New anti-virus definitions?  Patches for security holes?  Your external
backup drive?  Your wireless mouse for classified Powerpoint
presentations (based on
http://www.nytimes.com/2010/04/27/world/27powerpoint.html I suspect that
such things indeed happen.)  I've heard tell of superglue in the USB
ports and I've seen commercial software that tries to limit which
specific storage devices can be connected to (of course) Windows boxes.

Yes, one can imagine technical solutions to all of these, like NSA-run
central software servers and restricted machines to which new data can
be introduced and a good registry of allowed disks and banning both
Powerpoint and the mindset that overuses it.  Is that operationally
realistic, especially in a war environment where you don't have adequate
bandwidth back to Ft.  Meade?  (Hunt up the articles on the moaning and
groaning when DoD banned flash drives.)

The purpose of a system is not to be secure.  Rather, it's to help you
accomplish something else.  Being secure is one aspect of helping to
accomplish something, but it's not the only one.  The trick isn't to be
secure, it's to be secure enough while still getting the job done.
Sometimes, relying on training rather than technology is the right
answer.  Obviously, per that article, it wasn't enough, but it doesn't
mean the approach was wrong; perhaps other approaches would have had
even worse failures.



--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Steven Bellovin

On Dec 7, 2011, at 11:31:23 AM, Jon Callas wrote:
 
 
 But really, I think that code signing is a great thing, it's just being done 
 wrong because some people seem to think that spooky action at a distance 
 works with bits.


The question at hand is this: what is the meaning of expiration or revocation
of a code-signing certificate?  That I can't sign new code?  That only affects
the good guys.  That I can't install code that was really signed before the
operative date?  How can I tell when it was actually signed?  That I can't
rely on it after the specified date?  That would require continual resigning
of code.  That seems to be the best answer, but the practical difficulties
are immense.
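
If "rely on the signing time" is the semantics we want, a trusted
timestamping countersignature is the usual way to make it checkable.  A
policy sketch, in Python only for concreteness (the function shape is
invented; signed_at is assumed to come from a timestamp authority you
trust, which is itself a large assumption):

def accept_code(sig_valid, signed_at, not_before, not_after,
                revoked_at=None):
    # Accept iff the signature verifies and the attested signing time
    # falls within the cert's validity and predates any revocation.
    if not sig_valid:
        return False
    if not (not_before <= signed_at <= not_after):
        return False
    if revoked_at is not None and signed_at >= revoked_at:
        return False
    return True

Note the residual hole: anything the attacker signed, and honestly
timestamped, between the key compromise and its discovery still passes.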


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Steven Bellovin

On Dec 7, 2011, at 12:34:29 PM, Jon Callas wrote:

 
 On 7 Dec, 2011, at 8:52 AM, Steven Bellovin wrote:
 
 
  On Dec 7, 2011, at 11:31:23 AM, Jon Callas wrote:
 
 
 But really, I think that code signing is a great thing, it's just being 
 done wrong because some people seem to think that spooky action at a 
 distance works with bits.
 
 
 The question at hand is this: what is the meaning of expiration or revocation
 of a code-signing certificate?  That I can't sign new code?  That only 
 affects
 the good guys.  That I can't install code that was really signed before the
 operative date?  How can I tell when it was actually signed?  That I can't
 rely on it after the specified date?  That would require continual resigning
 of code.  That seems to be the best answer, but the practical difficulties
 are immense.
 
 I want to say that the answer is mu because you can't actually revoke a 
 certificate. That's not satisfying, though.

It's certainly one possible answer, and maybe it's the only answer.  For now, 
though, I'd like to assume that there can be some meaning but I at least don't 
know what it is.
 
 I think it is a policy question. If I were making a software development 
 system that used certificates with both expiration dates and revocation, I 
 would check both revocation and expiry. I might consider it either a warning 
 or an error, or have it be an error that could be overridden. After all, how 
 can you test that the revocation system on the back end works unless you can 
 generate revoked software?

I'm not sure what you mean.
 
 On a consumer-level system, I might refuse to install or run revoked 
 software; that seems completely reasonable. Refusing to install or run 
 expired software is problematic -- the thought of creating a system that 
 refuses to work after a certain date is pretty creepy, and the workaround is 
 to set the clock back. 

Yup.  In fact, it's more than creepy, it's an open invitation to Certain 
Software Vendors to *enforce* the notion that you just rent software.
 
 But really, it's a policy question that needs to be answer by the creators of 
 the system, not the crypto/PKI people. We can easily create mechanism, but 
 it's impossible to create one-size-fits-all policy.
 
Right now, I'm speaking abstractly.  I'm not concerned with current PKIs or 
pkis or business models or what have you.  If you'd prefer, I'll rephrase my 
question like this: Assume that there is some benefit to digitally-signed code. 
 (Note carefully that I'm not interested in how the recipient gets the 
corresponding public key -- we've already had our PKI is evil discussion for 
the year.)  Given that there is a non-trivial probability that the private 
signing key will be compromised, what are the desired semantics once the user 
learns this.  (Again, I'm saying nothing about how the user learns it -- CRLs 
or OCSP or magic elves are all (a) possible and (b) irrelevant.)  If the answer 
is it depends, on what does it depend?  Whose choice is it?  

Let's figure out what we're trying to accomplish; after that, we can try to 
figure out how to do it.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] How are expired code-signing certs revoked?

2011-12-07 Thread Steven Bellovin

On Dec 7, 2011, at 4:56:29 PM, Peter Gutmann wrote:

 Steven Bellovin s...@cs.columbia.edu writes:
 
 Let's figure out what we're trying to accomplish; after that, we can try to
 figure out how to do it.
 
 See above, code signatures help increase the detectability of malware, although
 in more or less the reverse of the way that it was intended.
 
I meant by canceling the key (I'm trying to avoid using the word revocation),
not by signing code.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-12-02 Thread Steven Bellovin

On Dec 2, 2011, at 5:26:27 PM, Jeffrey Walton wrote:

 On Sun, Nov 27, 2011 at 3:10 PM, Steven Bellovin s...@cs.columbia.edu wrote:
 Does anyone know of any (verifiable) examples of non-government enemies
 exploiting flaws in cryptography?  I'm looking for real-world attacks on
 short key lengths, bad ciphers, faulty protocols, etc., by parties other
 than governments and militaries.  I'm not interested in academic attacks
 -- I want to be able to give real-world advice -- nor am I looking for
 yet another long thread on the evils and frailties of PKI.
 
 In July 2009, Benjamin Moody, a United-TI forum user, published the
 factors of a 512-bit RSA key used to sign the TI-83+ series graphing
 calculator,
 http://en.wikipedia.org/wiki/Texas_Instruments_signing_key_controversy.

Right.  I have five examples.  Apart from that one, there is:

The (alleged) factoring of 512-bit keys in code-signing certificates

The apparent use of WEP-cracking by the Gonzalez gang.  While we don't
know for sure that they did that, the Canadian Privacy Commissioner's
report said that TJX used WEP, and one of the indictments said that
Christopher Scott broke in to their wireless net.

The GSM interceptor.  I'm not using that one because the products I see
are (nominally) aimed at government use, and while I'm sure many have
been diverted I don't have any documented cases of them being used by
the private sector.  (For all of the reports about phone hacking by
Murdoch's companies, I've seen no reports of cell phone eavesdropping to
get the modern equivalent of, say, http://en.wikipedia.org/wiki/Squidgygate
or Camillagate.)

http://www.wired.com/threatlevel/2011/07/hacking-neighbor-from-hell/ --
someone who *really* wanted revenge on his neighbors.  Given that his
offenses were discovered to include child pornography, he was sentenced
to 18 years.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-29 Thread Steven Bellovin

On Nov 29, 2011, at 7:44 AM, d...@geer.org wrote:

 
 Steve/Jon, et al.,
 
 Would you say something about whether you consider key management
 as within scope of the phrase crypto flaw?  There is a fair
 amount of snake oil there, or so it seems to me in my line of
 work (reading investment proposals and the like) -- things like
 secure boot devices that, indeed, are encrypted but which have the
 decryption key hidden on the device (security through obscurity).
 That's just an example; don't pick on it, per se.  But to repeat,
 is key management within scope of the phrase crypto flaw?
 
It's a grey area for my purposes.  DRM is out completely; that's
something that can't work.  I'm looking for situations where (a) it's
easy for someone who knows the field to say, idiots -- if they'd
done XXX instead of YYY, there wouldn't be a flaw, and (b) there
was a real-world consequence of the failure, and not just someone
saying gotcha!  Leaving out key management entirely, like WEP did,
would qualify under (a) but not (b).  


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Steven Bellovin

On Nov 27, 2011, at 11:00:49 PM, Peter Gutmann wrote:

 Steven Bellovin s...@cs.columbia.edu writes:
 
 Does anyone know of any (verifiable) examples of non-government enemies
 exploiting flaws in cryptography?
 
 Could you be a bit more precise about what flaws in cryptography covers?
 If you mean exploiting bad or incorrect implementations of crypto then
 there's so much that I barely know where to start; if it's actual
 cryptanalytic attacks on anything other than toy crypto (homebrew ciphers,
 known-weak keys, etc) then there's very little around.  If it's something
 else, you'd have to let us know where the borders lie.
 
I'm writing something where part of the advice is "don't buy snake oil
crypto, get the good stuff."  By good I mean well-accepted algorithms (not
proprietary for extra security!), and protocols that have received serious
analysis.  I also want to exclude too-short keys.  But -- honesty requires
that I define the threat model.  We *know* why NSA wanted short keys in the
1990s, but most folks are not being targeted by "pick your favorite SIGINT
agency," and hence don't have a major worry.  So -- is there a real threat
that people have to worry about?  The TI example is a good one, since it's
fully verified.  The claim has been made in the Fox-IT blog, but as noted
it's not verified, merely asserted.  WEP?  Again, we all know how bad it is,
but has it really been used?  Evidence?  For GSM, is there something I can
footnote about these kits?  Is anyone using BEAST?  Did anyone use the TLS
renegotiation vulnerability?  A lot of the console and DRM breaks were flaws
in the concept, rather than the crypto.  Password guessing doesn't count...


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-28 Thread Steven Bellovin

On Nov 28, 2011, at 8:03 PM, Nico Williams wrote:

 The list is configured to set Reply-To.  This is bad, and in some
 cases has had humorous results.  I recommend the list owners change
 this ASAP.


Agree, strongly.  The mailman documentation agrees with us.  I'm on the
verge of unsubscribing on the grounds that the list is a privacy violation
in action.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PFS questions (was SSL *was* broken by design)

2011-10-03 Thread Steven Bellovin
 
 
 Come on. This discussion has descended past whacko, which is where it went 
 once the broken by design discussion started.

Quite.  I had to point someone at some of these threads today; when it came to 
this part, I alluded to black helicopters.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Enigma machine being auctioned by Christie's

2011-09-18 Thread Steven Bellovin
http://us.cnn.com/2011/WORLD/europe/09/16/enigma.machine.auction/index.html



--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Steven Bellovin

On Sep 12, 2011, at 5:48:00 PM, James A. Donald wrote:

--
 On 2011-09-11 4:09 PM, Jon Callas wrote:
  The bottom line is that there are places that continuity
  works well -- phone calls are actually a good one. There
  are places it doesn't. The SSL problem that Lucky has
  talked about so well is a place where it doesn't. Amazon
  can't use continuity. It is both inconvenient and insecure.
 
 Most people who login to Amazon have a long existing relationship: Hence key 
 continuity and SRP would work well.
 
The problem with key continuity (and I alluded to this the other
day) is the tendency of people to just click OK to error
boxes.  From the perspective of many people, the choice is between the
inability to visit, say, Amazon, and clicking OK to an error
message they find to be quite incomprehensible.  Furthermore,
they're probably right; most of the certificate errors I've
seen over the years were from ordinary carelessness or errors,
rather than an attack; clicking OK is *precisely* the right
thing to do.  


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Steven Bellovin

On Sep 13, 2011, at 2:22:28 PM, Andy Steingruebl wrote:

 On Tue, Sep 13, 2011 at 10:48 AM, Steven Bellovin s...@cs.columbia.edu 
 wrote:
 
 Furthermore,
 they're probably right; most of the certificate errors I've
 seen over the years were from ordinary carelessness or errors,
 rather than an attack; clicking OK is *precisely* the right
 thing to do.
 
 Is anyone aware of any up-to-date data on this btw?  I've had
 discussions with the browser makers and they have some data, but I
 wonder whether anyone else has any data at scale of how often users
 really do run into cert warnings these days. They used to be quite
 common, but other than 1 or 2 sites I visit regularly that I know have
 self-signed certs, I *never* run into cert warnings anymore.   BTW,
 I'm excluding mixed content warnings from this for the moment
 because they are a different but related issue.

From personal experience -- I use https to read news.google.com; Firefox 6
on a Mac complains about wildcard certificates.  And ietf.org's certificate
expired recently; it took a day or so to get a new one installed.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-13 Thread Steven Bellovin

On Sep 13, 2011, at 3:00:32 PM, Paul Hoffman wrote:

 On Sep 13, 2011, at 11:57 AM, Steven Bellovin wrote:
 
 From personal experience -- I use https to read news.google.com; Firefox 6
 on a Mac complains about wildcard certificates.  And ietf.org's certificate
 expired recently; it took a day or so to get a new one installed.
 
 
 This last bit might be relevant to the mailing list.
 - The IETF's cert was for *.ietf.org
 - It took a week, not a day or so to get the new one installed
 Steve: I wonder if your browser, after you dismissed the dialog once, 
 silently remembered that dismissal for a week, or if it stopped asking you 
 after a day.


Neither -- I relied on my mailing list archives for the interval, and
didn't scroll back far enough...

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Let's go back to the beginning on this

2011-09-12 Thread Steven Bellovin
Jon, I think there was a great deal of wisdom in your post.  I'd add only one 
thing: a pointer to the definition of dialog box at 
http://www.w3.org/2006/WSC/wiki/Glossary .  
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PKI fixes that don't fix PKI (part III)

2011-09-10 Thread Steven Bellovin
 Sorry, that doesn't work. Afaik, there is practically zero evidence of 
 Internet interception of credit cards. 

This makes no sense whatsoever.  Credit card numbers are *universally*
encrypted; of course there's no interception of them.

In 1993, there was interception of passwords on the Internet.  This is
technically difficult, since if you were using telnet -- probably the
most common form of remote login via password then -- every character
would be in a separate packet; additionally, there was little context to
say where the login/password string would start.  By contrast, credit
card numbers sent via http are easy.  A card number is probably in a
single packet, and is a self-checking string: 15 or 16 consecutive
digits (since most web programmers seem to be too lazy to strip out
embedded blanks or dashes, even though that's the easy and natural way
to type a card number), where one of the digits is a check digit on the
others.  If you see such a string, grab the packet; you'll probably find
the expiration date and CVV in it as well.  I don't even have to use the
likely variable names in uploaded forms.
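
The self-checking property is the Luhn algorithm; a sketch of the filter an
eavesdropper would run over candidate digit strings (illustrative only):

def luhn_ok(digits):
    # True iff the string of ASCII digits passes the Luhn check
    # used by payment card numbers.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A sniffer would strip blanks/dashes, scan each packet for runs of
# 15-16 digits, and keep the ones for which luhn_ok() returns True.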

Sure, it's easier to harvest in bulk by hacking a web site, or by
seeding self-propagating malware that logs keystrokes.  But if
eavesdropping works -- and it has in enough other cases -- it would have
been used.  The *only* reason it isn't used against credit card numbers
has been SSL.


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [OT] -gate (Re: An appropriate image from Diginotar)

2011-09-04 Thread Steven Bellovin
 It's one of the very few times a President resigned from office without his 
 term expiring.


Try only -- no other US President has resigned.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] exponentiation chips

2011-07-23 Thread Steven Bellovin
Who is selling exponentiation chips (in reasonably large quantities) these 
days?  Price and power consumption are important for this application, but I 
need to be able to verify a few K RSA (or possibly ECC) signatures/second.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-05 Thread Steven Bellovin
 there may be a pragmatic need for options dealing with existing
 systems or business requirements, however i have yet to hear a
 convincing argument for why options are necessary in any new system
 where you're able to apply lessons learned from past mistakes.

You said it yourself: different businesses have different requirements.
The requirements may come from the operational environment or they may be
marketing-related.  I'll give just one example: web authentication.
Say I'm building a web-based interface to a system for tasking
an orbital COMSAT satellite.  That system should likely require
strong role-based authentication, possibly coupled with authentication
of the machine it's coming from, plus personal authentication for
later auditing.  By contrast, an airline reservation system that's
used for selecting seats (and printing boarding passes) will frequently
be used at hotel and airport kiosks, may be delegated to administrative
assistants, etc.  At some level, it's the same problem -- reserving
a resource (surveillance slot or an airplane seat), but the underlying
needs are very different.

More importantly (and to pick a less extreme scenario), security isn't
an absolute, it's a matter of economics.  If the resource you're
protecting isn't worth much, why should you spend a lot?  There are
certainly kinds of security that cost very little (RC4-128 has exactly
the same run-time overhead as RC4-40, though the cost of the public
key operations commensurate with those key lengths will differ);
other times, though, requirements are just plain different.
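
The RC4 point is easy to see from the algorithm itself: the key length only
feeds the fixed 256-step key schedule, while the per-byte keystream loop
never touches the key at all.  A sketch (RC4 shown only because it's the
cipher named above; don't use it for anything new):

def rc4_ksa(key):
    # Key schedule: always exactly 256 swaps, whether the key is
    # 5 bytes (RC4-40) or 16 bytes (RC4-128).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_keystream(S, n):
    # Per-byte cost is identical for every key length.
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)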

To quote the old Einstein line, a system should be as simple as possible
but no simpler.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin observation

2011-07-05 Thread Steven Bellovin

On Jul 5, 2011, at 2:44:57 AM, Jon Callas wrote:

 I was sitting around the other weekend with some friends and we were talking 
 about Bitcoin, and gossiping furiously about it. While we were doing so, an 
 interesting property came up.
 
 Did you know that if a Bitcoin is destroyed, then the value of all the other 
 Bitcoins goes up slightly?

Mmm -- the curve isn't monotonic; once the distribution of bitcoins gets 
sufficiently small, you can buy less with it, because there are fewer 
acceptors.  This in turn hurts the purchasing power, which means that 
completely cornering the market is bad for the actor who does it.  This 
suggests that there's another critical parameter needed for your model.

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] preventing protocol failings

2011-07-04 Thread Steven Bellovin

On Jul 4, 2011, at 7:28:10 PM, Sampo Syreeni wrote:

 (I'm not sure whether I should write anything anytime soon, because of Len 
 Sassaman's untimely demise. He was an idol of sorts to me, as a guy who Got 
 Things Done, while being of comparable age to me. But perhaps it's equally 
 valid to carry on the ideas, as a sort of a nerd eulogy?)
 
 Personally I've slowly come to believe that options within crypto protocols 
 are a *very* bad idea. Overall. I mean, it seems that pretty much all of the 
 effective, real-life security breaches over the past decade have come from 
 protocol failings, if not trivial password ones. Not from anything that has 
 to do with hard crypto per se.
 
 So why don't we make our crypto protocols and encodings *very* simple, so as 
 to resist protocol attacks? X.509 is a total mess already, as Peter Gutmann 
 has already elaborated in the far past. Yet OpenPGP's packet format fares not 
 much better; it might not have many cracks as of yet, but it still has a very 
 convoluted packet structure, which makes it amenable to protocol attacks. Why 
 not fix it into the simplest, upgradeable structure: a tag and a binary blob 
 following it?
 
 Not to mention those interactive protocols, which are even more difficult to 
 model, analyze, attack, and then formally verify. In Len's and his spouse's 
 formalistic vein, I'd very much like to simplify them into a level which is 
 amenable to formal verification. Could we perhaps do it? I mean, that would 
 not only lead to more easily attacked protocols, it would also lead to more 
 security...and a eulogy to one of the new cypherpunks I most revered.
 -- 

Simplicity helps with code attacks as well as with protocol attacks, and the 
former are a lot more common than the latter.  That was one of our goals in JFK:

@inproceedings{aiello.bellovin.ea:efficient,
  author = {William Aiello and Steven M. Bellovin and Matt Blaze and
  Ran Canetti and John Ioannidis and Angelos D. Keromytis and
  Omer Reingold},
  title = {Efficient, {DoS}-Resistant, Secure Key Exchange for
  Internet Protocols},
  booktitle = {Proceedings of the ACM Computer and Communications
  Security (CCS) Conference},
  year = 2002,
  month = {November},
  url = {https://www.cs.columbia.edu/~smb/papers/jfk-ccs.pdf},
  psurl = {https://www.cs.columbia.edu/~smb/papers/jfk-ccs.ps}
}



--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Robert H. Morris died

2011-06-30 Thread Steven Bellovin
http://www.nytimes.com/2011/06/30/technology/30morris.html

I learned a lot about security, and especially attitudes towards security,
from him.  (Yes, this is crypto-relevant; read the obit.)

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Oddity in common bcrypt implementation

2011-06-28 Thread Steven Bellovin

On Jun 28, 2011, at 2:46:31 PM, Marsh Ray wrote:

 On 06/28/2011 12:48 PM, Steven Bellovin wrote:
 Wow, this sounds a lot like the way 64-bit DES was weakened to 56 bits.
 
 It wasn't weakened -- parity bits were rather important circa 1974.
  (One should always think about the technology of the time.)
 
 It's a very reasonable-sounding explanation, particularly at the time. 
 http://en.wikipedia.org/wiki/Robbed_bit_signaling is even still used for 
  things like T-1 lines.
 
 But somehow the system managed to handle 64-bit plaintexts and 64-bit 
 ciphertexts. Why would they need to shorten the key? Of the three different 
 data types it would be the thing that was LEAST often sent across serial 
 communications lines needing parity.
 
 If error correction was needed on the key for some kind of cryptographic 
 security reasons, then 8 bits would hardly seem to be enough.
 
 What am I missing here?

Errors in plaintext weren't nearly as important.  In text, the normal
redundancy of natural language suffices; even for otherwise-unprotected
data, a random error affects only one data item.  For ciphertext, the
modes of operation provide a range of different choices on error propagation.
In either case, higher-level protocols could provide more detection or
correction.  

A single-bit error in a key, however, could be disastrous; everything is
garbled.  Even hardware wasn't nearly as reliable then; it was not at all
uncommon to have redundant circuitry (at least in mainframes) for registers 
and ALUs, using the complement output of the flip-flops used for registers
(http://en.wikipedia.org/wiki/Flip-flop_%28electronics%29).

And there were fill devices: http://en.wikipedia.org/wiki/Fill_device --
the path from it to the crypto device really needed error detection.

Beyond that -- we know from Biham and Shamir that the inherent strength
of DES is ~54 bits against differential cryptanalysis; having more bits to
go into the key schedule doesn't help.
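
For reference, the convention (per FIPS 46): each of the 8 key bytes
carries 7 key bits plus a low-order odd-parity bit, which is exactly how 64
stored bits become 56 effective ones.  A sketch:

def set_des_parity(key8):
    # Force odd parity in the low bit of each of the 8 key bytes;
    # a receiver can then detect any single-bit error per byte.
    out = bytearray()
    for b in key8:
        seven = b & 0xFE                  # keep the 7 key bits
        ones = bin(seven).count("1")
        out.append(seven | (0 if ones % 2 else 1))
    return bytes(out)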


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Digitally-signed malware

2011-06-22 Thread Steven Bellovin
http://www.darkreading.com/advanced-threats/167901091/security/application-security/231000129/malware-increasingly-being-signed-with-stolen-certificates.html

Not surprising to most readers of this list, I suspect...

--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digitally-signed malware

2011-06-22 Thread Steven Bellovin
 
 Just to split hairs, malware has stolen signing keys for years, but it's only
 in the last few years that malware vendors have started using them. 

Maybe that's it -- it's DRM for the malware vendors, to ensure that other
bad guys don't steal their code...


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Crypto-economics metadiscussion

2011-06-13 Thread Steven Bellovin
 
 
 Well, obviously, bitcoin is succeeding because the financial crisis has 
 caused loss of trust in government approved and regulated solutions.

Obviously?  I do not think this word means what you think it means.




--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Preserve us from poorly described/implemented crypto

2011-06-07 Thread Steven Bellovin

On Jun 7, 2011, at 3:01:30 PM, J.A. Terranson wrote:

 
 On Tue, 7 Jun 2011, Nico Williams wrote:
 
 TEMPEST.
 
 I'd like keyboards with counter-measures (emanation of noise clicks)
 or shielding to be on the market, and built-in for laptops.
 
 Remember how well the original IBM PC clicky keyboard went over (I think 
 I'm the only person in the US who actually liked it - everyone gave me 
 theirs after upgrading to the newer lightweight and silent ones)

I'm typing on a large, heavy, clicky IBM keyboard right now...


--Steve Bellovin, https://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] M-94 wheel cipher on EBay

2011-05-16 Thread Steven Bellovin
http://cgi.ebay.com/Model-M-94-Cipher-Device-U-S-Army-Signal-Corps-WWII-/220784760519

I'd love it, but the bidding is already over US$1000 so I'll pass...

Sent from my iPad
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] new tech report on the one-time pad

2011-03-02 Thread Steven Bellovin
I've posted a draft paper on my web site at 
http://mice.cs.columbia.edu/getTechreport.php?techreportID=1460 ;
here's the abstract:

The invention of the one-time pad is generally credited to
Gilbert S. Vernam and Joseph O. Mauborgne. We show that it 
was invented about 35 years earlier by a Sacramento banker 
named Frank Miller.  We provide a tentative identification
of which Frank Miller it was, and speculate on whether or 
not Mauborgne might have known of Miller's work, especially 
via his colleague Parker Hitt.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] encrypted storage, but any integrity protection?

2011-01-15 Thread Steven Bellovin

On Jan 15, 2011, at 8:53:44 AM, Marsh Ray wrote:

 On 01/14/2011 06:13 PM, Jon Callas wrote:
 
 This depends on what you mean by data integrity.
 
 How about an attacker with write access to the disk is unable to modify the 
 protected data without detection?
 
 In a strict, formal
 way, where you'd want to have encryption and a MAC, the answer is no.
 I don't know of one that does, but if there *is* one that does, it's
 likely got other issues.
 
 How come? Is there some principle of conservation of awesomeness at work or 
 something?
 
 Disks, for example, pretty much assume that
 a sector is 512 bytes (or whatever). There's no slop in there. It
 wouldn't surprise me if someone were doing one, but it adds a host of
 other operational issues.
 
 If the crypto driver is functioning as a shim layer on top of a block 
 device it seems reasonable that it could reduce the overall size seen by 
 upper layers and remap the actual storage a little bit.
 
 A 256-bit hash takes 32 bytes, 1/16th of the 512 byte sector size. So a 
 encrypting driver could simply map 15 blocks onto 16 hardware disk blocks. 
 This might impose very little overhead since the minimum number of blocks in 
 the smallest IO operation goes from one to two, but this may not be 
 noticeable. My understanding is that the 512-byte block size is mainly used 
 by the disk bus protocols and the lower- and higher-layers (e.g. RAID, 
 filesystems, virtual memory) will operate on 4-8K blocks and not hit that 
 minimum anyway. The disk itself can be expected to do a fair amount 
 read-ahead caching too.
 
 However -- a number of storage things (including TrueCrypt) are using
 modes like XTS-AES. These modes are sometimes called PMA modes for
 Poor Man's Authentication. XTS in particular is a wide-block mode
 that takes a per-block tweak. This means that if you are using an XTS
 block of 512 bytes, then a single-bit change to the ciphertext causes
 the whole block to decrypt incorrectly.
 
 But how does anyone know that it decrypted incorrectly without some integrity 
 checking? It seems like any integrity scheme will have upper bounds on its 
 security related to the number of bits dedicated to that purpose.

See http://www.cs.unc.edu/%7Ereiter/papers/2007/USENIX1.pdf and 
http://www.cs.unc.edu/%7Ereiter/papers/2005/NDSS.pdf
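
To make the 15-into-16 remapping arithmetic in Marsh's note concrete
(purely illustrative; neither paper above works exactly this way):

GROUP = 16          # physical sectors per group
DATA  = GROUP - 1   # logical sectors per group; 15 * 32 = 480 bytes of
                    # hashes fit in the group's 16th 512-byte sector

def phys_sector(logical):
    # Physical home of a logical sector.
    return (logical // DATA) * GROUP + (logical % DATA)

def hash_sector(logical):
    # Physical sector holding the hashes for this sector's group.
    return (logical // DATA) * GROUP + DATA

Every read costs at most one extra sector (the group's hash sector), and a
write must update both atomically, which is the hard part.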

--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-17 Thread Steven Bellovin

On Dec 17, 2010, at 12:34:39 PM, Jon Callas wrote:

 Let's get back to the matter at hand.
 
 I believe that there's another principle, which is that he who proposes, 
 disposes. I'll repeat -- it's up to the person who says there was/is a back 
 door to find it.
 
 Searching the history for stupid-ass bugs is carrying their paranoid water. 
 *Finding* a bug is not only carrying their water, but accusing someone of 
 being underhanded. The difference between a stupid bug and a back door is 
 intent. By calling a bug a back door, or considering it, we're also accusing 
 that coder of being underhanded. You're doing precisely what the person 
 throwing the paranoia wants. You're sowing fear and paranoia. 
 
 Of course there are stupid bugs in the IPsec code. There's stupid bugs in 
 every large system. It is difficult to assign intent to bugs, though, as that 
 ends up being a discussion of the person.

Yes -- see http://en.wikipedia.org/wiki/James_Jesus_Angleton#The_Molehunt for 
where that sort of thing can lead.

Many years ago, I learned that someone working on a major project had just been 
arrested for hacking.  Did he leave any surprises behind in our code?  I put 
together a team to do an audit.  We found one clear security hole -- but the 
commit logs showed who was responsible, and a conversation with her showed that 
it was an innocent mistake (and not something our suspect had 
socially-engineered into the code base).  Then I found something much more 
ambiguous -- two separate bugs, which -- when combined with a common but 
non-standard configuration -- added up to a security hole.  In one of the bugs, 
the code didn't agree with the comments, but there was a very plausible 
innocent explanation.  And yes, the suspect was responsible for that section of 
the code.  Deliberate?  Accidental?  To this day, I don't know; all I know for 
sure is that we found and closed two security holes, one very subtle.  Today is 
Dec 17, an odd-numbered day, so I think it was an ordinary bug.  Tomorrow, 
 I may feel differently.

--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: [gsc] Fwd: OpenBSD IPSEC backdoor(s)

2010-12-16 Thread Steven Bellovin

On Dec 16, 2010, at 5:09:05 PM, Marsh Ray wrote:

 On 12/15/2010 02:36 PM, Jon Callas wrote:
 
 Facts. I want facts. Failing facts, I want a *testable* accusation.
 Failing that, I want a specific accusation.
 
 How's this:
 
 OpenBSD shipped with a bug which prevented effective IPsec ESP authentication 
 for a few releases overlapping the time period in question:
 
 http://code.bsd64.org/cvsweb/openbsd/src/sys/netinet/ip_esp.c.diff?r1=1.74;r2=1.75;f=h
 
 No advisory was made.
 
 The developer who added it, and the developer who later reverted it, were 
 said to be funded by NETSEC
 
 http://monkey.org/openbsd/archive/misc/0004/msg00583.html
 
 I think there's more. I'm out of time to describe it right now, BBIAB.
 
I've known Angelos Keromytis since about 1997; he's now a colleague of mine on 
the faculty at Columbia.  I've known John Ioannidis -- the other name attached 
to that code -- for considerably longer.  I've written papers with both of 
them.  To anyone who knows them, the thought that either would insert a bug at 
the FBI's behest is, shall we say, preposterous.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] New analysis results for Skein

2010-12-10 Thread Steven Bellovin

On Dec 9, 2010, at 10:45:54 PM, Peter Gutmann wrote:

 * Skein is soft and succumbs to brute force 
 * Skein has been successfully linearized 
 * Skein has clear output patterns 
 * Skein is easily distinguishable from a random oracle
 
 http://eprint.iacr.org/2010/623
 
Despite that, it was selected as one of the five finalists; the other four are 
BLAKE, JH, Keccak, and Grøstl.  Security was the main concern; algorithms were 
ruled out if they hadn't received enough analysis.  Algorithms with round 
structures were favored, because the number of rounds could be increased.  
Performance across a very wide range of platforms was also important.  NIST has 
promised a detailed report in the near future.  (This is taken from an email 
sent yesterday by Bill Burr; I found an unofficial copy at 
http://www.reddit.com/r/crypto/comments/ej7m2/sha3_finalists ; the official 
archive requires a password.)


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Micro-SD card encrypts voice on mobile phones

2010-12-02 Thread Steven Bellovin

On Dec 2, 2010, at 4:30:18 PM, coderman wrote:

 On Wed, Dec 1, 2010 at 7:26 PM, Steven Bellovin s...@cs.columbia.edu wrote:
 http://www.cellular-news.com/story/46690.php
 
 521-bit key and other odd claims? think i'll stick with RedPhone ...

That's 521 bits for the ECC part, as I read it.  An odd size, even there?
If nothing else, see John Gilmore's preference for non-standard sizes...
(No, I don't know if this product is any good, but I don't think that
this is a prima facie reason to disregard them.  And of course, 521
is a simple typographical error away from 512, which we all realize
is a magic number but a reporter might not.)
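
In fact, 521 bits is a standard ECC size in its own right: NIST's P-521
curve works modulo the Mersenne prime 2^521 - 1, chosen because reduction
modulo a Mersenne prime is cheap.  Easily checked (assuming sympy is
installed):

from sympy import isprime

p = 2**521 - 1
assert isprime(p)          # a Mersenne prime; the field behind NIST P-521
assert p.bit_length() == 521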
 


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Micro-SD card encrypts voice on mobile phones

2010-12-01 Thread Steven Bellovin
http://www.cellular-news.com/story/46690.php 

I know nothing more about this...

--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] NSA's position in the dominance stakes

2010-11-18 Thread Steven Bellovin

On Nov 18, 2010, at 5:21:16 PM, Adam Back wrote:

 So a serious question: is there a software company friendly jurisdiction?
 
 (Where software and algorithm patents do not exist under law?)

It won't help if you want to sell into the US or other jurisdictions that
do recognize such patents.  A patent is not the right to do something; it is
the right to prevent others from making, using, selling, or importing the
protected idea.



--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] NSA's position in the dominance stakes

2010-11-17 Thread Steven Bellovin

On Nov 17, 2010, at 11:01:45 PM, James A. Donald wrote:

 On 17/11/10 7:26 AM, David G. Koontz wrote:
 On 17/11/10 9:01 AM, David G. Koontz wrote:
 
 
 A. US6704870, granted on March 9, 2004 (Yes, published)
 
 
 Sony asserted prior art against this patent in the 2007 case before
 agreeing
 Certicom's motion to end the lawsuit, which was granted without
 prejudice.
 
 On 2010-11-18 8:42 AM, Ian G wrote:
 What does that mean?
 
 It means that Sony pointed out that Certicom's claim is as full of shit as we 
 all know it to be, and that the court case ended without the court, which 
 found Certicom's claim and Sony's defense equally incomprehensible, finding 
 for or against anyone.
 
Go to 
http://docs.justia.com/cases/federal/district-courts/texas/txedce/2:2007cv00216/103383/112/
 and read the document.  It says that the case is being dismissed because the 
parties have settled.  It says nothing about why either party chose to settle.


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] stream MAC - does anything like it exist?

2010-09-14 Thread Steven Bellovin

On Sep 14, 2010, at 2:18:38 PM, Zooko O'Whielacronx wrote:

 following-up to my own post:
 
 On Tue, Sep 14, 2010 at 8:54 AM, Zooko O'Whielacronx zo...@zooko.com wrote:
 
 Also, even if you did have a setting where the CPU cost of HMAC-SHA1
 was a significant part of your performance (at e.g. 12 cycles per byte
 [1]), then you could always switch to Poly1305 or VMAC (at e.g. 2
 cycles per byte), or to an authenticated encryption mode (effectively
 zero cycles per byte?).
 
 Hm, actually [1] shows AES-GCM (an authenticated encryption mode)
 running at 16 cycles per byte, compared to AES-CTR's 13 cycles per
 byte, so we can estimate the CPU cost of switching from
 unauthenticated encryption to authenticated encryption at about 3
 cycles per byte, similar to using VMAC.
 
Given the failures from not authenticating your encryption -- I pointed out 
many in IPsec in 1996, but examples are as recent as this week 
(http://threatpost.com/en_us/blogs/new-crypto-attack-affects-millions-aspnet-apps-091310#)
 -- I think that we shouldn't waste our time and coding effort supporting 
unauthenticated encryption.
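
In modern libraries, doing it right is barely more code than doing it
wrong; a sketch with the pyca/cryptography package's AESGCM (assuming that
library is available):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)    # 96-bit nonce; never reuse one under the same key
ct = aead.encrypt(nonce, b"attack at dawn", b"associated data")
pt = aead.decrypt(nonce, ct, b"associated data")  # raises InvalidTag if
                                                  # anything was modified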


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] stream MAC - does anything like it exist?

2010-09-12 Thread Steven Bellovin

On Sep 10, 2010, at 2:06:18 PM, travis+ml-rbcryptogra...@subspacefield.org wrote:

 So there's an obvious (though imperfect) analogy between block ciphers
 and, say, HMAC.  Imperfect because authentication always seems to
 involve metadata.
 
 But is there a MAC analog to a stream cipher?  That is, something
 where you can spend a few bits authenticating each frame of a movie,
 or sound sample, for example, and have some probabilistic chance of
 detecting alteration at each frame.  I suppose it could also have uses
 with, say, an interactive SSH session, where each keystroke might be
 sent in its own packet.
 
 The closest thing I can think of is doing a truncated MAC on each
 frame.  Looking at HMAC, it looks like you could leave the inner hash
 running while also finalizing it for each frame (assuming your library
 supports this), so that you could keep it open to feed the next frame
 to it - this allows each truncated MAC to attest to the authenticity
 of prior frames, which might or might not allow you to get by with
 fewer bits of MAC per frame in certain applications (details of which
 are complicated and not particularly germane to this query).

I confess I'm not sure I understand what properties you're actually
looking for that aren't handled by the truncated MAC you describe.
(I'd also note that unless your frames are very small, truncation doesn't
buy you much.)  Are you looking for chaining properties between frames?
What are they?  (Stream ciphers don't have such, of course.)  Do you
want to MAC each frame with some probability, then get a strong MAC
on a group of frames?  I note that no matter the algorithm, the basic
properties are pretty obvious: if you have an N-bit authentication
field, the odds on a random field being accepted are 2^-N.  What else
do you want?
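
Concretely, the "leave the inner hash running" idea Travis describes works
with nothing more exotic than the copy() method on a running HMAC; a sketch
(the 4-byte tag length is an arbitrary policy choice for illustration):

import hmac, hashlib

class StreamMAC:
    TAG_BYTES = 4    # forgery odds 2**-32 per frame, per the above

    def __init__(self, key):
        self._h = hmac.new(key, digestmod=hashlib.sha256)

    def tag_frame(self, frame):
        # Each short tag covers this frame and every frame before it;
        # finalizing a copy keeps the running state open.
        self._h.update(frame)
        return self._h.copy().digest()[:self.TAG_BYTES]

m = StreamMAC(b"example key")
t1 = m.tag_frame(b"frame 1")
t2 = m.tag_frame(b"frame 2")   # implicitly attests to frame 1 as well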


--Steve Bellovin, http://www.cs.columbia.edu/~smb





___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography