Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Ethan Heilman
Peter,

Do I understand you correctly? Is the checksum calculated using a key, or
is the checksum algorithm itself secret, so that an attacker can't generate
checksums for new keys? Are they using a one-way function? Do you have any
documentation about this?

Thanks,
Ethan


On Wed, Mar 27, 2013 at 11:50 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:

 Jeffrey Walton noloa...@gmail.com writes:

 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an
 authentication
 tag? Are symmetric keys pre-tested for resilience against, for example,
 chosen ciphertext and related key attacks?

 For Type I ciphers the checksumming goes beyond the simple DES-style error
 control, it's also to ensure that if someone captures the equipment they
 can't
 load their own, arbitrary keys into it.

 Peter.



Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Steven Bellovin
See Matt Blaze's "Protocol Failure in the Escrowed Encryption Standard", 
http://www.crypto.com/papers/eesproto.pdf
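
For intuition, a minimal sketch of the mechanism Peter describes: the checksum
is a truncated MAC under a secret "family key" held inside the equipment, so
captured hardware rejects keys an adversary generates himself. HMAC stands in
here for whatever BATON or EES actually use; the family key, tag length, and
function names are illustrative assumptions, not from any published spec.

import hashlib
import hmac

FAMILY_KEY = b"family-key-in-tamper-resistant-hw"   # assumption, for illustration

def key_checksum(key: bytes) -> bytes:
    # Truncated keyed MAC over the key material; 32 bits is plenty to make
    # guessing a valid checksum for an arbitrary key impractical in situ.
    return hmac.new(FAMILY_KEY, key, hashlib.sha256).digest()[:4]

def try_load_key(key: bytes, checksum: bytes) -> bool:
    # The fill interface accepts a key only if its checksum verifies.
    return hmac.compare_digest(key_checksum(key), checksum)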

On Mar 28, 2013, at 10:16 AM, Ethan Heilman eth...@gmail.com wrote:

 Peter,
 
 Do I understand you correctly? Is the checksum calculated using a key, or is 
 the checksum algorithm itself secret, so that an attacker can't generate 
 checksums for new keys?  Are they using a one-way function? Do you have any 
 documentation about this?
 
 Thanks,
 Ethan
 
 
 On Wed, Mar 27, 2013 at 11:50 PM, Peter Gutmann pgut...@cs.auckland.ac.nz 
 wrote:
 Jeffrey Walton noloa...@gmail.com writes:
 
 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an authentication
 tag? Are symmetric keys pre-tested for resilience against, for example,
 chosen ciphertext and related key attacks?
 
 For Type I ciphers the checksumming goes beyond the simple DES-style error
 control, it's also to ensure that if someone captures the equipment they can't
 load their own, arbitrary keys into it.
 
 Peter.


--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread ianG

On 27/03/13 22:13 PM, Ben Laurie wrote:

On 27 March 2013 17:20, Steven Bellovin s...@cs.columbia.edu wrote:

On Mar 27, 2013, at 3:50 AM, Jeffrey Walton noloa...@gmail.com wrote:


What is the reason for checksumming symmetric keys in ciphers like BATON?

Are symmetric keys distributed with the checksum acting as an
authentication tag? Are symmetric keys pre-tested for resilience
against, for example, chosen ciphertext and related key attacks?


The parity bits in DES were explicitly intended to guard against
ordinary transmission and memory errors.



Correct me if I'm wrong, but the parity bits in DES guard the key, which 
doesn't need correcting?  And the block which does need correcting has 
no space for parity bits?




Note, though, that this
was in 1976, when such precautions were common.  DES was intended
to be implemented in dedicated hardware, so a communications path
was needed, and hence error-checking was a really good idea.


And in those days they hadn't quite wrapped their heads around the
concept of layering?



Layering was the big idea of the ISO 7 layer model.  From memory this 
first started appearing in standards committees around 1984 or so?  So 
likely it was developed as a concept in the decade before then -- late 
1970s to early 1980s.




That said, I used to work for a guy with a long history in comms. His
take was that the designers of each layer didn't trust the designers
of the layer below, so they added in their own error correction.

Having seen how crypto has failed lately, perhaps we should have more
of the same distrust!



It's still the same.  This is why websites carry a notice saying "don't 
push the PAY NOW button twice!"  Strict layering makes the separation 
between skill specialties easier to conceptualise, but it does not 
necessarily make architectural sense.  It works well enough if security 
isn't an issue.




iang


Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Jon Callas

On Mar 28, 2013, at 1:21 PM, ianG i...@iang.org wrote:

 
 Correct me if I'm wrong, but the parity bits in DES guard the key, which 
 doesn't need correcting?  And the block which does need correcting has no 
 space for parity bits?

"Guard" is perhaps a bit strong. They're just parity bits. 

In those days, people bought parity memory, and it was worth it. As Steve says, 
hardware errors that would just happen were pretty common. 

Now, there is a little more to it than that -- remember that when Lucifer 
became DES, it was knocked down from a 64-bit key to a 56-bit key. When they 
did that, they chose to knock one bit off of each octet (note that I'm saying 
octet, not byte, because also in those days it was not presumed that bytes 
had eight bits) rather than have 56 packed bits.

If you do it that way, using the orphaned bits as parity is a pretty reasonable 
use for them. 
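
For concreteness, a small sketch of the convention being described: each DES
key octet carries 7 key bits, with the low-order bit forced to odd parity (the
standard DES key-parity rule; the helper names below are mine).

def set_des_key_parity(key: bytes) -> bytes:
    """Return the 8-octet key with each octet's low bit set for odd parity."""
    out = bytearray()
    for octet in key:
        key_bits = octet & 0xFE                   # the 7 bits that matter
        ones = bin(key_bits).count("1")
        out.append(key_bits | ((ones + 1) % 2))   # low bit makes the total odd
    return bytes(out)

def des_key_parity_ok(key: bytes) -> bool:
    """The sanity check a device can run before accepting a loaded key."""
    return all(bin(octet).count("1") % 2 == 1 for octet in key)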

 
 Layering was the big idea of the ISO 7 layer model.  From memory this first 
 started appearing in standards committees around 1984 or so?  So likely it 
 was developed as a concept in the decade before then -- late 1970s to early 
 1980s.

Earlier than that. Arguably, the full seven layers are still aspirational, 
and the word "conceptual" was attached to the model for a long, long time. For 
the bottom four layers it's pretty easy to know what goes where, but what makes 
a protocol belong in layer 5, 6, or 7 is subject to debate.

Jon




Re: [cryptography] Key Checksums (BATON, et al)

2013-03-28 Thread Steven Bellovin

On Mar 28, 2013, at 4:21 PM, ianG i...@iang.org wrote:

 On 27/03/13 22:13 PM, Ben Laurie wrote:
 On 27 March 2013 17:20, Steven Bellovin s...@cs.columbia.edu wrote:
 On Mar 27, 2013, at 3:50 AM, Jeffrey Walton noloa...@gmail.com wrote:
 
 What is the reason for checksumming symmetric keys in ciphers like BATON?
 
 Are symmetric keys distributed with the checksum acting as an
 authentication tag? Are symmetric keys pre-tested for resilience
 against, for example, chosen ciphertext and related key attacks?
 
 The parity bits in DES were explicitly intended to guard against
 ordinary transmission and memory errors.
 
 
 Correct me if I'm wrong, but the parity bits in DES guard the key, which 
 doesn't need correcting?  And the block which does need correcting has no 
 space for parity bits?
 
If a block is garbled in transmission, you either accept it (look at all the
verbiage on error propagation properties of different block cipher modes)
or retransmit at a higher layer.  If a key is garbled, you lose everything.
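
The mode-dependent error propagation alluded to here is easy to demonstrate.
The sketch below uses AES-CBC from the pyca/cryptography package (AES standing
in for DES): one flipped ciphertext bit garbles the containing block entirely,
flips exactly one bit of the next block, and then decryption self-recovers.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
plaintext = b"A" * 48                    # three 16-byte blocks, no padding needed

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = bytearray(enc.update(plaintext) + enc.finalize())
ct[5] ^= 0x01                            # a one-bit line error in block 0

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
out = dec.update(bytes(ct)) + dec.finalize()

assert out[0:16] != plaintext[0:16]      # block 0: completely garbled
diff = [a ^ b for a, b in zip(out[16:32], plaintext[16:32])]
assert sum(bin(d).count("1") for d in diff) == 1   # block 1: exactly one bit flipped
assert out[32:48] == plaintext[32:48]    # block 2 onward: clean again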

Error detection in communications is a very old idea; I can show you telegraph
examples from the 1910s involving technical mechanisms, and the realization
that this was a potential problem goes back further than that, at least as
early as the 1870s, when telegraph companies offered a transmit back facility
to let the sender ensure that the message received at the far end was the one
intended to be sent.

The mental model for DES was computer -> crypto box -> {phone, leased} line,
or sometimes {phone, leased} line -> crypto box -> {phone, leased} line.  Much
of it was aimed at asynchronous (generally) teletype links (hence CFB-8),
bisync (https://en.wikipedia.org/wiki/Bisync) using CBC, or (just introduced
around the time DES was) IBM's SNA, which relied on HDLC and was well-suited
to CFB-1.  OFB was intended for fax machines.  Async and fax links didn't
need protection as long as error propagation of received data was very limited;
bisync and HDLC include error detection and retransmission by what we'd now
think of as the end-to-end link layer.  (On the IBM gear I worked with in the
late 1960s/early 1970s, the controller took care of generating the bisync
check bytes.  I no longer remember whether it did the retransmissions or
not; it's been a *long* time, and I was worrying more about the higher layers.)

In the second mental model for bisync and SNA, the sending host would have
generated a complete frame, including error detection bytes.  These bytes
would be checked after decryption; if the ciphertext was garbled, the error
check would fail and the messages would be NAKed (bisync, at least, used
ACK and NAK) and hence resent.  If the keying was garbled, though, nothing
would flow.  

It is not entirely clear to me what keying model IBM, NIST, or the NSA had
in mind back then -- remember that the original Needham-Schroeder paper didn't
come out until late 1978, several years after DES.  One commonly-described model
of operation involved loading master keys into devices; one end would pick
a session key, encrypt it (possibly with 2DES or 3DES) with the master key, 
and send that along.  From what I've read, I think that the NSA did have KDCs 
before that, but I don't have my references handy.  Multipoint networks
were not common then (though they did exist in some sense); you couldn't
go out to a KDC in real-time.  (I'll skip describing the IBM multipoint
protocol for the 2740 terminal; I never used them in that mode.  Let it
suffice to say that given the hardware of the time, if you had a roomful
of 2740s using multipoint, you'd have a single encryptor and single key 
for the lot.)
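
A sketch of that keying model in modern dress, with AES key wrap standing in
for the 2DES/3DES wrapping described above; nothing here is the period
protocol, it just shows the shape of it (pyca/cryptography assumed).

import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

master_key = os.urandom(16)    # loaded into both crypto boxes out of band
session_key = os.urandom(16)   # picked by the initiating end

wrapped = aes_key_wrap(master_key, session_key)        # sent down the line
assert aes_key_unwrap(master_key, wrapped) == session_key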

Anyway -- for most of the intended uses, error correction of the data was
either done at a different layer or wasn't important.  Keying was a
different matter.  While you could posit that it, too, should have been
wrapped in a higher layer, it is quite plausible that NSA wanted to guard
against system designers who would omit that step.  Or maybe it just
wasn't seen as the right way to go; as noted, layering wasn't a strong
architectural principle then (though it certainly did exist in lesser
forms).

--Steve Bellovin, https://www.cs.columbia.edu/~smb







Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jeffrey Goldberg
[Reply-To set to cryptopolitics]

On 2013-03-28, at 12:37 AM, Jeffrey Walton noloa...@gmail.com wrote:

 On Wed, Mar 27, 2013 at 11:37 PM, Jeffrey Goldberg jeff...@goldmark.org 
 wrote:

 ... In the other cases, the phones did have a passcode lock, but
 with 10,000 possible four-digit codes it takes about 40 minutes to run
 through all of them given how Apple has calibrated PBKDF2 on these (4 trials per
 second).

 Does rooting and Jailbreaking invalidate evidence collection?

That is the kind of thing that would have to be settled by case law; I don't
know if evidence gathered this way has ever been offered as evidence at
trial. (Note that a lot can be used against a suspect during an investigation
without ever having to be presented as evidence at trial.)

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?

You seem to be talking about a single iOS 4 hardware key. But each device
has its own. We don't know if Apple actually has retained copies of that.

 I suspect Apple has the methods/processes to provide it.

I have no more evidence than you do, but my guess is that they don't, for
the simple reason that if they did that fact would leak out. Secret
conspiracies (and that's what it would take) grow less plausible
as a function of the number of people who have to be in on it.
(Furthermore I suspect that implausibility rises super-linearly with
the number of people in on a conspiracy.)

 I think there's much more to it than a simple brute force.

We know that those brute force techniques exist (there are several vendors
of forensic recovery tools), and we've got very good reasons to believe
that only a small portion of users go beyond the default 4 digit passcode.
In the case of LEAs, they can easily hold on to the phones for the 20 minutes
(on average) it takes to brute force them.
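
The arithmetic behind those figures, using only the 4-trials-per-second rate
quoted above:

codes = 10 ** 4                # passcodes 0000 through 9999
rate = 4                       # PBKDF2-gated trials per second, per the above
worst = codes / rate / 60      # ~41.7 minutes to exhaust the space
average = worst / 2            # ~20.8 minutes expected
print(f"worst case {worst:.1f} min, average {average:.1f} min")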

So I don't see why you suspect that there is some other way that only
Apple (or other relevant vendor) and the police know about.

Cheers,

-j


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread shawn wilson
On Mar 27, 2013 11:38 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:




http://blog.agilebits.com/2012/03/30/the-abcs-of-xry-not-so-simple-passcodes/


Days? Not sure about the algorithm but both ocl and jtr can be run in
parallel and idk why you'd try to crack a password on an arm device anyway
(there's a jtr page that compares platforms and arm is god awful slow).


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas

[Not replied-to cryptopolitics as I'm not on that list -- jdcc]

On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?
 
 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.

I've been involved in these sorts of questions in various companies that I've 
worked. Let's look at it coolly and rationally.

If you make a bunch of devices with keys burned in them, if you *wanted* to 
retain the keys, you'd have to keep them in some database, protect them, create 
access  controls and procedures so that only the good guys (to your definition) 
got them, and so on. It's expensive.

You're also setting yourself up for a target of blackmail. Once some bad guy 
learns that they have such a thing, they can blackmail you for the keys they 
want lest they reveal that the keys even exist. Those bad guys include 
governments of countries you operate or have suppliers in, mafiosi, etc. Heck, 
once some good guy knows about it, the temptation to break protocol on who gets 
keys when will be too great to resist, and blackmail will happen.

Eventually, so many people know about the keys that it's not a secret. Your 
company loses its reputation, even among the sort of law-and-order types who 
think that it's good for *their* country's LEAs to have those keys because they 
don't want other countries having those keys. Sales plummet. Profits drop. 
There are civil suits, shareholder suits, and most likely criminal charges in 
lots of countries (because while it's not a crime to give keys to their LEAs, 
it's a crime to give them to that other bad country's LEAs). Remember, the only 
difference between lawful access and espionage is whose jurisdiction it is.

On the other hand, if you don't retain the keys it doesn't cost you any money 
and you get to brag about how secure your device is, selling it to customers in 
and out of governments the world over.

Make the mental calculation. Which would a sane company do?

 
 I suspect Apple has the methods/processes to provide it.
 
 I have no more evidence than you do, but my guess is that they don't, for
 the simple reason that if they did that fact would leak out. Secret
 conspiracies (and that's what it would take) grow less plausible
 as a function of the number of people who have to be in on it.
 (Furthermore I suspect that implausibility rises super-linearly with
 the number of people in on a conspiracy.)

And that's just what I described above. I just wanted to put a sharper point on 
it. I don't worry about it because truth will out. Or as Dr. Franklin put it, 
"three people can keep a secret if two of them are dead."

 
 I think there's much more to it than a simple brute force.
 
 We know that those brute force techniques exist (there are several vendors
 of forensic recovery tools), and we've got very good reasons to believe
 that only a small portion of users go beyond the default 4 digit passcode.
 In case of LEAs, they can easily hold on to the phones for the 20 minutes
 (on average) it takes to brute force them.

The unlocking feature on iOS uses the hardware to spin crypto operations on 
your passcode, so you have to do it on the device (the hardware key is involved 
-- you can't just image the flash) and you get about 10 brute force checks per 
second. For a four-character code, that's about 1000 seconds.
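
A toy model of why that forces the attack onto the device. This is not Apple's
actual construction (which tangles the passcode with the UID key inside the
AES engine); the PBKDF2-plus-HMAC shape, salt, and iteration count below are
illustrative assumptions only.

import hashlib
import hmac

def derive_unlock_key(passcode: str, hw_uid: bytes, iterations: int = 10_000) -> bytes:
    # hw_uid never leaves the security hardware, so this derivation -- and any
    # brute force against the passcode -- has to run on the device itself.
    stretched = hashlib.pbkdf2_hmac("sha256", passcode.encode(), b"device-salt",
                                    iterations)
    return hmac.new(hw_uid, stretched, hashlib.sha256).digest()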

See http://images.apple.com/ipad/business/docs/iOS_Security_May12.pdf for 
many details on what's in iOS specifically.

Also, surprisingly often, if the authorities ask someone to unlock the phone, 
people comply. 

 
 So I don't see why you suspect that there is some other way that only
 Apple (or other relevant vendor) and the police know about.

Yeah, me either. We know that there are countries that have special national 
features in devices made by hardware makers that are owned by that country's 
government, but they're very careful to keep them within their own borders, for 
all the obvious reasons. It just looks bad and could lead to losing contracts 
in other countries.

Jon


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas

On Mar 28, 2013, at 4:07 PM, shawn wilson ag4ve...@gmail.com wrote:

 
 On Mar 27, 2013 11:38 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 
 
 
  http://blog.agilebits.com/2012/03/30/the-abcs-of-xry-not-so-simple-passcodes/
 
 
 Days? Not sure about the algorithm but both ocl and jtr can be run in 
 parallel and idk why you'd try to crack a password on an arm device anyway 
 (there's a jtr page that compares platforms and arm is god awful slow)
 
 

You have to run the password cracker on the device, because it involves mixing 
the hardware key in with the passcode, and that's done in the security chip. 
You can't parallelize it unless you pry the chip apart. I'm not saying it's 
impossible, but it is risky. If you screw that up, you lose totally, as then 
breaking the passcode is breaking AES-256. And if you have about 2^90 memory, 
it's easier than breaking AES-128!

Jon






Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Kevin W. Wall
On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:

 [Not replied-to cryptopolitics as I'm not on that list -- jdcc]

Ditto.

 On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?

 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.

 I've been involved in these sorts of questions in various companies that I've 
 worked. Let's look at it coolly and rationally.

 If you make a bunch of devices with keys burned in them, if you *wanted* to 
 retain the keys, you'd have to keep them in some database, protect them, 
 create access  controls and procedures so that only the good guys (to your 
 definition) got them, and so on. It's expensive.

 You're also setting yourself up for a target of blackmail. Once some bad guy 
 learns that they have such a thing, they can blackmail you for the keys they 
 want lest they reveal that the keys even exist. Those bad guys include 
 governments of countries you operate or have suppliers in, mafiosi, etc. 
 Heck, once some good guy knows about it, the temptation to break protocol on 
 who gets keys when will be too great to resist, and blackmail will happen.

 Eventually, so many people know about the keys that it's not a secret. Your 
 company loses its reputation, even among the sort of law-and-order types who 
 think that it's good for *their* country's LEAs to have those keys because 
 they don't want other countries having those keys. Sales plummet. Profits 
 drop. There are civil suits, shareholder suits, and most likely criminal 
 charges in lots of countries (because while it's not a crime to give keys to 
 their LEAs, it's a crime to give them to that other bad country's LEAs). 
 Remember, the only difference between lawful access and espionage is whose 
 jurisdiction it is.

 On the other hand, if you don't retain the keys it doesn't cost you any money 
 and you get to brag about how secure your device is, selling it to customers 
 in and out of governments the world over.

 Make the mental calculation. Which would a sane company do?


All excellent, well articulated points. I guess that means that
RSA Security is an insane company then since that's
pretty much what they did with the SecurID seeds. Inevitably,
it cost them a boatload too. We can only hope that Apple
and others learn from these mistakes.

OTOH, if Apple thought they could make a hefty profit by
selling to LEAs or friendly governments, that might change
the equation enough to tempt them. Of course that's doubtful,
but stranger things have happened.

-kevin
-- 
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Nico Williams
On Thu, Mar 28, 2013 at 7:24 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:
 On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:
 [Rational response elided.]

 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds. Inevitably,
 it cost them a boatload too. We can only hope that Apple
 and others learn from these mistakes.

RSA did it for plausible, reasonable (if wrong) ostensible reasons not
related to LEA.

 OTOH, if Apple thought they could make a hefty profit by

There is zero chance Apple would be backdooring anything for profit
considering the enormity of the risk they would be taking.  If they do
it at all it's because they've been given no choice (ditto their
competitors).

 selling to LEAs or friendly governments, that might change
 the equation enough to tempt them. Of course that's doubtful
 though, but stranger things have happened.

This is the tin-foil response.  But note that the more examples of
bad-idea backdoors, the less confidence we can have in the rational
argument, and the more the tin-foil argument becomes the rational one.
 In the worst case scenario we can't trust much of anything and we
can't open-code everything either.  But in the worst case scenario
we're also mightily vulnerable to attack from bad guys.  Let us hope
that there are enough rational people at or alongside LEAs to temper
the would-be arm-twisters that surely must exist within those LEAs.

Nico
--


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas

On Mar 28, 2013, at 5:24 PM, Kevin W. Wall kevin.w.w...@gmail.com wrote:

 
 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds. Inevitably,
 it cost them a boatload too. We can only hope that Apple
 and others learn from these mistakes.

No, RSA was careless and stupid. It's not the same thing at all.

SecurID seeds are shared secrets and the authenticators need them. They did 
nothing like what we were talking about -- handing them out so the security of 
the device could be compromised. They kept their own crown jewels on some PC on 
their internal network and they were hacked for them.

 
 OTOH, if Apple thought they could make a hefty profit by
 selling to LEAs or friendly governments, that might change
 the equation enough to tempt them. Of course that's doubtful
 though, but stranger things have happened.

Excuse me, but Apple in particular is making annual income in the same ballpark 
as the GDP of Ireland, the Czech Republic, or Israel. They could bail out 
Cyprus with pocket change.

If you want to go all tinfoil hat, you shouldn't be thinking about friendly 
governments buying them off, you should be thinking about *them* buying their 
own country.

Jon


Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jeffrey Walton
On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:

 [Not replied-to cryptopolitics as I'm not on that list -- jdcc]

 On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:

 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?

 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.

 I've been involved in these sorts of questions in various companies that I've 
 worked.
Somewhat related: are you bound to some sort of non-disclosure with
Apple? Can you discuss all aspects of the security architecture, or is
it [loosely] limited to Apple's public positions?

 If you make a bunch of devices with keys burned in them, if you *wanted* to 
 retain the keys, you'd have to keep them in some database, protect them, 
 create access  controls and procedures so that only the good guys (to your 
 definition) got them, and so on. It's expensive.
Agreed.

 You're also setting yourself up for a target of blackmail
 Eventually, so many people know about the keys that it's not a secret. Your 
 company loses its reputation.
Agreed.

 On the other hand, if you don't retain the keys it doesn't cost you any money 
 and you get to brag about how secure your device is, selling it to customers 
 in and out of governments the world over.
Agreed.

I regard these as the positive talking points. There's no sleight of
hand in your arguments, and I believe they are truthful. I expect them
to be in the marketing literature.

 I suspect Apple has the methods/processes to provide it.
 I have no more evidence than you do, but my guess is that they don't, for
 the simple reason that if they did that fact would leak out. ...
 And that's just what I described above. I just wanted to put a sharper point 
 on it.
 I don't worry about it because truth will out. ...
A corporate mantra appears to be 'catch me if you can', 'deny deny
deny', and then 'turn it over to marketing for a spin'.

We've seen it in the past with for example, Apple and location data,
carriers and location data, and Google and wifi spying. No one was
doing it until they got caught.

Please forgive my naivete or my ignorance if I'm seeing things in a
different light (or shadow).

 I think there's much more to it than a simple brute force.
 We know that those brute force techniques exist (there are several vendors
 of forensic recovery tools), 
 The unlocking feature on iOS uses the hardware to spin crypto operations on 
 your passcode...

Apple designed the hardware and holds the platform keys. So that I'm clear
and not letting my imagination run too far ahead:

Apple does not have or use, for example, custom boot loaders signed by
the platform keys used in diagnostics, for data extraction, etc.

There are no means to recover a secret from the hardware, such as a
JTAG interface or a datapath tap. Just because I can't do it, it does
not mean Apple, a University with EE program, Harris Corporation,
Cryptography Research, NSA, GCHQ, et al cannot do it.

A naturally random event is used to select the hardware keys, and not
a deterministic event such as hashing a serial number and date of
manufacture.

These are some of the goodies I would expect a manufacturer to provide
to select customers, such as LE and GOV. I would expect that the
information would be held close to the corporate chest, so folks could
not discuss it even if they wanted to.

jeff


[cryptography] RSA SecurID breach (was Re: Here's What Law Enforcement Can Recover From A Seized iPhone)

2013-03-28 Thread Kevin W. Wall
Note subject change.

On Thu, Mar 28, 2013 at 9:36 PM, Steven Bellovin s...@cs.columbia.edu wrote:
 All excellent, well articulated points. I guess that means that
 RSA Security is an insane company then since that's
 pretty much what they did with the SecurID seeds.

 Well, we don't really know what RSA stores; it's equally plausible
 that they have a master key and use it to encrypt the device serial
 number to produce the per-device key.  But yes, that's isomorphic.
 However...

 What Jon left out of his excellent analysis is this: what is the
 purpose of having such a database?  For Apple, which pushes a host
 or cloud backup solution, there's a lot less point; if a phone is
 dying, you restore your state onto a new phone.  They simply have no
 reason to need such keys.  With RSA, though, it's a different story.
 They're shipping boxes with hundreds or thousands of tokens to
 customers; these folks need some way to get the per-token keys into
 a database.  How do they do that?  For that matter, how does RSA
 get keys into the devices?  The outside of the devices has a serial
 number; the inside has a key.  How does provisioning work?  It's
 all a lot simpler, for both manufacturing and the customer, if
 the per-device key is a function of a master key and the serial
 number.  You then ship the customer a file with the serial number
 and the per-device key.  When I look at p. 64 of
 ftp://ftp.rsa.com/pub/docs/AM7.0/admin.pdf that sounds like what
 happens: there's a per-token XML file that you have to import
 into your system.
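
For concreteness, the scheme hypothesized in the quote above would look
something like this sketch, with HMAC standing in for encrypting the serial
number under a master key; every name here is illustrative, and as the reply
below notes, RSA says it did something else.

import hashlib
import hmac

def token_seed(master_key: bytes, serial: str) -> bytes:
    # Per-token seed as a pure function of factory master key and serial:
    # only the master key would ever need to be stored long-term.
    return hmac.new(master_key, serial.encode(), hashlib.sha256).digest()[:16]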

Yes; that's exactly what you do. And RSA has told us that they do
not have a master key used to generate them but that they are
generated randomly.  They told us that the DB that was snatched
contained the seeds and serial #s. If a master key and serial #
were in themselves sufficient, it wouldn't make much sense for them
also to store the seeds.  Of course, they could have been lying,
but if that's the case, it's a company-wide RSA conspiracy, because
I've heard the same story from two quite independent sources
at RSA who work in separate divisions.

 Translation: at some point in every token's life, RSA has to have
 a database with the keys.  Do they delete it?  Is it available
 to help customers who haven't backed up their own database properly?
 I don't know the answer to those questions; I do claim that they
 at least have a reason, which Apple apparently does not.

Long ago, they apparently deleted it, at least after a short
period of time, once they were confident that the fobs were
delivered and the seeds from the XML file imported.

But then they started to get calls from customers who had
lost the XML file or had their local DBs munged and needed
the seeds to restore usage to their fobs. Ultimately this led
to RSA replacing the customers' fobs (at some cost to the
customer).  RSA claims that they saw an opportunity to
save their customers grief and, according to them, started
offering their SecurID customers a free service of keeping a
backup of their customers' serial #s and seeds. What I was
told is that this originally was part of the contract and the
customer could opt-out of the free backup service if they
desired. But at some point, following numerous contract
revisions they apparently stopped even mentioning this
service in their purchase contracts. (One RSA person
speculated that this was probably because so few customers
had opted-out and all the customers who availed themselves
of the service were so happy their current fobs weren't toast
that someone in sales figured it would just be a good idea
to provide the service to all of their customers.)

Now, as I heard the story, RSA originally did everything
right and they had completely air-gapped their DB and
web interface to it and their CSRs had to use a manual swivel
chair process to manually copy the seed into an email
destined for the customer who was missing the seeds for their
fobs.  However, eventually there was pressure from their
customers to speed up the process of seed recovery and
they removed the air gap. The rest is history... a few
well-targeted spear phishing attacks with a 0day Adobe
Flash exploit in an Excel spreadsheet and eventually
we were introduced to the new APT acronym.

 Btw: I've never been convinced that what was stolen from RSA was,
 in fact, keys or master keys.  Consider: when someone logs in
 to a system with an RSA token, they enter a userid, probably a PIN,
 and the code displayed on the token.  This hypothetical database
 or master key maps serial numbers -- not userids, and definitely
 not PINs since RSA wouldn't have those -- to keys.  How does an
 attacker with this database figure out which userid goes with
 which serial number?


Mostly answered above. We were also told that there was
a separate database of customers and SecurID serial #s
that was not air-gapped. What was not clear is whether
or not all individual serial #s were there. It seems unlikely,
but for bulk shipments, RSA usually ships 

Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jon Callas


On Mar 28, 2013, at 6:59 PM, Jeffrey Walton noloa...@gmail.com wrote:

 On Thu, Mar 28, 2013 at 7:27 PM, Jon Callas j...@callas.org wrote:
 
 [Not replied-to cryptopolitics as I'm not on that list -- jdcc]
 
 On Mar 28, 2013, at 3:23 PM, Jeffrey Goldberg jeff...@goldmark.org wrote:
 
 Do hardware manufacturers and OS vendors have alternate methods? For
 example, what if LE wanted/needed iOS 4's hardware key?
 
 You seem to be talking about a single iOS 4 hardware key. But each device
 has its own. We don't know if Apple actually has retained copies of that.
 
 I've been involved in these sorts of questions in various companies that 
 I've worked.
 Somewhat related: are you bound to some sort of non-disclosure with
 Apple? Can you discuss all aspects of the security architecture, or is
 it [loosely] limited to Apple's public positions?

From being there, Apple's culture and practices are such that everything they 
do is focused on making cool things for the customers. Apple fights for the 
users. The users' belief and faith in Apple saved it from near death. 
Everything there focuses on how it's good for the users. Also remember that 
there are many axes of good for the users. User experience, cost, reliability, 
etc. are part of the total equation along with security. People like you and me 
are not the target; it's more the proverbial "my mom" sort of user.

Moreover, they're not in it for the money. They're in it for the cool. 
Obviously, one has to be profitable, and obviously high margins are better than 
low ones, but the motivator is the user, and being cool. Ultimately, they do it 
for the person in the mirror, not for the cash.

I believe that Apple is too closed-mouthed about a lot of very, very cool 
things that they do security-wise. But that's their choice, and as a gentleman, 
I don't discuss things that aren't public because I don't blab. NDA or no NDA, 
I just don't blab.


 I regard these as the positive talking points. There's no slight of
 hand in your arguments, and I believe they are truthful. I expect them
 to be in the marketing literature.
 
 I suspect Apple has the methods/processes to provide it.
 I have no more evidence than you do, but my guess is that they don't, for
 the simple reason that if they did that fact would leak out. ...
 And that's just what I described above. I just wanted to put a sharper point 
 on it.
 I don't worry about it because truth will out. ...
 A corporate mantra appears to be 'catch me if you can', 'deny deny
 deny', and then 'turn it over to marketing for a spin'.
 
 We've seen it in the past with for example, Apple and location data,
 carriers and location data, and Google and wifi spying. No one was
 doing it until they got caught.
 
 Please forgive my naiveness or my ignorance if I'm seeing things is a
 different light (or shadow).

Well, with locationgate at Apple, that was a series of stupid and unfortunate 
bugs and misfeatures. Heads rolled over it.

From what I have read of the Google wifi thing, it was also stupid and 
unfortunate. The person who coded it up was a pioneer of wardriving. People 
realized they could do cool things and did them without thinking it through. 
Thinking it through means that there are things to do that are cool if you are 
just a hacker, but not if you are a company. If that had been written up here, 
or submitted at a hacker con, everyone would have cheered -- and basically did, 
since arguably a pre-alpha of that hack was a staple of DefCon contests. The 
superiors of the brilliant hackers didn't know or didn't grok what was going on.

In neither of those cases was anyone trying to spy. In each case, in a different 
way, people were building cool features and some combination of bugs and failure to think 
it through led to each of them. It doesn't excuse mistakes, but it does explain 
them. Not every bad thing in the world happens by intent. In fact, most of them 
don't.

 
 Apple designed the hardware and hold the platform keys. So I'm clear
 and I'm not letting my imagination run too far ahead:
 
 Apple does not have or use, for example, custom boot loaders signed by
 the platform keys used in diagnostics, for data extraction, etc.
 
 There are no means to recover a secret from the hardware, such as a
 JTAG interface or a datapath tap. Just because I can't do it, it does
 not mean Apple, a University with EE program, Harris Corporation,
 Cryptography Research, NSA, GCHQ, et al cannot do it.

I alluded to that before. Prying secrets out of hardware is known technology. 
If you're willing to destroy the device, there's a lot you can do, from 
decapping the chip, to just x-raying it, etc.

 
 A naturally random event is used to select the hardware keys, and not
 a deterministic event such as hashing a serial number and date of
 manufacture.
 
 These are some of the goodies I would expect a manufacturer to provide
 to select customers, such as LE and GOV.

Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread James A. Donald

On 2013-03-29 8:23 AM, Jeffrey Goldberg wrote:

I suspect Apple has the methods/processes to provide it.

I have no more evidence than you do, but my guess is that they don't, for
the simple reason that if they did that fact would leak out. Secret
conspiracies (and that's what it would take) grow less plausible
as a function of the number of people who have to be in on it.


Real secret conspiracy:  Small enough to fit around a coffee table.

Semi-secret conspiracy:  Big enough to exercise substantial power, 
powerful enough to say "ha ha, you must be crazy; also racist, pawn of 
big oil, Nazi" whenever anything leaks out.


Looking back at the early twentieth century, we find an ample supply of 
secret conspiracies which must have had hundreds of thousands of people in 
the know.


I could mention two rather famous ones, but this would divert the list 
off topic because three people with ten sock puppets each would then 
post a bunch of messages saying "ha ha, you must be crazy."





(Furthermore I suspect that implausibility rises super-linearly with
the number of people in on a conspiracy.)


I think there's much more to it than a simple brute force.

We know that those brute force techniques exist (there are several vendors
of forensic recovery tools), and we've got very good reasons to believe
that only a small portion of users go beyond the default 4 digit passcode.
In case of LEAs, they can easily hold on to the phones for the 20 minutes
(on average) it takes to brute force them.

So I don't see why you suspect that there is some other way that only
Apple (or other relevant vendor) and the police know about.

Cheers,

-j





Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread James A. Donald

On 2013-03-29 10:47 AM, Nico Williams wrote:

   There is zero chance Apple would be backdooring anything for profit


They might, however, be backdooring everything, and very likely are, to 
avoid getting their faces broken in with rifle butts.





Re: [cryptography] Here's What Law Enforcement Can Recover From A Seized iPhone

2013-03-28 Thread Jeffrey Goldberg
On 2013-03-28, at 10:42 PM, Jon Callas j...@callas.org wrote:

 On Mar 28, 2013, at 6:59 PM, Jeffrey Walton noloa...@gmail.com wrote:
 
 We've seen it in the past with for example, Apple and location data,

 Well, with locationgate at Apple, that was a series of stupid and unfortunate 
 bugs and misfeatures. Heads rolled over it.

There are a couple of interesting lessons from LocationGate. The scary 
demonstrations were out and circulating before the press and public realized 
that what was cached were the locations of cell towers, not the phone's actual 
location, and that there was a good reason for caching that data. But I suspect 
that the large majority of people who remember it are still under the 
impression that Apple was arbitrarily storing the actual locations of the 
phone for no good reason.

The scare story spread quickly, with the more hyperbolic accounts getting the 
most attention. The corrective analysis probably didn't penetrate as widely.

The second lesson has to do with the status of iOS protection classes that 
can leave things unencrypted even when the phone is locked. There are things 
that we want our phones to do before they are unlocked with a passcode. We'd 
like them to know which local WiFi networks they can join, and we'd like them to 
precompute our locations so that that is up and ready as soon as we do unlock 
the phones. As a consequence, things like WiFi passwords are not (or at least 
were not) stored in a way that is protected by the device key. The data 
protection classes NSFileProtectionNone and 
NSFileProtectionCompleteUntilFirstUserAuthentication have legitimate uses, but 
they do lead to cases where people may think that some data is protected when 
their device is off or locked when in fact it isn't.

The trick is how to communicate this to people, most of whom do not wish to be 
overwhelmed with information.  There are lots of other things like this 
(encrypted backups and thisDeviceOnly, 10 seconds after lock before keys are 
erased, etc.) that people really ought to know. The information about these 
isn't secret; Apple publishes it. It takes some level of sophistication to 
understand, but mostly what it takes is interest.
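
For reference, the availability semantics under discussion, in capsule form.
The constant names are Apple's real data-protection classes; the one-line
descriptions are paraphrases of the published behavior, not Apple's wording.

PROTECTION_CLASSES = {
    "NSFileProtectionComplete":
        "per-file key evicted ~10 s after lock; unreadable while locked",
    "NSFileProtectionCompleteUntilFirstUserAuthentication":
        "key kept from first unlock after boot; readable while locked",
    "NSFileProtectionNone":
        "wrapped only by the device key; readable whenever the device is on",
}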

 In neither of those cases was anyone trying to spy. In each differently, 
 people were building cool features and some combination of bugs and failure 
 to think it through led to each of them. It doesn't excuse mistakes, but it 
 does explain them. Not every bad thing in the world happens by intent. In 
 fact, most of them don't.

What's the line? Never attribute to malice what can be explained by 
incompetence.

At the same time we are in the business of designing systems that will protect 
people and their data under the assumption that the world is full of hostile 
agents. As I like to put it, I lock my car not because I think everyone is a 
crook, but because I know that car thieves do exist.

Cheers,

-j