Re: Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-09-03 Thread Marsh Ray

On 09/03/2010 03:45 AM, Ben Laurie wrote:


That's the whole point - a hash function used on an arbitrary message
produces one of its possible outputs. Feed that hash back in and it
produces one of a subset of its possible outputs. Each time you do this,
you lose a little entropy (I can't remember how much, but I do remember
David Wagner explaining it to me when I discovered this for myself quite
a few years ago).


I found this to be interesting:
Danilo Gligoroski, Vlastimil Klima: Practical consequences of the 
aberration of narrow-pipe hash designs from ideal random functions. 
IACR ePrint Report 2010/384.

http://eprint.iacr.org/2010/384.pdf

The theoretical loss is -log2(1 - 1/e), about 0.66 bits of entropy, per
log2 N for N additional iterations (a single pass of an ideal random
function covers only a 1 - 1/e fraction of its output space).

This assumes that there is no systematic correlation between the hash 
input and the calculation of the output, which is not really a good 
assumption for the MDs and SHAs in current use. They accept, process, 
and output vectors of 32- or 64-bit words, even preserving their order 
to some extent. So it seems reasonable to expect that, to the extent 
these actual functions differ from an ideal random function, they could 
easily have exactly the type of systematic bias that repeated iteration 
would amplify.


I played with some simulations with randomly-generated mappings; the 
observed value would at times wander over 1.0 BoE/log2 N (bits of 
entropy lost per log2 of the iteration count).
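Something like this is the minimal version of the experiment (a sketch
only; the table size, seed, and iteration counts are arbitrary choices):

    import math, random
    from collections import Counter

    N_BITS = 16                  # 2**16-entry table, easily fits in RAM
    N = 1 << N_BITS

    random.seed(2010)            # arbitrary; any decent non-crypto PRNG
    f = [random.randrange(N) for _ in range(N)]   # random mapping,
                                                  # not a permutation

    def entropy_bits(dist):
        # Shannon entropy, in bits, of a Counter over output values
        total = sum(dist.values())
        return -sum(c / total * math.log2(c / total)
                    for c in dist.values())

    dist = Counter(range(N))     # uniform distribution over all inputs
    for k in range(1, 257):
        step = Counter()
        for x, c in dist.items():
            step[f[x]] += c      # push the distribution through f
        dist = step
        if k > 1 and k & (k - 1) == 0:    # report at powers of two
            loss = N_BITS - entropy_bits(dist)
            print("k=%4d  loss=%6.3f bits  loss/log2(k)=%.3f"
                  % (k, loss, loss / math.log2(k)))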


It seems like this entropy loss could be largely eliminated by hashing 
the previous two intermediate results on each iteration instead of just 
one. But this basically amounts to widening the data path, so perhaps it 
would be cheating for the purposes of this discussion.
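To illustrate what I mean (a sketch, with SHA-256 standing in for
whatever hash is under discussion, and seeds taken as bytes):

    import hashlib

    def H(data):
        return hashlib.sha256(data).digest()

    def iterate_narrow(seed, n):
        # h_{i+1} = H(h_i): the construction that slowly leaks entropy
        h = H(seed)
        for _ in range(n):
            h = H(h)
        return h

    def iterate_wide(seed, n):
        # h_{i+1} = H(h_i || h_{i-1}): feeding back the previous two
        # results keeps the effective chaining state twice as wide
        prev, h = H(seed + b"0"), H(seed + b"1")
        for _ in range(n):
            prev, h = h, H(h + prev)
        return h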


- Marsh



Re: Merkle Signature Scheme is the most secure signature scheme possible for general-purpose use

2010-09-03 Thread Marsh Ray

On 09/03/2010 01:22 PM, Ben Laurie wrote:

On 03/09/2010 17:01, Marsh Ray wrote:

I played with some simulations with randomly-generated mappings; the
observed value would at times wander over 1.0 BoE/log2 N.


I think when I did this, I fully enumerated the behaviour of a truncated
hash (e.g. the first 20 bits of MD5).


I represented the mapping entirely as a table in RAM (it sure is nice 
living in the age of the 4 GB laptop). Instead of truncated MD5, I 
initialized my table from a good but non-crypto PRNG. Having it in a 
table made it practical to do many repeated applications and watch how 
the rate of entropy loss varied.


I should clean up that code and graph the output; it seemed to be making 
some interesting curves.


- Marsh



Re: Randomness, Quantum Mechanics - and Cryptography

2010-09-07 Thread Marsh Ray

On 09/06/2010 09:49 PM, John Denker wrote:


If anybody can think of a practical attack against the randomness
of a thermal noise source, please let us know.  By "practical" I
mean to exclude attacks that use such stupendous resources that
it would be far easier to attack other elements of the system.


Blast it with RF for one.

Typically the natural thermal noise amounts to just a few millivolts, 
and so requires a relatively sensitive A/D converter. This makes it 
susceptible to injected "unnatural noise" overloading the conversion and 
changing most of the output bits to predictable values.


Using digital outputs from an enclosed module with enough shielding 
could probably prevent it. But there are plenty of environments which 
are too small (e.g., smart cards) or are potentially in the hands of the 
attacker for an extended period of time (smart cards, DRM devices, power 
meters, etc.).


- Marsh



Re: Randomness, Quantum Mechanics - and Cryptography

2010-09-07 Thread Marsh Ray

On 09/07/2010 12:58 PM, John Denker wrote:

On 09/07/2010 10:21 AM, Marsh Ray wrote:


If anybody can think of a practical attack against the randomness
of a thermal noise source, please let us know.  By "practical" I
mean to exclude attacks that use such stupendous resources that
it would be far easier to attack other elements of the system.


Blast it with RF for one.


1) This is not an argument in favor of quantum noise over
thermal noise, because the same attack would be at least
as effective against quantum noise.


Agreed.


2) You can shield things so as to make this attack very,
very difficult.


The point is that it's a generic, relatively low-tech attack that is 
likely to be effective against a straightforward implementation of 
the general idea.



3) The attack is detectable long before it is effective,
whereupon you can shut down the RNG, so it is at best a
DoS attack.


Only if the engineers know about it and spend the resources to build in 
resistance to it. The system which consumes the entropy also has to 
look for the "I'm not producing any more entropy" signal. The proper 
operation of this signaling has to be part of the test process, so now 
there needs to be a way to simulate the attack scenario for testing. 
Presumably this becomes another input to the system, which itself must 
be tested. All this adds time, cost, and complexity, and it's not 
surprising that they don't always get it perfect.
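Even the most minimal kind of continuous self-test (sketched here; the
threshold is made up for the example, and a real design needs a
worked-out false-alarm rate) is one more thing to specify, implement,
and verify:

    def monitored_source(read_sample):
        # Yields raw bytes from the noise source, raising an alarm if
        # the output looks stuck (e.g., the ADC pinned at a rail by
        # injected RF). REPEAT_LIMIT is illustrative only.
        REPEAT_LIMIT = 32
        last = read_sample()
        run = 1
        yield last
        while True:
            b = read_sample()
            run = run + 1 if b == last else 1
            last = b
            if run >= REPEAT_LIMIT:
                raise RuntimeError("entropy source failure: output stuck")
            yield b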


There is some evidence that engineers designing chips that go into 
actual products (little stuff like girls' toys and smart grid power 
meters) aren't familiar with this:


http://www.flickr.com/photos/travisgoodspeed/4142689541/
"This graph shows the counts of individual seed bytes in a poor random 
number generator. The sample width is a single integer, and the RNG byte 
is expected to be one of the very few spikes presented on this graph."


Note that the above description is a little confusing because there are 
multiple problems going on here. The "seed bytes" are coming from a 
poorly engineered radio source and are also going into a "poor random 
number generator".


Here's a better description:
http://rdist.root.org/2010/01/11/smart-meter-crypto-flaw-worse-than-thought/


 And then you have to compare it against
other brute-force DoS attacks, such as shooting the
computer with an AK-47.


Well, the idea of physical stress attacks is that you get the system to 
do something it isn't supposed to do (e.g., sign with a weak nonce).



Typically the natural thermal noise amounts to just a few millivolts,
and so requires a relatively sensitive A/D converter. This makes it
susceptible to injected "unnatural noise" overloading the conversion and
changing most of the output bits to predictable values.


Even the cheapest of consumer-grade converters has 16 bits of
resolution, which is enough to resolve the thermal noise and
still have _two or three orders of magnitude_ of headroom.
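(The arithmetic is easy enough to check; assuming, say, a 2 V
full-scale input:)

    full_scale_v = 2.0                 # assumed converter full scale
    bits = 16
    lsb_v = full_scale_v / 2**bits     # ~30 microvolts per code
    noise_v = 3e-3                     # "a few millivolts" of noise
    print("noise spans ~%.0f codes" % (noise_v / lsb_v))       # ~98
    print("headroom factor ~%.0f" % (full_scale_v / noise_v))  # ~667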


Were they engineered for use with crypto to resist attack? Were they 
tested in an actively hostile RF environment?


It's really unwise to try to reason about the behavior of complex 
systems like digital circuitry when they're operated outside their 
absolute maximum specifications. You'd have to re-qualify them for 
such use.



If
you are really worried about this, studio-grade stuff is still
quite affordable, and has even more headroom and better shielding.


And it will not get built into any product if it costs $0.01 more, 
unless the hardware engineer is able to justify the additional expense.



How much RF are we talking about here?


Probably very little if the engineer didn't take special precautions.

Also, the attacker gets to choose the frequency and direction to which 
the device is most susceptible, and combine this with all other 
techniques simultaneously. For example, perhaps he would run current 
through the external shielding or expose it to a static magnetic field 
(thus heating it or saturating its magnetic permeability).



At some point you can
undoubtedly DoS the RNG ... but I suspect the same amount of
RF would fry most of the computers, phones, and ipods in the
room.


So the attacker leaves his iPod out of the Faraday cage in which he's 
abusing the smart card or DRM device.



Is the RF attack in any way preferable to the AK-47 attack?


The attacker doesn't necessarily have to eliminate all entropy from the 
output, just enough of it that he can make up the difference with brute 
force or analytic techniques.


http://focus.ti.com/docs/prod/folders/print/cc2531.html
"Changes from Revision Original (September 2009) to Revision A" "Removed 
sentence that pseudorandom data can be used for security."


- Marsh



Re: Randomness, Quantum Mechanics - and Cryptography

2010-09-07 Thread Marsh Ray

On 09/07/2010 02:18 PM, Perry E. Metzger wrote:


The question is, can you make it more expensive to do that than to,
say, buy a new parking card or whatever else the smart card is being
used for. If the attack is fairly cheap and repeatable and yields
something reasonably valuable, you have a problem. If you can make the
attack expensive and only yield something cheap, you're doing well.


The designer often has wrong information about what the system will be 
used for. Most systems don't see much adoption and are discontinued 
because they don't make any money. Systems that succeed with low-value 
transactions tend to get repurposed for more and more important roles 
until the breaking point. SSL and Zigbee are two examples.


Imagine how much an additional shielded region would add to the cost of 
a cell phone that's expected to sell 50 million units. An engineer is 
probably going to be trading that cost off against some other feature 
with a tangible benefit. When the junior engineer speaks up and says 
"let's just use the microphone for entropy gathering instead", he's 
going to be considered a hero for saving millions.


An additional consideration is that the device must also operate 
reliably when someone puts popcorn in the microwave or uses an arc 
welder in the next room. The detector must absolutely never create a 
false positive.


Most actual consumer products sold will prefer to continue insecure 
operation rather than shut off. For example, the GSM standard includes a 
mechanism to notify the user on the display if they're connected to a 
cell tower with an unencrypted signal. Cell carriers typically disable 
this notification, presumably because it tangibly increases support 
costs for a benefit that appears highly theoretical. It's usually only 
when it's the interests of the manufacturer that are being protected 
that a device will actually go out of its way to find a reason to cease 
operation (e.g., DRM).


- Marsh



Re: Hashing algorithm needed

2010-09-08 Thread Marsh Ray

On 09/08/2010 10:45 AM, f...@mail.dnttm.ro wrote:

Hi.

Just subscribed to this list for posting a specific question. I hope
the question I'll ask is in place here.


Oh good, this makes me not the new guy now :-)

These seem like nice, standard authentication-system design questions. 
I'll give them a shot.



We do a web app with an Ajax-based client. Anybody can download the
client and open the app, only, the first thing the app does is ask
for login.


Using SSL here makes all the difference in the world. Without SSL, an 
attacker can modify your Javascript to do anything he wants, such as 
sending the password in plaintext or redirecting the browser to a 
malware site.


Since SSL is required for us to even discuss security in a meaningful 
way, most of the rest of my comments assume all URLs are https. Ideally, 
the only thing you serve out of port 80 is a redirect to https, and you 
support STS (Strict Transport Security):

http://en.wikipedia.org/wiki/Strict_Transport_Security
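For example (illustrative hostname and max-age; the header name is from
the STS draft). On port 80:

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/

and on the https side:

    Strict-Transport-Security: max-age=31536000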


The login doesn't happen using form submission, nor does it happen
via a known, standard http mechanism.


Still, you're sending something via a standard HTTP POST or GET method. 
I assume you're using a standard session cookie? Be sure it has the 
"secure" flag set.



What we do is ask the user for some login information, build a hash
out of it, then send it to the server and have it verified. If it
checks out, a session ID is generated and returned to the client.
Afterwards, only requests accompanied by this session ID are answered
by the server.

Right now, the hash sent by the browser to the server is actually not
a hash, but the unhashed login info. This has to be changed, of
course.


Although it doesn't seem right to me either, sending the plaintext 
password via an HTTP POST body is really the standard way to implement a 
login form over https.


Again, your servers will refuse to accept credentials except over SSL, 
right?



What we need is a hashing algorithm that: - should not generate the
same hash every time


I think the words you want to use are "select a standard, well-accepted 
hash algorithm for use in this authentication protocol". For example, 
SHA-256.


Hash functions (all functions really) by definition produce the same 
output for the same inputs. So you want to include "unpredictable" data 
in the input.



- i.e. should include some random elements


In addition to the random data chosen by the client, it would be good to 
have some random data sent by the server, too. It's better if this data 
is not sent back by the client directly; the legitimate server should 
already know what it sent.
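A sketch of the shape I mean (names and parameters are illustrative,
and note the caveats below about what this does and doesn't buy you):

    import hashlib, hmac, os

    def new_server_nonce():
        # generated fresh per login attempt and remembered server-side
        return os.urandom(16)

    def client_response(password, server_nonce):
        # client mixes its own random data with the server's nonce
        client_nonce = os.urandom(16)
        secret = hashlib.sha256(password.encode()).digest()
        proof = hmac.new(secret, server_nonce + client_nonce,
                         hashlib.sha256).digest()
        return client_nonce, proof

    def verify(stored_secret, server_nonce, client_nonce, proof):
        # stored_secret is sha256(password); note it's still a
        # password equivalent on the server -- see below
        expected = hmac.new(stored_secret, server_nonce + client_nonce,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

The response is different every time, yet verifiable against the stored
value.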



- should require little code to generate - should allow
verification of whether two hashes stem from the same login data,
without having access to the actual login data


That's the hard part.


We need to implement the hashing algorithm in Javascript and the
verification algorithm in Java, and it needs to execute reasonably
fast, that's why it has to require little code.


Standard hash algorithms usually have Javascript code available that 
should be fast enough for a login process. Java will have a native C 
implementation available in its crypto library.



None of us is really
into cryptography, so the best thing we could think of was asking for
advice from people who grok the domain.

The idea is the following: we don't want to secure the connection, we
just want to prevent unauthenticated/unauthorized requests.


The only way to do that is to "sign" the contents of each and every HTTP 
request with the password. There are schemes that do this, but again 
it's rather pointless if the bad guy may have supplied the Javascript 
that the browser is running in the first place.



Therefore, we only send a hash over the wire and store it in the
database when the user changes his password, and only send different
hashes when the user authenticates later on. On the server, we just
verify that the stored hash and the received hash match, when an
authentication request arrives.


It will take a little more than just comparing a received hash with a 
static database entry. If that were all there was to it, the transmitted 
hash would be a password equivalent.


In any case, if the attacker can see these transmitted hashes he can 
attempt to crack them at a later time to recover the password, unless 
there was a secret he doesn't know mixed into the hashed data (but how 
could there be, if you don't secure the connection?). So you're putting 
the complexity of the user's chosen password, plus a hashing function 
that runs quickly in Javascript, in a computational drag race with the 
attacker's set of gigahertz CPUs, GPUs, and Amazon EC2 nodes. The best 
you can do is include the random data to prevent him from using 
precomputed tables to help with the attack, and to raise the cost of 
cracking each password.
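Concretely, that means storing and comparing something like a salted,
deliberately slow hash rather than a bare one (a sketch; the iteration
count is a knob you turn up as hardware gets faster):

    import hashlib, hmac, os

    ITERATIONS = 100000       # illustrative; tune to your latency budget

    def store_password(password):
        salt = os.urandom(16)
        dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 ITERATIONS)
        return salt, dk       # store both in the database

    def check_password(password, salt, dk):
        cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                   ITERATIONS)
        return hmac.compare_digest(cand, dk)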


In practice, attackers can conduct millions of tries per second, so most 
passwords can be cracked relatively quickly. This is especially true for 
short or common passwords.

Re: Hashing algorithm needed

2010-09-14 Thread Marsh Ray

On 09/13/2010 07:24 PM, Ian G wrote:

On 11/09/10 6:45 PM, f...@mail.dnttm.ro wrote:


Essentially, the highest risk we have to tackle is the database.
Somebody having access to the database, and by this to the
authentication hashes against which login requests are verified,
should not be able to authenticate as another user. Which means, I
need an algorithm which should allow the generation of different
hashes for which it can be verified that they stem from the same login
info, without being able to infer this login info from a hash. This
algorithm is the problem I haven't solved yet. Other than that, I see
no way of protecting against a dictionary attack from somebody having
direct access to the database.


flj, I appreciate your systematic and conscientious engineering 
approach. But I haven't heard anything in your requirements that makes 
it sound like a journey outside of well-established protocols is 
justified here.


There are a few experienced people around here who could probably come 
up with a new custom scheme and get it right the first time. But the 
history of most (even professionally-designed) new security protocols 
usually includes the later discovery of serious weaknesses.



I don't recall the full discussion, but what you described is generally
handled by public key cryptography, and it is built into HTTPS.

Here's my suggestion:


+1

I have a similar setup going in a reasonably big production environment 
and it's working great.



1. In your initial account creation / login, trigger a creation of a
client certificate in the browser.


There may be a way to get a browser to generate a cert or CSR, but I 
don't know it. You can simply generate it at the server side instead.



1.b. record the client cert as the authenticator in the database.


2. when someone connects, the application examines the cert used, and
confirms the account indicated.


Note that you can get the full client cert presented by the web server 
and compare it (or a sufficiently long :-) hash of it) directly with 
what you have in the database. There may be no need to check signatures 
and so on if your server-side is centralized.
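E.g., with Apache's mod_ssl exporting the cert (SSLOptions
+ExportCertData), the application side can be as simple as this sketch
(db_lookup is a placeholder for your database query):

    import hashlib

    def account_for_request(environ, db_lookup):
        pem = environ.get("SSL_CLIENT_CERT")   # PEM cert from mod_ssl
        if not pem:
            return None                        # no cert: landing page
        fingerprint = hashlib.sha256(pem.encode()).hexdigest()
        return db_lookup(fingerprint)          # account row, or None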



If an unknown cert, transfer to a
landing page.
2.b note that there is no login per se, each request can as easily check
the client cert listed by Apache.


Most apps will want to ask the human to authenticate explicitly from 
time to time for one reason or another.



3. you just need some way to roll-over keys from time to time. Left for
later.


Make sure you have at least some way of revoking and renewing client 
certs, even if it's a code update. Just on the outside chance that, say, 
the keys got generated by Debian Etch's RNG or something.



3.b There are some other bugs, but if the approximate scheme works...


Three more recommendations:

Don't put anything sensitive in the X509 cert. Just a minimal userid or 
even random junk. You're just looking it up in a database.


Disable TLS renegotiation unless you control both the clients and the 
servers and can ensure they're all patched for CVE-2009-3555. Don't 
expect to be able to use renegotiation to "hide" the contents of the 
client cert; that never worked against an active attacker anyway.


Use a separate DNS name for the https site that accepts client certs 
from the one that does not. The reason is that the client cert will have 
to be requested on the initial handshake, and requesting a client cert 
will cause many browsers to pop up a dialog. Not something you want on 
your secure home page.


Again, this is a good scheme; it's the way SSL/TLS has been intended to 
be used for authenticating clients since SSLv3. It offers additional 
protections from the server's perspective, too: the server is no longer 
forced to transitively trust the union of all trusted root CA certs of 
all allowed clients in order to prove the non-existence of a 
man-in-the-middle.


- Marsh



Re: Hashing algorithm needed

2010-09-14 Thread Marsh Ray

On 09/14/2010 09:13 AM, Ben Laurie wrote:

On 14/09/2010 12:29, Ian G wrote:

On 14/09/10 2:26 PM, Marsh Ray wrote:

On 09/13/2010 07:24 PM, Ian G wrote:



1. In your initial account creation / login, trigger a creation of a
client certificate in the browser.


There may be a way to get a browser to generate a cert or CSR, but I
don't know it. But you can simply generate it at the server side.


Just to be frank here, I'm also not sure what the implementation details
are here.  I somewhat avoided implementation until it becomes useful.


FWIW, you can get browsers to generate CSRs and eat the resulting certs.
The actual UIs vary from appalling to terrible.

Of some interest to me is the approach I saw recently (confusingly named
WebID) of a pure Javascript implementation (yes, TLS in JS, apparently),
allowing UI to be completely controlled by the issuer.


First, let's hear it for out of the box thinking. *yay*

Now, a few questions about this approach:

How do you deliver Javascript to the browser securely in the first 
place? HTTP?


How do you get the user to save his private key file? Copy and paste?

How does the proper Javascript later access the user's private key securely?

How do they securely wipe memory in Javascript?

How do they resist timing attacks? In practice, an attacker can probably 
get the browser to repeatedly sign random stuff with the client cert 
even while he's running his own script in the same process.



Ultimately this
approach seems too risky for real use, but it could be used to prototype
UI, perhaps finally leading to something usable in browsers.


A sad indictment of browser vendor user interface priorities.


Slide deck here: http://payswarm.com/slides/webid/#(1)

(note, videos use flash, I think, so probably won't work for anyone with
their eye on the ball).

Demo here: https://webid.digitalbazaar.com/manage/


"This Connection is Untrusted"

- Marsh



Re: Certificate-stealing Trojan

2010-09-28 Thread Marsh Ray

On 09/27/2010 08:26 PM, Rose, Greg wrote:


On 2010 Sep 24, at 12:47 , Steven Bellovin wrote:


Per
http://news.softpedia.com/news/New-Trojan-Steals-Digital-Certificates-157442.shtml
there's a new Trojan out there that looks for and steals Cert_*.p12
files -- certificates with private keys.  Since the private keys
are password-protected, it thoughtfully installs a keystroke logger
as well


Ah, the irony of a trojan stealing something that, because of lack of
PKI, is essentially useless anyway...


While I agree with the sentiment on PKI, we should accept this evidence 
for what it is:


There exists at least one malware author who, as of recently, did not 
have a trusted root CA key.


Additionally, the Stuxnet trojan is using driver-signing certs pilfered 
from the legitimate parties the old-fashioned way. This suggests that 
even professional teams with probable state backing either lack that 
card or are saving it to play in the next round.


Is it possible that the current PKI isn't always the weakest link in the 
chain? Is it too valuable of a cake to ever eat? Or does it just leave 
too many footprints behind?


- Marsh



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Marsh Ray

On 09/30/2010 10:41 AM, Thor Lancelot Simon wrote:

On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:


Thor Lancelot Simon writes:

a significant net loss of security, since the huge increase in computation
required will delay or prevent the deployment of "SSL everywhere".


That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.


+1.

Why are multi-core GHz server-oriented CPUs providing hardware 
acceleration for AES rather than RSA?


There may be reasons: AES side channels, patents, marketing, etc.

But if it really were such a big limitation, you'd think it would be a 
selling feature for server chips by now. Maybe in a sense it already is. 
What else are you going to do with that sixth core you stick behind the 
same shared main-memory bus?



At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.


I could be wrong, but I get the sense that there's not really a high 
proportion of sites which are:


A. currently running within an order of magnitude of maxing out server 
CPU utilization on 1024 bit RSA, and


B. using session resumption to its fullest (eliminates RSA when it can 
be used), and


C. for which an upgrade to raw CPU power would represent a big budget 
problem.


OTOH, if it increased the latency and/or power consumption for 
battery-powered mobile client devices, that could be noticeable for a 
lot of people.



Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data before application servers see
it.  The inspection or modification function often does not parallelize
nearly as well as the web application logic itself, and so it is often
not practical to handle it in a distributed way and "just add more CPU".


The unwrapping of the SSL should parallelize just fine. I think the IT 
term for that is "scalability". We should be so lucky that all our 
problems could be solved by throwing more silicon at them!


And if there are higher-layer inspection methods (say, virus scanning) 
which don't parallelize, wouldn't they have the same issue without 
encryption?



At present, these devices use the highest performance modular-math ASICs
available and can just about keep up with current web applications'
transaction rates.  Make the modular math an order of magnitude slower
and suddenly you will find you can't put these devices in front of some
applications at all.


Or the vendors get to sell a whole new generation of boxes again.


This too will hinder the deployment of "SSL everywhere",


It doesn't bother me in the least if deployment of dragnet-scale, 
interception-friendly SSL is hindered. But you may be right that it has 
some kind of effect on overall adoption.



and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.


Most sites do run "some particular application". For them, it's either a 
problem, an annoyance, or not noticeable at all. The question is what 
proportion of situations are going to be noticeably impacted.


I imagine increasing the per-handshake costs from, say, 40 core-ms to 
300 core-ms will have wildly varying effects depending on the system. It 
might not manifest as a linear increase of anything that people care to 
measure.
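For what it's worth, "openssl speed rsa1024 rsa2048" gives the raw
numbers on any given box. Here's the same comparison as a sketch using
the Python "cryptography" package (absolute numbers will vary wildly by
machine):

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    for bits in (1024, 2048):
        key = rsa.generate_private_key(public_exponent=65537,
                                       key_size=bits)
        t0 = time.perf_counter()
        n = 200
        for _ in range(n):
            key.sign(b"x" * 32, padding.PKCS1v15(), hashes.SHA256())
        ms = (time.perf_counter() - t0) / n * 1000
        print("%d-bit RSA sign: %.2f ms/op" % (bits, ms))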


I agree, it does sound a bit hand-wavy though. :-)

- Marsh



Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-06 Thread Marsh Ray

On 10/06/2010 01:57 PM, Ray Dillinger wrote:

a 19-year-old just got a 16-month jail sentence for his refusal to
disclose the password that would have allowed investigators to see
what was on his hard drive.


I am thankful to not be an English "subject".


I suppose that, if the authorities could not read his stuff
without the key, it may mean that the software he was using may
have had no links weaker than the encryption itself


Or that the authorities didn't want to reveal their capability to break it.

Or that they wanted to make an example out of him.

Or...


-- and that
is extraordinarily unusual - an encouraging sign of progress in
the field, if of mixed value in the current case.

Really serious data recovery tools can get data that's been
erased and overwritten several times


Really? Who makes these tools? Where do they make that claim?

Wouldn't drive manufacturers have heard about this? What would they do 
once they realized that drives had this extra data storage capacity 
sitting unused?


I see this idea repeated enough that people accept it as true, but no 
one ever has a published account of one existing or having been used.


> (secure deletion being quite unexpectedly difficult)

Sure, but mainly because of stuff that doesn't get overwritten (i.e., 
drive firmware remaps sectors which then retain mostly valid data), not 
because atomic-force microscopy is available.



, so if it's ever been in your filesystem
unencrypted, it's usually available to well-funded investigators
without recourse to the key.  I find it astonishing that they
would actually need his key to get it.


What makes you think these investigators were well-funded?

Or that they wouldn't prefer to spend that money on other things?

Or that they would necessarily have asked the jailers to release the 
teen because they'd been successful in decrypting it? Perhaps their plan 
was simply to imprison him until he confesses.



Rampant speculation: do you suppose he was using a solid-state
drive instead of a magnetic-media hard disk?


SSDs retain info too. Due to their wear-leveling algorithms, they're 
quite systematic about minimizing overwrites.


But I doubt any of that is an issue in this case.

- Marsh



Re: English 19-year-old jailed for refusal to disclose decryption key

2010-10-07 Thread Marsh Ray

On 10/07/2010 12:10 PM, Bernie Cosell wrote:


There's no way to tell, if you used the first password, that you didn't 
decrypt everything.


Is there a way to prove that you did?

If yes, your jailers may say "We know you have more self-incriminating 
evidence there. Your imprisonment will continue until you prove that 
you've given us everything."


If no, your jailers may say "We know you have more self-incriminating 
evidence there. Your imprisonment will continue until you prove that 
you've given us everything."


Get it?


So in theory you could hide the nasty stuff behind the second password, 
a ton of innocent stuff behind the first password, and just give them 
the first password when asked.


If the encrypted file is large, and disk file fragmentation patterns, 
timestamps, etc. suggest it has grown through reallocation, the 4 KB 
grocery list you decrypt out of it is not going to convince anyone. 
On the other hand, if you produce a sufficient amount of relatively 
incompressible image, video, or encrypted data from it, you may be able 
to convince them that you've decrypted it all.


- Marsh
