On Wed, Jan 24, 2007 at 03:28:50PM -0800, Allen wrote:
> If 4 gigs is right, would it then be records to look for to break 
> the code via birthday attacks would be things like seismic data,

In case anyone else couldn't parse this, he means "the amount of
encrypted material necessary to break the key would be large" or
"the size of a lookup table would be large", or something along
those lines.
> Currently I'm dealing 
> with very large - though not as large as 4 gig - x-ray, MRI, and 
> similar files that have to be protected for the lifespan of the 
> person, which could be 70+ years after the medical record is 
> created. Think of the MRI of a kid to scan for some condition 
> that may be genetic in origin and has to be monitored and 
> compared with more recent results their whole life.

That's longer than computers have been available, and also longer
than modern cryptography has existed.  The only ways I would
propose to stay secure for that long are:
1) use a random key as large as the plaintext (one-time-pad)
2) prevent the ciphertext from leaking
   (quantum crypto, spread-spectrum communication, steganography)

Even then, I doubt Lloyd's would insure it.  Anyone who claims to know
what the state of the art will be like in 70+ years is a fool.  I
would be cautious about extrapolating more than five years.
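Option (1) above is simple enough to sketch.  A minimal one-time
pad in Python (the message and key names are just illustrative;
the guarantees hold only if the pad is truly random, at least as
long as the message, and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the
    # plaintext, and never reused -- otherwise every guarantee
    # of the one-time pad evaporates.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR operation.
otp_decrypt = otp_encrypt

msg = b"MRI scan, patient record"
pad = secrets.token_bytes(len(msg))  # in practice: a hardware RNG
ct = otp_encrypt(msg, pad)
assert otp_decrypt(ct, pad) == msg
```

The hard part, of course, is not the XOR; it is generating,
distributing, and destroying pads for 70 years' worth of data.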

The problem is not the amount of data under one key; that's easy
enough to handle: generate a random key for every n bits of
plaintext and encrypt those keys with a meta-key, creating a
two-level hierarchy.  You calculate an information-theoretic bound
on n by computing the entropy of the plaintext and the unicity
distance of the cipher.  Note that the data (keys) encrypted
directly with the meta-key are completely random, so the unicity
distance there is infinite.  Furthermore, one can't easily
brute-force the meta-key by trying candidate decrypted data keys
against the ciphertext, because the plaintext under each data key
is smaller than the unicity distance and therefore equivocates.
I'm not sure how this compounds when the meta-key encrypts
multiple keys; I'd have to look into that.  In any case, you can
create a deeper and deeper hierarchy as you go along.
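The two-level hierarchy might look like the following sketch.  The
"cipher" here is an illustrative HMAC-SHA256 keystream, purely so
the example is self-contained; a real deployment would use a
vetted AEAD, and all the function names are my own invention:

```python
import hmac, hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Illustrative stream cipher: HMAC-SHA256 in counter mode.
    # Stands in for whatever real cipher you would actually use.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hmac.new(key, nonce + block.to_bytes(8, "big"),
                      hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out += bytes(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

def encrypt_record(meta_key: bytes, record: bytes):
    # Fresh random data key per record; only the wrapped (encrypted)
    # data key ever sits under the long-lived meta-key.
    data_key = secrets.token_bytes(32)
    nonce = secrets.token_bytes(16)
    wrapped = keystream_xor(meta_key, nonce, data_key)
    ct = keystream_xor(data_key, nonce, record)
    return nonce, wrapped, ct

def decrypt_record(meta_key: bytes, nonce, wrapped, ct):
    data_key = keystream_xor(meta_key, nonce, wrapped)
    return keystream_xor(data_key, nonce, ct)

meta = secrets.token_bytes(32)
n, w, c = encrypt_record(meta, b"x-ray series 42")
assert decrypt_record(meta, n, w, c) == b"x-ray series 42"
```

The point of the structure is that the meta-key only ever sees
random data (the data keys), which is what makes its unicity
distance infinite.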

This bound is the limit for information-theoretic, or unconditional
security.  Shannon proved that a system with these characteristics is
unbreakable.  If you don't know what the entropy of the plaintext is,
you have to use a one-time pad.  The unicity distance of DES, last
time I looked, was so low that one might as well use a one-time pad.
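For concreteness, Shannon's unicity distance is U = H(K)/D, where
H(K) is the key entropy and D the per-character redundancy of the
plaintext language.  Plugging in commonly quoted (approximate)
figures for English and DES's 56-bit key:

```python
import math

def unicity_distance(key_bits: float, redundancy_bits_per_char: float) -> float:
    # Shannon: U = H(K) / D.  Beyond U characters of ciphertext,
    # the key is (information-theoretically) uniquely determined.
    return key_bits / redundancy_bits_per_char

# Approximate, commonly cited figures: English is log2(26) ~ 4.7
# bits/char raw, but only ~1.5 bits/char of actual entropy, so the
# redundancy D is roughly 3.2 bits/char.
D = math.log2(26) - 1.5
print(unicity_distance(56, D))  # DES: roughly 17-18 characters
```

Seventeen-odd characters of English ciphertext is indeed "so low
that one might as well use a one-time pad" for this threat model.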

With computational security, you can fudge a little by trying to
calculate how much data you can safely encrypt under one key.
However, I believe this value can only go down over time, as new
cryptanalytic attacks are developed against the cipher.

Another method is to derive many data keys from a larger meta-key
in a way that makes recovering the meta-key computationally
infeasible.  However, every time you hear "computationally
infeasible", remember that it is an argument from ignorance: we
don't know an efficient way to break it yet, or, if someone does,
they aren't talking.
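A standard way to do this kind of derivation is an HKDF-style
expansion (in the spirit of RFC 5869); here is a stdlib-only
sketch, with hypothetical record labels as the context input:

```python
import hmac, hashlib

def derive_key(meta_key: bytes, context: bytes, length: int = 32) -> bytes:
    # HKDF-expand-style derivation: each distinct context label
    # yields an independent-looking key, and going backwards from
    # derived keys to the meta-key should be computationally
    # infeasible -- with all the caveats that phrase deserves.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(meta_key, block + context + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

meta = b"\x00" * 32  # placeholder; use a random 256-bit secret
k1 = derive_key(meta, b"record:1001")
k2 = derive_key(meta, b"record:1002")
assert k1 != k2 and len(k1) == 32
```

Note the trade-off versus the hierarchy above: here the security
argument is computational, not information-theoretic.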

You can also make this argument more "scientific" by extrapolating
future attacks and computational advances from trends (Moore's Law
et al.) - see "Rules of Thumb in Data Engineering" from Microsoft
Research; it's discussed on the storagemojo.com blog and well
worth reading.
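As a back-of-envelope illustration of such an extrapolation
(assuming, crudely, that attacker compute doubles every 18 months,
which costs the defender one bit of brute-force margin per
doubling, and ignoring cryptanalytic breakthroughs entirely):

```python
def security_margin(key_bits: float, years: float,
                    doubling_months: float = 18) -> float:
    # Crude model: effective brute-force margin shrinks one bit
    # per hardware doubling.  Real erosion from cryptanalysis can
    # be much faster and is not modeled here.
    return key_bits - years * 12 / doubling_months

print(security_margin(56, 70))   # DES key over 70 years
print(security_margin(128, 70))  # 128-bit key over 70 years
```

On this naive model a 56-bit key is hopeless over a patient's
lifetime, while a 128-bit key retains a large margin - against
brute force only, which is exactly the limit of the argument.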

Furthermore, you should provide a mechanism for the crypto to be
changed transparently as technology progresses; an installed base is
forever, but computational security is not.  Permitting multiple
security configurations is complex, but I don't think anything short
of OTP can give an absolute assurance of confidentiality when the
opponent has access to the plaintext.
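One concrete way to permit that transparent change is a
self-describing envelope: every stored record carries a version
and an algorithm identifier, so old records can be re-read and
re-encrypted as ciphers are retired.  The format below is entirely
hypothetical, just to show the shape of the idea:

```python
import json, struct

# Hypothetical algorithm registry; new entries are added as the
# state of the art moves, and retired ones trigger re-encryption.
ALGORITHMS = {1: "aes-256-gcm", 2: "future-cipher"}

def pack(alg_id: int, ciphertext: bytes) -> bytes:
    # Length-prefixed JSON header followed by the raw ciphertext.
    header = json.dumps({"v": 1, "alg": alg_id}).encode()
    return struct.pack(">H", len(header)) + header + ciphertext

def unpack(blob: bytes):
    (hlen,) = struct.unpack(">H", blob[:2])
    header = json.loads(blob[2:2 + hlen])
    return ALGORITHMS[header["alg"]], blob[2 + hlen:]

blob = pack(1, b"\xde\xad\xbe\xef")
alg, ct = unpack(blob)
assert alg == "aes-256-gcm" and ct == b"\xde\xad\xbe\xef"
```

The complexity the paragraph warns about lives in the registry and
migration policy, not in the envelope itself.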

Another simple solution, the belt-and-suspenders method, is to
superencrypt the ciphertext with a structurally different cipher.
This makes the plaintext fed to the top-level cipher
computationally indistinguishable from random data, so the unicity
distance of the top-level cipher is effectively infinite, assuming
the computational security of the lower-level cipher.  I'm mixing
terminology here, but the net result is that the combination is
guaranteed to be at least as secure as either cipher alone, and in
most cases a weakness in one cipher will not be a weakness in the
other (this follows from the "structurally different" assumption).
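A cascade with independent keys might look like this sketch.  To
keep it self-contained I build two keystreams from HMAC over
SHA-256 and SHA3-256 - a Merkle-Damgard hash versus a sponge - as
a stand-in for "structurally different ciphers"; the construction
is illustrative, not a production design:

```python
import hmac, secrets

def stream_xor(data: bytes, key: bytes, hashname: str) -> bytes:
    # Keystream from HMAC-<hash> in counter mode.  SHA-256 and
    # SHA3-256 have different internal structures, standing in
    # for two structurally different ciphers.
    out = bytearray()
    for i in range((len(data) + 31) // 32):
        ks = hmac.new(key, i.to_bytes(8, "big"), hashname).digest()
        chunk = data[i * 32:(i + 1) * 32]
        out += bytes(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

def cascade_encrypt(pt: bytes, k1: bytes, k2: bytes) -> bytes:
    inner = stream_xor(pt, k1, "sha256")      # lower-level cipher
    return stream_xor(inner, k2, "sha3_256")  # top-level cipher

def cascade_decrypt(ct: bytes, k1: bytes, k2: bytes) -> bytes:
    return stream_xor(stream_xor(ct, k2, "sha3_256"), k1, "sha256")

k1, k2 = secrets.token_bytes(32), secrets.token_bytes(32)
ct = cascade_encrypt(b"patient record", k1, k2)
assert cascade_decrypt(ct, k1, k2) == b"patient record"
```

The independent keys matter: reuse one key for both layers and the
"as secure as either alone" argument no longer holds cleanly.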

You get the same effect by sending an encrypted file over an encrypted
network connection (unless the file is converted to base64 or
something prior to transmission), assuming that the opponent is not an
insider with access to decrypted network traffic.

Some assumptions to consider are:

What problem(s) are we trying to solve, and why?

Can we set up a secure distribution network for key material?

Who is the opponent?  How many years will they remain interested in a
captured record?

What is the budget?

Who are the users?  How competent are they?  How much can we educate them?

How will we fix bugs or update it?

What are the security priorities?
(usability, confidentiality, authenticity, integrity, availability,
identification, authorization, repudiation)

When the system fails, does it fail safe or fail unsafe?  What is
the backup method when it fails?  Can the main system be forced to
fail by the opponent?

What are the legal and economic considerations?  That is, are the
people who can best secure the system also the ones with the financial
liability?  If the patient is the beneficiary of the confidentiality,
do they have any control over which system they use, or do they have
any ability to make sure the system is secure?  Who bears the cost of
the system?  Ultimately, a patient should be able to choose between
priorities; if my drug allergies were stored on the system, and I had
a heart attack and passed out, confidentiality and self-authorization
would not be my primary concern.  It's easy to imagine other scenarios
with other priorities.

In my experience, the last consideration (legal and economic) is
the most common reason why systems fail.  Compare the security of
voting systems with the security of slot machines and you'll see
exactly how economics and self-interest trump everything else.

What you want to do is terribly difficult to do with any
assurance, unless you're willing to spend a lot of money on it.  I
would recommend figuring out what you're trying to do, then hiring
some consultants: cryptographers, systems analysts, penetration
testers, professional reliability engineers, and so on.
Brainstorm.  Propose ideas and shoot them down.  If your funds are
limited, publication on the web, in trade magazines, and on a
public email list may be the best way to get free design
critiques.  Just as "given enough eyeballs, all bugs are shallow",
most weaknesses are obvious to some brain, and most failures are
predictable by someone.  The closer they are to the problem
domain, the more valuable they are, but even a stopped clock is
right twice a day.

I'm thinking about unconditional security, and will write up a
proposed design soon.  I'll send it around when it's ready for
public review.
--
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -><- <URL:http://www.subspacefield.org/~travis/>
For a good time on my UBE blacklist, email [EMAIL PROTECTED]
