Cryptography-Digest Digest #80, Volume #12       Wed, 21 Jun 00 20:13:01 EDT

Contents:
  Re: Variability of chaining modes of block ciphers (michael)
  Re: Variability of chaining modes of block ciphers (Eric Lee Green)
  Re: Encryption on missing hard-drives (Mike Andrews)
  Re: Missing Info in the crypto-gram of MR BS (SCOTT19U.ZIP_GUY)
  Re: Encryption on missing hard-drives ([EMAIL PROTECTED])
  Re: Missing Info in the crypto-gram of MR BS (James Felling)
  Re: How encryption works (no-one)
  Length of pseudo random digits (Jacques Thériault)
  Re: Is this a HOAX or RSA is REALLY broken?!? (Bill Unruh)
  Re: obfuscating the RSA private key (Dave Ahn)
  Re: Encryption on missing hard-drives (jungle)
  Re: Encryption on missing hard-drives (jungle)
  Re: mother PRNG - input requested (Tim Tyler)

----------------------------------------------------------------------------

Subject: Re: Variability of chaining modes of block ciphers
From: michael <[EMAIL PROTECTED]>
Date: Wed, 21 Jun 2000 13:28:14 -0700

I like the idea that you assume your attacker knows
everything except the plaintext and the key, and has more
resources and cleverness than you do. Limiting yourself to
improvements that still add to your security under this
standard seems wiser than relying on secret algorithms,
chaining methods, etc. (unless the algorithm, etc. is selected
by a part of the key not used elsewhere).
=======================================================
"O Breath of Life, give us this day our daily food, deliver us
from the curse of the ice, save us from our forest enemies, and
with mercy receive us into the great beyond."
-Onagar
http://www.urantia.org
=======================================================





------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Variability of chaining modes of block ciphers
Date: Wed, 21 Jun 2000 21:25:57 GMT

Mok-Kong Shen wrote:
> 
> The most popular block chaining mode seems to be CBC.
> There is also PBC which chains with plaintext blocks.
> One can also accumulate the previous blocks for doing the
> chaining and use plaintext as well as ciphertext for
> chaining. (I used this in one of my own designs.) By
> combinatorics this gives 8 variants.

Great. You just added 3 bits to the key space, at the expense of yet more
code that could be defective or insecure, or that could slow the program
down.
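
For reference, here is a minimal sketch of the variants being counted
(the block_encrypt function and the all-zero IV are placeholders, not
part of anyone's actual design). Three independent yes/no choices --
chain with ciphertext, chain with plaintext, accumulate previous blocks --
give the 2^3 = 8 variants, i.e. roughly 3 bits of selector:

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_chained(blocks, key, block_encrypt,
                    use_cipher=True, use_plain=False, accumulate=False):
    # All-zero IV keeps the sketch self-contained; a real design would not.
    zero = bytes(len(blocks[0]))
    c_feed, p_feed = zero, zero
    out = []
    for p in blocks:
        feedback = zero
        if use_cipher:                      # CBC-style chaining
            feedback = xor(feedback, c_feed)
        if use_plain:                       # PBC-style chaining
            feedback = xor(feedback, p_feed)
        c = block_encrypt(key, xor(p, feedback))
        out.append(c)
        if accumulate:                      # fold in *all* previous blocks
            c_feed, p_feed = xor(c_feed, c), xor(p_feed, p)
        else:                               # chain only with the previous block
            c_feed, p_feed = c, p
    return out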

-- 
Eric Lee Green                         [EMAIL PROTECTED]
Software Engineer                      Visit our Web page:
Enhanced Software Technologies, Inc.   http://www.estinc.com/
(602) 470-1115 voice                   (602) 470-1116 fax

------------------------------

From: [EMAIL PROTECTED] (Mike Andrews)
Subject: Re: Encryption on missing hard-drives
Date: Wed, 21 Jun 2000 21:40:43 GMT

Scripsit [EMAIL PROTECTED]:

: On a more sci.crypt note, no one's answered my original question, which
: is if it's possible to encrypt a device such that it's impossible to
: read the contents without leaving a trail. Something similar to a one
: time password system, where a series of keys must be used or some
: other clever algorithm. Just for fun, we'll allow a secure connection
: to a trusted party.

Since these are disk drives, I'd bet real money that at most
the contents were encrypted. I suppose it _is_ possible that 
the US gov't has special disk controllers and hard drives,
such that if you don't do just the right thing when trying 
to read the drive, the data all go away. But it is a real
stretch of the imagination to come up with that, and it 
would preclude using any off-the-shelf WIN-based OS.

-- 
Life is complicated, but winter narrows it down to a few simple problems: heat,
food, shelter, plumbing. And it focuses you in wonderful ways. You don't have
to search for your personal identity in winter; winter gives it to you. You are
prey in winter; nature is making a serious attempt to kill you. - GK/PHC

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Missing Info in the crypto-gram of MR BS
Date: 21 Jun 2000 21:31:47 GMT

[EMAIL PROTECTED] (James Felling) wrote in 
<[EMAIL PROTECTED]>:

>
>
>"SCOTT19U.ZIP_GUY" wrote:
>
>>  I don't often go to the BS site about encryption;
>> maybe it's his physics training. I have a Master's Degree
>> in Electronic Engineering (Control Theory) and we seldom
>> see eye to eye.
>>
>>  Like many of his Crypto-Grams this one starts out properly
>> and talks about Claude Shannon, who in the 40's came up
>> with the concept called "unicity distance". He states that
>> English would require about 8.2 bytes of data for a DES type of
>> cypher. Claude Shannon is the kind of genius that Mr BS
>> is not. He goes on to state that for DES only 2 ciphertext blocks
>> need to be decoded: if the first block looks like English
>> and the second block also looks like English, "you've found
>> the correct key". At this point he goes on to state that
>> compression increases the "unicity distance". It is at this
>> point that he does those wishing to understand the science
>> of encryption a disservice, and the NSA would give him bonus
>> points. The fact is that most compression does not really add
>> to the unicity distance.
>
>Headerless compression will add to the unicity distance, assuming that
>the file is compressible.  The reasoning is that since more of the
>space of possible plaintexts is used as input, the unicity distance
>must therefore increase.
     This is where we disagree. For example, if no compression is used,
then in theory one could look at the outputs that, when decrypted,
result in valid text. We both agree that if ASCII text is used there is
a length beyond which you reduce the solution to one file. However, if
most headerless compression schemes are used, many of the cases can be
tossed out, since the file had to have been compressed. If the
decompression leads to a file that, when compressed, does not come back
to the same file, then that "key" was an invalid key.
 Here is why you are making the error. If I compress text, you would
think that due to compression there are several more files that, when
examining the compressed output, lead to more solutions. And you are
correct that the density of possible ASCII messages goes up. But the
problem is that with no compression the number of possible solutions
with a false key is 2**N (where N is the key size): if the encryption
is perfect and the message size is N bits, then every possible ASCII
text of N bits is a possible solution, so more encrypted data is needed
to find a solution, since nothing can be ruled out.
  Suppose at some length M > N the probability under perfect encryption
yields that there can be only one solution. This is the unicity
distance. Now suppose we compress the file with brand X headerless
adaptive compression.  You would think that since there are more
possible ASCII texts in M bits, you would have more than one possible
solution, so that you would need more encrypted data to find a unique
solution. This would be true except that most compression schemes leak
info about the compression method, and many candidates are eliminated
since, when decompressed and recompressed, they lead to a different
file. Thus you may need fewer than M bits.
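
To make that elimination test concrete, here is a rough sketch. The
decrypt, decompress and compress names are stand-ins for whatever cipher
and (non-one-to-one) compressor are actually in use, not any particular
library:

def key_survives(ciphertext, trial_key, decrypt, decompress, compress):
    # Decrypt under the trial key and see whether the result could have
    # come out of the compressor at all.
    candidate = decrypt(trial_key, ciphertext)
    try:
        expanded = decompress(candidate)
    except ValueError:              # not even well-formed compressed data
        return False
    # A 1-1 ("bijective") compressor never rejects anything here; most
    # ordinary compressors reject almost every wrong key at this step.
    return compress(expanded) == candidate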
>
>>
>>  What Mr BS either fails to understand or is misleading people
>> about is that most compression helps the attacker, and his
>> statement about compression increasing the unicity distance,
>> like his statement about a random file having infinite unicity
>> distance, is quite false for most compression schemes.
>
>A file of random data will have an infinite unicity distance no matter
>what compression scheme is used.  However, I will agree that compression
>will NOT always increase unicity distance, though it should at worst
>leave the distance unchanged.

    the above is wrong.
>
>>
>>  The main thing hidden from the user is that for the compression
>> to actually increase the unicity distance, not only does the file
>> have to compress, but the compression routines used must be of
>> the following form: for any file that is the result of a "wrong
>> key", when that resulting file is uncompressed to a test file,
>> the test file must compress back exactly to the resulting file.
>> This eliminates many candidate files.
>
>I agree that this is sub-optimal. However, it is not necessarily
>worthless, since if even one possibility is added, the unicity distance
>is increased.
>
>> It can eliminate so many files that the NSA does not
>> even need to know what kind of file you are compressing. Contrary
>> to the statement that a random file has infinite unicity distance,
>> they could in fact pretend they have zero knowledge about
>> the file and simply search, by whatever means are at their
>> disposal, for one that when uncompressed and compressed back
>> comes back to the test file, since for most compression schemes
>> and for files of most lengths there would be only one solution.
>
>Agreed in a general sense. However, I think you are misunderstanding his
>statement regarding a "random file": his statement is in reference to a
>file of totally random data, while I believe you are reading it as a
>"file chosen at random".
>
>>
>> This to me means that the unicity distance is far from infinite
>> and that one should be very careful about what compression one
>> uses when encrypting data, since there is a lot that Mr BS chose
>> to leave out of his so-called Crypto-Gram.
>>
>> Check out http://members.xoom.com/ecil/compress.htm
>> It has pointers to Tim's and Matt's sites; they seem
>> to have more of an understanding of what this kind
>> of compression is than anything you would find at the
>> BS site.
>>
>> David Scott
>> .
>
>I do agree that a so-called 1-1 compression method is to be desired as
>far as adding to the unicity distance goes.  However, even such a method
>will not result in an infinite unicity distance, as the output will still
>have predictable character given that the input is predictable.  (I.e.
>you uncompress your decryption and it should also have characteristics
>similar to those of the original plaintext space.)  I do not feel that he
>left an extensive amount of material out, merely that he was stating
>generalities.
>
>


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Encryption on missing hard-drives
Date: Wed, 21 Jun 2000 21:46:39 GMT

Mike Andrews <[EMAIL PROTECTED]> wrote:
> Since these are disk drives, I'd bet real money that at most
> the contents were encrypted. I suppose it _is_ possible that 

ABC News reported today that "[Bill] Richardson said he has changed
the procedure which now requires information such as the data on the
hard drives to be encrypted", which strongly implies that it
wasn't. Really, though, no amount of encryption is going to help in the
face of abysmal physical security. Twenty-six people were allowed
unescorted access to the vault and could remove items without signing
for them. On top of that, the last inventory was for the Y2K
inspection.

-- 
Matt Gauthier <[EMAIL PROTECTED]>

------------------------------

From: James Felling <[EMAIL PROTECTED]>
Subject: Re: Missing Info in the crypto-gram of MR BS
Date: Wed, 21 Jun 2000 17:16:41 -0500



"SCOTT19U.ZIP_GUY" wrote:

> [EMAIL PROTECTED] (James Felling) wrote in
> <[EMAIL PROTECTED]>:
>
> >
> >
> >"SCOTT19U.ZIP_GUY" wrote:
> >
> <<snip>>



> At this point he goes on to state that
> >> compression increases the "unicity distance". It is at this
> >> point that he does those wishing to understand the science
> >> of encryption a disservice, and the NSA would give him bonus
> >> points. The fact is that most compression does not really add
> >> to the unicity distance.
> >
> >Headerless compression will add to the unicity distance, assuming that
> >the file is compressible.  The reasoning is that since more of the
> >space of possible plaintexts is used as input, the unicity distance
> >must therefore increase.

>
>      This is where we disagree. For example, if no compression is used,
> then in theory one could look at the outputs that, when decrypted,
> result in valid text. We both agree that if ASCII text is used there
> is a length beyond which you reduce the solution to one file

Agreed.

> . However, if most headerless
> compression schemes are used, many of the cases can be tossed out,
> since the file had to have been compressed.

Also agreed.

> If the decompression
> leads to a file that, when compressed, does not come back to the
> same file, then that "key" was an invalid key.

Agreed.

>
>  Here is why you are making the error. If I compress text, you would think
> that due to compression there are several more files that, when examining
> the compressed output, lead to more solutions. And you are correct that
> the density of possible ASCII messages goes up.

Agreed so far.

> But the problem
> is that with no compression the number of possible solutions with a
> false key is 2**N (where N is the key size): if the encryption is
> perfect and the message size is N bits, then every possible ASCII text
> of N bits is a possible solution, so more encrypted data is needed
> to find a solution, since nothing can be ruled out.

Simply put, this is where we part ways. Given that the message is known to
be ASCII text, the possible space of messages that it could be is very
limited -- about 1-5 bits of entropy per byte of data (and 5 is being
really generous). If compression increases this density -- allows more
uncertainty per byte -- then the unicity distance will increase.
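
For what it's worth, Shannon's estimate makes the dependence on that
per-byte figure explicit: unicity distance U = H(K)/D, with H(K) the key
entropy and D the redundancy per byte. A back-of-the-envelope sketch,
treating the entropy figures as illustrative assumptions only:

def unicity_distance(key_bits, entropy_per_byte):
    # Redundancy D is 8 bits per byte minus the true entropy per byte;
    # U = H(K) / D, in bytes of ciphertext.
    redundancy = 8.0 - entropy_per_byte
    return key_bits / redundancy

print(unicity_distance(56, 1.2))   # DES, ordinary English: about 8.2 bytes
print(unicity_distance(56, 5.0))   # DES, denser (compressed) input: about 18.7 bytes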

>
>   Suppose at some length M > N the probability under perfect
> encryption yields that there can be only one solution. This
> is the unicity distance. Now suppose we compress the file with
> brand X headerless adaptive compression.  You would think that
> since there are more possible ASCII texts in M bits, you would
> have more than one possible solution, so that you would need more
> encrypted data to find a unique solution. This would be true
> except that most compression schemes leak info about the compression
> method, and many candidates are eliminated since, when
> decompressed and recompressed, they lead to a different file. Thus
> you may need fewer than M bits.

Yes, those compression schemes leak more info than a "1-1" method, but they
leak less information (or in the worst case the same quantity) than ASCII
text does.

>
> >
> >>
> >>  What Mr BS either fails to understand or is misleading people
> >> about is that most compression helps the attacker, and his
> >> statement about compression increasing the unicity distance,
> >> like his statement about a random file having infinite unicity
> >> distance, is quite false for most compression schemes.
> >
> >A file of random data will have an infinite unicity distance no matter
> >what compression scheme is used.  However, I will agree that compression
> >will NOT always increase unicity distance, though it should at worst
> >leave the distance unchanged.
>
>     the above is wrong.

First off, I will agree that this is not true in all cases.  If the file
being compressed fills the space of possibles more efficiently than the
compression algorithm as applied to that space does, then yes, not
compressing is to our benefit.  However, if the compression results in a
denser mapping of possibles within that space, then we are benefiting.  I
claim that at least in the case of text (which it seems we are discussing)
compression will fill that space more densely (or at worst equally
densely) with possibles than uncompressed text does.  Remember, the
objective is to have to uncompress as few possible blocks as one can. I
will agree that as the amount of data assembled goes up, compression gets
more easily recognisable, but I find it hard to accept that it is EVER
MORE easily recognised than text.

> <<snip>>

I will state that I feel that in all likelihood there is a "recognisability"
factor that a compression algorithm possesses. Similarly there is a
"recognisability" factor that any type of input may have.  I believe that if
the compression is more easily recognised than the input, then do NOT
compress, as you make the situation worse.  If that is not the case, you
will make the situation no worse than it previously was (assuming that your
compression either shrinks the file or leaves its size unchanged).


------------------------------

From: no-one <[EMAIL PROTECTED]>
Subject: Re: How encryption works
Date: Wed, 21 Jun 2000 23:31:12 +0100

On 20 Jun 2000 23:59:13 GMT, [EMAIL PROTECTED] (infamis at
programmer.net) wrote:

>Ok, I've read this newsgroup's faq, some other texts out there, and some
>presentations on cryptography, but I just don't understand it. Only partially.
>For example, this is something I read somewhere:
>--------------------
>n,e=public key, n = (prime num. p * prime num. q), M=message
>d=private key
>encryption: C=M^e mod n
>decryption: M=C^d mod n
>[say p=5, q=7, e=3]
>p*q = 35
>d=e^(-1) mod ((p-1)(q-1))
> =16
>--------------------
>I understand most of it except....
>1) is n or e the public key?
>2) if M=message, is this the checksum of the whole message, of 1 character,
>blocks of characters, or what?
>3) the line with "d=....". When I did this out, (3^(-1)) mod ((4)(6)) didn't
>equal 16... I got 1/3.
>
>Other questions:
>I have pgp5.5. The key's properties said Diffie-Hellman/DSS, with CAST cipher.
>What in the world does this mean? Isn't DH/DSS the encryption, so why need CAST
>[or IDEA which I saw on other keys]?
>
>I haven't read about any encryption methods that have been broken which didn't
>use a brute force attack. Are there any, and how did the decryption work?
>
>When I export my public key to people, which is a DH/DSS CAST ciph. 2048/1024
>bit key, the key is in plaintext [like Hkel3jadAio3nDlLoWX ...]. 2048bit[or is
>it 1024?] / 8bit(per byte) = 256. but my key is > 256 bytes.
>
>I don't plan to make some great encryption method and get famous or anything, I
>just want to know how this works because I think it's very interesting. I'm
>only 14, so bear with me...

If you can afford them, or borrow them, get hold of Bruce Schneier's
"Applied Cryptography" and/or Simon Singh's "The Code Book".  These
books give the background you need, and lots of ideas.
You are young enough to understand and follow up everything.  
(I wish I was!)

Yours,

No-one.
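
P.S. On question 3: the d = e^(-1) step is a modular inverse, not an
ordinary reciprocal, which is why you got 1/3.  (And the public key is
the pair (n, e); d is the private part.)  A toy worked example in Python
3.8+; note that the quoted numbers p=5, q=7, e=3 don't actually work,
since gcd(3, (p-1)(q-1)) = gcd(3, 24) is not 1, so this sketch uses e=7
instead:

p, q, e = 5, 7, 7            # toy primes; e must be coprime to (p-1)*(q-1)
n = p * q                    # 35; (n, e) is the public key
phi = (p - 1) * (q - 1)      # 24
d = pow(e, -1, phi)          # modular inverse: d = 7, since 7*7 = 49 = 1 (mod 24)

M = 2                        # the "message" is just a number smaller than n
C = pow(M, e, n)             # encryption:  C = M^e mod n  -> 23
assert pow(C, d, n) == M     # decryption:  M = C^d mod n recovers the message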
 

------------------------------

Subject: Length of pseudo random digits
From: [EMAIL PROTECTED] (Jacques Thériault)
Date: Wed, 21 Jun 2000 22:45:26 GMT

How can I calculate the period of a pseudo-random digit generator?

Is there any formula to estimate such a period?

Jacques Thériault

------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: Is this a HOAX or RSA is REALLY broken?!?
Date: 21 Jun 2000 23:17:18 GMT

In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (S. T. L.) writes:

>I believe that 15's already been factored.  Now for something big, like 77....

No, 15 has not been factored by a quantum computer using Shor's
algorithm. That would require a quantum computer of 12 or so qubits,
which does not exist yet.

------------------------------

From: [EMAIL PROTECTED] (Dave Ahn)
Subject: Re: obfuscating the RSA private key
Date: 21 Jun 2000 22:58:09 GMT

[EMAIL PROTECTED] (Mack) writes:

>I am having a hard time following exactly what you want to do.

>If you MUST store the key ithin the program code or data there is no way
>to guarantee security.  Just how secure do you need this to be?

Yes, I understand that there is no way to guarantee security.  However,
I am trying to make the key recovery as difficult as possible.

Mike Rosing <[EMAIL PROTECTED]> writes:

>If the end user has access to the code, then they can follow the process
>with a debugger.  You can make it as convoluted as you want, but they'll
>still be able to follow it.

Yes.  However, the end user has access to the obfuscator generator's source
code, not the obfuscated blackbox code that is spit out by the generator.
The user can follow the process with a debugger and eventually recover the
private key, but I want to make this process as difficult as possible using
a mathematically sound technique.

>If you don't use RSA, you can just give each user their own private key.
>They send you their public key and all messages get encrypted with that.
>When they need to decrypt, they can use *their* private key.

I can consider any cryptographic algorithm for this purpose.  But I can't
give each user his/her own private key, because the user can't be trusted!

>Maybe you can expand on the problem again, 'cause it sounds like we're
>missing something.

I will try...

Our group has client-server programs that are open sourced for peer review.
We distribute these programs in source and precompiled binary form.  Users
download the client binary and use it to connect to servers over the
Internet.

We wish to ensure that the users of the client software are using the
official precompiled binary as opposed to a custom-compiled version based
on the public source code.  We do not trust the client users.  But we do
trust the server administrator.  We also trust the network connection
between the client and the server (i.e. ignore eavesdropping or man-in-middle
attacks).

We address this authentication problem using RSA.  A keypair is generated
for each client on each platform.  The public key is published and added
to the server key ring.  The private key is embedded into the client binary
during the compile process.  To validate the client, the server uses a typical
challenge-response scenario by encrypting a random number with the public
key which only the client can decrypt and send back.
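
As a rough illustration of that exchange, here is a textbook-RSA sketch
with a toy keypair (no padding, tiny numbers, and the private exponent
sitting in a plain variable -- everything the real system avoids; in the
real client, d lives only inside the generated RSA_blackbox code):

import secrets

n, e = 3233, 17                  # toy public key (p=61, q=53)
d = 2753                         # toy private exponent: 17*2753 = 1 (mod 3120)

# Server side: pick a random challenge and encrypt it with the public key.
challenge = secrets.randbelow(n)
encrypted_challenge = pow(challenge, e, n)

# Client side: only the holder of d (the official binary) can recover it.
response = pow(encrypted_challenge, d, n)

# Server side: accept the client if the response matches the challenge.
assert response == challenge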

In its basic form, this system is trivial to break, because a mere memory
dump of the client process would reveal the private key.  Furthermore, any
obfuscation technique that requires the recovery of the private key in-memory
(such as secret sharing algorithms) is equally weak.  Therefore, we use an
RSA encryption blackbox to help secure the private key.

We have a program called mkkey.  It takes the RSA algorithm (which is just
a sequence of mathematical computations) and the private key (which is just
a constant) and creates a new algorithm, call it RSA_blackbox, based on a
new sequence of computations that are mathematically identical to the first.
The technique is described in the mkkey source code, which is published.

We use mkkey to generate the RSA_blackbox source code for each client private
key.  We compile the RSA_blackbox functions into the client binaries, then
discard the RSA_blackbox source code.  We strip the debugging symbols, link
statically, and randomize the object link order to improve security.

Using mkkey, recovery of the private key is much more difficult, because
the user would need to evaluate each subsequent mathematical operation
within the RSA_blackbox to reconstruct the private key.  It may be possible
to write a "showkey" program which takes the machine code, automatically
reconstructs the mathematical operations, then works backwards to recover
the private key.

I understand that this system, no matter how convoluted, can never be secure.
But I wish to make key recovery as difficult as possible.  Towards this
end, I am very interested in knowing whether mkkey's technique can be improved
on a purely mathematical basis.  In fact, I wonder if it is possible to
have an algorithm which constructs an RSA_blackbox from which the private
key cannot be recovered at all.

Does this clarify my original questions a bit?

Thanks in advance.
--
Dave Ahn | [EMAIL PROTECTED] | Wake Forest University Baptist Medical Center

When you were born, you cried and the world rejoiced.  Try to live your life
so that when you die, you will rejoice and the world will cry.  -1/2 jj^2

------------------------------

From: jungle <[EMAIL PROTECTED]>
Subject: Re: Encryption on missing hard-drives
Date: Wed, 21 Jun 2000 19:20:18 -0400

By what magic is your place accounted for?

Mike Andrews wrote:
> --



------------------------------

From: jungle <[EMAIL PROTECTED]>
Subject: Re: Encryption on missing hard-drives
Date: Wed, 21 Jun 2000 19:21:23 -0400

excellent stuff ...

Mike Andrews wrote:
> --
> "The most interestign thing about king Charles I is that he was 5' 6" at
> the beginning of his reign and 4' 8" at the end of it, and all because
> of..."
>                         -- from [alt.fan.pratchett] Re: [I] Exam questions



------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: mother PRNG - input requested
Reply-To: [EMAIL PROTECTED]
Date: Wed, 21 Jun 2000 23:03:22 GMT

Joseph Ashwood <[EMAIL PROTECTED]> wrote:

[snip objections]

Your argument makes more sense to me on the second reading.

:> If seed is only 32-bit, you re-key for each message and your messages
:> aren't ever very long, having a minimum period much longer than 2^32
:> seems rather pointless.

: Unless there [are other statistical anomalies].

Of course.
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  VIPAR GAMMA GUPPY.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
