Cryptography-Digest Digest #820, Volume #8       Thu, 31 Dec 98 18:13:05 EST

Contents:
  Re: New Book "The Unknowable" (R. Knauer)
  Re: New Book "The Unknowable" (John Briggs)
  Re: On leaving the 56-bit key length limitation (wtshaw)
  Re: Free ENCRYPTION Programs (32b) ([EMAIL PROTECTED])
  GSM Hack [Was  Security through obscurity in the smartcard world] (Gary Howland)
  Re: [Q. newbie] Authentication/Digital Signatures ([EMAIL PROTECTED])
  FAQ: as up to date as it should be? ("Joel Noble")
  block cipher ideas ("Michael A. Greenly")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: New Book "The Unknowable"
Date: Thu, 31 Dec 1998 17:39:23 GMT
Reply-To: [EMAIL PROTECTED]

On 31 Dec 1998 15:24:47 GMT, [EMAIL PROTECTED] (John Briggs)
wrote:

>How are you assessing "best compression procedure".

>Is it the procedure which does the best job on your plaintext, P?

If by "best job" you mean the greatest possible compression of the
plaintext, then yes, that is what I mean. Chaitin randomness means
that a given finite sequence is irreducible.

>Or the procedure that does the best job on average for all plaintexts
>that you might have chosen?

I would suppose that one given finite string would suffice for
purposes of discussion. Therefore one should be able to choose the
method of compression that does the "best job".

>Or the procedure that is fastest?
>Or the procedure that has the fastest inverse?
>Or something else?

The sole criterion is that the compressed plaintext cannot be
compressed further - to use Chaitin's terminology, it is
"irreducible".

>Are you choosing "best" among all known algorithms, all finite
>algorithms, or all possible mappings.

See above.

>What led you to conclude that a compression procedure exists.

Because I can find at least one, and if that is the only one, then it
is the "best" possible, that is, there exists no "better" and
therefore the given finite sequence had been reduced fully.

>What led you to conclude that it was unique?

What difference would it make if there were two compression algorithms
which both gave the smallest, albeit different, result? I would choose
one and use it.

>You need to define the compression procedure precisely and establish
>existence and uniqueness before you can talk meaningfully about "CP"
>for a given "P".

Perhaps, if this were a discussion regarding a practical cryptosystem,
but it is not. It is a theoretical discussion designed to find out if
Chaitin's definition of randomness based on algorithmic complexity
(see also Kolmogorov) is applicable to crypto.

>For instance, you could constrain plaintext and ciphertext to be
>bit strings, you could assert the existence of a _finite_ set of
>possible plaintext and a probability distribution for that set.
>Then you could define compression procedure as a mapping from the
>plaintext domain to the set of all bit strings, possibly with 
>restrictions to eliminate ambiguous leading substrings.  And you
>could define "efficiency" as the ratio of average plaintext length to
>average ciphertext length.  And, after handwaving the divide-by-zero
>problem away, you could easily prove that at least one maximum
>efficiency mapping exists.  And you could also easily prove that all
>such optimal mappings are computable.  And then one could define as
>"best" the first algorithm in lexicographic order that computes
>an optimal mapping.

>Let's suppose you do this.

OK.

>Note that, if unbounded plaintext is permitted, I would conjecture
>that an optimal compression mapping does not necessarily exist and,
>if one exists, it would not necessarily be computable by a finite
>procedure.

It is usually assumed that the plaintext is finite in length.

>For any particular CP of length > 0 there may well be a
>smaller algorithm that will reproduce it.  We optimized for average
>ciphertext length, not individual ciphertext length.

Then we should consider optimizing for a single given finite
plaintext.

>>Therefore CP is a random ciphertext which cannot be deciphered based
>>on any discernable pattern, since it is "random" according to
>>algorithmic complexity - that is the algorithm that produces it cannot
>>be made substantially smaller.

>False premise, invalid conclusion.

The premise is that a "best job" algorithm exists and can be found -
say, by a procedure you outline above.

Chaitin makes a point about "experimental mathematics", which means
that it is assumed you and others in the field have a powerful
computer at your disposal to experiment with various algorithms.
Further assume that after many man-years of such experimentation,
everyone agrees that algorithm X is the "best" for a given finite
plaintext. The fact that there could always be a slightly "better"
algorithm waiting to be discovered does not mean that our discussion
is moot. Take it as a working hypothesis that the current algorithm IS
the "best" and go from there.

The point here is not to belabor the practical aspects of such a
cryptosystem, but to explore the meanings of Chaitin randomness as
they apply to crypto.

>How do you choose which decompression algorithm to use when the
>compression algorithm is not necessarily unique?  Nobody forced
>us to use lexicographic order.

Sticking with your program here, I would assume that for every
compression algorithm you can find for a given plaintext, there exists
a corresponding decompression algorithm. Of course, if that is not the
case, then that implies that Chaitin randomness is a one-way street -
which is a fascinating prospect.

Just imagine what that would mean: If you can find the "best"
algorithm for compression in terms of Chaitin randomness, you can
never find the corresponding decompression algorithm. I suppose that
has to be the most unbreakable classical cryptosystem ever invented:
the resulting ciphertext is so random that it can never be used
to reveal the original plaintext.

That implies that Chaitin randomness is TOO good for use in crypto.
But then, that remains to be seen.

Bob Knauer

"He knows nothing and he thinks he knows everything. That points
clearly to a political career."
--George Bernard Shaw


------------------------------

From: [EMAIL PROTECTED] (John Briggs)
Subject: Re: New Book "The Unknowable"
Date: 31 Dec 1998 15:24:47 GMT
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer) 
writes:
>.... it seems that Chaitin's definition of randomness does no good for
>purposes of crypto. Consider the following procedure.
>
>Begin with a plaintext and compress it using the best compression
>procedure available. Call this sequence CP for Compressed Plaintext.
>The assumption is that CP is the minimal representation for the
>original plaintext based on compression.

How are you assessing "best compression procedure".

Is it the procedure which does the best job on your plaintext, P?
Or the procedure that does the best job on average for all plaintexts
that you might have chosen?
Or the procedure that is fastest?
Or the procedure that has the fastest inverse?
Or something else?
Are you choosing "best" among all known algorithms, all finite
algorithms, or all possible mappings.

What led you to conclude that a compression procedure exists.

What led you to conclude that it was unique?

You need to define the compression procedure precisely and establish
existence and uniqueness before you can talk meaningfully about "CP"
for a given "P".

For instance, you could constrain plaintext and ciphertext to be
bit strings, you could assert the existence of a _finite_ set of
possible plaintext and a probability distribution for that set.
Then you could define compression procedure as a mapping from the
plaintext domain to the set of all bit strings, possibly with 
restrictions to eliminate ambiguous leading substrings.  And you
could define "efficiency" as the ratio of average plaintext length to
average ciphertext length.  And, after handwaving the divide-by-zero
problem away, you could easily prove that at least one maximum
efficiency mapping exists.  And you could also easily prove that all
such optimal mappings are computable.  And then one could define as
"best" the first algorithm in lexicographic order that computes
an optimal mapping.
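
To make the existence claim concrete, here is a minimal sketch (my own
illustration, under the assumptions above: a finite plaintext set with known
probabilities) of one maximum-efficiency mapping - an optimal prefix-free
code built by Huffman's algorithm, which minimises average codeword length
and hence maximises the average-efficiency ratio among prefix-free codes:

import heapq

def huffman_code(dist):
    # dist: {plaintext: probability} -> {plaintext: bit string}
    heap = [(p, i, {pt: ""}) for i, (pt, p) in enumerate(dist.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {pt: "0" + code for pt, code in c1.items()}
        merged.update({pt: "1" + code for pt, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

dist = {"attack at dawn": 0.5, "attack at dusk": 0.3, "hold position": 0.2}
print(huffman_code(dist))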

Let's suppose you do this.

Note that, if unbounded plaintext is permitted, I would conjecture
that an optimal compression mapping does not necessarily exist and,
if one exists, it would not necessarily be computable by a finite
procedure.

>Now employ algorithmic complexity by finding the minimal algorithm
>that will reproduce CP. But that algorithm is just:
>
>print (CP)
>
>because there is no other algorithm that can reduce CP to a smaller
>size.

For any particular CP of length > 0 there may well be a
smaller algorithm that will reproduce it.  We optimized for average
ciphertext length, not individual ciphertext length.

>Therefore CP is a random ciphertext which cannot be deciphered based
>on any discernable pattern, since it is "random" according to
>algorithmic complexity - that is the algorithm that produces it cannot
>be made substantially smaller.

False premise, invalid conclusion.

>But we know better - the cipher can be broken by decompressing it.
>Therefore it would seem that randomness, based on algorithmic
>complexity, is not a suitable measure of crypto strength.

How do you choose which decompression algorithm to use when the
compression algorithm is not necessarily unique?  Nobody forced
us to use lexicographic order.

        John Briggs             [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: On leaving the 56-bit key length limitation
Date: Thu, 31 Dec 1998 11:02:18 -0600

In article <[EMAIL PROTECTED]>, Bryan Olson
<[EMAIL PROTECTED]> wrote:
> 
> > 6. The final word on cryptographic strength is thus not to be found
> > in enforced export controls for key length. It is to be found in our
> > own drawing boards in terms of a system's "unicity distance" and its
> > derived design issues, which is feasible to deal with and lies in our
> > hands.
> 
> Yuck, manager-speak.

I'm in no position to criticize his actual words, but he is correct that
keylength and unicity distance can both be related to algorithm strength. 
Neither factor is more important than the other, nor exclusive of it, but
relying more on one can cover for deficiencies in the other.  It is
preferable to avail oneself of both a substantial keylength and a long
unicity distance, whatever that really means.
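
For what it's worth, a minimal sketch of Shannon's unicity distance,
U = H(K) / D, with an assumed redundancy of about 3.2 bits per letter for
English (an estimate, not a measured constant):

def unicity_distance(key_bits, redundancy_bits_per_char=3.2):
    # Approximate number of ciphertext characters after which, in
    # Shannon's model, only one key is expected to remain plausible.
    return key_bits / redundancy_bits_per_char

for key_bits in (56, 128, 256):
    print(key_bits, "bit key ->", round(unicity_distance(key_bits), 1), "chars")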
-- 
New Year's Resolution: Strive to be accurate, even if it means telling a lie or 
misrepresenting the whole truth; after all, this is how Congress does it.

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: alt.privacy,fido7.crypt,talk.politics.crypto
Subject: Re: Free ENCRYPTION Programs (32b)
Date: Thu, 31 Dec 1998 18:06:36 GMT

In article <[EMAIL PROTECTED]>,
  fungus <[EMAIL PROTECTED]> wrote:
>
>
> [EMAIL PROTECTED] wrote:
> >
> >  While your at it enter my contest it is free
> >
>
> ...but not until you've read Bruce Schneier's latest crypto-gram
> which explains why crypto contests are worthless.
>

 If Mr B.S. himself posts here I will read it.
Also, in case you're not too bright, that creature
has his own contest - or did you notice? Of course
his is not a real contest, since he determines the
winner and it is not black and white.

David Scott

--
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED] (Gary Howland)
Subject: GSM Hack [Was  Security through obscurity in the smartcard world]
Date: Thu, 31 Dec 1998 15:55:14 GMT

On 29 Dec 1998 22:20:07 GMT, [EMAIL PROTECTED] (Ian Goldberg)
wrote:

>In article <76b57t$[EMAIL PROTECTED]>, Gary Howland  <[EMAIL PROTECTED]> wrote:
>>Admittedly, in the case of the GSM hack, early peer review of the
>>system might have resulted in SIM cards that cannot be cloned at
>>the present time, since they (the GSM designers) might have used
>>a better algorithm due to the peer review.  But who's to say that
>>the pre-launch publication of the algorithms would have spotted
>>it,
>
>Well, it took Dave and me about 2 hours (over an Indian buffet lunch at
>Rajah's in Berkeley) to break it; it stands to reason that others could
>have done the same...

 I'm not disputing this - I'm just saying _if_ - after all, systems in
widespread use get more scrutiny than those that aren't even out yet.

>>and that the earlier publication of the algorithms wouldn't
>>have allowed more sophisticated attacks by this time?  But, while
>>the GSM hack is to be applauded, it does not actually harm the GSM
>security model, and will in no way be a repeat of the old analogue
>phone attacks, since the security relies on much more than just
>>the SIMs (there is a unique id in the phones themselves for starters).
>
>?? I can copy a SIM, stick the copy into a brand-new phone, and voila!
>Cloned GSM phone.  (It really works!)  It is certainly much _harder_ to
>do than the trivial cloning of analogue phones, admittedly.

I knew this was going to be brought up, and didn't want to get drawn
into it.  But seeing as it's you, I'll respond ....

First of all, please don't think that I'm criticising the hack - I'm
not.  Also, I think I should clarify what I mean when I say "the GSM
system is secure" - I mean it is currently safe from _widescale_
fraud, such as occurred with the old analogue system.

Yes, at present you can move your SIM card between phones, but bear in
mind that phones have a unique identifier built into them as well -
and this is used when disabling a stolen *phone* (not stolen SIM).
Now, not all of the service providers might be using this facility at
the moment, but the point is that it is available for use if a problem
arises that requires it.

One claim I have heard is that a SIM can be stolen, and copied onto
thousands of cards.  Fine, we now have a problem.  So what are the
service providers going to do?  They'll just disable all *phones* that
use the stolen SIM.  Once this happens, and it becomes public
knowledge that the use of a stolen SIM can effectively break your
mobile phone, then the market for these dodgy black market cards will
disappear, and so will the security problem.

But as for the current time, sure, the odd hacker will be able to get
free calls, but this isn't a concern of the service providers, as I'm
sure you're aware, since stopping the hackers would be more expensive
than living with them.

However, if SIMs could be created from the data snatched from the
airwaves, rather than being stolen from hire cars etc., then, yes, we
have a serious security problem with the system.  But this is not the
case, and there are currently no security concerns in this area.

Some people might claim that these extra security features should
already be in place - but why should a company waste time and money on
fixing something that is not yet a problem (as long as it knows it can
do so in the future)?

Ian, please correct me if I've got any of the facts wrong about the
phone hack.

Gary Howland

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: [Q. newbie] Authentication/Digital Signatures
Date: Thu, 31 Dec 1998 19:43:41 GMT

In article <[EMAIL PROTECTED]>,
  Entschuldigung <[EMAIL PROTECTED]> wrote:

> Thank you for pointing out that the Digital Signature Algorithm (DSA)
> uses a non-deterministic value in its calculations so that, normally,
> a signature of one message will change each time a signature is
> produced. For some people, this would prevent them from using DSA
> for encryption and decryption. I stand corrected.
>
> However, I reviewed the DSA at the following website:
>
> http://csrc.nist.gov/fips/fips186.txt
>
> to see if there is some way to circumvent this feature. There is.
> The non-deterministic feature of the DSA can be circumvented in at
> least two ways:
>
> 1  If I have the source code for the DSA, I can make it
> deterministic by making "k" be a constant instead of a
> pseudo-random number.

Give me two messages and their DSA signatures signed using
the same private key and the same k, and I'll discover the
private key.
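
(A sketch of why, with toy numbers rather than real DSA parameters - the
algebra is the standard attack: with the same k, and therefore the same r,
the two signing equations s = k^-1 * (h + x*r) mod q can be solved first
for k and then for the private key x.)

def recover_private_key(q, r, h1, s1, h2, s2):
    # Recover the per-message secret k and the private key x from two
    # DSA signatures that reused the same k (and hence share r), mod q.
    inv = lambda a: pow(a, -1, q)          # modular inverse (Python 3.8+)
    k = (h1 - h2) * inv(s1 - s2) % q
    x = (s1 * k - h1) * inv(r) % q
    return k, x

# toy self-check with a small prime q (nothing like real DSA sizes)
q, x, k, r = 101, 57, 33, 88
h1, h2 = 42, 77
s1 = pow(k, -1, q) * (h1 + x * r) % q
s2 = pow(k, -1, q) * (h2 + x * r) % q
assert recover_private_key(q, r, h1, s1, h2, s2) == (k, x)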

You can make DSA deterministic by defining k as a complex
function of the message and some secret.  For example, you
could choose k by applying SHA-1 to the private key,
XOR'ed with the digest of the message.  Each message would
be signed using a different and hidden k, as required for
security, but multiple signatures of the same message with
the same key would always return the same signature.
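
(A sketch of one possible reading of that construction - the exact
combination and the reduction mod q below are my own assumptions, not a
standardised scheme:)

import hashlib

def deterministic_k(private_key_x: int, message: bytes, q: int) -> int:
    # Derive k from SHA-1 of the private key, XORed with the SHA-1 digest
    # of the message, then reduce mod q.  Note that this simple reduction
    # is not uniform mod q; a careful design would do more work here.
    x_bytes = private_key_x.to_bytes((private_key_x.bit_length() + 7) // 8 or 1, "big")
    key_digest = hashlib.sha1(x_bytes).digest()
    msg_digest = hashlib.sha1(message).digest()
    mixed = bytes(a ^ b for a, b in zip(key_digest, msg_digest))
    k = int.from_bytes(mixed, "big") % q
    return k or 1                          # avoid the invalid value k = 0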

--Bryan

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: "Joel Noble" <[EMAIL PROTECTED]>
Subject: FAQ: as up to date as it should be?
Date: Thu, 31 Dec 1998 15:40:39 -0700

Hi, all.

I was just taking a peek at the FAQ (it'd been a couple years), and it seems
that it hasn't been updated since 1994.

I'm no expert in this area, but I was wondering if folks think it needs
anything new.  Are there other genuine frequently-asked-questions we want in
there?  Is any of the existing information due for updating?

...or was I just not looking in the right place for the current version? :)

Thanks,

Joel Noble
[EMAIL PROTECTED]



------------------------------

From: "Michael A. Greenly" <[EMAIL PROTECTED]>
Subject: block cipher ideas
Date: Thu, 31 Dec 1998 17:05:28 -0600

    Here's a juicy new morsel to chew on.

http://www.pinenet.com/~mgreenly/cipher.gif

    The full cipher would consist of 16 rounds, with whitening applied
before the first round and after the last round.  Does anyone see any
obvious attacks on this construction that I'm missing?
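
    Since the actual round structure is only in the graphic, here is just a
generic skeleton of what I mean by the whitening around the 16 rounds - the
round function below is a placeholder, not the real f/g construction, and
the XOR could be swapped for addition mod 2**128 (see the question further
down):

MASK128 = (1 << 128) - 1

def encrypt_block(block: int, round_keys, pre_white: int, post_white: int) -> int:
    # Pre-whitening, 16 rounds, post-whitening.  XOR shown here; the
    # additive alternative would be (block + pre_white) & MASK128.
    state = block ^ pre_white
    for rk in round_keys:              # expects 16 round keys
        state = round_function(state, rk)
    return state ^ post_white

def round_function(state: int, rk: int) -> int:
    # placeholder only - the real round (with f and g) is in the GIF
    return (state ^ rk) & MASK128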

    The graphic above depicts a 128-bit block cipher.  Does anyone have any
practical suggestions for stretching it to 256 bits?  My current
preference would be to increase the size of the subblocks from 32 to
64 bits, but this means that the functions g and f become quite a bit
more costly.  Can anyone suggest good replacements for the functions g
and f?  Especially a 64x64 replacement for f and a 64x256 replacement
for g?

    Does the fact that the master key is used in the first round allow
for any special attacks?  It will be hidden by the whitening, won't it?

    Would it be better to use addition vs. xor for the whitening?

    Currently I'm working on constructing the S-boxes for the cipher;
does anyone have any suggestions as to the specific properties I should
look for?


    I realize that some of these questions are quite broad, but I'm just
doing this for the fun of it.  Not to mention, I thought maybe I could
generate a little discussion on cipher design instead of politics.

--
Mike Greenly
[EMAIL PROTECTED]


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
