Cryptography-Digest Digest #507, Volume #10       Thu, 4 Nov 99 18:13:03 EST

Contents:
  Re: questions about twofish ("Adam Durana")
  Re: Interesting LFSR (Medical Electronics Lab)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column 
([EMAIL PROTECTED])
  Bit/byte orientation in SHA-1 (JohnSmith)
  Re: Build your own one-on-one compressor (Mok-Kong Shen)
  Re: Q: Removal of bias (Mok-Kong Shen)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY)
  Data Scrambling references ("Larry Mackey")

----------------------------------------------------------------------------

From: "Adam Durana" <[EMAIL PROTECTED]>
Subject: Re: questions about twofish
Date: Thu, 4 Nov 1999 12:56:22 -0500

> In Counterpane's optimized Twofish, there are different options you can
> choose during compilation like zero, partial, or full key.
> First,
>  What are the advantages/disadvantages?
>  Do they affect security, or is it just a memory/speed trade-off?

From what I understand, Twofish can spend more time on key setup
(precomputing key-dependent tables) and in return you get faster
encryption.  In some cases you would want to spend more time on key
setup, e.g., when you are encrypting several big files with the same
key.  In other cases you would want to spend almost no time on it,
e.g., when the key changes often and you are encrypting small chunks of
data.  I would guess 'zero' means spend no time precomputing, 'partial'
means precompute some of the tables, and 'full' means precompute
everything.  As far as I know all the options compute the same cipher,
so it is purely a memory/speed trade-off, not a security one.
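
A toy sketch of what that trade-off looks like (this is NOT Twofish;
the key-dependent mixing below is made up purely to illustrate
precompute-versus-recompute, where "full keying" pays a one-time setup
cost for cheaper per-byte work):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy key-dependent byte substitution -- NOT Twofish, just an
     * illustration of the precompute-vs-recompute trade-off. */
    static uint8_t toy_sbox(uint8_t x, const uint8_t key[16])
    {
        int i;
        for (i = 0; i < 16; i++)        /* key-dependent mixing */
            x = (uint8_t)((x ^ key[i]) * 167 + 13);
        return x;
    }

    /* "Zero keying": no setup cost, pay toy_sbox() on every byte. */
    static uint8_t encrypt_byte_slow(uint8_t x, const uint8_t key[16])
    {
        return toy_sbox(x, key);
    }

    /* "Full keying": spend 256 evaluations once during key setup,
     * then each byte is a single table lookup. */
    static void build_table(uint8_t table[256], const uint8_t key[16])
    {
        int x;
        for (x = 0; x < 256; x++)
            table[x] = toy_sbox((uint8_t)x, key);
    }

    int main(void)
    {
        uint8_t key[16] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
        uint8_t table[256];
        build_table(table, key);          /* one-time key setup */
        printf("%02x %02x\n", (unsigned)encrypt_byte_slow(0x41, key),
               (unsigned)table[0x41]);    /* both print the same value */
        return 0;
    }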

>
> Second,
>  What's the difference between using the 192 bit key option, and using
> the 256 bit key option with 64 bits zeroized (both still have same key
> space).

I really have no idea.  The only problem I could see is if someone
brute-forcing your key orders their search so that the 192-bit keys
with 64 zero bits are tried first.  If the key is found in that
subspace, they only had to test at most 2^192 keys and not 2^256.  The
ciphertext would look as if it was encrypted using a 256-bit key, but
since an attacker knows the workings of the algorithm, he could test
all 2^192 keys with the last 64 bits zeroed first, just in case someone
did encrypt using a 192-bit key padded to 256 bits.
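
A toy illustration of that search ordering, with a made-up 16-bit
"keyspace" and a fake key test standing in for trial decryption
(nothing Twofish-specific here):

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in for "does this key decrypt to something sensible?" --
     * here the secret is just a hard-coded toy value whose low 8 bits
     * are zero, the analogue of a zero-padded shorter key. */
    static const uint16_t secret = 0x3700;

    static int key_works(uint16_t k) { return k == secret; }

    int main(void)
    {
        uint32_t k;
        long tried = 0;
        /* Pass 1: only keys whose low 8 bits are zero (2^8 candidates),
           the analogue of "192 bits + 64 bits zeroed". */
        for (k = 0; k <= 0xFFFF; k += 0x100, tried++)
            if (key_works((uint16_t)k)) {
                printf("found %04X after %ld tries (short-key pass)\n",
                       (unsigned)k, tried + 1);
                return 0;
            }
        /* Pass 2: the full space (re-testing the pass-1 keys is a
           negligible overlap). */
        for (k = 0; k <= 0xFFFF; k++, tried++)
            if (key_works((uint16_t)k)) {
                printf("found %04X after %ld tries\n",
                       (unsigned)k, tried + 1);
                return 0;
            }
        return 0;
    }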

But you probably know more about Twofish than I do.  I was hoping you would
get a response from someone who knew for sure, but seeing how you did not
get a good response (Hi Tom!), I decided to take a chance.  I hope it helps.

-- Adam



------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Interesting LFSR
Date: Thu, 04 Nov 1999 12:14:58 -0600

David Wagner wrote:
> 
> Why not just run it backward, keeping track of the _set_ of all possible
> states?  If you implement it, I strongly suspect you will find that this
> set usually stays very small.

The size of the set should grow roughly like (number of rounds) *
(number of duplicate entries)/128.  If there are only a few duplicates,
it should run backwards easily.
  
> (Sometimes some states have multiple predecessors, which grows the set,
> but also some states have no predecessors, which shrinks the set, and the
> two effects are expected to cancel each other out almost exactly.  I'll
> omit the mathematical calculations.)
> 
> Worth a try...

Definitely!  If there are only 128 rounds, you'd only expect the set to
reach about (number of duplicates) states, which should be pretty easy
to keep track of.
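
A minimal sketch of that backward search over a toy register (the
update rule below is made up, since the original generator isn't
reproduced here; the AND in the feedback makes the map non-invertible,
so each state has 0, 1, or 2 predecessors, which is exactly what keeps
the candidate set small):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Nonlinear feedback bit: the AND term makes the update
     * non-invertible.  (Toy register, made up for this sketch.) */
    static unsigned fb(uint16_t s)
    {
        return ((s & (s >> 3)) ^ (s >> 5)) & 1u;
    }

    static uint16_t step(uint16_t s)
    {
        return (uint16_t)((s >> 1) | (fb(s) << 15));
    }

    int main(void)
    {
        static unsigned char in_set[65536], next_set[65536];
        uint16_t s = 0xACE1;
        int round, i;

        for (i = 0; i < 128; i++) s = step(s);  /* forward to a target */

        memset(in_set, 0, sizeof in_set);
        in_set[s] = 1;
        for (round = 1; round <= 128; round++) {
            long count = 0;
            memset(next_set, 0, sizeof next_set);
            for (i = 0; i < 65536; i++) {
                if (!in_set[i]) continue;
                /* candidate predecessors: shift back both fill bits */
                uint16_t p0 = (uint16_t)((i << 1) & 0xFFFF);
                uint16_t p1 = (uint16_t)(p0 | 1);
                if (step(p0) == (uint16_t)i) next_set[p0] = 1;
                if (step(p1) == (uint16_t)i) next_set[p1] = 1;
            }
            memcpy(in_set, next_set, sizeof in_set);
            for (i = 0; i < 65536; i++) count += in_set[i];
            if (round % 16 == 0)
                printf("round %3d: %ld candidate states\n", round, count);
        }
        return 0;
    }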

Patience, persistence, truth,
Dr. mike

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Thu, 04 Nov 1999 18:45:46 GMT

In article <[EMAIL PROTECTED]>,
  "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
>>Maybe I trust rot13. I encrypt my message with it and send it to you.
>>Your email client does not know anything about rot13 - it downloads
>>its code and successfully decrypts my message. It is my message and
>>if I chose a bad cipher it is my problem; you will always be able to
>>read any message you receive.
>
>I disagree.  It is OUR conversation.  Since I expect you to refer,
>implicitly and explicitly, to the contents of the messages I send,
>which contents I intend to keep private, I have a vested interest in
>the security of the messages you send.

You are right.

My goal was to avoid the need for a priori negotiation at the email
level. I don't see a very good solution.
(...)
>>>> In a networked world third parties are a fact of life.
>>>
>>> Sure.  As a key repository.  Not as a repository for security
>>> implementations.  The two aren't anywhere near comparable.
>>
>>Not really. The difference between code and data is contextual. In
>>LISP both code and data have exactly the same form.
>
>Hardly.  Representation yes; form, meaning structure, no.
>
>Consider that given something that claims to be a cipher key I can
>fairly trivially determine whether it is or is not a cipher key.  (I'm
>thinking of modern symmetric ciphers.)  Given something that claims to
>be a secure implementation of a particular cipher, I would expect it
>to be impossible to render a definitive judgement without a detailed
>analysis of the software.

Well, O.K., I won't push this matter.  The point is that if you want
the agility to change primitives then you must have a way to introduce
code into your computer.  Downloading certified code is an option I
find trustworthy - even more than buying shrink-wrapped original
software.


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: JohnSmith <[EMAIL PROTECTED]>
Subject: Bit/byte orientation in SHA-1
Date: Thu, 04 Nov 1999 20:43:25 +0100

Hello,

I am trying to verify a module that implements the FIPS PUB 180-1 SHA-1
specification in its bit-oriented form.  This means it does not pad the
'1' bluntly after the last byte, but after the last bit, as determined
by the bit length of the message.  However, Steve Reid's C
implementation, which I currently use for verification, is byte
oriented.  Can anyone point me to a C implementation that is bit
oriented?
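
In case it helps anyone answer, the rule I mean is the one below -- a
minimal sketch of just the bit-oriented padding step (not a full SHA-1;
the buffer handling and names are my own):

    #include <stdio.h>
    #include <stdint.h>

    /* Pad a message of 'bitlen' bits per FIPS PUB 180-1: append a '1'
     * bit right after the last *bit* (not byte), then '0' bits until
     * the length is 448 mod 512, then the 64-bit big-endian bit count.
     * 'msg' holds the message packed MSB-first and must have room for
     * the padded result.  Returns the padded length in bytes. */
    static size_t sha1_bit_pad(uint8_t *msg, uint64_t bitlen)
    {
        size_t byte = (size_t)(bitlen / 8);
        unsigned used = (unsigned)(bitlen % 8); /* bits in last byte */
        int i;

        /* set the '1' bit just after the last message bit and
           clear the remaining low bits of that byte */
        msg[byte] = (uint8_t)((msg[byte] & (0xFF00u >> used))
                              | (0x80u >> used));
        byte++;

        while (byte % 64 != 56)     /* zero-fill to 448 mod 512 bits */
            msg[byte++] = 0;

        for (i = 7; i >= 0; i--)    /* 64-bit big-endian bit count */
            msg[byte++] = (uint8_t)(bitlen >> (8 * i));

        return byte;
    }

    int main(void)
    {
        uint8_t buf[128] = {0};
        buf[0] = 0xD0;              /* 5-bit message 11010, MSB-first */
        size_t n = sha1_bit_pad(buf, 5);
        /* first byte becomes 11010100 = 0xD4: the message bits,
           then the appended '1', then zero fill */
        printf("padded to %zu bytes, first byte %02X\n", n, buf[0]);
        return 0;
    }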

Thanks and bye,

[EMAIL PROTECTED]





------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Thu, 04 Nov 1999 20:52:13 +0100

Tim Tyler wrote:
> 
> In sci.crypt <[EMAIL PROTECTED]> (i.e. my good self) wrote:
> 
> : The document is located at http://www.alife.co.uk/securecompress/
> 
> An alternative to this scheme has been proposed to me by email.
> 
> This scheme /appears/ to be simpler and offers better performance.
> 
> It has fewer constraints on the dictionary used.
> 
> I expect to be away from usenet for a week.
> 
> On the offchance that nobody else presents the scheme in the interim,
> I hope to write more about it upon my return ;-)

I have no idea what your future scheme will be. However, in a
most recent follow-up I questioned whether your first rule could
be redundant. I have thought further about the whole thing. I
currently believe that even your second rule could be weakened a bit.
Using your <--> convention, i.e. in processing, anything on the
left CAN be replaced by what is on the right and vice versa, and
one may also leave things unchanged (cf. one of your previous
examples), the only rule that is needed seems to be the same as that
for Huffman encoding, i.e. prefix-freeness. That is, no string
(the whole entity on either side of <-->) may be the prefix of any
other string in the dictionary.
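
For concreteness, that condition can be checked mechanically; a small
sketch (the dictionary entries below are made up):

    #include <stdio.h>
    #include <string.h>

    /* Check prefix-freeness: no entry of the dictionary may be a
     * prefix of another entry (duplicates are rejected too).
     * O(n^2) scan -- fine for small dictionaries. */
    static int is_prefix_free(const char *dict[], int n)
    {
        int i, j;
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                if (i != j &&
                    strncmp(dict[i], dict[j], strlen(dict[i])) == 0)
                    return 0;   /* dict[i] is a prefix of dict[j] */
        return 1;
    }

    int main(void)
    {
        const char *good[] = { "the", "and", "ing" };
        const char *bad[]  = { "the", "then", "and" }; /* "the" prefixes "then" */
        printf("%d %d\n", is_prefix_free(good, 3),
                          is_prefix_free(bad, 3));    /* prints: 1 0 */
        return 0;
    }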

On the assumption that I am right in the above, the question could
be re-asked whether such a compression (on the byte level instead
of on the bit level, as in normal Huffman compression) might not
suffer in performance, i.e. achieve a lower compression ratio,
because it operates at a larger granularity (bytes instead of bits).

So the superiority of schemes in the direction of your proposal
remains to be demonstrated. In particular, how your dictionary is
to be built from scratch (i.e. general guidelines for constructing
an 'optimal' one) appears to be unclear, at least to me. On the
other hand, for the 'one-to-one' property David Scott, who used to
stress its need, has a functioning algorithm. In a previous thread
I pointed out that, if one sacrifices one Huffman code symbol,
namely the one consisting of all 0's, to take care of the
file-ending problem, then the normal Huffman algorithm (together
with the rule that on compression 0's at the file end may be
truncated and on decompression 0's may be appended) can also
function under the requirement of 'one-to-one'. I therefore look
forward with interest to reading and discussing your future scheme.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Removal of bias
Date: Thu, 04 Nov 1999 20:52:20 +0100

Scott Nelson wrote:
> 

> Assuming a biased bit which is '1' .75 and '0' .25
> (entropy = 0.8112781)
> Using XOR to combine N bits,
>  1 bits: Entropy = 0.8112781
>  2 bits: Entropy = 0.9544340
>  3 bits: Entropy = 0.9886994
>  4 bits: Entropy = 0.9971804
> (after 12 bits, it's 1.0 to seven places.)

Is it possible to do an analogous computation for von Neumann's
device? Thanks.
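
For comparison, a small sketch that reproduces the XOR figures quoted
above and the analogous numbers for von Neumann's device: if the input
bits are independent, its two accepted pairs (0,1) and (1,0) are
equally likely, so each output bit is exactly unbiased, and the cost of
the bias shows up as a reduced output rate instead:

    #include <stdio.h>
    #include <math.h>

    static double h2(double p)          /* binary entropy, in bits */
    {
        if (p <= 0.0 || p >= 1.0) return 0.0;
        return -p * log2(p) - (1.0 - p) * log2(1.0 - p);
    }

    int main(void)
    {
        double p = 0.75;                /* P(bit = 1), as above */
        int n;

        /* XOR of n independent biased bits (piling-up lemma):
           P(xor = 1) = (1 - (1 - 2p)^n) / 2 */
        for (n = 1; n <= 4; n++)
            printf("xor of %d bits: entropy = %.7f\n",
                   n, h2((1.0 - pow(1.0 - 2.0 * p, n)) / 2.0));

        /* Von Neumann's device: take non-overlapping pairs, output 0
           for (0,1), 1 for (1,0), discard (0,0) and (1,1).  Both
           accepted pairs occur with probability p(1-p), so every
           output bit has entropy exactly 1.0; the expected yield is
           2p(1-p) bits per pair, i.e. p(1-p) per input bit. */
        printf("von Neumann: entropy = 1.0 exactly; rate = %.4f "
               "output bits per input bit\n", p * (1.0 - p));
        return 0;
    }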

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Thu, 04 Nov 1999 21:35:27 GMT

In article <3820e4ad$[EMAIL PROTECTED]>, Don Taylor <[EMAIL PROTECTED]> wrote:
>In comp.compression Tim Tyler <[EMAIL PROTECTED]> wrote:
>> In sci.crypt <[EMAIL PROTECTED]> (i.e. my good self) wrote:
>> : The document is located at http://www.alife.co.uk/securecompress/
>
>> An alternative to this scheme has been proposed to me by email.
>> This scheme /appears/ to be simpler and offers better performance.
>> It has fewer constraints on the dictionary used.
>> I expect to be away from usenet for a week.
>
>> On the offchance that nobody else presents the scheme in the interim,
>> I hope to write more about it upon my return ;-)
>
>I thought of something this morning.  I sent a version of this to Tim
>Tyler asking him if I had done something completely silly.  He was busy
>with other things but was kind enough to take a few minutes of his time
>to make some comments.  He seemed to think that I might not be
>completely confused about this.  But any mistakes in this are still
>all my responsibility.
>
>==============
>It seems that most difficulties that people are debating in this
>one-to-one compression discussion are because of the way that the
>problem has been phrased.  If I look at all the conditions and try to
>rethink this JUST using those conditions, I come to a very different
>place, yet still seem to satisfy those conditions.
>
>I think that much of the confusion results from using the same alphabet
>for both halves of each dictionary entry.
>
>Consider the following dictionary
>
>        a       1
>        apple   2
>        banana  3
>        house   4
>        ...
>
>Now, I claim that the dictionary contains all the words that might be
>used in messages; we simply mandate that it contains the vocabulary
>that people will use for messages, and we make the vocabulary large
>enough that it contains an adequate set for communication.
>
>The left-hand side consists of words created from an alphabet, while
>the right-hand side consists only of numeric codes; these are NOT
>strings of digits, they are just codes.  All words in all messages will
>be separated from each other by a single blank, and the translated
>codes will just be concatenated.
>
>Translation becomes transparently obvious, get the next word, look up
>that word in the dictionary and output the code.  To reverse the
>translation, get the next code, look it up, and output the word.  (and
>we can encode that number into the underlying stream for the message to
>be sent in a suitable and obvious way).
>
>It is guaranteed that it is 1-1.  Take any string of words and they
>become the obvious string of codes.  Take any string of codes and they
>become the obvious string of words.
>
>All the questions about whether the message is maintained by any number
>of translations back and forth are settled in a single stroke.  All the
>issues of prefixes and suffixes are settled.  But the whole business of
>prefixes and suffixes and how words might be looked up doesn't really
>seem to be the essential core of what was originally being debated,
>which was whether there could be a one-to-one compression scheme.
>
>The codes are shorter than the words, because I claim that for any
>'reasonable' human vocabulary the number of bits needed to represent
>the average message using codes is less than the number of bits needed
>to represent the message using characters, because of the redundancy
>built into the construction of human words.  So we have accomplished
>the desired compression using this scheme.
>
>For example, we can allow say 2^16 different words, a very reasonable
>vocabulary for any specialty, and in English, with an average word
>length of more than 5 bytes when constructed from characters, the
>compressed result only requires 2 bytes to express the code for each
>translated word.

  IF you ordered these 16-bit words so that in hex 00 00 was in general
the most common occurring token, then 00 01 and 01 00 and 01 01 were
the next most common, where you increase and use the next available 8-bit
token to build the table, and so on, so that the table is ordered based on
some standard English text.  Build your compressor to convert the English
words to something like this.  Then use a FIXED HUFFMAN TABLE, not
my adaptive Huffman table, as the starting table.  This could be done since
the codes and Huffman table are decided on in advance based on the language.
Compress this in a one-to-one way using the starting frequency of occurrence
of each of the 8-bit hex codes, and end it using my way of ending, so the
file is a one-to-one compression.  I would still use some sort of adaption
like in h3com.exe.
  The advantage of the almost fixed table (slowly adapting table) is that if
an enemy tries to guess a key, when he uncompresses he will always get text
that appears somewhat realistic, since he would be using the almost fixed
table based on the real frequency of occurrence.  But the changes prevent
embedded plain text from compressing the same way if different text appears
before it.
  The main disadvantage is that only 2^16 words can be used, but for most
messages this should be OK.  Even in WWII the Navajo code talkers had to use
concepts in the language for words that were not in the language.  You may
have to write a program that converts words not in the language to strings of
letters.  This would take away some of the 2^16 symbols.  It would also
mean people who, like me, can't spell worth a shit will be more apt
to have longer messages unless some sort of special spell checker is built in.
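
A cut-down sketch of the quoted word-to-code scheme (the tiny
vocabulary and all names below are made up; a real table would hold up
to 2^16 words, ordered by frequency as described above):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Toy vocabulary; the 16-bit code for a word is its index. */
    static const char *vocab[] = { "a", "apple", "banana", "house" };
    #define NWORDS (sizeof vocab / sizeof vocab[0])

    /* words -> codes; returns code count, or -1 on unknown word */
    static int encode(const char *msg, uint16_t *out, int max)
    {
        char buf[256];
        char *w;
        int n = 0;
        strncpy(buf, msg, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        for (w = strtok(buf, " "); w; w = strtok(NULL, " ")) {
            size_t i;
            for (i = 0; i < NWORDS; i++)
                if (strcmp(w, vocab[i]) == 0) break;
            if (i == NWORDS || n == max) return -1;
            out[n++] = (uint16_t)i;
        }
        return n;
    }

    int main(void)
    {
        uint16_t codes[16];
        int i, n = encode("banana house a apple", codes, 16);
        if (n < 0) return 1;
        for (i = 0; i < n; i++)        /* decode: code -> word */
            printf("%s%s", vocab[codes[i]], i + 1 < n ? " " : "\n");
        return 0;   /* round trip prints the original message */
    }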



David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Thu, 04 Nov 1999 21:43:09 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>Tim Tyler wrote:
>> 
>> In sci.crypt <[EMAIL PROTECTED]> (i.e. my good self) wrote:
>> 
>> : The document is located at http://www.alife.co.uk/securecompress/
>> 
>> An alternative to this scheme has been proposed to me by email.
>> 
>> This scheme /appears/ to be simpler and offers better performance.
>> 
>> It has fewer constraints on the dictionary used.
>> 
>> I expect to be away from usenet for a week.
>> 
>> On the offchance that nobody else presents the scheme in the interim,
>> I hope to write more about it upon my return ;-)
>
>I have no idea what your future scheme will be. However, in a
>most recent follow-up I questioned whether your first rule could
>be redundant. I have thought further about the whole thing. I
>currently believe that even your second rule could be weakened a bit.
>Using your <--> convention, i.e. in processing, anything on the
>left CAN be replaced by what is on the right and vice versa, and
>one may also leave things unchanged (cf. one of your previous
>examples), the only rule that is needed seems to be the same as that
>for Huffman encoding, i.e. prefix-freeness. That is, no string
>(the whole entity on either side of <-->) may be the prefix of any
>other string in the dictionary.
>
>On the assumption that I am right in the above, the question could
>be re-asked whether such a compression (on the byte level instead
>of on the bit level, as in normal Huffman compression) might not
>suffer in performance, i.e. achieve a lower compression ratio,
>because it operates at a larger granularity (bytes instead of bits).
>
>So the superiority of schemes in the direction of your proposal
>remains to be demonstrated. In particular, how your dictionary is
>to be built from scratch (i.e. general guidelines for constructing
>an 'optimal' one) appears to be unclear, at least to me. On the
>other hand, for the 'one-to-one' property David Scott, who used to
>stress its need, has a functioning algorithm. In a previous thread
>I pointed out that, if one sacrifices one Huffman code symbol,
>namely the one consisting of all 0's, to take care of the
>file-ending problem, then the normal Huffman algorithm (together
>with the rule that on compression 0's at the file end may be
>truncated and on decompression 0's may be appended) can also
>function under the requirement of 'one-to-one'. I therefore look
>forward with interest to reading and discussing your future scheme.
>

   What you pointed out is wrong: sacrificing one token, the all-zero
token, as an EOF symbol has repeatedly been shown not to work.  You do
not get one-to-one compression by this method.  In summary, how do you
uncompress a file like
 11000000 00000000 00000000 10101010  assuming as shallow a tree
as possible?  How do you uncompress, in your method, when your so-called
EOF token is embedded in a file?  I don't think you get the point yet.




David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

------------------------------

Reply-To: "Larry Mackey" <[EMAIL PROTECTED]>
From: "Larry Mackey" <[EMAIL PROTECTED]>
Subject: Data Scrambling references
Date: Thu, 4 Nov 1999 14:53:18 -0600

Hi,

I have a project where we need to scramble (and unscramble) a parallel data
stream such that when the data stream is serialized, the stream is a fairly
symmetrical mix of ones and zeros.

The data does not need to be compressed or encrypted; rather, we need to
randomize the data on a bit level.

I am trying to find a scheme that encodes and decodes the data words in as
uncomplicated a manner as possible.  This is presently a bi-directional path
but we would like to be able to do this in a single direction only if
possible.  Since all the data in the stream needs to be randomized, the
decoding information needs to be extracted from the data stream itself or
from the decoding logic.

Does anyone have any suggestions, pointers to references, thoughts or
ideas??
We have been doing a number of web searches and have not found any
references that go into enough detail to understand the process enough to
replicate it.

It appears that high-speed data links, 100 Mbit/sec and above, use this
approach, but we are unable to find a detailed description anywhere of the
logic or process.  We don't have the $$ to buy all the various upper-level
reference documents called out to determine if they have the information we
are looking for.
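
One standard building block for this is a self-synchronizing
(multiplicative) scrambler: the descrambler recovers the data from the
line bits alone, with no separate seed or sync word, which matches the
single-direction requirement.  A minimal sketch follows (the tap set
x^23 + x^18 + 1 is of the kind used in serial-line scramblers; the rest
of the code is my own illustration, not any particular standard):

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t tx_state = 0x12345; /* arbitrary nonzero fill */
    static uint32_t rx_state = 0;       /* receiver starts unsynced */

    /* line bit = data bit XOR two earlier *line* bits (taps 18, 23) */
    static int scramble_bit(int d)
    {
        int out = d ^ (int)((tx_state >> 17) & 1)
                    ^ (int)((tx_state >> 22) & 1);
        tx_state = ((tx_state << 1) | (uint32_t)out) & 0x7FFFFF;
        return out;
    }

    /* the descrambler shifts in the received line bits, so after 23
     * line bits its register matches the transmitter's and it stays
     * in sync from then on -- no side information needed */
    static int descramble_bit(int line)
    {
        int d = line ^ (int)((rx_state >> 17) & 1)
                     ^ (int)((rx_state >> 22) & 1);
        rx_state = ((rx_state << 1) | (uint32_t)line) & 0x7FFFFF;
        return d;
    }

    int main(void)
    {
        int i, ones = 0, errs_after_sync = 0;
        for (i = 0; i < 4600; i++) {
            int line = scramble_bit(0);  /* worst case: all-zero data */
            int back = descramble_bit(line);
            ones += line;
            if (i >= 23 && back != 0) errs_after_sync++;
        }
        printf("ones on the line: %d of 4600 (roughly half)\n", ones);
        printf("payload errors after the 23-bit sync window: %d\n",
               errs_after_sync);        /* prints 0 */
        return 0;
    }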



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
