Cryptography-Digest Digest #103, Volume #10      Tue, 24 Aug 99 12:13:03 EDT

Contents:
  Re: Human-Readable Encryption (Newbie)
  Re: Ciphile Software (OFF TOPIC) (Anthony Stephen Szopa)
  Re: Quadibloc VI Taking Shape
  Re: NIST AES Finalists are.... (SCOTT19U.ZIP_GUY)
  Re: CRYPTO DESIGN MY VIEW (SCOTT19U.ZIP_GUY)
  Re: NIST AES Finalists are.... (Rochus Wessels)
  NIST ECC curves August document (DJohn37050)
  Re: NIST AES Finalists are.... (David Wagner)
  Re: What the hell good is a session key! (Anton Stiglic)
  Re: US export laws re Canada (W.G. Unruh)

----------------------------------------------------------------------------

From: <[EMAIL PROTECTED]>
Subject: Re: Human-Readable Encryption (Newbie)
Date: Tue, 24 Aug 1999 08:20:22 -0400

Thanks for those insights, John.

After some study I think that I understand the concept of fractionation.
How does encryption fit into all this?  The only encryption method I have
reproduced so far is XOR between ASCII characters of the key and the plaintext.
It occurs to me that any system with a base of (2^n)-1, e.g. bases
1, 3, 7, 15, ..., 255, can use this XOR encryption method.

Thus, an effective method might be:
1. convert from base 255 (ASCII) x 3 -> base 7 x 7
2. encrypt using XOr (key is base 7)
3. convert base 7 x 5 -> base 26^3

I suspect that this encryption is not terribly strong (or is it?).  Any
comments or variations are welcomed (and encouraged).
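
A minimal sketch of the digit-stream cipher proposed above (an illustration, not anyone's actual code). One caveat worth noting: XOR of two base-7 digits can fall outside 0..6 (e.g. 3 XOR 4 = 7), so XOR is only closed when the base is a power of two; addition mod the base is used below as the closed analogue.

```python
# An illustrative digit-stream cipher (not anyone's actual code).
# Caveat: XOR of two base-7 digits can leave the range 0..6
# (e.g. 3 ^ 4 == 7), so XOR is only closed when the base is a power
# of two; addition mod the base is the closed analogue used here.

def encrypt(digits, key, base=7):
    # combine each message digit with the repeating key, mod the base
    return [(d + key[i % len(key)]) % base for i, d in enumerate(digits)]

def decrypt(digits, key, base=7):
    return [(d - key[i % len(key)]) % base for i, d in enumerate(digits)]

msg = [0, 1, 2, 3, 4, 5, 6]
key = [3, 1, 4]
ct = encrypt(msg, key)
assert decrypt(ct, key) == msg
```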

BTW, great web site.  I'm just starting to dig into it.

Jeff Kanel

John Savard wrote in message <[EMAIL PROTECTED]>...
>[EMAIL PROTECTED] (wtshaw) wrote, in part:
>
>>Of course, without real keys, these are not examples of encryption, so
>>some say.
>
>But they are relevant nonetheless.
>
>Data compression at the input end, and conversion to convenient form
>at the output end are relevant to encryption.
>
>Base conversion is a form of substitution, and substitution is
>important in cryptography. Fractionation is a strong pencil-and-paper
>form of cryptography, and fractionation not involving exact factors of
>the alphabet length can indeed make things really hard.
>
>Of course, if one simply converts to another base approximately, one
>introduces some additional redundancy.
>
>As I've noted, I've fished in these waters a little myself:
>
>In extreme cases, such as 47 bits -> 10 letters, this additional
>redundancy is very slight.
>
>Otherwise, one can look for useful combinations of numbers that allow
>an intermediate step which doesn't add any bulk:
>
>128 = 125 + 3, 32 = 27 + 5, so there are two ways to convert a binary
>message to streams of base-3 and base-5 symbols. Also, 26*26 = 1 +
>27*25, and thus a very tight interweaving of a binary message, half
>converted to letters, is possible.
>
>But I make no apologies for starting from binary, instead of, say,
>from a 44-character typewriter keyboard input alphabet. Besides being
>able to encrypt pre-existing data files this way, binary, using the
>smallest base, lets me efficiently compress data using Huffman codes.
>If I used another base, the grain size would be larger. (Of course, I
>*could* use arithmetic coding, but that's not a possibility I want to
>consider.)
>
>John Savard ( teneerf<- )
>http://www.ecn.ab.ca/~jsavard/crypto.htm
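
The arithmetic behind the "no added bulk" combinations quoted above, and the 47-bit example, can be checked directly; a small verification sketch:

```python
from math import log2

# the "no added bulk" identities:
assert 128 == 5**3 + 3          # 7 bits -> three base-5 digits, 3 codes spare
assert 32 == 3**3 + 5           # 5 bits -> three base-3 digits, 5 codes spare
assert 26 * 26 == 1 + 27 * 25   # two letters vs one base-27 + one base-25 symbol

# and the 47 bits -> 10 letters case: the extra redundancy is very slight
assert 2**47 < 26**10
print(26**10 / 2**47)   # about 1.003, i.e. roughly 0.3% expansion
print(10 * log2(26))    # about 47.0 bits of capacity in 10 letters
```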



------------------------------

From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: Ciphile Software (OFF TOPIC)
Date: Tue, 24 Aug 1999 02:46:38 -0700
Reply-To: [EMAIL PROTECTED]

Tommy the Terrorist wrote:

> In article <[EMAIL PROTECTED]> [    Dr. Jeff    ],
> [EMAIL PROTECTED] writes:
> >Okay, so no one in sci.crypt has any idea about or interest in talking
> >about Ciphile Software's Original Absolute Privacy Level 3 software.
> >Why is that? Is the software not considered good? Do people have
>
> Dude, sci.crypt is supposed to be about the SCIENCE of
> cryptography, not some software program.  And talk.politics.crypto
> is only relevant if you think it might be compromised deliberately
> to serve the NSA (which is not exactly what I'd call implausible),
> or some similar political tie-in.  Why don't you have another look
> around the newsgroups, especially the comp.* newsgroups, and
> see if you can find something with some people who know more
> about your particular platform and software?

You don't have science without inquiring minds.



------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Quadibloc VI Taking Shape
Date: 24 Aug 99 13:05:47 GMT

[EMAIL PROTECTED] wrote:
: [EMAIL PROTECTED] wrote:
: : At

: : http://www.ecn.ab.ca/~jsavard/co040709.htm

: : is a description of Quadibloc VI

: A variant with 16 regular rounds (from one point of view, only four
: rounds) instead of eight is now described, and an additional error in the
: key schedule is fixed.

Key-dependent byte permutations, like those used in Quadibloc II and
Quadibloc III between stages, have been added.
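
As a rough illustration of the idea (NOT the actual Quadibloc construction; the key bytes and block size below are arbitrary), a key-dependent byte permutation can be derived from key material with a Fisher-Yates pass:

```python
# Sketch of a key-dependent byte permutation (NOT the actual Quadibloc
# construction; key bytes below are arbitrary). A Fisher-Yates pass
# driven by key material yields a permutation of the 16 block positions.

def key_permutation(key_bytes, n=16):
    perm = list(range(n))
    j = 0
    for i in range(n - 1, 0, -1):
        # fold successive key bytes into the swap index
        j = (j + key_bytes[(n - 1 - i) % len(key_bytes)]) % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def invert(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

def apply_perm(block, perm):
    return bytes(block[p] for p in perm)

perm = key_permutation(b'\x07\x13\x2a\x55')
block = bytes(range(16))
shuffled = apply_perm(block, perm)
assert sorted(shuffled) == list(block)              # still a permutation
assert apply_perm(shuffled, invert(perm)) == block  # and invertible
```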

John Savard

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: NIST AES Finalists are....
Date: Tue, 24 Aug 1999 14:27:56 GMT

In article <7pt5p9$886$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] () wrote:
>> [EMAIL PROTECTED] wrote:
>>:I would rather see time invested and work done under the assumption
>>:of a reasonable attack model. If all finalists turn out to be equally
>>:strong under this model, then other reasonable parameters can be
>>:taken into account for judging their security (simplicity, mature
>>:design philosophy, partial proofs, even trust in their designers).
>>:Other factors such as implementation flexibility and speed will count
>>:also.
>>
>>Well, an unreasonable attack model _is_ very much an example of an
>>"other reasonable parameter". It's more indicative of security than
>>many of the few other choices remaining.
>
>I agree up to a point. If a purely theoretical attack is found
>against candidate A and nothing at all is found against candidate
>B, then one can argue that A has a "flaw" that can grow in the
>future, whereas B is "perfect". There is in fact a good possibility
>(as Wagner mentions in another post) that several finalists will
>endure the second phase of the AES competition flawlessly as far as
>security is concerned.
>
>On the other hand, if an attack is found against cipher A that
>requires 2^60 plaintexts and an attack is found against B that
>requires 2^70 plaintexts, I believe that the practical significance
>of this for choosing between them is zero. Here, I would rather
>take into account parameters such as simplicity, or even, using
>Schneier's memorable phrase, give weight to the "warm and fuzzy
>feelings" of experienced cryptanalysts. Cryptology is really a very
>young and underfinanced discipline, and may still be more an
>alchemy than a science. So we should better not let fancy numbers
>attain meaning that is not really there.
>
   I don't think numbers are the end-all either. But just who does Mr.
BS think is an experienced cryptanalyst whose "warm and fuzzy
feelings" one should value for this task? Maybe we should just openly
let him and/or the NSA decide which one to use. That way they cannot
duck the real responsibility for picking a bad method. If we don't
let the NSA pick the method, they can always say they knew it was weak.
If they are honest and pick the best, then good. But at least if they are
dishonest, someone 10 years from now can point the finger at them.
 Of course we can argue then whether it was a trojan horse or whether the
NSA was just plain stupid. But I think this is a phony contest
anyway, and the idea is supposedly to pick the best. Why play games? Just let
the damn NSA pick the one they want, since I will not use it anyway.




David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: CRYPTO DESIGN MY VIEW
Date: Tue, 24 Aug 1999 14:11:49 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:

>
>I have put much energy in a previous post to illustrate my example.
>I'll repeat here, since you have evidently overlooked my essential 
>points. No matter what your modifications of the Huffman algorithm
>at the end of input processing is, from the beginning of the 
>compression process up to the vicinity of (not including) the end 
>part of the input file you are certainly following the normal 
>(unmodified) Huffman algorithm, aren't you??? (If not, please 
     NO, I have changed the method from what I used as a baseline;
see MY WEBSITE http://members.xoom.com/ecil/compress.htm
Namely, I start with a full tree of 256 symbols. Secondly, I start with
a weight of one. Thirdly, I do not add a weight of one for
each time a symbol is used. Fourthly, I swap right and left nodes so that
the longest path is a stream of all zeros and no stream of more than
8 ones can appear in a path. That is basically what is done, but I doubt
you can even follow this. One would actually need to read C.
>clearly say so!!) Now let's say we have already encoded n input 
>symbols according to the normal (unmodified) Huffman algorithm and 
>the bits obtained till present can be represented as follows:
    Since I have NOT USED AN UNMODIFIED HUFFMAN, but what
I have is clearly a Huffman of the form I described, I will continue
assuming you are using my Huffman values for the rest of the post.
>
>       ........ xxxxxxxx xxxxxxxx xxxxxxx
>
>where .... represent some bytes and the first 16 x's here are bits
>occupying two bytes and the following 7 x's are bits occupying the
>following byte which at this time point is not yet full, there being
>one bit position free. (All these bits come from encoding the n
>input symbols. Nothing(!) is said about which Huffman codes occupy
>which positions in the ensemble of bit sequences denoted above.) 
>Now the process comes to encode the n+1 th input symbol. By my 
>assumption this symbol has a Huffman code of 9 bits, so afterwards 
>we have
>
>       ........ xxxxxxxx xxxxxxxx xxxxxxxy yyyyyyyy
>
>To be concrete, let's assume a specific Huffman code, so we have
>a situation like this
>
>       ........ xxxxxxxx xxxxxxxx xxxxxxx0 10110010
>
>By assumption of my example this n+1 th symbol terminates the input
>file. Thus the output bits from the (normal) Huffman scheme exactly
>fills the last byte, the last output bit is (in our special case)
>exactly at the byte boundary. Now I repeat also the questions I asked
>in a previous follow-up: Are you (with your modifications of the
>Huffman algorithm) going to append anything to the output file,
>i.e. add something after the bit sequence shown above??? (If yes, 
>please say that and explain why!! I am convinced that it would be 
>foolish if you would add anything in this case to the bits already 
>obtained. You said you want to have the compressed file as short as 
>possible and not to have any waste, didn't you?) Are you going to 
>delete anything from the above sequence??? (If yes, please say that
>and show those bits that you delete!! I am convinced that you 
>couldn't do any deletion. The reason is that there could be another 
>possible Huffman code that also has 9 bits, e.g. 0 10110011. If you 
>delete any of the bits occupying the last byte shown above, you 
>wouldn't be able, on decompressing, to correctly recover the original 
>file, because the program wouldn't know whether the last symbol of 
>the input file is the one corresponding to 0 10110010 or the one 
>corresponding to 0 10110011 or something else. This is exactly 
>because the Huffman code has the so-called prefix property.) Now, 
>assuming that your answers to the above two questions are 'no', it 
>follows that the output file is EXACTLY as given above, there being 
>neither additions nor deletions, even though in your modifications 
>of the Huffman algorithm there are certain mechanisms that could 
>append or delete bits at the end of the output file in certain more 
>general situations. In other words, I have 'purposedly' constructed 
>my example in such a way that the mechanisms provided in your 
>modifications don't get activated in my special case. Is the above 
>now finally very clear to you??? If not, please kindly indicate 
>which point or points I have not clearly stated or what is exactly
>wrong in the lines I have written above!!
   I really don't care what kind of crap you are using in your analysis,
since I feel it has nothing to do with my method. You can talk about
conditions on Pluto and say that they affect the weather on Mars. You can
rant all you want. But the only meat in the above overbloated paragraph
was what the last byte of the file looks like. It looks like
10110010, and that is the end of that file.
>
>Now assuming that you have no counter-arguments to my claim above 
>that the end of a correct output (compressed) file can be of the 
   Again, your words like "correct" are bogus and misleading. It is what
the code does. Let me say again: there is no "correct" or "wrong"; it
is just how it works.
>form shown above, I'll construct a 'wrong' file by simply deleting 
>the last byte of the 'correct' file. So the bit sequence of the 
>'wrong' file is as follows:
>
>        ........ xxxxxxxx xxxxxxxx xxxxxxx0
      No, again and again and again: I must repeat, there is no wrong file.
In the first case you had a bit stream that happened to match the actual
final file. If you're trying to analyze what happens as the file is built, you
need to supply complete tokens. The input file is made of 8-bit inputs,
and the output will start as a string of compressed bits. So what you
have above is BOGUS. The last xxxxxxx0 is impossible, because the
output stream of Huffman tokens does not end like that. If the 9
y's were the last token in the first example, then you may have meant
that xxxxxxx_ is the bit stream of Huffman tokens we are writing to
that output file. In this case you either get xxxxxxx0 written out as
the last byte of the compressed file or, depending on what the last "Huffman"
compressed string in the x's was, you may not write anything at all.
To explain this further, in a vain attempt for you to see, think of the
last Huffman symbol in your string of x's:
           xxxxxxx1 1111111_   **note no zero** since BOGUS
           xxxxxxx1       this is what gets written out.

>
>Do you agree?? Assuming 'yes', let's start at the SAME time two
>Huffman decompression processes, one working on the 'correct' file
>and the other working on the 'wrong file', and compare step by step
        Again, you're confusing things, so let's not waste time and
follow the rest until you understand what is going on; it is
obvious you're lost at this point.

But let's assume the patterns
below represent the file, not the bit stream that you have confused above.
>what they are doing. Evidently for processing the bits represented by
>  
>        ........ xxxxxxxx xxxxxxxx xxxxxxx
>
>both processes do exactly the same, i.e. both decoding to the same 
>n input symbols, as expected. Now both processes pick up a 0 bit. 
>But 0 alone is not a valid Huffman code, because by our assumption 
>the 9 bits 0 10110010 constitute a valid code and hence (due to 
>the prefix property of the Huffman code) any valid code beginning 
>with 0 must have a length longer than 1. (Do you agree??) Now let's
>see what the two processes presently continue to do. The process 
>handling the 'correct' file goes on to pick up further bits. It does 
>not find a valid code until it picks up 8 more bits in our example. 
>(Do you agree??) At this time point it can decode the whole bunch 
>of 9 bits and obtain the n+1 the symbol of the original input file. 
>It has done its job correctly and, since the end of the file being 
>processed is reached, it terminates entirely normally. Am I right??  
>Now examine closely what the other process which handles the 'wrong' 
>file will do!! (Please kindly pay attention here, because this is 
>the CENTRAL point of our dispute!) Like the first process it has 
>picked up the 0 bit. Because 0 alone is not a valid code (as 
>explained above), it similarly wants to pick up more bits in order 
>to obtain a valid code so that it can decode. But unfortunately it 
>can't do that because it has reached the end of the file processed 
>by it and there are no more bits available. Now what should this 
>process do??? According to normal software engineering practices, 
>the program should be written such that in such a situation an 
>error message is given to the user, saying that the process 
>has encountered a premature (unexpected) end of file without being 

 I hope you can get it this time:
  the xxxxxxxx xxxxxxxy yyyyyyy  is a valid bit stream that will be written as
       xxxxxxxx xxxxxxxy yyyyyyy  for the case you used

       xxxxxxxx xxxxxxx_     is a valid bit stream that can lead to
       xxxxxxxx xxxxxxx0     for most cases this is the final compressed file

       xxxxxxxx xxxxxxx_   is a valid bit stream; let 8 1's be the last token
       xxxxxxx1              is a valid compressed file for this case


Another example in your format:
     xxxxxxxx xxx_____   this is a valid Huffman stream; suppose 10 is the last token
     xxxxxxxx x1000000   this is the compressed file out for this case

    xxxxxxxx xxx_____   in this case assume 10111 is the last token out
    xxxxxx10            this is the compressed file out
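
For readers following this dispute, here is a generic sketch (neither poster's actual code; the four-symbol code below is hypothetical) of the prefix-property point: truncating a prefix-coded stream mid-codeword leaves a decoder holding bits that match no codeword.

```python
# Hypothetical prefix-free code (NOT either poster's actual scheme),
# illustrating the prefix property: a truncated stream leaves
# undecodable leftover bits at end of file.

CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
DECODE = {v: k for k, v in CODE.items()}

def decode(bits):
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in DECODE:      # prefix property: at most one match
            out.append(DECODE[buf])
            buf = ''
    return out, buf            # buf holds any undecodable remainder

full = ''.join(CODE[s] for s in 'abcd')   # '010110111'
assert decode(full) == (['a', 'b', 'c', 'd'], '')    # clean end of stream
assert decode(full[:-1]) == (['a', 'b', 'c'], '11')  # premature end: leftovers
```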



David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: Rochus Wessels <[EMAIL PROTECTED]>
Subject: Re: NIST AES Finalists are....
Date: 24 Aug 1999 15:54:18 +0200

[EMAIL PROTECTED] writes:
> I would say that cipher B is more secure, precisely because in the
> real world 2^50 plaintexts are not available or else it is a trivial
[...]
> in the next 50 years to routinely do 2^100 work. In other words,
> 2^50 known plaintexts are not really possible, but 2^100 work may
> become feasible in the next 50 years.

Precisely what I wanted to say, but you have swapped the ciphers :-)
A was the cipher with 2^50 plaintexts, 2^50 resources,
B with a few thousand plaintexts and 2^100 resources.

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: NIST ECC curves August document
Date: 24 Aug 1999 14:28:37 GMT

The latest information (August) on NIST's suggested ECC curves can be found at
http://csrc.nist.gov/encryption.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: NIST AES Finalists are....
Date: 24 Aug 1999 07:14:56 -0700

In article <7pt6em$8lg$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
> By the way, time and memory resources are not completely
> independent. If random access memory is required then there is a
> time cost for addressing a large memory, i.e. on any given
> technology the bigger the memory the slower the access. Fundamental
> physics plays a role here: the larger the memory, the more space it
> will fill and therefore more time will be needed for a signal to
> cover the distance between memory and processor. So, a better cost
> function for computer resources should be Cost = Time * Memory^K
> with K>1.

If you arrange the memory as a binary tree -- or as a butterfly -- then
the time cost to access memory goes as O(log Memory), which suggests that
the right formula is Cost = Time * Memory * log Memory.
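
The difference between the two cost models is easy to see numerically (K = 1.5 below is an arbitrary illustrative choice, not a value from either post):

```python
from math import log2

def cost_power(time, mem, k=1.5):
    # the quoted proposal: Cost = Time * Memory^K with K > 1
    # (K = 1.5 is an arbitrary illustrative choice)
    return time * mem ** k

def cost_tree(time, mem):
    # tree/butterfly addressing: access time grows as log(Memory)
    return time * mem * log2(mem)

# for a 2^20-word memory the power law already charges ~50x more:
ratio = cost_power(1, 2**20) / cost_tree(1, 2**20)
print(ratio)   # 2^10 / 20, i.e. about 51.2
```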

------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: What the hell good is a session key!
Date: Tue, 24 Aug 1999 10:39:33 -0400

In the project I'm working on, the master key is RSA 2048-bit; session
keys (which last about a month) are 1024-bit.   We also have different levels
of master keys, but the one at the highest level is the one that is the most
important to keep private (we keep it in a place that is only physically
accessible).   It makes sense to have session keys that are not too big, so
as to be able to encrypt/decrypt fast, but master keys are usually much
bigger (slower to use, but you use them less often).

If you make the master key size equal to the session key size, you must support
your reason for doing this (it can depend on the encryption scheme
being used, and other things).

Anton



------------------------------

From: [EMAIL PROTECTED] (W.G. Unruh)
Crossposted-To: talk.politics.crypto
Subject: Re: US export laws re Canada
Date: 24 Aug 99 14:48:48 GMT

dave <[EMAIL PROTECTED]> writes:

>According to two articles in "Canadian Machinery and Metalworking", the
>USA passed amendments to the International Traffic in Arms Regulations
>in April of this year that revoked Canada's exemption from this
>regulation.

Don't know whether or not this is true.

...
>This seems to be applied to manufactured goodies, electronics, aircraft
>parts, etc, but would probably apply to the "strong" encryption
>products, too.

No. Commercial encryption (i.e. non-military) is not under ITAR any longer. It 
is under the EAR, which is controlled by the Commerce Dept. The Canadian 
exemption was still there last time I looked (a couple of weeks ago).



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
