Cryptography-Digest Digest #303, Volume #13      Sun, 10 Dec 00 16:13:01 EST

Contents:
  Re: ASM code public/private key cryptography source code (Bob Deblier)
  Re: wrapper code (Rex Stewart)
  Re: Array shuffling (Mok-Kong Shen)
  Re: document signing, XML canonicalization and why EDDF is a better    (Mok-Kong Shen)
  Re: EDDF: the intended audience (Roger Schlafly)
  Re: [Question] Generation of random keys - LONG - algorithms, randomness  ("John A. Malley")
  Re: [Question] Generation of random keys - LONG - algorithms, randomness  ("John A. Malley")
  Re: Panama question (Paul Crowley)
  Re: Array shuffling (Paul Crowley)

----------------------------------------------------------------------------

From: Bob Deblier <[EMAIL PROTECTED]@telenet-ops.be>
Reply-To: [EMAIL PROTECTED]@telenet-ops.be
Crossposted-To: comp.lang.asm.x86
Subject: Re: ASM code public/private key cryptography source code
Date: Sun, 10 Dec 2000 16:20:09 GMT

James Dorrington wrote:

> Does anyone have (or could direct me to) some source code for public and
> private key cryptography? Assembly source code preferred (x86).
>
> thx in advance
>

The BeeCrypt crypto library contains public/private key routines, and some
of the core functionality of the multi-precision integer routines has been
optimized in x86 assembler. I don't know if that is exactly what you're
looking for, but you're free to have a look at it at
http://beecrypt.virtualunlimited.com/

A new version (1.2.0) is due to be released.

Sincerely

Bob Deblier
Virtual Unlimited


------------------------------

From: Rex Stewart <[EMAIL PROTECTED]>
Subject: Re: wrapper code
Date: Sun, 10 Dec 2000 17:08:50 GMT

I agree adding a hash would be superior - but I think even
a checksum would be sufficient, as long as there is more
variability than padding with 0's provides.  In my opinion,
the known (or verifiable) plaintext isn't a problem so much
as having a last block that provides a way to catalogue
plain/ciphertext blocks.  If an opponent can force repeated
messages ending in a string of knowns (anyone remember
what WordStar files ended with? a string of ctrl-Z's)
then the last block would have only 8 possibilities.  Thus
cataloguing begins...
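For concreteness, such hash-based padding might be sketched in C as
below. The toy FNV-1a hash and the exact layout (pad bytes drawn from a
hash of the message, count of extra pad bytes in the low 3 bits of the
final byte) are my illustrative assumptions, not a standard:

```c
#include <stdint.h>
#include <string.h>

/* Toy stand-in for a cryptographic hash (FNV-1a); illustrative only. */
static uint64_t toy_hash(const uint8_t *p, size_t n)
{
    uint64_t h = 14695981039346656037ULL;
    while (n--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

/* Pad msg into out so its length becomes a multiple of 8.  Pad bytes are
   drawn from a hash of the message; the final byte keeps 5 hash bits and
   stores the count of *extra* pad bytes (0..7) in its low 3 bits. */
static size_t pad(const uint8_t *msg, size_t len, uint8_t *out)
{
    size_t p = 8 - (len % 8);               /* always 1..8 pad bytes */
    uint64_t h = toy_hash(msg, len);
    size_t i;

    memcpy(out, msg, len);
    for (i = 0; i < p; i++)
        out[len + i] = (uint8_t)(h >> (8 * i));
    out[len + p - 1] = (uint8_t)((out[len + p - 1] & 0xF8) | (p - 1));
    return len + p;
}

/* Recover the original message length from a padded buffer. */
static size_t unpad(const uint8_t *buf, size_t padded)
{
    return padded - (size_t)((buf[padded - 1] & 0x07) + 1);
}
```

Since the pad bytes depend on the whole message, an opponent who cannot
decrypt the earlier blocks cannot predict the final block's plaintext.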

I just wish the world would settle on a standard to allow
all of us to program alike.  RFC 1423 is a "request for
comments" and I think the community has commented sufficiently.
Now if someone (preferably someone people recognize) would
just advocate one.

BTW, I try to use method number 1 (from your sig block)
when writing code, but have had at least two compilers
add the bugs for me :-)


In article <[EMAIL PROTECTED]>,
  Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> Rex Stewart wrote:
> [snip]
> > I don't think the issue was ever completely settled.  The main
> > issue seems to be static padding provides a known plaintext and
> > random padding could provide a subliminal channel.
>
> Another possibility, which is sort of a hybrid of the above two
> suggestions, is to pad with bits of a hash of all previous blocks.
> Follow the pad bits with (for a 64 bit/8 byte block cipher) 3 or 6 bits
> stating how many bytes or bits of padding there is.  If your message
> lengths are always multiples of 8 bits (ie, are integral numbers of
> bytes), then the last byte of your message should always be 5 bits of
> padding, and a 3 bit integer, stating how many more bytes of padding
> there were preceding this one to bring the total message length to a
> multiple of the blocksize.
>
> Although this, like static padding, does provide some amount of known
> plaintext, you need to first decrypt all prior blocks to know what that
> known plaintext is.  Thus, there is no subliminal channel, and we avoid
> some of the drawbacks of the mechanisms like that described in RFC 1423.
>
> --
> There are three methods for writing code in which no bug can be found:
> 1) Make the code so straightforward that there are obviously no bugs.
> 2) Make the code so complicated that there are no obvious bugs.
> 3) Insist that any apparent bugs were really intentional features.
>
>

--
Rex Stewart
PGP Print 9526288F3D0C292D  783D3AB640C2416A


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Array shuffling
Date: Sun, 10 Dec 2000 18:27:06 +0100



Benjamin Goldberg wrote:
> 

> int int_shuffle_array( unsigned int * arr, unsigned int * buf, int len )
> {
>         int lt, rt, bits = 0;
>         if( len == 1 ) {
>                 buf[0] = arr[0];
>                 return 0;
>         }
>         if( len == 2 ) {
>                 if( randbit() ) {
>                         buf[1] = arr[0];
>                         buf[0] = arr[1];
>                 } else {
>                         buf[0] = arr[0];
>                         buf[1] = arr[1];
>                 }
>                 return 1;
>         }
>         do {    int i;
>                 unsigned int * bufl = buf, * bufr = buf + len;
>                 for( i = 0; bufl != bufr; ++i )
>                         if( randbit() )
>                                 *bufl++ = arr[i];
>                         else
>                                 *--bufr = arr[i];
>                 lt = bufl - buf;
>                 rt = len - lt;
>                 bits += len;
>         } while( lt==0 || rt==0 );
>         if( lt>0 ) bits += int_shuffle_array( buf   , arr   , lt );
>         if( rt>0 ) bits += int_shuffle_array( buf+lt, arr+lt, rt );
>         return bits;
> }

Could you give a (tiny) example call to ease understanding?
I am not sure it is clear that the shuffled result
is not 'correlated' with the original array in some way. 
Thanks.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: document signing, XML canonicalization and why EDDF is a better   
Date: Sun, 10 Dec 2000 18:27:13 +0100



denis bider wrote:
> 
> Mok-Kong Shen<[EMAIL PROTECTED]> wrote:
> 
> >Just for my understanding: You mean that EDDF can
> >completely replace XML, because it provides the same
> >functionalities and is more rigorous and is just as
> >well (or even better) user-friendly. Is that right?
> 
> Yes, that would be correct - except that "developer-friendly",
> "system-designed-friendly", and "security-friendly" would probably be
> more proper terms.

I am sure that you are aware that there are some competing 
works in the same direction by others. I am ignorant of 
the field in question. However, by pure chance (it's an 
almost unbelievable coincidence in timing) I got hold today of 
certain terse information that seems to be of some 
relevance concerning the competition issue:  (1) Microsoft,
Verisign and Webmethods have published XKMS (the XML Key
Management Specification) to help the deployment of digital
signatures and PKIs. (2) A number of banks are developing
IRML (Investment Research Markup Language) to facilitate 
online stock business.

It could well be the case that your EDDF is superior from
a theoretical point of view, but there is still the risk
of a lack of practical acceptance. In real life, inferior 
stuff can often be long-lived and dominate certain 
fields of practice. An example is Cobol, which is horrible 
from the viewpoint of CS theorists, yet is likely to remain 
with us till eternity nonetheless.

M. K. Shen

------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: EDDF: the intended audience
Date: Sun, 10 Dec 2000 10:04:19 -0800

denis bider wrote:
> > > its inherent disadvantage of being text-based,
> > Being text-based is an enormous advantage.
> > Something that is text based can be manipulated and handled by human
> > beings. They can look at it with their own eyes, and see what is
> > there.
> 
> I'm not sure this is entirely true. I think that this "advantage" is
> more or less an illusionary one.
> 
> As I said - if you have a large computer-generated XML, there isn't
> much that you can do without a specialized tool. I've tried - it's
> useless, and since IE5 was too clumsy to work with, I even had to
> write my own utility to display my XML documents.
> 
> Once you admit that specialized tools are needed anyway, it really
> doesn't matter whether the original format is text-based or not. Just
> as the XML viewer in IE5 is ubiquitous, and just as text editors are
> ubiquitous, an EDDF viewer/editor can be ubiquitous, too. And since
> there are all sorts of technical problems associated with having a
> text-based format, it would have been better to start off with a
> binary format in the first place. That is, in effect, what I did.

I don't follow this logic. Text files have problems if people
rewrite the file. A binary file is less likely to be rewritten.
But that doesn't make binary better. People might have to understand
that if they reformat the file then they may not be able to verify
the signature.

> I think you will agree that, ignoring the human-readability argument,
> having a text-based format only brings a bunch of problems and not
> much else.

No.

> > A special-purpose format to facilitate signatures, rather than a new
> > 'standard' to replace what people have already decided works, might
> > have some chance of acceptance.
> 
> That is why I was trying to find some support within this and other
> crypto groups. It seems, however, that people either don't know what
> XML is, or they do know what it is and they just don't want to look at
> it any other way than they're used to.

Have you found support from anyone else?

------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: [Question] Generation of random keys - LONG - algorithms, randomness 
Date: Sun, 10 Dec 2000 11:36:35 -0800


David Schwartz wrote:

> 
>         As I said, an algorithm can produce anything except unpredictability.
> By most definitions of randomness, an algorithm can produce randomness.
> 
>         DS                                            
                                                              
An interesting thread, an interesting subject, and an interesting
statement.
Actually the situation is the other way 'round - 

1) An algorithm cannot produce randomness.
 
2) An algorithm can produce unpredictability.


First, why can't an algorithm produce randomness?

Take an algorithm A acting on input I to generate output O. By
definition an algorithm always terminates after a finite number of steps
with the end result O.  By definition the actions at each step of an
algorithm are rigorously and unambiguously specified, or in other words,
an algorithm is definite.  (See Knuth, "The Art of Computer
Programming", Vol. 1, Section 1.1, Algorithms.) For the same input I the
algorithm A always generates the same output O. 

Suppose there is a random algorithm, call it R. R must generate at least
two different outputs O_1, O_2 for the _same_ input I.  Run R on I and
sometimes it outputs O_1.  Run R on I and sometimes it outputs O_2. 
Call this behavior "randomness." It's a most basic definition. It
presupposes nothing about a probability density function (i.e. are O_1
and O_2 equally likely, or is one outcome more likely than the other?)

Since R generates output we know R terminates after a finite number of
steps (or R halts.) That's consistent with R being an algorithm. 
Sometimes R generates O_1 and sometimes O_2 when it runs on
I.  Call these two cases R_o1 and R_o2.  Step by step R runs through the
same actions in case R_o1 and case R_o2, in both cases starting with I,
but in one case R halts in one state with output O_1 and in the other
case R halts in a different state with output O_2.  The result of at
least one step in the algorithm R must have come out one way in case
R_o1 but another way in case R_o2.  So there must be at least one step
in algorithm R that is NOT rigorously and unambiguously specified, so
that the results could differ between the cases.  At least one step in R
is NOT definite. But all steps of an algorithm are definite; that's
necessary for R to be an algorithm.  For algorithm R to generate random
output requires that R violate the definition of an algorithm. This is a
contradiction.

Therefore R cannot be an algorithm.  And therefore an algorithm cannot
produce randomness. 

What about a non-deterministic Turing Machine? It's "non-deterministic."
That sounds like "random." Can that produce randomness? 

No.  

In a non-deterministic Turing Machine, multiple choices for the next
state (step) may exist at a given state (step), but at that step we can
picture the Turing Machine "forking" off into as many machine copies as
there are choices. Each new copy of the machine takes one of the
possible choices and continues the algorithm from there.  And these
copies may "fork" to spawn more copies as they proceed down their path
in the algorithm.  All possible paths are explored until any one of
these copies ends up in an accept state; then the non-deterministic
Turing Machine (or algorithm) terminates. 

Run the non-deterministic Turing Machine with the same input and it will
always terminate in the same end state. Every non-deterministic
non-deterministic Turing Machine is equivalent to some deterministic
non-deterministic Turing Machine. (See Chapter 1, Section 1.2 and
Chapter 3, Section 3.2 of "Introduction to the Theory of Computation" by
Michael Sipser for example.) And as already shown a deterministic
algorithm cannot terminate in two different end states starting from the
same input and that's what's needed to generate random output.

What about a probabilistic algorithm?  It's "probabilistic."  Can that
produce randomness?

No.

A probabilistic Turing Machine is a type of non-deterministic Turing
Machine. Such an algorithm uses the output of a random process, external
to the Turing Machine, to determine the next step it will take from a
given step.  Such an algorithm contains an instruction to "flip a coin"
at a certain step and selects the next step in the algorithm as a
function of that random variable's value. Some of the possible paths are
randomly explored until the machine ends up in an accept
state; then the probabilistic Turing Machine (or algorithm)
terminates. This type of algorithm also has a finite probability of
halting with the wrong answer. (See Chapter 10, Section 10.2 in
"Introduction to the Theory of Computation" by Michael Sipser for
example.)  Running the probabilistic Turing Machine multiple times on
the same input may result in the machine halting with different
outputs.  This randomness is not manufactured by the algorithm, though;
it's due entirely to randomness from the outside source controlling the
step choices in the algorithm. 


Second, how can an algorithm produce unpredictability? 

A pseudo-random bit generator (PRBG) is by definition a deterministic
algorithm taking as input a truly random binary sequence of length k
(the seed) and generating an output sequence of length l >> k.  (See
Definition 5.3 in Chapter 5, Section 5.1 of "The Handbook of Applied
Cryptography" available at http://cacr.math.uwaterloo.ca/hac/ ).  A PRBG
always generates the same output sequence for a given input (seed). 

Unpredictability is formally defined for these deterministic algorithms
(PRBGs). 

A pseudo-random bit generator is unpredictable (a.k.a. cryptographically
secure - CS) if there is no polynomial-time algorithm A that takes the
first l bits of the output sequence of the PRBG as input and predicts
the (l+1)st bit of the PRBG output sequence with probability
significantly greater than 1/2. (See Definition 5.6 in Chapter 5,
Section 5.1 of "The Handbook of Applied Cryptography.")  A CSPRBG
expands the seed into an output sequence in such a way that an adversary
cannot efficiently differentiate the output sequence of the PRBG from a
random sequence of bits the same length as the output sequence.  A
random bit is a random variable that takes on the value 0 or 1
equiprobably - it is a random variable whose values reflect a uniform
probability distribution.  

[ Aside:  Be aware that a random variable can take on values that are
NOT uniformly distributed. The probability distribution function (pdf)
can be non-uniform; thus some values of the random variable appear more
frequently than others. For example, consider a random variable on a
Gaussian distribution.  The random variable takes on values closer to
the mean more often than it takes on values on the tails of the
distribution.  A random variable can also take on values correlated with
its previous values.  For example for a first-order Gauss-Markov random
process the current value of the random variable is correlated with its
immediate past value.  An excellent introduction to random processes is
given in Chapters 1, 2 and 3 of "Applied Optimal Estimation", written by
the Technical Staff, The Analytic Sciences Corporation, edited by Arthur
Gelb, The MIT Press, 1984.  I'd like to see the 
"Handbook of Applied Cryptography" address pdfs, cumulative distribution
functions (cdfs) and random processes more thoroughly in its next
update/release.]

The RSA PRBG and the Blum-Blum-Shub PRBG are two examples of CSPRBGs
whose output sequences satisfy this definition of unpredictability. And
they are both algorithms. (See Chapter 5, Section 5.5 in the "Handbook
of Applied Cryptography" for detailed explanations of both unpredictable
algorithms.) 
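For concreteness, here is a toy sketch of the Blum-Blum-Shub
construction in C. The tiny modulus n = 7*11 and the function name are
my illustrative assumptions only; a real CSPRBG requires large secret
primes and a seed coprime to n:

```c
#include <stdint.h>

/* Toy Blum-Blum-Shub generator.  n = p*q with p and q primes congruent
   to 3 mod 4.  p = 7, q = 11 here are for illustration only; real use
   requires large secret primes. */
#define BBS_N 77ULL

/* One step: square the state mod n and emit its least significant bit. */
static int bbs_next_bit(uint64_t *state)
{
    *state = (*state * *state) % BBS_N;
    return (int)(*state & 1u);
}
```

Running it twice from the same seed yields the same bit sequence, which
illustrates the point above: a PRBG is a deterministic algorithm, and
any unpredictability lives in the secrecy of the seed, not in the
algorithm itself.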


John A. Malley
[EMAIL PROTECTED]

------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: [Question] Generation of random keys - LONG - algorithms, randomness 
Date: Sun, 10 Dec 2000 11:49:30 -0800



"John A. Malley" wrote:
[snip]
> 
> Run the non-deterministic Turing Machine with the same input and it will
> always terminate in the same end state. Every non-deterministic
> non-deterministic Turing Machine is equivalent to some deterministic
> non-deterministic Turing Machine. 

Oops, big big typo due to the magic of cut'n'paste. I missed it! That
line should read 

"Every non-deterministic Turing Machine is equivalent to some
deterministic Turing Machine." 


John A. Malley
[EMAIL PROTECTED]

------------------------------

From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Panama question
Date: Sun, 10 Dec 2000 20:47:03 GMT

Benjamin Goldberg wrote:
> 
> I suppose this is a bit of a silly question, but...
> 
> Suppose that in using the "Panama" stream cipher, you followed every
> "pull" operation [retrieving some bits of keystream] with a "push"
> operation, feeding the plaintext that's just been enciphered back into
> the state of the cipher?  Would this form of feedback be more or less
> secure than the normal mode of Panama operation?

I would guess this would be a bad idea, though the Panama paper is
silent on modes of operation other than stream cipher, hash, and MAC.  I
should think that there should be 32 "blank pulls" between any push and
any "true pull" in any mode of operation, or some iterative guessing
attacks might become practical.
-- 
  __
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/

------------------------------

From: Paul Crowley <[EMAIL PROTECTED]>
Subject: Re: Array shuffling
Date: Sun, 10 Dec 2000 20:59:31 GMT

Benjamin Goldberg wrote:
> 
> Having recently looked at the sci.crypt FAQ, I noticed that the array
> shuffling algorithm requires that you use a modran function.
> 
> for( n = ARRLTH-1; n > 0 ; n-- ) swap( &arr[modran( n+1 )], &arr[n] );
> 
> Considering that, with an arbitrary parameter, each call to modran takes
> a nondeterministic amount of time to complete, and further is generally
> not the most efficient use of random bits, I took it upon myself to
> write my own array shuffler, which uses random bits in the most
> efficient way possible.

Pseudocode would be better than C for explaining this, but it looks like
you're doing a quicksort and replacing the comparator with a random bit
lookup.  Schneier recommends this method in the first edition of Applied
Cryptography.  However, it's biased.  In fact, *any* shuffling method
which uses a bounded number of bits has to be biased.

Let's suppose that, when shuffling n entries, your algorithm uses at
most L(n) bits from the random number generator.  Then we can treat it
as a function that maps strings of bits of length L(n) onto shuffles. 
It's unbiased if each possible shuffle turns up equally often.

But this is impossible.  There are 2^L(n) possible inputs, and n!
possible outputs, so each output must turn up 2^L(n)/n! times; but if
n > 2 then n! doesn't divide 2^L(n).
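The divisibility fact is easy to check mechanically. This small C
routine (my illustration, not from the thread) strips the factors of two
from n! and confirms that an odd factor always remains for n > 2, so n!
can never divide a power of two:

```c
#include <stdint.h>

/* Strip all factors of 2 from n! and report whether an odd part > 1
   remains.  If it does, n! cannot divide any power of two, so 2^L(n)
   equally likely bit strings cannot map uniformly onto n! shuffles. */
static int factorial_has_odd_part(unsigned n)
{
    uint64_t f = 1;
    unsigned i;
    for (i = 2; i <= n; i++)
        f *= i;                 /* exact in 64 bits for n <= 20 */
    while (f % 2 == 0)
        f /= 2;
    return f > 1;
}
```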

The bound on the number of bits that your generator uses is n(n-1)/2 -
ie worst case, every element is "compared" to every other. However,
usually it'll use on the order of n lg(n) bits.  The bias is in favour
of the shuffles that use the fewest bits - can you see why?
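For comparison, the FAQ's modran-based shuffle avoids the bounded-bit
problem by rejection sampling, at the cost of a nondeterministic running
time. A sketch in C (rand() stands in for a real random source and is
NOT cryptographic; function names are mine):

```c
#include <stdlib.h>

/* Uniform value in [0, m) by rejection sampling.  The do-while may
   loop, consuming an unbounded number of random values -- exactly the
   property that makes an unbiased shuffle possible. */
static unsigned modran(unsigned m)
{
    unsigned span = (unsigned)RAND_MAX + 1u;
    unsigned limit = span - (span % m);   /* largest multiple of m */
    unsigned r;
    do {
        r = (unsigned)rand();
    } while (r >= limit);
    return r % m;
}

/* The FAQ's shuffle: swap each element with a uniformly chosen
   earlier-or-equal position (Fisher-Yates). */
static void shuffle(unsigned *arr, unsigned n)
{
    unsigned i;
    for (i = n - 1; i > 0; i--) {
        unsigned j = modran(i + 1);
        unsigned t = arr[j];
        arr[j] = arr[i];
        arr[i] = t;
    }
}
```

With a uniform modran, every one of the n! permutations is produced with
probability exactly 1/n!, which a bounded-bit method cannot achieve.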

hope this helps,
-- 
  __
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.cluefactory.org.uk/paul/

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
