Cryptography-Digest Digest #958, Volume #9       Sat, 31 Jul 99 00:13:03 EDT

Contents:
  How to write REALLY PORTABLE code dealing with bits (Was: How Big is a Byte?) 
(Guenther Brunthaler)
  Re: How to write REALLY PORTABLE code dealing with bits (Was: How Big is a Byte?) 
(Greg Comeau)
  Re: How Big is a Byte? (was: New Encryption Product!) (Roger)
  Re: How to write REALLY PORTABLE code dealing with bits (Was: How Big is a Byte?) 
(Mr. Leo Yanki)
  Re: Hash (One-Way) Functions (Mr. Leo Yanki)
  Re: How Big is a Byte? (was: New Encryption Product!) ("Douglas A. Gwyn")
  Re: Modified Vigenere cipher (Alexander Demin)
  Re: Q: Does ElGamal require that (p-1)/2 is also prime like DH? 
([EMAIL PROTECTED])
  Re: Cryptonomicon - low priority posting (Helge Horch)
  Re: the defintion of Entropy ("Mark Hammer")
  The Onega Cipher (wtshaw)
  Re: OTP export controlled? (Jerry Park)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Guenther Brunthaler)
Crossposted-To: 
alt.folklore.computers,alt.comp.lang.learn.c-c++,comp.lang.c++,microsoft.public.vc.language
Subject: How to write REALLY PORTABLE code dealing with bits (Was: How Big is a Byte?)
Date: Fri, 30 Jul 1999 22:01:43 GMT

A PRACTICAL GUIDE TO WRITING PORTABLE CODE
THAT DEPENDS ON SPECIFIC BIT SIZES IN C/C++
(c) 1999 by Guenther Brunthaler

===========================
This is an excerpt of an article I recently posted on this topic for
free use and distribution by the USENET community.
===========================

In cryptographic applications as well as many other situations
integers of specific bit sizes are required.

But how does one write code dealing with bits in a portable fashion?
Bits make a byte, right? And bytes always have 8 bits, or so most
people believe.

But is that correct? How is a byte defined, anyway?

Well, a "byte" is not defined as 8 bits. It can have ANY number of
bits ("binary digits"), although on most machines it has 8.

However, I know that on some older mainframes, bytes used to have
just 6 bits.

Usually, a byte is the smallest unit of memory a processor allows to
address directly.

If you have to obtain the number of bits in a byte in a C program,
just include the ANSI-C header file

#include <limits.h>

and use the constant

CHAR_BIT

for that purpose: ANSI-C defines that 'char' is mapped to a
processor's byte, however big that may be on this particular processor
(although certain minimum requirements have been specified by the ANSI
committee).

So, if you want to know how many bits there are in an 'int', just
write

CHAR_BIT * sizeof(int)

instead of #defining it for your specific processor.

And, please, avoid the Windows misconception that "BYTE" is a good
name for an 8 bit integer, "WORD" for 16 bits, "LONG" for 32 bits,
and so on.

As outlined above, a 'byte' on a machine need not have 8 bits. Also,
the signedness of a byte can be interpreted differently on each
machine, just as for a plain 'char' in C (as opposed to 'signed char'
and 'unsigned char').

The term 'word' actually stands for 'machine word' in most contexts,
not just for "16 bits" as in Windows. A machine word is usually
defined as the native unit of memory access (or standard register
size) of a processor. So 8 bit processors use 8 bit words, 16 bit
processors use 16 bit words, 32 bit processors use 32 bit words, and
64 bit processors use 64 bit words. A machine word is typically mapped
to 'int' by each C implementation.

The name "LONG" (or "DWORD") is a bad choice too, because what seems
long for one processor is rather short for another. In fact, such name
assignments are basically MEANINGLESS.

So, what better choices are available?

Until a few years ago, I defined data types such as
U8, U16, U32 etc. for unsigned integers and S8, S16, S32 for signed
integer types of a specific bit size.

The names are pretty descriptive and I was quite happy with this,
until I found a FAR BETTER replacement for my self-defined types:
the soon-to-come standard header file "inttypes.h".

This file solves all the problems associated with assigning bit sizes
to integers - and provides even more.

Although it has not yet (1999) been formally approved as a standard, I
have found it to be part of many compiler distributions already. But
even if this is not the case with your compiler, it is well worth the
effort of downloading some version of inttypes.h from the Internet and
customizing the typedefs until they match your machine (as I did).

But what makes inttypes.h so powerful?

First of all, it provides integers with specific signedness and bit
size: uint8_t is an unsigned integer exactly 8 bits wide, int32_t is a
signed integer exactly 32 bits wide, etc.

So this header file provides a complete, portable replacement for all
those fancy "BYTE", "DWORD" etc. typedefs, and also uses a consistent
naming style (think of "size_t" or "clock_t").

It can also be used to access implementation-specific extensions in a
portable way by mapping them. For instance, MS Visual C provides an
'__int64' keyword for declaring a 64 bit integer. In this case,
inttypes.h will contain a line

typedef unsigned __int64 uint64_t;

that maps the implementation-specific type to the appropriate
inttypes.h standard type. Or, on some UNIX compiler for the MERCED
processor,

typedef unsigned long long uint64_t;

may do the same. However it is defined, if you use 'uint64_t' and
inttypes.h, your code will run on both machines without modification.

The inttypes.h header file also defines minimum and maximum constants
for its types, such as UINT32_MAX.

But those constants do more than just define minimum or maximum
values: They can also be used to check whether a machine (or compiler)
provides a specific bit-sized integer. For instance,

#include <inttypes.h>
#ifdef UINT64_MAX
 typedef uint64_t bitblock64;
#elif defined(UINT32_MAX)
 typedef struct {uint32_t b1, b2;} bitblock64;
#else
 #error "this code requires at least a 32 bit integer type"
#endif

can be used to check whether a machine provides integers of specific
sizes.

But even this is not all that inttypes.h provides:

It also defines integers of a given minimum size or the fastest
integer types that can contain at least a specific number of bits.

What's the difference?

In the example above, 'bitblock64' had to be declared as an object
that was EXACTLY 64 bits in size. But actually, such requirements are
usually only present when directly accessing hardware devices.

For most purposes, however, it is often only important that a variable
can hold AT LEAST the specified number of bits, but it does not hurt
if the integer actually contains more bits.

As an example, some RISC processors simply do not have a 16 or 32 bit
data type. They support 8 bit bytes and 64 bit machine words - and no
integers of any other size.

So, if your code insists on using

uint16_t var;

it cannot be compiled on that machine. This may be acceptable if your
code REALLY depends on an integer having EXACTLY 16 bits, as that
particular processor cannot provide a 16 bit integer.

But it is a completely unnecessary portability hazard if your code
merely wants to use some 16 bits of an integer. For this case,
inttypes.h provides two better choices:

uint_fast16_t var1;
uint_least16_t var2;

The first declaration declares a variable 'var1' of the fastest
unsigned integer type provided by the compiler that can store at least
16 bits. In our example, this would be mapped to a 64 bit integer (as
this can store 16 bits as well).

The second declaration declares a variable 'var2' of the shortest
unsigned integer type provided by the compiler that can store at least
16 bits. In our example, this would also be mapped to a 64 bit
integer, because that machine has nothing smaller above 8 bits.

But the following

uint_fast8_t var1;
uint_least8_t var2;

would be mapped differently: While 'uint_fast8_t' is still mapped to a
64 bit machine word (faster than byte processing on this example RISC
processor), 'uint_least8_t' will be mapped to an 8-bit byte!

As a guide to what flavour of integer to use in which situation, I
suggest the following:

Avoid using types such as 'int16_t' directly unless you are accessing
hardware registers or external APIs with specific bit size
requirements (which should provide their own header files anyway).

For all normal applications, only use the '(u)int_fastXXX_t' types for
performing calculations and all normal processing, and use the
'(u)int_leastXXX_t' types where memory size should be minimal
(typically as elements in large arrays).

This is similar to using 'double' and 'float' in most implementations:
'double' is usually the faster and more native type and should be used
for all calculations, while 'float' may use less memory and may be the
better choice for storing large amounts of data in arrays, lists and
so forth.

Following this path will make your applications portable to the
highest degree possible (at least concerning bit sizes).

However, there are also pitfalls a developer unfamiliar with those
concepts should be aware of:

The most frequent problem is MASKING. For instance, in a function

uint_least16_t add16(uint_fast16_t a, uint_fast16_t b) {
 return (uint_least16_t)(a + b);
}

you would expect the return value always to be in the range 0 to
65535, wouldn't you?

As long as the machine your program is running on provides a 16 bit
integer, you will probably be right.

But if it does not, as with our example RISC processor, then
'uint_least16_t' will be mapped to a larger type, such as a 64 bit
word, and the return value can very well be larger than 65535!

A simple solution is to use

return (uint_least16_t)((a + b) & 0xffff);

so that the code will work on any machine - but this will be an
unnecessary operation if the machine DOES provide 16 bit native
integers.

So, the best solution is to provide a macro such as

#ifdef UINT16_MAX
 #define MASK16(x) ((uint16_t)(x))
#else
 #define MASK16(x) ((x) & UINT16_C(0xffff))
#endif

and change the statement to

return (uint_least16_t)MASK16(a + b);

This way, there will be no overhead if the machine provides a native
16 bit integer type, but it will be masked as required if the machine
does not.

The macro definition above exploits the fact that UINT16_MAX is only
defined if the machine supports uint16_t, that is, a native unsigned
16 bit integer type. The macro just performs a cast that clips off the
unwanted bits in that case.

Otherwise, the machine will use some larger native integer type, and
masking is required, which the macro then performs. Also note the

UINT16_C(0xffff)

macro that has been used in the definition: Such (u)intXXX_C(value)
macros are another feature of inttypes.h. They add any necessary type
suffix to the argument value to make it a valid literal constant for
the indicated type.

An example of such a suffix is "L" for 'long'. This is very handy,
because this way your code does not depend on what a compiler
considers to be a 'long'.

For instance, on 16 bit MSDOS compilers a number such as

const uint_fast32_t n = 80000L;

should be defined with the suffix 'L', because it is too large for an
int (16 bits on such systems), and should therefore be a 'long'
constant - thus the suffix 'L'.

However, on a hybrid 32/64 bit system with 32 bit machine words,

const uint_fast32_t n = 80000;

would be the appropriate way to write it, as 'long' may refer to a 64
bit constant on such a machine (an 'int' is 32 bit here).

Anyway, with inttypes.h you can forget all about this and just write

const uint_fast32_t n = UINT32_C(80000);

which will add the 'L'-suffix automatically where necessary.

There are even more features in inttypes.h, such as the fastest,
shortest or largest integer types without a specific bit size, and
more, but I think this article is long enough now.

Download a copy of inttypes.h and look at it on your own; once you
have experienced its merits, you will never want to be without it.

Whether you are writing cryptographic algorithms, multi-precision
arithmetic routines or designing portable subfunctions of hardware
device drivers - inttypes.h can help you make your code easier to
read, easier to understand, easier to port and easier to maintain.

And if you happen to know someone who is a member of the ANSI/ISO or
other standards committee involved in standardizing C/C++, please
encourage him or her to consider suggesting inttypes.h for inclusion
into the standard!


Greetings,

Guenther
--
Note: the 'From'-address shown in the header is an Anti-Spam
fake-address. Please remove 'nospam.' from the address in order
to get my real email address.

In order to get my public RSA PGP-key, send mail with blank body
to: [EMAIL PROTECTED]
Subject: get 0x2D2F0683

Key ID: 2D2F0683, 1024 bit, created 1993/02/05
Fingerprint:  11 71 47 2F AF 2F CD F4  E6 78 D5 E5 3E DD 07 B5 

------------------------------

From: [EMAIL PROTECTED] (Greg Comeau)
Crossposted-To: 
alt.folklore.computers,alt.comp.lang.learn.c-c++,comp.lang.c++,microsoft.public.vc.language
Subject: Re: How to write REALLY PORTABLE code dealing with bits (Was: How Big is a 
Byte?)
Date: 30 Jul 1999 22:21:13 -0400
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]> [EMAIL PROTECTED] (Mr. Leo 
Yanki) writes:
>[EMAIL PROTECTED] (Guenther Brunthaler) wrote:
>
>>... "byte" ... can have ANY number of bits
>
>A byte has exactly eight bits and can have 256 different values.

Not so.

>A general term for any size collection of bits is "binary number".

And a byte, then, is one particular instance of that.

>Please don't attempt to confuse people by trying to spread your
>personal redefinitions of commonly used technical terms.

Your definition does not represent the common definition of byte, and
hence you are the one confusing the term with your personal
definition. But merely trading such assertions is not helpful.

A byte is not eight bits.  What IBM did or didn't do with the term
_decades ago_ is no longer relevant.

If nothing else, a byte is not defined as eight bits in C or C++.
And Guenther has already provided accurate information from Standard C
and Standard C++ on this point.  If you believe otherwise,
find quotes in either Standard C or Standard C++ to the contrary.
Otherwise, let's not debate this as it will take up too much air time
with no real benefit. 

- Greg
-- 
       Comeau Computing, 91-34 120th Street, Richmond Hill, NY, 11418-3214
     Producers of Comeau C/C++ 4.2.38 -- New Release!  We now do Windows too.
    Email: [EMAIL PROTECTED] / Voice:718-945-0009 / Fax:718-441-2310
                *** WEB: http://www.comeaucomputing.com *** 

------------------------------

From: [EMAIL PROTECTED] (Roger)
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Sat, 31 Jul 1999 02:18:10 GMT

1010 0101 

NEW !!! URL:

http://members.xoom.com/enigma67/pages/index.htm


VISIT "THE REVELATION"

HIGHLIGHTS:

ANCIENT EGYPTIAN LIGHT BULB

LUCIFERIC SYMBOLS IN GOVERNMENT CENTER WASHINGTON D.C. , AVEBURY,
ENGLAND
AND CYDONIA MARS

THE PLANETARY GRID AND CROP CIRCLES

THE NEW WORLD ORDER

Visit "The Revelation"

On Sat, 24 Jul 1999 04:31:12 -0400, [EMAIL PROTECTED] wrote:

>wtshaw wrote:
>> 
>> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>> 
>> > wtshaw wrote:
>> > >
>> > > Zero has no value in itself as it expresses the absence of a number in a
>> > > particular place.
>> >
>> > There is a difference between zero the number and zero the digit.  You
>> > are using the second to replay to the first.
>> >
>> Nothing=nothing... I consider you above argument a NULL hypothesis.
>
>You consider the digit as the same as the value?
>
>I suppose you also believe NUL == NULL?


------------------------------

From: [EMAIL PROTECTED] (Mr. Leo Yanki)
Crossposted-To: 
alt.folklore.computers,alt.comp.lang.learn.c-c++,comp.lang.c++,microsoft.public.vc.language
Subject: Re: How to write REALLY PORTABLE code dealing with bits (Was: How Big is a 
Byte?)
Date: Sat, 31 Jul 1999 01:52:28 GMT

[EMAIL PROTECTED] (Guenther Brunthaler) wrote:

>... "byte" ... can have ANY number of bits

A byte has exactly eight bits and can have 256 different values. A general
term for any size collection of bits is "binary number". Please don't
attempt to confuse people by trying to spread your personal redefinitions
of commonly used technical terms.
-- 
"Mr. Leo Yanki"     better known as [EMAIL PROTECTED]
 01  234 56789      <- Use this key to decode my email address.
                    Fun & Free - http://www.5X5poker.com/

------------------------------

From: [EMAIL PROTECTED] (Mr. Leo Yanki)
Subject: Re: Hash (One-Way) Functions
Date: Sat, 31 Jul 1999 01:42:15 GMT

[EMAIL PROTECTED] wrote:

>Ok, I have a question:
>I am not sure about MD5, but SHA can take an input of over 1000
>characters long.  The digest of this input is only 20 characters
>long...now if these algorithms have very few collisions, how is it
>possible for a inputs over 1000 characters long to all have a unique
>hash only 20 characters long?

For either of the cryptographic hash algorithms you mentioned, it's merely
very difficult, not impossible, to discover any of the numerous possible
inputs that will produce any given output.
-- 
"Mr. Leo Yanki"     better known as [EMAIL PROTECTED]
 01  234 56789      <- Use this key to decode my email address.
                    Fun & Free - http://www.5X5poker.com/

------------------------------

Crossposted-To: alt.folklore.computers
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Fri, 30 Jul 1999 20:14:20 GMT

Patrick Juola wrote:
> I can certainly recognize that the zero-based indexing conflicts with
> standard mathematical practice and is difficult to present to students.

Standard mathematical practice is to index over a discrete set.
Summations indicate the relevant range for the indices, which
often start at 0, depending on the application.  And it's
quite common in relativity work to index 0,1,2,3 with 0 being
the time coordinate (thus the spatial subspace has indices
1,2,3; here is an example of both conventions being used
simultaneously).

The real lack in C array indexing is a way to specify arbitrary
upper/lower bounds, stride, etc. and let the compiler do the
work to index storage efficiently.  (One can do this with macros
but why should we have to?)  There is a new kind of array in C9x
that better supports this stuff.

------------------------------

From: [EMAIL PROTECTED] (Alexander Demin)
Subject: Re: Modified Vigenere cipher
Date: Thu, 29 Jul 1999 21:43:00 GMT

On Thu, 29 Jul 1999 07:04:32 -0700, Jim Gillogly <[EMAIL PROTECTED]> wrote:

>> P.S. I can't post the crypted text here because I've got only russian
>> translation of the original text.
>
>Is the problem translated into a corresponding Russian version?

Yes, the author of the Russian translation of Wetherell's book did
solve the original English problem, and he published the method he
used. The method is based on the binomial (I guess that's the right
English name) distribution. But I can't reproduce this method, because
the problem text is too short to give an exact result; there are many
possible variants of the result. I tried to pass them through a
dictionary, but found no matches.

--
/ad ([EMAIL PROTECTED])

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Q: Does ElGamal require that (p-1)/2 is also prime like DH?
Date: Sat, 31 Jul 1999 02:31:05 GMT

Anton Stiglic  wrote:
 Bob Silverman had written:
> > Thus, primes of the form 2q+1 , q prime,  are indeed much rarer than
> > primes in general.
>
> Of course they are, Bob; this is what started this news discussion!
> But thanks for pointing it out again!

Bob's post contained something new to the thread - a
mathematical expression for the number of primes of
this form.

[...]
> > Bzzt. Wrong.  Thank you for playing. Characterizing p = 2q+1
> > as a 'probable prime' because it has this special form "Isn't
> > even wrong".  It has NOTHING to do with probable primes.  p is
> > a probable prime if it satisfies a^(p-1) = 1 mod p  for (a,p) = 1.
> > Your statement "With this form, p is what we call probably prime"
> > is nonsense.
>
> Bzzt, Double Wrong!!!   Go back to the minors.
> This statement has NOTHING to do with probable primes!
> A number p is said to be a probable prime if it is believed to be
> a prime number due to the fact that it passed a probabilistic
> primality test

Violent agreement.  What Bob disagreed with was:

[Anton]
| Anyways,  if you are looking for a prime of the
| form p = 2q + 1, you start by computing q.
| With this form, p is what we call probably prime

I think we all agree that we use the imperfect term
"probable prime" for numbers that pass probabilistic
primality tests.  I suspect Anton made a simple typo
in calling p probably prime just because q = p/2-1 is
prime.

[...]
> > If q is indeed prime,  then you can trivially PROVE p is prime.
> > Since all the factors of p-1 are known, all you need do is
> > demonstrate a primitive root. In fact, all you need do to PROVE
> > primality is to find a depending on r such that for each r|p-1
> > one has a^((p-1)/r) != 1 mod p but a^(p-1) = 1 mod p. (Selfridge).

> You have an exact reference to this, showing the complexity of the
> algo so as to compare it to using Miller-Rabin?

Checking whether a given a has this property takes
about as long as one Miller-Rabin iteration.  The
only remaining question is how long it takes to
find such an a, given that p and q are prime.  The
number of elements of order p-1 is
totient(p-1) = totient(2q) = (2-1)(q-1) = q-1, which is
roughly half of all candidates.  So if we choose
candidates for a at random, it will take an average
of about two tries to prove p is prime.

Or we can use the test from fact 4.59 in the Handbook of
Applied Cryptography (page 152).  If p is prime, it will
almost surely terminate after one modular exponentiation
(and a trivial GCD).  The test then guarantees that if q
is prime then p is prime.  It takes as long as one
iteration of Miller-Rabin.

--Bryan


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: [EMAIL PROTECTED] (Helge Horch)
Subject: Re: Cryptonomicon - low priority posting
Date: Fri, 30 Jul 1999 22:50:58 GMT

[EMAIL PROTECTED] (Wolf) wrote:

>For a novel, it may seem daunting at 900-plus pages, but it
>reads quickly and is lots of fun.

<nod> Still, there are some lovable nits I can't resist picking:

p.55: "And what if new mathematical techniques are developed that can
simplify the factoring of large prime numbers?"

I'd have chuckled, but I think Bill Gates' ghostwriter pulled that one
before.

I guess the honorable members of this group would like to argue over
p.809: "[...] sequence generators -- which is to say, machines for
spitting out series of pseudo-random numbers, which is exactly what a
one-time pad is."

But I'm still puzzled by the 40 wave patterns on p.118. Has anyone
made sense of them? If so, please don't explain yet. (I really enjoyed
being able to read p.20 of Coupland's "Microserfs" after a month-long
struggle.)

Still, I'm already eagerly awaiting the sequel.

Cheers,
Helge
-- 
@^u`a#$# @:^u`a#$# 32+16u`a ! set up F1 !

------------------------------

From: "Mark Hammer" <[EMAIL PROTECTED]>
Subject: Re: the defintion of Entropy
Date: Fri, 30 Jul 1999 19:24:37 -0700

Even though it isn't the most intuitive definition, it is not non-intuitive.

--


Mark Hammer
[EMAIL PROTECTED]
http://free.prohosting.com/~maqua/
Keith A Monahan <[EMAIL PROTECTED]> wrote in message
news:7no9rj$drv$[EMAIL PROTECTED]...
> If that was an intuitive approach, I'd hate to see a non-intuitive
approach.
>
> Keith
>
> Anton Stiglic ([EMAIL PROTECTED]) wrote:
>
> : I have seen some bad usage of the word entropy, so I thought I'd post the
>
> : definition.
>
> : There are two ways of considering entropy, one is mathematical (through a
>
> : set of axioms), the other is intuitive, I present the last one here:
>
> : Consider a source S that spits out bits,   S -> 1 0 0 1 0 1 0 1 1
> : One interesting problem is to characterize the output.
> : Say that the probability of outputs are p_0 = Pr(0) and p_1 = Pr(1).
> : We would like to know the uncertainty about the output of the source
> : S, that is, the entropy of S.
> : The entropy of S can be considered as "the average length, in bits,
> : needed to represent the output (a bit) of S".
> : In other words, if S spits outputs with probability p_0, p_1,  then
> : the entropy of S (H(S)) is
> : H(S) = H(p_0, p_1) = p_0 *lg_2 (1/p_0) + p_1*lg_2(1/p_1)
>
> : Entropy can be generalized to a source that spits out elements of
> : a different base.  Say S spits outputs with prob p_0, p_1, p_2, ...,
> : p_n,
> : then the entropy of S is
> : H(S) = H(p_0, p_1, ..., p_n) =   sum(i=0; i<=n; i++)  p_i*lg_2(1/p_i)
>
> : numbered examples:
> :    S_0 -> 0000000...   (S_0 outputs a constant)
> :     then  p_0 = 1 and p_1 = 0 and we have H(S_0) = 0;
>
> :    S_ur -> 10010110  (S_ur outputs a bit chosen uniformly at random)
> :    then p_0 = 1/2 and p_1 = 1/2 and we have H(S_ur) = 1;
>
>
> : Anton.
>



------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: The Onega Cipher
Date: Fri, 30 Jul 1999 20:48:20 -0600

This is another in the series of base translation ciphers, base 38 to
64 this time. The significance of the base 38 front end is that it
handles two cases of 36 characters each, unlike my prior ones: base
100, which worked mainly with digits (most characters merely offset
ASCII), and base 27, which was little more than the alphabet and a
space character.  With minor changes, the base 38 front end can easily
be modified for other dual case situations.

In Onega, a shift character is used to switch between the two 36
character sets, which include the alphabet and ten pairs of useful
punctuation.  This particular cipher is not digit friendly; I simply
chose to bypass that area.  However, standard non-numeric vanilla
email is possible.

I could have substituted within the 38 set, but chose not to.  In
Onega, 8 plaintext characters, after formatting, are converted to 42
bits, which can be transposed according to a key.  Output is in groups
of 7 base 64 characters.  The default keys are as follows:

Trans(On): abcdef ghijkl mnopqr stuvwx yz0123 456789 ()[]{}
Subs(On): abcdefghijklmnopqrstuvwxyzABCDEF
          GHIJKLMNOPQRSTUVWXYZ0123456789+/

As before in these programs, the keys are generated by a THF
(transparent hash filter), which can read a key directly into the
buffer, or make one or both of the keys from text.   42 plus 64 is
106, the maximum number of characters that can be used to make both
keys at once.  Blank lines between paragraphs are encoded as two
spaces.

To see what strange keys and preformating actually look like, I'll use
this sentence to make some keys, and also, show you the formatted and
encrypted forms:

`to=see=  what=str  ange=key  s=and=pr  eformati  ng=actua  lly=look 
=like,=`  i'll=use  =this=se  ntence=t  o=make=s  ome=keys  ,=and=al 
so,=show  =you=the  =formatt  ed=and=e  ncrypted  =forms`.

9GNlQs2 aeiowtO t7KyOgp kDJMe2t d4wPLLs j0Atugq 7H3UULs ZJ7hR+f pGx9ykl
toQnG13 5DdIk/g 17z1sr5 +4t+sft cYfICuF 3EOiQQU Z2MjHeE T2LJ7qW 6a1tCbK
9oxSMnh B2LnRZW 

Trans(On): }z5ij] qa8692 bwoksl {7cxh1 3mn04v d(rype g)[ftu
Subs(On): 8zKLvAtwg/BC5Th0quUViH1jIek2MEax
          lWNFmyn9XbJfcDYOZrP7+Q63sopRSGd4

Decrypted and postformatted:

To see what strange keys and preformating actually look like, I'll use
this sentence to make some keys, and also, show you the formatted and
encrypted forms:

If all goes according to plan, the next cipher appears to promise
considerable utility.
-- 
It is fine and dandy to choose to serve the government. It is fine 
and dandy to obtain government services.   It is still another thing 
to find some in the crypto field have been serviced by government, 
with contemplations of how they might do it better in the future.

------------------------------

From: Jerry Park <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: talk.politics.crypto
Subject: Re: OTP export controlled?
Date: Fri, 30 Jul 1999 23:28:48 GMT

John Savard wrote:

> [EMAIL PROTECTED] (W.G. Unruh) wrote, in part:
> >"Douglas A. Gwyn" <[EMAIL PROTECTED]> writes:
>
> >>It is ludicrous to think that export regulations can really keep
> >>foreigners from implementing decent encryption.
>
> >Not their purpose. Their purpose is to prevent US residents from providing
> >foreigners with decent encryption. What the foreigners do on their own
> >is not a purpose of the regulations.
>
> The regulations, being enacted by a body without the necessary
> jurisdiction, do not limit what foreigners can do on their own.
>
> However, the _purpose_ of the regulations is clearly to ensure
> foreigners will have and use less powerful encryption, even if
> regulating the help US residents can give them is a very limited and
> inadequate means to that end.
>
> John Savard ( teneerf<- )
> http://www.ecn.ab.ca/~jsavard/crypto.htm

When a law, rule, or regulation clearly cannot accomplish its stated purpose,
and it is obvious from the beginning that it cannot, it is reasonable to infer
that its real purpose is not the stated purpose -- whatever the real purpose
may be.

Otherwise an intelligent examination of the situation would necessarily
conclude that the implementors of the law, rule, or regulation were
incompetent. And that is far less likely than that the regulation's stated,
ineffective purpose is not really its purpose.

--
Jerry Park



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
