Cryptography-Digest Digest #896, Volume #10      Thu, 13 Jan 00 12:13:01 EST

Contents:
  Re: LSFR (Bo Lin)
  Re: AES & satellite example (Nicol So)
  Re: "1:1 adaptive huffman compression" doesn't work (SCOTT19U.ZIP_GUY)
  Re: RSA encrypt (Frank the root)
  Re: LSFR ("Michael Darling" noral<dot>co<dot>uk>)
  Random numbers generator ("Simone Molendini")
  Re: LSFR ("Trevor Jackson, III")
  Re: LSFR ("Trevor Jackson, III")
  Re: LSFR ("Trevor Jackson, III")
  Re: Little "o" in "Big-O" notation (Anton Stiglic)
  Re: LSFR ("Michael Darling" noral<dot>co<dot>uk>)
  UK Government challenge? ("Tim Wood")
  Re: Triple-DES and NSA??? (mdc)
  Re: "1:1 adaptive huffman compression" doesn't work (Tim Tyler)
  Re: LSFR (Scott Nelson)
  Re: LSFR ("Michael Darling" noral<dot>co<dot>uk>)
  Re: Random numbers generator (Eric Lee Green)
  Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why? (sb5309)

----------------------------------------------------------------------------

From: Bo Lin <[EMAIL PROTECTED]>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 13:28:02 +0000

D. Coppersmith, "Fast Evaluation of Logarithms in Fields of Characteristic
Two", IEEE Trans. Info. Theory IT-30, 587-594 (1984).

Michael Darling wrote:

> Anyone point me to a link describing algorithms that solve the 2^49 LSFR.
> I've been trying to find some.  Is there any other reference material
> which has details of these algorithms?




------------------------------

From: Nicol So <[EMAIL PROTECTED]>
Subject: Re: AES & satellite example
Date: Thu, 13 Jan 2000 08:41:18 -0500
Reply-To: see.signature

David A Molnar wrote:
> 
> Greg <[EMAIL PROTECTED]> wrote:
> 
> > So if the satellite has onboard symmetric keys a, b, c, ...
> > in that order, when cipher A gets the ax, then cipher B can be
> > uploaded and then be verified with its own key b, no?
> 
> Do that and I replace the new algorithm with one which
> does what I tell it to do -- *before* it has a chance to verify
> itself on the satellite.
> 
> Even if you have a random shared secret, and say "don't accept new
> ciphers unless you see this 128-bit number", that doesn't stop active
> attacks. I can wait for you to try uploading a new cipher, then cut off
> your connection. Now I can step in and add my evil data.

That's obviously not the way to do authentication--the data being
authenticated is not cryptographically bound to the authenticator.

> I like the idea brought up by David Wagner and Nicol So of
> info-theoretically secure cryptosystems as a means of certifying cipher
> updates. The problem Jerry Coffin alluded to is that if we use an
> information-theoretic MAC by itself, then we need to store an
> unbounded amount of random data for use as keys (since we have no idea how
> many times our algorithm may be broken!).

I'm not sure if that's Jerry Coffin's position and I don't want to put
words in his mouth, but the assertion that we need an unbounded amount
of shared random secret rests on a questionable assumption, namely that
we may *realistically* need to update the algorithm an unlimited number
of times.

A typical satellite has a service lifetime of something like 10 years.
(Eventually it will run out of the fuel needed to correct its orbit.) You
can put a realistic bound on how many times you might want to update the
algorithm during that period.

One last remark: using an information-theoretic approach to secure
algorithm upload does not necessarily mean "burning" shared secrets in
an extravagant manner.
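
For readers who want a concrete picture, here is a minimal sketch (Python,
purely illustrative) of a one-time, information-theoretically secure MAC of
the Carter-Wegman polynomial kind. The prime, block size and names are my own
assumptions, not anything from this thread; the point is only that each upload
consumes one fresh key pair, so bounding the number of uploads bounds the key
material that must be stored on board.

import secrets

P = 2**127 - 1                     # public prime; message blocks stay below it

def keygen():
    """One fresh (a, b) pair per authenticated upload; never reuse it."""
    return secrets.randbelow(P), secrets.randbelow(P)

def blocks(data, size=15):
    """Split the upload into integers < P; the marker bit encodes chunk length."""
    out = []
    for i in range(0, len(data), size):
        chunk = data[i:i + size]
        out.append(int.from_bytes(chunk, "big") | 1 << (8 * len(chunk)))
    return out

def mac(key, data):
    a, b = key
    t = 0
    for m in blocks(data):         # evaluate the message polynomial at the point a
        t = (t * a + m) % P
    return (t + b) % P             # blind the result with the one-time value b

# The ground station computes the tag; the satellite, holding the same (a, b),
# recomputes mac() over the received image and compares.
key = keygen()
tag = mac(key, b"replacement cipher image")

With this kind of construction, forging a tag without (a, b) succeeds with
probability on the order of (message length in blocks)/P per attempt,
regardless of the attacker's computing power, which is the
information-theoretic property being discussed.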

-- 
Nicol So, CISSP // paranoid 'at' engineer 'dot' com
Disclaimer: Views expressed here are casual comments and should
not be relied upon as the basis for decisions of consequence.

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Thu, 13 Jan 2000 14:40:55 GMT




 
  Is D. V. still there, and has he or anyone else tested the 2000110 version
of the software?  There was a bug, and the new version is out there.
Take Care


David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

I leave you with this final thought from President Bill Clinton:

   "The road to tyranny, we must never forget, begins with the destruction of the 
truth." 

------------------------------

From: Frank the root <[EMAIL PROTECTED]>
Subject: Re: RSA encrypt
Date: Thu, 13 Jan 2000 14:18:48 GMT

Thanks a lot for your patience. It was not as complicated as it might appear.
Thanks.

 Frank

--
Those who dream by day know things unknown to those who dream at night.


------------------------------

From: "Michael Darling" <michaeld<at>noral<dot>co<dot>uk>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 15:01:00 -0000

Correction to the taps: they should be at bit 49 and bit 9.


Michael Darling <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> We wish to use an LSFR to generate a time stamp for an electronic component
> we are designing.
>
> The idea is to use a very simple 2 tap LSFR which is 49 bits long with taps
> at bit 9 and bit 0.
> This would provide us a nice non-repeating sequence which we could use to
> uniquely identify each stamp.
>
> Now then:  We want to say when each stamp occurred, i.e. map each output to
> its location in the sequence.
>
> In other words we want to get an output and say that it is output 'n' in the
> sequence.
>
> Obviously, we don't want to run through the sequence in software and match
> the output we got from the hardware - as this could take some time :)
>
> Are we being naive to expect that we can do this, or is there a method that
> we don't know about?
>
> Thanks in advance for any replies,
> Mike Darling.
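
For concreteness, here is a minimal sketch of the register as corrected
above: 49 bits with feedback from bits 49 and 9, stepped Fibonacci-style.
The seed and the bit-numbering convention are my own assumptions, and whether
this tap pair really gives the full period of 2^49 - 1 is the original
poster's claim, not something verified here.

WIDTH = 49
MASK = (1 << WIDTH) - 1

def step(state):
    """One shift; the new bit is bit 49 XOR bit 9 (bits counted from 1)."""
    fb = ((state >> 48) ^ (state >> 8)) & 1
    return ((state << 1) | fb) & MASK

state = 1                          # any non-zero seed
for n in range(8):                 # n is the position ("time stamp") in the sequence
    print(n, format(state, "013x"))
    state = step(state)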



------------------------------

From: "Simone Molendini" <[EMAIL PROTECTED]>
Subject: Random numbers generator
Date: Thu, 13 Jan 2000 16:18:26 +0100

Hi all,

Where can I find C code for a *good* random number generator?

The C rand() routine seems to me to be a weak one: it appears to have a
cycle of only 32768.

Ciao, Simone



------------------------------

Date: Thu, 13 Jan 2000 10:28:50 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: LSFR

Scott Nelson wrote:

> On Wed, 12 Jan 2000 "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
> >Scott Nelson wrote:
> >> It's fairly easy to calculate what the state will be after N shifts.
> >> Going the other way is less obvious, but clearly possible.
> >> If nothing else, you can construct a massive table and get it
> >> in a single lookup.  You can trade off calculation size
> >> for table size - store every 2^24th entry and check the
> >> next 2^24 states against the table.
> >
> >I don't think that helps.  Given step number X = A*2^24+B there is no way to
> >determine A, so there is no way to decide which block of 2^24 states to scan.
> >Only if you have the step and want the state does such an index/dictionary help.
>
> Perhaps an example would help.
> Consider the 8 bit lfsr  0,2,3,5,8
> Here's all the states;
>
> 01 96 4b b3 cf f1 ee 77 ad c0 60 30 18 0c 06 03
> 97 dd f8 7c 3e 1f 99 da 6d a0 50 28 14 0a 05 94
> 4a 25 84 42 21 86 43 b7 cd f0 78 3c 1e 0f 91 de
> 6f a1 c6 63 a7 c5 f4 7a 3d 88 44 22 11 9e 4f b1
> ce 67 a5 c4 62 31 8e 47 b5 cc 66 33 8f d1 fe 7f
> a9 c2 61 a6 53 bf c9 f2 79 aa 55 bc 5e 2f 81 d6
> 6b a3 c7 f5 ec 76 3b 8b d3 ff e9 e2 71 ae 57 bd
> c8 64 32 19 9a 4d b0 58 2c 16 0b 93 df f9 ea 75
> ac 56 2b 83 d7 fd e8 74 3a 1d 98 4c 26 13 9f d9
> fa 7d a8 54 2a 15 9c 4e 27 85 d4 6a 35 8c 46 23
> 87 d5 fc 7e 3f 89 d2 69 a2 51 be 5f b9 ca 65 a4
> 52 29 82 41 b6 5b bb cb f3 ef e1 e6 73 af c1 f6
> 7b ab c3 f7 ed e0 70 38 1c 0e 07 95 dc 6e 37 8d
> d0 68 34 1a 0d 90 48 24 12 09 92 49 b2 59 ba 5d
> b8 5c 2e 17 9d d8 6c 36 1b 9b db fb eb e3 e7 e5
> e4 72 39 8a 45 b4 5a 2d 80 40 20 10 08 04 02
>
> Suppose we only know every sixteenth value
> (the left column)
> We arrange these in some convenient fashion
> to make searching easy.
>
> 01:00, 4a:20, 52:b0, 6b:60, 6f:30, 7b:c0, 87:a0, 97:10
> a9:50, ac:80, b8:e0, c8:70, ce:40, d0:d0, e4:f0, fa:90
>
> We're given a random value like f2.
>
> First we check the values that we know and
> see if any of them is f2.  Nope.
> Next, we iterate the value once to 79.
> Still no match.  79 becomes aa, then 55,
> then bc, 5e, 2f, 81, d6, and then 6b.
> We check each of these values and finally with
> 6b we find a match.  The 96th (0x60) value is 6b.
> So f2 must be 96-9 or the 87th value.

But this search process is O(R*C) where R = row length and C = column length (they
might differ in order to trade off time vs space).  But in the example the row length
and the column length are the square root of the size of the full state space.  So
the search process takes as many steps as traversing the full state space.  The only
difference is the elementary steps are comparison of states rather than generation of
states.

For large LFSRs the generation of states is an O(T*N^2) operation [T=taps, N =
bits].  But the original request was for only 49 bits and three taps.  These small
states require O(T) operations to generate.  Since T is very small, and comparison
requires O(N) operations, they are comparable, differing by only a small constant
factor.

Thus I expect the proposed search process to require about the same amount of work as
a full traverse of the state space.

Note that a full traverse of the state space is useless without comparing each
generated state against the desired state.  So the full traverse requires O(2^N)
generations and O(2^N) comparisons.  The proposed search requires O(2^(N/2))
generations and O(2^N) comparisons plus O(N*2^(N/2)) storage space.  Only if
generating a state is very expensive does the proposed search gain any performance.


------------------------------

Date: Thu, 13 Jan 2000 10:30:57 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: LSFR

Michael Darling wrote:

> Anyone point me to a link describing algorithms that solve the 2^49 LSFR.
> I've been trying to find some.  Is there any other reference material
> which has details of these algorithms?

See Golomb, "Shift Register Sequences", Aegean Park Press, revised edition
1982, ISBN 0-89412-048-4.



------------------------------

Date: Thu, 13 Jan 2000 10:40:14 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: LSFR

Michael Darling wrote:

> Because electronically it is very simple and very quick - which we need.
>
> > Why are you using an LFSR rather than a counter of some type?  If you use a
> > black counter, one that toggles 1/2 the bits on each increment, you'll see the
> > same degree of volatility as a good LFSR implementation.
> >
> >

If the LFSR is clocked by the process that samples it, and you are going to use
a "reasonable" number of such samples, say < 2^32, you can simply search the
small portion of the state space that was actually used.

If the LFSR is clocked independently you'll need to record the outputs used as
tuples {state, step#} in a dictionary.

If you expect to use an "unreasonable" number of outputs, I suggest an alternate
arrangement.  Use a simple counter to track time (step #), and run the counter
through a mixing function to scramble it.  The 49/3 LFSR would be a suitable
mixing function if you iterate it 16 times or so.

Given this arrangement you can find a time given an output by loading the LFSR
with the output and running it in reverse 16 times.
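
To make that arrangement concrete, here is a rough sketch using the 49-bit
register with taps at bits 49 and 9 discussed in this thread, iterated 16
times as suggested. The function names and the special-casing of the all-zero
counter are my own illustrative choices, not anything from the post.

WIDTH, MASK = 49, (1 << 49) - 1

def step(s):
    """One forward shift of the 49-bit register (taps at bits 49 and 9)."""
    fb = ((s >> 48) ^ (s >> 8)) & 1
    return ((s << 1) | fb) & MASK

def unstep(s):
    """Exact inverse of step(): recover the previous state."""
    bit49 = (s ^ (s >> 9)) & 1     # old bit 49 = feedback bit XOR old bit 9
    return (s >> 1) | (bit49 << 48)

ROUNDS = 16

def scramble(counter):
    """Counter -> opaque stamp.  State 0 is a fixed point of the register,
    so counter 0 is mapped to all ones (assumes the counter never reaches it)."""
    s = counter or MASK
    for _ in range(ROUNDS):
        s = step(s)
    return s

def unscramble(stamp):
    """Stamp -> counter, by running the register backwards."""
    s = stamp
    for _ in range(ROUNDS):
        s = unstep(s)
    return 0 if s == MASK else s

assert unscramble(scramble(123456789)) == 123456789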



------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Little "o" in "Big-O" notation
Date: Thu, 13 Jan 2000 10:42:14 -0500

Mike McCarty wrote:

> In article <8591f5$u8v$[EMAIL PROTECTED]>,
> Scott Fluhrer <[EMAIL PROTECTED]> wrote:
> )
> )Jeff Moser <[EMAIL PROTECTED]> wrote in message
> )news:858nb6$bdo$[EMAIL PROTECTED]...
> )> What exactly does the little "o" in "Big-Oh" notation mean? For example, I
> )> know that o(1) becomes negligible as the integer approaches infinity. I'm
> )> uncertain on how to define it.
> )
> )o(f(x)) = g(x) is true iff:
>
> [snip]
>
> One of the rules for use of o(.) and O(.) is that neither of them is
> permitted to appear on the left hand side of an equal sign.

That is not true in general as you have stated it, because the
following is a valid definition:
   o(g(n)) = {f(n) : for any positive constant c > 0, there exists a
              constant n_0 > 0 such that 0 <= f(n) < c*g(n) for all
              n >= n_0}
and the o notation is on the left ;)
One thing that is true is that something like
        f(n) = o(g(n)) is wrong;
you should write
        f(n) is an element of o(g(n)), since o(g(n)) is a set.
o(f(x)) = g(x), as it was put in the quote above, also doesn't make
sense...

Anton




------------------------------

From: "Michael Darling" <michaeld<at>noral<dot>co<dot>uk>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 15:53:55 -0000

Hi Trevor,

Thanks for your reply.

The reason we are trying to use an LSFR is that it is extremely quick in
logic terms, and we hope to use it to provide a very high resolution timer.
A normal binary counter suffers from delays caused by the propagation of
carry bits; the LSFR wouldn't suffer from this problem.

The cryptographic side of it is a side effect for us: we just wish to get
the output from the device and convert it into its position in the
sequence, thus giving us the time stamp.

Unfortunately none of us are professional mathematicians and our pure
mathematics is a tad limited.
What we really want is an algorithm we can follow and implement in code.

If we were to store every output of the sequence in a dictionary and use it
as a look-up, then in theory that works fine.  However, assuming 64-bit
storage for each 49-bit output (for efficiency in reading), we are looking
at around 4096 terabytes of storage space.  This is clearly impractical.
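
For reference, the arithmetic behind that figure (one 64-bit word per state,
binary terabytes) is just:

entries = 2 ** 49                    # every state of the 49-bit register
terabytes = entries * 8 / 2 ** 40    # 8 bytes (64 bits) stored per output
print(terabytes)                     # 4096.0, the figure quoted above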

So you see our problem.  We need a quick solution to a problem that we had
no idea was so complex :)

Regards,
Mike.





Trevor Jackson, III <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Michael Darling wrote:
>
> > Because electronically it is very simple and very quick - which we need.
> >
> > > Why are you using an LFSR rather than a counter of some type?  If you use a
> > > black counter, one that toggles 1/2 the bits on each increment, you'll see the
> > > same degree of volatility as a good LFSR implementation.
> > >
> > >
>
> If the LFSR is clocked by the process that samples it, and you are going to use
> a "reasonable" number of such samples, say < 2^32, you can simply search the
> small portion of the state space that was actually used.
>
> If the LFSR is clocked independently you'll need to record the outputs used as
> tuples {state, step#} in a dictionary.
>
> If you expect to use an "unreasonable" number of outputs, I suggest an alternate
> arrangement.  Use a simple counter to track time (step #), and run the counter
> through a mixing function to scramble it.  The 49/3 LFSR would be a suitable
> mixing function if you iterate it 16 times or so.
>
> Given this arrangement you can find a time given an output by loading the LFSR
> with the output and running it in reverse 16 times.



------------------------------

From: "Tim Wood" <[EMAIL PROTECTED]>
Subject: UK Government challenge?
Date: Thu, 13 Jan 2000 16:06:06 -0000

A news article about a GCHQ Crypto challenge:

http://news.bbc.co.uk/hi/english/uk/newsid_601000/601960.stm

you'll laugh...

tim




------------------------------

From: [EMAIL PROTECTED] (mdc)
Crossposted-To: alt.privacy,alt.security
Subject: Re: Triple-DES and NSA???
Date: Thu, 13 Jan 2000 16:17:07 GMT

On Wed, 12 Jan 2000 01:16:11 -0500, anonymous <[EMAIL PROTECTED]>
wrote:

>Did the NSA screw around with Triple-DES like they did with DES back in 
>the 70s?  How secure is it in comparison to blowfish and other 
>algorithms?

I don't have the inside dope to either confirm or deny any "screwing around"
with DES, so I'll just stick to the facts.

Triple-DES uses exactly the same algorithm as DES.  It's not a new 
algorithm.  Triple-DES simply uses two different keys and an encryption
sequence that looks like 

    C = Ek1( Dk2 ( Ek1(P)))

That is, you take the plaintext (P), encrypt with key 1, decrypt the
result with key 2, and encrypt that with key 1.  As such, it has all the
fundamental strengths and weaknesses of the original DES, but the
super-encryption gives a longer effective key length and makes brute
force cracking prohibitively time consuming.
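
As a sketch of that construction (not of any particular product's
implementation), here it is on a single 8-byte block in Python, assuming the
PyCryptodome DES primitive; a real system would use the library's ready-made
triple-DES module and a proper mode of operation rather than raw ECB.

from Crypto.Cipher import DES

def ede_encrypt(k1, k2, block):
    """C = Ek1(Dk2(Ek1(P))) on one 8-byte block, with two 8-byte DES keys."""
    d1 = DES.new(k1, DES.MODE_ECB)
    d2 = DES.new(k2, DES.MODE_ECB)
    return d1.encrypt(d2.decrypt(d1.encrypt(block)))

def ede_decrypt(k1, k2, block):
    """The inverse: P = Dk1(Ek2(Dk1(C)))."""
    d1 = DES.new(k1, DES.MODE_ECB)
    d2 = DES.new(k2, DES.MODE_ECB)
    return d1.decrypt(d2.encrypt(d1.decrypt(block)))

k1, k2 = b"01234567", b"89abcdef"
c = ede_encrypt(k1, k2, b"8 bytes!")
assert ede_decrypt(k1, k2, c) == b"8 bytes!"

Note that with k1 == k2 the middle decryption undoes the first encryption and
EDE collapses to single DES, which is why the encrypt-decrypt-encrypt ordering
was chosen: it gives backwards compatibility with plain DES hardware.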

There is no absolute answer in comparing this to Blowfish.  To my knowledge,
no general weaknesses have been found in either algorithm, but DES has
been subjected to a lot more cryptanalysis, so the fact that it has stood up
bodes well.  On the other hand, this variant of DES has an effective key
length of 112 bits (two 56-bit keys) while Blowfish can use up to 448 bits.
If you're willing to bet that no inherent weakness exists in Blowfish that has
been overlooked thus far, the longer key length makes it much stronger.

Also, dedicated machines have been designed and built to crack DES
(e.g. EFF's effort last year).  To my knowledge, similar machines have not
been demonstrated against Blowfish.  However, these machines are simply
highly-optimized brute force crackers, so if Blowfish became as widely used
as DES, the dedicated cracking machines would certainly appear.

Michael





------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jan 2000 16:14:49 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

:> : My 'No' was intended to negate your 'no longer portable', not
:> : 'non-deterministic'. Tell me in what sense the software is not
:> : portable?
:> 
:> If you have no convenient source of genuine randomness, it won't work.
:> Not every system comes with a good hardware source of random numbers.
:> If you use something at all predictable, you introduce possible problems.

: I don't understand what 'possible problems' you would have.

The attacker knows something about the data in the compressed file,
information which he need not be supplied with.  He knows it is likely
to have certain non-random stuff at its end.  This is less than ideal.

: In fact, I don't need 'true randomness', nor even 
: 'pseudo-randomness', only 'non-constancy'.

This won't do at all, IMO.

: In particular, periodicity will do perfectly well for my proposal.
: Let's take the case where the plaintext (true one, or the wrong
: one because of wrong decrypting key) is such that there need to
: be 2 filling bits.  Suppose the software is such that on the first
: use it emits 00, the second time 01, the third time 10, the fourth time
: 11, the fifth time 00, etc. Now suppose what the analyst has after
: decryption is a version with filling bits 00. He decompresses it
: to the (presumed) plaintext and compresses that back again. There
: are four different possibilities of the result. But in NO case can
: he obtain any information, because he knows that any difference
: is due to the 'idiosyncrasies' of the compression software and
: is not related at all to the 'proper' information in the file.

That doesn't appear to be correct.  Imagine the case where he has a
complete known-plaintext attack on the cypher that recovers the key,
and several consecutive known plaintexts.  He can thus determine what
padding bits have been used by the compressor.  If he is intercepting
all messages, he thus has information about what the padding bits are
most likely to be on the next message, should that message require two
padding bits.  This is information he would not normally have, which may
assist him with any attack.

Of course, if he only has one message to work on, then - as you say -
he's no better off.

Padding bits /ought/ to be as random as possible - given the constraint
that they must not make any complete Huffman symbols.

Any deviation from randomness is like preferentially padding with zeros -
it potentially gives the attacker statistical knowledge about the bits at
the end of the file.

[super snip]
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

$$$$$$$$$$$$$$$ Money lies at the root of all wealth $$$$$$$$$$$$$$$$$

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: LSFR
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jan 2000 16:33:35 GMT

On Thu, 13 Jan 2000 "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:

>
>But this search process is O(R*C) where R = row length and C = column length (they
>might differ in order to trade off time vs space).  But in the example the row length
>and the column length are the square root of the size of the full state space.  So
>the search process takes as many steps as traversing the full state space.  The only
>difference is the elementary steps are comparison of states rather than generation of
>states.
>
Ah, I think I see the confusion.
It doesn't take N steps to search a list of size N if you order it.
For example if the list is arranged in ascending order, 

01:00, 4a:20, 52:b0, 6b:60, 6f:30, 7b:c0, 87:a0, 97:10
a9:50, ac:80, b8:e0, c8:70, ce:40, d0:d0, e4:f0, fa:90

and we want to find out if 7C is on the list then we
could speed things up by doing a binary search.
Look at the 8th number.  Too big.
Then look at (8+0)/2=4.  Too small
check (8+4)/2=6.  Too small
check (8+6)/2=7.  Too big.
low is 6, high is 7 - we're done, it's not on the list.

There are considerably faster techniques for searching 
a list (like hashing), but I think binary search is
easier to understand.  Even with binary search, 
it's only O(R * log(C)).
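
Putting the whole recipe together, here is a small sketch that reproduces the
worked example from the earlier post. The step() below is one way to realise
the 8-bit "0,2,3,5,8" register (a Galois-style right shift, chosen because it
reproduces the state table quoted above); the spacing of 16 between stored
states and the bisect-based lookup follow the description in this thread.

import bisect

def step(s):
    """One shift of the 8-bit example register (Galois form, mask 0x96)."""
    lsb = s & 1
    s >>= 1
    return s ^ 0x96 if lsb else s

# Giant steps: record every 16th state together with its position.
table, s = [], 0x01
for i in range(0, 255, 16):
    table.append((s, i))
    for _ in range(16):
        s = step(s)
table.sort()                        # sort by state value for binary search
keys = [k for k, _ in table]

def position(value):
    """Sequence position of `value`: at most 16 steps, each a binary search."""
    for j in range(16):
        i = bisect.bisect_left(keys, value)
        if i < len(keys) and keys[i] == value:
            return (table[i][1] - j) % 255   # value is j steps before the hit
        value = step(value)
    raise ValueError("not a state of this register")

print(position(0xf2))               # 87, matching "f2 must be ... the 87th value"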

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: "Michael Darling" <michaeld<at>noral<dot>co<dot>uk>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 16:31:31 -0000

If you order the list, then you must store the sequence index along with
each item.  This would double the size of the data storage required.


Scott Nelson <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Thu, 13 Jan 2000 "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
> >
> >But this search process is O(R*C) where R = row length and C = column length (they
> >might differ in order to trade off time vs space).  But in the example the row length
> >and the column length are the square root of the size of the full state space.  So
> >the search process takes as many steps as traversing the full state space.  The only
> >difference is the elementary steps are comparison of states rather than generation of
> >states.
> >
> Ah, I think I see the confusion.
> It doesn't take N steps to search a list of size N if you order it.
> For example if the list is arranged in ascending order,
>
> 01:00, 4a:20, 52:b0, 6b:60, 6f:30, 7b:c0, 87:a0, 97:10
> a9:50, ac:80, b8:e0, c8:70, ce:40, d0:d0, e4:f0, fa:90
>
> and we want to find out if 7C is on the list then we
> could speed things up by doing a binary search.
> Look at the 8th number.  Too big.
> Then look at (8+0)/2=4.  Too small
> check (8+4)/2=6.  Too small
> check (8+6)/2=7.  Too big.
> low is 6, high is 7 - we're done, it's not on the list.
>
> There are considerably faster techniques for searching
> a list (like hashing), but I think binary search is
> easier to understand.  Even with binary search,
> it's only O(R * log(C)).
>
> Scott Nelson <[EMAIL PROTECTED]>



------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Random numbers generator
Date: Thu, 13 Jan 2000 09:35:12 -0700

Simone Molendini wrote:
> 
> Hi all,
> 
> Where can I find C code for a *good* random number generator?
>
> The C rand() routine seems to me to be a weak one: it appears to have a
> cycle of only 32768.

Check out http://www.counterpane.com for the "Yarrow" PRNG for the Win32 API.

If you are on the Linux or FreeBSD platforms, simply fetch bytes from
/dev/urandom (or /dev/random, if you need truly random numbers as opposed to
a good PRNG).  Because they have access to internal entropy events within the
kernel to continually seed and re-seed the generator, they produce better
results than user-mode PRNGs can.  /dev/urandom has drawn some mild criticism
because it relies on supposed properties of MD5 that are not guaranteed, but
in practice this is more a matter of theoretical interest than an actual
problem.

If you are on other Unix platforms, I tossed together one using Twofish and
MD5 that you can get at http://www.estinc.com/~eric . I'd do things
differently nowadays, since this isn't the most efficient approach in the
world, but it does work and doesn't have the cycle problem.
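
A minimal illustration of the /dev/urandom advice, in Python rather than C
for brevity (os.urandom() draws from the same kernel pool; opening the device
directly also works on Linux and the BSDs):

import os

def rand_uint32():
    """One 32-bit value from the kernel's entropy pool."""
    return int.from_bytes(os.urandom(4), "little")

# Reading the device file explicitly:
with open("/dev/urandom", "rb") as f:
    sample = f.read(16)

print(rand_uint32(), sample.hex())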


-- 
Eric Lee Green                         [EMAIL PROTECTED]
Software Engineer                      Visit our Web page:
Enhanced Software Technologies, Inc.   http://www.estinc.com/
(602) 470-1115 voice                   (602) 470-1116 fax

------------------------------

From: sb5309 <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,alt.security.pgp
Subject: Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why?
Date: Fri, 14 Jan 2000 00:43:36 +0800

What is "remote document processing business - invoices, price-lists,
technical drawings etc." ?

I am curious. Thanks.


 
> I have a friend who is in the remote document processing business - invoices,
> price-lists, technical drawings etc.  His software is NT based and normally
> works over a LAN/WAN configuration.  He made the statement to me recently that

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
