Cryptography-Digest Digest #217, Volume #10      Fri, 10 Sep 99 15:13:03 EDT

Contents:
  Re: 512 bit number factored (Bob Silverman)
  Re: Looking for Completely-Free Strong Algorithms (John Savard)
  Re: some coder/hacker help please? (John)
  Re: "NSA have no objections to AES finalists" (Derek Bell)
  Re: GnuPG 1.0 released (Jerry Coffin)
  Re: Looking for an asymmetric system (Jerry Coffin)
  Re: fun about FIPS74 (John Savard)
  Re: some coder/hacker help please? (Tom St Denis)
  Re: Self decimated lfsr (Medical Electronics Lab)
  Re: H.235 Keys from Passwords algorithm (Medical Electronics Lab)
  Re: Double encryption is patented (mabey) (Mok-Kong Shen)
  Re: Double encryption is patented (mabey) (John Savard)
  Re: unix clippers that implement strong crypto. (Armin Ollig)
  Re: some information theory (SCOTT19U.ZIP_GUY)
  Re: Looking for an asymmetric system (Tom St Denis)
  Re: Looking for an asymmetric system (DJohn37050)
  Linux on the Pilot (John Savard)
  Re: H.235 Keys from Passwords algorithm (David A Molnar)

----------------------------------------------------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: 512 bit number factored
Date: Fri, 10 Sep 1999 14:53:46 GMT

In article <[EMAIL PROTECTED]>,
  Dylan Thurston <[EMAIL PROTECTED]> wrote:
> Bob Silverman <[EMAIL PROTECTED]> writes:
>
> > I'd also like to address one more issue.  Everyone keeps harping on
> > the fact that computers are getting faster all the time.  Current
> > claims are 2x every 18 months.  Even if *this* can be sustained for
> > 20 years (and I don't know either way), I must point out that NFS
> > depends

<snip>

>
> This confuses me a little.  From earlier comments (about the
> impossibility of paging) I had understood that the access pattern for
> solving the matrix was essentially flat

The comments above were aimed at the *sieving* phase.  Faster
computers don't help sieving as much as they should.

Most primes in the factor base are large relative to the cache size.
Sieving with them generates cache misses: one needs to add a value
to a byte in memory which is not cache-resident, and the latency of
doing this becomes quite important.  Careful sieve interval partitioning
can alleviate this problem somewhat, but it can't eliminate it completely.

If you want to add 1 to locations x, x+p, x+2p, x+3p, etc., and
p > cache size, you generate cache misses, and this slows down the
CPU.
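
In code, the access pattern looks roughly like this (a hypothetical
Python sketch with made-up sizes, not taken from any real siever):

  # Sketch of the sieving update described above.  Interval length,
  # prime and log value are illustrative only.

  SIEVE_LEN = 1 << 24              # example sieve interval, in bytes
  sieve = bytearray(SIEVE_LEN)     # one byte per sieve location

  def sieve_prime(p, x, logp):
      # Add logp at locations x, x+p, x+2p, ... inside the interval.
      # When p is larger than the cache, successive updates land on
      # lines that are almost never resident, so each iteration pays
      # the full memory latency.
      while x < SIEVE_LEN:
          sieve[x] += logp         # read-modify-write on a cold line
          x += p

  # A factor-base prime much larger than a typical L1/L2 cache:
  sieve_prime(p=1299709, x=12345, logp=20)

A faster clock shrinks the arithmetic inside the loop, but not the
memory latency paid on each iteration, which is the point being made.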


> ; that it was impossible to
> predict where future references would be.  But now you ask for bigger
> caches, which are useful exactly when you do have locality of
> reference.  Can you explain this a little?

See above. Sieving does not have good locality of reference except
for the small primes.


>
> Also, is it really memory _latency_ that matters so much?

Yes, it is.



--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Looking for Completely-Free Strong Algorithms
Date: Fri, 10 Sep 1999 15:40:40 GMT

"Joseph Ashwood" <[EMAIL PROTECTED]> wrote, in part:

>I'm looking for royalty-free strong algorithms. I know that AES (when it's
>decided) will meet the criteria, but I need something fairly soon,

Several of the AES candidates are available royalty-free now, although
others are not.

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: John <[EMAIL PROTECTED]>
Subject: Re: some coder/hacker help please?
Date: Fri, 10 Sep 1999 10:22:44 -0700

How much stuff (KB/MB) do you have?  I could put it up on a site
I'm not using.  Let me know.

http://www.aasp.net/~speechfb

* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
The fastest and easiest way to search and participate in Usenet - Free!


------------------------------

From: Derek Bell <[EMAIL PROTECTED]>
Subject: Re: "NSA have no objections to AES finalists"
Date: 10 Sep 1999 17:34:33 +0100

pbboy <[EMAIL PROTECTED]> wrote:
: Area 51 is closed.  They moved to an undisclosed location.

        The reporter who wrote the article about that apparently took the wrong
route to the base. 

        Derek
-- 
Derek Bell  [EMAIL PROTECTED]                |   Socrates would have loved
WWW: http://www.maths.tcd.ie/~dbell/index.html|            usenet.
PGP: http://www.maths.tcd.ie/~dbell/key.asc   |    - [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: GnuPG 1.0 released
Date: Fri, 10 Sep 1999 11:39:03 -0600

In article <7ra4n1$[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
says...
> In article <[EMAIL PROTECTED]>,
> Jerry Coffin <[EMAIL PROTECTED]> wrote:
> >If, for example, you created a program that could 
> >decompress and view GIF files, this might be considered as 
> >contributory infringement, because it's of no real use unless used in 
> >some sort of overall system that can also create GIF files.
> 
> I can't make any sense of this.  Just about any web browser can
> decompress and view GIF files, but if that isn't direct infringement
> then I don't see how it can be contributory infringement.  The GIF
> files are created by totally separate programs run by totally separate
> people, and those GIF creation programs might have to be licensed by
> Unisys.  However, writing, distributing, or using unlicensed web
> browsers doesn't cause anyone to distribute unlicensed GIF creation
> programs.

Perhaps you should have read the rest of what I said: I'm quite 
certain I also talked about the requirement that there be NO 
substantial non-infringing use for the product.  That means the 
product in question would have to have NO useful purpose other than 
(in this case) viewing GIF files.  It's pretty clear to me that an 
average web browser has LOTS of purposes other than viewing GIF files, 
so contributory infringement would almost certainly NOT apply here.

I believe contributory infringement was invented primarily to keep 
people from pulling a legal maneuver to get around infringing patents.  
For example, assume you've got somebody who sells a program that 
allows you to create GIF files.  He might claim that BY ITSELF the 
program doesn't really do anything: it only infringes the patent when 
used in conjunction with a CPU, monitor, keyboard, disk drive, etc. ad 
nauseam.  What he sells is only a CD-ROM that contains some pits in 
particular places, therefore it can't infringe on anything because 
(without a computer to do the work) it doesn't do anything but sit 
there and look shiny.

Now, it's pretty obvious to most of us that if you sell a computer 
program, you intend it to be installed and run on a computer.  
Therefore, if I try the argument above, it only means I'm guilty of 
contributory infringement instead of direct infringement.  Since 
there's no difference in penalty between the two, I've accomplished 
nothing.

There's another sort of legalism this helps avoid as well: assume I 
write a program that infringes on a patent.  I know it infringes, so I 
break it up and sell it as two separate "products."   There's one 
catch though: I want to ensure I still get all the money for the whole 
thing, so I ensure that neither one is really useful by itself -- 
you've got to buy both before you've got anything useful.

Now, what I've really got is a single product that infringes the 
patent.  I'm selling it, however, as two separate products, neither of 
which infringes by itself, so I no longer have any single product that 
directly infringes the patent.

The law about contributory infringement prevents this legal nonsense 
as well.  If I try to take this route, instead of a single case of 
direct infringement I now have two cases of contributory infringement. 
Obviously I haven't gained anything.

To summarize: I believe the law about contributory infringement is 
meant primarily to prevent people from pulling legal maneuvers to get 
around infringing.  The requirement that there be no substantial 
non-infringing use before something counts as contributory infringement 
is there to separate legitimate non-infringing products from what are 
really infringing products hiding behind legal smoke-screens.

-- 
    Later,
    Jerry.

The Universe is a figment of its own imagination.

------------------------------

From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: Looking for an asymmetric system
Date: Fri, 10 Sep 1999 11:39:13 -0600

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
says...
> Hello !
> 
> 1/ I would like to know what is the strongest asymetric cryptosystem

Strongest for a given key size, or what exactly?  Right now, there are 
systems based on three fundamental problems: factoring integers, 
discrete logarithms and elliptic curve discrete logarithms.  With 
presently known algorithms, factoring is the easiest of these, while 
elliptic curve discrete logarithms are the most difficult, though 
the difference between factoring and discrete logarithms is _fairly_ 
small.  The difference between either of these and the ECDLP is 
considerable.

This, however, doesn't necessarily mean a lot except for the size of 
key required for security with a particular algorithm.  RSA is based 
on factoring; that being the easiest of the problems, it requires a key 
of somewhere between 512 and 1024 bits (or so) to remain secure for a 
reasonable length of time.

The Diffie-Hellman key exchange is based on discrete logarithms; the 
increase in difficulty of discrete logarithms is small enough that 
you're looking at nearly the same sizes of keys for similar levels of 
security.

ECDLP is sufficiently more difficult that ECC currently uses keys in 
roughly the same range as most symmetric algorithms -- in both cases, 
current sizes tend to run between 128 and 256 bits or so.  OTOH, 
symmetric ciphers usually offer relatively few sizes (e.g. 128, 192 
and 256 bit keys are all that are required for AES) while ECC often 
allows many more intermediate sizes (e.g. 168 bits).

OTOH, as was discussed in another thread fairly recently, it's at 
least theoretically possible that a new algorithm for factoring, 
discrete logs or ECDL could be invented at any time.  New algorithms 
for factoring have been invented fairly regularly for the last few 
decades since the problem became of wide interest.  In addition, I 
don't think anybody really knows how to use the number field sieve 
optimally yet, so some efficiency gains are likely even without 
an entirely new algorithm.

ECDL is probably the real wild card here: at the present time, it's 
the most difficult of these problems by a wide margin.  OTOH, it 
doesn't have the (literally) centuries of research that factoring 
does, so it may be the most open to large breakthroughs, which would 
mean key sizes might have to be increased dramatically for it to 
remain secure.

Note that I'm not saying this WILL happen, only that it has the 
shortest history, so of the three, it may be the most likely place for 
a large breakthrough to take place.  OTOH, there are probably more 
people working on factoring than on DL and ECDL put together -- 
factoring is conceptually a simple enough thing that nearly anybody 
who studies math knows at least a little about it.  ECDL (for example) 
is esoteric enough that comparatively few people even know exactly 
what it is, let alone study it deeply enough that they might possibly 
contribute a new insight (or a new algorithm).

-- 
    Later,
    Jerry.

The Universe is a figment of its own imagination.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: fun about FIPS74
Date: Fri, 10 Sep 1999 16:07:57 GMT

[EMAIL PROTECTED] (jerome) wrote, in part:

>in FIPS 74, about the DES implementation, you can find this in the 
>table of contents (at http://www.itl.nist.gov/fipspubs/fip74.htm):

>   5. IMPLEMENTATlON OF THE A1GOR1THM

>It is probable that this document was scanned and went through 
>a character recogniser... I found that funny for an organisation 
>feared partly because of its CPU power :)

I didn't know people were afraid of NIST because of its collection of
large computers... :)

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: some coder/hacker help please?
Date: Fri, 10 Sep 1999 16:46:25 GMT

you can get pb from me off my pop3 email

[EMAIL PROTECTED]


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Self decimated lfsr
Date: Fri, 10 Sep 1999 12:39:53 -0500

Cairus wrote:
> Thanks again for your answer. What you say is right; however, my
> hypothesis was that the output bit and the control bit do not coincide
> (by 'control bit' I mean the bit which determines the number of steps),
> whereas in your example they do, and this of course gives a big help to
> the analyst. But what if they do not?
> I mean: if the LFSR is L bits long, then at each step its content can be
> represented by L bits, let's say R(0), R(1), ..., R(L-1). For example
> you could use R(0) for the output and then look at R(3) to determine
> how many steps to advance before generating the new output bit.

Why not use 2 LFSR's?  One for the control and one for the output.
If the attacker has a clue that the same LFSR is used for both,
then I don't think it really matters what the delay is between the
control and output bits: it should be possible to create a mathematical
relationship between the singly clocked data and the control signal.
That's the point the original article is warning against; a
single clock gives the attacker information you don't want them
to have.  Even if there's a delay, there will still be some leakage.
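
For concreteness, here is a hypothetical Python sketch of the
two-register idea (toy lengths and tap positions chosen only for
illustration; a real design would need properly chosen primitive
polynomials):

  def lfsr_step(state, taps, nbits):
      # Fibonacci-style step: feedback bit is the XOR of the tapped bits.
      fb = 0
      for t in taps:
          fb ^= (state >> t) & 1
      return ((state << 1) | fb) & ((1 << nbits) - 1)

  CTRL_LEN, CTRL_TAPS = 17, (16, 13)           # control register (toy)
  OUT_LEN,  OUT_TAPS  = 19, (18, 17, 13, 12)   # output register (toy)

  def decimated_bits(ctrl, out, n):
      bits = []
      for _ in range(n):
          ctrl = lfsr_step(ctrl, CTRL_TAPS, CTRL_LEN)
          steps = 1 + (ctrl & 3)               # advance output LFSR 1..4 steps
          for _ in range(steps):
              out = lfsr_step(out, OUT_TAPS, OUT_LEN)
          bits.append(out & 1)
      return bits

  keystream = decimated_bits(ctrl=0x1ACE5, out=0x5EED1, n=64)

Because the stepping decisions come from a register the attacker never
sees directly, the output bits can't be related to the control bits the
way they could if one register were doing both jobs.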

Patience, persistence, truth,
Dr. mike

------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: H.235 Keys from Passwords algorithm
Date: Fri, 10 Sep 1999 12:53:02 -0500

Douglas Clowes wrote:
> 
> Section 10.3.2 of ITU-T H.235 states in part:
> 
> The encryption key is length N octets (as indicated by the AlgorithmID), and
> is formed as follows:
> - If password length  =  N, Key = password;
> - if password length  <  N, the key is padded with zeros;
> - if password length  >  N, the first N octets are assigned to the key,
> then the (N+M)th octet of the password is XOR'd to the (M mod N)th octet (for
> all octets beyond N) (i.e. all "extra" password octets are repeatedly folded
> back on the key by XORing).
> 
> is it just me, or is this less than secure for generating keys to be used in
> algorithms like RC2, DES, 3DES, MD5, SHA1?

SHA is a hash, so that doesn't matter.  But I agree with you that it
could cause problems for passwords whose octets cancel when folded
back mod N.  What they should do is hash the password and use N
octets from that as the key.  That makes longer passwords far more
useful than the above algorithm.
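
To see the difference, here is a hypothetical Python sketch of the
folding rule as quoted (the standard's indexing may differ by one)
next to the hash-then-truncate alternative:

  import hashlib

  def h235_fold_key(password: bytes, n: int) -> bytes:
      # First N octets become the key (zero-padded if short); every
      # further octet is XORed back onto position (M mod N).
      key = bytearray(password[:n].ljust(n, b'\x00'))
      for m, octet in enumerate(password[n:]):
          key[m % n] ^= octet
      return bytes(key)

  def hashed_key(password: bytes, n: int) -> bytes:
      # Suggested alternative: hash the whole password, take N octets.
      return hashlib.sha1(password).digest()[:n]

  # Two distinct passwords that fold to the *same* key, but hash to
  # different ones:
  print(h235_fold_key(b"abcdefgh", 4).hex(), h235_fold_key(b"efghabcd", 4).hex())
  print(hashed_key(b"abcdefgh", 4).hex(), hashed_key(b"efghabcd", 4).hex())

Folding also means any password octets that repeat at the same position
mod N cancel out entirely, which is the duplicate-letter problem above.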

Patience, persistence, truth,
Dr. mike

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Double encryption is patented (mabey)
Date: Fri, 10 Sep 1999 20:00:18 +0200

Mok-Kong Shen wrote:
> 
> John Savard wrote:
> >
> 
> > I got the impression that:
> >
> > - the invention would work as well if one didn't do the first
> > encryption with CBC and a secret key, but did an unkeyed hash instead,
> > and
> >
> > - a session key applicable to the remainder of the file can be
> > produced just as well as an IV using the same length-preserving
> > technique.
> 
> A naive question: Length preserving is certainly fine, but is it
> of particularly high desirability for a sufficiently large file
> containing a large number of blocks? (Length preserving is the
> main point of the patent, isn't it?)


How about a simple variant like this:

1. XOR blocks 1-N to obtain X. Encrypt X with algorithm E1 and key
   K1 to obtain Y.

2. Use Y as IV and algorithm E2 and key K2 and CBC (or its variants)
   to encrypt blocks 1-(N-1).

3. Send Y (or encrypted Y) and the ciphertext of 2.
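
A hypothetical Python sketch of this data flow (E1/E2 are passed in as
plain functions; the toy stand-in at the end is only there so the sketch
runs, it is not a cipher):

  BLOCK = 16                       # assumed block size in octets

  def xor_blocks(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  def variant_encrypt(blocks, E1, K1, E2, K2):
      # 1. XOR blocks 1..N together, encrypt with E1/K1 to get Y.
      x = bytes(BLOCK)
      for b in blocks:
          x = xor_blocks(x, b)
      y = E1(K1, x)
      # 2. Use Y as IV; CBC-encrypt blocks 1..N-1 with E2/K2.
      ct, prev = [], y
      for b in blocks[:-1]:
          c = E2(K2, xor_blocks(b, prev))
          ct.append(c)
          prev = c
      # 3. Transmit Y (or an encryption of Y) plus the CBC ciphertext.
      return y, ct

  toy_E = lambda k, x: xor_blocks(k, x)      # placeholder, NOT a cipher
  y, ct = variant_encrypt([bytes([i]) * BLOCK for i in range(1, 5)],
                          toy_E, bytes(BLOCK), toy_E, bytes(BLOCK))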

Questions:

a. Does this infringe the patent?

b. Is it very much weaker than the patented scheme?

c. Would it constitute an inherent weakness if E1=E2 and K1=K2 
   (besides the fact that the effective key is shorter)?

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Double encryption is patented (mabey)
Date: Fri, 10 Sep 1999 15:37:14 GMT

Anonymous <[EMAIL PROTECTED]> wrote, in part:

>The invention in question is a method to avoid using an IV in the classic
>way. (by concatenating it with the text) The invention uses CBC, but allows
>the ciphertext to be the same length as the plaintext, and be resistant
>against replay attacks. (as is the case if using a classic IV) BTW, it uses
>*two* secret keys. And it has nothing to do with session keys. A session key
>would mean that it is *not* length preserving.

I got the impression that:

- the invention would work as well if one didn't do the first
encryption with CBC and a secret key, but did an unkeyed hash instead,
and

- a session key applicable to the remainder of the file can be
produced just as well as an IV using the same length-preserving
technique.

These variations aren't covered in the claims, but my intent was
always to generate the IV along with the session key to be completely
length preserving (then RSA would cause a slight increase in length).

While CBC is designed to permit error recovery in such a way that
keeping the IV secret provides no benefit, in other modes the IV can
be looked upon as part of the key, but a part that is transmitted
openly to allow it to be changed with each message. (Historically, the
IV is related to starting rotor positions, which often were sent
encrypted.) So I don't think of the distinction between key and IV as
fundamental.

If indeed what I'm thinking of misses the patent, it would be because
of issues related to the precise scope of the claims, which I think a
patent lawyer would be needed to resolve.

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: Armin Ollig <[EMAIL PROTECTED]>
Crossposted-To: comp.security.unix
Subject: Re: unix clippers that implement strong crypto.
Date: Fri, 10 Sep 1999 18:31:31 +0200

Martin Pauley wrote:
> 
> Armin Ollig <[EMAIL PROTECTED]> wrote:
> > AFAIK it is better to first encrypt and then compress the data.
> > This means more cpu cycles but better security, because the compression
> > may leave a characteristic pattern that may be useful for
> > cryptanalysis. ?!
> 
> I compress first and then encrypt, for two reasons:
> 1. an encrypted file does not compress well; quite often an encrypted
>    file will increase in size when you try to compress it.

Agreed, but we do not care about increased size or wasted CPU cycles.
We have plenty of CPUs :-)

> 2. compressed files contain less of a pattern than most plain files.
>    This is by design, since most (all?) compression algorithms work by
>    detecting and removing patterns.  In contrast, plain files tend to
>    have well-defined patterns, which is why they compress.

Let's discuss that in detail.
Text files, for example, have patterns that are very easy to spot,
while binary files can be random or have patterns too. That depends on
the structure of the content (well, obviously :-).

So IMHO the problem with compressing first is that the cryptanalyst
knows exactly what lies behind the encrypted stream. Knowing the first
few bytes behind the encryption is a lot already. 

To make that clear, here's an example of what I mean:

1. Make three 512-byte files: two of random data, one of text
 dd if=/dev/urandom of=random1 count=1 bs=512
 dd if=/dev/urandom of=random2 count=1 bs=512
 dd if=/etc/passwd  of=text1   count=1 bs=512

2. Compress all of them
 gzip random* text1

3. Take the first few (4) bytes of each compressed file
 dd if=random1.gz of=firstbytes1 bs=4 count=1
 dd if=random2.gz of=firstbytes2 bs=4 count=1
 dd if=text1.gz   of=firstbytes3 bs=4 count=1

4. Compare:
$ diff firstbytes1  firstbytes2 
$ diff firstbytes1  firstbytes3


If I use gzip first, the cryptanalyst knows exactly the first 4 bytes
of my cleartext stream. 4 bytes are a lot of bits already. And the gzip
pattern (I'm sure it *does* make one...) is constant across the complete
data stream, whereas the patterns of different files may vary every X
bytes.
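
The point can also be checked programmatically; a hypothetical Python
equivalent of the dd/gzip experiment above:

  import gzip, os

  samples = [os.urandom(512), os.urandom(512),
             b"root:x:0:0:root:/root:/bin/sh\n" * 17]   # stand-in for text1
  for data in samples:
      print(gzip.compress(data)[:4].hex())

  # Typically prints "1f8b0800" three times: the gzip magic number,
  # the deflate method byte and the flag byte are fixed, whatever the
  # input, so a compress-then-encrypt stream starts with known plaintext.

One obvious mitigation is to strip the fixed header before encrypting,
though that only removes the most obvious known plaintext.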

> I'm not an encryption expert, but I met one once! :-)

...needless to say: nor am I :-)

best regards,
--Armin

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: some information theory
Date: Fri, 10 Sep 1999 16:10:40 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>Anti-Spam wrote:
>> 
>> Adaptive encoding for compression usually begins with an assumed model
>> for the probabilities of occurrence for the symbol set (the "alphabet"),
>> based on some sample of text, data, or file using those symbols. Each of
>> the symbols in the "alphabet" gets an a priori probability.  We can use
>> this initial probability model to define an initial encoding for the
>> symbols.
>> 
>> As we process the data/file we update the frequencies for the symbols as
>> they appear. The encoding for a given symbol changes "rapidly" early
>> on.  We encounter subsets of the "alphabet" as we progress through the
>> data/file.  Now here's what charms me about this problem.
>> 
>> What is the spatial "rate of change" of probability of occurrence for a
>> symbol as I process the uncompressed data/file from its beginning to its
>> end?  At some point in the data/file binary stream the statistics
>> should/may (a conjecture - no proof yet) tend to steady-state values and
>> thus the encoding for symbols in the compressed data/file will not
>> change over time.
>> Autocorrelation testing of the compressed data/file may/should identify
>> candidate binary values for the encoded symbols.
>..............
>
>> If the encoding reaches a "steady state" then we should expect to see
>> the same codes for symbols from that point on in the compressed
>> data/file.  We would need to guess at the value of k (how many unique
>> symbols there are) and by the assumption that frequencies are not
>> changing or changing "slowly" (can't define that yet), and assuming the
>> encoding is optimal, use the 5 points above as a guide to sort through
>> candidate code strings.
>> 
>> I would look for autocorrelations in the later part of the data/file
>> with binary sequences of length Ln - 1, constrained by a guess for the
>> number of symbols k.  If I knew something about the symbols as they
>> appeared in the uncompressed data/file, I'd start with that. If it's a
>> short text message this would prove fruitless, since many of the
>> characters in the alphabet may not occur in the uncompressed text.  I
>> would think (conjecture) that text longer than the unicity distance for
>> English would mean I could just assume all letters of the alphabet did
>> appear, and use static frequency-of-occurrence models for the letters to
>> establish which letter should have the shortest, the next shortest, the
>> next, etc. bit-length encoding.
>
>
>As said in a previous follow-up, the codes are of non-constant length
>and hence it is necessary for the analyst to identify the boundaries 
>between the symbols in the binary string if any frequency count
>is to be done. While additional arguments can certainly be offered
>(by other writers), I suppose that this is already sufficient to 
>establish that compression does render the job of the analyst more 
>difficult. On the other hand, to be conservative, one should 
>in my humble opinion regard compression and encryption as orthogonal 
>(this is the common standpoint, as far as I am aware), i.e. one 
>should assess the security of one's system based on the strength of 
>the proper encryption methods employed and consider the benefits 
>arising from using the diverse well-known compression methods (which 
>have been designed by their authors without considering cryptology) 
>merely as some bonus.
>
>M. K. Shen

      One should always use the best encryption system one can
for whatever one is doing. But it is foolish to think that, just because
you have a warm fuzzy feeling about the encryption method used, you
should not think about what adding compression does to the overall
security. We have talked long about this, and I think you just assume
that if the encryption method is good you don't have to consider its
interaction with the compression. The problem is that no one can say
for certain that an encryption method is safe. They can say it appears
safe, but that is about it. One should use compression that does not
add information to the message that an attacker could use. One way to
guarantee no information is added is to use a compression/decompression
method that is "one to one". As I have stated many times, it is easy to
check for this property. And I have at my site the ONLY compression
method that uses this property. But I am looking for others. If you know
of ANY, let me know. I am working on doing this with arithmetic coding,
but I am not there yet, and I am not done with the mods to make the
current adaptive Huffman even more secure as a first pass
before encryption is used.
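
The property amounts to a round-trip test: decompress(compress(x)) == x
for every x, and compress(decompress(y)) == y for every byte string y
that decompresses at all. A hypothetical Python check against an
ordinary compressor (zlib here, just as an example) shows why standard
schemes fail the second half:

  import os, zlib

  def is_valid_compressed_output(y: bytes) -> bool:
      # One-to-one means y must be exactly what the compressor would
      # produce for its own decompression; most byte strings are not.
      try:
          return zlib.compress(zlib.decompress(y)) == y
      except zlib.error:
          return False                      # not even a valid stream

  print(is_valid_compressed_output(zlib.compress(b"hello world")))  # True
  print(is_valid_compressed_output(os.urandom(64)))                 # almost always False

An attacker who knows the compressor can use that second failure as a
wrong-key detector, which is exactly the added information being warned
about here.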




David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Looking for an asymmetric system
Date: Fri, 10 Sep 1999 16:40:49 GMT

In article <[EMAIL PROTECTED]>,
  Emmanuel Drouet <[EMAIL PROTECTED]> wrote:
> > > 1/ I would like to know what is the strongest asymetric cryptosystem
>
> > For what purpose?
>
> [Manu]
> For the transfert of multimedia data.

I mean, how secure do you need it to be?  Who is your enemy?  Personally,
I would suggest researching a stream cipher and a key exchange (i.e. DH).

Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: Looking for an asymmetric system
Date: 10 Sep 1999 18:32:52 GMT

The best methods to break IF, DL, or EC methods are complex.  Just because one
can understand what it means to factor a number does not mean much in terms of
what it means to be able to factor a large number.

My personal take on this is that it would take an incredible breakthrough to
reduce the ECDLP to cube-root effort, rather than the current square root,
while I do not see any similarly fundamental reason why the current
complexity of GNFS for solving the IFP/DLP cannot be reduced further.  Only
time will tell.
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Linux on the Pilot
Date: Fri, 10 Sep 1999 16:06:59 GMT

I haven't located it yet, but a new magazine about Linux noted that
there's a version of it that runs on a Pilot.

Which is relevant to a thread on the coderpunks mailing list. Although
I'm going to see if I can't come up with something cryptosecure that
will run on, say, an SR-56 (an early TI programmable with 100 steps).

John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: H.235 Keys from Passwords algorithm
Date: 10 Sep 1999 18:35:29 GMT

Medical Electronics Lab <[EMAIL PROTECTED]> wrote:
> Douglas Clowes wrote:
>> 
>> Section 10.3.2 of ITU-T H.235 states in part:
>> 
[a hash function which doesn't look collision resistant]

When was this standard written?



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
