Cryptography-Digest Digest #895, Volume #10      Thu, 13 Jan 00 08:13:02 EST

Contents:
  Re: Cryptography in Tom Clancy (Terje Elde)
  Re: Intel 82802 Random Number Generator (Guy Macon)
  Re: Wagner et Al. (Guy Macon)
  Word Counter (NOMGON)
  Re: LSFR ("Michael Darling")
  Re: Intel 82802 Random Number Generator (Scott Nelson)
  Re: LSFR (Scott Nelson)
  Re: LSFR ("Michael Darling")
  Re: My background - Markku Juhani Saarelainen (HOOVER)
  Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why? (James Redfern)
  Re: LSFR ("Michael Darling")
  Re: Is SSL really this slow? (Eric Young)
  Re: lfsr - polynom ([EMAIL PROTECTED])
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: Fun With Playing Cards (Dave Hazelwood)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terje Elde)
Subject: Re: Cryptography in Tom Clancy
Date: Thu, 13 Jan 2000 08:39:30 GMT

In article <[EMAIL PROTECTED]>, Johnny Bravo wrote:
>  Hey, if you were in a city that was about to be nuked in 20 minutes
>and if you decrypted a 128 bit message it saves your life... :)

I would not even think about bothering with it. The chances of
successfully decrypting it are about the same as getting hit by
lightning 42535295865117307932921825 times at the same time you're
falling out of bed.
(note, I didn't factor in things like limited lifespan, because it REALLY
doesn't matter)

Terje Elde
-- 

Hi! I'm a .signature virus! Copy me into your ~/.signature to help me
spread!


------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: Intel 82802 Random Number Generator
Date: 13 Jan 2000 03:41:09 EST

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Douglas A. Gwyn) wrote:

>Until you figure this one out, you shouldn't be making *any*
>statements about statistics and probability.

I wasn't aware that I had.  I am an electronics engineer, and an admittedly
clueless newbie in the area of crypto.  I know enough statistics to
do such trivial tasks as calculating MTBF or doing Monte Carlo component
tolerance calculations, but that's about it.


------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: Wagner et Al.
Date: 13 Jan 2000 03:50:17 EST


Trevor Jackson, III wrote:
>
>Guy Macon wrote:
>
>> Alas, for the kind of protection against trojans I was hoping for,
>> the executables that make up NT should run from a CD-ROM (so they
>> cannot be modified by any program).  That I have not been able to do.
>
>It's not easy to get NT to run from CD, but it is not hard to use a CD
>as a master image at boot time.  If you want to harden a computer used
>by potentially hostile guests have it copy a partition image from the
>CD to the disk upon powerup.  It's a sledgehammer style of solution,
>but it allows you to avoid all reliance upon the security of a
>vulnerable disk.

That's a good idea that I hadn't thought of.  I think that I could
get the boot code and the zfer code to run off of the CD at boot
and not reference anything on the hard disk until after the image
load.  Correction; almost anything on the hard disk.  I would still
need a flag to tell me to boot from the hard disk instead of running
the image blaster program on the CD again and again. 


------------------------------

From: [EMAIL PROTECTED] (NOMGON)
Subject: Word Counter
Date: 13 Jan 2000 08:54:29 GMT

I am looking for word counting software, a program which will not only count
the number of words in a document, but actually physically number them. Does
anyone know of such software?
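Lacking a pointer to a packaged product, the task itself is small enough to sketch; a minimal Python filter (the bracketed numbering format is just one arbitrary choice):

```python
import re

def number_words(text):
    """Return `text` with every word prefixed by its ordinal, plus the total count."""
    count = 0
    def tag(match):
        nonlocal count
        count += 1
        return f"[{count}]{match.group(0)}"
    # Treat any run of non-whitespace as one word.
    return re.sub(r"\S+", tag, text), count

# number_words("the quick brown fox")
# -> ("[1]the [2]quick [3]brown [4]fox", 4)
```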

[EMAIL PROTECTED]


------------------------------

From: "Michael Darling" <[EMAIL PROTECTED]>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 09:06:51 -0000

I want the time given a particular output.


Trevor Jackson, III <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> You still have not described the direction you want to connect.  Do you
> want the output for a particular time or the time for a particular
> output?  These are drastically different questions.




------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: Intel 82802 Random Number Generator
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jan 2000 09:30:34 GMT

On 12 Jan 2000 16:44:33 EST, [EMAIL PROTECTED] (Guy Macon) wrote:
[edited]
>(Discussing paper at ftp://download.intel.com/design/security/rng/CRIwp.pdf )
>
>Scott Nelson wrote:
>>On the major annoyance end, the "sameness" correlation could 
>>be specific to the tested device.  Some devices might
>>have a "not the same" correlation, and only reject 1/3 of the
>>pairs, instead of passing 1/3.  These devices would produce bits 
>>at twice the rate, but the bits would have a bias of about .6
>>Using the bits directly would reduce keyspace dramatically.
>>
>>I note the CRI claims that an oscillating state (which would
>>produce an output of 10101010...) is theoretically possible,
>>but unlikely.  I wonder how likely a semi-broken state of
>>_mostly_ 10101... is.
>
>I thought that they said that such a state would pass the corrector
>without offering an opinion as to the likelihood, which is a question
>for solid state physicists, not crypto experts.
>
>In the larger sense, if your RNG cannot produce very long strings
>of 1111..., 0000.... 101010.... etc. then it is by definition biased.
>A true RNG just might output the complete works of Mark Twain someday.
>
They were talking about failure states.  They thought
the idea of failing in the 1010101 mode unlikely.  I don't.
The FIPS poker test would catch this failure mode, 
so it's really an academic question.

In the beginning, the device uses the thermal noise of 
a resistor to vary a gating pulse on a high frequency
oscillator (45 MHz, gated by 450 kHz).
The high frequency oscillator can be thought of as the LSB 
of a counter.  We're told that it takes an average of
6 bits to get one from the Von Neumann rejector.
From that, I conclude that in 2/3 of the cases, 
the LSB is the same.  The most reasonable model I can
construct for that result is that successive counts are
usually the same, even, value, but sometimes a different,
odd one.  For example, it might vary between 100 and 101
ticks.  Now imagine that the resistor is less than 1% 
different, and the numbers are between 99 and 100, with
99 being more likely.  Suddenly the device drops from
over .9 bits of entropy per output bit, to under .1 bits
per output bit.  
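The arithmetic behind those figures is easy to check directly. A quick sketch, using the 2/3 "sameness" and an assumed 99%-vs-1% split for the broken case:

```python
from math import log2

def binary_entropy(p):
    """Shannon entropy, in bits, of a bit that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Von Neumann rejection keeps a pair only when its two bits differ, so
# with P(same) = 2/3 each output bit costs 2 / (1 - 2/3) = 6 raw bits.
p_same = 2 / 3
raw_bits_per_output = 2 / (1 - p_same)   # 6.0, matching the paper's figure

# A raw bit biased 1/3-vs-2/3 still carries H(1/3) ~ 0.918 bits, but if
# one count dominates 99% of the time the entropy collapses to H(0.99).
healthy = binary_entropy(1 / 3)   # ~ 0.918 bits per raw bit
broken = binary_entropy(0.99)     # ~ 0.081 bits per raw bit
```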

Note that we're also told that the low frequency varies 
by 10-20 clocks.  Either it doesn't drift much between
successive counts, the figure of 6 raw bits per 1
processed bit is wrong, or there are some pretty
severe flaws in the analysis.  The more I look into
this paper, the less confidence I have.


>>
>>>I was glad to see that Intel is *not* using the thermal noise of
>>>a resistor as a source for the raw bits, but is rather using the
>>>difference between two such resistors.  Matching of resistor
>>>characteristics on a single die tends to be very good, and external
>>>electromagnetic fields are (imperfectly) rejected.  The small
>>>physical size of an onchip solution also helps by making the
>>>effective loop area of the internal parts small, which makes
>>>them poor antennas.
>>>
>>Well, it looks good on paper.
>>It's probably even a good trade-off.  However, 
>>theoretically there's more entropy without it.
>>It does improve the noise to signal ratio, 
>>but at the expense of reducing the absolute noise.
>
>I don't understand the reasoning behind this.  Could you
>explain further?
>
I'll try.
First, think of the voltage on the resistor.
Now it varies because of a lot of things:
temperature, humidity, electromagnetic fields,
cosmic rays, quantum effects, etc.
Some of these effects are tiny, some large.  
Some of these effects also affect the resistor 
next to it, some don't.
Some of these things can be measured by the 
enemy, some can't.
For most enemies, the electromagnetic fields
aren't measurable.  So rejecting them means
throwing away that entropy.  Even if they are 
measurable, they aren't measurable to infinite
precision.  The precision used to reject this
noise is far better than any reasonable enemy could achieve.

That's the theory.  But that theory assumes that 
all the noise on the device is measurable.
In practice this is extremely difficult, so
as I said, a reasonable trade off is to reduce
the electromagnetic noise so that the other,
weaker signals can be amplified without clipping.
Others have noted that it also helps against 
active enemies who might, for example, beam
signals at the device in an attempt to 
influence it.

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: LSFR
Reply-To: [EMAIL PROTECTED]
Date: Thu, 13 Jan 2000 09:52:29 GMT

On Wed, 12 Jan 2000 "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:

>Scott Nelson wrote:
>> It's fairly easy to calculate what the state will be after N shifts.
>> Going the other way is less obvious, but clearly possible.
>> If nothing else, you can construct a massive table and get it
>> in a single lookup.  You can trade off calculation size
>> for table size - store every 2^24th entry and check the
>> next 2^24 states against the table.
>
>I don't think that helps.  Given step number X = A*2^24+B there is no way to
>determine A, so there is no way to decide which block of 2^24 states to scan.
>Only if you have the step and want the state does such an index/dictionary help.

Perhaps an example would help.
Consider the 8-bit LFSR 0,2,3,5,8.
Here are all the states:

01 96 4b b3 cf f1 ee 77 ad c0 60 30 18 0c 06 03
97 dd f8 7c 3e 1f 99 da 6d a0 50 28 14 0a 05 94
4a 25 84 42 21 86 43 b7 cd f0 78 3c 1e 0f 91 de
6f a1 c6 63 a7 c5 f4 7a 3d 88 44 22 11 9e 4f b1
ce 67 a5 c4 62 31 8e 47 b5 cc 66 33 8f d1 fe 7f
a9 c2 61 a6 53 bf c9 f2 79 aa 55 bc 5e 2f 81 d6
6b a3 c7 f5 ec 76 3b 8b d3 ff e9 e2 71 ae 57 bd
c8 64 32 19 9a 4d b0 58 2c 16 0b 93 df f9 ea 75
ac 56 2b 83 d7 fd e8 74 3a 1d 98 4c 26 13 9f d9
fa 7d a8 54 2a 15 9c 4e 27 85 d4 6a 35 8c 46 23
87 d5 fc 7e 3f 89 d2 69 a2 51 be 5f b9 ca 65 a4
52 29 82 41 b6 5b bb cb f3 ef e1 e6 73 af c1 f6
7b ab c3 f7 ed e0 70 38 1c 0e 07 95 dc 6e 37 8d
d0 68 34 1a 0d 90 48 24 12 09 92 49 b2 59 ba 5d
b8 5c 2e 17 9d d8 6c 36 1b 9b db fb eb e3 e7 e5
e4 72 39 8a 45 b4 5a 2d 80 40 20 10 08 04 02 

Suppose we only know every sixteenth value 
(the left column).
We arrange these in some convenient fashion
to make searching easy.

01:00, 4a:20, 52:b0, 6b:60, 6f:30, 7b:c0, 87:a0, 97:10
a9:50, ac:80, b8:e0, c8:70, ce:40, d0:d0, e4:f0, fa:90

We're given a random value like f2.

First we check the values that we know and
see if any of them is f2.  Nope.
Next, we iterate the value once to 79.
Still no match.  79 becomes aa, then 55, 
then bc, 5e, 2f, 81, d6, and then 6b.
We check each of these values and finally with
6b we find a match.  The 96th (0x60) value is 6b,
so f2 must be 96-9, or the 87th value.
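That walk-through takes only a few lines of code. A sketch, assuming the listing above comes from the Galois-configuration step s -> (s >> 1) ^ (0x96 if s & 1 else 0), which matches the transitions shown (01 -> 96 -> 4b -> b3 ... and f2 -> 79 -> aa):

```python
def step(s):
    """Advance the 8-bit Galois LFSR (feedback mask 0x96) one state."""
    return (s >> 1) ^ (0x96 if s & 1 else 0)

# Build the sparse table: every sixteenth state, mapped state -> index.
state, table = 0x01, {}
for i in range(255):
    if i % 16 == 0:
        table[state] = i
    state = step(state)

def position(v):
    """Index of state v: iterate at most 16 steps until a table entry hits."""
    for n in range(16):
        if v in table:
            return (table[v] - n) % 255
        v = step(v)
    raise ValueError("not a state of this LFSR")

# position(0xf2) iterates nine steps, lands on 6b (index 0x60 = 96),
# and returns 96 - 9 = 87, matching the hand computation above.
```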

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: "Michael Darling" <[EMAIL PROTECTED]>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 10:12:35 -0000

Can anyone point me to a link describing algorithms that solve the 2^49
LFSR?  I've been trying to find some.  Is there any other reference
material which has details of these algorithms?
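For reference, the standard tool here is the Berlekamp-Massey algorithm, which recovers the shortest LFSR (length and feedback polynomial) from 2n consecutive output bits of an n-bit register. A textbook sketch over GF(2), not tuned for a 49-bit register:

```python
def berlekamp_massey(bits):
    """Return (L, c): L is the length of the shortest LFSR generating
    `bits`, and c its connection polynomial c[0..L], low degree first
    (c[0] is always 1)."""
    n = len(bits)
    c, b = [1] + [0] * n, [1] + [0] * n   # current / previous polynomial
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i correctly?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                             # mispredicted: fix c using b
            t = c[:]
            shift = i - m
            for j in range(len(b) - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L, c[: L + 1]

# Example: 20 bits from the 4-bit LFSR s[i] = s[i-1] ^ s[i-4] (x^4 + x + 1).
seq = [1, 0, 0, 0]
for i in range(4, 20):
    seq.append(seq[i - 1] ^ seq[i - 4])
# berlekamp_massey(seq) -> (4, [1, 1, 0, 0, 1]), i.e. 1 + x + x^4
```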





------------------------------

From: HOOVER <[EMAIL PROTECTED]>
Crossposted-To: alt.politics.org.cia
Subject: Re: My background - Markku Juhani Saarelainen
Date: Thu, 13 Jan 2000 10:28:58 GMT

Damn, the names are very close aren't they?  Did you change it when the
KGB/SVR filtered you into the country?

Markku-Juhani O. Saarinen wrote:

> In sci.crypt Markku J. Saarelainen <[EMAIL PROTECTED]> wrote:
> > 1. Born in 1967 in Varkaus, Finland
> > 2. Educated in Finland, U.S.A. and the USSR
> > 3. Political beliefs: no political beliefs
> > 4. Major Achievements:
> (..)
>
> I would like to make a public statement: I do not have ANYTHING to do
> with Mr. Markku J. Saarelainen (despite similar names).
>
> PLEASE do not confuse me with this wannabe-spy (whatever) person.
>
> - mj
>
> Markku-Juhani O. Saarinen <[EMAIL PROTECTED]>  University of Jyväskylä, Finland




------------------------------

From: James Redfern <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,alt.security.pgp
Subject: Re: Why is EDI dead?  Is S/MIME 'safe'?  Who and why?
Date: Thu, 13 Jan 2000 11:04:43 +0000
Reply-To: James Redfern <redfern[AT]privacyx[DOT]com>

On Wed, 12 Jan 2000 14:46:03 -0500, "Richard A. Schulman"
<[EMAIL PROTECTED]> wrote:

| I'm not a mind reader. He's your friend -- why don't you ask him?

Well I wanted to get a broader input from people other than him.  And thanks
to you I did.

| > Well, bless her heart.  But does she know what sort of companies
| > they are and why they would want to do so, that's the question, not
| > whether they can or not.
| 
| I was just using her delightful phrase (= "It doesn't make any
| difference.") 

And I was just pulling your/her leg.  Many thanks for the information and your
take on EDI/XML.  I was kind of hoping to find some sort of 'killer app', like
'e-mail' is to the Internet, but which would demand high volume remote output
of documents.  I can see major e-commerce sites evolving to a more structured
global infrastructure and needing something like this, and maybe XML would be
the transport, but from the 'research' I have been able to do, I still don't
get a sense of large numbers involved, like with e-mail and even the original
hopes for EDI.

JR.


-- 
James Redfern <[EMAIL PROTECTED]> The Redfern Organization
PGP key ID 0x8244C43A from <mailto:[EMAIL PROTECTED]?subject=0x8244C43A>
...ActiveNames delivers my undeliverable mail at <www.ActiveNames.com>

------------------------------

From: "Michael Darling" <[EMAIL PROTECTED]>
Subject: Re: LSFR
Date: Thu, 13 Jan 2000 11:12:31 -0000

Because electronically it is very simple and very quick - which we need.

> Why are you using an LFSR rather than a counter of some type?  If you
> use a black counter, one that toggles 1/2 the bits on each increment,
> you'll see the same degree of volatility as a good LFSR implementation.
>
>



------------------------------

From: Eric Young <[EMAIL PROTECTED]>
Subject: Re: Is SSL really this slow?
Date: Thu, 13 Jan 2000 21:40:20 +1000

Greg wrote:
> In the process of integrating SSL into some software that uses
> sockets, I was surprised to see more than a 10 fold decrease in
> speed.  At this time, my testing is allowing all traffic to be
> encrypted.  Most of it does not need to be, but it is impossible
> for our software to distinguish between standard marshaling packet
> information and any confidential data that is embedded in the
> packets, since we are approaching this integration at a low level.
> 
> So my question is this: If I were to transmit 250k across a wire
> and it took about one second, is it reasonable to assume that
> SSL can slow this transmission down to require 10 seconds?  Or
> is this too slow that I am most likely doing something wrong?

There is definitely something wrong.  If you are noticing your
machine getting its CPU thrashed, you are probably doing something
like sending lots of small SSL messages.

Some SSL implementations send an SSL packet with each call to the 
SSL_write function.  The encryption of 1000 8-byte packets is a similar
cost to encrypting one 8000-byte record.  The MAC is not.
When using RC4, on most CPUs, the message checksum (MAC) is more
expensive than the encryption.  For small messages, e.g. on a Pentium II-350:
type              8 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
md5               4386.43k    22556.08k    41630.81k    52626.15k    57733.02k
hmac(md5)         1346.23k     9053.11k    24628.91k    43307.22k    55924.05k
This means that if you are generating message digests, one per 8 bytes,
your throughput will be 4400k/s.  If using 8k blocks, 52600k/s.
If you used 4 bytes, 2200k/s; 2 bytes, 1100k/s.  You get the idea.

Anyway, lots of small writes kill SSLv3, and do much worse on TLS, since it
uses the HMAC construct, which as you can see above is much more expensive :-).

These numbers indicate that performance should not be too bad, but depending on
the CPU and network, it does make a difference.
Another issue can be caused by the TCP protocol, which normally hates multiple
packets being sent in one direction without messages in the other direction.
Depending on the implementation, time-outs slow things down a lot.

So, anyway, if small messages are your problem, you just need to insert a
'buffering' layer above your SSL_write type function and performance should
greatly improve.
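Such a buffering layer can be very simple: collect the small writes and hand the SSL layer one large record. A sketch in Python, where `ssl_write` is a stand-in for whatever SSL_write-style callable the library exposes:

```python
class BufferedWriter:
    """Coalesce many small writes into one large SSL record."""

    def __init__(self, ssl_write, limit=8192):
        self.ssl_write = ssl_write   # callable taking one bytes argument
        self.limit = limit           # flush threshold, in bytes
        self.buf = bytearray()

    def write(self, data):
        self.buf += data
        if len(self.buf) >= self.limit:
            self.flush()

    def flush(self):
        # One large record instead of many tiny ones: one MAC, one header.
        if self.buf:
            self.ssl_write(bytes(self.buf))
            self.buf.clear()

# 1000 eight-byte writes become a single 8000-byte record:
records = []
w = BufferedWriter(records.append)
for _ in range(1000):
    w.write(b"8bytes!!")
w.flush()
```

Remember to flush before any point where the peer is waiting on the data, or the buffering itself will introduce stalls.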

eric (who knows lots of ways to make SSL run slowly, and also how to speed it
up :-)

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: lfsr - polynom
Date: Thu, 13 Jan 2000 11:55:47 GMT


> I think you've misinterpreted Rueppel. This
> statement is correct. Note, however, that these
> other books are not recommending that you actually
> use the output of the shift registers directly...
> just that primitive polynomials make the best
> building blocks.

You are right.
The components of a stream cipher system may be LFSRs.  The polynomials
of these LFSRs have to be chosen as primitive polynomials; using
polynomials without full period makes no sense.

Rueppel's point (linear complexity close to the period length) probably
concerns the output sequence of a stream cipher system, not the
component sequences.

gransche


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Thu, 13 Jan 2000 13:52:10 +0100

Tim Tyler wrote:
> 
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

> : My 'No' was intended to negate your 'no longer portable', not
> : 'non-deterministic'. Tell me in what sense the software is not
> : portable?
> 
> If you have no convenient source of genuine randomness, it won't work.
> Not every system comes with a good hardware source of random numbers.
> If you use something at all predictable, you introduce possible problems.

I don't understand what 'possible problems' you would have. 
In fact, I don't need 'true randomness', nor even 
'pseudo-randomness', only 'non-constancy'. In particular, periodicity
will do perfectly well for my proposal. Let's take the case where 
the plaintext (true one, or the wrong one because of wrong 
decrypting key) is such that there need to be 2 filling bits. 
Suppose the software is such that on the first use it emits 00, 
the second time 01, the third time 10, the fourth time 11, the 
fifth time 00, etc.  Now suppose what the analyst has after 
decryption is a version with filling bits 00.  He decompresses it
to the (presumed) plaintext and compresses that back again.  There 
are four different possibilities for the result.  But in NO case can 
he obtain any information, because he knows that any difference
is due to the 'idiosyncrasies' of the compression software and
is not related at all to the 'proper' information in the file.  If 
you are not convinced of my claim, please tell me what information 
he gathers from the difference (in the four different possible
cases), and I'll respond also to your remaining points (untouched 
below). If you are convinced, then I think I don't need to deal 
with them. 
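The cycling filler described above is trivial to realize; a sketch of such a "non-constant" (merely periodic) two-bit pad source:

```python
from itertools import cycle

# Emits 00, 01, 10, 11, 00, ... on successive uses: periodic, not random,
# which is all the scheme above requires ("non-constancy").
pad_source = cycle(["00", "01", "10", "11"])

first_four = [next(pad_source) for _ in range(4)]
# first_four == ["00", "01", "10", "11"]; the fifth call wraps to "00".
```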

M. K. Shen

========================================
========================================

> 
> Whatever, not even something like Java has an interface for generating
> "real" randomness in a portable manner, AFAIK.  There's the
> "SeederDialog" - but you can hardly use that in your compression routine.
> You're probably going to have to use different methods of getting
> randomness on different systems.
> 
> :> : Whether the software on two runs of compression put in the
> :> : same bunch of (supposedly) random bits is of no significance.
> :>
> :> It makes little difference to security - if the padding is genuinely
> :> random.  But it may not be of *no* significance overall. For example it
> :> prevents the end user from checking whether two files deccompress to
> :> identical results by using the "diff" program on the compressed files.
> :> Very few things are of no significance at all.
> 
> : I mean the 'significance' in the context which Scott raised with
> : his 1-1 concept. In that context the difference is of no significance,
> : because it tells the analyst nothing (and in fact
> : it tells him nothing for the very reason that there are very
> : often differences.)
> 
> Um, what about my example?
> 
> : Besides, did you actually mean 'compress' instead of 'decompress' in
> : your last but one sentence above?
> 
> I did not.  I meant "decompress", instead of "deccompress", though ;-)
> 
> : (Decompressing is by construction o.k.! Only the compressed files
> : may be different due to the different filling bits.
> 
> Yes, but the user can't check if two compressed files are the same by
> using diff on the compressed files.  He'd have to decompress them first.
> 
> : If you want to check whether two compressed files have the same 'content',
> : decompress these and then apply the 'diff'.)
> 
> What if your user then tells you this is a pain in the ass, and is causing
> him to fart around deleting files unnecessarily to save space?
> 
> I bother to make this point only to illustrate that random padding may not
> be "of no significance".
> 
> :> As a sample security concern, compressing the file and encrypting it
> :> will no longer produce the same result.
> :>
> :> This means if the file gets lost and you have not stored a local copy you
> :> may want to recompress and reencrypt, using the original key.
> :  Normally
> :> this will involve few security problems, as eavesdroppers gain little
> :> additional information from the identical cyphertext.  However a "random"
> :> compressor ending gives them two files, encrypted with the same key, with
> :> a few bits of plaintext differing between them.
> 
> : If you have a 1-1 compressor and 'the file gets lost and you have
> : not stored a copy', what do you do? You have to do the same amount
> : of work, don't you? What is the 'plaintext' of 'a few bits of
> : plaintext differing ...' above?
> 
> The compressed file.  This has a few bits different where the padding is.
> 
> : Isn't that what you called 'plaintext' random?
> 
> I'm not sure how to parse this question.  I /think/ the answer is "no",
> but would have to see a clarification of what is being asked to say for
> sure.
> 
> : So what kind of information the analyst possibly could
> : get from that random stuff?
> 
> Like I said it may provide information about the type of encryption used.
> It might tell an attacker that information doesn't diffuse backwards
> through the file.  The distance from the start of the message at which the
> corruption occurs may provide information about the block size which he
> did not previously possess.
> 
> : Could you give some detailed description of how he could exploit
> : such random 'plaintexts'?
> 
> He can get information from them relating to the encryption which would
> not have been available had a deterministic compressor been employed.
> 
> :> This may give the attacker information about - for example - the block
> :> size being used, which he did not have from the original message alone.
> 
> : What do you mean by 'block size' here?
> 
> The size of the block used for encryption with a block cypher.
> 
> Say we have a two messages that are a multiple of 128 bits long that
> differ in one corrupted section, which starts and ends a multiple of 128
> bits from the start of the file.  The attacker might immediately think
> "aha, 128-bit block cypher, ECB mode".  However there is more corruption
> at the very end of the file, caused by non-deterministic compression
> ending.  This scrambles a block of data that starts a multiple of 64 bits
> from the start of the file.
> 
> The attacker can now think - AHA - 64 bit block cypher with coincidental
> resemblance to 128-bit block cypher!
> 
> In principle this sort of information might result in an identification
> of which party the message was sent from, if it could have come from a
> number of different locations which use different forms of encryption.
> This is traffic-analysis - style information - which should not be sneezed at.
> --
> __________
>  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
> 
> May all your PUSHes be POPped.

------------------------------

From: Dave Hazelwood <[EMAIL PROTECTED]>
Subject: Re: Fun With Playing Cards
Date: Thu, 13 Jan 2000 20:44:45 +0800

Perhaps off topic but fun nonetheless is some card magic.

Try it. Can you figure out how it's done?

http://www.timwike.dircon.co.uk/card.html


[EMAIL PROTECTED] (John Savard) wrote:

>Well, I've added my take on a playing card cipher to my web page, at
>
>http://www.ecn.ab.ca/~jsavard/pp0105.htm
>
>with links to Bruce Schneier's web page, as he invented Solitaire,
>which inspired all this, and to Paul Crowley's web page where his
>proposed Mirdek is located.
>
>Although I tried to make my cipher simpler and faster than Solitaire, I
>have to admit I think that it still involves too much drudgery to be
>practical in any but the most extreme emergencies.
>
>John Savard (teneerf <-)
>http://www.ecn.ab.ca/~jsavard/index.html


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
