Cryptography-Digest Digest #161, Volume #14 Mon, 16 Apr 01 17:13:01 EDT
Contents:
Re: NSA is funding stegano detection (Mok-Kong Shen)
Re: Why Not OTP ? (Mok-Kong Shen)
Re: There Is No Unbreakable Crypto (David Wagner)
Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")
Re: AES poll ("Jack Lindso")
Re: MS OSs "swap" file: total breach of computer security. (wtshaw)
Re: MS OSs "swap" file: total breach of computer security. ("Christian Bohn")
Re: LFSR Security ("Trevor L. Jackson, III")
Re: LFSR Security ("Trevor L. Jackson, III")
Re: MS OSs "swap" file: total breach of computer security. ("Tom St Denis")
Re: LFSR Security (Ian Goldberg)
Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
Re: LFSR Security ("Trevor L. Jackson, III")
Re: LFSR Security ("Trevor L. Jackson, III")
Re: AES poll ("Trevor L. Jackson, III")
----------------------------------------------------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: Mon, 16 Apr 2001 21:21:51 +0200
Bernd Eckenfels wrote:
>
> As long as stegano is theoretically safe but in practice detectable, it is
> a nice mind experiment but otherwise completely useless.
Yes. The current discussion is about how easy or difficult
that detection is. I would like to ask experts in image
processing one rather general question: in the average
case, if one arbitrarily modifies the LSB of one tenth of
the Fourier transform coefficients in one colour channel,
can anything be noticed by the naked eye when comparing
the two pictures? Thanks.
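For concreteness, a minimal numpy sketch of the experiment being asked about;
it assumes the coefficients are first rounded to integers (so that an LSB is
well defined) and uses a synthetic single-channel image, so it only
illustrates the mechanics, not the visual result on a real photograph:

# Flip the LSB of one tenth of the (rounded) 2-D Fourier coefficients of one
# colour channel, reconstruct, and measure how much the pixels change.
import numpy as np

rng = np.random.default_rng(1)
channel = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in colour plane

coeffs = np.fft.fft2(channel)
real_q = np.round(coeffs.real).astype(np.int64)                 # quantise so LSBs exist

flat = real_q.ravel()
picks = rng.choice(flat.size, size=flat.size // 10, replace=False)
flat[picks] ^= 1                                                # modify one tenth of the LSBs

modified = np.fft.ifft2(real_q + 1j * coeffs.imag).real
print("largest pixel change:", np.abs(modified - channel).max())
print("mean pixel change:   ", np.abs(modified - channel).mean())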
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Why Not OTP ?
Date: Mon, 16 Apr 2001 21:43:44 +0200
John Savard wrote:
>
> Frank Gerlach <[EMAIL PROTECTED]> wrote:
>
> >Why is it that people do not like OTP? It seems that some people do not like
> >Public-Key crypto, so why not just exchange a box of CDs?
>
> Well, it is cumbersome and expensive. Worse yet, it imposes a strict
> limit on how many communications can be exchanged - and maybe it may
> become important to communicate securely just when exchanging a new
> box of CDs has suddenly become harder.
Very long ago, I read that the Washington-Moscow hotline
was based on OTP. I suspect that this may not necessarily
be true now. Does anyone happen to have any information
on the type of encryption currently used?
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: There Is No Unbreakable Crypto
Date: 16 Apr 2001 19:59:50 GMT
Henrick Hellström wrote:
>The only reference I found was "(e.g., Bellare and Goldwasser's course
>notes)" and a web search turned out blank. Could you please specify where I
>should look?
http://www-cse.ucsd.edu/users/mihir/papers/gb.html
Once you understand the background of provable security, see Theorem 5.5.1.
However, I just took a look and it seems that they don't prove the theorem
in the lecture notes, so you may need to refer to the original paper by
Goldreich, Goldwasser, & Micali ("How to construct random functions").
Or, you can prove it yourself: It's not hard to prove once you understand
the basics of PRGs, PRFs, and provable security.
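For readers who want to see the construction the paper is about, here is a
minimal sketch of the GGM tree; the names prg/ggm_prf and the SHA-256
stand-in are illustrative assumptions, not the paper's notation, and a real
proof needs a genuine length-doubling PRG:

# GGM construction sketch: a PRF F_k(x) built from a length-doubling PRG
# G(s) = (G0(s), G1(s)) by walking a binary tree over the bits of x.
import hashlib

def prg(seed):
    """Stand-in length-doubling PRG: 32-byte seed -> two 32-byte halves."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key, x_bits):
    """Evaluate the GGM PRF at the bit string x_bits (e.g. '0110') under key."""
    node = key
    for bit in x_bits:
        node = prg(node)[int(bit)]      # take the left half on 0, right half on 1
    return node

key = hashlib.sha256(b"example key").digest()
print(ggm_prf(key, "0110").hex())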
------------------------------
From: "Brian Gladman" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Mon, 16 Apr 2001 21:09:12 +0100
"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> Brian Gladman wrote:
> >
>
> > If two different PRNGs giving uniformly distributed random numbers in
> > [0.0:1.0) are added and the result is taken 'mod 1.0', this output will
> > then be uniformly distributed in [0.0:1.0). A bit of maths shows that
> > the output in [0.0:2.0) is not uniform but that the mod function
> > combines the ranges [0.0:1.0) and [1.0:2.0) in such a way that a
> > uniform distribution results.
> >
> > But if the outputs of the generators are multiplied by constants close
> > to 1.0 before combination, the output will not generally be uniformly
> > distributed in [0.0:1.0).
> >
> > This can be seen by considering a single PRNG giving uniformly
> > distributed random numbers in [0.0:1.0) and considering the output
> > after multiplying by a number (1.0 + delta), close to 1.0, and taking
> > the output 'mod 1.0'. In this case numbers in the range [0.0:delta)
> > will occur twice as often as those in the range [delta:1.0).
> >
> > Although the maths is more complicated when several generators are
> > combined, the same issue turns up.
> >
> > The uneven distributions that result may not be a problem in some
> > applications but they will frequently be undesirable.
>
> One can consider the continuous case as the limiting
> case of the discrete case. In the discrete case, i.e.
> for integer range [0, n-1], it can be easily proved that
> the sum of a uniform random variable and an arbitrary
> random variable (more exactly one that is not degenerate
> in that it has non-zero frequency for at least one value
> relatively prime to n) mod n is a uniform variable.
Unless I misunderstood your intentions, your original post suggested - by
using the terminology '1.0 + delta' - that the multipliers involved were
intended to be close to 1.0. It also seemed that your starting PRNGs had
outputs in the range [0.0:1.0). But maybe this was not your intention.
In any event, this was the case I was referring to, not one where the
multipliers are large.
If several PRNGs with uniformly distributed outputs in the range [0.0:1.0)
are combined by adding 'mod 1.0' after multiplying each of them by factors
close to 1.0, then the resulting distributions will be very non-uniform.
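A quick simulation of the single-generator case described above; delta = 0.1
is an arbitrary choice for illustration:

# Simulate the case described above: one uniform [0,1) generator, multiplied
# by (1 + delta) and reduced mod 1.0.  Values in [0, delta) then occur twice
# as often per unit length: density 2/(1+delta) versus 1/(1+delta) elsewhere.
import random

delta = 0.1
N = 1_000_000
samples = [(random.random() * (1.0 + delta)) % 1.0 for _ in range(N)]

below = sum(1 for s in samples if s < delta)
print("density on [0, delta):  ", below / N / delta)                # about 1.82 = 2/(1+delta)
print("density on [delta, 1.0):", (N - below) / N / (1.0 - delta))  # about 0.91 = 1/(1+delta)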
Brian Gladman
------------------------------
From: "Jack Lindso" <[EMAIL PROTECTED]>
Subject: Re: AES poll
Date: Mon, 16 Apr 2001 23:15:47 +0200
From reading the government document concerning the choice of AES, I had the
feeling that Rijndael was selected without evident/sufficient proof of its
being the best choice.
--
Anticipating the future is all about envisioning the Infinity.
http://www.atstep.com
===============================================
"Steve K" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Thu, 12 Apr 2001 10:47:50 +0100, yomgui <[EMAIL PROTECTED]> wrote:
>
> >free, small, cross platform, safe, simple, fast, open source.
> >
> >http://bigfoot.com/~kryptyomic
> >
> >kctang wrote:
> >>
> >> Hi,
> >>
> >> "Good" file encrypt/decrypt utility wanted!
> >> Any recommendations?
> >>
> >> Thanks,
> >> Tang
> >>
> >> PS. What is good? That depends.
> >>
> >> Might be it is free. Might be it is available
> >> "everywhere".
> >> Might be it is fast. Might be it is small.
> >> Should be "safe"?
> >
> >--
> >oim 3d - surface viewer - http://i.am/oim
> >kryptyomic - encryption scheme - http://bigfoot.com/~kryptyomic
>
> Another suggestion: Fast, free, convenient, encrypts whole directory
> trees on the fly (your files *never* have to be written to disk as
> plain text), open source, top reputation:
>
> Scramdisk, http://www.scramdisk.clara.net/
>
> :o)
>
>
>
> ---Support privacy and freedom of speech with---
> http://www.eff.org/ http://www.epic.org/
> http://www.cdt.org/
> PGP keys:
> RSA - 0x4912D5E5
> DH/DSS - 0xBFCE18A9
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Crossposted-To: talk.politics.crypto,alt.hacker
Subject: Re: MS OSs "swap" file: total breach of computer security.
Date: Mon, 16 Apr 2001 13:59:08 -0600
In article <tFBC6.20394$[EMAIL PROTECTED]>, "Tom St
Denis" <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> > And, recognizing this, your reason for continuing to use Win98 would be
> > ......??????
>
> What's your point? It's possible to secure memory in Win98, ASS is just too
> stupid to figure out how.
>
It does not follow that one should seek a ride in a leaky boat, even one
with only a slight hole in it, when a better choice is available. I doubt
you understand either what you advocate or the limitations of trying to
fully control the effects of a black box.
--
At peril to the country, Texas is glad to be rid of Bush. The Texas
legislature is busy undoing the messes he created. I told you so.
------------------------------
From: "Christian Bohn" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.hacker
Subject: Re: MS OSs "swap" file: total breach of computer security.
Date: Mon, 16 Apr 2001 22:51:27 +0200
Why waste a lot of system resources on encrypting/decrypting all data as it
is paged in and out? There is only a small portion of the data you really
want to protect (do you really want all the data that is already easily
accessible on your disk to be encrypted in the swapfile?), and you do that
by telling Windows NOT to cache those pages. Look up VirtualProtect and
PAGE_NOCACHE in your Win32 documentation.
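A rough sketch of that idea via ctypes follows; note that it uses VirtualLock
to pin the pages in RAM (PAGE_NOCACHE governs processor caching rather than
paging, so treat the API choice as a substitution), and on the Win9x line
VirtualLock is reportedly only a stub:

# Windows-only sketch: keep a small buffer of key material out of the pagefile
# by locking its pages into physical RAM, then wipe it before unlocking.
import ctypes

kernel32 = ctypes.windll.kernel32

secret = ctypes.create_string_buffer(b"sensitive key material")
addr = ctypes.addressof(secret)
size = ctypes.sizeof(secret)

if not kernel32.VirtualLock(ctypes.c_void_p(addr), ctypes.c_size_t(size)):
    raise ctypes.WinError()
try:
    pass  # ... use the key material here ...
finally:
    ctypes.memset(addr, 0, size)              # overwrite before the pages can move again
    kernel32.VirtualUnlock(ctypes.c_void_p(addr), ctypes.c_size_t(size))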
Christian
> Unbelievable.
>
> For me, the "swap" file implementation in MS OSs is proof positive
> that MS is in a conspiracy to control OUR information (and all of
> US by implication) and is most probably cooperating with the
> government in this regard. MS is intentionally placing our right
> to privacy at risk.
>
> It also tells me that this Justice Dept. anti-trust case against MS
> may be nothing but a political charade.
>
> A computer user must have total discretionary control over certain
> aspects of OS implementation such as the activation, use, and
> access to a "swap" file.
>
> The only discretion one has at this time is to NOT use any leaky MS
> security sieve of an OS.
------------------------------
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: Mon, 16 Apr 2001 20:45:17 GMT
Scott Fluhrer wrote:
> Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Ian Goldberg wrote:
> >
> > > In article <[EMAIL PROTECTED]>,
> > > Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
> > > >With N unknown you just use Berlekamp-Massey. The invariant in BM is
> > > >that one always has the smallest configuration that explains the
> > > >sequence up to the current bit. By continuing this process through the
> > > >gaps, assigning each unknown bit the subsequent output of the current
> > > >machine, one can maintain the invariant and preserve the validity of
> > > >the result.
> > >
> > > So after reading this thread this morning, I spent the day studying the
> > > BM algorithm. I now understand it.
> > >
> > > The above isn't true. For example, find the LFSR that generates:
> > >
> > > 1 0 0 0 ? 0 ? 0 1 1
> > >
> > > The answer is actually the LFSR of size 5: 111101 which generates
> > >
> > > 1 0 0 0 1 0 1 0 1 1
> > >
> > > But if you use Trevor's technique, you get that after the first 4 bits,
> > > you're working with the LFSR of size 1: 10 which generates
> > >
> > > 1 0 0 0 0 0 0 0 0 0 ...
> > >
> > > and you'll only see a problem when you get to the 1's at the end,
> > > at which point you're forced to change it to the LFSR of length 8:
> > > 110000001.
> >
> > Not quite. You can assume that the most recent known bit is the
> > culprit, as in your example, but there's no reason to prefer that bit,
> > and good reason to believe that the culprit is earlier in the sequence.
> > So when a conflict is found one must backtrack by trying (toggling)
> > each of the assumed bits in turn and use the smallest machine created.
> > The backtracking is complete when no single-bit change of the assumed
> > bits produces a smaller machine. (I need to show that any change to a
> > smaller machine can be found by successive single-bit changes -- can't
> > quite do that yet).
> >
> > The idea is to, at all times, have as small a machine as is necessary
> > to produce as much of the sequence as possible. In your example the
> > machine found at bit nine was not the smallest machine that could
> > generate the first nine bits, so it violates the BM invariant.
>
> If you're going to do all that back-tracking, wouldn't it be easier, if
> you have N unknown bits, to scan through all 2**N possible settings for them
> at the outset, and simply run BM 2**N times, and select the smallest output?
That's the worst case performance. Maintaining the BM invariant avoids testing
a large fraction of the possible configurations in most cases.
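For reference, a short sketch of the 2**N brute-force baseline Scott
describes (Berlekamp-Massey over GF(2), re-run for every filling of the
unknown bits; the function names are illustrative):

# Brute-force baseline for the problem above: run Berlekamp-Massey over GF(2)
# once for every assignment of the '?' bits and keep the shortest LFSR length.
from itertools import product

def berlekamp_massey(bits):
    """Length L of the shortest LFSR generating `bits` (a list of 0/1)."""
    n = len(bits)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def shortest_lfsr_with_gaps(seq):
    """seq is a string over '0', '1', '?'; tries all 2**N fillings of the '?'s."""
    gaps = [i for i, ch in enumerate(seq) if ch == '?']
    best_len, best_bits = None, None
    for fill in product((0, 1), repeat=len(gaps)):
        bits = [1 if ch == '1' else 0 for ch in seq]   # '?' entries overwritten below
        for pos, val in zip(gaps, fill):
            bits[pos] = val
        L = berlekamp_massey(bits)
        if best_len is None or L < best_len:
            best_len, best_bits = L, bits
    return best_len, best_bits

print(shortest_lfsr_with_gaps("1000?0?011"))   # Ian's example: shortest length is 5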
------------------------------
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: Mon, 16 Apr 2001 20:48:32 GMT
Ian Goldberg wrote:
> In article <9bf38p$mq0$[EMAIL PROTECTED]>,
> Scott Fluhrer <[EMAIL PROTECTED]> wrote:
> >If you're going to do all that back-tracking, wouldn't it be easier, if
> >you have N unknown bits, to scan through all 2**N possible settings for them
> >at the outset, and simply run BM 2**N times, and select the smallest output?
>
> We're really hoping to do better than O(2^n). The proposal was to not
> try *all* 2^N settings at once, but rather to try to "approach" the
> right one by changing one at a time. This has a complexity of (I think)
> O(n^4). But, as I indicated in another post, the algorithm doesn't
> work.
How did you get N^4? Standard BM is O(N^2) I think. The single-pass,
no-backtracking algorithm you analyzed has the same performance. Where did
the extra factor come from?
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,alt.hacker
Subject: Re: MS OSs "swap" file: total breach of computer security.
Date: Mon, 16 Apr 2001 20:52:10 GMT
"wtshaw" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> In article <tFBC6.20394$[EMAIL PROTECTED]>, "Tom St
> Denis" <[EMAIL PROTECTED]> wrote:
>
> > <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> > > And, recognizing this, your reason for continuing to use Win98 would
> > > be ......??????
> >
> > What's your point? It's possible to secure memory in Win98, ASS is just
> > too stupid to figure out how.
> >
> It does not follow that one should seek a ride in a leaky boat, even one
> with only a slight hole in it, when a better choice is available. I doubt
> you understand either what you advocate or the limitations of trying to
> fully control the effects of a black box.
Now repeat your reply in English please.
Tom
------------------------------
From: [EMAIL PROTECTED] (Ian Goldberg)
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: 16 Apr 2001 20:52:41 GMT
In article <[EMAIL PROTECTED]>,
Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
>Ian Goldberg wrote:
>
>> In article <9bf38p$mq0$[EMAIL PROTECTED]>,
>> Scott Fluhrer <[EMAIL PROTECTED]> wrote:
>> >If you're going to do all that back-tracking, wouldn't it be easier, if
>> >you have N unknown bits, to scan through all 2**N possible settings for them
>> >at the outset, and simply run BM 2**N times, and select the smallest output?
>>
>> We're really hoping to do better than O(2^n). The proposal was to not
>> try *all* 2^N settings at once, but rather to try to "approach" the
>> right one by changing one at a time. This has a complexity of (I think)
>> O(n^4). But, as I indicated in another post, the algorithm doesn't
>> work.
>
>How did you get N^4? Standard BM is O(N^2) I think. The single-pass,
>no-backtracking algorithm you analyzed has the same performance. Where did
>the extra factor come from?
I was talking about the "backtrack and change the unknown bits one at a
time when you find a mistake" algorithm. Every time you find a mistake,
you'll have O(N^2) applications of BM, as you try flipping each ? bit
individually to find the best one, then do that again to try to improve
your result, and so on. I'm guessing you may have to do that O(N)
times. [I'm assuming the number of ? bits, the total number of bits you
have, and the size of the LFSR are all O(N).]
- Ian
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Mon, 16 Apr 2001 22:53:17 +0200
Brian Gladman wrote:
>
> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote:
> >
> >
> > Brian Gladman wrote:
> > >
> >
> > > If two different PRNGs giving uniformly distributed random numbers in
> > > [0.0:1.0) are added and the result is taken 'mod 1.0', this output
> > > will then be uniformly distributed in [0.0:1.0). A bit of maths shows
> > > that the output in [0.0:2.0) is not uniform but that the mod function
> > > combines the ranges [0.0:1.0) and [1.0:2.0) in such a way that a
> > > uniform distribution results.
> > >
> > > But if the outputs of the generators are multiplied by constants
> > > close to 1.0 before combination, the output will not generally be
> > > uniformly distributed in [0.0:1.0).
> > >
> > > This can be seen by considering a single PRNG giving uniformly
> > > distributed random numbers in [0.0:1.0) and considering the output
> > > after multiplying by a number (1.0 + delta), close to 1.0, and taking
> > > the output 'mod 1.0'. In this case numbers in the range [0.0:delta)
> > > will occur twice as often as those in the range [delta:1.0).
> > >
> > > Although the maths is more complicated when several generators are
> > > combined, the same issue turns up.
> > >
> > > The uneven distributions that result may not be a problem in some
> > > applications but they will frequently be undesirable.
> >
> > One can consider the continuous case as the limiting
> > case of the discrete case. In the discrete case, i.e.
> > for integer range [0, n-1], it can be easily proved that
> > the sum of a uniform random variable and an arbitrary
> > random variable (more exactly one that is not degenerate
> > in that it has non-zero frequency for at least one value
> > relatively prime to n) mod n is a uniform variable.
>
> Unless I misunderstood your intentions, your original post suggested - by
> using the terminology '1.0 + delta' - that the multipliers involved were
> intended to be close to 1.0. It also seemed that your starting PRNGs had
> outputs in the range [0.0:1.0). But maybe this was not your intention.
>
> In any event, this was the case I was referring to, not one where the
> multipliers are large.
>
> If several PRNGs with uniformly distributed outputs in the range [0.0:1.0)
> are combined by adding 'mod 1.0' after multiplying each of them by factors
> close to 1.0, then the resulting distributions will be very non-uniform.
I suppose I don't yet understand you. Do you mean that
the case where the multipliers are close to 1.0 produces
a worse result than the case where they differ more?
M. K. Shen
------------------------------
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: Mon, 16 Apr 2001 20:58:50 GMT
Ian Goldberg wrote:
> In article <[EMAIL PROTECTED]>,
> Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
> >Not quite. You can assume that the most recent known bit is the culprit, as
> >in your example, but there's no reason to prefer that bit, and good reason
> >to believe that the culprit is earlier in the sequence. So when a conflict
> >is found one must backtrack by trying (toggling) each of the assumed bits in
> >turn and use the smallest machine created. The backtracking is complete
> >when no single-bit change of the assumed bits produces a smaller machine. (I
> >need to show that any change to a smaller machine can be found by successive
> >single-bit changes -- can't quite do that yet).
>
> I believe I have a counterexample to this conjecture.
>
> Consider the sequence
>
> 000010100?000?1?0
>
> Here's how it'll go:
>
> First you'll process "000010100" in the usual way to find the LFSR
> of size 5 with connection polynomial (in the language of HAC)
> 1+D^2+D^4+D^5, which generates the sequence 0000101001101110...
>
> So when you hit the first ? you'll optimistically guess that it's a 1.
> But then you hit the 0. The best LFSR that generates 00001010010 is
> of size 6 (1+D^2+D^4+D^5+D^6), which generates 000010100100110...
> But you decide to see what happens if you flip the first ? to a 0,
> and you find it doesn't help (also size 6), so you stick with what
> you've got.
>
> Then you hit the next 0, which is what your current LFSR predicts, so
> that's good. Now you're at the third 0 after the ?, which is *not* what
> the current LFSR predicts. So you fix your LFSR to get one of size 7
> (1+D^5+D^7), which generates 00001010010001101...
> Again, you see if flipping the ? helps, but it doesn't (also of size 7),
> so you stick with your current choice.
>
> Now you hit the second ?, so you optimistically guess that it's a 1,
> in accordance with your current best LFSR. The next bit is 1, which
> matches your LFSR, so you still have the (7,1+D^5+D^7) LFSR when you
> get to the third ?, so you optimistically guess that it's a 0.
>
> Now you get to the last bit in the sequence, a 0, which doesn't match
> your current LFSR. Now, to clarify, your current LFSR sequence is
> (with guessed bits in parens):
>
> 000010100(1)000(1)1(0)
>
> but its next bit is a 1, and you want a 0. So you grow the LFSR
> to the shortest one which generates 00001010010001100, which is
> of size 10 (1+D^4+D^5+D^6+D^7+D^8+D^9+D^10). Now to try to make it
> smaller by changing your guesses of the ? bits.
>
> You propose changing them one at a time (in effect, trying a
> 'hill-climbing' algorithm), moving towards shorter LFSRs if possible.
> Now we see that this algorithm does not work.
>
> Changing the bits one at a time yields the following 3 sequences, with
> their associated shortest LFSRs:
>
> 000010100(0)000(1)1(0)0: 9 (1+D+D^9)
> 000010100(1)000(0)1(0)0: 8 (1+D^3+D^8)
> 000010100(1)000(1)1(1)0: 9 (1+D+D^3+D^6+D^9)
>
> So we change our guess of the second ?. Now we try it again, to
> see if we can do any better. We won't bother trying to flip the second
> ? back the way it was, of course, so we get the following two
> possibilities:
>
> 000010100(0)000(0)1(0)0: 9 (1+D^4+D^6+D^8)
> 000010100(1)000(0)1(1)0: 9 (1+D+D^4+D^5+D^7)
>
> So we conclude that (8,1+D^3+D^8) is the best we can do.
>
> But we're wrong.
>
> In fact, (7,1+D+D^3+D^5+D^7) generates the sequence
>
> 000010100(0)000(1)1(1)0
>
> which we didn't find by changing one bit at a time. :-(
I've failed to show the single-bit connectivity of the optimal intermediate
machines, and you've found a counterexample. That is fairly conclusive
evidence that the single-bit approach to maintaining the invariant is
inadequate.
Did you find your example by analysis or by search?
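For anyone who wants to check the counterexample directly, a small simulation
in the HAC convention (s_j = c_1*s_{j-1} + ... + c_L*s_{j-L} mod 2):

# Check the quoted counterexample: the length-7 LFSR with connection polynomial
# 1 + D + D^3 + D^5 + D^7 does fit the constrained pattern 000010100?000?1?0.
def lfsr_output(taps, state, n):
    """taps = exponents of D with non-zero coefficients; state = the first L bits."""
    s = list(state)
    while len(s) < n:
        s.append(sum(s[-t] for t in taps) % 2)
    return s

pattern = "000010100?000?1?0"
out = lfsr_output(taps=(1, 3, 5, 7), state=[0, 0, 0, 0, 1, 0, 1], n=len(pattern))

assert all(p == '?' or int(p) == bit for p, bit in zip(pattern, out))
print(''.join(map(str, out)))   # 00001010000001110 -- the '?' bits come out 0, 1, 1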
------------------------------
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: LFSR Security
Date: Mon, 16 Apr 2001 21:00:08 GMT
David Wagner wrote:
> Trevor L. Jackson, III wrote:
> >So when a conflict
> >is found one must backtrack by trying (toggling) each of the assumed bits in
> >turn and use the smallest machine created.
>
> That's exponential-time.
In pathological cases, yes. But if the optimal intermediate machines are
simply connected within the search space, then it is far better than
exponential time.
------------------------------
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: AES poll
Date: Mon, 16 Apr 2001 21:02:58 GMT
Jack Lindso wrote:
> From reading the government document concerning the choice of AES, I had the
> feeling that Rijndael was selected without evident/sufficient proof of its
> being the best choice.
How many reams of argument do you want over the definition of "best"?
>
>
> --
> Anticipating the future is all about envisioning the Infinity.
> http://www.atstep.com
> -----------------------------------------------
> "Steve K" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > On Thu, 12 Apr 2001 10:47:50 +0100, yomgui <[EMAIL PROTECTED]> wrote:
> >
> > >free, small, cross platform, safe, simple, fast, open source.
> > >
> > >http://bigfoot.com/~kryptyomic
> > >
> > >kctang wrote:
> > >>
> > >> Hi,
> > >>
> > >> "Good" file encrypt/decrypt utility wanted!
> > >> Any recommendations?
> > >>
> > >> Thanks,
> > >> Tang
> > >>
> > >> PS. What is good? That depends.
> > >>
> > >> Might be it is free. Might be it is available
> > >> "everywhere".
> > >> Might be it is fast. Might be it is small.
> > >> Should be "safe"?
> > >
> > >--
> > >oim 3d - surface viewer - http://i.am/oim
> > >kryptyomic - encryption scheme - http://bigfoot.com/~kryptyomic
> >
> > Another suggestion: Fast, free, convenient, encrypts whole directory
> > trees on the fly (your files *never* have to be written to disk as
> > plain text), open source, top reputation:
> >
> > Scramdisk, http://www.scramdisk.clara.net/
> >
> > :o)
> >
> >
> >
> > ---Support privacy and freedom of speech with---
> > http://www.eff.org/ http://www.epic.org/
> > http://www.cdt.org/
> > PGP keys:
> > RSA - 0x4912D5E5
> > DH/DSS - 0xBFCE18A9
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************