Cryptography-Digest Digest #362, Volume #11 Sat, 18 Mar 00 23:13:01 EST
Contents:
Re: Quantum crypto and the name of god (ca314159)
Re: Enigma encryption updated (Adam D) ("Adam Durana")
Re: Card shuffling (Mok-Kong Shen)
Re: Card shuffling (Mok-Kong Shen)
Re: Any Mathematicians Surnamed _Saint_-Germain? ("Douglas A. Gwyn")
Re: 64-bit Permutations ("Douglas A. Gwyn")
Opinions? ("Marc Howe")
Re: new Echelon article ("Douglas A. Gwyn")
Re: Card shuffling ("Douglas A. Gwyn")
Re: Card shuffling ("Douglas A. Gwyn")
Re: EOF in cipher??? ("Douglas A. Gwyn")
Re: Enigma encryption updated (Adam D) ("Douglas A. Gwyn")
Re: Opinions? ("Douglas A. Gwyn")
Re: Card shuffling (Jim Reeds)
Re: 64-bit Permutations ([EMAIL PROTECTED])
Re: SHA-1 as a stream cipher ("Tom St Denis")
Re: Opinions? ("Tom St Denis")
Re: Concerning UK publishes "impossible" decryption law ("ÐRëÐÐ")
Re: SHA-1 as a stream cipher ("Steve A. Wagner Jr.")
Re: Opinions? (Steve K)
VB Property help with Twofish algorithm: PLEASE HELP ("Marc Howe")
Re: Enigma encryption updated (Adam D) (Nemo psj)
Re: Enigma encryption updated (Adam D) (Nemo psj)
----------------------------------------------------------------------------
From: ca314159 <[EMAIL PROTECTED]>
Subject: Re: Quantum crypto and the name of god
Date: Sun, 19 Mar 2000 01:09:57 GMT
John Savard wrote:
> You're probably thinking of an old Arthur C. Clarke short story (The
> Nine Billion Names of God) in addition to Douglas Adams' famous
> oeuvre. And the pun on Hilbert...
I've been having fun with Eco's Foucault's Pendulum.
Though my favorite pun is the quantum.
Foucault's "This is not a pipe." an art quantum:
http://images.amazon.com/images/P/0520049160.01.LZZZZZZZ.gif
------------------------------
From: "Adam Durana" <[EMAIL PROTECTED]>
Subject: Re: Enigma encryption updated (Adam D)
Date: Sat, 18 Mar 2000 20:16:18 -0500
> No on all accounts..... Cept first I posted the source, and how it is used in
> the algy in the word doc i suggest you stop thinking math and start thinking
> plain english when you read it, then maybe youll get it.
Cryptography is math, so I think you need to rewrite your paper taking this
into account.
And it's ALGORITHM, not "algy". =)
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Card shuffling
Date: Sun, 19 Mar 2000 02:35:04 +0100
Tim Tyler wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> : The new problem is now how to 'measure' randomness and that
> : involves further the (difficult) problem of defining 'randomness'.
> : Of course, we want something that is 'practical' not the stuffs of
> : the pedantic people who demand (absolute) perfectness. But I am
> : yet ignorant of anything that could be useful in that direction.
>
> Knuth made serious attempts to both formally define randomness, and to
> present statistical tests that attempted to measure deviations from it,
> in TAOCP v.2.
>
> These seem applicable to shuffling - since a good shuffle represents a
> sequence where at any point the next card in the deck is chosen at random
> from those not yet dealt.
>
> There are various ways of turning the information in a shuffled pack into
> a what would be a genuinely random sequence - *if* the shuffle was a
> random permutation in the first place.
My reading of Knuth's book is certainly not deep. But I am fairly
sure that he doesn't give a (practical) 'measure' of 'randomness'.
There are a number of tests described. But there is no single
numerical value that can be computed and taken as the (standard)
measure of 'randomness' comparable to the measures obtained
in physics. Could you please cite him or else give your own favourite
measure? Note further that I am considering shuffling done by humans.
But even with a computer, there seem to be problems. If I use a
very poor PRNG to obtain a permutation according to the method of
Durstenfeld, is that permutation 'random' or not? Thanks.
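For concreteness, Durstenfeld's method (Algorithm P in Knuth, Vol. 2) can be sketched as follows; the resulting permutation is only as "random" as the generator driving it, which is exactly the question raised. The tiny LCG below is a deliberately poor generator with illustrative, made-up parameters:

```python
import random

def durstenfeld_shuffle(deck, rand_below):
    """Fisher-Yates / Durstenfeld: walk from the top of the deck down,
    swapping each position with a position chosen at or below it."""
    for i in range(len(deck) - 1, 0, -1):
        j = rand_below(i + 1)      # should be uniform on 0..i
        deck[i], deck[j] = deck[j], deck[i]

# With a good source (Python's Mersenne Twister), every permutation
# of a small deck is essentially equally likely:
rng = random.Random(1)
deck = [1, 2, 3]
durstenfeld_shuffle(deck, lambda n: rng.randrange(n))
print(sorted(deck))                # -> [1, 2, 3]: still a permutation

# A deliberately poor generator (tiny LCG, illustrative parameters):
# feeding it to the same algorithm still yields valid permutations,
# but their frequencies over many runs are visibly non-uniform.
state = 1
def bad_rand(n):
    global state
    state = (5 * state + 3) % 16   # period at most 16
    return state % n

durstenfeld_shuffle(deck, bad_rand)
print(sorted(deck))                # -> [1, 2, 3]: a permutation, but a biased one
```

Either way the output is a permutation; the bias shows up only in the long-run frequencies, which is why a single shuffle cannot be called "random" or "not random" in isolation.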
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Card shuffling
Date: Sun, 19 Mar 2000 02:35:11 +0100
Jim Reeds wrote:
>
> <[EMAIL PROTECTED]> writes:
> |> ... Meanwhile I should appreciate a short hint
> |> of the meaning of 'effective mixing'? Is there a rigorous measure
> |> of the 'effectivness'?
>
> A given shuffling method repeated n times gives rise to a probablity
> distribution on the space of permutations of the card deck. Call that
> distribution P_n, so P_n(A) is the chance that the permutation you
> get after n shuffles is one of the perms. in the set A. Let Q be the
> uniform distribution on the same space. What we'd like is for P_n to
> be close to Q. One measure of the discrepancy is called the "total
> variation distance" between P_n and Q, namely the max over all A of
> |P_n(A) - Q(A)|. It is not hard to see that this is the same as
> one half the "L1" distance of P_n and Q, the sum over all permutations
> x of |P_n({x}) - Q({x})|.
>
> Suppose we play a betting game, where you think prob. law Q obtains,
> but I know P_n does. For any proposed bet, P_n(A) - Q(A) is how much
> our judgements differ, and if I'm right, then P_n(A) - Q(A) is how
> much money I can make off of you per dollar bet, on average, by betting
> that A will occur. Now I'm crafty & pick the most favorable A, from the
> point of view of extracting money from you. That's the A in the definition
> of TV distance, and that gives a "natural" interpretation of the numerical
> values of the TV distance. If it is 2^-50 (say) then to exploit the
> fact that P_n is not actually flat random, I will have to bet (invest)
> 2^50 dollars to expect to earn 1 dollar. Etc.
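The quoted total variation distance is easy to compute for a toy deck of 3 cards; a minimal sketch using the L1 form (the biased distribution is purely illustrative, not a model of any real shuffle):

```python
from itertools import permutations

def tv_distance(p, q):
    """Total variation distance: half the L1 distance between two
    probability distributions on the same finite space."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

perms = list(permutations([1, 2, 3]))        # the 6 orderings of 3 cards
uniform = {x: 1.0 / 6 for x in perms}

# An illustrative (made-up) biased shuffle that favours leaving
# the deck in its starting order 1-2-3:
biased = {x: 1.0 / 8 for x in perms}
biased[(1, 2, 3)] = 3.0 / 8

print(tv_distance(biased, uniform))          # -> 5/24, about 0.208
print(tv_distance(uniform, uniform))         # -> 0.0
```

Here the maximizing event A in the definition is {(1, 2, 3)}, where the two distributions differ by 3/8 - 1/6 = 5/24, matching the L1 computation.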
I am afraid that I don't yet understand your points. First, do you
think that a human shuffling a deck can be comparable to shuffling
with an algorithm? I mean there could be quite big variations
in his performance from one shuffling to another, so it would
at least need quite an amount of experimental data to determine
any probability distribution. Second, do I understand correctly
that you count the frequencies of occurrence of each possible
permutation? For simplicity, let's say there are three cards
numbered 1, 2 and 3. Thus there are 6 permutations. How is the
experiment to be carried out? Does one always start from 1-2-3
and check what permutation one gets after the shuffling process?
It is my gut feeling that if one carries out a sufficiently large
number of such shufflings, then the frequencies will approach a
uniform distribution arbitrarily well. Well, I can see that for any
fixed number n of shufflings, the person with poor performance will
have a less flat frequency distribution than the one with better
performance. But that difference would decrease as n increases.
For extremely large n, I surmise that the distribution will always
be very flat, so that the L1 distance will be infinitesimal. Now
given, say, experimental data of 100 persons with their corresponding
frequency distribution curves for a wide range of values of n, I
don't yet see how you are going to define the 'effectiveness' of
shuffling from the L1 distances. In particular, should one choose
a certain fixed n value? But then why that particular choice?
Would you be kind enough to explain a bit more? Many thanks in advance.
M. K. Shen
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: Any Mathematicians Surnamed _Saint_-Germain?
Date: Sun, 19 Mar 2000 01:47:35 GMT
John R Ramsden wrote:
> One reason for adding Sophie to this qualifier is obviously that the
> word "germaine" actually means something in English, i.e. "relevant",
> unlike say "Fermat". So in speaking of "Germain primes" one might
> confuse the listener (although this shouldn't be a problem in writing
> about them). After all, one doesn't refer to "Emmy Noetherian rings".
Your argument is not germane; we don't call Good graphs "Irving Good
graphs".
> These hypersensitive anti-sexist types can be such a pain. If
> anything they should be delighted by the phrase "Sophie Germain
> primes", as this highlights the contribution of a female
> mathematician in an area that is undeniably more of a male preserve.
You "just don't get it", do you. It would honor her more to treat
her as a peer.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: 64-bit Permutations
Date: Sun, 19 Mar 2000 01:49:44 GMT
[EMAIL PROTECTED] wrote:
> I might suggest that you take a crash course in basic abstract
> algebra. ;-)
Why is it that nobody else had trouble understanding the question?
------------------------------
From: "Marc Howe" <[EMAIL PROTECTED]>
Subject: Opinions?
Date: Sun, 19 Mar 2000 01:52:58 GMT
There is nothing that is truly random, correct?
Marc
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Crossposted-To:
alt.politics.org.cia,alt.politics.org.nsa,talk.politics.crypto,alt.journalism.print,alt.journalism.newspapers
Subject: Re: new Echelon article
Date: Sun, 19 Mar 2000 01:54:12 GMT
JimD wrote:
> Well of course they do! Isn't '...the economic well-being of the
> United States.' part of the NSA's mission statement?
No.
However, they *have* been tasked with helping protect the
"information infrastructure", which is a legitimate national
security interest that happens to contribute to our economic
well-being as well as being of importance for other reasons.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Card shuffling
Date: Sun, 19 Mar 2000 02:09:06 GMT
Mok-Kong Shen wrote:
> There are a number of tests described. But there is no single
> numerical value that can be computed and taken as the (standard)
> measure of 'randomness' comparable to the measures obtained
> in physics.
That's because you're asking the wrong question. What you *should*
ask for is a test that provides a quantifiable amount of evidence
for or against the hypothesis that a process is operating according
to spec, where the spec includes some component of randomness. E.g.
for a possibly "random" permutation, the spec could be that each
index in sequence is assigned to any so-far-unassigned cell with
equiprobability. To contrast the "to spec" hypothesis with *every*
possible "not to spec" hypothesis, one needs a theory that develops
in the main space and the dual space simultaneously. Kullback has
already presented this in "Information Theory and Statistics" (1959).
For general guidance on using "weight of evidence" in making
rational decisions, see Good's writings.
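Good's "weight of evidence" is just the log of the likelihood ratio between the two hypotheses; a small sketch with hypothetical probabilities (the 0.10 and 0.02 below are invented for illustration):

```python
import math

def weight_of_evidence(p_e_given_h, p_e_given_alt):
    """Good's weight of evidence in favour of H over the alternative,
    provided by evidence E: the log likelihood ratio, here scaled to
    decibans (Good's and Turing's unit, 10 * log10)."""
    return 10 * math.log10(p_e_given_h / p_e_given_alt)

# Hypothetical example: an observed permutation pattern that has
# probability 0.02 under the "to spec" (uniform) hypothesis and
# 0.10 under some specific biased alternative:
print(round(weight_of_evidence(0.10, 0.02), 1))  # -> 7.0 decibans for the alternative
```

Positive weight favours the first hypothesis, negative the second, and weights from independent pieces of evidence simply add.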
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Card shuffling
Date: Sun, 19 Mar 2000 02:13:38 GMT
"NFN NMI L." wrote:
> I deal the cards out into X piles, and then stack the piles one onto
> another. First I'll start with 2 piles, then 3, then 5, then 7, etc.
> Does _that_ increase randomness, or not?
What do you mean by "randomness"? If every step in your procedure is
precisely prescribed, including when to terminate it, then there is
nothing random about it.
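Indeed, if every step is prescribed, the whole procedure collapses to a single fixed permutation; a quick sketch (the restacking order is an assumption made for illustration):

```python
def pile_shuffle(deck, x):
    """Deal the deck one card at a time into x piles, then restack
    the piles in order (an assumed restacking order, for illustration)."""
    piles = [deck[i::x] for i in range(x)]
    restacked = []
    for pile in piles:
        restacked.extend(pile)
    return restacked

deck = list(range(10))
once = pile_shuffle(deck, 3)
again = pile_shuffle(deck, 3)
print(once)           # -> [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
print(once == again)  # -> True: the procedure is a fixed permutation
```

Repeating it, or composing deals into 2, 3, 5, 7 piles, only composes fixed permutations into another fixed permutation.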
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???
Date: Sun, 19 Mar 2000 02:32:58 GMT
Jerry Coffin wrote:
> With that given, if you want to say that my conclusion was incorrect,
> it seems to me that there are exactly two possibilities: you can
> point out where I got things wrong factually, or else you can point
> out where my reasoning that led from those facts to a conclusion was
> wrong.
Your reasoning, which was not presented in a logically verifiable
format (chain of deduction), was wrong because you apparently
assumed that since *some* aspects of I/O have implementation-defined
aspects, *all* I/O has impl.-def. aspects. That is simply not so
for normal I/O, for example in a filter that performs, say, Vigenère
encryption using a key from, say, the command line (merely to avoid
a separate argument about whether filename arguments can be used
uninterpreted without invoking implementation-defined behavior).
There are a lot of system-specific things going on in the
environment, but the output of the program *depends on none of them*.
The arguments you have given would imply that even the "Hello,
world!" program is not strictly conforming, which is assuredly not
the intent of the C standard.
Now, if a program *did* depend on which way the implementation
defines impl.-def. characteristics, for example, on whether or
not opening an existing file for writing truncated it or merely
overwrote the existing contents, then it certainly would not be
strictly conforming. Anyone can check the entire list of impl.-def.
characteristics in the "Files" section of the C standard and verify
that they aren't a problem for most routine programs.
Writing strictly-conforming C code is *important* for portability.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Enigma encryption updated (Adam D)
Date: Sun, 19 Mar 2000 02:34:27 GMT
Adam Durana wrote:
> And its ALGORITHM not "algy". =)
Maybe he ain't got rithm.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Opinions?
Date: Sun, 19 Mar 2000 02:45:15 GMT
Marc Howe wrote:
> There is nothing that is truly random, correct?
Consensus among physicists is that the fundamental behavior
of elementary particles etc. involves purely random "choices"
being made according to inherently probabilistic laws.
For example, if you use a Geiger counter or similar detector
to watch for decay products from a chunk of a radioactive
isotope, the time interval between detected events is
predictable only as a statistical ensemble, not as separate
events. The intervals follow an exponential distribution with a
characteristic time between decays that is some constant
multiple of the half-life of the nucleus. The randomness
involved here is not due to merely an absence of sufficiently
exact knowledge of the state of everything, but rather the
impossibility of having such knowledge, even in principle.
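The statistical picture can be simulated (a simulation only, of course: a PRNG stands in for the genuinely random physics, and the half-life here is an arbitrary made-up value). Individual intervals are unpredictable, but their ensemble mean matches the theory:

```python
import math
import random

half_life = 5.0                          # hypothetical half-life (arbitrary units)
lam = math.log(2) / half_life            # decay constant: lambda = ln 2 / T_half
rng = random.Random(42)                  # a PRNG stands in for the real physics

# Individual waiting times are unpredictable; only the ensemble is
# predictable, with mean interval 1/lambda = half_life / ln 2.
intervals = [rng.expovariate(lam) for _ in range(100_000)]
sample_mean = sum(intervals) / len(intervals)
print(abs(sample_mean - 1 / lam) < 0.2)  # -> True: sample mean is close to 1/lambda
```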
------------------------------
From: [EMAIL PROTECTED] (Jim Reeds)
Subject: Re: Card shuffling
Date: Sun, 19 Mar 2000 02:47:15 GMT
Forgive me for quoting at length.
In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]>
writes:
..
|>
|> I am afraid that I don't yet understand your points. First, do you
|> think that a human shuffling a deck can be comparable to shuffling
|> with an algorithm? I mean there could be quite big variations
|> in his performance from one shuffling to another, so it would
|> at least need quite an amount of experimental data to determine
|> any probability distribution. Second, do I understand correctly
|> that you count the frequencies of the occurence of each possible
|> permutation? For simplicity, let's say there are three cards
|> numbered 1, 2 and 3. Thus there are 6 permutations. How is the
|> experiment to be carried out? Does one always start from 1-2-3
|> and check what permutation one gets after the shuffling process?
|> It is my feeling of the gut that if one carries out a sufficiently
|> large number of such shuffling, then the frequencies will arbitrarily
|> well approach a uniform distribution. Well, I can see that for any
|> fixed number n of shuffling, the person with poor performance will
|> have a less flat frequency distribution than the one with better
|> performance. But that difference would decrease as n increases.
|> For extremely large n, I surmise that the distribution will always
|> be very flat, so that the L1 distance will be infinitesimal. Now
|> given, say, experimental data of 100 persons with their corresponding
|> frequency distribution curves for a wide range of values of n, I
|> don't yet see how you are going to define the 'effectiveness' of
|> shuffling from the L1 distances. In particular, should one choose
|> a certain fixed n value? But then why that particular choice?
|> Would you be kind enough to explain a bit more? Many thanks in advance.
You are addressing several questions all at once, getting me mixed up in the
process. I introduced the total variation distance as one way of measuring
how effective a given shuffling method is, by concentrating on the probability
distribution it induces on the set of permutations and measuring how far
that distribution is from another "target" distribution. A completely
separate question is how to measure or model the actual shuffling behavior
of a given person. No naive experiment of the form "let him shuffle the
deck 10 times, 52! times over, and count up how often each permutation came
up, and run a chi-squared test of what we got against flat random: with
10 expected in each cell we are in statisticians' heaven" will work, because
52! is such a large number. Yet another question is the purely mathematical
one of whether successive, statistically independent applications of a given
random shuffling method will give results close to a randomly chosen
permutation. In math talk, we are asking for the rate of convergence of
a random walk on the group of permutations towards its limit distribution,
and whether that limit distribution is uniform. This is the topic of the
Diaconis-Bayer paper I mentioned before, for one particular random walk,
the SGR model of riffle shuffling.
So, in principle, determine a better model for real riffle shuffling than the SGR
model, verify by experiment that it describes what people do better than SGR
does. Then somehow repeat the Diaconis--Bayer analysis for the new model.
You will find that the TV distance between flat random and n-fold applications
of your model goes down like a * b^n for certain numbers a and b. Decide how
close to flat random you want to be; say you want your TV distance to be 1/1000
(that is, you are saying you don't care about deviations from perfect randomness
that take sample sizes in excess of 1,000 shuffles to detect). Now solve
the equation a b^n = 1/1000 for n. The solution tells how many times you
have to shuffle to make the deck effectively random.
Summary: effectiveness of card shuffling is studied by the theory of random
walks on groups, in particular, by the rate of convergence to uniformity.
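The last step is a one-liner; the a and b below are illustrative placeholders, not the fitted Bayer-Diaconis constants:

```python
import math

def shuffles_needed(a, b, epsilon):
    """Smallest n with a * b**n <= epsilon, i.e. the number of shuffles
    needed to bring the TV distance under epsilon (requires 0 < b < 1)."""
    return math.ceil(math.log(epsilon / a) / math.log(b))

# Illustrative decay parameters (placeholders, not fitted values):
a, b = 2.0, 0.5
print(shuffles_needed(a, b, 1 / 1000))   # -> 11
```

With these placeholder values, 2 * 0.5^11 ≈ 0.00098 ≤ 1/1000, while n = 10 leaves the distance just under 0.002.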
--
Jim Reeds, AT&T Labs - Research
Shannon Laboratory, Room C229, Building 103
180 Park Avenue, Florham Park, NJ 07932-0971, USA
[EMAIL PROTECTED], phone: +1 973 360 8414, fax: +1 973 360 8178
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: 64-bit Permutations
Date: 19 Mar 2000 03:10:03 GMT
In a previous article, "Douglas A. Gwyn" <[EMAIL PROTECTED]> writes:
>Why is it that nobody else had trouble understanding the question?
Frankly, I do not think I have been arguing that my original interpretation
followed with logical necessity from the question. I did not even assert that
my interpretation was the most coherent, most probable or whatever you might
call it. (I have in fact even admitted that it was a bit far-fetched.)
What I do assert is that my interpretation was not wrong, neither explicitly nor
implicitly contradicted by the question, and not completely unlikely.
On the contrary, I assert that analyzing the set of bijective encrypt
functions E_k as a permutation group is very fruitful regardless of whether
you are constructing a cipher or are trying to break one.
----- Posted via NewsOne.Net: Free Usenet News via the Web -----
----- http://newsone.net/ -- Discussions on every subject. -----
NewsOne.Net prohibits users from posting spam. If this or other posts
made through NewsOne.Net violate posting guidelines, email [EMAIL PROTECTED]
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: SHA-1 as a stream cipher
Date: Sun, 19 Mar 2000 03:27:15 GMT
Michael K <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Does anyone have any thoughts on using SHA-1 as a stream cipher?
>
> I'll clarify...
>
> Let's say you have a block of data to encrypt....
>
> D = data to encrypt
> K = hash of user input password
>
> DO
> -------------------------------------------
> xor the next 160 bytes of D with K (one byte at a time)
> -------------------------------------------
>
> then...
>
> K = hash of K
>
> LOOP (until data is completely encoded)
>
> Any thoughts/ideas would be great.
What is the period of this generator?
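For reference, the quoted construction amounts to the keystream generator below (a sketch; note that one SHA-1 output is 20 bytes, i.e. 160 bits, so each XOR step naturally covers 20 bytes of D). The period question is then: how long until the iterated map K -> SHA1(K) revisits a value?

```python
import hashlib

def sha1_keystream(password: bytes):
    """Keystream from the quoted scheme: K0 = SHA1(password),
    then K_{i+1} = SHA1(K_i); yield the 20-byte digests forever.
    The state is finite (160 bits), so the sequence must eventually
    cycle, but the actual cycle length is not known."""
    k = hashlib.sha1(password).digest()
    while True:
        yield k
        k = hashlib.sha1(k).digest()

def sha1_xor(data: bytes, password: bytes) -> bytes:
    """XOR data against the keystream, 20 bytes at a time."""
    out = bytearray(data)
    stream = sha1_keystream(password)
    for off in range(0, len(data), 20):
        block = next(stream)
        for i, b in enumerate(block[: len(data) - off]):
            out[off + i] ^= b
    return bytes(out)

msg = b"attack at dawn"
ct = sha1_xor(msg, b"secret")
print(sha1_xor(ct, b"secret"))    # XOR is its own inverse -> b'attack at dawn'
```

Since the same password always produces the same keystream, reusing a password across messages leaks the XOR of the plaintexts, independently of the period question.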
Tom
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Opinions?
Date: Sun, 19 Mar 2000 03:29:06 GMT
Marc Howe <[EMAIL PROTECTED]> wrote in message
news:_pWA4.1819$[EMAIL PROTECTED]...
> There is nothing that is truly random, correct?
>
> Marc
Assume the definition of 'random enough' as 'random'. Well, that's kinda
recursive. In other words, what you think of as 'random' is simply so
unpredictable that it's unimaginable what the precursor state was...
etc
Tom
------------------------------
From: "ÐRëÐÐ" <[EMAIL PROTECTED]>
Crossposted-To:
alt.security.pgp,comp.security.pgp.discuss,alt.security.scramdisk,alt.privacy
Subject: Re: Concerning UK publishes "impossible" decryption law
Date: Sun, 19 Mar 2000 13:22:46 +1100
An electromagnet is not so hard to make or get hold of. It's harmless
unless power is given to it, and when powered it can easily be strong
enough to destroy data on the disks. It's also harmless to organic matter,
unless you make a super powerful one. But of course, as someone also
pointed out, they won't power on your PC; they will take it apart and use
the HDs on their analysis machine.
--
"Oh GOD, Please save me from your followers"
more of my ramblings can be found at http://oakgrove.mainpage.net
"Man is a part of nature, not apart from nature"
anti spam, remove 'nospam' to mail me
ICQ:16544782
"Lincoln Yeoh" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Mon, 13 Mar 2000 00:14:59 +1100, "ÐRëÐÐ" <[EMAIL PROTECTED]> wrote:
>
> >I would probably go along with an electro-magnetic pulse set-up through a
> >printer port, that if an incorrect login is done, the port turns on the
> >magnet field over the top of the hard drives.
>
> Are you sure you can generate a magnetic field strong enough to wipe data
> from the top of a harddrive?
>
> As far as I know, it's not so easy, especially nowadays.
>
> But I am very certain turning it to slag makes it very difficult.
>
> Cheerio,
>
> Link.
> ****************************
> Reply to: @Spam to
> lyeoh at @[EMAIL PROTECTED]
> pop.jaring.my @
> *******************************
------------------------------
From: "Steve A. Wagner Jr." <[EMAIL PROTECTED]>
Subject: Re: SHA-1 as a stream cipher
Date: Sun, 19 Mar 2000 00:12:24 -0800
This question was posted recently by someone else. You suggest
using SHA in OFB mode: output feedback, where an algorithm keeps
encrypting its state and uses that to XOR with the plaintext.
Here's a better way:
1. Begin with a 20-byte random IV.
2. hash = SHA(IV + key)
3. ciphertext = [XOR hash with first plaintext block (20 bytes)]
4. hash = SHA(ciphertext + key)
5. And so on until end of file.
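The five steps can be sketched directly (a sketch of the proposal as stated, not an endorsement of its security; '+' is read as concatenation):

```python
import hashlib
import os

def saw_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    """Steps 1-5 as stated: h = SHA1(IV + key), XOR h with each
    20-byte plaintext block, then h = SHA1(ciphertext_block + key)."""
    assert len(iv) == 20
    out = bytearray()
    h = hashlib.sha1(iv + key).digest()
    for off in range(0, len(plaintext), 20):
        block = plaintext[off:off + 20]
        out += bytes(p ^ k for p, k in zip(block, h))
        h = hashlib.sha1(bytes(out[off:off + 20]) + key).digest()
    return bytes(out)

def saw_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    """Decryption re-derives each keystream block from the previous
    ciphertext block, just as CFB-style modes do."""
    out = bytearray()
    h = hashlib.sha1(iv + key).digest()
    for off in range(0, len(ciphertext), 20):
        ct = ciphertext[off:off + 20]
        out += bytes(c ^ k for c, k in zip(ct, h))
        h = hashlib.sha1(ct + key).digest()
    return bytes(out)

iv = os.urandom(20)                   # step 1: random 20-byte IV
ct = saw_encrypt(b"the quick brown fox jumps", b"key", iv)
print(saw_decrypt(ct, b"key", iv))    # -> b'the quick brown fox jumps'
```

Because each keystream block depends on the previous ciphertext block, identical plaintexts under the same key but different IVs produce different ciphertexts, unlike the plain counter-free scheme quoted above.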
--SAW
Michael K wrote:
> Does anyone have any thoughts on using SHA-1 as a stream cipher?
>
> I'll clarify...
>
> Let's say you have a block of data to encrypt....
>
> D = data to encrypt
> K = hash of user input password
>
> DO
> -------------------------------------------
> xor the next 160 bytes of D with K (one byte at a time)
> -------------------------------------------
>
> then...
>
> K = hash of K
>
> LOOP (until data is completely encoded)
>
> Any thoughts/ideas would be great.
>
> Thanks,
>
> Mike K
------------------------------
From: [EMAIL PROTECTED] (Steve K)
Subject: Re: Opinions?
Date: Sun, 19 Mar 2000 03:44:32 GMT
On Sun, 19 Mar 2000 01:52:58 GMT, "Marc Howe" <[EMAIL PROTECTED]>
wrote:
>There is nothing that is truly random, correct?
>
>Marc
Philosophically, it's a debatable issue. Einstein said that God does
not play dice with the universe. Most modern physicists are certain
that quantum events are uncertain. Take your pick.
For practical applications, proof of randomness is not required. All
you need is something that produces a result that defies advance
prediction, and contains no discernible patterns other than the
obvious constraints of its range of outputs. There are numerous
sources of "random" noise that pass these tests flawlessly. Most rely
on turbulence of some sort-- ping pong balls in an air jet, dice
tumbling about, and the sound of running water over a microphone are
examples.
On the other hand, you can't make random numbers with math. By
definition, any process that tries to do so, can be duplicated exactly
anywhere, by anyone who knows the method used and the initial input.
This also rules out making random numbers with a computer, unless the
computer is getting input from a natural source of noise.
Whether the use of a "random seed" in conjunction with non-random
numbers is good enough for secure cryptography is definitely a point
to ponder. It may be difficult to predict the system timing events
that are used by most cryptosystems to get the fresh data to re-hash
the random seed data with every time a new key is generated, but given
a large sample of traffic from one system, I suspect that regularities
in this data may be of some assistance to the analyst.
I wonder what other folks think about this?
Steve
---Continuing freedom of speech brought to you by---
http://www.eff.org/ http://www.epic.org/
http://www.cdt.org/
PGP key 0x5D016218
All others have been revoked.
------------------------------
From: "Marc Howe" <[EMAIL PROTECTED]>
Subject: VB Property help with Twofish algorithm: PLEASE HELP
Date: Sun, 19 Mar 2000 04:06:12 GMT
I am new to VB and am trying to understand how to use the "Property" method.
Below is some code, in its own class, that I need to access from my main
form. Do I need to use a Get or Let statement? How would I do that?
I really do not understand these property functions.
I am trying to use the Twofish algorithm to encrypt/decrypt some text and
am having difficulty with this ONE area... I think I am ready for the rest,
though. I tried the VB discussion groups and they were helpful, but what I
really need is a demo of how to use the property code. Any help is GREATLY
appreciated!
The way I understand the following (according to the VB discussion groups)
is that it is a way to expose a whole host of variables to the main program
without the main program getting exclusive rights to them. However, I still
don't understand how to make this work.
BEGIN CODE:
Public Property Let bKey(Optional ByVal lMinKeyLength As KeyLengths, ByRef bKey() As Byte)
    Dim lKeyLength As Long
    On Error GoTo ErrorHandle
    If boolIsArrayInit(bKey) = False Then Exit Property
    lKeyLength = (UBound(bKey) + 1) * 8
    If lKeyLength < lMinKeyLength Then ReDim Preserve bKey((lMinKeyLength \ 8) - 1)
    If lKeyLength > 256 Then ReDim Preserve bKey(31)
    If lKeyLength > 192 And lKeyLength < 256 Then ReDim Preserve bKey(23)
    If lKeyLength > 128 And lKeyLength < 192 Then ReDim Preserve bKey(15)
    If lKeyLength > 64 And lKeyLength < 128 Then ReDim Preserve bKey(7)
    tSessionKey = makeKey(bKey)
    Exit Property
ErrorHandle:
End Property
END CODE:
If anyone is interested, the Twofish implementation was converted by
someone else; I downloaded it from Planet Source Code. It is supposed to
make a .dll, but I was trying to include the class and module in my .exe.
If you'd like, the link is below; just do a search for Twofish and it'll
pop up.
http://www.planet-source-code.com/vb/
Thank you very much.
Peace, Honor & Respect,
Marc
------------------------------
From: [EMAIL PROTECTED] (Nemo psj)
Subject: Re: Enigma encryption updated (Adam D)
Date: 19 Mar 2000 04:07:11 GMT
LOL hey ;)
------------------------------
From: [EMAIL PROTECTED] (Nemo psj)
Subject: Re: Enigma encryption updated (Adam D)
Date: 19 Mar 2000 04:09:33 GMT
<<This can't even be unambiguously decrypted (the lowest bit will be
wrong for 1 in 256 characters).>> This statement here shows that you have no
idea how the encrypter works :)
Anyhow.....
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************