Cryptography-Digest Digest #196, Volume #12      Mon, 10 Jul 00 20:13:01 EDT

Contents:
  Re: SecurID crypto (was "one time passwords and RADIUS") ("Joseph Ashwood")
  Re: A thought on OTPs (Mok-Kong Shen)
  Re: DES Analytic Crack (Mok-Kong Shen)
  Re: Proposal of some processor instructions for cryptographical    (Mok-Kong Shen)
  Re: A thought on OTPs (Mok-Kong Shen)
  Re: Proposal of some processor instructions for cryptographical  (Mok-Kong Shen)
  Re: SCRAMdisk or PGPdisk? ("Joseph Ashwood")
  Re: One plaintext, multiple keys ("Joseph Ashwood")
  Re: Proposal of some processor instructions for cryptographical   (Mok-Kong Shen)
  Re: Steganographic encryption system (Rex Stewart)
  Re: Proposal of some processor instructions for cryptographical  (Edward Wolfgram)

----------------------------------------------------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: SecurID crypto (was "one time passwords and RADIUS")
Date: Mon, 10 Jul 2000 16:06:11 -0700

[large amounts of snip involved]

[snip stuff that's almost pure marketing regurgitation]
> If you are a current or (serious) potential customer, RSA  has always
> been willing to give you -- or your designated crypto consultants --
> access to the SecurID algorithm under an NDA. So, while Brainard's hash
> has never been published, it has, over the years, been studied in depth
> by a sizable and well-credentialed community of corporate and government
> cryptographers.

Of which you seem to be the only person willing to step forward and claim
even a fair amount of knowledge. Although it seems quite likely that Rivest
has examined the algorithm, his (unreleased) review is not the final word.
As no other entity has seen fit to step forward and state that they have
personally examined it, I personally still do not consider it well
reviewed.

>
> So many prominent cryptographers have had occasion to evaluate the
> SecurID hash over the past 15 years -- legally, under NDA; or,
> illicitly, on gray market work benches -- that it was probably
> inevitable that the algorithm would begin to be mentioned more frequently
> in professional crypto circles. Good hashes are fairly rare, after all.
[more on this later]

> Published details about the SecurID hash are sparse.

I'd call them rare rather than merely sparse.

[snip 256 dependent operations]
[foolish claim that this means it can't be sped up]
That is simply not true. Given the rather small size of the value involved
(24 bits, according to you), it is now entirely possible to store all of
these values on a hard drive; the lookup cost is then exactly that of a
search of a tree with 2^24 nodes, which is very fast.
[snip look how great I am]

[snip look how great SecurID is]
[snip more marketting]
[snip barest description of possibilities, which itself leads to better
analysis than I gave before]
Given the information you've stated, it seems reasonable that there exist
only 2^24 unique possible outputs of each card, and that each card must
present a unique number at each instant (unique relative to all other
SecurID tokens in existence). This alone begs for the application of even a
modest personal computer, simply because the maximum matrix for generic
elimination is only 2^24 in size; however, I suspect that it is believed
that more than 2^24 outputs will need to be known to solve the equations.

Further, from this I gather the following: for any
X = 64-bit number
Y = 64-bit number
T = 24-bit number
U = 24-bit number
with X != Y and T != U, it must hold that
F(X,T) != F(Y,T)
F(X,T) != F(X,U)

That does not sound even vaguely like a strong encryption algorithm, which
would mean that typical cryptographic attacks are likely to be possible.

[snip, look how I can try to make you think I know what I'm talking about by
trying to compare it to a real cryptographic algorithm RC5]

> (Elsewhere, Mr. Ashwood recently described the NTRU PKC a piece of
> "shit" which doesn't deserve the patent it just received -- a crisp
> cryptographic evaluation which I suspect his professional and personal
> friends may find opportunity to recall often in the years to come;-)
Now we get to the part that is the actual reason I replied. I think you have
mistaken me for someone else. A quick search of www.deja.com helps assure
me: the only occasions where my name and NTRU appear in the same message are
the one I'm replying to, one where I state that with finitely sized keys
brute force is always possible, and one reply to my brute-force comment.
Searching for my name and the word "shit" shows nothing written by me. In
fact, searching for NTRU and "shit" turns up only your message. If your
abilities at journalism are anywhere near your abilities at writing to
newsgroups, I'm sure you've heard what I have to say quite often.

Either provide some evidence, or I would appreciate a retraction of your
statement. However, since that would be off-topic for sci.crypt, having it
inline in the rather inevitable response you're going to make to this
message will be more than suitable.

> Here, Mr. Ashwood wrote:
>
> >> Honestly I think the SecurID system needs some work. It's a
> >> great idea, and a rather good implementation. But a massive
> >> amount has been learned in the last decade about security,
> >> and the SecurID cards don't take advantage of this.
>
> I can't argue with his last line, but Mr. A judges the SecurID as
> somehow, vaguely, deficient, solely by virtue of its longevity in the
> marketplace. He never bothered to ask the real question: does the SecurID
> meet the requirements and demands of the market efficiently, effectively,
> and securely?

I was merely judging that some of the assumptions made behind the scenes
may no longer hold true. That is a far cry from making judgements on the
security of the final system, although it does hint that some recent
research into it should at least be published.

>
> (We've learned a lot about metallurgy since we began to fabricate
> titanium alloys -- but so what?)

Perhaps we should re-evaluate the efficiency of the titanium alloys, oh
wait, that already gets done.

>
> It is, nonetheless, widely expected that a new SecurID -- probably with
> a 128-bit secret key -- will be phased in beginning next year.  RSA has
> not made any public commitments, but rumors also suggest that the next
> generation of SecurIDs may carry RSA's AES candidate, RC6.

That would seem like a reasonable concept. I would hope, though, that they
make a provision for themselves to move away from it, just in case RC6 is
proven to be not as secure as currently believed. I'm sure the press release
would say something about compatibility with AES, and how unifying the
community will help everyone.

> Ummm. Methinks Mr. Ashwood hasn't bothered to find out much about the
> "antiquated" crypto he is pontificating on. Even basic widely-published
> stuff, like the fact that SecurID uses a hash rather than a symmetric
> cipher, escapes him.

Quite the contrary: I have never stated that it uses a symmetric cipher.
What I have stated is that the function they use can certainly be massaged
to appear as a symmetric cipher, just as SHA-1 can be made to look like a
relatively non-invertible symmetric cipher by taking the internal state to
be the key and fixing the input to a specific size.
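
As a rough illustration of that idea -- a minimal sketch only, emphatically
not the SecurID/Brainard hash; the 64-bit secret, the time format, and the
truncation to 24 bits are my own assumptions for the example -- a hash can
play the role of a keyed token-code generator:

    import hashlib

    def token_code(secret64, minute):
        # Hash a 64-bit secret together with the current time step and
        # truncate to 24 bits -- a generic keyed one-way function, not the
        # actual SecurID algorithm.
        assert len(secret64) == 8
        digest = hashlib.sha1(secret64 + minute.to_bytes(4, "big")).digest()
        return int.from_bytes(digest[:3], "big")   # 24-bit "card display"

    # Fixing the secret and stepping the time gives the code stream;
    # fixing the time and varying the secret behaves like a truncated,
    # non-invertible transformation keyed by the internal state.
    print("%06x" % token_code(b"8bytekey", 12345678))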

> "Two-factor authentication," which Mr. Ashwood doesn't mentioned, is
> the classical definition for "strong authentication."

Your point seems to be to misinterpret what I have said. One can create
two-factor authentication that is far from strong, say by authenticating the
user's possession of a (US) nickel and knowledge of the letter A. Strength
lies not in the pretty publicity; strength lies in what it takes to create a
working false authentication.

>
> IMNSHO, it is subjective and downright eccentric to suggest that -- for
> this class of user authentication devices -- some crucial distinction
> between "moderate" and "high" security lies in giving an on-site
> administrator the option to "update" or change a token's internal
> secret.

Except for the very real fact that, with the numbers predetermined, it
becomes possible for a shipment to be delayed while the numbers are
recorded, and also possible for RSA to maintain a record of each card's
owner. If the numbers are only known to the end site, that threat model
becomes far more restricted.

>
> There are pros and cons here,
I agree on that

> but I suspect that few security experts
> would agree with Mr. A that a two-factor HHA which has to be programmed
> at the local site, and can be repeatedly reprogrammed, has any big
> advantage over a token with a factory-loaded secret, and a limited and
> preset lifespan.

In generic terms, no. However, I have problems with puny algorithms that
have not been subject to effective verification (at least none that is
publicly known), and which themselves take values small enough to be
ineffective in modern terms. You have made statements that are patently
false, and statements that are false for any significant purpose. To
continue, your statement that 64-bit security is good enough is far from
supported by reality. RC5 is not suitable to be run on Alpha processors, the
processor which did a very significant portion of the work against DES. DES,
a 56-bit algorithm, has now been reduced to a matter of 22 hours, making a
64-bit key a matter of 256*22 hours, or about 234 days -- significantly less
than the minimum 2-year life expectancy of a SecurID token.
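
For what it's worth, here is the scaling arithmetic behind that 234-day
figure, taking the 22-hour DES result as given and assuming (my assumption)
that the attacker's effort scales linearly with the size of the key space:

    des_hours = 22                     # 56-bit DES key recovered in ~22 hours
    extra_bits = 64 - 56               # 8 additional key bits
    hours_64 = des_hours * 2**extra_bits
    print(hours_64, "hours, or about", hours_64 // 24, "days")
    # -> 5632 hours, or about 234 days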

> [RSA (predictably, I think;-) offers product up and down this line:
> software versions of SecurID for PCs and Palm Pilots; the classic
> SecurID token; the SecurID 1100 in a smartcard form-factor (which fits
> Mr. A's "high-security" model above); and various RSA/Gemplus
> smartcards. The smartcards all feature key and credential storage:
> <http://www.rsasecurity.com/products/>]
I just looked; it appears to me that all RSA offers is reboxed versions of
the same old SecurID. The SecurID token will NEVER be the exclusive portion
of a high-security model by my standards. The SecurID token does not perform
authentication in a way that keeps the local machine from replaying it, and
the SecurID does not offer any encryption services. It seems to be you, and
only you, who believes that the SecurID token offers a level of security
above midrange.

>
> Obviously, any "high security" network will also require link or
> network encryption to protect the packets (and any authentication
> service) from eavesdroppers and session hijackers -- but you don't need
> a smartcard or Mr. A's "full hardware solution" (whatever that may be;-)
> to set up network crypto.
What a surprise -- you actually admit that you have no clue what a full
hardware solution is. It generally consists of a smart card (not just a
reformed SecurID); that smartcard uses a STRONG AUTHENTICATION METHOD (see
SRP for one possible variant) to authenticate itself in such a manner that
the host system, the server, and intermediate systems cannot determine the
information needed to mount an attack. From there, encryption keys are
established that may be used at will. Gee, that sounds like it would exceed
your foolish idea of strong security.

> some similar personal credential repository. (Personally, I like
> Ashwood's idea of an ephemeral key, a la PK-INIT -- but I note that the
> idea didn't seem to have legs in the commercial Kerberos market.)

Quite the contrary: I am, as we speak, working on a project that will do
just that, and there are also various vendors that use smartcards for
cryptographic logins to Windows 2000 via Kerberos and the PK-INIT draft
standard. Please also note that the RSA SecurID card cannot be used in such
an environment except with the base Kerberos protocol.

>
> I guess it is unavoidable that Mr. A's model of a security spectrum
> that runs from passwords, to HHA tokens, and up to a smartcard-based
> "full hardware solution," is awkward and oversimplified -- despite the
> general structure being fairly conventional.

It is amazingly oversimplified; there are far too many variables to
completely reduce the environment to low, medium, and high. But it's fairly
effective at getting the point across (you certainly understood that I feel
the SecurID offers security sufficient for many environments).

>
> There is obviously a lot unspecified. No network admin, for instance,
> would have difficulty imagining a scenario in which Mr. A's
> all-smartcard environment would be an unmitigated security disaster;-)

I never said a full smartcard environment, I said full hardware; there is a
very significant difference (it's very hard to fit a server on a smartcard).

>
> Six or seven years ago -- after the SecurID and its competitors had
> been judged fundamentally sound by the Infosec pros -- the competitive
> market requirements for HHAs shifted to focus on the administrative and
> cross-realm functionality of the authentication server. (Even today,
> while the SecurID seems to maintain an ease-of-use advantage, the
> relative security of the various HHA tokens is pretty much a wash in the
> marketplace.)

I absolutely agree that the offered security seems to be roughly equivalent;
having never administered any of them, I can't speak to the ease of use.

>
> A smartcard, of course, is even more obviously just a tail attached to
> an elephant of unknown size and disposition. The smartcard's function
> and integrity can be attacked from many different places (e.g., the
> reader, the CA) in the larger infrastructure.

Not if the smartcard is properly and securely designed. The CA will know
only the public key, the reader will have no more information than that, the
computer the reader is attached to will know no more than the public key,
etc. There are levels of smart cards that you do not seem to know about.

>
> Mr. Ashwood is quite right to imply that the security continuum
> encompasses both apples and apple orchards.

With a few orange trees for spice :)


>  with no circuit
> connection between the token and the network -- offers an elegant
> simplicity that many may regret losing.

But more will probably be glad they're gone, at least from a security
standpoint.

> There are, IMNSHO, several alternative ways -- bribery, among the most
> obvious -- by which an attacker could probably obtain the SecurID hash
> far more quickly, cheaply, and easily than a DPA attack.

I'm not so sure about that. There were some published results early in the
AES competition that quickly recovered keys from smartcards, perhaps in less
time than it would take to get the key from the person, not even including
the time to bribe/brainwash/torture.

>
> There are, after all, tens of thousands of copies of the Brainard hash
> distributed all over the world in RSA software. Last month, the author
> of an article on SecurID in 2600, the Hacker Quarterly, offered anyone
> access to an illicit copy of ACE/Server.

But because it is an illicit copy, there will be no published results. If,
however, someone takes that software, reverse-engineers the hash out of it,
and publishes it here, we will treat it as a potential SecurID hash, just as
was done with the claimed RC4.

> A SecurID token is purposely designed so that it is very difficult to
> "speed up" the rate at which it generates new token-codes. If it takes a
> week to get 300,000 token-codes, we might reasonably hope that the
> SecurID user would report the loss or theft of his SecurID, and the
> token would be made useless, long before the bad guys set to work.)
So instead of throwing 1 system at it you throw 200, for 60 million a week.
Or you build custom hardware (as the EFF did for DES).
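
That is plain linear scaling, on my assumption that the code generation can
be run in parallel in software once the hash is known:

    per_unit_per_week = 300_000        # figure quoted above
    units = 200
    print(units * per_unit_per_week)   # 60,000,000 token-codes per week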

> Unfortunately, I haven't seen anyone manage SHA-1 in less
> than 600 bits of RAM, which probably makes it impractical for a token
> like the SecurID.

I'd call anyone who claimed to do it with less than 160+512 bits a fool,
simply because the output must be 160 bits and the input must be 512 bits,
leaving a minimum memory footprint of 672 bits. Which, by the way, is
actually quite cheap, costing only 4032 transistors.
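
Spelling that arithmetic out (the 160- and 512-bit figures are just SHA-1's
chaining state and block size; six transistors per bit of SRAM is my
assumption):

    state_bits = 160                   # SHA-1 chaining state / output
    block_bits = 512                   # SHA-1 input block
    total_bits = state_bits + block_bits
    print(total_bits, "bits,", total_bits * 6, "transistors")
    # -> 672 bits, 4032 transistors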

>
> Also, the Birthday Paradox (which helps us discover collisions, more
> than one input which gets hashed down to the same token-code) is really
> irrelevant here. Collisions are useless to someone attacking an
> ACE/SecurID environment, since without possession of a token's secret
> key, they are unpredictable.

Actually they are quite useful: a collision (or non-collision) of even a
single bit eliminates a class of possible keys. Once one finds a way to sort
the keys into these classes efficiently, it becomes quite simple to find the
inputs. Thus a hash collision is very important; also, if the constraints I
recited above are correct, then a single hash collision results in a broken
card.
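
Here is a toy sketch of that elimination argument. The keyed function, the
key size, and the code size below are stand-ins of my own, NOT the SecurID
hash; the point is only that every observed code discards the candidate keys
that disagree with it:

    import hashlib

    def toy_code(secret, minute):
        # stand-in 24-bit keyed function, not the real algorithm
        data = secret.to_bytes(8, "big") + minute.to_bytes(4, "big")
        return int.from_bytes(hashlib.sha1(data).digest()[:3], "big")

    true_secret = 0xBEEF
    observed = [(t, toy_code(true_secret, t)) for t in range(3)]

    # Each (time, code) pair keeps only the candidates that reproduce it;
    # a 24-bit code cuts the surviving key space by roughly 2^24 per
    # observation.
    candidates = range(2**20)          # tiny demo key space
    for t, code in observed:
        candidates = [k for k in candidates if toy_code(k, t) == code]
    print(candidates)                  # only the true secret survives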

> Not a symmetric cipher, a hash -- whose strength and rep will probably
> survive Mr. Ashwood's doubts.

I already addressed this point above.

> /> It's safe to
> /> assume that the secret is no larger than 64-bits, making it
> /> brute-forcable.
>
> Humbug. I think the Distributed.net effort, noted above, is probably
> the best answer to that.

Also addressed above: try 234 days against a proper opponent.

>
> /> There are probably attacks against it that
> /> will be much more effective than brute force, but the
> /> secrecy of the algorithm, combined with the security of
> /> hardware makes it difficult to analyze.
>
> A harsh, if vague, judgment -- particularly from someone who has so
> demonstrably proven that he knows very little about the cryptosystem he
> so glibly suggests is "probably" so vulnerable.

Actually, I have no reservations about saying that every encryption
algorithm can be attacked, and almost every one of them can be attacked
faster than brute force.

>
> /> For reference I
> /> would refer everyone to the analysis of the clipper chip
> /> that was performed before the algorithm was made public
> /> (limited to little more than disabling the LEAF), and the
> /> analysis done afterwards, leading to significant analysis.
>
> I'm at a bit of a loss to find myself challenging someone who claims to
> know exactly what happened, and what didn't happen, in the NSA's
> internal development and testing of Clipper: the "key-escrow" protocol
> that the US spooks hoped to force all Americans to adopt.

I was not using it as a reference in the fashion that you seem so eager to
interpret. I was using it as a reference showing that even very good
cryptanalysts can have their once-secret, supposedly super-secure algorithm
receive quite a harsh reception, where it is cryptanalyzed and its security
reduced.

>
> Not to put too fine a point on it, I suspect that Mr. A's cartoonish
> description of the NSA's pre-publication "analysis" of Clipper -- for
> all the crisp certainty and detail -- is either a misstatement, or a
> figment of faulty memory or fertile imagination.

There has been no evidence that they did not examine it, that they did not
feel that it was secure, or that they were not surprised when the algorithm
was successfully reduced.

> I beg the indulgence of the newsgroup for the length of these comments.
> These are interesting issues, and it was a quiet Sunday eve on the Net.
No problem; it's actually quite unusual that we see someone with so much to
say, and it's always good to hear interpretations of events that are new.
                    Joe



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: A thought on OTPs
Date: Tue, 11 Jul 2000 01:22:24 +0200



Bryan Olson wrote:

> Mok-Kong Shen wrote:
> >
> > Bryan Olson wrote:
> >
> > > So the answer yet again: there is no test that will
> > > always find dependence when it exists.
> >
> > My point has been that it is an interesting fact that,
> > while there is no generally applicable test for
> > independence
>
> Now you say your point has included the answer to
> the question on which you claimed to have received
> no definite answer?

I posted a couple of times in the past asking for a practical, genuine test
of independence. Nobody gave (in those other threads) a clear-cut answer of
'NO'. That's why in this thread Douglas A. Gwyn started with the following
comment (I mean he would otherwise not have written this):

    I think you got answers, but just didn't like them.
    "Independence" of events is a theoretical notion used in models.
    It is not directly testable, but its consequences are testable
    with the usual statistical tools of hypothesis testing.

In fact, he continues to argue that independence can be tested
indirectly through correlation with a chi-square test. See my most
recent response to him. (There I responded that a test of correlation
IS a test for correlation, NOT a test for independence.)

M. K. Shen




------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: DES Analytic Crack
Date: Tue, 11 Jul 2000 01:22:03 +0200



Volker Hetzer wrote:

> Mok-Kong Shen wrote:
> > What do you mean by your second approach? Is that an trial and
> > error method? (If yes, what guides you in the search?) Would you
> > please give literature references on the progress you mentioned?

> This kind of progress came from formal verification techniques.
> Unfortunately right now I don't have web access, but you could check
> amazon.de (or a search engine) for keywords like "binary decision
> diagram". "Graphical representation of boolean functions" and the like.
> The technique itself is already rather old. The point is that you can
> find a binary decision tree that is unique to each boolean equation.
> Kind of a canonical normal form.
> Once you got that (which is the part that still can be NP complete
> in a few bad cases) solving the equation and several other things
> become trivial.

I misunderstood you. You did mention BDDs, but I didn't identify that with
your second approach. A BDD, if I don't err, is a special form of boolean
optimization which electrical engineers have used to simplify their
circuits. So, for example, the equations describing the S-boxes can be
optimized with it. But the formal equations describing DES in terms of bits
as variables are, I guess, even after optimization still beyond the reach of
current resources to handle, in time if not in storage.
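
As a small aside, the core idea behind a BDD is just repeated Shannon
expansion with merging of equal cofactors; the following is a minimal sketch
(fixed variable order, no sharing of isomorphic subtrees, so it is a reduced
decision tree rather than a real BDD package):

    def build(table):
        # Shannon-expand a truth table (a list of 0/1 of length 2^n) into
        # a nested (low_subtree, high_subtree) structure, collapsing a
        # node whenever its two cofactors are identical.
        if len(table) == 1:
            return table[0]
        half = len(table) // 2
        lo, hi = build(table[:half]), build(table[half:])
        return lo if lo == hi else (lo, hi)

    # 3-input majority function, inputs enumerated with x1 as the MSB
    maj_table = [int(bin(i).count("1") >= 2) for i in range(8)]
    print(build(maj_table))            # ((0, (0, 1)), ((0, 1), 1))

A real BDD package additionally hash-conses the nodes so that equal
subfunctions are stored only once, which is what makes the representation
canonical and (often) compact.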

M. K. Shen


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical   
Date: Tue, 11 Jul 2000 01:22:31 +0200



Thomas Womack wrote:

> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote
>
> > Maybe some support for multi-precision integer arithmetics would also be
> > advantageous, not only for crypto but also for other scientific
> applications.
>
> What additional support do you need? x86 has (rather awkward) 32x32 -> 64;
> Alpha has commands for top and bottom of a 64x64 product, ARM has top and
> bottom of 32x32 (though this is really for doing 16.16 FP work), P4 can do
> two 32x32->64 multiplies with one instruction.
>
> Or are you more thinking of the idea of having a microcoded engine which
> will go off and do hundreds of memory loads, operations and stores with a
> single instruction - a vector machine, basically.

I admit that my idea was vague, because I haven't attempted to code a
package that optimally computes with multi-precision integers. There are, as
far as I am aware, other techniques in use than the method one commonly
applies when doing computations with pencil and paper, and I guess (I may be
wrong) that these could probably benefit from some support for better
efficiency.
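
For what it's worth, even the plain pencil-and-paper (schoolbook) method
shows where the widening-multiply instructions mentioned above get used; a
minimal sketch with 32-bit limbs (illustration only -- real packages use
Karatsuba, Montgomery reduction, and so on):

    def mul_limbs(a, b):
        # Schoolbook product of two little-endian lists of 32-bit limbs.
        # Every inner step is a 32x32 -> 64-bit multiply plus carry
        # handling, which is where hardware support for the wide product
        # helps.
        mask = (1 << 32) - 1
        out = [0] * (len(a) + len(b))
        for i, ai in enumerate(a):
            carry = 0
            for j, bj in enumerate(b):
                t = out[i + j] + ai * bj + carry   # the 32x32 -> 64 product
                out[i + j] = t & mask
                carry = t >> 32
            out[i + len(b)] += carry
        return out

    # quick check against Python's built-in bignums
    a, b = [0xFFFFFFFF, 0x12345678], [0xDEADBEEF, 0x1]
    val = lambda limbs: sum(l << (32 * i) for i, l in enumerate(limbs))
    assert val(mul_limbs(a, b)) == val(a) * val(b)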

M. K. Shen


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: A thought on OTPs
Date: Tue, 11 Jul 2000 01:22:13 +0200



"Douglas A. Gwyn" wrote:

> Mok-Kong Shen wrote:
> > The chi-square test can be used to test how well the hypothesis
> > that the die is unbiased, i.e. there is a test for the adequacy of
> > the model. There is no parallel in the case of independence of
> > random variables (there is no test).
>
> Sure there is.  Since lack of independence implies correlation,
> one simply tests for correlation.  In fact chi-square can be used.

Let's take the hypothesis of independence of two random variables. You test
for their correlation with the chi-square test and find that the correlation
is extremely small. How could you claim that the given hypothesis of
independence is not rejected at a certain given confidence level (since one
knows that two dependent variables MAY even have zero correlation)? If you
in fact want to test independence, you have to use the definition of
independence. That definition is stated in textbooks. However, there has, as
far as I am aware, yet been no practical test developed by the
mathematicians for that, at least for the bit sequences we are interested
in.
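
A standard worked example of that parenthetical point (my own illustration,
not something anyone in the thread proposed): take X uniform on {-1, 0, 1}
and Y = X^2. The two are plainly dependent, yet their correlation is exactly
zero, so a correlation test alone can never establish independence:

    # X uniform on {-1, 0, 1}, Y = X*X: fully dependent, yet uncorrelated.
    pairs = [(x, x * x) for x in (-1, 0, 1)]     # each with probability 1/3

    ex  = sum(x for x, _ in pairs) / 3           # E[X]  = 0
    ey  = sum(y for _, y in pairs) / 3           # E[Y]  = 2/3
    exy = sum(x * y for x, y in pairs) / 3       # E[XY] = 0
    print("covariance:", exy - ex * ey)          # 0.0 -> zero correlation

    # Independence fails: P(X=1, Y=1) != P(X=1) * P(Y=1)
    print(1 / 3, "!=", (1 / 3) * (2 / 3))        # 1/3 != 2/9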

M. K. Shen



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical 
Date: Tue, 11 Jul 2000 01:22:39 +0200



Runu Knips wrote:

> And, well, Serpent contains a really complex initial (and final)
> bit permutation, even if I don't understand what the use for it is,
> except that the cipher is seriously slowed down in software.

It seems to be the majority opinion that the IP and inverse IP of
DES are entirely useless. Does anyone know any probable
design rationale for that?

M. K. Shen


------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: SCRAMdisk or PGPdisk?
Date: Mon, 10 Jul 2000 16:10:40 -0700

ScramDisk is free, open source, and offers more algorithms.
PGPdisk is free with well-understood source; it offers fewer algorithms, but
the algorithms chosen are the same as the strong ones used by ScramDisk.

Take your pick; realistically they're equal, although personally I'd choose
ScramDisk -- but that's just a personal choice.
            Joe

"Simon Hogg" <[EMAIL PROTECTED]> wrote in message
news:8kdkt2$fed$[EMAIL PROTECTED]...
> Forgive me, but I can't find any information comparing these two
(SCRAMdisk or
> PGPdisk), so what's the consensus?
>
> --
> Simon



------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: One plaintext, multiple keys
Date: Mon, 10 Jul 2000 16:11:01 -0700

Actually, against an OTP this makes no difference.
Given
    P = Plaintext (unknown)
    C1 = P XOR K1
    C2 = P XOR K2
Derivable equations
    C3 = C1 XOR C2
    C3 is actually K1 XOR K2
    C2 = C1 XOR C3
    C1 = C2 XOR C3
Plaintext cannot be derived, provided K1 and K2 are perfect pads (as
required for a true OTP).
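
A quick numeric sketch of those relations (the plaintext and pad values are
arbitrary, chosen only for the illustration):

    import os

    P  = b"ATTACK AT DAWN"
    K1 = os.urandom(len(P))                  # independent one-time pads
    K2 = os.urandom(len(P))
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

    C1, C2 = xor(P, K1), xor(P, K2)
    C3 = xor(C1, C2)

    assert C3 == xor(K1, K2)                 # the plaintext cancels out
    assert C2 == xor(C1, C3) and C1 == xor(C2, C3)
    # C3 reveals K1 XOR K2, but with truly random, independent pads it
    # reveals nothing about P itself.
    print(C3.hex())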
                Joe

"Doug Kuhlman" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hey all,
>
> Got a couple questions.  In a certain protocol, I can imagine a new
> type of attack.  The attacker has access to the ciphertext of one
> plaintext encrypted with multiple keys.  That is, he has C_1, C_2, ... ,
> C_n where C_i=E(K_i,P) (K_i != K_j for i != j).  Does this help the
> attacker?  How much?
> Related question.  If my encryption algorithm is OTP, then it seems
> like this should be insecure, since it basically amounts to reusing P
> (the fixed plaintext) as a key.  But how exactly would that attack work?
>
> Thanks,
> Doug





------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical  
Date: Tue, 11 Jul 2000 01:48:35 +0200



phil hunt wrote:

> When is this likely to happen?
>
> My understanding is that AES is a process for coming up with a "standard"
> encryption algorithm -- is this right? Who's in charge of it?

NIST is going to announce the AES winner in a couple of months. It's
strange that AES doesn't seem to be that widely known. In Germany, the
Computer Zeitung recently devoted one page to reporting on the most recent
status of AES.

M. K. Shen



------------------------------

From: Rex Stewart <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Steganographic encryption system
Date: Mon, 10 Jul 2000 23:33:20 GMT

You can find a paper on
Chaffing and Winnowing: Confidentiality without Encryption,
by R. Rivest, MIT Lab for Computer Science,
March 18, 1998 (rev. April 24, 1998)
at the Counterpane website (and elsewhere)
http://www.counterpane.com/biblio/author-R.html

If you mean you do not understand what he means by the
all-or-nothing transform - I think he means that, to
achieve the optimum effect, you should do (in order):
1. a nonsecret transformation of the plaintext, so that the entire
   plaintext has to be recovered to ensure it is a real plaintext
   (a rough sketch of this step follows below)
2. an encryption (usually by a secret-key cipher, such as Blowfish)
3. your steganographic combining majic
   (the term majic is simply used to indicate I don't understand
    at this time how you plan to do this - but I am intrigued)
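
For step 1, here is a minimal sketch in the spirit of Rivest's package
(all-or-nothing) transform. It is not his exact construction: I am
substituting SHA-256 as the per-block mixing function and assuming blocks of
at most 32 bytes, purely to illustrate that no single plaintext block can be
recovered until every output block is in hand:

    import hashlib, os

    def prf(key, i):
        # SHA-256(key || counter), used as a stand-in mixing function
        return hashlib.sha256(key + i.to_bytes(4, "big")).digest()

    def package(blocks):
        # All-or-nothing-style transform over blocks of <= 32 bytes.
        k = os.urandom(32)                   # random package key
        out = [bytes(x ^ y for x, y in zip(m, prf(k, i)))
               for i, m in enumerate(blocks)]
        mask = k
        for i, c in enumerate(out):
            mask = bytes(x ^ y for x, y in zip(mask, prf(c, i)))
        return out + [mask]                  # last block hides the key

    def unpackage(blocks):
        *out, mask = blocks
        k = mask
        for i, c in enumerate(out):
            k = bytes(x ^ y for x, y in zip(k, prf(c, i)))
        return [bytes(x ^ y for x, y in zip(c, prf(k, i)))
                for i, c in enumerate(out)]

    msg = [b"block one.......", b"block two......."]
    assert unpackage(package(msg)) == msg

After this, step 2 would encrypt the packaged blocks with the secret-key
cipher, and step 3 would hide the result with whatever the combining step
turns out to be.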

--
Rex Stewart
PGP Print 9526288F3D0C292D  783D3AB640C2416A


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Edward Wolfgram <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical 
Date: Mon, 10 Jul 2000 23:42:57 GMT

Terje Mathisen wrote:

> Please!
>
> You did intend for this instruction to _speed up_ your crypto sw, right?
>

Hmmmm. Let's see: 384 bits = 48 bytes, and if we can read 16 bytes/cycle
from level 1 cache, it means it takes 3 cycles to do a permute. That doesn't
sound so shabby. Just how fast can your Pentium 4 do it?   :)

Edward Wolfgram



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
