Cryptography-Digest Digest #466, Volume #12 Thu, 17 Aug 00 10:13:01 EDT
Contents:
Re: Cracking RC4-40 in FPGA hardware (Paul Rubin)
Re: New quantum computer - any details? (Gordon Walker)
Re: New Stream Cipher like SEAL ([EMAIL PROTECTED])
Re: OTP using BBS generator? (Mok-Kong Shen)
Re: OTP using BBS generator? (Mok-Kong Shen)
Re: blowfish problem (John Hascall)
Re: 215 Hz five-qubit quantum processor (Dale Pontius)
Re: PGP Algorithm (Sander Vesik)
Re: PGP Algorithm (Sander Vesik)
Re: Is this Diffie-Hellman modification safe? ("Scott Fluhrer")
Re: OTP using BBS generator? (Mok-Kong Shen)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: Cracking RC4-40 in FPGA hardware
Date: 17 Aug 2000 12:09:59 GMT
In article <8ngap1$86k$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
>In article <8mco74$erk$[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] wrote:
>
>>
>> Each cycle requires a memory read and a pair of 8 bit adds, so 10ns
>> looks like a good estimate. Then we have 10 usec/key, or 10^5 keys
>> per second. 2^40 is about 10^12, so even with one such piece of
>> hardware we have 10^7 seconds, or about 2800 hours to search all
>> keys. We expect success on the average in 1/2 this time, or 1400
>> hours.
>>
>
>Sorry guys, I've seen your discussion too late. I'd like to say that it
>is possible to achieve a speed of 280,000 keys/sec on a Pentium II/333
>(or 45 days to finish 2^40 keys) when cracking simple RC4 (like the PDF
>implementation) and 180,000 keys/sec (or 70 days) when cracking RC4+MD5
>(as MS Word does). It means that on _ONE_ modern P III/1000 the
>average time to crack RC4-40 is one week, not 1400 hours ;-).
Remember that the Xilinx chip had 4 of those blocks with 1400 hour
average crack time. So the 4 blocks working together crack in 350
hours on average--half the speed of that PIII/1000. That means a
machine with 50 Xilinx chips can *always* (worst case) crack in under
24 hours and you'd need 25 PIII's to do the same. But the Xilinx
chips cost about 10 USD each and use a lot less power than the
Pentiums. The 50-chip Xilinx machine could probably be built in a
single PC-sized box (card cage with a few wire-wrapped boards) with
a materials cost not much more than that of one or two PIII machines.
To the person who was doing this project: any news?
------------------------------
From: Gordon Walker <[EMAIL PROTECTED]>
Subject: Re: New quantum computer - any details?
Date: Thu, 17 Aug 2000 13:20:26 +0100
On 16 Aug 2000 16:23:54 GMT, Sander Vesik <[EMAIL PROTECTED]>
wrote:
>How long is a 'realistic length' and what constitutes a practical
>quantum computer? A QC that can crack, say, 512-bit RSA in 4 weeks
>is practical, but not overly threatening for 16/32 kbit keys that are
>still realistically long.
>
>Even if you speed it up 4 times, longer keys are still realistic. Beyond
>that, we need something other than RSA.
But to my limited understanding, a quantum computer can bring down the
order of complexity of the factoring problem. Previously, adding one or
two bits to the key required a vast increase in processing power to
break it. With an improved O() value for the attacking machines, you have
a situation where the cracking machines chase keylength much more
quickly, and just a few years of research might allow the hardware to
catch up with the keylength you have chosen.
--
Gordon Walker
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: New Stream Cipher like SEAL
Date: Thu, 17 Aug 2000 12:48:35 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Mark Wooding) wrote:
> [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > hello tom
> >
> > one question
> > why in your c code
> >
> > return (x<<(r&31)) | (x>>((32-r)&31));
> >
> > you do &31
> >
> > the processor realize the mask automatically no ???
>
> No. In C, if you shift a value by its bit length or more, the
> behaviour is undefined.
>
> Some processors will give a zero result for a shift by a value greater
> than the word length. Others will truncate the shift amount.
>
> Tom's got this one right.
>
> > OPSO for sc1.dat using bits 23 to 32    157695    54.433    1.0000
>
> Ouch! That's really bad.
Sometimes it outputs poorly, but for the most part it appears ok. I
will have to look into it more.
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: OTP using BBS generator?
Date: Thu, 17 Aug 2000 15:39:49 +0200
Tim Tyler wrote:
>
> :> Bryan Olson wrote:
>
> :> : Many times on sci.crypt people have objected to the proof of
> :> : perfect secrecy for the OTP based on the fact that the zero
> :> : vector is one of the possible keys. The false logic goes
> :> : something like: since the OTP is provably secure, and zero
> :> : is a legal key, then encrypting with the zero key must be
> :> : secure, and since it obviously isn't the proof must be
> :> : wrong.
> :>
> :> : The OTP theorem doesn't say that encrypting with a
> :> : particular key will maintain secrecy. It says choosing a
> :> : one-time key uniformly at random and exposing the resulting
> :> : ciphertext does not increase the chance of the attacker
> :> : determining the plaintext.
> :>
> :> This case still seems totally different from the case of
> :> using BBS with short cycles.
> :>
> :> Using the all-0 key in an OTP doesn't help the attacker
> :> get the message, since he has no way of knowing it has been used.
>
> : Not so. If in fact one does use the all zero key of
> : significant length, the attacker should win. He would
> : almost certainly think we were not in fact using a true one
> : time pad. There's nothing in the theorems that will stop
> : him from reading cleartext.
>
> Doesn't this violate the assumption that the attacker has full knowledge
> of the cyphermachinery?
>
> Anyway, the cases still seem significantly different: *if* the attacker
> knows what machinery has been used, the chance use of a zero key in an OTP
> gives the attacker no information about the plaintext, while the chance
> use of a short cycle in a rng-based stream may completely give the game
> away - *assuming* the attacker is generally running a short cycle check,
> based on known plaintext for some reason.
So it appears that in the proof of security of OTP one
needs an additional assumption:
The opponent knows (and is convinced) that the
sender uses an OTP.
It also shows the essential role played by psychology
(the belief) in crypto, as I recently mentioned.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: OTP using BBS generator?
Date: Thu, 17 Aug 2000 15:39:55 +0200
Tim Tyler wrote:
>
> Imagine a rare spy, who - after landing in enemy territory - opens up
> his OTP and prepares to report that his espionage mission began
> successfully. Scanning the pages, he finds all the symbols are zeros.
> Imagine his dismay ;-)
Why? If the spy knows that his agency has given him a
true OTP and he also happens to be a crypto theoretician,
he will not hesitate a micro-second before applying the
pad! BTW, if the process is automatic, he (and his
colleagues who oversee the OTP generation at the agency)
shouldn't have visual contact with the pad at all.
In a follow-up (responding to John Savard) I said that
one should take a practical standpoint and apply ALL
available statistical tests (as far as resources permit)
to filter any bit sequences generated for use. For, on
the assumption that the opponent has no better means
than we do, we are safe. (If the opponent is
omnipotent, then we lose anyway.)
The conceptual difficulty involved in the present issue
is in my opinion attributable to the fact that a theory
is only a model of reality and hence uses assumptions
that are idealized forms of reality. Since these
idealized things never actually occur in practice, the
existence of paradoxes doesn't mean that the theory is
wrong, only that we should be constantly aware of the
underlying assumptions when applying it. To support
this view, I would like to offer the anecdote (a true
story that I personally experienced!) attached below.
M. K. Shen
===================================
In the theory of elasticity there is a rather simple
problem concerning a thin (two-dimensional) wedge that
extends to infinity. If a certain type of load is
applied to the wedge, then the stress and strain in
the wedge can be obtained by solving a partial
differential equation. However, this solution becomes
singular for a particular value of the angle of the
wedge, so that in that case an infinitely large stress
is the result. In a laboratory an assistant was doing
an experiment to see how well the general solution
obtained in theory agrees with practice for
different values of the wedge angle. His professor
cautioned him, saying 'Be very careful when you
approach the critical value of the angle, for the
wedge could explode!' Of course, the wedge didn't
explode. This is because the theory assumes Hooke's
law, a linear relationship between stress and strain,
an idealization that never holds exactly in reality.
(Further, there is no truly two-dimensional wedge,
only three-dimensional ones, and no wedge extends to
infinity.) So paradoxes can happen when theory makes
idealized assumptions.
------------------------------
From: [EMAIL PROTECTED] (John Hascall)
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
Date: 17 Aug 2000 13:31:01 GMT
P. Pascual <[EMAIL PROTECTED]> wrote:
>Hi all.
>I have a problem that's breaking my head.
>I need to encrypt text in C, so I have decided to use the Blowfish
>algorithm.
>But the generated encrypted text is composed of control characters,
>including \0.
>This is a problem for me because the application is web oriented and
>I need to work with the encrypted text, to append it to a URL, for example.
>
>I have seen on the web some Java implementations of Blowfish that
>generate encrypted text like this:
>
>305c083744ad2de7cc52488a53dea2ca85
They are just converting each byte (0..255) to 2 printable
characters ('0'..'f')('0'..'f').
#include <stdlib.h>    /* for malloc() and size_t */

int hexDigit(int fourBits);    /* forward declaration */

char *hexify(
    char *inBuf,
    size_t inLen
) {
    char *out;
    char *pos;
    size_t i;

    if (inBuf == NULL) return NULL;    /* safety first */
    out = malloc(inLen * 2 + 1);
    if (out == NULL) return NULL;      /* malloc failed */
    pos = out;
    for (i = 0; i < inLen; ++i) {
        *pos++ = hexDigit(inBuf[i] >> 4);   /* high 4 bits */
        *pos++ = hexDigit(inBuf[i]);        /* low 4 bits */
    }
    *pos = '\0';
    return out;
}

int hexDigit(
    int fourBits
) {
    fourBits &= 0x0f;    /* safety first: strip any sign extension */
    return (fourBits < 10) ? (fourBits + '0') : (fourBits - 10 + 'a');
}
John
--
John Hascall (__) Shut up, be happy.
Software Engineer, ,------(oo) The conveniences you demanded
Acropolis Project Manager, / |Moo U|\/ are now mandatory.
ISU Academic IT * ||----|| -- Jello Biafra
------------------------------
From: [EMAIL PROTECTED] (Dale Pontius)
Crossposted-To: comp.arch
Subject: Re: 215 Hz five-qubit quantum processor
Date: 17 Aug 2000 13:55:38 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Steve Newman) writes:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Mike Haertel) wrote:
...
>> Of course, this suggests the obvious improved strategy: generate all
>> possible bit strings, all possible virtual machine interpreters... :-)
...
>
> Of course, finding the best instruction set for typical files is
> easy -- just iterate over all possible bitstrings of a certain
> length, treat each one as a VM, and see what compression length
> it supports for a suite of benchmark files. Pick the best one
> and use it as your standard. *Damn* I wish this worked. :)
I held off from this comment yesterday, but it's getting to be just
too much.
All you need to do is brew a really hot cup of tea, crank up your
Quantum Qubit Processor, and the Infinite Improbability Compression
Algorithm will appear out of thin air.
Dale Pontius
NOT speaking for IBM
------------------------------
From: Sander Vesik <[EMAIL PROTECTED]>
Subject: Re: PGP Algorithm
Date: 17 Aug 2000 13:59:20 GMT
David Crick <[EMAIL PROTECTED]> wrote:
> Mark Wooding wrote:
>>
>> Runu Knips <[EMAIL PROTECTED]> wrote:
>>
>> > Cast5(?)
>>
>> For some reason, this is another name for CAST128.
> Cast5 has both 80 and 128-bit versions.
which is also known as cast5-128 (cast5-256 is a different animal).
cast5 can operate in two modes: with a key longer than 80 bits (if less
than 128 bits, pad with zeros and use 16 rounds) and with a key of 80
bits or less (pad with zeros to 128 bits, use 12 rounds).
> --
> +-------------------------------------------------------------------+
> | David Crick [EMAIL PROTECTED] RSA 22D5C7A9 DH BE63D7C7 87C46DE1 |
> | Damon Hill Tribute Site: http://www.geocities.com/MotorCity/4236/ |
> | M. Brundle Quotes: http://members.tripod.com/~vidcad/martin_b.htm |
> +-------------------------------------------------------------------+
--
Sander
FLW: "I can banish that demon"
------------------------------
From: Sander Vesik <[EMAIL PROTECTED]>
Subject: Re: PGP Algorithm
Date: 17 Aug 2000 14:00:16 GMT
tomstd <[EMAIL PROTECTED]> wrote:
> David Crick <[EMAIL PROTECTED]> wrote:
>>Mark Wooding wrote:
>>>
>>> Runu Knips <[EMAIL PROTECTED]> wrote:
>>>
>>> > Cast5(?)
>>>
>>> For some reason, this is another name for CAST128.
>>
>>Cast5 has both 80 and 128-bit versions.
> As long as you don't use the original CAST key schedule you're
> ok. I developed a class of weak keys for CAST as presented in
> Applied Crypto.
> Tom
Which one is the original? What about the schedule as presented in rfc2144?
Which keys are weak?
> -----------------------------------------------------------
> Got questions? Get answers over the phone at Keen.com.
> Up to 100 minutes free!
> http://www.keen.com
--
Sander
FLW: "I can banish that demon"
------------------------------
From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: Is this Diffie-Hellman modification safe?
Date: Thu, 17 Aug 2000 06:42:53 -0700
tomstd <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "George Harth" <[EMAIL PROTECTED]> wrote:
> >Greetings,
> >
> >The basic scenario is that we log into the server app and use
> Diffie-Hellman
> >to generate a shared secret key. We plug those keys into
> Blowfish and use
> >Blowfish for all further data communications, including the
> official
> >password login. The problem is that this method doesn't ensure
> that the
> >server we connected to is actual, trusted server.
> >
> >With that in mind, I modified the Diffie-Hellman algorithm
> slightly. I am
> >wondering if this modification is relatively safe, or if I am
> opening up
> >some trouble. This system uses the fact that the real server
> should already
> >know the password for the user requesting a connection.
> >
> >Where,
> > n = shared prime
> > g = shared base
> > h = hash of password (same bit length as n)
> > x = random secret number
> > y = random secret number
> >
> >1. Alice sends Bob her username and requests a connection. No
> password is
> >sent at this stage.
>
> First weakness, the attacker now knows who is logining in to the
> computer.
>
> >2. Bob computes and sends Alice X, where:
> > X = (power(g, x) mod n) xor h
> >
> >3. Alice computes and sends Bob Y, where:
> > Y = (power(g, y) mod n) xor h
>
> I assume the password is known only by the server and the
> client? Then why even use pk crypto?
See below
>
> >4. Bob computes and uses Z1 as his Blowfish key:
> > Z1 = power(Y xor h, x) mod n
>
> You should really hash the bignum instead of truncating it.
>
> >
> >5. Alice computes and uses Z2 as her Blowfish key:
> > Z2 = power(X xor h, y) mod n
> >
> >The hash, h, and the random numbers x and y are never transmitted.
> >
> >If Bob (the server) is not the real server, then he won't know
> >Alice's password. Without the correct password, the two secret keys
> >Z1 and Z2 will not be the same, and therefore all data sent to the
> >fake Bob will be garbage as far as Bob is concerned.
> >
> >I suppose Fake Bob could use brute force to try to break the
> >"garbage" he received and determine what the hash of Alice's
> >password is. Is this likely to be easy? If we just send the password
> >over the Blowfish connection created using the regular Diffie-Hellman
> >exchange, the Fake Bob server would get the password much more
> >easily, and the client wouldn't realize the server wasn't real until
> >after the password had been sent.
> >
> >Thanks for any and all help... George
>
> You make it harder than it needs to be. If the server and
> client have a shared secret password, simply do this:
>
> 1. Make up a 128-bit string R
> 2. Use K = hash(R || h) as your shared symmetric key.
> 3. Transmit R to the server (or to the client, as the case may be).
>
> This requires no pk math and is considerably simpler.
However, this protocol has an obvious weakness that the original protocol
does not. Suppose you had an attacker listening on your line. He records
everything that is transmitted, in your case, R and the encrypted
conversation. Then, offline, he can take a password dictionary, and for
each password in the dictionary h', compute K' = hash(R || h'), and attempt
to use that to decrypt the conversation. If the password actually occurs in
the dictionary, this will find it.
Protocols like EKE (and possibly the original protocol -- it's somewhat
difficult to prove, especially if you also have to deal with attackers that
can modify transmitted messages) have the property that, by
listening/modifying a single login attempt, they can test at most a single
password from the dictionary -- not all of them at once. And so, this makes
a human-chosen password somewhat more secure, in that an attacker cannot
test thousands of potential passwords without doing (effectively) thousands
of login attempts.
And (to complete a reference I made above), there doesn't appear to be a
protocol using only symmetric cryptographic primitives that has this 'only
able to test 1 password' property. It has been proven (derivable from a
Crypto '99 paper -- I'll dig up the reference if you're interested) that
there is no such protocol against attackers with infinite computational
resources.
>
> Also you may consider EKE-type systems if you need passwords,
> but really with a password you are wasting your time. The whole
> point of PK is that I don't need to share a secret with you.
PK is always better assuming that you have a secure place to hold the
private keys. However, in some situations (such as logging in through a
memoryless terminal), you have no such place, and that is where EKE and
friends are useful.
--
poncho
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: OTP using BBS generator?
Date: Thu, 17 Aug 2000 16:16:47 +0200
Mark Wooding wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
> > > I have finally been able to obtain a copy of the BBS paper. The
> > > result of interest here says that modulo the quadratic residuosity
> > > assumption, the x^2 mod N generator is polynomial-time unpredictable
> > > to the left. Is there any concrete relation between the quadratic
> > > residuosity assumption and the assumption of hardness of factoring
> > > N?
>
> Yes. There is a polytime reduction from QRP to factoring. Since the
> quadratic residuosity problem is easy modulo a prime number (we have
> x^{(p - 1)/2} = 1 (mod p) iff x is a quadratic residue mod p) we can
> decide quadratic residuosity mod each factor of n separately.
>
> There is no proven reduction in the other direction. However, we don't
> have any algorithm which can decide quadratic residuosity mod n = p q
> with probability better than 1/2 without factoring n. This doesn't mean
> that no such algorithm can exist.
>
> The result is then that factoring is at least as difficult as QRP.
>
> > I like to request the experts on BBS to kindly help me out in some
> > additional difficulties detailed below:
> >
> > From the global structure of the paper (i.e. without understanding the
> > details), I don't see where the argument 'finding short cycles is
> > equivalent to factoring', which seems to be one of the points
> > repeatedly stressed in this thread, is actually used in the paper. Am
> > I missing something?
>
> Firstly: in 1984, Vazirani and Vazirani showed that predicting the
> output of the x^2 mod N generator reduced to factoring n. Since
> traversing a cycle evidently permits prediction of the generator's
> output, it must therefore also reduce to factoring n.
Sorry, doesn't this result of Vazirani and Vazirani conflict
with the sentence 'There is no proven reduction in the other
direction' above, or have I misunderstood your text
entirely? Could you please say a bit more?
> Then see my own work, elsewhere in this group, about efficient choice of
> parameters for ensuring large period for the parity sequence.
I hope that this efficient choice does not result in severe
reduction of the space of N.
> > The theorem concerns unpredictability to the left. Why 'to the left'
> > and not 'to the right'? Does the one implies the other or is 'to the
> > right' of no use in practice?
>
> This puzzles me too. I've certainly seen (e.g., in HAC) claims that BBS
> is unpredictable in both directions, so there's certainly been some
> work done here.
Unpredictability to the right seems to be actually more
useful, doesn't it?
> > The paper employs the term 'probabilistic poly-time statistical
> > test'. Is this a theoretical concept like the Kolmogorov complexity or
> > does there exist a concrete implementation of such a test or at least a
> > practically realizable specification of it?
>
> It's the name for a (large) class of statistical tests. It does what it
> says on the tin.
But to be meaningful one needs to know what that large class
of tests is, or more exactly one should have a practical
way of determining whether a given test belongs to that set.
Is there any available information/literature to that effect?
M. K. Shen
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************