Cryptography-Digest Digest #15, Volume #11 Sun, 30 Jan 00 13:13:01 EST
Contents:
Re: Intel 810 chipset Random Number Generator (Tim Tyler)
Re: NIST, AES at RSA conference (CLSV)
Re: Is there a practical guide to using crypto? ("JCardinal")
Re: Is there a practical guide to using crypto? ("JCardinal")
Connection resets ("Kasper Pedersen")
XML vs Javabean for B2B (Drew Cutter)
Re: *** ECC Strong and Weak combined (David Hopwood)
Re: Help needed on peculiar use of cryptography (David P Jablon)
Re: Mac encryption algorithm? (Vernon Schryver)
Re: Mac encryption algorithm? (Vernon Schryver)
----------------------------------------------------------------------------
Crossposted-To: sci.physics
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Intel 810 chipset Random Number Generator
Reply-To: [EMAIL PROTECTED]
Date: Sun, 30 Jan 2000 15:12:01 GMT
In sci.crypt Michael Kagalenko <[EMAIL PROTECTED]> wrote:
: Terry Ritter ([EMAIL PROTECTED]) wrote
: ]I claim the proposed effect simply does not exist at sensible levels.
: That's because you lack very basic understanding of the statistics
: of Brownian random walk.
If you want to bring the discussion to a personal level, I'd recommend
learning something about your target first. Terry Ritter is not your man
in this case.
While (obviously) you could - in principle - extract *some* randomness
from crystal clock drift, it is /far/ from clear that the effect could
be used to produce random numbers at a practical rate.
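For what it's worth, a minimal sketch of the usual two-clock jitter
sampler (assuming a POSIX system; this only illustrates the idea and
says nothing about the Intel part):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Sample a fast clock on ticks of a slow, nominally independent
 * timer and keep the low bit; the relative drift and jitter between
 * the two clocks is the entropy source.  Illustrative sketch only. */
static uint64_t fast_counter(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    unsigned char b = 0;
    for (int i = 0; i < 8; i++) {
        usleep(10000);               /* "slow clock": roughly 100 Hz */
        b = (unsigned char)((b << 1) | (fast_counter() & 1));
    }
    printf("raw byte: 0x%02x\n", b); /* unwhitened, entropy untested */
    return 0;
}

Note the throughput: one raw, unwhitened bit per slow tick - which is
exactly the practical-rate problem mentioned above.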
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
Strip mining helps prevent forest fires.
------------------------------
From: CLSV <[EMAIL PROTECTED]>
Subject: Re: NIST, AES at RSA conference
Date: Sun, 30 Jan 2000 15:32:30 +0000
Joseph Ashwood wrote:
[...]
> The point here is that since all cryptography functions can
> be abstracted to function(key, in, out), where there may be
> some additional work to accommodate IVs before creating the
> key in the function, the P ?= NP problem does not
> satisfactorily address the issue of having a set of known
> in and out with an unknown key.
> I mistakenly attempted to
> use a provably hard problem (reversing a sort, without
> additional information) for a cipher; as it turns out, given
> a number of outputs and partial knowledge of the inputs
> (e.g. English text), it becomes possible to find a suitable
> solution (although it may not be possible to recover the
> exact password, you recover a perfect equivalent).
If you consider the following problems in
symmetrical cryptography:
   Known                                 Unknown
   ---------------------------------------------
1) key, plaintext, algorithm        ->  crypttext
2) key, crypttext, algorithm        ->  plaintext
3) crypttext, algorithm             ->  plaintext
4) plaintext, crypttext, algorithm  ->  key
Having just one of 3) or 4) hard while the other
problems are easy is not enough; you need both
3) and 4) to be hard.
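As a toy illustration of problem 4 (known plaintext and crypttext,
unknown key), a sketch with a deliberately weak one-byte XOR standing
in for a real algorithm; a real cipher must make this search
infeasible:

#include <stdio.h>

/* Problem 4 as brute force.  The "cipher" is a single-byte XOR, so
 * the keyspace is only 2^8 and the search is instant; the point is
 * the shape of the attack, not the cipher. */
static unsigned char toy_encrypt(unsigned char key, unsigned char in)
{
    return in ^ key;
}

int main(void)
{
    unsigned char key = 0x5a;                /* unknown to the attacker */
    unsigned char pt = 'A';
    unsigned char ct = toy_encrypt(key, pt); /* known plaintext pair */

    for (int k = 0; k < 256; k++)            /* exhaust the keyspace */
        if (toy_encrypt((unsigned char)k, pt) == ct)
            printf("candidate key: 0x%02x\n", k);
    return 0;
}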
Regards,
CLSV
------------------------------
From: "JCardinal" <[EMAIL PROTECTED]>
Subject: Re: Is there a practical guide to using crypto?
Date: Sun, 30 Jan 2000 09:10:16 -0700
> Encrypting a file for a single user is a special case; in a more
> general use of encryption, data is protected in transit to some
That's the case I was referring to. From what I can deduce, there is no such
thing as a secure program for encrypting a single file on a computer. If
the program encrypts the file by requiring the user to type in a password,
then it can be cracked easily. Though you mention that "16 printable
ASCII characters means something like 92^16 possible keys", that is true in
a *theoretical* sense; it doesn't appear to be true in a practical sense,
since it's only 16 characters, more than likely consisting of just
letters and numbers.
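Rough numbers, for what it's worth (the 92-character figure is the one
quoted above; the 62-character and dictionary figures are illustrative
assumptions):

#include <math.h>
#include <stdio.h>

/* Key-space size in bits for a 16-character password under
 * different assumptions about how the characters are chosen. */
int main(void)
{
    printf("92 printable chars: 92^16 ~ 2^%.0f\n", 16 * log2(92.0));
    printf("letters/digits:     62^16 ~ 2^%.0f\n", 16 * log2(62.0));
    printf("4 words, 8192-word dictionary: ~ 2^%.0f\n", 4 * log2(8192.0));
    return 0;
}

The theoretical 2^104 collapses toward something far smaller as soon
as the characters are chosen the way people actually choose them.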
I guess humans will always be the weak link in the chain. <grin>
Thanks for your help.
------------------------------
From: "JCardinal" <[EMAIL PROTECTED]>
Subject: Re: Is there a practical guide to using crypto?
Date: Sun, 30 Jan 2000 09:22:41 -0700
>
> [Use your brain man! It's free. ]
>
> [Steven Siew]
I've been considering using my brain for years now but could never really
seem to find the motivation until I read your captivating reply.
Thank you Steven!
-john
------------------------------
From: "Kasper Pedersen" <[EMAIL PROTECTED]>
Subject: Connection resets
Date: Sun, 30 Jan 2000 17:41:04 +0100
I had my computer try fetching the P3C2000 manual (pdf file) overnight.
This morning when I came back, your server had reset the connection 11 times,
after an average of 1.9 MB. This evening I had another computer elsewhere do
it - same result.
Get real. Nobody has that much patience.
Also, when listing the technical details for a given motherboard, would you
please state which I/O chip it uses (for the COM ports)?
There are a number of multifunction chips I don't like because of various
silicon bugs (such as the Winbond W83877/977, which can enter a metastable
state under some error conditions).
/Kasper Pedersen
BSc. EE (Hons)
Test, R&D
------------------------------
Date: Mon, 31 Jan 2000 11:53:26 -0500
From: Drew Cutter <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: XML vs Javabean for B2B
Which would offer better encryption for B2B transactions, XML or
JavaBeans? What about wireless?
------------------------------
Date: Sun, 30 Jan 2000 05:26:03 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: *** ECC Strong and Weak combined
=====BEGIN PGP SIGNED MESSAGE=====
Greg wrote:
>
> > > This makes me think that given a very strong public key for a
> > > server, a client can use a random small private key which can
> > > be used to quickly generate both the client's public key and
> > > shared secret.
> >
> > This can be broken in O(2^(n/2)) work, where n is the number of
> > bits that can vary in the private exponent (e.g. using the
> > Pollard lambda algorithm).
[The "exponent" is the integer that the base point is multiplied by;
I suppose it should technically be called a "multiplier", but I'll
keep on using "exponent" to maintain the analogy with DH over GF(p).]
> I thought that n was related to the field size, not the number of
> bits in the private exponent that can vary.
No. To be a little more precise, n would be the minimum of the number
of bits that can vary in the private exponent, and the bit-length of
the largest prime factor of the curve order. To get the best possible
security/speed trade-off, IMHO you should use a random curve with a
known prime or "nearly prime" order ("nearly prime" means a large prime
multiplied by a small integer cofactor, say 2 or 4), and a full-length
exponent.
If you have performance problems, it's better to decrease the size of
the field but keep using full-length exponents, than it is to decrease
the exponent length [*]. This is because a curve multiplication takes
time proportional to the cube of the field size (for full length
exponents), while shortening the exponent only gets you a linear
speed-up.
[* Of course the curve order, and hence a maximum for the largest
prime factor of the curve order, depends on the field size, so that
determines how far you can decrease the field size for a given
conjectured security level.]
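To make that concrete, a rough sketch (the k*n^2 cost model is the one
implied above; the specific curve sizes are illustrative):

#include <stdio.h>

/* Relative scalar-multiplication cost ~ k * n^2 for a k-bit exponent
 * over an n-bit field (k point operations, each costing about n^2,
 * hence the n^3 figure for full-length exponents), and conjectured
 * attack cost 2^(min(k,n)/2), assuming a nearly prime curve order so
 * the largest prime factor is about n bits. */
static void show(const char *label, double n, double k)
{
    double cost = k * n * n;
    double sec  = (k < n ? k : n) / 2;
    printf("%-27s cost %8.2e  security ~2^%.0f\n", label, cost, sec);
}

int main(void)
{
    show("163-bit field, full exp:", 163, 163);
    show("163-bit field, 80-bit exp:", 163, 80);
    show("113-bit field, full exp:", 113, 113);
    return 0;
}

The third line is the point: shrinking the field while keeping a
full-length exponent is both cheaper and conjecturally stronger than
keeping the big field and shortening the exponent.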
Technically, it could be argued that a too-short field size allows
breaking a long-term key, whereas a too-short exponent requires a
separate attack for each session. However, I don't think this really
makes much difference; security parameters should normally be chosen
sufficient to prevent *any* sessions from being broken.
There was a thread about exponents with some always-zero bits in
sci.crypt last November, for the case of Diffie-Hellman over GF(p) -
look on DejaNews for the title "bits of diffiehellman private key".
The attacks described in that thread also apply to elliptic curve
Diffie-Hellman.
> Also, my example of 80% was extreme. Let's say a field of 163 bits
> and a key space of 64 bits or 80 bits. Even if n were related
> to the size of the key, would you say 80 bits was secure for at
> least a couple of years?
No. The cost of an attack is then on the order of 2^40 curve operations.
At a rough guess, that's probably a few days with a small network of
PCs (*much* less if the attacker has custom hardware). To get a
conjectured security level of about 2^80 operations, you would need
a 160-bit exponent, in which case you might as well use the full 163
bits.
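Back-of-the-envelope, with an assumed per-PC rate (not a measurement):

#include <stdio.h>

int main(void)
{
    double ops     = (double)(1ULL << 40); /* ~1.1e12 curve operations */
    double rate    = 5.0e4;                /* assumed EC ops/sec per PC */
    double pcs     = 25.0;                 /* a small network of PCs */
    double seconds = ops / (rate * pcs);
    printf("~%.0f days\n", seconds / 86400.0); /* roughly 10 days */
    return 0;
}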
- --
David Hopwood <[EMAIL PROTECTED]>
PGP public key: http://www.users.zetnet.co.uk/hopwood/public.asc
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5 0F 69 8C D4 FA 66 15 01
"Attempts to control the use of encryption technology are wrong in principle,
unworkable in practice, and damaging to the long-term economic value of the
information networks." -- UK Labour Party pre-election policy document
=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv
iQEVAwUBOJPKpjkCAxeYt5gVAQHf9Af/f1E7KNuUUNKZvFEhRuNGCzvbRqFj5LPv
qRObeFYUp2Nk7062teFe+4mBqO60kZsbvG2ToJiSNkfft1iB65Q+P/DyM79z5InQ
zo6Pz4vrSlcu9Zu8ilW5HanFoRVHpsSpUr7UGrOoq+Ud0qx1qTVMMPylOmP60I6Q
Dfcx9nlAp3ijTveRHZZeYFvyEKakEFptLheJZZ0Q/miuNTutoA6GWhhtskq3TNVv
cy7eOsNMV3zyWumTd7nXu8ScELil41jj9hVPFll1crCCPNu8+Lnt3Q/Ne3oShWe/
UpL3Yb9iBW/FZY6TSldNtmDsrpm9/l8rHcewj4FHD4/Gr/qoW7ReqQ==
=LsHv
=====END PGP SIGNATURE=====
------------------------------
From: [EMAIL PROTECTED] (David P Jablon)
Subject: Re: Help needed on peculiar use of cryptography
Date: Sun, 30 Jan 2000 17:26:41 GMT
In article <86vu71$j7g$[EMAIL PROTECTED]>,
David A Molnar <[EMAIL PROTECTED]> wrote:
>David P Jablon <[EMAIL PROTECTED]> wrote:
>> Salting the hash as suggested in another post merely obfuscates
>> the result, unless the salt is kept more securely than the hash data.
[plus, as David Wagner noted, the salt decreases the
efficiency of a large-scale attack.]
[David M:]
>From what I can tell, this may be reasonable in the situation. The company
>is turning over the hash data to the lawyer/economist in the form of
>"encrypted" employee IDs. There is no reason why the hash needs to
>be kept around, _if_ we trust the company to perform the
>transformation correctly. The salt could even be destroyed after the data
>is produced.
>
>The flaw, as David Scott has pointed out, is that now we cannot guarantee
>collision freeness.
[David J:]
If you make the assumption that lookups by SSN are not needed
after the data is produced, then all this crypto talk is gobbledygook.
In that case, just issue unique truly random keys, and forget all the
nonsense of checking for hash collisions. Otherwise,
a well salted crypto hash gives a strong-enough guarantee of
collision freeness for most purposes.
[David M:]
>One suggestion in another post was to try several different salts until
>collision freeness occurs. I'm not sure how many you would need, though.
>Perhaps another way is to try different salts and run the hash on the data
>until we have a collision-free mapping for the particular set of
>employee ID numbers at issue.
>
>My assumption is that the company can look at the hashed IDs and check
>for collisions *before* turning them over to the lawyer. So if a collision
>occurs, change the salt or salts and try again. This assumes that
>we need only avoid collisions within the same company. It also assumes
>that a collision-free map for the particular set of data will not be
>so rare that finding one will take "too long."
[David J:]
Again, if the lawyer doesn't need a mapping from SSN to key, then just
use random keys. If he does, then the extra initial steps to avoid
collisions are trivial, as you suggest, but they cannot permit the
database to be extended or compared to other SSNs in the future with a
guarantee of no collision, without access to the original SSNs. This is
where a sufficiently large crypto hash with a suitable collision-freeness
property is useful. MD5 should be fine, and SHA1 even better.
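A minimal sketch of the try-salts-until-collision-free procedure
(assuming OpenSSL's SHA-1 and RAND_bytes; the 16-byte salt and the
hash_id helper are illustrative choices, not part of any standard):

#include <stdio.h>
#include <string.h>
#include <openssl/rand.h>
#include <openssl/sha.h>

#define SALT_LEN 16

/* Hash salt||SSN; the salt is shared across the whole batch. */
static void hash_id(const unsigned char *salt, const char *ssn,
                    unsigned char out[SHA_DIGEST_LENGTH])
{
    SHA_CTX ctx;
    SHA1_Init(&ctx);
    SHA1_Update(&ctx, salt, SALT_LEN);
    SHA1_Update(&ctx, ssn, strlen(ssn));
    SHA1_Final(out, &ctx);
}

int main(void)
{
    const char *ssns[] = { "123-45-6789", "987-65-4321" };
    enum { N = 2 };
    unsigned char salt[SALT_LEN], digest[N][SHA_DIGEST_LENGTH];
    int collision;

    do {
        if (RAND_bytes(salt, SALT_LEN) != 1)
            return 1;                       /* PRNG failure */
        collision = 0;
        for (int i = 0; i < N; i++)
            hash_id(salt, ssns[i], digest[i]);
        for (int i = 0; i < N && !collision; i++)
            for (int j = i + 1; j < N; j++)
                if (memcmp(digest[i], digest[j], SHA_DIGEST_LENGTH) == 0)
                    collision = 1;
    } while (collision);    /* collision: throw the salt away, retry */

    /* hand digest[] over; destroy salt if lookups aren't needed later */
    return 0;
}

As noted above, though, a fresh batch hashed later with a different
salt cannot be matched against this one without the original SSNs.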
======================================================
David P. Jablon
Integrity Sciences, Inc.
[EMAIL PROTECTED]
<http://www.IntegritySciences.com>
------------------------------
From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: Mac encryption algorithm?
Date: 30 Jan 2000 10:03:34 -0700
In article <871e5g$pb8$[EMAIL PROTECTED]>,
Paul Schlyter <[EMAIL PROTECTED]> wrote:
> ... But at some stage, the type checking and conversion must
>result in some actual code. If you want to convert a larger integer
>type to a byte by picking the lowest order byte only, on a big-endian
>machine, there must be code to shift to that lowest order byte, at
>the end of the integer. On a little-endian machine, no extra code
>must be produced, since the lowest order byte already is situated
>at the beginning of the integer.
That is wrong. To narrow a value or to convert a wider type to one with
fewer bits, such as from a 2, 4, or 8 byte integer to a byte, most systems
will fetch the wide value into a register and then store it into the byte.
Reasonable systems can fetch all of their wide basic types into a register
with single instructions and then store the least significant byte. If
you're writing assembly language and want to convert, you might cheat and
fetch only the least significant byte(s) of the source that you need, but
to no real effect, because data cache effects will generally make fetching
the full value take the same number of cycles. Even if you do
fetch only the least significant byte, you'll probably use an addressing
mode that has an offset in the instruction, and won't care whether the
offset bits are 0, 1, 3, or 7. (Note that modern ISA's handle 8 bit int's
as real types, and do not have equivalents to the old 8085/80*86 AH, BH,
CH, and DH registers.)
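To illustrate the point in code (a trivial sketch):

#include <stdio.h>

int main(void)
{
    unsigned int  wide   = 0x12345678;
    unsigned char narrow = (unsigned char)wide; /* narrowing by value */

    /* prints 78 on big-endian and little-endian machines alike: the
     * conversion is defined on the value, not on the byte layout */
    printf("%02x\n", narrow);
    return 0;
}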
> ...
>> How often does that happen in non-buggy code?
("that" is calling by reference and passing one byte of a wider value)
>I have no statistics of this, but I believe it happens more often
>than you think. But perhaps you'd call each and every such occurrence
>a "bug"? It's a "bug" only if you don't know what you're doing, and
>if it can produce some undesirable side effect.
You say you don't use Fortran. So what call-by-reference language do you
use? If you don't use a call-by-value language, I hope you essentially
never use this kludge.
> ...
>I'd call that "call-by-reference" though....
Yes, my fingers got confused, just as yours did about increasing
vs. decreasing indices.
>> and coding a bug or
>> being too clever by half (i.e. writing bad code that only works
>> for now). In either big or little endian systems, what happens
>> when the called function increments the called-by-reference value?
>> Does the wide result back in the calling function make sense?
>
>You know it doesn't -- and if this methodology is used, the byte
>should only be accessed, not modified.
How do you ensure that in 3 years someone working on the code doesn't
notice that your code was wrong and needed to use the formal +1 in 37
places after all other uses of it, and so increments the formal?
> ...
>> How many languages do you use that use call-by-reference?
>
>I used FORTRAN prior to 1985 and Pascal prior to 1990.
How do you pass the least significant byte of a wider number in Pascal?
> C/C++ and
>friends allow call-by-reference, although you must do it explicitly,
>by passing a pointer.
I hope that you are not one of those who chooses to write non-portable
code like:
extern void func(char *);
int i;
i = ...
func(1+(char*)&i);
I hope you would always instead write:
extern void func(char *);
int i;
char c;
i = ...
c = i;
func(&c);
I trust you'd write that even when func is declared func(const char *),
and let the compiler optimize away the temporary c.
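For comparison, a sketch of why the first form isn't portable (offset 0
used here for clarity):

#include <stdio.h>

int main(void)
{
    int i = 0x12345678;

    /* little-endian: prints 78 (the LSB sits at the lowest address);
     * big-endian: prints 12.  The copy-to-a-char version above
     * prints 78 everywhere. */
    printf("%02x\n", *(unsigned char *)&i);
    return 0;
}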
> ...
>> Do you know that C and relatives call by value,
>
>:-) ....yes, I've known that since 1981....
> ...
>Too much -- I've seen many horrors in Fortran, mostly due to the fact
>that it's missing function declarations...
Shouldn't that be "*was* missing *fancy* function declarations"? My main
Fortran days were back when Fortran 2 was still in use, decades before
1981, but I think I recall that even Fortran IV had function declarations
and typed formal parameters (albeit based on the first character) and that
the newfangled Fortran 9* stuff even has block structuring. My
recollection of ancient Fortran is that it was tough but often possible to
get the address of anything, and that arithmetic on bytes was very rare and
quite painful. In other words, I doubt your claim that early 1980 vintage
Fortran did a lot of computing on the least significant bytes of integers.
Now if you'd talk about EQUIVALENCE and COMMON kludges between single and
double precision reals and even between integers and reals, I'd agree that
such stuff was common and often unavoidable, but I'd also point out that
whether simply increasing or decreasing pointers would narrow a double
had nothing per se to do with big vs. little endian but the fact that
every vendor had its own floating point formats.
>> If big endian code is bigger and slower, then the effect should be
>> noticeable. If so, how is it that the fastest Fortran systems have
>> long been big-endian? Below the old "super computer" class, how
>> did the many big-endian "high performance workstations" manage to
>> clobber the performance of all little endian systems starting about
>> 1983 and continuing until now? (My answer to that is "because byte
>> order doesn't matter, but other things do.")
>
>...and if these systems had been little-endian and not big-endian,
>these systems could have been faster still.... <g>
No, if they could have been significantly faster by being little endian,
they would have been little endian. Some of those systems used CPU's that
could be used to build to either big or little endian systems.
Vernon Schryver [EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: Mac encryption algorithm?
Date: 30 Jan 2000 10:32:34 -0700
In article <871e6o$pcg$[EMAIL PROTECTED]>,
Paul Schlyter <[EMAIL PROTECTED]> wrote:
> ...
>Floating point is a whole different animal. The advantage of
>little endianness is noticeable only on some integer operations.
>On floating-point operations they vanish.
That seems to contradict the statement that started this thread.
The other gentleman wrote about floating point numbers, not integers.
>> If so, then until you offer far
>> more explicit reasons, and certainly reasons unrelated to mixed-endian
>> chimeras, I'll continue to believe you're expressing a mere religious
>> preference.
>
>Look who's talking.... <g>
Nonsense! I'm an agnostic pointing out that you've taken a theological
position. When it comes to byte order, I think that Gulliver and two
kingdoms were all fools, the two kingdoms for caring so much and for
the wrong reasons, and Gulliver for fighting. If you had simply said
little endian bytes are cleaner, prettier, or the moral choice, then I
could not have disagreed. Instead, you made the error of the Creationists,
and offered fossil evidence for a religious or at least aesthetic truth.
Besides, your fossils are of the quality of the Piltdown Man.
If you don't have an endian religion, then big endian is the right answer
on the network, but only because of history. The installed base of
big-endian protocols is by far the largest, and it is vastly harder to
change protocols than to swap bytes in hosts. If your existing data are
little endian, if the computers you've previously sold or bought are little
endian, or if the computers of your prospective new customers are little
endian, then new little endian systems will be fastest and cheapest (all
else equal). And the same for big endian if your target market, installed
base, or existing data are big endian. If you are starting from scratch
and have no existing customers, computers, or data, then you should pick
big endian, because the costs to convert between big endian network formats
and little endian host data are small but noticeable.
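That cost is just the standard byte swap; a minimal sketch with the
usual BSD socket calls:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    uint32_t host_value = 0x12345678;
    uint32_t wire_value = htonl(host_value); /* host -> network order */
    uint32_t back       = ntohl(wire_value); /* network -> host order */

    /* on a big-endian host both calls compile to nothing; on a
     * little-endian host each one is a byte swap */
    printf("%08x -> wire %08x -> %08x\n", host_value, wire_value, back);
    return 0;
}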
Vernon Schryver [EMAIL PROTECTED]
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************