Cryptography-Digest Digest #522, Volume #12 Thu, 24 Aug 00 05:13:00 EDT
Contents:
Re: blowfish problem (Kaz Kylheku)
Re: Very Fast Decorrelated Cipher (Benjamin Goldberg)
Re: blowfish problem (Richard Heathfield)
Serious PGP v5 & v6 bug! ("Sam Simpson")
Re: Provably secure stream cipher (Mok-Kong Shen)
Re: A few big primes? (S. T. L.)
Re: The DeCSS ruling (Pekka =?iso-8859-1?Q?Ala=2DM=E4yry?=)
Re: SHA-1 program (cool!) (S. T. L.)
Re: Bytes, octets, chars, and characters (Keith Thompson)
Re: Provably secure stream cipher (Mok-Kong Shen)
Re: SHA-1 program (cool!) (S. T. L.)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Kaz Kylheku)
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
Reply-To: [EMAIL PROTECTED]
Date: Thu, 24 Aug 2000 07:05:50 GMT
On Wed, 23 Aug 2000 21:55:43 -0700, Spud <[EMAIL PROTECTED]> wrote:
>"Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> Kelsey Bjarnason wrote:
>> > Nope; neither of those do it. #2 refers to the size "in bytes";
>> > if a "byte" on the implementation is 16 bits, but a char 8, the
>> > implementation is still free to refer to both as having size 1.
>>
>> No, because a char has no padding bits.
>
>So? A char doesn't _have_ to have padding bits for this to hold true.
>
>Byte: 16 bits
>char: 8 bits, no padding
>sizeof(char): 1
Sheer nonsense.
A byte is the smallest unit of storage in C. Every object has a storage
representation that is a multiple of the byte. (Other than bitfields,
which are regions within objects).
The representation of a character type occupies a full byte of storage.
In this case, since a byte is 16 bits wide, a character type occupies 16 bits.
Moreover, since there are no padding bits, then each of these 16 bits is a
value bit. A value bit is one that contributes to the value of the type. The
width of a type is the number of value bits that it has; thus char must be 16
bits wide. The only way a char could be narrower than 16 would be if it had
padding bits, but that is prohibited.
>> Whatever a "byte" is,
>> the fact that a char has size precisely 1 byte means that every
>> bit of the byte is used in the representation for type char.
>
>No, it means that every bit of the _char_ is used; unless and until it is
>shown that a byte _cannot_ be bigger, physically, than a char.
Here is a word of advice: if you reply to Doug Gwyn with a ``No,'' chances are
you are setting yourself up to be burned, unless your name is something like
Clive Feather or Paul Eggert. :)
>As long as sizeof(char) still evaluates to 1, and as long as the _char_ has
>no unused bits, all of what you say above still holds true - despite the
>_byte_ having more bits than the char.
Except that the sizeof operator counts how many bytes an object occupies.
A size of 1 means one byte. One full byte, not half a byte, or a quarter
of a byte. Objects cannot have fractional sizes in C; they are regions of
bytes.
>> Keep in mind that I'm not guessing -- if there is ever any
>> reason to request an official interpretation on this point,
>
>Well, here's a reason: it's not specified by the document, and as I've
>pointed out in a couple of different places, unless it is specified, some
>*very* nasty things can potentially occur. See the issue of reading files
>written by a Pascal program which _does_ use 16 bit chars, then being read
>by a C program which uses 16 bit bytes but 8 bit chars. I still see nothing
>to indicate that this isn't perfectly legal.
Gradeschool math tells you that it's not legal.
Consider V to be the number of value bits and P to be the number of padding
bits. The following holds true for the char type in this example
implementation:
V + P = 16
P = 0
Now solve for V.
--
Any hyperlinks appearing in this article were inserted by the unscrupulous
operators of a Usenet-to-web gateway, without obtaining the proper permission
of the author, who does not endorse any of the linked-to products or services.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Very Fast Decorrelated Cipher
Date: Thu, 24 Aug 2000 07:35:50 GMT
[EMAIL PROTECTED] wrote:
>
> To go with my point of precomputed tables I present (source only,
> paper to follow) a pair-wise Decorrelated 64-bit cipher that encrypts
> at 12 cycles per byte on my K6-2 with the reference C code.
>
> Please check it out at
>
> http://www.geocities.com/tomstdenis/files/tc6a.c
>
> The source is relatively easy to follow.
>
> I will write a paper soon so please comment.
>
> Thanks,
> Tom
I've a silly question: what were the criteria behind the choice of
0x29DF548E as the polynomial?
Because the low bit isn't set in the poly, the gf product will have the
low bit set iff both multiplicands have the low bit set. This is a Bad
Thing. If both halves of the plaintext have the lowest bit clear, then
the lowest bit of each half of the ciphertext depends ONLY on the round
keys, *not* on the rest of the plaintext.
--
... perfection has been reached not when there is nothing left to
add, but when there is nothing left to take away. (from RFC 1925)
------------------------------
Date: Thu, 24 Aug 2000 08:37:15 +0100
From: Richard Heathfield <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
Spud wrote:
>
[snips]
>
>
> No, the byte is 16 bits, char is 8 bits, CHAR_BIT is 8. There has *still*
> been *no* indication offered anywhere that byte and char are necessarily the
> same size (other than the determination that sizeof(char), measured in
> bytes, evaluates to 1 - which is true of the implementation I discussed).
Not quite true. We've had a member of the ANSI C committee saying that
the intent of the Standard is to say that char and byte are necessarily
the same size. Now, please produce either a quote from the Standard
which makes it clear that he is wrong, or produce an ANSI or ISO C
committee member (a real member, not an "observing member") who says
he's wrong. I can't help thinking that the "other" group here should be,
not sci.crypt (who surely must have lost interest by now?), but
comp.std.c; however, I'm loath to cross-post this still further.
--
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
65 K&R Answers: http://users.powernet.co.uk/eton/kandr2/index.html (32
to go)
------------------------------
From: "Sam Simpson" <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,comp.security.pgp.discuss
Subject: Serious PGP v5 & v6 bug!
Date: Thu, 24 Aug 2000 08:48:32 +0100
Original from Dr R.Anderson, uk-crypto mailing list.
Rgds,
--
Sam Simpson
Comms Analyst
http://www.scramdisk.clara.net/ for ScramDisk hard-drive encryption &
Delphi Crypto Components. PGP Keys available at the same site.
=====Original Message=====
Ralf Senderek has found a horrendous bug in PGP versions 5 and 6.
It's of scientific interest because it spectacularly confirms a
prediction made by a number of us in the paper on `The Risks of Key
Recovery, Key Escrow, and Trusted Third-Party Encryption'
<http://www.cdt.org/crypto/risks98/> that key escrow would make it
much more difficult than people thought to build secure systems.
He's written a paper on his work and it's at
http://senderek.de/security/key-experiments.html
Since NAI joined the Key Recovery Alliance, PGP has supported
"Additional Decryption Keys" which can be added to a public key. The
sender will then encrypt the session key to these as well as to your
main public key. The bug is that some versions of PGP respond to ADK
subpackets in the non-signed part of the public key data structure.
The effect is that GCHQ can create a tampered version of your PGP
public key containing a public key whose corresponding private key is
also known to themselves, and circulate it. People who encrypt traffic
to you will encrypt it to them too.
The problem won't go away until all vulnerable versions of PGP are
retired, since it's the sender who is responsible for encrypting to
the ADKs, not the recipient.
In the meantime there might be a nasty denial-of-service attack in
which bad guys upload tampered versions of everybody's public keys to
all the public keyrings.
Ross
> PS: my student Steve Early has trawled the PGP-6.5.1i-beta2 source
> code and found the bug:
>
> In file libs/pgpcdk/priv/keys/keys/pgpRngPub.c, I see two functions:
> one called ringKeyFindSubpacket(), which finds a subpacket from a
> self-signature packet, and ringKeyAdditionalRecipientRequestKey(),
> which uses ringKeyFindSubpacket() to search for ADK subpackets.
>
> ringKeyFindSubpacket() is declared as follows:
>
> PGPByte const *
> ringKeyFindSubpacket (RingObject *obj, RingSet const *set,
>         int subpacktype, unsigned nth,
>         PGPSize *plen, int *pcritical, int *phashed,
>         PGPUInt32 *pcreation, unsigned *pmatches, PGPError *error);
>
> In particular, the "phashed" parameter is used to return whether the
> subpacket was in the hashed region. Now, looking at the call in
> ringKeyAdditionalRecipientRequestKey() I see this:
>
> krpdata = ringKeyFindSubpacket (obj, set,
>         SIGSUB_KEY_ADDITIONAL_RECIPIENT_REQUEST, nth, &krdatalen,
>         &critical, NULL, NULL, &matches, error);
>
> ...the "phashed" value isn't checked (or even asked for)!
>
> Fixing the bug, and generating BIG WARNINGS when ADKs are found in the
> non-hashed area, should be trivial.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Provably secure stream cipher
Date: Thu, 24 Aug 2000 10:14:04 +0200
[EMAIL PROTECTED] wrote:
>
> Here is an idea for a somewhat slow provably secure stream cipher (secure
> as long as the underlying prng is statistically random, though it need not
> be wholly random).
>
> Based (yet again) on the simple pair-wise decorrelated function
>
> f(x) = ax + b
>
> in GF(2^n) (n = 8, 32, 64 ...)
>
> The idea is that the prng will make the (a, b) values for each 'x' that
> is being encrypted. Since this is pairwise decorrelated and the values
> (a, b) are made anew with each output this is provably secure if the
> stream (a, b) is not guessable (in this case without any known stream).
Could you please explain 'this' in the phrase 'this is
pairwise decorrelated'? Further, with the little that I
(presume to) know about decorrelation, 'decorrelated'
means a decorrelation (to some order) of zero. Which
value is zero in the present case? Thanks.
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (S. T. L.)
Date: 24 Aug 2000 08:41:40 GMT
Subject: Re: A few big primes?
<< I was wondering where I could pick up a few big primes (around 256-512
> bits), preferably in an easy to read (i.e. hexadecimal or raw binary, not
> decimal or BCD) format. Alternatively, how do programs like PGP generate
> primes? Then I could do a DIY job of it.>>
Primes under 512 bits can be generated by a TI-92+ calculator, actually.
-*---*-------
S.T.L. My Quotes Page * http://quote.cjb.net * leads to my NEW site.
My upgraded Book Reviews Page: * http://sciencebook.cjb.net *
Optimized pngcrush executable now on my Download page!
Long live pngcrush! :->
------------------------------
From: Pekka =?iso-8859-1?Q?Ala=2DM=E4yry?= <[EMAIL PROTECTED]>
Subject: Re: The DeCSS ruling
Date: Thu, 24 Aug 2000 08:49:03 GMT
Ichinin wrote:
>
> > Pekka Ala-Mäyry wrote:
> > So I dare to say that it is illegal also in Europe.
>
> *EEEEEEEEE* wrong...
>
> You seem to forget that EU "law" is only GUIDANCE, it is up
> to EACH INDIVIDUAL STATE to IMPLEMENT that GUIDANCE in law.
In principle yes, but it still seems to be quite strict guidance.
Maybe our legislators are too soft and adopt everything "given from above".
> Define reverse engineering?
>
> This is cut from 1960:729 (Swedish copyright law)
>
Here is a link to the Finnish version (in Swedish :)
http://finlex.edita.fi/dynaweb/stp/1995fs/@ebt-link?showtoc=false;target=IDMATCH(id,19950446.fs)
And for the Finnish version (for masochists abroad), replace all "fs"s
with "sd"s
([EMAIL PROTECTED] ...19950446.sd)
Funny thing is that the laws seem to be almost identical.
The Article 6 Decompilation, paragraph 2 a), b) and c) limit the
use of the information obtained by disassembly. I think this is
the part which has lots of room for interpretation (and which I
thought the law enforcement utilized).
a) Is the DeCSS program used for goals other than to achieve the
interoperability of the independently created computer program?
b) Has it been given to others, except when necessary for the
interoperability of the independently created computer program?
c) Is it used for the development, production or marketing of a
computer program substantially similar in its expression, or
for any other act which infringes copyright
The "EULA" (End User License Agreement) cannot restrict the right
to disassemble.
> On the right side of the pond (= .SE)
East of (Sw)eden
--
pam
pääesikunta palomuuri pankkiautomaatti pedofiili PIC-ohjelmointilaite
pernarutto PGP panssariluoti PIN plutonium poliisi-TV polttopullo
proletariaatti Pu-239 Pu-240 PUK riistäjäkapitalisti rikastettu uraani
------------------------------
From: [EMAIL PROTECTED] (S. T. L.)
Date: 24 Aug 2000 08:47:07 GMT
Subject: Re: SHA-1 program (cool!)
<<That's more than 2^31 bits. Check that you're using unsigned integers to
encode the bit count (and when you've fixed that, check that your
implementation works for files larger than 512 MB, i.e. 2^32 bits).>>
Well, for the pak0.pak file, I checked my program against another program (the
<500byte assembler SHA1.COM by Robert G. Durnal) and it agreed with my
SHA1.EXE's hash, and not with the bokler.com HASHcipher. It does seem
plausible that this bug is triggered by files > 2^31 bits (~256MB), because
HASHcipher and SHA1.EXE agree for a 100MB file of all "a"s. Since I use
unsigned long ints to store filesize (both in bits and bytes), I believe that
it is my program that is correct and HASHcipher which is wrong. However, I
hadn't considered the bit/byte thing (again?!? DARN IT) explicitly and probably
my program *will* fail for files larger than 2^32 bits. I'll have to check
into it more closely. While I'm at it, could anyone independently hash a few
big files using some other program that I haven't heard of? A hash of
Half-Life's pak0.pak would be useful, as would a hash of a file consisting of
one billion instances of the letter "a". :-P
-*---*-------
S.T.L. My Quotes Page * http://quote.cjb.net * leads to my NEW site.
My upgraded Book Reviews Page: * http://sciencebook.cjb.net *
Optimized pngcrush executable now on my Download page!
Long live pngcrush! :->
------------------------------
From: Keith Thompson <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c
Subject: Re: Bytes, octets, chars, and characters
Date: 23 Aug 2000 18:09:08 -0700
[EMAIL PROTECTED] (Paul Schlyter) writes:
> In article <[EMAIL PROTECTED]>,
> Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
>
> > Paul Schlyter wrote:
> >
> >> Yep -- a word is X bits on an X-bit machine, that's the rule.
> >
> > That is a tautology. The real question is, what is meant
> > by "X" in calling a computer an "X-bit machine"? I have
> > encountered real cases where no matter what you use for X,
> > it could be disputed.
>
> X = the width, in bits, of the data bus.
I think the size of the integer register(s) is a more useful measure
of the "bitness" of a machine. There was a posting on, I think,
comp.arch not too long ago listing various characteristics of several
architectures, including data bus width, register size, etc; register
size almost universally matched what most people would call the
"bitness" of the architecture. Note that this makes the VAX and 68k
(all versions) 32-bit machines. (Whether this is called a "word" is
another matter.)
To get back to the topic of the newsgroup, the size of the integer
registers is *usually* the most appropriate size for type int. The
only reasons I can think of for using a different size are (1)
compatibility with another, usually smaller, architecture, and (2) the
integer registers are smaller than 16 bits.
--
Keith Thompson (The_Other_Keith) [EMAIL PROTECTED] <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Welcome to the last year of the 20th century.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Provably secure stream cipher
Date: Thu, 24 Aug 2000 11:07:59 +0200
wtshaw wrote:
>
> So, you want neither pure block nor pure stream to get the best
> encryption. My GVA answers that with variable blocks and inductive
> output, but the cost, slight as it is, is in some increased length.
> Rather than blocks, maybe I say the data is handled in batches.
Yes, it is unreasonable to sharply separate stream and
block encryption. An optimum design would advantageously
incorporate features that are customarily considered to
belong to one class or the other. In a previous thread it
was discussed that control information can be inserted
into the ciphertext stream to dynamically change the
encryption process. Your GVA is an example that
can serve to very neatly explain the idea. For block
encryption one can send an extra block after a number
n of normal blocks to cause e.g. a change of the key,
thus defeating those analysis techniques that depend
on a huge amount of material encrypted with the same
key. (Of course there are other means of changing the
key dynamically.)
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (S. T. L.)
Date: 24 Aug 2000 09:03:44 GMT
Subject: Re: SHA-1 program (cool!)
<<It seems to me that you really pushed things to the limit by testing a 300 MB
file.>>
Yeah. In another post someone suggested that tripping over the 2^31 bit
boundary triggered the bug (and as I now know, it was in HASHcipher and not
SHA1.EXE). This is probably because the bokler.com people weren't careful and
didn't use unsigned when they should have. I explicitly used unsigned long
ints everywhere, which means my program is good at least up to 2^32 bits.
Beyond that, it will fail (to my knowledge); I never had the intention of being
fully FIPS 180-1 compliant, but because I didn't think in terms of bits and
bytes clearly (for the second time, argh), SHA1.EXE's noncompliance will begin
at 512MB and not 4GB, as I thought. Crud. 4GB was a reasonable limit, but
512MB might be a little low.
<<If there is some way of showing that your program is good for files
below a certain size, it would still be very useful for my purposes. So how
do we prove that, to a statistically acceptable level of certainty?>>
Well, SHA1.EXE passes all three Appendices in the SHA-1 spec, and also agrees
with SHA1.COM and bokler.com's HASHcipher for a 100,000,000 "a" file. And
SHA1.EXE agrees with SHA1.COM for the pak0.pak (300MB) file. I still believe
that SHA1.EXE provides bulletproof hashes for files under 512MB, but beyond
that my implementation _will_ screw up because I do refer to the bits in
origbits, and that's only an unsigned long int which is only guaranteed to be
32 bits (and DJGPP doesn't provide any more, incidentally). I'll see what I
can do about breaking the 512MB limit, but realistically I don't think that
that many people have 512MB files on their hard drives. Not even I do. Yet,
that is. :-P
<<NIST has some test vectors (beyond those in FIPS-180) that may be useful,
though they look kind of hard to use:
http://csrc.nist.gov/cryptval/shs/sha1-vectors.zip>>
Oddly enough, the bokler.com HASHcipher is certified. Maybe both SHA1.EXE and
SHA1.COM are failing... crud. I'll look at these test vectors; thanks!
<<I would suggest trying it on a different compiler and/or library. May be a
problem with the library handling certain size files.>>
A good suggestion, but I believe that DJGPP is fine, thank goodness. :-D
<<Also check the method of appending the length code.
You may be overwriting data with the length or vice versa.
Or you may be overwriting the low 32 bits of the length with
the high 32 bits. Also possible is not shifting properly since
the 'normal' word size is 32 bits and the length is 64 bits
and expresses the total length of data in bits.>>
Well, I've always made the crocky assumption in SHA1.C that the file length
will be less than 2^32 bits, so the first word is automatically all zeroes.
(Hence I know for certain that it will fail for files > 512MB.) Otherwise,
I think that I append the length properly.
Perhaps I'll need to include a bignum library or make a clever hack so that I
can deal with files that are over 2^32 bits. Argh. Thanks for everyone's
help.
-*---*-------
S.T.L. My Quotes Page * http://quote.cjb.net * leads to my NEW site.
My upgraded Book Reviews Page: * http://sciencebook.cjb.net *
Optimized pngcrush executable now on my Download page!
Long live pngcrush! :->
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************