Cryptography-Digest Digest #791, Volume #12 Thu, 28 Sep 00 16:13:01 EDT
Contents:
Re: Why is TwoFish better than Blowfish? (Tom St Denis)
Re: Adobe Acrobat -- How Secure? (Stephan Eisvogel)
Re: Why is TwoFish better than Blowfish? ("Joseph Ashwood")
Re: Chaining for authentication ("Joseph Ashwood")
Re: Question on biases in random-numbers & decompression (Mok-Kong Shen)
Re: Question on biases in random-numbers & decompression (Mok-Kong Shen)
Re: Question on biases in random-numbers & decompression (Mok-Kong Shen)
Re: A New (?) Use for Chi (Mok-Kong Shen)
Re: Adobe Acrobat -- How Secure? (Simon Johnson)
Re: Deadline for AES... (Mok-Kong Shen)
Re: Josh MacDonald's library for adaptive Huffman encoding (Mok-Kong Shen)
Re: PRNG improvment?? ("Douglas A. Gwyn")
Re: Chaos theory ("Douglas A. Gwyn")
Re: Chaos theory ("Douglas A. Gwyn")
Re: RSA and Chinese Reminder Theorem (DJohn37050)
Re: Question on biases in random-numbers & decompression (Bruno Wolff III)
----------------------------------------------------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Why is TwoFish better than Blowfish?
Date: Thu, 28 Sep 2000 18:01:46 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
> [EMAIL PROTECTED]=NOSPAM (Arturo) wrote in
> > I doubt it. The author is Bruce Schneier, boss-in-chief of Counterpane
> > security and writer of the "bible" of crypto, Applied Cryptography. Not
> > the kind of guy I think most likely to have been bought by the NSA.
> >
>
> Then please tell me the kind of guy you think the NSA would own.
> Terry, who seems not to have press connections? Or do you think I am
> the type, Arturo?
>
> >> At least these
> >>are my feelings about these fishy ciphers. It seems like NSA humour
> >>to give both ciphers FISHY names.
> >
> > Rather blame the author.
>
> I see, blame the author because this is not the kind of twist
> the NSA would do?
>
> >
> >> But since the idea of a cipher is security, it is plain stupid to
> >> say Twofish is better than Blowfish because Blowfish is a PC cipher.
> >> If one has a PC and is sending messages to someone with a PC, then why
> >> use a cipher that, because of its ability to run on many machines,
> >> would expose it to more attacks? Even if they algorithmically
> >> had the same level of security, which can't be proved anyway.
> >>
> > Right, maybe. But the AES is not being chosen just to play with on a
> > PC. It is intended to be used as a general-purpose standard, PC,
> > software, hardware or whatever. It has to be strong, resistant to
> > cryptanalysis, fast, have several block + key size features, and so on.
> >
>
> There are a lot of conflicting requirements. For one, make it
> secure but make it fast. For my purposes, security is a much more
> valuable requirement. The problem is you really can't measure security,
> because what is secure today is insecure tomorrow. Why does there
> need to be a cipher that runs easily on all machines? If it can run
> on a common machine very easily and is "secure", why should one drop
> it for one that works on all machines? I don't use one wrench for
> all the times I need one. But I have to admit vice grips come close
> 90% of the time.
So you're saying in the absence of real security the only solution is
to make a cipher that is big, sludgy and hard to use on various
platforms?
Seems stupid to me. I would have a cipher that's academic, hard to
break with what we know and really easy to port around. Twofish is by
far the best AES cipher. Make Twofish use 24 rounds or so and I doubt
even the slightest weakness (see Knudsen's paper on Twofish Trawling)
will show in 24 rounds..... :) Even at 16 rounds Twofish seems secure.
Its design is sound, it's a versatile cipher and it's free to use,
that's the best of all!
In another view, who cares if your method is secure if it's only usable
in a limited range of platforms?
Tom
------------------------------
From: Stephan Eisvogel <[EMAIL PROTECTED]>
Subject: Re: Adobe Acrobat -- How Secure?
Date: Thu, 28 Sep 2000 20:36:11 +0200
Dave Ashley wrote:
> It appears that if you put an
> Acrobat document out there, the security features are meaningless.
Related to this PDF security thread, history is repeating itself over at
http://cryptome.org/carnivore-mask.htm: the NYT is not the only one
falling for layered PDF files; the DoJ also employs careless and/or
unaware office personnel.
My grandma once told me that when motorized vehicles were getting
more common in the bigger cities nearby, quite a few accidents
happened in her village because people were not used to looking
left and right before entering the street with their slow horse-drawn
vehicles. After a while, the problem went away because everyone
was alert to the danger. It reminds me of infosec today, except that
looking left and right doesn't save your behind any more.
So long
--se
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Why is TwoFish better than Blowfish?
Date: Thu, 28 Sep 2000 10:53:15 -0700
> Then please tell me the kind of guy you think the NSA would own.
> Terry who seems not to have press conections or do you think I am
> the type Arturo.
You beat me to it. I was going to suggest that they would buy someone who
would continually bombard intellectual conversations with his own not so
intellectual observations, someone who seems to rather deliberately choose
conversations that are interesting, and turn them into flame wars. Does this
by any chance sound familiar to you?
Now back to the question at hand. Why is Twofish better than Blowfish? The
answer is, there is no inherent reason why Twofish is always better than
Blowfish, or why Blowfish is always better than Twofish. The difference is
what you are trying to do. Blowfish was a (rather successful) attempt at
designing a secure cipher that would match the block size expectations of
areas that previously made use of DES. Twofish is another (successful)
attempt at making a cipher that is secure given the restrictions of the AES
qualification. These different design goals lead to some very minor, but
potentially critical, differences in the trust/threat/attack/design model
needed.
Here is a very quick example, which one shouldn't rely on completely for
decisions (there are a great many more considerations); a rough data-volume
calculation follows the list:
DES: should be chosen if the protection needs are within the 56-bit limit,
when speed is a minor issue, where the level of security needs to be as
well understood as possible, and there must never be more than 2^64 unique
blocks under the same key available to an attacker.
3DES: should be chosen where the security needs are 90 bits or lower, speed
is a non-issue, and the level of security needs to be understood, but
there's potential for a small amount of progress against it; there must
never be more than 2^64 unique blocks available under the same key to an
attacker.
Blowfish: should be used where speed is an issue, where the security limit
must exceed 2^64, and there will never be more than 2^40 unique blocks
encrypted with the same key.
Twofish: should be used wherever speed is necessary, the data fits into
128-bit blocks easily, and the security level needs to exceed 2^120,
with a margin of error that may eventually drop it below anything
acceptable. There must never be more than 2^128 unique blocks under the
same key available.
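To make those limits a bit more tangible, here is a rough conversion of the
per-key block limits quoted above into raw data volumes (an illustrative
Python sketch only; the figures are simply the limits stated above, not a
new recommendation):

    # Rough data volumes implied by the per-key block limits quoted above.
    limits = {
        "DES / 3DES, 64-bit blocks": (8, 2**64),
        "Blowfish, 64-bit blocks":   (8, 2**40),
        "Twofish, 128-bit blocks":   (16, 2**128),
    }
    for name, (block_bytes, max_blocks) in limits.items():
        tebibytes = max_blocks * block_bytes / 2**40
        print(f"{name}: roughly {tebibytes:.3g} TiB under one key")

The Blowfish figure works out to about 8 TiB per key; the other limits are
far beyond anything practically reachable.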
Basically there are still situations where DES is the best choice, even
though it is considered broken for most purposes, and there are situations
where Twofish, Blowfish, 3DES, (insert all your favorite ciphers here)
should each be used. I'd recommend to anyone who needs accurate security
that they hire someone who really knows the ciphers in question,
understands the security requirements/restrictions that each one has, and
understands what you really want to do.
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Chaining for authentication
Date: Thu, 28 Sep 2000 11:26:38 -0700
These are my recommendations.
1) Put the knowable data last, or remove it completely. This way the start
of the message becomes the most random portion, and an attacker must break
one block of known data and one block of known-relation data to recover the
key, as opposed to breaking one known block. But regardless, don't put it
first.
2) Tack on a checksum of some kind at the end. Since you are dealing with
relatively small messages, it would not be reasonable to use something the
size of SHA-1 (it would increase the size by a minimum of 20%); however,
something simple like a checksum might be suitable, as long as it is
encrypted alongside the data. It really depends on your messages, but an
8-bit method would increase the size by 1 to 10% instead of 20 to 200% (a
minimal sketch follows this list).
3) Make all the messages the same length, or at least an unpredictable
length.
4) Since most of your messages appear to be longer than 128 bits/16 bytes,
move to a cipher with a 128-bit block size; you'll find that it makes the
system faster overall, unless your model requires a 64-bit block or your
data is difficult to fit into 128-bit blocks.
5) If a constant connection is maintained, send confusion blocks. As an
example, I send myself 10 PGP-encrypted messages a day, all right before I
head home from work. Most of these messages are just random garbage, but
when I have a real message, it replaces one of the random messages. This
makes traffic analysis of data going from me[work] to me[home] virtually
impossible, since the messages are nearly the same size. I do the same from
home->work. I'd recommend something similar for you: make the messages
random length, give a dummy message a last byte of value 7, and give a real
message a last byte of 0 (I have it a bit easier, since I have some idea
what was sent).
6) You might also consider having only one block encrypted with the common
key; that block would contain the session key, which would encrypt all
future blocks. This would avoid several attacks.
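As a concrete illustration of point (2), here is a minimal sketch of
appending a one-byte checksum before encryption and checking it after
decryption. The XOR-of-bytes checksum and the helper names are only an
assumed example, not a construction specifically endorsed above, and the
result must still be encrypted along with the message:

    # Minimal sketch: one-byte XOR checksum appended to the plaintext before
    # encryption and verified after decryption.  Illustrative only.
    def add_checksum(message: bytes) -> bytes:
        check = 0
        for b in message:
            check ^= b
        return message + bytes([check])   # adds one byte (1-10% on 10-100 byte messages)

    def strip_checksum(plaintext: bytes) -> bytes:
        body, check = plaintext[:-1], plaintext[-1]
        expected = 0
        for b in body:
            expected ^= b
        if expected != check:
            raise ValueError("checksum mismatch: message modified or corrupted")
        return body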
Joe
"Marc" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi..
>
> My current project involves handling of small, variable-sized messages
> (10-100 bytes in length) that are meant to be unreadable and unmodifiable
> by outsiders. There will be a great number of messages protected
> by the same key, quite possibly even with near-identical contents.
>
> The algorithm to be used is a generic 64bit block cipher. It is
> considered secure.
>
> The protected message may be longer than the plaintext, but should not
> exceed the size by much (ie less than the block size of 8 bytes would be
> preferable).
>
>
> After reading "Applied Cryptography" I am not sure how to apply the
> block cipher to accomplish my goals. I see that chaining is an
> advantage, because the messages will be near-identical. I can put
> the non-identical parts of a message at its head, and thereby achieve
> different (chained) ciphertext for each message.
>
> But how can I authenticate the contents of the message, preferably
> in the same pass? My plans are to encrypt like this:
>
> 1. only on first call: chain = known_value;
>
> 2. crypt = blockcipher_function( data_input ^ chain );
> 3. chain ^= crypt;
> 4. chain += data_input;
>
> 5. return crypt as output
>
> This appears (to me) to carry on any ciphertext modification in
> the chain register, without automatic re-synchronization.
>
> Also, attacking single bits in a plaintext-block n by modifying the
> same bits in ciphertext-block n-1 seems to be impossible because
> the (then destroyed) plaintext n-1 also influences the chain
> register (in contrast to normal CBC mode).
>
> In step 4 I intentionally avoid XOR (and prefer ADD) because during
> decryption a data block is recovered by XORing with chain, and later
> XOR-modifying chain with the result would cancel away the influence
> of plaintext blocks n-2 and older. At least that's what I think;
> am I right here?
>
>
>
> To achieve the goal of unmodifiable messages I plan to add a fixed
> constant to the message, probably 4 bytes (smaller than 1 block).
> I plan to use ciphertext stealing to avoid further bloating of the
> message size, and expect that after decrypting, the 4-byte fixed
> and known constant is intact when the message has not been tampered with,
> or broken when "any imaginable" modification has been done.
>
> Am I on the right track, or do you see obvious traps?
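For reference, here is a minimal sketch of the chained mode Marc outlines
in steps 1-5 above, with single DES from PyCryptodome standing in for the
unnamed "generic 64-bit block cipher"; the helper names and byte handling
are illustrative assumptions, not a vetted construction:

    # Sketch of the proposed chaining: crypt = E(data ^ chain); chain ^= crypt;
    # chain += data (mod 2^64).  DES is only a stand-in 64-bit block cipher
    # here, not a security recommendation.
    from Crypto.Cipher import DES      # pip install pycryptodome; key is 8 bytes

    BLOCK = 8
    MASK = (1 << 64) - 1

    def encrypt_chained(key: bytes, iv: int, blocks: list) -> list:
        ecb = DES.new(key, DES.MODE_ECB)
        chain = iv                                              # step 1: known value
        out = []
        for data in blocks:
            ct = ecb.encrypt(((data ^ chain) & MASK).to_bytes(BLOCK, "big"))
            crypt = int.from_bytes(ct, "big")                   # step 2
            chain ^= crypt                                      # step 3
            chain = (chain + data) & MASK                       # step 4: ADD, not XOR
            out.append(crypt)                                   # step 5
        return out

Decryption mirrors this: recover data = D(crypt) ^ chain, then update the
chain register the same way.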
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression
Subject: Re: Question on biases in random-numbers & decompression
Date: Thu, 28 Sep 2000 21:29:15 +0200
"D.A.Kopf" wrote:
>
[snip]
> So the original poster was correct; the inverse of an arithmetic
> compressor would be effective. Just "decompress" the random bitstream
> into the needed bin size.
Could you give a reference to an efficient decompressor
that works for arbitrary target range? Thanks.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Question on biases in random-numbers & decompression
Date: Thu, 28 Sep 2000 21:29:09 +0200
Terry Ritter wrote:
>
[snip]
> No, I'd say it's better to use a faster random source so that one can
> afford to throw some of it away. We're probably talking about throwing
> away 25 percent; even double that should be affordable.
Do you mean throwing away in the literal sense? How does one
select an optimal scheme for throwing data away? Wouldn't it be better
to do some 'condensation' of the whole stuff available,
e.g. hashing, or taking parity, etc.? Thanks.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression,sci.crypt.random-numbers
Subject: Re: Question on biases in random-numbers & decompression
Date: Thu, 28 Sep 2000 21:29:21 +0200
Bruno Wolff III wrote:
>
> I have a die-rolling Perl module that I am including here, along with
> a very simple test program. These are free for public use. I have only
> done minimal testing of this module.
>
> Besides minimizing the entropy used to generate a single unbiased roll,
> there is a function that will make multiple rolls of the same sided die
> that will try to conserve even more entropy by combining some of these
> rolls together. The test program shows that the savings for D6's are
> very roughly 30%.
Do I understand correctly that you use software to
simulate dice? How do you know that the result is
perfectly unbiased? How do you estimate the entropy
of the result? Thanks.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: A New (?) Use for Chi
Date: Thu, 28 Sep 2000 21:29:04 +0200
"Douglas A. Gwyn" wrote:
>
> John Savard wrote:
[snip]
> Basically you're reinventing a Markov model. HMM or SVD
> methods provide more reliable classification.
Dumb question: What is SVD? Singular value decomposition?
Thanks.
M. K. Shen
------------------------------
From: Simon Johnson <[EMAIL PROTECTED]>
Subject: Re: Adobe Acrobat -- How Secure?
Date: Thu, 28 Sep 2000 19:37:16 GMT
In article <[EMAIL PROTECTED]>,
Dido Sevilla <[EMAIL PROTECTED]> wrote:
> "David C. Barber" wrote:
> >
> > I am looking to distribute some documents I don't want the user to
be able
> > to alter or print. Acrobat was suggested, but IIRC, wasn't the
Steven King
> > story distributed through Acrobat, and it was broken quickly just
by loading
> > it into the full fledged Acrobat program?
> >
>
> Forget it. Not unless you add a click-wrap license, and place your
> information under the DMCA and jump through all kinds of legal
> contortions will it be even remotely possible. And even then, end
users
> will probably just laugh at it and say: "Screw the legal consequences.
> David can't see me cracking this document of his anyhow." There
really
> is no technological solution. If someone's determined enough, all
they
> need to do is fire up a screen capture program, or to open a copy of
MS
> Word or whatever, open your file reading program, share the screen
with
> your file reader and Word, and just type whatever he or she sees on
the
> file reader's window to the Word window. Nothing to it. Cryptography
> gives you a measure of control over your information, but only as much
> as the intended recipients of that information will allow you to have.
>
> This is also why the MPAA and RIAA's efforts with DVD-CCA and SDMI are
> doomed. The intended recipients are not willing to give them the
amount
> of control over their information that they want to have, so no amount
> of technology they throw at it will ever make any scheme they come up
> with more secure.
I agree fully; they might as well give up now. If you produce encrypted
media for the masses, you must also produce a reader for the masses. You
have to rely on security through obscurity to ensure your scheme isn't
cracked, and with this newsgroup in existence... your scheme ain't gonna
last that long.
So, a word of warning to DVD producers: Don't bother. :D
> --
> Rafael R. Sevilla <[EMAIL PROTECTED]> +63 (2) 4342217
> ICSM-F Development Team, UP Diliman +63 (917) 4458925
> OpenPGP Key ID: 0x0E8CE481
>
--
Hi, I'm the signature virus,
help me spread by copying me into your signature file.
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Deadline for AES...
Date: Thu, 28 Sep 2000 22:08:15 +0200
Volker Hetzer wrote:
>
> The program on Monday has an entry "AES and Beyond" which starts
> "The end of the AES development process is now in sight. The
> algorithm has been selected, and the draft standard is ready
> for public comment."
> Does this mean that by Oct. 16 at the latest the waiting will be over,
> or are the guys from the NISSC just speculating?
I am interested to know how you arrived at a 'definite'
latest date of release. Do you have some insider info?
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression,comp.theory
Subject: Re: Josh MacDonald's library for adaptive Huffman encoding
Date: Thu, 28 Sep 2000 22:07:59 +0200
"SCOTT19U.ZIP_GUY" wrote:
>
> But doing tests on my own static Huffman compression
> (if you don't count space for the table), it usually beats
> adaptive Huffman compression. But not always. Some files
> are such that adaptive Huffman compression beats the static
> even if you don't count the table space.
> Hey, it's not that hard to test this yourself, but one
> word of warning: not all adaptive Huffman compression programs are
> the same. Most use what you called a NYT followed by the ASCII
> coding of the symbol when a new symbol is encountered in
> the input stream. Mine do not.
>
> My main one starts with a full tree and then the tree
> changes based on input. There is no reason, if you are always
> compressing similar files, not to use a different starting tree
> and a different amount of adapting for that class of files. I think
> a mod like the above would beat static most of the time, since you can
> tune it to the types of files you like.
> But like I said, it is not that hard to play with it.
If one starts from nothing, then one has to use NYT
followed by ASCII or its equivalent (i.e. a 'standard'
representation of the same space), I suppose. Otherwise
I don't see how a new symbol could be transmitted.
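Just to make that convention concrete, here is a minimal sketch of the NYT
escape as I understand it; the token names are illustrative assumptions and
the actual tree maintenance (FGK or Vitter updates) is omitted:

    # Sketch of the NYT ("not yet transmitted") escape convention.  A real
    # coder would emit bit codes and rebalance the tree after every symbol;
    # here the output is an abstract token stream and updates are left out.
    def encode_with_nyt_escape(data: bytes):
        seen = set()
        out = []
        for symbol in data:
            if symbol in seen:
                out.append(("CODE", symbol))   # emit the symbol's current Huffman code
            else:
                out.append(("NYT",))           # escape code announcing a new symbol
                out.append(("RAW", symbol))    # followed by its plain 8-bit coding
                seen.add(symbol)
        return out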
If one has some approximate idea of the real frequencies,
I believe that starting with a full tree would be better
in general.
M. K. Shen
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: PRNG improvment??
Date: Thu, 28 Sep 2000 19:20:59 GMT
Paul Pires wrote:
> <[EMAIL PROTECTED]> wrote:
> > But isn't one of the criteria for a One Time Pad truly uniform
> > distribution of the key values?
Uniform distribution emphatically does *not* mean that all
values occur exactly the same number of times in any given
finite sample. Trying to enforce such an additional
criterion introduces nonrandomness that can potentially be
exploited by a cryptanalyst.
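A quick numerical illustration of the point, assuming nothing beyond an
ideal fair coin:

    import math
    # Probability that 100 ideal fair coin flips give exactly 50 heads.
    p = math.comb(100, 50) / 2**100
    print(p)   # about 0.0796: even a perfect source gives an exactly
               # balanced count less than 8% of the time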
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Chaos theory
Date: Thu, 28 Sep 2000 19:28:05 GMT
zapzing wrote:
> a sufficiently hashed chaotic RNG would not have
> any cycles.
So what? Lack of exact cycles is by no means sufficient
for security.
------------------------------
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Chaos theory
Date: Thu, 28 Sep 2000 19:27:09 GMT
Tim Tyler wrote:
> Jim Gillogly <[EMAIL PROTECTED]> wrote:
> : It can be worse than this: because chaotic systems have attractors,
> "Having attractors" is neither a defining nor a necessary property of
> chaotic systems. For a definition, see the sci.nonlinear FAQ:
> http://www.enm.bris.ac.uk/research/nonlinear/faq-[2].html#Heading12
> : In mathematics, however, chaos lies on the boundary between
> : order and disorder, and is a study of systems that have behavior
> : that's largely predictable statistically...
> Not necessarily correct - chaotic systems can be highly disordered.
Gillogly was closer to the mark.
Random chaotic systems are relatively uninteresting,
and would not be usable to construct cryptosystems in the
sense envisioned by people who ask the original question.
What they have in mind are iterated functions, which under
quite general conditions do have statistically predictable
properties that render them unsuitable for building
high-grade cryptosystems.
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: RSA and Chinese Reminder Theorem
Date: 28 Sep 2000 20:02:48 GMT
I thought d was usually the RSA decryption exponent.
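For anyone following the thread: d is the private (decryption) exponent,
and the CRT speedup uses it reduced mod p-1 and mod q-1. A toy textbook-RSA
sketch with tiny illustrative primes and no padding, just to show where d
enters:

    # Toy RSA-CRT decryption (Garner recombination); illustrative numbers only.
    p, q = 61, 53
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))    # d: the RSA decryption exponent
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)

    def decrypt_crt(c):
        m_p = pow(c, dp, p)
        m_q = pow(c, dq, q)
        h = (q_inv * (m_p - m_q)) % p
        return m_q + h * q

    m = 42
    c = pow(m, e, n)
    assert decrypt_crt(c) == m == pow(c, d, n)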
Don Johnson
------------------------------
From: [EMAIL PROTECTED] (Bruno Wolff III)
Crossposted-To: comp.compression,sci.crypt.random-numbers
Subject: Re: Question on biases in random-numbers & decompression
Date: 28 Sep 2000 20:08:19 GMT
On Thu, 28 Sep 2000 21:29:21 +0200, Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>Do I understand correctly that you use software to
>simulate dice? How do you know that the result is
>perfectly unbiased? How do you estimate the entropy
>of the result? Thanks.
Yes, I use software to simulate dice.
What I was measuring in the software was the entropy used.
The results are unbiased given the assumption that the bits provided by
/dev/random are unbiased. The reason for this is roughly that when trying
to choose a random number, a number is chosen with equal probability (given
the above assumption) from a range of numbers greater than or equal to
the range of interest. If the returned number is not in the desired range,
then the process is repeated.
/dev/random gets data from somewhat random system events, passed through
an SHA-1 filter. The driver for this device makes an estimate of the amount
of entropy it has gathered and will block if you try to read too much data
from the device.
Other sources of random binary data could be used.
The main feature of the program is that it generates random values from
ranges that are not powers of two, without introducing bias, while trying
to be as efficient as possible in using up entropy. I believe that for
single rolls the method it uses is the best possible (in terms of entropy
used), but I am not sure how to prove that.
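A minimal sketch of that single-roll method (not the actual Roll.pm code;
secrets.randbits stands in here for bits read from /dev/random):

    import secrets
    # Draw just enough random bits to cover 0..sides-1; if the value falls
    # outside the range, throw it away and try again.  Every surviving value
    # is equally likely, assuming the input bits are unbiased.
    def unbiased_roll(sides: int) -> int:
        bits = (sides - 1).bit_length()    # smallest bit count covering the range
        while True:
            r = secrets.randbits(bits)
            if r < sides:
                return r + 1               # 1..sides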
Combining rolls can also save a lot of entropy. I only added a routine for
combining rolls using the same range, because that is what is typically
seen and because specifying rolls using a mix of ranges results in
complicated calls. Since I was just making one pass through the input and
didn't want to make a complicated way for the users to specify combined
rolls, I didn't make a function in Roll.pm to do that. This gets extra
complicated if you aren't using infinite precision arithmetic and have
to make sure your numbers don't get too large.
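And a sketch of the batching idea (again not the Roll.pm code): treat k
rolls of the same die as one uniform draw from 0..sides^k - 1, so the
rejection overhead is paid once per batch rather than once per roll:

    import secrets
    # k same-sided rolls from a single uniform value in [0, sides**k),
    # decomposed into base-`sides` digits.
    def combined_rolls(sides: int, count: int) -> list:
        total = sides ** count
        bits = (total - 1).bit_length()
        while True:
            r = secrets.randbits(bits)
            if r < total:
                break
        rolls = []
        for _ in range(count):
            r, digit = divmod(r, sides)
            rolls.append(digit + 1)
        return rolls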
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************