Re: permutations & groups

2005-12-22 Thread John Denker

Ben Laurie wrote:


Good ciphers aren't permutations, though, are they? Because if they
were, they'd be groups, and that would be bad.


There are multiple misconceptions rolled together there.

1) All of the common block ciphers (good and otherwise) are permutations.
 To prove this, it suffices to require that ciphertext blocks be the
 same length as plaintext blocks, and that arbitrary text can be enciphered
 and (given the proper key) uniquely deciphered.  It's an elementary
 pigeon-hole argument:  each plaintext block must map to one and only one
 ciphertext block.

2) If you consider the set of all imaginable permutations, there will be
 ((2^B) factorial) elements, where B is the blocksize of the cipher.  Call
 this set U.  The set U is closed under composition, so in fact we necessarily
 have a group.  This is neither a good thing nor a bad thing;  it just is.

3) However, the group mentioned in item (2) is not what people mean when
 they talk about a cipher having the group property.  Let me illustrate
 using plain old DES.  There are at most 2^56 keys.  Therefore let us
 consider the set of all 2^56 /keyable/ permutations;  call this set K.
 This is a verrry small subset of the ((2^64) factorial) imaginable
 permutations.

4) The interesting question is whether K is closed under composition.  This
 is of particular interest if you are trying to cobble up an improved cipher
 with an N-times longer key, by layering N copies of the original cipher.
 This is guaranteed to be a waste of effort if K is closed under composition,
 i.e. if K is in fact a group, i.e. a subgroup of U.

 The converse does not hold;  showing that K is not closed does not suffice
 to show that layering is a good idea.
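The closure question in item (4) can be checked exhaustively for a toy cipher. The sketch below invents a 3-bit cipher (XOR in the key, then a fixed, arbitrarily chosen S-box; not any real design) and tests whether its set K of keyable permutations forms a group under composition:

```python
from itertools import product

SBOX = [3, 6, 0, 5, 7, 1, 4, 2]   # an arbitrary fixed permutation of 0..7

def encrypt(k: int, x: int) -> int:
    """Toy 3-bit cipher: XOR in the key, then apply the fixed S-box."""
    return SBOX[x ^ k]

# K = the set of keyable permutations, one per key, each stored as a tuple.
K = {tuple(encrypt(k, x) for x in range(8)) for k in range(8)}

def compose(p, q):
    """The permutation "p then q", i.e. x -> q[p[x]]."""
    return tuple(q[p[x]] for x in range(8))

closed = all(compose(p, q) in K for p, q in product(K, repeat=2))
print(f"|K| = {len(K)}, closed under composition: {closed}")
```

Because the S-box is not simply XOR by a constant, this prints `|K| = 8, closed under composition: False`: layering this toy cipher genuinely enlarges the set of reachable permutations, which is exactly the property item (4) asks about.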


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: another feature RNGs could provide

2005-12-22 Thread Matt Crawford

On Dec 21, 2005, at 0:10, Ben Laurie wrote:

Good ciphers aren't permutations, though, are they? Because if they
were, they'd be groups, and that would be bad.


A given cipher, with a given key, is a permutation of blocks.   
(Assuming output blocks and input blocks are the same size.)  It may  
be (and often is) the case that the set of all keys does not span the  
set of all possible permutations, in which case the permutations


  { E_k() | k in set of all keys }

may or may not turn out to be a group.

For blocks of n bits and keys of m bits, there are (2^n)! permutations,
but at most 2^m of them are representable by some key.  If m = n, the
representable fraction is

  2^n / (2^n)!

which for n = 64 works out (via Stirling's approximation) to roughly
10^(-3.5 x 10^20).  I don't know the probability of a randomly
selected subset of a permutation group being a group, but at these
scales, I bet it's small.
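For concreteness: the number of permutations of 64-bit blocks is (2^64)!, far too large to evaluate directly, but its logarithm drops straight out of the log-gamma function. A quick sketch:

```python
import math

def log10_factorial(n: float) -> float:
    """log10(n!) via lgamma; works for n far too large to compute n! directly."""
    return math.lgamma(n + 1) / math.log(10)

n, m = 64, 64                              # block size and key size in bits
log10_perms = log10_factorial(2.0 ** n)    # log10((2^n)!), roughly 3.5e20
log10_keyed = m * math.log10(2)            # log10(2^m keyable permutations)

print(f"log10((2^64)!)  ~ {log10_perms:.3e}")
print(f"log10(fraction) ~ {log10_keyed - log10_perms:.3e}")
```

The keyed permutations are thus a 10^(-3.5 x 10^20) sliver of the full symmetric group.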





Re: another feature RNGs could provide

2005-12-22 Thread Ben Laurie
Matt Crawford wrote:
 On Dec 21, 2005, at 0:10, Ben Laurie wrote:
 Good ciphers aren't permutations, though, are they? Because if they
 were, they'd be groups, and that would be bad.
 
 A given cipher, with a given key, is a permutation of blocks.  (Assuming
 output blocks and input blocks are the same size.)  It may be (and often
 is) the case that the set of all keys does not span the set of all
 possible permutations, in which case the permutations
 
   { E_k() | k in set of all keys }
 
 may or may not turn out to be a group.
 
 For blocks of n bits and keys of m bits, there are (2^n)! permutations,
 but at most 2^m of them are representable by some key.  If m = n, the
 representable fraction is
 
   2^n / (2^n)!
 
 which for n = 64 works out (via Stirling's approximation) to roughly
 10^(-3.5 x 10^20).  I don't know the probability of a randomly selected
 subset of a permutation group being a group, but at these scales, I bet
 it's small.

Must try not to post to crypto when I'm jetlagged! I had my wires
crossed here, what's bad is when the keys form a group, of course (as
others have also pointed out).

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
**  ApacheCon - Dec 10-14th - San Diego - http://apachecon.com/ **
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



RE: another feature RNGs could provide

2005-12-22 Thread Anton Stiglic
Actually, by definition, a cipher should be a permutation from the set
of plaintexts to the set of ciphertexts. It has to be 1 to 1 bijective
or it isn't an encryption algorithm.

Therefore, if you want an ergodic sequence of size 2^N, a counter
encrypted under an N bit block cipher will do it.

Perry

Yes, and the set of keys defines a subset of all of the possible permutations
(acting on blocks of the same size as the block cipher's).  The set of all
permutations is a group, but a subset of it is not necessarily a subgroup.

Most security proofs of modes of operation, and the like, model a block
cipher as a random permutation.
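The subset-versus-subgroup distinction is easy to see concretely in a small symmetric group; a sketch in S_3:

```python
from itertools import permutations, product

S3 = set(permutations(range(3)))   # all 6 permutations of {0, 1, 2}

def compose(p, q):
    """Apply q first, then p: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(3))

def is_closed(subset) -> bool:
    """A finite set of permutations is a subgroup iff it is closed."""
    return all(compose(p, q) in subset for p, q in product(subset, repeat=2))

identity = (0, 1, 2)
swap01   = (1, 0, 2)
cycle    = (1, 2, 0)

print(is_closed({identity, swap01}))         # True: this pair is a subgroup
print(is_closed({identity, swap01, cycle}))  # False: closure fails
```

For finite sets, closure under composition already implies the inverse and identity properties, so the closure check alone decides the subgroup question.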

--Anton




Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-22 Thread James A. Donald
--
Peter Gutmann
   In fact the real situation is even worse than this.
   Although there has been plenty of anecdotal evidence
   of the ineffectiveness of SSL certificates over the
   years, it wasn't until mid-2005 (ten years after
   their introduction) that a rigorous study of their
   actual effectiveness was performed.  This study,
   carried out with computer-literate senior-year
   computer science students (who one would expect
   would be more aware of the issues than the typical
   user) confirmed the anecdotal evidence that invalid
   SSL certificates had no effect whatsoever on users
   visiting a site.

   [...]

   A contributing factor in the SSL certificate problem
   is the fact that security warnings presented to the
   user often come with no supporting context.  Since
   web browsers implicitly and invisibly trust a large
   number of CAs, and by extension a vast number of
   certificates, users have no idea what a certificate
   is when an error message mentioning one appears.  One
   user survey found that many users assumed that it
   represented some form of notice on the wall of the
   establishment, like a health inspection notice in a
   restaurant or a Better Business Bureau certificate,
   a piece of paper that indicates nothing more than
   that the owner has paid for it (which is indeed the
   case for most SSL certificates).  Users were
   therefore dismissive of "trusted" certificates, and
   by extension cared equally little about
   "untrusted" ones.

   This user conditioning presents a somewhat difficult
   problem.  Psychologists have performed numerous
   studies over the years that examine people's
   behaviour once they've become habituated into a
   particular type of behaviour and found that, once
   acquired, an (incorrect) "whirr, click" response is
   extremely difficult to change, with users resisting
   attempts to change their behaviour even in the face
   of overwhelming evidence that what they're doing is
   wrong.

But is what they are doing wrong?

To solve the phishing problem (a man-in-the-middle
attack) using certificates, it is not enough for users
to become alarmed on encountering a missing or defective
certificate.  Businesses that may potentially be phished
must also faithfully and regularly employ certificates,
which they do not consistently do, and faithfully and
regularly sign their mail, which they almost never do.
They must, like Google or PayPal, use a single
user-memorable brand-named root for their domain names,
which new internet businesses generally do (for example
skype.com), but pre-internet businesses generally do
not.  Further, businesses must fix all their servers so
that redirects and the like are immune to cross-site
scripting attacks, must do full server-side checking of
all user input data, and must never solicit users to
click on links that are full of large amounts of hidden
gibberish.  Further, email clients should never allow a
clickable link within email, though at present all of
them do.

Since most businesses are not doing any of that, there 
is little incentive for even the most sophisticated user 
to worry too much about certificates.

Further, even if all the businesses start doing the 
right thing, we will never succeed in explaining to 
users that https://atbbr.bankofadelaide.com is safe 
while https://bankofadelaide.atbbr.com is unsafe.  

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 7lvFKmh9CI9ZQfYIy78zI4N2dRYic3ejlTGQRoao
 4R5oEEaOy/wO1wELCYESt8HByRqNhqN5UjF6Br4c3





Re: another feature RNGs could provide

2005-12-22 Thread Bill Stewart



 Good ciphers aren't permutations, though, are they? Because if they
 were, they'd be groups, and that would be bad.

Actually, by definition, a cipher should be a permutation from the set
of plaintexts to the set of ciphertexts. It has to be 1 to 1 bijective
or it isn't an encryption algorithm.


The groups-are-bad problem applies to the
mapping between keys and plaintext-cyphertext bijections,
not the mapping between plaintext and cyphertext.
You're trying to avoid the situation where
E(x,key1) == E( E(x,key2), key3) for all x
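A pure XOR "cipher" is the classic example of the situation to avoid: its keyed permutations do form a group, so for any key2 and key3 there is always a key1 satisfying that identity, and double encryption buys nothing. A minimal sketch (keys chosen arbitrarily):

```python
def E(x: int, k: int) -> int:
    """One-byte XOR "cipher": each key k gives a permutation of 0..255."""
    return x ^ k

k2, k3 = 0x5A, 0xC3      # two arbitrary keys for the double encryption
k1 = k2 ^ k3             # the single key equivalent to the composition

# The group property: encrypting twice collapses to encrypting once.
assert all(E(E(x, k2), k3) == E(x, k1) for x in range(256))
print(f"double encryption under {k2:#04x}, {k3:#04x} == single key {k1:#04x}")
```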

The mapping between plaintext and cyphertext doesn't need to be 1-1 
bijective.

1-n mappings from 1 plaintext to multiple cyphertexts
can work fine for many applications,
but have the practicality problem that the cyphertext is
longer than the plaintext, and there aren't many
applications where you really want the expansion.






Re: A small editorial about recent events.

2005-12-22 Thread dan


Clinton's Asst. A.G.
http://www.chicagotribune.com/news/opinion/chi-0512210142dec21,0,3553632.story?coll=chi-newsopinioncommentary-hed

Dick Morris
http://www.drudgereport.com/flash7.htm


--dan




RNG quality verification

2005-12-22 Thread Philipp Gühring
Hi,

I have been asked to verify the quality of the random numbers which are 
used for certificate requests that are being sent to us, to make sure that 
they are good enough, and that we don't issue certificates for weak keys.

The client applications that generate the keys and issue the certificate 
requests are the usual software landscape: OpenSSL, IE, Firefox, 
smartcards, ... and we would like to be able to accept all normally used 
software.

We are being asked either to issue the keys for our users (I don't want to), 
or alternatively to demand by contract that users have good-quality random 
numbers.  Now it might be easy to demand that the user have good random 
numbers, but the first question will likely be "and how do I do that?" or 
"which software/hardware does that?"

So I guess I have to ask the vendors whether their random numbers are good 
enough.  But what if they just say yes or no? 
I think the better way would be if I had a possibility to verify the quality 
of the random numbers used in a certificate request myself, without the 
dependence on the vendor.

From what I remember of the usual RSA key generation, the gathered random 
numbers are put into a field of the expected key size.  Then the first and 
last bits are set to 1, to make sure that the candidate has the necessary 
size and is odd (not divisible by 2).  Then it is tested for primality, and 
if the test passes, the number is used.
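That generation loop can be sketched as follows (Miller-Rabin stands in here for whatever primality test a given library actually uses; the details vary by implementation):

```python
import secrets

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality check; adequate for this sketch."""
    if n < 2 or n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = 2 + secrets.randbelow(n - 3)   # witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    """Fill a field of `bits` random bits, force the top and bottom bits
    to 1 (full size, odd), and retry until the candidate tests prime."""
    while True:
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(candidate):
            return candidate

p = random_prime(64)
print(p.bit_length(), p % 2)   # always: 64 1
```

Note that only two bits are forced; the rest of the candidate is whatever the RNG delivered, which is what motivates the extraction idea below.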

So if I extract the key, remove the first and the last bit, then I should have 
the pure random numbers that are being used. If I do that with lots of keys, 
I should have a good amount of random material for the usual statistical 
tests.

Am I right? Am I wrong?
Has anyone done that before?
Any other, better ideas?
Should I do it that way?

Best regards,
Philipp Gühring




Re: another feature RNGs could provide

2005-12-22 Thread Travis H.
On 12/21/05, Perry E. Metzger [EMAIL PROTECTED] wrote:
  Good ciphers aren't permutations, though, are they? Because if they
  were, they'd be groups, and that would be bad.

 Actually, by definition, a cipher should be a permutation from the set
 of plaintexts to the set of ciphertexts. It has to be 1 to 1 bijective
 or it isn't an encryption algorithm.

Isn't the question people normally care about whether encryption over
all keys is closed or not, and only relevant if you're trying to
increase the keyspace through multiple encryption?

The other day I was thinking of using a very large key to select a
permutation at random from the symmetric group S_(2^x).  That would be
a group, but I don't see how your knowing that I'm using a random
permutation would help you at all.
--
http://www.lightconsulting.com/~travis/
I once went to a mathematics conference.  I got the room number Pi.  It was
easy to find, but took forever to dial on the in-house phone. -- Steven Wright
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: browser vendors and CAs agreeing on high-assurance certificates

2005-12-22 Thread Thor Lancelot Simon
On Sun, Dec 18, 2005 at 09:47:27AM -0800, James A. Donald wrote:
 
 Has anyone been attacked through a certificate that 
 would not have been issued under stricter security?  The 
 article does not mention any such attacks, nor have I
 ever heard of such an attack.

Ought we forget that two such certificates were issued to a party
(identity, AFAIK, still unknown) claiming to be Microsoft?  What,
exactly, do you think that party's plans for those certificates
were -- and why, exactly, do you think they were innocuous?

  Thor Lancelot Simon    [EMAIL PROTECTED]

  We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others.  - H.L.A. Hart



Re: RNG quality verification

2005-12-22 Thread Alexander Klimov
On Thu, 22 Dec 2005, Philipp Gühring wrote:

 I have been asked to verify the quality of the random numbers which are
 used for certificate requests that are being sent to us, to make sure that
 they are good enough, and that we don't issue certificates for weak keys.

Consider an implementation which takes x = time and then uses
SHA1(hardcoded-string||x), SHA1(hardcoded-string||x+1), etc. as
starting points to search for primes.  Unless you know what the
hardcoded string is, you cannot tell that the random starting point was
not that random: it is very important to realize that randomness is
a property of the source and not of a string.
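A sketch of such an implementation (the secret string and time value are invented): the output is statistically unremarkable, yet fully reproducible by anyone who knows the hardcoded string.

```python
import hashlib

def weak_stream(secret: bytes, start: int, n_blocks: int) -> bytes:
    """"Random" bytes that are really SHA1(secret || counter) for an
    incrementing counter, as in the hypothetical implementation above."""
    out = b""
    for x in range(start, start + n_blocks):
        out += hashlib.sha1(secret + str(x).encode()).digest()
    return out

# Invented secret and timestamp, standing in for the hardcoded string and time.
stream = weak_stream(b"hardcoded-string", start=1135209600, n_blocks=64)

# Statistically unremarkable: the density of one-bits sits near 1/2 ...
ones = sum(bin(b).count("1") for b in stream)
print(abs(ones / (8 * len(stream)) - 0.5) < 0.05)   # True

# ... yet anyone who knows the secret reproduces every byte exactly.
print(stream == weak_stream(b"hardcoded-string", start=1135209600, n_blocks=64))  # True
```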

BTW, note that what you can see in the certificate request for an
RSA key is n and not p and q themselves.

-- 
Regards,
ASK



Re: RNG quality verification

2005-12-22 Thread Victor Duchovni
On Thu, Dec 22, 2005 at 10:28:47AM +0100, Philipp Gühring wrote:

 I think the better way would be if I had a possibility to verify the quality 
 of the random numbers used in a certificate request myself, without the 
 dependence on the vendor.

This is impossible. You don't see the raw random inputs: the CSR does
not disclose the input primes, it only discloses their product. The real
problem is deeper: there is no such thing as a single number that is or
is not random, a random *process* produces the same discrete outputs as
a similar non-random process.

Furthermore, it is impossible to prove an untrusted sampled process to be
securely random. To have *assurance* of randomness you need to participate
in the process, and to have access to the raw random data. This of
course you cannot do as a CA, because you don't have legitimate access
to the private keys. If you must police the users, hand them their keys
on smart cards (or other suitable hardware) that you initialize.

-- 

 /\   ASCII RIBBON    Victor Duchovni          NOTICE: If received in error,
 \ /  CAMPAIGN        IT Security,             please destroy and notify
  X   AGAINST         Morgan Stanley           sender.  Sender does not waive
 / \  HTML MAIL                                confidentiality or privilege,
                                               and use is prohibited.



Re: RNG quality verification

2005-12-22 Thread Travis H.
On 12/22/05, Philipp Gühring [EMAIL PROTECTED] wrote:
 So if I extract the key, remove the first and the last bit, then I should have
 the pure random numbers that are being used. If I do that with lots of keys,
 I should have a good amount of random material for the usual statistical
 tests.

The only thing is, you cannot test in randomness, and it is an abuse
of statistics to make predictions about individual events -- they
describe populations.  The best thing you could do is combine them
with a truly random source that you control.  Of course then your
users may not trust you, so you have to do a cryptographically strong
combination such that control of one of the inputs doesn't translate
into control of the outputs.  For example, you cannot simply XOR them,
or whoever controls one input could force the key to be anything of the
same length by choosing an appropriate stream.  Also, you could not do
this with small input spaces, or else exhaustive search is trivial (try
every input until the output is what you want).
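The XOR-forcing attack is one line: whoever contributes the second share after seeing the first can steer the combined value anywhere (key size and target value here are arbitrary):

```python
import secrets

user_entropy = secrets.token_bytes(16)   # the user's honest random share
target_key   = bytes(16)                 # value a malicious party wants to force

# The second contributor simply picks the bytes that steer the XOR
# onto the target, regardless of what the user supplied.
evil_share = bytes(u ^ t for u, t in zip(user_entropy, target_key))
combined   = bytes(u ^ e for u, e in zip(user_entropy, evil_share))

print(combined == target_key)   # True, no matter what the user contributed
```

This is why the combination step needs to be cryptographically strong (e.g. hashing all shares together) rather than a plain XOR.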

The best you could do is examine (reverse engineer) the RNGs in the
products, and whatever seeds them, and then create tests for their
nonrandom properties, and then see if the tests work.  This would,
however, not tell you anything you didn't already know once you had
examined the internals.  You might be able to find structure in their
outputs through blind application of general-purpose statistics, but
it will likely take a great deal more output, even with supposedly
sensitive statistics like the double-sided Kolmogorov-Smirnov test.

As a pathological example, my RNG may output the text of the King
James Bible, encrypted with AES-CBC using a counter as the key, and
uniquified across instances by using a processor serial number or
licence number as an IV.  Unless you knew this, you would be
hard-pressed to tell they were not random and in fact totally
predictable to anyone who knows the secret.  If a general statistic
could distinguish this from a random stream, I think it would imply a
weakness in AES-CBC.  The tests would likely fail until enough output
was generated that it started to repeat itself.  On the other hand, I
could decrypt it with a counter and see what pops out, and all I'd
have to do is distinguish the KJV from a random stream.

I'd look at seeding techniques first, as that's an easy win. 
Predictable seed -> predictable output.  If that bootstrap is wrong,
you can treat everything else as an oracle and still get a good
distinguisher.
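A sketch of that distinguisher, using Python's PRNG as a stand-in for a keygen seeded from a guessable timestamp (all values invented):

```python
import random

def keygen(seed: int) -> bytes:
    """Stand-in keygen whose only entropy is a guessable integer seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

secret_seed  = 1135209600                  # e.g. a coarse timestamp
observed_key = keygen(secret_seed)

# The attacker treats keygen as an oracle and walks the plausible seed window.
recovered = next(s for s in range(1135209000, 1135210000)
                 if keygen(s) == observed_key)
print(recovered == secret_seed)   # True
```

The rest of the pipeline can be arbitrarily strong; a thousand-candidate seed space makes the key recoverable by replay.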
--
http://www.lightconsulting.com/~travis/
You are free... to do as we tell you! --
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Comparison of secure email technologies

2005-12-22 Thread Ed Gerck


Thanks for the comments.  A new version of the working paper
"Comparison Of Secure Email Technologies X.509 / PKI, PGP, and IBE"
is available at http://email-security.net/papers/pki-pgp-ibe.htm
The Blog (link in the paper page) contains the most relevant
public input; private input is also appreciated.

Comments are welcome.

Cheers,
Ed Gerck



Re: RNG quality verification

2005-12-22 Thread Philipp Gühring
Hi Travis,

 The only thing is, you cannot test in randomness, 

That's true, but I can test for non-randomness.  And if I don't detect 
non-randomness, I can assume randomness to a certain extent.
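A sketch of such a non-randomness test, using the FIPS 140-1 monobit bounds for a 20,000-bit sample; note how easily a structured stream slips past a single test:

```python
import secrets

def monobit_fails(bits: str) -> bool:
    """FIPS 140-1-style monobit test on a 20000-bit sample: flag the
    sample unless the count of ones falls in (9654, 10346)."""
    assert len(bits) == 20000
    ones = bits.count("1")
    return not (9654 < ones < 10346)

good = bin(secrets.randbits(20000))[2:].zfill(20000)
bad  = "01" * 10000   # perfectly balanced, obviously non-random

print(monobit_fails(good))   # False (with overwhelming probability)
print(monobit_fails(bad))    # False as well: one test alone proves little
```

So a failed test does detect non-randomness, but a passed test (or even a battery of them) only rules out the specific defects the tests look for.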

 and it is an abuse 
 of statistics to make predictions about individual events -- 

Wasn't that one of the reasons why statistics was invented?

 they 
 describe populations.  The best thing you could do is combine them
 with a truly random source that you control.  

I don't control the software everyone in the world is using.

 Of course then your 
 users may not trust you, so you have to do a cryptographically strong
 combination such that control of one of the inputs doesn't translate
 into control of the outputs.  For example, you cannot simply XOR them
 or you could force the key to be anything of the same length by
 choosing an appropriate stream.  Also, you could not do this with
 small input spaces or else exhaustive search is trivial (try every
 input until the output is what you want).

The problem is that I have to live with the COTS (commercial off-the-shelf) 
software out there that is generating the certificate requests.  The only 
thing I can do is create a blacklist or a whitelist of known-bad or 
known-good software, to tell the users: use this software, or don't use that 
software.

 The best you could do is examine (reverse engineer) the RNGs in the
 products, and whatever seeds them, and then create tests for their
 nonrandom properties, and then see if the tests work.  This would,
 however, not tell you anything you didn't already know once you had
 examined the internals.  

Has anyone done this yet?

 You might be able to find structure in their 
 outputs through blind application of general-purpose statistics, but
 it will likely take a great deal more output, even with supposedly
 sensitive statistics like the double-sided Kolmogorov-Smirnov test.

Hmm, every key should deliver about 1000 bits of randomness, I guess.  How 
many bits should I collect for the tests, in your opinion?

 As a pathological example, my RNG may output the text of the King
 James Bible, encrypted with AES-CBC using a counter as the key, and
 uniquified across instances by using a processor serial number or
 licence number as an IV.  Unless you knew this, you would be
 hard-pressed to tell they were not random and in fact totally
 predictable to anyone who knows the secret.  If a general statistic
 could distinguish this from a random stream, I think it would imply a
 weakness in AES-CBC.  The tests would likely fail until enough output
 was generated that it started to repeat itself.  On the other hand, I
 could decrypt it with a counter and see what pops out, and all I'd
 have to do is distinguish the KJV from a random stream.

I guess someone would have noticed already if Microsoft, Mozilla or OpenSSL 
had done that.

Wait.  How many LOC (lines of code) does the King James Bible have?  Mozilla 
had something like 13 million LOC as far as I remember ... perhaps they 
really hid the KJ Bible in it! ;-)

 I'd look at seeding techniques first, as that's an easy win.
 Predictable seed -> predictable output.  If that bootstrap is wrong,
 you can treat everything else as an oracle and still get a good
 distinguisher.

Contrary to the normal challenge of developing a new random number generator, 
I don't have the possibility to change the existing software; I just have to 
evaluate it and find out whether it's OK or not.

I first thought about a black-box test, simply tracing in the operating 
system where the software gets its numbers from.  An open(/dev/random) 
system call at the time of key generation might be a good hint for good 
random numbers.  But as Netscape demonstrated some years ago, you can 
ch=read(stream,ch,1) one perfectly random number and overwrite it with the 
value 1 (which is not so random anymore) in a single erroneous line, 
invisibly to the operating system, failing to use the random numbers given.
So since the random numbers might be modified between gathering and use in 
the keypair, I thought that I need to evaluate the quality at the end of the 
keypair generation.

Best regards,
Philipp Gühring




RNG quality verification

2005-12-22 Thread David Wagner
Philipp Gühring [EMAIL PROTECTED] writes:
I have been asked to verify the quality of the random numbers which are 
used for certificate requests that are being sent to us, to make sure that 
they are good enough, and that we don't issue certificates for weak keys.

Go tell whoever wrote your requirements that they (to be frank) don't
know what they're talking about.  What they're asking for doesn't make
any sense.  You should ask them what problem they're trying to solve.
Don't let them try to tell you how to solve it; you just need to know
the goal, not the mechanism.

The standard solution is to just not worry about this at all, and say
that it is the user's responsibility to choose good random numbers.
If the user fails to do so, they're the one who bears the costs of their
failure, so why should you care?

If the goal is to hold the hands of your users, then you might want to
think carefully about whether you want to be in that business, what are
the most likely failure modes, and what is the best way to deal with it.
(Trying to check whether their numbers are random probably isn't the best
answer.)  Most CAs have gravitated towards the opinion that that's not
something they can control, nor do they want to, nor should they -- and
that sounds reasonable to me.  But if you want to be in the hand-holding
business, you're going to have to do an awful lot more than just check
the random numbers.



Re: RNG quality verification

2005-12-22 Thread Peter Gutmann
Victor Duchovni [EMAIL PROTECTED] writes:
On Thu, Dec 22, 2005 at 10:28:47AM +0100, Philipp Gühring wrote:
 I think the better way would be if I had a possibility to verify the quality
 of the random numbers used in a certificate request myself, without the
 dependence on the vendor.

This is impossible. You don't see the raw random inputs: the CSR does not
disclose the input primes, it only discloses their product. The real problem
is deeper: there is no such thing as a single number that is or is not
random, a random *process* produces the same discrete outputs as a similar
non-random process.

This question may be solvable, but to do it you need to step back and look at
who's set this requirement and why.  I've run into the random-number obsession
a couple of times, generally when government agencies or government
departments following policy set by government agencies are involved (example:
The CA generates the keys for the user and emails them their private key,
because we can't trust the quality of the user's RNG).  The reason I label it
an obsession is because the entire remainder of the keygen process can be
arbitrarily insecure as long as the RNG is some sort of officially approved
one (in fact in at least two cases the officially approved RNGs were quite
dubious).  So if this is merely a checkbox requirement then it can be met by
including an attribute in the request saying that the RNG was FIPS-certified,
and recording during the issue process that the request stated that it used an
approved RNG.

Easily solvable bureaucratic problems are much simpler than unsolvable
mathematical ones.

Peter.

