Cryptography-Digest Digest #545, Volume #13 Thu, 25 Jan 01 00:13:01 EST
Contents:
Re: Snake Oil (Splaat23)
Re: Random stream testing. (long) ("Matt Timmermans")
Re: Dynamic Transposition Revisited (long) ("Matt Timmermans")
Re: Snake Oil (Eric Lee Green)
"How do we know an Algorithm is Secure?" (was RC4 Security) (William Hugh Murray)
Re: Snake Oil (William Hugh Murray)
Re: Snake Oil (Paul Rubin)
----------------------------------------------------------------------------
From: Splaat23 <[EMAIL PROTECTED]>
Crossposted-To: or.politics,talk.politics.crypto,misc.survivalism
Subject: Re: Snake Oil
Date: Thu, 25 Jan 2001 03:35:16 GMT
In article <[EMAIL PROTECTED]>,
Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
> William Hugh Murray wrote:
> >
> > Anthony Stephen Szopa wrote:
> >
> > > Richard Heathfield wrote:
> > > >
> > > > Anthony Stephen Szopa wrote:
> > > > >
> > > > <snip over 200 lines>
> > > > >
> > > > > So that's all I have to say for a while.
> > > >
> > > > Is that a promise?
> > >
> > > Here is a guy who spits on the souls of anyone for no damned reason.
> > >
> > > I told you that I am the inventor that will save people tens or
> > > hundreds of billions of dollars in lost revenue and you verbally
> > > shit on me with your sarcasm.
> >
> > > <snip>
> >
> > > Gee, you didn't get any more significant information from me about
> > > my claim?
> > >
> > > Too bad.
> >
> > My Daddy told me, "Son, if it looks like snake oil, tastes like snake
> > oil, and smells like snake oil, it is usually snake oil." My Daddy
> > was a wise man and he loved me very much. He rarely misled me.
> >
> > We see a lot of claims here that look like snake oil. Sci.crypt seems
> > to attract more than its fair share of snake oil. We are very
> > sensitive to snake oil and have a very low tolerance for it. This is
> > not a place for unsupported assertions. It is not a place for the
> > discussion of trade secrets or pending patents. These might be snake
> > oil; it is not possible to tell. It is not personal, it is just
> > sci.crypt.
> >
> > We have noticed that snake oil salesmen have very thin skins; they
> > are easily provoked and become very defensive. One can often detect a
> > snake oil salesman by taking a little poke at him and watching to see
> > how he walks and talks. If he walks like a snake oil salesman and
> > talks like a snake oil salesman, he may be a snake oil salesman; one
> > cannot tell for sure. Sci.crypt attracts a lot of snake oil salesmen
> > and we tend to have a very low tolerance for them. We have a low
> > tolerance for people who are overly defensive. It is not personal, it
> > is just sci.crypt.
> >
> > Being a great inventor, humanist, philosopher, or philanthropist is
> > not much of a defense here. We might not crucify Jesus Christ here
> > but we would certainly contribute the hammer and the nails. It is not
> > personal, it is just sci.crypt.
> >
> > Please do not take it personally or go away mad; just go away. There
> > are probably lots of forums that will appreciate you for the great
> > human being that you are. We are not one of them. It is not personal,
> > it is just sci.crypt.
>
> It's 2001.
>
> You cannot lie anymore these days and not get caught.
>
> Take my encryption software. Give it a go. Prove to us you can
> break it. Give us your most tenuous reasonable explanation on how you
> would go about it.
>
> Or do you just talk about snake oil having never known what it really
> is?
>
Sounds great. I think I'll try. First, though, what is the reward for
cracking your weak encryption? Are you going to offer us anything for
our time? Certainly there is no data worth cracking your code (because
you have no real users).
And, unless you're offering more money, we'd like the source code as
well. Trust us, we're not like Microsoft - we won't steal your code and
sell it ourselves (even if it was good). It would just save us a lot of
time and assure you that enough people would be able to have a good
look. Because I'm sure we'll need that - you seem to have a hearing
problem, as nothing we say here seems to have any impact - and we'll
need not one but multiple attacks on your code before you'll see the
light of truth. And the truth is this: very few people can create truly
secure systems/algorithms from scratch. I am certainly _not_ one of
them. But just as surely as I couldn't create something as secure as
Twofish or SHA-1, I'm sure you can't either.
Anyway, meet these requirements and I'm sure you'll get a thorough
analysis of your method. Think of the press it'll make - someone here
will try everything and write a report on how utterly secure OAL3
is ;)) lol.
Sorry, needed a good laugh.
- Andrew
Sent via Deja.com
http://www.deja.com/
------------------------------
From: "Matt Timmermans" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Random stream testing. (long)
Date: Thu, 25 Jan 2001 03:58:38 GMT
"Paul Pires" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> First off, the notion that this is testing the randomness
> of the data. It seems obvious to me that it is only
> comparing the results to an expectation of how
> random data would look according to very narrow
> criteria. It says nothing about the data's randomness
> per se, but only how it compares to a theoretical result,
> according to very limited criteria. No test can detect
> randomness.
Well, sort of. What randomness tests actually do is compare the PRNG output
against multiple kinds of non-random expectations. The difference is only
in the numbers. For example:
> Second, the notion of evaluating a single test result
> seems bizarre. The documentation for these tests
> invariably say something like "A value of 99% is an
> indication that it is most probably not random"
P=.99 means that if you were to run the test on real random data, you would
have only a 1/100 chance of getting a value as "high" as the one you did.
Yes. If you do a hundred of these tests on real random data, you would
expect one result in that range. Perhaps you were just "unlucky". You can
remove a lot of this ambiguity by using a large sample size. For example:
Let's say your data has a simple bias: P(1) = .75 and P(0) = .25. An
"average" sample of 8 bits will have six 1s and two 0s, and you will get a
P-value out of a bias test of 0.86 or so. An "average" sample of 16 bits,
though, will have twelve 1s and four 0s. Running the bias test on this
larger sample will return a P-value of .962.
As you increase your sample size, the expected results for non-random data
get much closer to 0 or 1.
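To make those numbers concrete, here is a minimal sketch (my own, not from
the original post) of a one-sided bias test using the binomial CDF, taking
the P-value to be the probability that truly random data would show strictly
fewer 1s than the sample did -- the convention that reproduces the figures
above:

```python
from math import comb

def bias_test_p(ones: int, n: int) -> float:
    """P-value of a one-sided bias test: the probability that an
    n-bit sample of truly random data contains strictly fewer 1s
    than we observed."""
    return sum(comb(n, k) for k in range(ones)) / 2 ** n

# The two samples from the example: six 1s out of 8 bits,
# and twelve 1s out of 16 bits.
print(round(bias_test_p(6, 8), 2))    # 0.86
print(round(bias_test_p(12, 16), 3))  # 0.962
```

Doubling the sample size pushes the P-value of the same 3:1 bias from 0.86
to 0.962, exactly as described.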
You're testing PRNGs, right? So you can make as much data as you like. I
would suggest using 100 meg samples, and accepting as "OK" any result
between 0.0001 and .9999. Anything outside of that would make me nervous,
unless I had several hundred test results. If you get something outside of
that "good range", but you're running a whole lot of tests, then make a new
or larger sample and run it again -- if it is an artifact of your PRNG, then
it should be reasonably consistent.
It seems that you simply stumbled upon a bad description of how to use
randomness tests. I seem to remember that Marsaglia included a very
nice description of how to interpret the results of his DIEHARD suite.
> Terry Ritter has made a convincing argument
> that data sets should be examined for any deviation from
> a random expectation including the case were the results
> are "too good".
That is exactly right -- if you run the same tests on multiple samples, you
should expect a uniform distribution of P-values. If you run 1000 tests and
don't get any results outside of .25-.75, it is very likely that your data
isn't random.
------------------------------
From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: Dynamic Transposition Revisited (long)
Date: Thu, 25 Jan 2001 04:10:52 GMT
"Terry Ritter" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> That's the part that is too cute for me: You can say you have an OTP,
> so users think they have "mathematically proven" security, and then,
> later, if we find out that the pad really is predictable, you announce
> that the damage really was not due to the OTP after all.
It's like saying you have Rijndael, but you left out the S-boxes.
> We are discussing a security proof. If you want a security proof, you
> need to prove the assumptions. If OTP assumes a random pad, then you
> need to be able to prove that pad is random. In reality, we cannot
> measure such a thing, and probably cannot prove it.
You don't need to prove it to anyone but yourself, so you can base the
proof on the way the key was generated, rather than the statistical
properties of the key itself. Note -- the same thing is true with any
cipher. If you use some black-box program to generate the key, you just
have to trust that the key is unpredictable. If the key is predictable,
brute-force attacks might suddenly become quite feasible. We have seen
examples of this as well, but you don't use those examples to say that the
cipher is insecure.
> It may be possible to have equipment which has a pretty decent proof
> of strength. In reality, quantum events are fairly small, and sensing
> them typically requires some electronics. That means the electronics
> can go bad, or oscillate, or have a noisy connection, or work
> intermittently, or whatever. Your task is to prove absolutely beyond
> any shadow of a doubt that nothing like that happened.
>
> I am very familiar with electrical noise, and I have promoted sampling
> a source which has a known non-flat distribution. Then, if we get the
> expected distribution, we may leap to the conclusion that everything
> has worked OK. However, since there can be no test for abstract
> randomness, it is extremely difficult to make the leap to
> cryptographic levels of assurance. Any number of subtle things might
> happen which might not be detected, and yet would still influence the
> outcome. We can be very sure, but is that really mathematical proof?
In all likelihood, that would be a very practical generator for OTP keys,
and it would be reasonably easy to purposely underestimate the amount of
entropy you're getting. If you want proof, though, you should do something
different. For instance:
Generate a photon, and polarize it vertically. Then measure its
polarization at 45 degrees from the vertical. Repeat.
By measuring the transparency of your optics, the sensitivity of your
photomultipliers, and the orientation of your polarizers, you can place a
very confident lower bound on the rate of real randomness.
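As a toy model of that setup (my own sketch; the loss and efficiency numbers
are invented for illustration, not measured): a vertically polarized photon
measured at 45 degrees passes with probability cos^2(45 deg) = 1/2, so each
*detected* photon yields one full bit of entropy, and the measurable hardware
losses give the lower bound on the bit rate:

```python
import random
from typing import Optional

# Assumed hardware figures (made up for this sketch):
TRANSPARENCY = 0.90      # fraction of photons surviving the optics
DETECTOR_EFF = 0.60      # photomultiplier quantum efficiency
PHOTON_RATE = 1_000_000  # vertically polarized photons emitted per second

def detected_bit() -> Optional[int]:
    """One trial: a photon either gets lost (None) or is detected.

    A detected photon passes the 45-degree polarizer with probability
    exactly 1/2, giving one unbiased random bit."""
    if random.random() > TRANSPARENCY * DETECTOR_EFF:
        return None                      # lost in the optics or detector
    return int(random.random() < 0.5)    # outcome of the 45-degree measurement

# Lower bound on random bits per second, from measurable quantities only:
bits_per_second = PHOTON_RATE * TRANSPARENCY * DETECTOR_EFF
print(f"guaranteed >= {bits_per_second:,.0f} random bits/second")
```

The point of the design is that the bound depends only on transparency,
detector efficiency, and photon rate -- all independently measurable -- and
not on any statistical test of the output.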
------------------------------
From: [EMAIL PROTECTED] (Eric Lee Green)
Crossposted-To: or.politics,talk.politics.crypto,misc.survivalism
Subject: Re: Snake Oil
Reply-To: [EMAIL PROTECTED]
Date: Thu, 25 Jan 2001 04:34:27 GMT
On Thu, 25 Jan 2001 03:35:16 GMT, Splaat23 <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>> Take my encryption software. Give it a go. Prove to us you can
>> break it. Give us your most tenuous reasonable explanation on how you
>> would go about it.
>>
>> Or do you just talk about snake oil having never known what it really
>> is?
>
>Sounds great. I think I'll try. First, though, what is the reward for
>cracking your weak encryption? Are you going to offer us anything for
>our time? Certainly there is no data worth cracking your code (because
>you have no real users).
>
>And, unless you're offering more money, we'd like the source code as
>well. Trust us, we're not like Microsoft - we won't steal your code and
You mean this guy's still hanging around wasting our time?
When I wanted encryption for the product that my employer is shipping
shortly, I didn't mess around with any amateur junk algorithms. The
symmetric encryption is Rijndael; it uses Diffie-Hellman for key
exchange (that will become RSA soon -- the RSA patent has now expired,
and at the time it did not make sense to license B-Safe given that the
patent was going to expire soon); and it uses MD5 for message digests.
Why in the world should I use some algorithm that as far as I can tell
is snake oil? ("As far as I can tell" meaning, as far as I know, no
reputable cryptanalysts have examined the source code and algorithm
and attempted to find breaks in it.) And why would I *PAY* for the
privilege, when Rijndael and RSA and MD5 are *free*? What, you think
I'm a moron?
--
Eric Lee Green Linux Subversives
[EMAIL PROTECTED] http://www.badtux.org
------------------------------
From: William Hugh Murray <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: "How do we know an Algorithm is Secure?" (was RC4 Security)
Date: Thu, 25 Jan 2001 04:38:55 GMT
EE wrote:
> I have two questions:
>
> 1. How can someone know the amount of bits of an encryption?
> 2. How can someone determine if an encrypted file or an encryption algorithm
> is secure?
First, keep in mind Courtney's first law: "Nothing useful can be said about the
security of a mechanism except in the context of a specific application and
environment." That said, cryptographic strength is a function of the complexity
of the algorithm and the secrecy, length, life, and use of the key. However,
we "know" these things mostly in the breach.
Most of us learn these things from a trusted source. We get our encryption
software from a trusted, usually open, source in trusted packaging. For
example, we download PGP from a known server and we check the (cryptographic)
signature on it. Alternatively, we order it from NWA or Egghead, and get it on
a shrink-wrapped CD. Then we read the documentation.
If one is in the defense establishment, one gets one's encryption from the NSA,
the authorized agency for that purpose. One usually gets it in trusted
hardware. One knows personally or checks the credentials of the agent
delivering the machine.
We never really know that an encryption algorithm is strong. Rather, we know
that it is not known to be weak. We do not really know that the DES is
"secure." Rather, we know that the US Government asserts that the cheapest
known attack against the DES is an exhaustive attack against the key. They tell us
that the effective length of the key is 56 bits and we can calculate how much
work and time is required to exhaust that key space. We can measure the
complexity of the DES in terms of the number of steps that it goes through and
we can compare it to other algorithms in those terms. However, we can never
really know with certainty that the complexity is effective in hiding the
message. We know that DES has been in use for 25 years, that many have reported
on their efforts to analyze it and that no one has ever reported recovering a
message without benefit of the key. That is usually good enough for government
work.
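The key-exhaustion arithmetic is easy to do for oneself. A small sketch (the
keys-per-second rates below are illustrative assumptions, not benchmarks):

```python
KEYSPACE = 2 ** 56  # effective DES key length: 56 bits

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Illustrative search rates (assumptions, not measured figures):
for name, keys_per_sec in [("single machine", 1e7),
                           ("dedicated key-search hardware", 9e10)]:
    worst = KEYSPACE / keys_per_sec  # time to try every key
    expected = worst / 2             # on average the key is found halfway through
    print(f"{name}: expected {expected / SECONDS_PER_YEAR:.2g} years")
```

The spread between the two rates is the whole story: exhausting 2^56 keys is
centuries of work for one ordinary machine, but days for purpose-built
hardware, which is why the calculation must always be redone against the
attacker you actually expect.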
We know that an RSA public key implies the private key. We know one way to
calculate the private key from the public key, but it is computationally
infeasible to do so. We do not know any easier way to do it, but we do not know
that there is not one. We know that in more than twenty years no one has become
rich and famous by coming up with one.
We can estimate the cost of attack in terms of the amount of work, access,
indifference to detection, special knowledge, and time to detection and
corrective action (WAIST). We can compare one algorithm to another in terms of
these estimates. We can compare the cost of such an attack to the value of
success and to the cost of alternate attacks. From these estimates we can
conclude that the cost of attack against the encryption algorithm is orders of
magnitude higher than that of alternative attacks. That is to say, it is almost
always cheaper to learn the key by bribing someone who already knows it than to
learn it by trying all possibilities.
Most of us never do these calculations for ourselves; rather, we rely upon what
the cryptographers here tell us. In other words, we rely on authority. The
authorities that I rely upon include many of the individuals who write here, and
such laboratories as RSA, IBM, NSA, NIST, MIT, Stanford, et al. I rely on
Hellman, Diffie, Matyas, Coppersmith, Rivest, Kaliski, Shamir, Biham, Schneier,
Ritter, et al.
I hope this answers your questions and serves the purposes for which you raised
them.
William Hugh Murray, CISSP
------------------------------
From: William Hugh Murray <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: or.politics,talk.politics.crypto,misc.survivalism
Subject: Re: Snake Oil
Date: Thu, 25 Jan 2001 04:44:28 GMT
Splaat23 wrote:
>
> Sounds great. I think I'll try. First, though, what is the reward for
> cracking your weak encryption? Are you going to offer us anything for
> our time? Certainly there is no data worth cracking your code (because
> you have no real users).
>
> And, unless you're offering more money, we'd like the source code as
> well. Trust us, we're not like Microsoft - we won't steal your code and
> sell it ourselves (even if it was good). It would just save us a lot of
> time and assure you that enough people would be able to have a good
> look. Because I'm sure we'll need that - you seem to have a hearing
> problem, as nothing we say here seems to have any impact - and we'll
> need not one but multiple attacks on your code before you'll see the
> light of truth. And the truth is this: very few people can create truly
> secure systems/algorithms from scratch. I am certainly _not_ one of
> them. But just as surely as I couldn't create something as secure as
> Twofish or SHA-1, I'm sure you can't either.
>
> Anyway, meet these requirements and I'm sure you'll get a thorough
> analysis of your method. Think of the press it'll make - someone here
> will try everything and write a report on how utterly secure OAL3
> is ;)) lol.
Yes. And does not the FAQ for sci.crypt say all of this?
>
>
> Sorry, needed a good laugh.
>
> - Andrew
>
> Sent via Deja.com
> http://www.deja.com/
------------------------------
From: Paul Rubin <[EMAIL PROTECTED]>
Crossposted-To: or.politics,talk.politics.crypto,misc.survivalism
Subject: Re: Snake Oil
Date: 24 Jan 2001 20:50:13 -0800
Anthony Stephen Szopa <[EMAIL PROTECTED]> writes:
> Take my encryption software. Give it a go. Prove to us you can
> break it. Give us your most tenuous reasonable explanation on how you
> would go about it.
>
> Or do you just talk about snake oil having never known what it really
> is?
That's another standard whine of the snake oil salesman, saying "how
can you know it's bad unless you try it?". Of course, you have to
expend your own resources / risk your own health in order to try it,
with no compensation from the salesman if (as you suspected) the
product is no good. In typical cases the salesman even wants you to
pay for the product before you can test it, though that may not be
going on here. In either case, the salesman is claiming you're remiss
unless you're willing to work for him for free. It's not an
impressive argument.
Anthony: you are not offering to let people test your cipher under the
same conditions that 3DES can be tested. Specifically, 3DES protects
millions of dollars of live traffic every day, so it's worth that much
for someone to be able to crack it.
How many million dollars are you offering to anyone who cracks your
cipher? That's the test that 3DES passes every day, that you have not
offered to submit your cipher to.
After all, some of us are professionals here. That means if we do
cryptography for someone, we expect to get PAID for it.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************