Cryptography-Digest Digest #124, Volume #13       Wed, 8 Nov 00 17:13:00 EST

Contents:
  Re: hardware RNG's ([EMAIL PROTECTED])
  Re: Purported "new" BXA Encryption software export restrictions (CiPHER)
  Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile Software (CiPHER)
  Re: hardware RNG's (David Schwartz)
  Re: CHALLENGE TO cryptanalysts (Mok-Kong Shen)
  Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile (Richard Heathfield)
  Re: Hardware RNGs (David Hopwood)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: hardware RNG's
Date: Wed, 08 Nov 2000 20:09:32 GMT



"David Schwartz" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> Mack wrote:
> >
> > >
> > >Mack wrote:
> > >
> > >> The LSB of the RDTSC are purely deterministic.  It increments
> > >> one for each clock tick.  I may not be able to 'guess' the bits
> > >> of the RDTSC but I can sure calculate them.  In a multi-process
> > >> environment this is a bit more difficult, but not impossible.  It
> > >> is effectively measuring the combined process run times.
> > >
> > >       You can't calculate them because the number of clock cycles
> > >it takes the CPU to do many things is non-deterministic. For
> > >example, I go to read a block from a disk controller. Which CPU
> > >clock cycle that read comes in at is non-deterministic because it
> > >relies upon the exact ratio of two real numbers (the CPU oscillator
> > >and the disk controller oscillator).
>
> > Hard drive access times are not the same as the RDTSC being random.
>
> Yes they are because CPU operations will be delayed until data from the
> hard drive is available.

Ummm, no. That will apparently happen within a single program, but if
you try a small experiment, and run a small program that continually
reads from and writes to the hard drive alongside a separate
interactive program, you will see that this is simply not the case. The
program that reads/writes will be forced to wait for the hard drive,
but only because the operating system offers it a synchronous view of
the system; the second application will not see the system the same
way, and will see a system without the disk delay. What you are
perceiving as CPU delay is only an artifact of performing the
measurements, within the same process, on a synchronous-appearing
system. If you separate them, you will notice that the read/write
overhead is very, very small, and will not give you anything even
remotely like good randomness. What you can do is use the task
scheduler as a random number generator, and that can be detected in the
same way. To do that you start a large number of processes and record
which one was the last one active when you need a random number.
Something like (in C):
int activity;

void thread(int whoami)
{
     while(1) activity = whoami;
}

or you can time when the current process is brought onto the CPU.
However, neither of these is fast. In fact I checked both of these out
a few months ago: I spent 3 days each gathering the entropy, I was
exceedingly careful about it, and it failed DIEHARD miserably; there
were massive correlations, and even looking at the raw data they could
sometimes be seen. BTW, I also checked the randomness that could be
gotten from hard drive access times; it generated entropy slightly
faster, and slightly better, but it was still miserable. I'd strongly
recommend that where possible you use something like Yarrow or Octillo
or /dev/random.

>
> > Many workstations don't have hard-drives of their own.  The network
> > traffic is very easy to monitor.  It can often be done from outside of
> > the building. Tempest certainly works if the cables aren't shielded.
>
> Monitoring the network traffic doesn't do you any good because you
> don't know the exact instant the network card on the client machine will
> notice the traffic. Knowing when it's put on the wire won't tell you the
> billionth of a second that it will be noticed by the CPU.

Neither will the system clock. When the data reaches and is detected by
the NIC, it then gets processed for the PCI bus, which runs at either
33 MHz or 66 MHz, so you will lose all of the entropy carried in that
precision; and god forbid it should be an ISA card: that transfer is
< 10 MHz, if I remember correctly.

>
> > >       You would need to know an awful lot of internal timing data
> > >from the computer that would normally be completely inaccessible to
> > >an attacker. And, of course, any attempt you made to measure the disk
> > >controller's performance would change the very numbers you are
> > >measuring. The net result is that there is certainly no practical way
> > >and arguably no conceivable way to predict the LSB of the TSC.
>
> > I agree that there is no practical way.  But the argument was
> > strictly: are the LSBs random?  By themselves, no.
>
> What do you mean by "by themselves"?

Exactly. There is some very, very tiny amount of entropy in the LSB;
that entropy can be used in combination with the tiny entropy from the
hard disk, along with the tiny entropy from the network, combined with
the tiny entropy of source X, and you can build a good system (see
Yarrow).

>
> > Measuring other sources of randomness using the LSB of
> > the TSC will certainly give randomness.  But that is
> > far from the TSC being random.
>
> That happens automatically on any realistic machine. Consider, for
> example, a machine that provides random numbers for Internet gambling.
> Each request that requires random numbers must come in from somewhere,
> and the timing of that request measured to a billionth of a second has
> some entropy in it. That entropy will be in the lsb of the TSC when the
> code to process that request gets executed.

I already covered this; no it won't. That "billionth of a second" will
be chopped to the granularity of the bus. Most use PCI, that is either
33 or 66 MHz; then it will hit the system bus, where it will be either
66, 100, 133, or 150 MHz, and will likely be forced to the common
granularity. After this it gets put in the CPU cache (again with
granularity loss), then it is read by the CPU (again with granularity
loss, this time by forcing it to match the CPU clock). If the system
has an EV6 bus (or another advanced bus protocol) you can add at least
one more granularity killer into that, all before you can measure it.
Computers are designed to eliminate as much of the randomness in their
behavior as possible; pretending otherwise does not change anything.

>
> > On a side note, there is also the matter of hard disk turbulence,
> > which produces a very slight amount of randomness. There are
> > also misreads, which happen occasionally.
> >
> > Does anyone know which IDE drives have independent internal
> > clocks and which ones synchronize the clock to the system bus?
> > This tends to be a serious issue in overclocked systems,
> > i.e. if the bus is overclocked the drive stops working.
>
> They may synchronize their I/O clock to the system bus, but I doubt
> they could synchronize other clocks. That would require that the
> frequencies line up. If they do synchronize their CPU to the bus clock,
> however, that would significantly reduce (in theory) the amount of
> entropy available from disk reads.

They all synchronize to some degree; the better the drive, the more
synchronous it's going to be. Unfortunately, taking the clock drift
between the HD cycle and the system cycle won't generate entropy; it
will only express the entropy of another influence, most likely
temperature, which is now very carefully controlled on virtually all
systems. That is not to say that there isn't any entropy available
there, just that you'll have to work very hard to get enough of it to
be useful.
                   Joe


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: CiPHER <[EMAIL PROTECTED]>
Crossposted-To: alt.freespeech,talk.politics.misc,talk.politics.crypto
Subject: Re: Purported "new" BXA Encryption software export restrictions
Date: Wed, 08 Nov 2000 20:21:39 GMT

In article <Q%eO5.688$[EMAIL PROTECTED]>,
  "CMan" <[EMAIL PROTECTED]> wrote:

> I can't believe they were able to pull the wool over the eyes of our
> famous cryptographer friends who have written popular books on
> cryptography - unless these guys now have corporations and big fat
> government contracts!!!

(1) Relaxation exists. What do you have to fear from review? _All_ it is
doing is setting classification. You can even freely export BEFORE
your classification has been finalised.

(2) They aren't doing it to help Joe US Public... they were under a lot
of pressure from the EU and large US firms. I can't see them trying to
publicly screw the EU and their own economy by lying to them.

(3) These h4x0r 'fight the power' posts are getting tiring... and hell,
I mostly read alt.cyberpunk. The US government finally did what they
should have done in the past: freed up export restrictions on
encryption software... so that to most places it's not like you're
sending missiles. Now, if you go through the application procedure,
your product is more than likely going to be allowed export to the rest
of the richest countries in the world, opening up the market and
helping the US blast worldwide competition.

--
Marcus
---
[ www.cybergoth.cjb.net ] [ alt.gothic.cybergoth ]



------------------------------

From: CiPHER <[EMAIL PROTECTED]>
Crossposted-To: alt.freespeech,talk.politics.misc,talk.politics.crypto
Subject: Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile Software
Date: Wed, 08 Nov 2000 20:33:46 GMT

In article <[EMAIL PROTECTED]>,
  Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:

> Or going way over the top ranting and raving in these news groups.

Well, for someone who comes across as conceited as you do... are you
saying you're not ranting straight back at us with your disjointed,
evasive replies?

--
Marcus
---
[ www.cybergoth.cjb.net ] [ alt.gothic.cybergoth ]



------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Date: Wed, 08 Nov 2000 12:32:25 -0800


[EMAIL PROTECTED] wrote:

> > > Hard drive access times are not the same as the RDTSC being random.
> >
> > Yes they are because CPU operations will be delayed until data from the
> > hard drive is available.
>
> Ummm, no. That will apparently happen within a single program, but if
> you try a small experiment, and run a small program that reads from
> and writes to the hard drive continually, and a separate interactive
> program, you will see that this is simply not the case.

        Nonsense, it _is_ the case.

> The
> program that reads/writes will be forced to wait for the hard drive but
> only because the operating system offers a syncronous view of the
> system, the second application will not see the system the same way,
> the second program will see a system without the disk delay.

        Nonsense again. The other program will sometimes be interrupted for
disk I/O and sometimes not be, depending upon exactly when the disk I/O
completes. When the disk I/O completes, not only will the TSC be
advanced by the time to do the I/O but the code will run slower due to
the caches being blown out by the disk I/O code.

> What you
> are perceiving as CPU delay is only an artifact of performing the
> measurements on a synchronous appearing system, using the same process.
> If you seperate them, you will notice that the read/write overhead is
> very, very small, and will not give you anything even remotely like
> good randomness.

        *sigh* Again, you make no sense. We aren't after "good" randomness. We
are just after sufficient randomness. A thousand bits where one in a
hundred of them is a '1' is a good entropy source if properly fed
through a cryptographically strong hash function.

> What you can do is use the task scheduler as a random
> number generator, and that can be detected in the same way. To do that
> you start a large number of processes, and record which one was the
> last one active when you need a random number. Something like (in C):
> int activity;
> 
> void thread(int whoami)
> {
>      while(1) activity = whoami;
> }

        Right, this will give you some randomness for precisely the reason I
said it would, which is that some things a computer does are
unpredictable, such as disk I/O.
 
> or you can time the different times that the current process is brought
> on to the CPU. However neither of these is fast. In fact I checked both
> of these out a few months ago, I spent 3 days each gathering the
> entropy, I was exceedingly careful about it, and it failed DIEHARD
> miserably there were massive correlations, even looking at the raw data
> they could sometimes be seen. BTW I also checked the randomness that
> could be gotten from hard drive access times, it generated entropy
> slightly faster, and slightly better, but it was still miserable. I'd
> strongly recommend that where possible you use something like Yarrow or
> Octillo or /dev/random

        Did you even read the thread? Or not understand it? Or what? Of
_course_ it fails DIEHARD; it's not unbiased. That doesn't mean it's not
random!
 
> > > Many workstations don't have hard-drives of their own.  The network
> > > traffic is very easy to monitor.  It can often be done from outside of
> > > the building. Tempest certainly works if the cables aren't shielded.
> >
> > Monitoring the network traffic doesn't do you any good because you
> > don't know the exact instant the network card on the client machine will
> > notice the traffic. Knowing when it's put on the wire won't tell you the
> > billionth of a second that it will be noticed by the CPU.
> 
> Neither will the system clock. When the data reaches and is detected by
> the NIC, that data then gets processed for the PCI bus, which runs at
> either 33 MHz or 66 MHz, you will lose all entropy from precision, and
> god forbid it should be an ISA card, that transfer is < 10 MHz if I
> remember correctly.

        A precision of 66 MHz would allow one-66th of a part per million to
be detectable in a second.
 
> > > I agree that there is no practical way.  But the arguement was
> > > strictly are the LSBs random.  By themselves no.
> >
> > What do you mean by "by themselves"?

> Exactly, there is some very, very tiny amount of entropy in the LSB,
> that entropy can be used in combination with the tiny entropy from the
> hard disk, along with the tiny entropy from the network, combined with
> the tiny entropy of source X, and you can build a good system (see
> Yarrow)

        Exactly.
 
> > > Measuring other sources of randomness using the LSB of
> > > the TSC will certainly give randomness.  But that is
> > > far from the TSC being random.
> >
> > That happens automatically on any realistic machine. Consider, for
> > example, a machine that provides random numbers for Internet gambling.
> > Each request that requires random numbers must come in from somewhere,
> > and the timing of that request measured to a billionth of a second has
> > some entropy in it. That entropy will be in the lsb of the TSC when the
> > code to process that request gets executed.
>
> I already covered this; no it won't. That "billionth of a second" will
> be chopped to the granularity of the bus.

        Strangely, it depends upon whether the CPU multiplier is an integer or
has a fractional part.

> Most use PCI, that is either
> 33 or 66 MHz, then it will hit the system bus, where it will be either
> 66, 100, 133, or 150 MHZ, and will likely be forced to the common
> granularity. After this it gets put in the CPU cache (again with
> granularity loss), then it is read by the CPU (again with granularity
> loss, this time by forcing it to match the CPU clock).

        Please explain to me how measuring something clocked at 66 MHz by a
500 MHz clock results in a granularity loss.

> If the system
> has an EV6 bus (or another advanced bus protocol) you can add at least
> one more granularity killer into that, all before you can measure it.

        These are not granularity killers at all. Going from a slower clock to
a bus at a faster clock loses nothing.

> Computers are designed to eliminate all the randomness in their
> behavior that is possible, pretending it is another way does not change
> anything.

        That's a nonsensical claim.
 
> > They may synchronize their I/O clock to the system bus, but I doubt
> > they could synchronize other clocks. That would require that the
> > frequencies line up. If they do synchronize their CPU to the bus clock,
> > however, that would significantly reduce (in theory) the amount of
> > entropy available from disk reads.
>
> They all syncronize to some degree, the better the drive the more
> syncronous it's going to be.

        Actually, the reverse seems to be true. The more sophisticated drives
are more likely to have independently clocked agents, such as a cache
management processor. Cheap IDE drives, however, seem to be fully
synchronous to the bus clock. Nevertheless, there's entropy in the
variation in the rotational speed of the disk as measured by that clock.
However, how much entropy there is seems to be in dispute.

> Unfortunately taking the clock drift
> between the HD cycle and the system cycle won't generate entropy, it
> will only express the entropy of another influence, most likely
> temperature, which is now very carefully controlled on virtually all
> systems. That is not to say that there isn't any entropy available
> there, just that you'll have to work very hard to get enough of it to
> be useful.

        The temperature is not carefully controlled enough to restrict
oscillator drift. It still drifts by a few parts per billion, which is
enough to be measured. If you can take active steps to measure it, you
can grab about 4 good bits of entropy per second.

        DS

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: CHALLENGE TO cryptanalysts
Date: Wed, 08 Nov 2000 22:24:47 +0100



Melinda Harris wrote:
> 
> The notorious Crypto-Eccentric is preparing to issue a challenge on January
> 1st 2001 to cryptanalyst and hackers alike including all government agencies
> to crack his braintwisting ANEC code. It is to support his claim of having a

I can hardly imagine that there will be a single person
in our group unreasonable enough to spend a minute of
his precious time examining any new encryption scheme
that does not have a well-crafted document giving a concise
and clear description of the algorithm as well as convincing
rationales for the design. Indeed, presenting pure code is
notoriously the worst way of attracting another person's
attention to a cipher.

M. K. Shen

------------------------------

Date: Wed, 08 Nov 2000 21:56:18 +0000
From: Richard Heathfield <[EMAIL PROTECTED]>
Crossposted-To: alt.freespeech,talk.politics.misc,talk.politics.crypto
Subject: Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile  

Scott Craver wrote:
> 
> Richard Heathfield  <[EMAIL PROTECTED]> wrote:
> >
> >Mr Szopa's program is 315392 bytes in size after decompression. No
> >source code is provided. I know I'm not the only one to think this to be
> >the height of lameness.
> 
>         That's a pretty big increase between versions.  Wow.

No - whoever reported 155KB (and it might even have been me) was
reporting the compressed Zip file size. The actual program is the size I
have given.

>  And all it does is XOR?

It would appear so, yes.

> 
> >So, perhaps not unnaturally, I wondered (purely in the spirit of scientific
> >enquiry, as befits a sci. newsgroup) if it were possible to write an
> >even lamer program. I tried hard. But did I succeed?
> 
>         [snip]  Amazing. Maybe you should run speed tests.

I did (having first sorted out a test machine that I didn't mind
reinstalling from scratch if need be). They're about the same. Mr
Szopa's may even have a slight edge (I didn't spend any time trying to
make mine quick, after all). For speed, I have an ISO C version on the
Web.

> 
>         The funny thing about Mr. Szopa's utility is that, before he
>         posted it, we were only suspicious of his algorithm, and his animosity
>         towards people who wanted to analyze his algorithm.  Now, he
>         accidentally gives away that his skills as a programmer might be a
>         problem too, by making an unbeatably HUGE binary to perform one of the
>         simplest operations on two files, *and* somehow making the first
>         version unable to XOR files in different directories.

In the interests of fairness, the version I have (which is, I
understand, the version to which Mr Szopa first drew our attention the
other day) can cope with two files in different directories just fine,
so I'm not sure where this little complaint came from. Perhaps I missed
an earlier version.

> I didn't even
>         know that this kind of deficiency was possible with the full-blown
>         canned File Open dialog boxes in Win32 and MFC.

It's probably possible, but you'd have to work damned hard at it...

> 
>         But the really funny part is his apparent air of superiority as
>         a result, despite very obvious size and performance difference between
>         his and others' software.

<shrug> typical Snake-Oil Merchant, methinks.

-- 
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton

------------------------------

Date: Wed, 08 Nov 2000 22:01:48 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Hardware RNGs

=====BEGIN PGP SIGNED MESSAGE=====

Tim Tyler wrote:
> David Hopwood <[EMAIL PROTECTED]> wrote:
> : [EMAIL PROTECTED] wrote:
> :> This reveals non-perfect entropy [...]
> 
> : Perfect entropy was not claimed.
> 
> AR:"Hashing does not increase entropy, whether one pass or multiple."
> 
> PC:"No, of course not.  However, at least it doesn't *reduce* entropy."

That's a misquote of what Paul Crowley wrote.

Paul Crowley:
> Alan Rouse:
> > Hashing does not increase entropy, whether one pass or multiple.
>
> No, of course not.  However, at least it doesn't *reduce* entropy
[context restored]
> until you already have enough for your state to be unguessable,
> unlike many preprocessing techniques suggested for entropy sources.
> It's also much harder to get wrong. [...]

Note "until you already have enough for your state to be unguessable."
~159 bits is unguessable.

- -- 
David Hopwood <[EMAIL PROTECTED]>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip


=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv

iQEVAwUBOgi8gzkCAxeYt5gVAQGWyQgAjHrn60E5Yey5W/v6jACdJRS1cMsx7a2r
gA32Ydm/IVBz5YpG1H7SSMF8ZJk3ZuKqz91sIia2pAn/vswuXaaxVBqQaFKHXBXF
GE+ecR0frstpszY0NyV32wa0nbZ+AqPXj2oCODKVCsYrQD4gbsUuWbwC5AfGyLk5
a74jHwkEQitl4ZszVg2CtG8SLEgYcQqUiv8OX51O/Vj2RdM0O9s6Uezg64GdUpML
6XU9LZjERfIrSEjQB77517lF/to+dTxl4YQJnjq/epjLAZt7TffP59VuV6Q9S+uZ
WvHYwE8mJRwrJgECo3eAz2I3nZ77vaUgy7K1NJhqD7oQyqvven06EQ==
=Liim
=====END PGP SIGNATURE=====

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
