Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-06 Thread John S. Denker
Will Rodger wrote:

John says:

 Wireless is a horse of a different color.  IANAL but
 the last time I looked, there was no federal law
 against intercepting most wireless signals, but you
 were (generally) not allowed to disclose the contents
 to anyone else.
No longer, if it ever was. It's a crime, as evidenced by the wireless
scandal a few years back when some Democrat partisan intercepted
communications of Republican leadership in Florida, then talked. The
simple act of interception was illegal.


Next time, before disagreeing with someone:
  a) Please read what he actually wrote, and
  b) Don't quote snippets out of context.
Three sentences later, at the end of the paragraph that
began as quoted above, I explicitly pointed out that
cellphone transmissions are a more-protected special case. 


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period

2003-03-05 Thread John S. Denker
Tim Dierks wrote:

 In order to avoid overreaction to an nth-hand story, I've attempted to
 locate some primary sources.

 Konop v. Hawaiian Airlines:
   http://laws.lp.findlaw.com/getcase/9th/case/9955106pexact=1
[US v Councilman:]
  http://pacer.mad.uscourts.gov/dc/opinions/ponsor/pdf/councilman2.pdf
Well done.  Thanks.

 I'd be interested in any opinions on how this affects the government's
 need to get specific wiretap warrants; I don't know if the law which
 makes illicit civilian wiretapping illegal is the same code which
 governs the government's ability (or lack thereof) to intercept
 communications.
0) IANAL.  But as to the question of "same code", the
answer is clearly no.
1) As to government-authorized intercepts, see

http://www.eff.org/Privacy/Surveillance/Terrorism_militias/20011031_eff_usa_patriot_analysis.html

which gives a plain-language discussion of at least
eight different standards under which some sort of
authorization could be obtained.
Also note that neither Konop nor Councilman involved
government intercepts, so you can't learn anything about
authorized intercepts by studying them.  Also note that
post-9/11 laws have superseded everything you might
previously have known on the subject.
2) As to intercepts by civilians: they're illegal, and
may be punishable under many different theories and
standards, including invasion of privacy, copyright
infringement, computer trespass, computer vandalism,
simple theft of things of value, and who-knows-what
else.
3) As to unauthorized intercepts by government agents,
in theory it is exactly the same as item (2), but
in practice your chance of seeing anybody punished
for it is comparable to your chance of seeing a State
Trooper ticketed for speeding, tailgating, weaving,
and failing to signal turns en route to the donut shop.
They're doing God's work, you know;  why should mere
laws and bills of rights apply to them?  About the
best you can realistically hope for is the exclusionary
rule (illegally seized evidence can't be used against
you) but I wouldn't necessarily count on that.
4) Crypto-related sidelight: I wonder what would
have happened if Konop had encrypted his sensitive
data. (eBook format or the like. :-)  Then could he
have used the draconian provisions of the DMCA
against his opponent (Hawaiian Airlines)?


Columbia crypto box

2003-02-08 Thread John S. Denker
As reported by AP:

| Among the most important [debris] they were seeking was
| a device that allows for the encryption of communication
| between the shuttle and NASA controllers. A NASA spokesman
| in Houston, John Ira Petty, said Friday that NASA feared
| the technology could be used to send bogus signals to the
| shuttle.

Apparently some folks skipped class the day Kerckhoffs'
Principle was covered.

One wonders what other shuttle systems were designed
with comparable disregard of basic principles.





Re: Patents as a security mechanism

2003-01-21 Thread John S. Denker
Matt Blaze wrote:


Patents were originally intended, and are usually used (for better
or for worse), as a mechanism for protecting inventors and their
licensees from competition.  

That's an oversimplification.  Patents were originally
intended as a bargain between the inventors and the
society at large.  Under the terms of this bargain, the
inventors make public (which is the root meaning of
patent) the details of the invention, thereby
advancing general knowledge and permitting
follow-on inventions.  In exchange the inventor was
granted limited protection from competition.  In the
absence of a patent system, inventors will try to
keep everything a trade secret, which is another way of
fending off competition for a while.  From society's
point of view, patents are generally better than trade
secrets.  From the inventors' point of view, patents
are generally better than trade secrets.  So we have a 
mutually-beneficial bargain.  Patents were originally
intended to be a win/win proposition.

Of course it is axiomatic that whatever you're doing,
you can always do it wrong.  We can debate whether the
current system fulfills the original intention, but
let's not go there right now.

But I've noticed a couple of areas where
patents are also used as a security mechanism, aiming to prevent the
unauthorized production of products that might threaten some aspect of a
system's security.


OK.


... mechanical locks ...  Many users actually prefer these patented products
because even though it means they might have to pay monopoly prices for their
keys, it makes it less likely that a thief will be able to get a duplicate
at the corner hardware store.


An interesting observation.

 I'm a bit skeptical about whether this really is effective

So am I.

 (and at least one legal case, Best v. Ilco, casts some
 doubt on the validity of many of the key blank patents)


It's amusing that Best had a utility patent and a
design patent, both of which were held invalid (on
different grounds).  It is the design patent which
I think speaks most clearly to the point Matt is
making.
  http://www.law.emory.edu/fedcircuit/aug96/95-1528.html

==


One example close to home is the DVD patents, which, in addition to
providing income for the DVD patent holders, also allows them to prevent
the production of players that don't meet certain requirements.  This
effectively reduces the availability of multi-region players; the patents
protect the security of the region coding system.


The following sounds like a nit, but I think it is
more than that:  I think it is the _CSS licenses_
rather than the DVD patents that play the role
of protecting the region coding system and reducing
the availability of multi-region players.

This gets back to the bargain discussed above,
because the CSS license is based, as far as I can
tell, on trade secrets.  No particular patents are
mentioned in the CSS license forms I've seen;
instead there is much mention of "Highly Confidential
Information".

Perhaps a more important point is the economic angle.
Let's re-examine the statement:
 Many users actually prefer these patented products

We need sharper terminology.  We need to unbundle
the products;  that is, we have a _lock_ product
and a _key_ product.  It is unsafe to assume that
whoever buys the lock product is the same person
who buys the key product.

Whoever pays for the locks has a vested interest
in high-security locks that open to as few keys
as possible.  Whoever pays for the keys, on the
contrary, has a vested interest in keys that are
extra-powerful and/or cheap and extra-widely
available.

Suppose some party Alice controls a restriction,
such as a patent or trade secret.  Alice will try
to sell the restriction to the lock-buyer, Larry,
who benefits directly from the security.  Larry
won't buy it unless he is convinced that Alice is
willing and able to enforce the restriction against
key-makers and key-buyers such as Kathy.


Are there other examples where patents are used as 
 a security mechanism?

Not that I know of.

So we have a grand total of less than one valid
example.
 -- CSS depends on secrecy, which is by definition
the opposite of patency.
 -- Best v. ILCO held that patenting key-blanks is
an abuse of the design-patent law.

I think this is as it should be.  That's not the
proper purpose of patent law.

Of course if you ask about non-patent laws, there
are many examples:
 -- in some jurisdictions it is illegal in general
to carry lock picks.
 -- in some jurisdictions it is illegal in general
to copy a key marked "do not duplicate".
 -- copyright law is sort of a "do not duplicate"
stamp protecting original creative works against
certain types of duplication.
 -- DMCA makes it a federal criminal offence to
circumvent triple-rot-13.



DeCSS, crypto, law, and economics

2003-01-07 Thread John S. Denker
Regarding the acquittal of Jon Johansen, I quoted CNN
as saying:


The studios argued unauthorised copying was copyright theft
and undermined a market for DVDs and videos worth $20
billion a year in North America alone.


Some elements of the industry did indeed claim that,
but such claims are grossly irrelevant, and to bring
them up is foolish or dishonest.  This case was never
about unauthorized _copying_ of DVDs.  You can make a
bit-for-bit perfect copy of a DVD without decrypting
it.  Indeed it's easier to copy if you don't decrypt
it!

The main thing the industry really had at stake in
this case is the "zone locking" aka "region code"
system.  The studios like to release videos in
different parts of the world at different times,
and to charge different royalty fees in different
places.  This is called market segmentation. The
idea of an open-source player was abhorrent to them,
because it makes it easy to buy a DVD in one region
and play it in other regions.  This is an example of
arbitrage.

For normal products, market segmentation is neither
forbidden by law nor protected by law.  Mushrooms that
cost $4.00 per ounce at the supermarket can be purchased
for $4.00 per pound at the Asian grocery down the street.
The stores are free to charge whatever they like, and I
am free to shop wherever I like.  The law is silent on
the issue.

People who engage in market segmentation are always
looking for ways to prevent arbitrage.  For instance,
airlines make sure tickets are non-transferable, to
prevent some ticket agent from stocking up on tickets
at excursion prices and reselling them to business
travelers.

Movie studios never had a really good market segmentation
system, because
 -- I can legally own region-1 or region-4 DVDs or some
of both, no matter whether I live in the US or Australia.
 -- I can legally own a region-1 or region-4 DVD player,
or both, no matter whether I live in the US or Australia.

To be clear: the industry was never able to erect a legal
barrier to arbitrage of disks _or_ arbitrage of players.
The closest they could come was to make it slightly hard
to get a _multi-region_ player.  The manufacturers of
player hardware had to do the studios' bidding because of
the controversial (to say the least) anti-circumvention
provisions of the 1998 DMCA law.

If we somewhat charitably assume the studios knew what
they were doing, their whole market segmentation scheme
was predicated on the lack of multi-region players _and_
on the assumption that players would remain sufficiently
expensive that users couldn't just buy a stack of players,
one per region.  Less charitably the scheme was predicated
on the foolish assumption that nobody would ever discover
the possibility of inter-region arbitrage of player
hardware.

I repeat, the practical issue in this case was never about
cheating the studios out of their per-disk royalties on
DVDs.

At this point you might be wondering about per-player
royalties.

  First, let's dispose of an irrelevant side-issue.  The
  rights to patents on raw DVD hardware are held by a
  consortium of hardware companies, not movie studios.
  These people presumably collected their cut when Mr.
  Johansen purchased his raw DVD drive hardware.  So
  this case was never about patent infringement.

The studios arguably hold intellectual property rights
in the CSS decoding keys, and they can collect per-player
royalties from hw mfgrs who incorporate such keys in
their products.  AFAIK Mr. Johansen never copied any
such key (or even had one he could have copied), so
this case was never about illegal copying even on a
per-player basis.

The truly amazing thing about this case is that the
crime would not have occurred if the studios had used
decently-strong crypto.  It's ironic that in an age when
cryptographers enjoy a historically-unprecedented
lopsided advantage over cryptanalysts, the industry
adopted a system that could be cracked by amateurs.
This probably wasn't simply due to stupidity in the
industry; it is more plausibly attributed to stupidity
in the US export regulations which induced the industry
to use 40-bit keys.

So what we have here are remarkably intrusive laws:
under US regulations the crypto must be easy to break,
while under US law it is illegal to break it.  The
latter is dressed up as a copyright law even if no
illegal copying is involved.

This strikes me as analogous to requiring everyone
to use pin/tumbler locks with only a single pin, so
that all locks can be picked using a popsicle stick,
and then arresting people for burglary whenever they
are caught carrying a popsicle stick.

US law is not the same as Norwegian law.  You should
not imagine that this case sets a precedent for US
courts.

Additional remarks:

We should try to avoid overwrought arguments about the
morality of market segmentation and/or arbitrage.
Producers and retailers will always try to benefit
themselves by segmenting the market;  consumers and
arbitrageurs will always try to defeat it.

Re: did you really expunge that key?

2002-11-08 Thread John S. Denker
1) This topic must be taken seriously.  A standard technique
for attacking a system is to request a bunch of memory or
disk space, leave it uninitialized, and see what you've got.

2) As regards the volatile keyword, I agree with Perry.
The two punchlines are:

 if, for example, gcc did not honor [the volatile keyword],
 the machine I am typing at right now would not work because
 the device drivers would not work.

 If they haven't implemented volatile right, why should
 they implement the pragma correctly?

3) However, a discussion of compilers and keywords does not
complete the analysis.  A compiler is only part of a larger
system.  At the very least, we must pay attention to:
 -- compiler
 -- operating system
 -- hardware architecture
 -- hardware physics

At the OS and hardware-architecture levels, note that a
device driver accesses a volatile device register only
after beseeching the OS to map the register to a certain
address in the driver's logical address space. In contrast,
for some address that points to ordinary storage, the OS and
the hardware could (and probably do) make multiple copies:
Swap space, main memory, L2 cache, L1 cache, et cetera.
When you write to some address, you have no reason to assume
that it will write through all the layers.

Swap space is the extreme case: if you were swapped out
previously, there will be images of your process on the
swap device.  If you clear the copy in main memory somehow,
it is unlikely to have any effect on the images on the swap
device.  Even if you get swapped out again later (and there's
no guarantee of that), you may well get swapped out to a
different location on the swap device, so that the previous
images remain.

The analogy to device drivers is invalid unless you have
arranged to obtain a chunk of memory that is uncacheable and
unswappable.

To say the same thing in other words: a compiler can only do
so much.  It can generate instructions to be executed by the
hardware.  Whether that instruction affects the real
physical world in the way you desire is another question
entirely.

4) In the effort to prevent the just-mentioned attack, a
moderately-good operating system will expunge memory right
before giving it to a new owner.  It would be more secure
(but vastly less efficient) to expunge it right after the
previous owner is finished with it.

To see this in more detail, consider swap space again: a
piece of used swap space need not be expunged, unless you
are fastidious about security, because the operating system
knows that it will write there before it reads there.  Clearing
it immediately would be a waste of resources.  Leaving it
uncleared is potentially a security hole, because of the risk
that some agent unknown to the operating system will (sooner or
later) open the swap-space as a file and read everything.

5) We turn now to the hardware-physics layer.  Suppose
you really do manage to overwrite a disk file with zeros.
That does not really guarantee that the data will be
unrecoverable.  As Richard Nixon found out the hard way,
the recording head never follows exactly the same path, so
there could be little patches of magnetism just to the left
and/or just to the right of the track.  An adversary with
specialized equipment and specialized skills may be able
to recover your data.

6) To reduce the just-mentioned threat, a good strategy is
to overwrite the file with random numbers, not zeros.  Then
the adversary has a much harder time figuring out what is old
data and what is new gibberish.  (To do a really good job
requires writing your valuable data always in the middle,
and overwriting gibberish twice, once offset left and once
offset right.)

This is one of the reasons why you might need an industrial-
strength stretched random symbol generator:
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#sec-srandom

Note that the random-number trick can be used for main
memory (not just disks) to ensure that the compiler + OS +
hardware system doesn't optimize away a block of zeros.
This actually happened to me once: I was doing some timing
studies, and I wanted to force something out of cache by
making it too big, so I allocated a large chunk of memory
and set it to zero.  But no matter how big I made it, it fit
in cache.  The system was using the memory map to give me
unlimited copies of one small page of zeros (with the
copy-on-write bit set).

7) Terminology:  I use the word expunge to denote doing
whatever is necessary to utterly destroy all copies of
something.  Clearing a memory location is sometimes far
from sufficient.





Re: Optical analog computing?

2002-10-02 Thread John S. Denker

R. A. Hettinga wrote:
...
 the first computer to crack enigma was optical
 the first synthetic-aperture-radar processor was optical
 but all these early successes were classified -- 100 to 200 projects,
 and I probably know of less than half.
 
 -- Do these claims compute?!  Is this really a secret history, or does
 this mean holography, or am I just completely out of the loop?

Gimme a break.  This is remarkable for its lack of 
newsworthiness.

1) Bletchley Park used optical sensors, which were (and
still are) the best way to read paper tape at high speed.
You can read about it in the standard accounts, e.g.
  http://www.picotech.com/applications/colossus.html

2) For decades before that, codebreakers were using optical
computing in the form of superposed masks to find patterns.
You can read about it in Kahn.

3) People have been doing opto-electronic computing for 
decades.  There's a lot more to it than just holography.  
I get 14,000 hits from
  http://www.google.com/search?q=optical-computing

 Optical info is a complex-valued wave (spatial frequency, amplitude and
 phase)

It isn't right to make it sound like three numbers (frequency, 
amplitude, and phase);  actually there are innumerable 
frequencies, each of which has its own amplitude and phase.

 lenses, refractions, and interference are the computational operators.
 (add, copy, multiply, fft, correlation, convolution) of 1D and 2D arrays

 and, of course, massively parallel by default.
 
 and, of course, allows free-space interconnects.

Some things that are hard with wires are easy with
light-waves.  But most things that are easy with wires
are hard with light-waves.

 Here's a commercialized effort from israel: a space integrating
 vector-matrix multiplier  [ A ] B = [ C ]
 laser- 512-gate modulator - spread over 2D
 256 Teraflop equivalent for one multiply per nanosecond.

People were doing smaller versions of that in
the 1980s.

 Unclassified example: acousto-optic spectrometer, 500 Gflops equivalent
 (for 12 watts!) doing continuous FFTs.  Launched in 1998 on a 2-year
 mission. Submillimeter wave observatory.

Not FFTs.  FTs.  Fourier Transforms.  All you need for
taking a D=2 Fourier Transform is a lens.  It's undergrad
physics-lab stuff.  I get 6,000 hits from:
  http://www.google.com/search?q=fourier-optics
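For reference, the undergrad-physics result being alluded to (stated here from memory, up to constant and phase factors): with a thin lens of focal length f, the field in the back focal plane is the two-dimensional Fourier transform of the field U(x,y) in the front focal plane,

```latex
U_f(u,v) \;\propto\; \iint U(x,y)\,
  \exp\!\Big[-\,\frac{2\pi i}{\lambda f}\,(u x + v y)\Big]\, dx\, dy
```

where lambda is the wavelength; the spatial frequencies are u/(lambda f) and v/(lambda f).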

 Of course, the rest of the talk is about the promise of moving from
 optoelectronic to all-optical processors (on all-optical nets  with
 optical encryption,  so on).

All optical???  No optoelectronics anywhere???
That's medicinal-grade pure snake oil, USP.

Photons are well known for not interacting with
each other.  It's hard to do computing without
interactions.




Re: Quantum computers inch closer?

2002-09-02 Thread John S. Denker

AARG!Anonymous wrote:
 
 The problem is that you can't forcibly collapse the state vector into your
 wished-for eigenstate, the one where the plaintext recognizer returns a 1.
 Instead, it will collapse into a random state,

Sorry, that's a severe mis-characterization.

 David Honig  wrote:

 I thought the whole point of quantum-computer design is to build
 systems where you *do* impose your arbitrary constraints on the system.

David Wagner wrote:
 
 Look again at those quantum texts.  

That's good advice.

 Quantum doesn't work like the original poster seemed to wish it would;
 state vectors collapse into a random state, 

Random is not the right word.

 not into that one magic
 needle-in-a-haystack state you wish it could find.

C'mon folks, let's cut down on extreme statements like 
the-whole-point-is-this or the-whole-point-is-that
and using words like magic to describe finding the
right answer.

1) Computer design has many points that must be
taken into consideration.  Quantum computer design
is in some ways more powerful but in other ways more
constrained than classical computer design.

2) One of the points is that yes, the computer should
compute what you want it to compute.  OTOH it takes
more than wishing to bring such a computer into 
existence.

3) A sufficiently well designed quantum computer can, 
in principle, find some needles in some haystacks, 
precisely because the structure of the machine, acting 
according to the laws of quantum mechanics, does in fact 
collapse the wave-function into a representation of 
the wished-for answer.  (PS most of what has been 
written about collapse of wave-functions is baloney, 
but we need not pursue that tangent just now.)

=

A general remark about parallel computing:  For every
parallel algorithm (running on P processors) there 
exists a corresponding uniprocessor algorithm:  just 
set P=1 and turn the crank.

The converse does not hold.  The existence of a uni-
processor algorithm may or may not be a guide to the 
creation of a parallel algorithm.  As Brooks famously 
said, creating a baby requires nine months, no matter 
how many mothers are assigned to the task.

The same applies even more strongly to quantum computing:
It would be nice if you could take a classical circuit,
automatically convert it to the corresponding quantum
circuit, with the property that when presented with a
superposition of questions it would produce the 
corresponding superposition of answers.  But that cannot 
be.  For starters, there will be some phase relationships 
between the various components of the superposition of 
answers, and the classical circuit provides no guidance 
as to what the phase relationships should be.

So let's not guess about what quantum algorithms exist.
It is possible to construct such algorithms, but it 
requires highly specialized skills.




Re: get a grip on what TCPA is for

2002-08-15 Thread John S. Denker

bear wrote:
 
 ... I have one box with all the protection I want:
 it's never connected to the net at all.  I have another box
 with all the protection that I consider practical for email
 and web use.  Both run only and exactly the software I have
 put on them,
 
 That is trusted computing sir, and TCPA/Palladium is a huge
 step *backward* from it.  

Brother Bear belabors one obvious point while missing a
more-important obvious point.  What some people want
is not what other people want.

The TCPA/Pd designers don't much care whether the
person who has custody of the machine trusts it.  They've
been shipping untrustworthy software for years.  The
thing they care about, probably the only thing they deeply 
care about, is whether _they_ can trust the machine while 
it is in _somebody else's_ custody.

To a first approximation, TCPA/Pd is for !!their!! direct
benefit, not for yours.  But to a second approximation, 
they are not entirely wrong when they say consumers will 
benefit, because there are indirect benefits of having 
some sort of system whereby authors, performers, and
inventors get paid for their work.  Things that are
simply not available now would become available if there
were a way people could get paid for creating them.

You can wish for some Land of Cockaigne where you get 
paid but nobody has to do any paying, but that's a 
long way from reality.

==

Most of us know how to secure a machine that is disconnected
from the net.  We can probably even combine some limited 
networked functionality with some degree of security -- 
!!provided!! we retain physical custody of the machine.

But how to trust a machine when you don't have physical
custody?  Even the most-skilled members of this list 
would find that a challenge (depending, as I have emphasized 
before, on what your threat model is).

I guarantee you will not understand TCPA/Pd unless you
walk a while in the proponents' moccasins.  If you can't 
stand the smell of those moccasins, OK, but prepare 
yourself for perpetual ignorance and irrelevance.

For example: Imagine you are the owner of a valuable copyright
and you want to protect it.  You want consumers to be able 
to use your work in some ways, but you want to prevent rampant
infringement.  What will you do???  It's not an easy problem.

If your powers of imagination are not up to the task in the
previous paragraph, here's an alternative:  Suppose you want
to spend a few weeks visiting Outer Zambonia, but you want to 
communicate securely with your colleagues back home during this
time.  Alas, the Zambonian Ministry of Friendship has been
looking forward to this as an opportunity to trojanize your
laptop.  You simply don't have the resources to guard your
laptop 24 hours a day.  You can't travel with a GSA-approved
safe in your carry-on.  You can't take your laptop with you
when you go swimming.  The idea of hardware with !!some!!
degree of tamper-resistance might eventually start to appeal
to you.

Of course, our task of understanding what TCPA/Pd is trying
to do is made more difficult when proponents lie about what
they are trying to do.

===

The most interesting technical point AFAICT is figuring out
how to _vet_ a piece of tamper-resistant hardware.  Presumably
you want it to detect the early stages of tampering and
react by expunging all its private keys.  Alas essentially
identical behavior could be used to cover the tracks of
built-in trojan beasties.

Here are some partially-baked thoughts:
  1) You have to allow it to expunge things.  That's the
only way it can really protect your secrets.
  2) So allow that.  It should be possible to verify that
the box is in a tabula-rasa state -- if the trojan is gone,
it's gone, and if it's not gone, it should be detectable
if you probe hard enough.  We require the hardware to allow
certain types of probing.  
  3) After you're satisfied that the hardware is not infested, 
load the software, and the keys, from a trusted source.  Replace
the tamper-evident seals and latches.

This isn't a complete design, but you can see where it's going:
It should be possible to design hardware with some degree 
of tamper-resistance !!without!! creating a monopoly as to
who decides who trusts whom.

Alas it is also possible to design the hardware so that it
becomes a monopoly-enhancer of Orwellian proportions.  We
need to be vigilant to prevent this.  This will require
nuanced, non-extremist thinking.  Those who exhibit the
knee-jerk response that all tamper-resistant hardware is
bad will be ignored.  Such hardware, like most things,
can be used for good or ill, depending on details.




Re: Translucent Databases

2002-08-03 Thread John S. Denker

David Wagner wrote:

 It seems to me that a much more privacy-friendly solution would be
 to simply refrain from asking for sensitive personal information like
 SSN and date of birth -- name and a random unique identifier printed
 on the application form ought to suffice.  (If SSN is later needed
 for financial aid purposes, it could be requested after the student
 decides to matriculate.)
 
 Am I missing anything?

I think the problem is a lot harder than that.

Let me clarify by telling a story:  Once upon a time, Hansel
designed an online-forms system that collected credit-card
info, encrypted it using PGP, and mailed it to Goldylocks
(the secretary) with a backup copy going to Tweedledee.
Despite the fact that Hansel had installed PGP on her
computer and indoctrinated her on how to use it, Goldylocks
was unable to decrypt the info.  So at her request, Tweedledee
decrypted it -- a whole conference's worth of registrations --
and sent it to her in the clear.

In a clear violation of Murphy's law, no harm came of this,
but otherwise it was a worst-case use of cryptology:  just
secure enough to be a nuisance to the authorized users, but
in the long run providing no real protection for the card-
holders.

The sad fact is that most people on this planet cannot get
PGP to work in a way that suits them.  The future of security
depends at least as much on user-interface research as it does
on mathematical cryptology research.

Oh, BTW, a preprinted number on the admissions form doesn't
really do the trick.  Forms are printed on printing presses,
in batches of several thousand, all alike.  After they are
mailed out, the guidance counselor at Podunk South High School
will make copies as needed.  A web-based approach won't work
unless you are making computer-savviness an entrance requirement.




Re: building a true RNG

2002-08-02 Thread John S. Denker

David Wagner [EMAIL PROTECTED] writes:
 I don't know of any good cryptographic hash function 
 that comes with a proof that all outputs are possible.  

What about the scheme
Pad - Encipher - Contract
described at
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#sec-uniform-hash

 if we apply the Luby-Rackoff construction 
 (i.e., 3 rounds of a Feistel cipher), with
 ideal hash functions in each round, does this have 
 the desired properties? It might.

Paul Crowley wrote:
 
  This seems to define a block cipher with no key, which is collision
  free but not one-way.  Am I misunderstanding what you're proposing?

David Wagner wrote:
 
 You understood it perfectly.  Good point.
 I didn't notice that problem.  Harrumph.

There is only the most minor of problems here, namely
that DAW mentioned a symmetric cipher.  The problem goes 
away if you use asymmetric crypto.  You want a cipher with
no _deciphering_ key, as described in my paper.  (op. cit.)
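To make the counting argument concrete, here is a toy Python sketch of Pad -> Encipher -> Contract.  The "cipher" below is a stand-in bijection on 16-bit blocks (an affine map with an odd multiplier; the constants are made up for illustration, not a real design).  The point is that when the Encipher step permutes the whole padded block space, surjectivity of the contracted output is provable by counting, no search required:

```python
from collections import Counter

# Toy model of Pad -> Encipher -> Contract.  The "cipher" is a stand-in
# bijection on 16-bit blocks; the constants are illustrative assumptions.
BLOCK_BITS, OUT_BITS = 16, 8
MASK = (1 << BLOCK_BITS) - 1

def encipher(x: int) -> int:
    # odd multiplier => invertible mod 2^16, hence a bijection
    return (x * 40503 + 12345) & MASK

def contract(x: int) -> int:
    return x & ((1 << OUT_BITS) - 1)   # keep the low 8 bits

# Because encipher permutes the whole block space, each 8-bit output is
# hit exactly 2^(BLOCK_BITS - OUT_BITS) = 256 times: surjectivity (indeed
# exact uniformity) follows from counting.
counts = Counter(contract(encipher(x)) for x in range(1 << BLOCK_BITS))
assert len(counts) == 256 and all(c == 256 for c in counts.values())
```

The same counting argument goes through unchanged if an asymmetric cipher with no deciphering key sits in the Encipher slot, which is what restores the one-way property.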




Re: building a true RNG

2002-08-01 Thread John S. Denker

1) There were some very interesting questions such as
  -- whether one can construct a hash function that
 generates all possible codes.
  -- ditto, generating them as uniformly as possible.
  -- Whether off-the-shelf hash functions such as SHA-1 
 have such properties.

The answers are respectively yes, yes, and very probably.

I wrote up a discussion of this, with examples, at
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#sec-uniform-hash

2) David W. suggested (off-list) that I clarify the relationship
of entropy-based information-theoretic arguments to computational-
feasibility arguments.  I took some steps in this direction; see
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#sec-objectives




Re: building a true RNG

2002-07-29 Thread John S. Denker

Barney Wolff  asked:
 Do we even know that the popular hash functions can actually generate
 all 2^N values of their outputs?

David Wagner replied:
 
 It seems very unlikely that they can generate all 2^N outputs
 (under current knowledge).  

I was temporarily astonished, but he clarified as follows:

   Q2: If we cycle through all messages (possibly very long
   or very short), are all 2^N output values possible?
...
 I'd guess that the answer
 to Q2 is probably Yes, or close to it.

1) Consider the following function H0:  Divide the input into
chunks N bits long.  Calculate the XOR of all the chunks, and
use that as the output.

This meets the definition of hash function, although it would
not be a one-way hash function.  And it would most certainly
be capable of generating all 2^N possible outputs.
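A few lines of Python make H0 concrete (N = 32 is an arbitrary choice here):

```python
# H0: split the input into N-bit chunks and XOR them together.
# Not one-way, but trivially surjective: H0 of a single N-bit chunk
# is that chunk itself, so every output value is reachable.
N = 32  # output width in bits

def h0(data: bytes) -> int:
    word = N // 8
    data = data + b"\x00" * (-len(data) % word)  # pad to whole chunks
    out = 0
    for i in range(0, len(data), word):
        out ^= int.from_bytes(data[i:i + word], "big")
    return out

# Surjectivity: feed the desired output in as a single chunk.
assert h0((0xDEADBEEF).to_bytes(4, "big")) == 0xDEADBEEF
```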

2) I can't prove that a standard hash function such as SHA1
generates all possible codes, but I consider it likely.  It would 
be quite shocking if a strong hash function such as SHA1 generated
fewer codes than a weak function such as H0.

3) For a one-way hash function, one should not expect a _constructive_ 
proof that it generates all possible codes;  such a construction
would violate the one-way property.

4) Here is a rough plausibility argument.  Consider two hash
functions H_1 and H_2 that are independent.  We define
H'(x) := H_1(x) XOR H_2(x)  (1)
which implies
H_1(x) = H'(x) XOR H_2(x)   (2)

Now let's look at the truth table for equation (2), where
the row-index is the H' code, the column-index is the H_2
code, and each entry represents the H_1 code required
to uphold the equation:

 00   01   10   11
00   00   01   10   11
01   01   00   11   10
10   10   11   00   01
11   11   10   01   00

Now let's suppose H_2 is missing one code (say the 10 code) and
H' is missing one code (say the 11 code).  Then H_1 must be missing
at least three codes!  Otherwise there would be a way of combining
a non-missing H_1 code with a non-missing H_2 code to create the
missing H' code.

 00   01   10m  11
00   00   01   10   11
01   01   00   11   10
10   10   11   00   01
11m  11m  10m  01   00m

We can extend this argument by combining lots and lots of 
independent hash functions H_i.   The combination has far 
fewer missing codes than any of the ingredients.  So either
you conclude that 
  a) there is a conspiracy that prevents us from constructing
 independent hash functions to use as ingredients, or
  b) we can produce hash functions with very few missing codes.
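The 2-bit case is small enough to verify exhaustively.  The sketch below models each hash by the set of codes it can emit and, under the same independence assumption as the truth-table argument above, confirms that H_1 must be missing at least three codes:

```python
from itertools import combinations, product

ALL = set(range(4))            # all 2-bit codes

def achievable(h1, h2):
    # codes H' = H1 XOR H2 can reach, assuming the two hashes can be
    # driven independently (the independence assumption in the text)
    return {a ^ b for a, b in product(h1, h2)}

h2 = ALL - {0b10}              # H2 missing exactly one code
missing_hprime = 0b11          # H' required to miss this code

# Fewest codes H1 can be missing while keeping missing_hprime unreachable:
fewest = min(
    len(ALL - set(h1))
    for r in range(1, 5)
    for h1 in combinations(sorted(ALL), r)
    if missing_hprime not in achievable(set(h1), h2)
)
assert fewest == 3             # H1 must be missing at least three codes
```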




Re: building a true RNG

2002-07-27 Thread John S. Denker

I wrote:
  a) if the hash function happens to have a property I call no
 wasted entropy then the whitening stage is superfluous (and
 you may decide to classify the hash as non-simple);

David Honig responded:
 Not wasting entropy does not mean that a function's output
 is white ie uniformly distributed.  E.g., a function
 which doubles every bit: 0->00, 1->11 preserves total
 entropy but does not produce uniformly distributed output.
 (It also reduces entropy/symbol.)

That's a red-herring tangent.  I'm not talking about any old
function that doesn't waste entropy;  I'm talking about a 
!!hash!! function that doesn't waste entropy.  The hash function
has a hard constraint on the word-size of its output.  If it
starts doubling bits, or otherwise putting out redundancies,
then it is wasting entropy.  See the discussion of the BADHASH-2
function in the paper.
  http://www.monmouth.com/~jsd/turbid/

And remember:  in addition to having a non-entropy-wasting hash 
function, we are also required to saturate its input.  Then we can 
conclude that the output is white to a very high degree, as 
quantitatively discussed in the paper.

 The simple-hash's --lets call it a digest-- function is to increase the
 entropy/symbol
 by *reducing* the number of symbols while preserving *total* entropy.

Total entropy is preserved in the non-saturated regime.  This is
documented in upper rows of the table:
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm#tab-saturation
In the saturated regime, some entropy is necessarily lost.  This is
documented in the lower rows of the table.  This is only a small 
percentage, but it is mandatory.  I don't consider this to be wasted 
entropy;  I consider it entropy well spent.  That is, these are 
necessary hash collisions, as opposed to unnecessary ones.

 A function like xor(bit N, bit N+1) which halves the number of bits
 can do this.  While its output is whiter than its input, its output
 will not be perfectly white unless its input was.

See the discussion of BADHASH-1 in the paper.
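The quoted claim is easy to check numerically: XORing adjacent independent bits roughly squares the bias, so the output is whiter than the input but not perfectly white unless the input was.  A quick sketch (the bias value is an arbitrary choice for illustration):

```python
import random

rng = random.Random(1)                      # fixed seed for repeatability
p_one = 0.6                                 # raw bits biased: P(1) = 0.6
bits = [1 if rng.random() < p_one else 0 for _ in range(200_000)]
pairs = [bits[i] ^ bits[i + 1] for i in range(0, len(bits), 2)]

p_raw = sum(bits) / len(bits)
p_xor = sum(pairs) / len(pairs)             # expect 2 * 0.6 * 0.4 = 0.48

assert abs(p_raw - 0.6) < 0.01              # raw bias ~0.1
assert 0.46 < p_xor < 0.50                  # bias shrank to ~0.02, not to 0
```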

 Two scenarios where SHA is contraindicated came up:
 1. hardware
 2. interrupt handlers
 
 Sometimes a hardware RNG will feed raw data into a crypto-WEAK analogue of
 SHA,
 ie a LFSR which does the bit-reduction and mixing functions,
 but doesn't mix as well as SHA.  (LFSR are hardware-friendly)  Separating
 the bit-reduction from the mixing can be useful for analyzing what's
 really going on.

Well, I tend to agree that systems that separate the bit-reduction
from the mixing are easier to analyze, in the sense that it is
easier to find flaws in them.  But that's because they're so flawed!




Re: building a true RNG

2002-07-27 Thread John S. Denker

Amir Herzberg wrote:
 
 So I ask: is there a definition of this `no wasted entropy` property, which
 hash functions can be assumed to have (and tested for), and which ensures
 the desired extraction of randomness?

That's the right question.

The answer I give in the paper is 

 A cryptologic hash function advertises that it is
 computationally infeasible for an adversary to unmix
 the hash-codes.

 A chosen-plaintext (chosen-input) attack will not
 discover inputs that produce hash collisions with
 any great probability.

 In contrast:

 What we are asking is not really very special. We
 merely ask that the hash-codes in the second
 column be well mixed. 

 We ask that the data acquisition system will not
 accidentally produce an input pattern that unmixes
 the hash-codes. 

We believe that anything that makes a good pretense of being 
a cryptologic hash function is good enough for our purposes,
with a wide margin of safety.   If it resists attack when the 
adversary can choose the inputs, it presumably resists attack 
when the adversary can't choose the inputs.




Re: building a true RNG

2002-07-25 Thread John S. Denker

David Honig helped focus the discussion by advocating the 
block diagram:

 Source -- Digitizer -- Simple hash -- Whitener (e.g., DES)

Let me slightly generalize this to:
! Source -- Digitizer -- hash -- Whitener (e.g., DES)

i.e. we defer the question of whether the hash is simple or not.

I continue to claim that
 a) if the hash function happens to have a property I call no 
wasted entropy then the whitening stage is superfluous (and
you may decide to classify the hash as non-simple);  otherwise
 b) if the hash function does not have that property, this
is a defective Random Symbol Generator and 
  b1) the whitener will _at best_ conceal, not remove the 
  defects, and
  b2) this is not the best way to conceal defects.  Very
  definitely not.

To illustrate my point, I will accept David's example of a
simple-hash function;  he wrote:
 Parity is the ultimate hash.

Well, then, suppose that the raw data coming off my digitizer
consists of an endless sequences of even-parity words.  The
words have lots of variability, lots of entropy, but the parity
is always even.  Then the output of the simple-hash is an endless 
sequence of zeros.  I encrypt this with DES.  Maybe triple-DES.  
It's not going to help.  The generator is defective and doesn't 
even have satisfactory error-concealment.
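The failure mode is trivial to reproduce in Python (no DES required; once the parity stage has collapsed the stream to a constant, no whitener can put the entropy back):

```python
import os

def parity(data: bytes) -> int:
    # the "simple hash": XOR of every bit of the input
    return sum(bin(b).count("1") for b in data) % 2

def even_parity_word() -> bytes:
    # lots of variability in the word, but the parity is forced even
    w = bytearray(os.urandom(4))
    if sum(bin(b).count("1") for b in w) % 2:
        w[-1] ^= 1
    return bytes(w)

samples = [parity(even_parity_word()) for _ in range(1000)]
assert set(samples) == {0}   # the "hashed" stream is all zeros
```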

I like my design a lot better:

+ Source -- Digitizer -- good hash

where I have chosen SHA-1 as my hash function.  

Finally, since SHA-1 is remarkably computationally efficient,
I don't understand the motivation to look for simpler hash
functions, especially if they are believed to require whitening
or other post-processing.

=

Thanks again for the questions.  This is a good discussion.




Re: understanding entropy (was: building a true RNG)

2002-07-24 Thread John S. Denker

 At 10:59 PM 7/22/02 -0700, [EMAIL PROTECTED] wrote:
 
 Entropy is not quite a physical quantity -- rather it is on the
 slippery edge between being a physical thing and a philosophical
 thing. If you are not careful, you will slip into a deep epistemic
 bog and find yourself needing to ask how do we know what is
 knowable, and what is the whichness of why?
 
 To avoid such deep waters, know where your entropy is coming from.

Right.

Then David Honig wrote:
 
 We agree on your substantive points re RNGs, I think, 

I join in the agreement.

 but you're interestingly wrong here.  

I don't think jamesd's point was wrong.  One could quibble
about some of the wording, especially if it were taken out
of context, but the passage as a whole makes an important,
valid point.

 Entropy is a physical quantity, it even figures into chemistry.  

Yes, it is a physical quantity.  Yes, it enters into chemistry.
But it also contains an element of subjectivity.

For a careful discussion of what entropy is, including
the element of subjectivity, see
  http://www.monmouth.com/~jsd/physics/thermo-laws.htm#sec-second-law

 The physics-of-computation people (Bennett? Landaur? etc) 

Charles H. Bennett and Rolf Landauer.

 have written about thermodynamics & information.

Not to mention Leo Szilard, Ed Fredkin, Wojciech Zurek,
and others.

 Modulo Chaitin-type mindgames about measuring it :-)

Chaitin's work is profound and well-regarded.  Referring
to it as mindgames is, well, nothing but name-calling
and won't advance the scientific discussion.  If anybody
has a thoughtful objection to Chaitin's work I would be
extremely interested to hear it.

 Anyway we're cryptographers, not philosophers, so we should 
 be safe..

A lack of understanding of what entropy is has gotten more
than one cryptographer into trouble.




Re: building a true RNG

2002-07-23 Thread John S. Denker

Eugen Leitl wrote:
 
 ... framegrabber with a 640x480 24 bit/pixel camera. It doesn't
 compress, is rather noisy, and since self-adjusting I get the maximum
 entropy at maximum darkness.

OK.  Evidently it's dominated by thermal noise, not to
be confused with the Poisson noise recently featured
in another thread.  Not a problem.

 Is there any point in compressing the video before running it through a
 cryptohash? 

There might be a minor point, namely computational efficiency.
A well-chosen compressor might eliminate low-entropy bytes
rather quickly.  Make sure it's a lossless compressor, perhaps
GIF or PNG ... as opposed to a perceptual coder (e.g. JPEG) 
that would presumably throw away some of the entropy.  Calling 
SHA-1 on low-entropy bytes doesn't waste entropy, but wastes CPU
cycles.
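As a sketch of that pipeline (zlib standing in for any lossless compressor, and a repetitive byte pattern standing in for a dark, noisy video frame):

```python
import hashlib
import zlib

def distill(raw: bytes) -> bytes:
    # Losslessly compress first (cheap), then hash; compression discards
    # no entropy, it just gives SHA-1 fewer low-entropy bytes to chew on.
    return hashlib.sha1(zlib.compress(raw)).digest()

frame = bytes([40, 41, 40, 42] * 4096)   # stand-in for a noisy video frame
compact = zlib.compress(frame)
assert len(compact) < len(frame)          # fewer bytes into the hash
assert len(distill(frame)) == 20          # 160-bit output as usual
```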

 How does e.g. SHA-1 fare with very sparse bitvectors?

1) In any good hash function, any input bit should have
about as much effect on the output as any other input bit.
SHA-1 has been analyzed by experts (of which I am not one :-)
and I would imagine they checked this.

2) There are 5 one-bit shifts in the fivefold expansion, and
lots of 5-bit shifts in the main loop, so it shouldn't matter
that the sparse input bits are clustered in the bottom of the
32-bit words.

3) I performed an amateur kick-the-tires test, namely cobbling
up some sparse input vectors, calling SHA-1, and applying
standard statistical tests including Diehard and Maurer's
universal statistical test.  To nobody's surprise, the tests 
didn't detect anything.
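For the curious, a minimal version of that kick-the-tires test fits in a few lines of Python: hash very sparse inputs (one bit set out of 512) and do a crude monobit count over the digests.  Real test suites such as Diehard do far more, but even this would catch gross bias:

```python
import hashlib

# Hash 512-bit vectors with exactly one bit set; count ones in the digests.
ones = total = 0
for i in range(512):
    v = bytearray(64)            # 512-bit input, all zero...
    v[i // 8] |= 1 << (i % 8)    # ...except a single bit
    d = hashlib.sha1(bytes(v)).digest()
    ones += sum(bin(b).count("1") for b in d)
    total += 160

frac = ones / total
assert 0.45 < frac < 0.55   # crude: about half the output bits are ones
```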


Arnold Reinhold wrote:
 
 ... with a portable TV set and a video digitizer 
 should be a good source of high bandwidth noise. In both cases you 
 are just using the receivers as high gain amplifiers of the thermal 
 noise at the antenna terminals.

Thermal noise is good.  Antennas are bad -- just an invitation
to be attacked that way.  Get rid of the antenna.  Keep the high
gain preamp.

Better yet, do as Eugen has done:  Use a framegrabber !!without!! 
the portable TV set.  No RF section at all.  Plenty of entropy,
lower cost, greater simplicity, and less vulnerability to attack.

For that matter, an audio card (without microphone) produces more
than enough entropy for most applications.




Re: It's Time to Abandon Insecure Languages

2002-07-22 Thread John S. Denker

[EMAIL PROTECTED] wrote:
 
 Most security bugs reported these days are issues
 with application semantics (auth bypass, SQL injection, cross-site
 scripting, information disclosure, mobile code execution, ...), not buffer
 overflows. 

Really?  What's the evidence for that?
What definition of most are we using?
One out of 20 doesn't count as most in my book.

When I look at the reports for 2002 year-to-date, at 
http://www.cert.org/advisories/ there are 20 advisories.  
Depending on how you count multi-bug reports, it appears that 
19 out of 20 involve buffer overflows and related issues -- 
things that could easily be prevented by using a language that 
has a built-in string type and automatic object management.
Exotic languages are not required;  C++ would make a huge
impact.  And of course in any language a modicum of skill and
care is required;  it's hard to make a language foolproof 
because fools are so ingenious.

My evidence:  http://www.cert.org/advisories/ 

20- multiple, including writing out-of-bounds
19  buffer overflow
18  multiple, including buffer overflow
17  stack overflow
16  multiple, including stack overflow
15= DoS: internal consistency check
14  buffer overflow
13  buffer overflow
12- format string
11  heap overflow
10- format string
 9  multiple, including buffer overflow
 8  multiple, including buffer overflow
 7- double free
 6  multiple, including buffer overflow
 5  multiple, including heap overflow
 4  buffer overflow
 3  multiple, including buffer overflow
 2  buffer overflow
 1  buffer overflow




Re: It's Time to Abandon Insecure Languages

2002-07-22 Thread John S. Denker

[EMAIL PROTECTED] wrote:
 
 This is more indicative of CERT's focus than the relative frequency of
 security issues. The fact that a large fraction of e-commerce merchants
 let you set the price for the goods you buy is in practice a larger threat
 than the widely publicized buffer overflows.
 
 Semantic security bugs in individual web sites do not rate highly enough
 on Cert's seismograph, but are in practice far more common.

Interesting..

Earlier he wrote
 Most security bugs reported these days are issues
 
 with application semantics

We are talking about _reported_ bugs.  If CERT is not the 
right place to look for reports, please tell us where we
_can_ find appropriate reports.

I was trained as a scientist.  I like to look at data.
Listening to other people's summaries and conclusions is
nice, too, but sometimes it pays off to take a look at 
the real data.




Re: building a true RNG (was: Quantum Computing ...)

2002-07-22 Thread John S. Denker

David Honig wrote:

 The thread here has split into QM  True Randomness and
 what do you need to build a true RNG...

Yup.

 Specifically:  The executive summary of the
 principles of operation of my generator is:
  -- use SHA-1, which is believed to be resistant
 to collisions, even under chosen-input attack.
  -- use it under conditions where the adversary
 cannot choose the input.
  -- the rest is just physics and statistics.
 
 Sure.  There are many examples of this kind of generator,
 using physical sources from video'd lava lamps to radioactive decay
 (incl. semiconductor junctions, resistors, microphone,
 detuned FM radio cards).  

For the humor-impaired, let me point out that the lava 
lamp is a joke.  What it conspicuously lacks is a proof 
of correctness -- that is, a nonzero lower bound on the 
entropy rate of the raw data.  The lava could turn out to 
have a not-very-complicated periodic pattern.  Secondarily, 
the pattern changes so slowly that there must be rather strict 
upper bounds on the entropy rate, small out of all proportion 
to the cost of the contraption.

A detuned FM card is a bad idea, because it is just
begging the opponent to sit next door with an FM
transmitter.

A microphone causes users to worry about privacy, and
in any case doesn't add much beyond what you'd get with
the same input circuitry open-circuited, i.e. everything
except the microphone itself.

Radioactive decay has a poor price/performance ratio, and
isn't nearly as random as neophytes might think, when the
data-acquisition hardware is taken into account.

 2) Vetting a generator by trying to detect patterns
 in the output is like kicking the tires on a used car
 ... go ahead and do it if you want, but it is far from
 sufficient for establishing any reasonable standard of
 correctness.
 
 You can't vet a RNG by looking at its output, 

We agree.

 which is likely whitened anyway, 

Depending on what whitening means;  see below.

 but you can gain confidence by looking at its design 

Yes!

 and measuring the entropy in the raw-physical-source derived bitstream.

That's the point where I would like some more detail.
If measuring means applying statistical tests, then
I've never seen such measurements done in a way that is
really convincing.  Constructive examples would be welcome.

Just saying Joe Schmoe applied all the tests he could 
think of and couldn't compress it more than XY% isn't
going to convince me.

I recommend _calculating_ the entropy from physics principles,
rather than trying to measure the entropy using statistical
tests.  The calculation is based on a handful of macroscopic
physical parameters, such as temperature, gain, and bandwidth.
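As an illustration of what such a calculation looks like (every parameter value below is an assumption chosen for the example, not a real design): the Johnson-noise formula V_rms = sqrt(4kTRB), together with the gain and the ADC quantizer step, gives an entropy estimate per sample from macroscopic physics alone.

```python
import math

# Illustrative parameters (assumptions, not a real design):
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K
R = 50.0           # source resistance, ohms
B = 20e3           # bandwidth, Hz
gain = 1e4         # preamp voltage gain
q = 1.0 / 2**16    # quantizer step of a +/-0.5 V, 16-bit ADC, in volts

# Johnson-noise RMS voltage at the digitizer input, after the gain stage
sigma = gain * math.sqrt(4 * k * T * R * B)

# Differential entropy of a Gaussian, quantized with step q:
#   h ~= log2(sigma / q) + 0.5 * log2(2 * pi * e)   bits per sample
h = math.log2(sigma / q) + 0.5 * math.log2(2 * math.pi * math.e)
# For these parameters h comes out around 8 bits per sample.
```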

 If the raw source has < 1 bits/symbol (and it will), 

Commonly but not necessarily < 1 bit/symbol.
Depending on what you mean by symbol, a 24-bit 
audio card provides a low-cost counterexample.

 it'd be nice if a later stage
 distilled this to near 1 bit/symbol, before whitening.  

We need to be more specific about what the symbol
alphabet is.  If the symbols are ASCII characters,
1 bit per symbol is not nearly good enough.

More importantly, I don't know what whitening means in 
this case.

The output of a good distiller has virtually 100% entropy 
density, i.e. 8 bits per byte.  I say virtually because
perfection is impossible, but 159.98 bits in a 160 bit
word ought to be good enough for most applications :-).

I see no point in whitening the output of such a
distiller.

If whitening means encrypting the output of the distiller,
I consider that just a more-complicated hash function ...
just another few rounds.

 Of course, no one
 outside the box will know, since you're whitening, but it yields resistance
 to (albeit difficult) attacks (e.g., your hash turns out to be attackable).

I assume that means know [that I'm using a distiller]

Well, in principle nobody outside the box knows _anything_
about the mechanism (unless they read the documentation).
One random symbol stream looks a lot like another :-).  
Attackers can always check to see whether the generator 
is broken or not.  But if it's not broken, all they can 
do (from outside) is measure the output-rate.

 I also fail to see harm in measuring/monitoring entropy 
 as the RNG operates.

Certainly there's no harm.  It's like kicking the tires
on the used car.  It gives some people a warm fuzzy, but 
it's far from sufficient for establishing any reasonable 
level of confidence.

I recommend monitoring the aforementioned macroscopic
(non-statistical) physical parameters, both to detect
gross hardware failure and to detect attempted jamming.
But that's very different from the traditional (and
hereby deprecated) procedure of measuring the entropy
using statistical tests.

For lots more detail, see
  http://www.monmouth.com/~jsd/turbid/


Re: building a true RNG (was: Quantum Computing ...)

2002-07-22 Thread John S. Denker

David Honig wrote yet another nice note:
 
 So work in a Faraday cage...

Tee, hee.  Have you ever worked in a Faraday cage?
Very expensive.  Very inconvenient.

 Depending on what whitening means;  see below.
 
 You can imagine simple-hashing (irreversible compression)
 as distinct from whitening which is
 related to cryptographic strength, avalanche, mixing, etc.

I'm not trying to be dense, but I'm totally not 
understanding the distinction here.  The following
block diagram is excellent for focussing the discussion,
(thanks):

 Source -- Digitizer -- Simple hash -- Whitener (e.g., DES)

OK, we have DES as an example of a whitener.  
-- Can somebody give me an example of a simple hash 
that performs irreversible compression of the required
kind?
-- Isn't the anti-collision property required of even
the simplest hash?  Isn't that tantamount to a very
strong mixing property?  If there's strong mixing in
the simple hash function, why do we need more mixing
in the later whitening step?
-- What is meant by cryptologic strength?  Strength
against what kind of attack?  If this means in particular
the one-way property, why do I need it?  I can understand
why a !!pseudo!! random symbol generator needs the one-way
property, to protect its internal state, but since my
generator has no secret state to protect, why do I need
any cryptologic properties other than mixing?
-- BTW note that the term avalanche is probably not
helpful to this discussion, because it is usually defined
(see e.g. Handbook of Applied Cryptography) in terms of
single-bit flips.  The anti-collision property of the hash
function demands resistance to multi-bit flips.

 In this view, SHA combines the compression (aka digestion)
 function with the crypto-strength whitening

If you want to think of it that way, then we come
to the same final state.

I assume digestion means the same as distillation?

 You collect some representative raw data, and run a number of
 entropy measurements on that sample.  You find < 1 bit/baud.

In particular you have found an upper bound.
To paraphrase Dijkstra:  testing can find an upper bound
on the entropy density, but it can never find a lower bound.

 You run the data through an algorithm which produces fewer bits.
 You measure the entropy of the result.  When successive (or
 'stronger') runs measure at ~1 b/bd, you have distilled
 entropy from that sample.  

No, you have just reached the limits of your chosen number
of entropy measurements.  You cannot convince a skeptic that
a new test, discovered tomorrow, will not greatly lower your
new (1b/bd) upper bound.
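A concrete illustration of why the measurement is only an upper bound: a stream seeded with 32 bits carries at most 32 bits of real entropy, yet a compression-based "measurement" happily reports nearly 8 bits per byte.  (Python sketch; zlib stands in for whichever compressors the tester chose.)

```python
import random
import zlib

# A PRNG seeded with 32 bits has at most 32 bits of real entropy.
rng = random.Random(12345)
stream = bytes(rng.getrandbits(8) for _ in range(1 << 16))

ratio = len(zlib.compress(stream, 9)) / len(stream)
assert ratio > 0.95   # measured "entropy": ~8 bits/byte -- but anyone who
                      # discovers the generator and seed knows the true
                      # entropy is a few bytes, far below this upper bound
```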

 To use this in crypto, you'd
 put it through a whitener --feed blocks to DES-- for
 belts-and-suspenders assurance.  And because you don't
 want someone looking through your simple-hashing logic
 back to your source.

I say again that I don't need the one-way property.  At
each iteration, the input to the hash function is unrelated
to all past and future inputs.

We agree that the belt-and-suspenders approach is a standard 
way of achieving high reliability.  It works in crypto and in
many unrelated fields
  http://www.monmouth.com/~jsd/how/htm/misc.html#sec-layers-safety

If you want to XOR the output of the HRNG with some nice
PRNG, that will give excellent error-concealment in the
case of a gross failure of one or the other.

But (minor point) I recommend we continue to call a belt a belt, 
and call a suspender a suspender.  Call a HRNG a HRNG, and call
a PRNG a PRNG.  Adding a strong one-way function to my HRNG seems
like a total waste of CPU cycles.

 Once you put it through DES, anything (eg the integers) appears random.
 That's why you measure before whitening, if possible.  

We agree that measuring after whitening is pointless,
given the current state of the art, namely that encryption
algorithms are incomparably stronger than automatic measurement
(pattern-finding) algorithms.

 I recommend _calculating_ the entropy from physics principles,
 rather than trying to measure the entropy using statistical
 tests.  The calculation is based on a handful of macroscopic
 physical parameters, such as temperature, gain, and bandwidth.
 
 You measure because your model may have overlooked something.

Measure what?  Measure how?  If I run diehard on my raw
data it will tell me that the data has entropy density
far less than 8 bits per byte -- but duh, I already knew
that.  Running standard compression algorithms (Lempel-Ziv)
or whatever will give me an upper bound that is much,
much, higher than my calculated lower bound -- so even
if I've overlooked something or made an error in the calculation
the measurement is not sensitive enough to catch it.

 The output of a good distiller has virtually 100% entropy
 density, i.e. 8 bits per byte.  I say virtually because
 perfection is impossible, but 159.98 bits in a 160 bit
 word ought to be good enough for most applications :-).
 
 I agree with the first statement (by definition), I think in crypto you have
 to be 

vulnerability in Outlook PGP plugin

2002-07-12 Thread John S. Denker

http://www.eeye.com/html/Research/Advisories/AD20020710.html

This vulnerability can be exploited by the Outlook user simply
selecting a malicious email, the opening of an attachment is 
not required. 
...
[NAI] have released a patch for the latest versions of the PGP
Outlook plug-in to protect systems from this flaw. Users can 
download the patch from:

http://www.nai.com/naicommon/download/upgrade/patches/patch-pgphotfix.asp


=
By TED BRIDIS, Associated Press Writer

 WASHINGTON (AP) - The world's most popular software for scrambling
 sensitive e-mails suffers from a programming flaw that could allow
 hackers to attack a user's computer and, in some circumstances,
 unscramble messages.

 The software, called Pretty Good Privacy, or PGP, is the de facto
 standard for encrypting e-mails and is widely used by corporate and
 government offices, including some FBI ( news - web sites) agents and
 U.S. intelligence agencies. The scrambling technology is so powerful
 that until 1999 the federal government sought to restrict its sale
 out of fears that criminals, terrorists and foreign nations might use
 it.

 The new vulnerability, discovered weeks ago by researchers at eEye
 Digital Security Inc., does not exploit any weakness in the complex
 encrypting formulas used to scramble messages into
 gibberish. Instead, hackers are able to attack a programming flaw in
 an important piece of companion software, called a plug-in, that
 helps users of Microsoft Corp.'s Outlook e-mail program encrypt
 messages with a few mouse clicks.

 Outlook itself has emerged as the world's standard for e-mail
 software, with tens of millions of users inside many of the world's
 largest corporations and government offices. Smaller numbers use the
 Outlook plug-in to scramble their most sensitive messages so that
 only the recipient can read them.

 It's not the number of people using PGP but the fact that they're
 using it because they're trying to safeguard their data, said Marc
 Maiffret, the eEye executive and researcher who discovered the
 problem. Whatever the percentage is, it's very important data.

 Maiffret said there was no evidence anyone had successfully attacked
 users of the encryption software with this technique. He said the
 programming flaw was not totally obvious, even to trained
 researchers examining the software blueprints.

 Network Associates Inc. of Santa Clara, Calif., which until February
 distributed both commercial and free versions of PGP, made available
 on its Web site a free download to fix the software. The company
 announced earlier it was suspending new sales of the software, which
 hasn't been profitable, but moved within weeks to repair the problem
 in existing versions. The company's shares fell 50 cents to $17.70 in
 Tuesday trading on the New York Stock Exchange ( news - web sites).

 Free versions of PGP are widely available on the World Wide Web.

 The flaw allows a hacker to send a specially coded e-mail - which
would appear as a blank message followed by an error warning - and
effectively seize control of the victim's computer. The hacker could
then install spy software to record keystrokes, steal financial
records or copy a person's secret unlocking keys to unscramble their
sensitive e-mails. Other protective technology, such as corporate
firewalls, could make this more difficult.

 You can do whatever you want - execute code, read e-mails, install a
 backdoor, steal their keys. You could intercept all that stuff,
 Maiffret said.

 Experts said the convenience of the plug-ins for popular e-mail
 programs broadened the risk from this latest threat, since encryption
 software is famously cumbersome to use without them. Even the creator
 of PGP, Philip Zimmermann, relies on such a plug-in, although
 Zimmermann uses one that works with Eudora e-mail software and does
 not suffer the same vulnerability as Outlook's.

 A plug-in for Microsoft's Outlook Express - a scaled-down version of
 Outlook - is not affected by the flaw.

 Maiffret said his company immediately deactivated the vulnerable
 software on all its computers, which can be done with nine
 mouse-clicks using Outlook, until it could apply the repairs from
 Network Associates. The decision improved security but makes it kind
 of a pain to send encrypted e-mails, he said.

 Zimmermann, in an interview, said PGP software is used quite
 extensively by U.S. agencies, based on sales when he formerly worked
 at Network Associates. He also said use of the vulnerable companion
 plug-in was widespread. Zimmermann declined to specify which
 U.S. agencies might be at risk, but other experts have described
 trading scrambled e-mails using PGP and Outlook with employees at the
 FBI, the Energy Department and even the super-secret National
 Security Agency.

 In theory, only nonclassified U.S. information would be at risk from
 this flaw. Agencies impose strict rules against transmitting any
 classified 

Re: privacy digital rights management

2002-06-26 Thread John S. Denker

I wrote:
  "Perhaps we are using
  wildly divergent notions of privacy"

Donald Eastlake 3rd wrote:

 "You are confusing privacy with secrecy"

That's not a helpful remark.  My first contribution to
this thread called attention to the possibility of
wildly divergent notions of privacy.

Also please note that according to the US Office of
Technology Assessment, such terms "do not possess a single
clear definition, and theorists argue variously ... the
same, completely distinct, or in some cases overlapping."

Please let's avoid adversarial wrangling over terminology.
If there is an important conceptual distinction, please
explain the concepts using unambiguous multi-word descriptions
so that we may have a collegial discussion.

 The spectrum from 2 people knowing something to 2 billion knowing
 something is pretty smooth and continuous. 

That is quite true, but quite irrelevant to the point I was making.
Pick an intermediate number, say 100 people.  Distributing
knowledge to a group of 100 people who share a vested interest in not 
divulging it outside the group is starkly different from distributing 
it to 100 people who have nothing to lose and something to gain by
divulging it.

Rights Management isn't even directly connected to knowledge.  Suppose
I know by heart the lyrics and music to _The Producers_ --- that doesn't 
mean I'm free to rent a hall and put on a performance.

 Both DRM and privacy have to
 do with controlling material after you have released it to someone who
 might wish to pass it on further against your wishes. There is little
 *technical* difference between your doctor's records being passed on to
 assorted insurance companies, your boss, and/or tabloid newspapers and
 the latest Disney movies being passed on from a country where it has
 been released to people/theaters in a country where it has not been
 released.

That's partly true (although overstated).  In any case it supports
my point that fixating on the *technical* issues misses some
crucial aspects of the problem.

 The only case where all holders of information always have a common
 interest is where the number of holders is one.

Colorful language is no substitute for a logical argument.
Exaggerated remarks (... ALWAYS have ...) tend to drive the
discussion away from reasonable paths.  In the real world,
there is a great deal of information held by N people where
(N > 1) and (N < infinity).

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Commercial quantum crypto product - news article

2002-05-31 Thread John S. Denker

Kossmann, Bill asked:
 
 Anybody familiar with this product?
 
 A Swiss company has announced the commercial availability of what it says
 are the first IT products which exploit quantum effects rather than
 conventional physics to achieve their goals. (05/31/2002)
 http://itworld.ca/rpb.cfm?v=20021510001

Actually a couple of products, I will comment only on one
of them, the quantum random number generator.

It reminds me of using a sport-utility vehicle to drive
to the neighbor's house, ten feet away.  There are 
easier ways to get there.  There is no reason to
believe that quantum noise has any practical advantage
over thermal noise.  This point is not discussed in 
id Quantique's principles-of-operation paper
  http://www.idquantique.com/files/paper-qrng.pdf
and indeed they say their goal is to avoid thermal
noise.

You can harvest industrial-strength randomness from
the thermodynamics of electrical circuits, costing
next to nothing.  A draft writeup can be found at:
  http://www.monmouth.com/~jsd/turbid/paper/turbid.htm
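The idea behind harvesting randomness from circuit thermodynamics is a
conditioning step: collect raw noise samples (e.g. Johnson noise read
through a sound-card ADC), lower-bound their per-sample min-entropy, and
hash substantially more input entropy than the number of output bits you
emit. Here is a minimal sketch of that conditioning step in Python. It
is an illustration of the general technique, not the turbid design
itself: the function names and the 2-bits-per-sample entropy bound are
assumptions for the example, and the "ADC readings" are simulated with a
seeded Gaussian stand-in rather than real hardware noise.

```python
import hashlib
import random

def condition(samples, bits_per_sample=2.0, out_bytes=32):
    """Condense raw noise samples into nearly uniform output bytes.

    bits_per_sample is a conservative lower bound on the min-entropy
    of each raw sample.  We require enough samples that the input
    entropy is at least twice the output size (a safety margin), then
    hash everything with SHA-256.
    """
    needed = int((out_bytes * 8 * 2) / bits_per_sample)
    if len(samples) < needed:
        raise ValueError("need at least %d samples, got %d"
                         % (needed, len(samples)))
    h = hashlib.sha256()
    for s in samples:
        # Pack each sample as a signed 16-bit little-endian integer,
        # mimicking typical sound-card ADC output.
        h.update(int(s).to_bytes(2, "little", signed=True))
    return h.digest()[:out_bytes]

# Stand-in for 16-bit ADC readings of amplified thermal noise
# (a real harvester would read the sound card, not a PRNG):
rng = random.Random(0)
raw = [int(rng.gauss(0, 2000)) for _ in range(1024)]
print(condition(raw).hex())
```

The essential point, as in the writeup above, is that the security
argument rests on the physics-derived entropy bound, not on the hash:
the hash merely concentrates entropy that is already there.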




Microsoft to shift strategy toward security and privacy

2002-01-17 Thread John S. Denker

WASHINGTON -- Microsoft Chairman Bill
  Gates announced to employees Wednesday a
  major strategy shift across all its products,
  including its flagship Windows software, to
  emphasize security and privacy over new
  capabilities.

http://www0.mercurycenter.com/breaking/docs/039127.htm






Re: CFP: PKI research workshop

2002-01-14 Thread John S. Denker

[EMAIL PROTECTED] wrote: 
...
 People running around in business selling
 products and services and then disclaiming any liability with regard
 to their performance _for_their_intended_task_ is, IMHO, wrong.

IMHO this presents an unsophisticated notion of 
right versus wrong.

By way of analogy:  Suppose you go skiing in Utah.
A rut left by a previous skier causes you to fall
and break your leg, or worse.  Now everybody involved
has been using the ski area _in_the_intended_manner_
yet something bad happened.  So who is liable? The 
ski area could have groomed that trail, but they 
didn't.  They could have enforced a speed limit, but
they didn't.  They could at least have bought insurance
to cover you, but they didn't.  They simply disclaimed
all liability for your injury.  Not only is this 
disclaimer a matter of contract (a condition of sale
of the lift ticket) it is codified in Utah state law.
Other states are similar.  If you don't like it, don't
ski.

Returning to PKI in particular and software defects in 
particular:  Let's not make this a Right-versus-Wrong
issue.  There are intricate and subtle issues here.
Most of these issues are negotiable.

In particular, you can presumably get somebody to insure
your whole operation, for a price.  In the grand scheme
of things, it doesn't matter very much whether you (the
PKI buyer/user) obtain the insurance directly, or whether
the other party (the PKI maker/vendor) obtains the insurance
and passes the cost on to you.  The insurer doesn't much
care; the risk is about the same either way.

The fact is that today most people choose to self-insure
for PKI defects.  If you don't like it, you have many 
options:
 -- Call up some PKI vendor(s) and negotiate for better
warranty terms.  Let us know what this does to the price.
 -- Call up http://www.napslo.org/ or some such and get
your own insurance.  Let us know the price.
 -- Write your own PKI.  Then defray costs, if desired, 
by becoming a vendor.
 -- Et cetera.

In general, there is a vast gray area between Right
and Wrong.  Most things in my life can be described
as "not perfect, but way better than nothing."


