Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Anon wrote:
 You could even have each participant compile the program himself,
 but still each app can recognize the others on the network and
 cooperate with them.

Matt Crawford replied:
 Unless the application author can predict the exact output of the
 compilers, he can't issue a signature on the object code.  The
 compilers then have to be inside the trusted base, checking a
 signature on the source code and reflecting it somehow through a
 signature they create for the object code.

It's likely that only a limited number of compiler configurations would
be in common use, and signatures on the executables produced by each of
those could be provided.  Then all the app writer has to do is to tell
people, "get compiler version so-and-so and compile with that, and your
object will match the hash my app looks for."
DEI

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: dangers of TCPA/palladium

2002-08-09 Thread Seth David Schoen

R. Hirschfeld writes:

  From: Peter N. Biddle [EMAIL PROTECTED]
  Date: Mon, 5 Aug 2002 16:35:46 -0700
 
  You can know this to be true because the
  TOR will be made available for review and thus you can read the source and
  decide for yourself if it behaves this way.
 
 This may be a silly question, but how do you know that the source code
 provided really describes the binary?
 
 It seems too much to hope for that if you compile the source code then
 the hash of the resulting binary will be the same, as the binary would
 seem to depend somewhat on the compiler and the hardware you compile
 on.

I heard a suggestion that Microsoft could develop (for this purpose)
a provably-correct minimal compiler which always produced identical
output for any given input.  If you believe the proof of correctness,
then you can trust the compiler; the compiler, in turn, should produce
precisely the same nub when you run it on Microsoft's source code as
it did when Microsoft ran it on Microsoft's source code (and you can
check the nub's hash, just as the SCP can).
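The check being described can be sketched in a few lines.  This is purely
illustrative: the hash function, the "nub image" bytes, and the function
names are stand-ins of mine, not anything from the Palladium design.

```python
import hashlib

# Hypothetical workflow: rebuild the nub with the deterministic compiler,
# then compare its hash against the published value -- the same value the
# SCP checks.  "nub image bytes" stands in for the real binary.
published_hash = hashlib.sha1(b"nub image bytes").hexdigest()

def verify(rebuilt_image: bytes, expected_hash: str) -> bool:
    """True iff the locally rebuilt binary hashes to the published value."""
    return hashlib.sha1(rebuilt_image).hexdigest() == expected_hash

print(verify(b"nub image bytes", published_hash))  # a matching rebuild
print(verify(b"tampered image", published_hash))   # anything else fails
```

The whole scheme rests on the compiler producing bit-identical output for
identical input; without that, the first hash comparison fails for honest
rebuilds too.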

I don't know for sure whether Microsoft is going to do this, or is
even capable of doing this.  It would be a cool idea.  It also isn't
sufficient to address all questions about deliberate malfeasance.  Back
in the Clipper days, one question about Clipper's security was "how do
we know the Clipper spec is secure?" (and the answer actually turned
out to be "it's not").  But a different question was "how do we know
that this tamper-resistant chip produced by Mykotronix even implements
the Clipper spec correctly?"

The corresponding questions in Palladium are "how do we know that the
Palladium specs (and Microsoft's nub implementation) are secure?" and
"how do we know that this tamper-resistant chip produced by a
Microsoft contractor even implements the Palladium specs correctly?"

In that sense, TCPA or Palladium can _reduce_ the size of the hardware
trust problem (you only have to trust a small number of components,
such as the SCP), and nearly eliminate the software trust problem, but
you still don't have an independent means of verifying that the logic
in the tamper-resistant chip performs according to its specifications.
(In fact, publishing the plans for the chip would hardly help there.)

This is a sobering thought, and it's consistent with ordinary security
practice, where security engineers try to _reduce_ the number of
trusted system components.  They do not assume that they can eliminate
trusted components entirely.  In fact, any demonstration of the
effectiveness of a security system must make some assumptions,
explicit or implicit.  As in other reasoning, when the assumptions are
undermined, the demonstration may go astray.

The chip fabricator can still -- for example -- find a covert channel
within a protocol supported by the chip, and use that covert channel
to leak your keys, or to leak your serial number, or to accept secret,
undocumented commands.

This problem is actually not any _worse_ in Palladium than it is in
existing hardware.  I am typing this in an ssh window on a Mac laptop.
I can read the MacSSH source code (my client) and the OpenSSH source
code (the server listening at the other end), and I can read specs for
most of the software and most of the parts which make up this laptop,
but I can't independently verify that they actually implement the
specs, the whole specs, and nothing but the specs.

As Ken Thompson pointed out in "Reflections on Trusting Trust", the
opportunities for introducing backdoors in hardware or software run
deep, and can conceivably survive multiple generations, as though they
were viruses capable of causing Lamarckian mutations which cause the
cells of future generations to produce fresh virus copies.  Even if I
have a Motorola databook for the CPU in this iBook, I won't know
whether the microcode inside that CPU is compliant with the spec, or
whether it might contain back doors which can be used against me
somehow.  It's technically conceivable that the CPU microcode on this
machine understands MacOS, ssh, vt100, and vi, and is programmed to
detect "BWA HA HA!" arguments about trusted computing and invisibly
insert errors into them.  I would never know.

This problem exists with or without Palladium.  Palladium would
provide a new place where a particular vendor could put
security-critical (trusted) logic without direct end-user
accountability.  But there are already several such places in the
PC.  I don't think that trust-bootstrapping problem can ever be
overcome, although maybe it's possible to chip away at it.  There is
a much larger conversation about trusted computing in general, which
we ought to be having:

What would make you want to enter sensitive information into a
complicated device, built by people you don't know, which you can't
take apart under a microscope?

That device doesn't have to be a computer.

-- 
Seth David Schoen [EMAIL PROTECTED] | Reading is a right, not a feature!
 

Re: deterministic primality test

2002-08-09 Thread Joseph Ashwood

[I've got some doubts about the content here but I think the
discussion is certainly on charter --Perry]

Since I have received a number of private replies all saying approximately
the same thing ("lookup for small n, use the algorithm for large"), allow
me to extend my observation.
To quote myself from earlier:
 1: if (n is of the form a^b, b > 1) output COMPOSITE;
 2: r=2
 3: while(r < n) {
 4:if(gcd(n,r)=/=1) output COMPOSITE;
 5:if(r is prime)
 6:let q be the largest prime factor of r-1;

n = prime number > 2
1: n is not of the form a^b (we already know n is prime)
2: r=2
3: 2 < n
4: gcd = 1 (since n is prime)
5: 2 is prime
6: q = ?
r=2
q = largest prime factor of (2-1) = largest prime factor of 1.  1 has no
prime factors, since 1 is by definition not prime and has no integer
divisors except 1.

So the algorithm cannot be executed on any prime value, with the exception
of 2 (which it correctly states is prime).  It is possible that the
algorithm is simply incomplete.  The apparent intention is:
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1

A few additional observations under the assumption that this is the desired
statement:
The algorithm is equivalent to (I've kept the numbering from the original
instruction order):
1: if (n is of the form a^b, b > 1) output COMPOSITE;
2: r=2
3: while(r < n) {
5:if(r is prime)
4:if(gcd(n,r)=/=1) output COMPOSITE;
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1
7: if(q >= 4sqrt(r)log(n)) and (n^((r-1)/q) =/= 1 (mod r))
8:break
9: r <- r+1
Proving this is trivial: since r is being stepped sequentially, the only
new results will occur when r is prime (otherwise gcd is not dependable);
this will reduce the number of calls to gcd, and so reduce the asymptotic
time.  This is obviously bounded by the time it takes to check whether r
is prime.  However, note that we can now conceptually replace that step,
without changing anything:
1: if (n is of the form a^b, b > 1) output COMPOSITE;
2: r=2
3: while(r < n) {
4:if(gcd(n,r)=/=1) output COMPOSITE;
6: if(r == 2) q = 1;
else q is the largest prime factor of r-1
7: if(q >= 4sqrt(r)log(n)) and (n^((r-1)/q) =/= 1 (mod r))
8:break
9-2: r <- nextprime(r)
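A minimal, runnable sketch of this amended loop (Python; the helper
functions are naive trial-division stand-ins of mine, chosen for clarity,
not efficiency):

```python
import math

def is_prime(n):
    """Naive trial-division primality check (illustration only)."""
    if n < 2:
        return False
    return all(n % p for p in range(2, math.isqrt(n) + 1))

def is_perfect_power(n):
    """True iff n == a^b for some integers a, b with b > 1."""
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if any(c > 1 and c ** b == n for c in (a - 1, a, a + 1)):
            return True
    return False

def largest_prime_factor(n):
    f, p = 1, 2
    while n > 1:
        while n % p == 0:
            f, n = p, n // p
        p += 1
    return f

def next_prime(r):
    r += 1
    while not is_prime(r):
        r += 1
    return r

def setup_loop(n):
    """Run the amended loop above; return 'COMPOSITE', the r at which
    step 8 breaks, or None if r reaches n without breaking."""
    if is_perfect_power(n):                      # step 1
        return "COMPOSITE"
    r = 2                                        # step 2
    while r < n:                                 # step 3
        if math.gcd(n, r) != 1:                  # step 4
            return "COMPOSITE"
        q = 1 if r == 2 else largest_prime_factor(r - 1)  # step 6
        if q >= 4 * math.sqrt(r) * math.log2(n) and \
                pow(n, (r - 1) // q, r) != 1:    # step 7
            return r                             # step 8
        r = next_prime(r)                        # step 9-2
    return None
```

For a small prime such as n = 31, the loop exhausts every prime below n
without breaking, which is exactly the running-time concern raised below.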

The asymptotic time is now bounded primarily by the running time of
nextprime(); the best current running time for this is not polynomial
(but there are better ways than their step-by-1, prime-check method).  In
fact, assuming that the outer while loop is the primary exit point (which
is certainly true for the worst case), the algorithm presented takes O(n)
time (where n is the function's input value); this is equivalent to the
conventional notation O(2^m), where m is the length of the input in
binary.  The best case is entirely different: best-case running time is
O(gcd()), which is polynomial.  Their claim of polynomial running time is
certainly suspect.  And over the course of the entire algorithm, the
actual running time is limited by a function I've dubbed
listAllPrimesLessThan(n), which is well studied; its output has
exponential length, making such a function strictly exponential.

Additionally it has been noted that certain composite values may pass the
test, regardless of the claim of perfection. It is unlikely that this
function is of much use.
Joe


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



RE: Challenge to TCPA/Palladium detractors

2002-08-09 Thread Lucky Green

Anonymous wrote:
 Matt Crawford replied:
  Unless the application author can predict the exact output of the 
  compilers, he can't issue a signature on the object code.  The 
  compilers then have to be inside the trusted base, checking a 
  signature on the source code and reflecting it somehow through a 
  signature they create for the object code.
 
 It's likely that only a limited number of compiler 
 configurations would be in common use, and signatures on the 
 executables produced by each of those could be provided.  
 Then all the app writer has to do is to tell people, "get 
 compiler version so-and-so and compile with that, and your 
 object will match the hash my app looks for." DEI

The above view may be overly optimistic. IIRC, nobody outside PGP was
ever able to compile a PGP binary from source that matched the hash of
the binaries built by PGP. 

--Lucky Green


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: [ANNOUNCE] OpenSSL 0.9.6f released

2002-08-09 Thread Rich Salz


   The checksums were calculated using the following commands:
 
 openssl md5 < openssl-0.9.6f.tar.gz
 openssl md5 < openssl-engine-0.9.6f.tar.gz

Is there another md5/hash program that's readily available?
Cf: Thompson's reflections on trusting trust.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: [ANNOUNCE] OpenSSL 0.9.6f released

2002-08-09 Thread tc lewis


On Fri, 9 Aug 2002, Rich Salz wrote:
The checksums were calculated using the following commands:
 
  openssl md5 < openssl-0.9.6f.tar.gz
  openssl md5 < openssl-engine-0.9.6f.tar.gz

 Is there another md5/hash program that's readily available?
 Cf: Thompson's reflections on trusting trust.

md5sum is included with many linux/unix-ish/bsd/etc distributions.
it's included in gnu's textutils package i think (and isn't linked
against openssl).

-tcl.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



[ANNOUNCE] OpenSSL 0.9.6g released

2002-08-09 Thread Richard Levitte - VMS Whacker


  OpenSSL version 0.9.6g released
  ===============================

  OpenSSL - The Open Source toolkit for SSL/TLS
  http://www.openssl.org/

  The OpenSSL project team is pleased to announce the release of version
  0.9.6g of our open source toolkit for SSL/TLS.  This new OpenSSL version
  is a bugfix release.

  The most significant changes are:

  o Important building fixes on Unix.
  o Fix crash in CSwift engine. [engine]

  We consider OpenSSL 0.9.6g to be the best version of OpenSSL available
  and we strongly recommend that users of older versions upgrade as
  soon as possible.  OpenSSL 0.9.6g is available for download via HTTP
  and FTP from the following master locations (you can find the various
  FTP mirrors under http://www.openssl.org/source/mirror.html):

o http://www.openssl.org/source/
o ftp://ftp.openssl.org/source/

  [1] OpenSSL comes in the form of two distributions this time.
  The reason for this is that we want to deploy the external crypto device
  support but don't want to have it part of the normal distribution just
  yet.  The distribution containing the external crypto device support is
  popularly called engine, and is considered experimental.  It's been
  fairly well tested on Unix and flavors thereof.  If run on a system with
  no external crypto device, it will work just like the normal distribution.

  The distribution file names are:

  o openssl-0.9.6g.tar.gz [normal]
MD5 checksum: 515ed54165a55df83f4eb4e4e9078d3f
  o openssl-engine-0.9.6g.tar.gz [engine]
MD5 checksum: 87cb788c99e40b6e67268ea35d1d250c

  The checksums were calculated using the following commands:

openssl md5 < openssl-0.9.6g.tar.gz
openssl md5 < openssl-engine-0.9.6g.tar.gz

  Yours,
  The OpenSSL Project Team...  

Mark J. Cox Ben Laurie  Andy Polyakov
Ralf S. Engelschall Richard Levitte Geoff Thorpe
Dr. Stephen Henson  Bodo Möller
Lutz Jänicke    Ulf Möller


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: [ANNOUNCE] OpenSSL 0.9.6f released

2002-08-09 Thread Tim Rice

On Fri, 9 Aug 2002, Rich Salz wrote:


The checksums were calculated using the following commands:
 
  openssl md5 < openssl-0.9.6f.tar.gz
  openssl md5 < openssl-engine-0.9.6f.tar.gz

 Is there another md5/hash program that's readily available?
 Cf: Thompson's reflections on trusting trust.

ftp://ftp.sgi.com/sgi/fax/contrib/md5.tar.gz
ftp://ftp.hylafax.org/contrib/md5.tar.gz


-- 
Tim RiceMultitalents(707) 887-1469
[EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: md5 for bootstrap checksum of md5 implementations? (Re: [ANNOUNCE] OpenSSL 0.9.6f released)

2002-08-09 Thread Barney Wolff

C source for MD5, with a driver and test results, is in RFC 1321, which is
available in so many public places that it's impossible to trojan.
Of course you need a compiler you trust, as Ken pointed out so long ago.
If I were that paranoid I'd rather trust a C compiler than Perl -
at least I can inspect the (statically linked) executable produced.

Does anybody offer a public MD5 web service?  Though if your omnipotent
attacker sits between you and the world, this does no good.
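One quick independent cross-check is to run the RFC 1321 test vectors
against whatever MD5 implementation is at hand; for instance (Python's
hashlib shown here, which is separate from the openssl binary):

```python
import hashlib

# Two of the test vectors published in RFC 1321 (appendix test suite).
# Any honest MD5 implementation must reproduce these exactly.
vectors = {
    b"": "d41d8cd98f00b204e9800998ecf8427e",
    b"abc": "900150983cd24fb0d6963f7d28e17f72",
}

for message, expected in vectors.items():
    assert hashlib.md5(message).hexdigest() == expected
print("MD5 implementation matches the RFC 1321 vectors")
```

This only proves the implementation computes MD5 correctly, of course;
it says nothing about the trustworthiness of the machine running it.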

  Is there another md5/hash program that's readily available?
  Cf: Thompson's reflections on trusting trust.

--  
Barney Wolff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Thanks, Lucky, for helping to kill gnutella

2002-08-09 Thread Bram Cohen

AARG!Anonymous wrote:

 If only there were a technology in which clients could verify and yes,
 even trust, each other remotely.  Some way in which a digital certificate
 on a program could actually be verified, perhaps by some kind of remote,
 trusted hardware device.  This way you could know that a remote system was
 actually running a well-behaved client before admitting it to the net.
 This would protect Gnutella from not only the kind of opportunistic
 misbehavior seen today, but the future floods, attacks and DOSing which
 will be launched in earnest once the content companies get serious about
 taking this network down.

Before claiming that the TCPA, which is from a deployment standpoint
vaporware, could help with gnutella's scaling problems, you should
probably learn something about what gnutella's problems are first. The
truth is that gnutella's problems are mostly that it's a "screamer"
protocol, and limiting which clients could connect would do nothing to fix
that.

Limiting which clients could connect to the gnutella network would,
however, do a decent job of forcing people to pay for one of the
commercial clients. In this way it's very typical of how TCPA works - a
non-solution to a problem, but one which could potentially make money, and
has the support of gullible dupes who know nothing about the technical
issues involved.

 Be sure and send a note to the Gnutella people reminding them of all
 you're doing for them, okay, Lucky?

Your personal vendetta against Lucky is very childish.

-Bram Cohen

Markets can remain irrational longer than you can remain solvent
-- John Maynard Keynes


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Thanks, Lucky, for helping to kill gnutella

2002-08-09 Thread Antonomasia

From: AARG!Anonymous [EMAIL PROTECTED]

 An article on Salon this morning (also being discussed on slashdot),
 http://www.salon.com/tech/feature/2002/08/08/gnutella_developers/print.html,
 discusses how the file-trading network Gnutella is being threatened by
 misbehaving clients.  In response, the developers are looking at limiting
 the network to only authorized clients:

 They intend to do this using digital signatures, and there is precedent
 for this in past situations where there have been problems:

  Alan Cox,  "Years and years ago this came up with a game"

 If only there were a technology in which clients could verify and yes,

 Be sure and send a note to the Gnutella people reminding them of all
 you're doing for them, okay, Lucky?

Now that is resorting to silly accusation.

My copy of Peer to Peer (Oram, O'Reilly) is out on loan but I think Freenet
and Mojo use protocols that require new users to be contributors before they
become consumers.  (Leaving aside that Gnutella seems doomed on scalability
grounds.)

Likewise the WAN shooter games have (partially) defended against cheats by
making the client hold no authoritative data and by disqualifying those
that send impossible traffic.  (Excluding wireframe graphics cards is another
matter.)  If I were a serious gamer I'd want two communities - one for plain
clients to match gaming skills and another for "cheat all you like" contests
to match both gaming and programming skills.

If the Gnuts need to rework the protocol they should do so.

My objection to this TCPA/palladium thing is that it looks aimed at ending
ordinary computing.  If the legal scene were radically different this wouldn't
be causing nearly so much fuss.  Imagine:
- a DoJ that can enforce monopoly law
- copyright that expires in reasonable time
 (5 years for s/w? 15 years for books, films, music...?)
- fair use and first sale are retained
- no concept of indirect infringement (e.g. selling marker pens)
- criminal and civil liability for incorrectly barring access in DRM
- hacking is equally illegal for everybody
- no restriction on making and distributing/selling any h/w,s/w

If Anonymous presents Gnutella for serious comparison with the above issues
I say he's looking in the wrong end of his telescope.

--
##
# Antonomasia   ant notatla.demon.co.uk  #
# See http://www.notatla.demon.co.uk/#
##

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Thanks, Lucky, for helping to kill gnutella

2002-08-09 Thread Pete Chown

Anonymous wrote:

 ... the file-trading network Gnutella is being threatened by
 misbehaving clients.  In response, the developers are looking at limiting
 the network to only authorized clients:

This is the wrong solution.  One of the important factors in the
Internet's growth was that the IETF exercised enough control, but not
too much.  So HTTP is standardised, which allows (theoretically) any
browser to talk to any web server.  At the same time the higher levels
are not standardised, so someone who has an idea for a better browser or
web server is free to implement it.

If you build a protocol which allows selfish behaviour, you have done
your job badly.  Preventing selfish behaviour in distributed systems is
not easy, but that is the problem we need to solve.  It would be a good
discussion for this list.

 Not discussed in the article is the technical question of how this can
 possibly work.  If you issue a digital certificate on some Gnutella
 client, what stops a different client, an unauthorized client, from
 pretending to be the legitimate one?

Exactly.  This has already happened with unauthorised AIM clients.  My
freedom to lie allows me to use GAIM rather than AOL's client.  In this
case, IMO, the ethics are the other way round.  AOL seeks to use its
(partial) monopoly to keep a grip on the IM market.  The freedom to lie
mitigates this monopoly to an extent.

-- 
Pete

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



[no subject]

2002-08-09 Thread AARG!Anonymous

Adam Back writes a very thorough analysis of possible consequences of the
amazing power of the TCPA/Palladium model.  He is clearly beginning to
get it as far as what this is capable of.  There is far more to this
technology than simple DRM applications.  In fact Adam has a great idea
for how this could finally enable selling idle CPU cycles while protecting
crucial and sensitive business data.  By itself this could be a killer
app for TCPA/Palladium.  And once more people start thinking about how to
exploit the potential, there will be no end to the possible applications.

Of course his analysis is spoiled by an underlying paranoia.  So let me
ask just one question.  How exactly is subversion of the TPM a greater
threat than subversion of your PC hardware today?  How do you know that
Intel or AMD don't already have back doors in their processors that
the NSA and other parties can exploit?  Or that Microsoft doesn't have
similar backdoors in its OS?  And similarly for all the other software
and hardware components that make up a PC today?

In other words, is this really a new threat?  Or are you unfairly blaming
TCPA for a problem which has always existed and always will exist?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread AARG!Anonymous

I want to follow up on Adam's message because, to be honest, I missed
his point before.  I thought he was bringing up the old claim that these
systems would give the TCPA root on your computer.

Instead, Adam is making a new point, which is a good one, but to
understand it you need a true picture of TCPA rather than the false one
which so many cypherpunks have been promoting.  Earlier Adam offered a
proposed definition of TCPA/Palladium's function and purpose:

 Palladium provides an extensible, general purpose programmable
 dongle-like functionality implemented by an ensemble of hardware and
 software which provides functionality which can, and likely will be
 used to expand centralised control points by OS vendors, Content
 Distributors and Governments.

IMO this is total bullshit, political rhetoric that is content-free
compared to the one I offered:

: Allow computers separated on the internet to cooperate and share data
: and computations such that no one can get access to the data outside
: the limitations and rules imposed by the applications.

It seems to me that my definition is far more useful and appropriate in
really understanding what TCPA/Palladium are all about.  Adam, what do
you think?

If we stick to my definition, you will come to understand that the purpose
of TCPA is to allow application writers to create closed spheres of trust,
where the application sets the rules for how the data is handled.  It's
not just DRM, it's Napster and banking and a myriad other applications,
each of which can control its own sensitive data such that no one can
break the rules.

At least, that's the theory.  But Adam points out a weak spot.  Ultimately
applications trust each other because they know that the remote systems
can't be virtualized.  The apps are running on real hardware which has
real protections.  But applications know this because the hardware has
a built-in key which carries a certificate from the manufacturer, who
is called the TPME in TCPA.  As the applications all join hands across
the net, each one shows his cert (in effect) and all know that they are
running on legitimate hardware.

So the weak spot is that anyone who has the TPME key can run a virtualized
TCPA, and no one will be the wiser.  With the TPME key they can create
their own certificate that shows that they have legitimate hardware,
when they actually don't.  Ultimately this lets them run a rogue client
that totally cheats, disobeys all the restrictions, shows the user all
of the data which is supposed to be secret, and no one can tell.

Furthermore, if people did somehow become suspicious about one particular
machine, with access to the TPME key the eavesdroppers can just create
a new virtual TPM and start the fraud all over again.

It's analogous to how someone with Verisign's key could masquerade as
any secure web site they wanted.  But it's worse because TCPA is almost
infinitely more powerful than PKI, so there is going to be much more
temptation to use it and to rely on it.

Of course, this will be inherently somewhat self-limiting as people learn
more about it, and realize that the security provided by TCPA/Palladium,
no matter how good the hardware becomes, will always be limited to
the political factors that guard control of the TPME keys.  (I say
keys because likely more than one company will manufacture TPM's.
Also in TCPA there are two other certifiers: one who certifies the
motherboard and computer design, and the other who certifies that the
board was constructed according to the certified design.  The NSA would
probably have to get all 3 keys, but this wouldn't be that much harder
than getting just one.  And if there are multiple manufacturers then
only 1 key from each of the 3 categories is needed.)

To protect against this, Adam offers various solutions.  One is to do
crypto inside the TCPA boundary.  But that's pointless, because if the
crypto worked, you probably wouldn't need TCPA.  Realistically most of the
TCPA applications can't be cryptographically protected.  Computing with
encrypted instances is a fantasy.  That's why we don't have all those
secure applications already.

Another is to use a web of trust to replace or add to the TPME certs.
Here's a hint.  Webs of trust don't work.  Either they require strong
connections, in which case they are too sparse, or they allow weak
connections, in which case they are meaningless and anyone can get in.

I have a couple of suggestions.  One early application for TCPA is in
closed corporate networks.  In that case the company usually buys all
the computers and prepares them before giving them to the employees.
At that time, the company could read out the TPM public key and sign
it with the corporate key.  Then they could use that cert rather than
the TPME cert.  This would protect the company's sensitive data against
eavesdroppers who manage to virtualize their hardware.
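That enrollment step can be sketched abstractly.  The following is my
illustration only: HMAC tags stand in for real certificate signatures,
and every name is invented, not part of any TCPA interface.

```python
import hashlib
import hmac

# The company's signing key, provisioned before machines are handed out.
corporate_key = b"corp-signing-key"

def corp_certify(tpm_pubkey: bytes) -> bytes:
    """Company reads out a machine's TPM public key and signs it
    (HMAC used here as a toy stand-in for a signature)."""
    return hmac.new(corporate_key, tpm_pubkey, hashlib.sha256).digest()

def corp_verify(tpm_pubkey: bytes, cert: bytes) -> bool:
    """Peers on the corporate network check the corporate cert
    instead of the manufacturer's (TPME) cert."""
    return hmac.compare_digest(corp_certify(tpm_pubkey), cert)

cert = corp_certify(b"machine-42-tpm-pubkey")
print(corp_verify(b"machine-42-tpm-pubkey", cert))  # enrolled machine
print(corp_verify(b"rogue-tpm-pubkey", cert))       # virtualized impostor
```

The point of the design is that a TPME-key holder can no longer mint
acceptable credentials; only keys the company enrolled before deployment
will verify.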

For the larger public network, the first thing I would suggest is that
the TPME key ought 

Re: Thanks, Lucky, for helping to kill gnutella (fwd)

2002-08-09 Thread R. A. Hettinga

At 1:03 AM +0200 on 8/10/02, some anonymous, and now apparently
innumerate, idiot in my killfile got himself forwarded to Mr. Leitl's
"cream of cypherpunks" list:


 They will protect us from being able
 to extend trust across the network.

As Dan Geer and Carl Ellison have reminded us on these lists and
elsewhere, there is no such thing as trust, on the net, or anywhere
else.

There is only risk.


Go learn some finance before you attempt to abstract emotion into the
quantifiable.

Actual numerate, thinking people gave up on that nonsense in the
1970's, and the guys who proved the idiocy of trust - showing, like
Laplace said to Napoleon about god, that the capital markets had "no
need of that hypothesis, Sire" - ended up winning a Nobel for that proof
in the 1990's*.

Cheers,
RAH
*The fact that Scholes and Merton eventually ended up betting on
equity volatility like it was actually predictable and got their
asses handed to them for their efforts is beside the point, of
course. :-).


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread James A. Donald

--
On 9 Aug 2002 at 17:15, AARG! Anonymous wrote:
 to understand it you need a true picture of TCPA rather than the 
 false one which so many cypherpunks have been promoting.

As TCPA is currently vaporware, projections of what it will be, 
and how it will be used are judgments, and are not capable of 
being true or false, though they can be plausible or implausible.

Even with the best will in the world, and I do not think the 
people behind this have the best will in the world, there is an 
inherent conflict between tamper resistance and general purpose 
programmability.  To prevent me from getting at the bits as they 
are sent to my sound card or my video card, the entire computer, 
not just the dongle, has to be somewhat tamper resistant, which is 
going to make the entire computer somewhat less general purpose 
and programmable, thus less useful.

The people behind TCPA might want to do something more evil than 
you say they want to do; if they want to do what you say they want 
to do, they might be prevented by law enforcement, which wants 
something considerably more far-reaching and evil; and if they 
want to do it, and law enforcement refrains from reaching out and 
taking hold of their work, they still may be unable to do it for 
technical reasons.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 D7ZUyyAS+7CybaH0GT3tHg1AkzcF/LVYQwXbtqgP
 2HBjGwLqIOW1MEoFDnzCH6heRfW1MNGv1jXMIvtwb


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Re the debate over whether compilers reliably produce identical object
(executable) files:

The measurement and hashing in TCPA/Palladium will probably not be done
on the file itself, but on the executable content that is loaded into
memory.  For Palladium it is just the part of the program called the
"trusted agent".  So file headers with dates, compiler version numbers,
etc., will not be part of the data which is hashed.
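A toy illustration of that distinction (the fixed header length and the
payload bytes are invented for the example; a real loader would parse an
actual executable format):

```python
import hashlib

HEADER_LEN = 64  # toy stand-in for file headers (dates, compiler version)

def content_hash(image: bytes) -> str:
    """Hash only the loaded executable payload, skipping header metadata."""
    return hashlib.sha1(image[HEADER_LEN:]).hexdigest()

code = b"\x90" * 256                  # identical executable content
build_a = b"A" * HEADER_LEN + code    # same code, different headers
build_b = b"B" * HEADER_LEN + code

# Whole-file hashes differ, but the measured content hashes agree.
print(hashlib.sha1(build_a).hexdigest() == hashlib.sha1(build_b).hexdigest())
print(content_hash(build_a) == content_hash(build_b))
```

This is why header churn between builds need not break the measurement,
while any change to the generated code itself still would.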

The only thing that would really break the hash would be changes to the
compiler code generator that cause it to create different executable
output for the same input.  This might happen between versions, but
probably most widely used compilers are relatively stable in that
respect these days.  Specifying the compiler version and build flags
should provide good reliability for having the executable content hash
the same way for everyone.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]