Re: Non-repudiation (was RE: The PAIN mnemonic)

2003-12-30 Thread Amir Herzberg
At 18:02 29/12/2003, Ben Laurie wrote:
Amir Herzberg wrote:
...
specifications, I use `non-repudiation` terms for some of the 
requirements. For example, the intuitive phrasing of the Non-Repudiation 
of Origin (NRO) requirement is: if any party outputs an evidence evid 
s.t. valid(agreement, evid, sender, dest, message, time-interval, NRO), 
then either the sender is corrupted or sender originated message to the 
destination dest during the indicated time-interval. Notice of course 
that sender here is an entity in the protocol, not the human being 
`behind` it. Also notice this is only an intuitive description, not the 
formal specification.
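A minimal formalization of the intuitive phrasing above (a sketch only; the predicate names mirror the text and the quantification is simplified):

```latex
\forall\, \mathit{evid}:\;
  \mathrm{valid}(\mathit{agreement}, \mathit{evid}, \mathit{sender},
                 \mathit{dest}, m, T, \mathrm{NRO})
  \;\Longrightarrow\;
  \mathrm{corrupted}(\mathit{sender}) \;\lor\;
  \mathrm{originated}(\mathit{sender}, m, \mathit{dest}, T)
```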
What you have here is evidence of origin, not non-repudiation.
Ben, thanks, I'll change to this term (`evidence` instead of 
`non-repudiation`) since it appears from this thread that it may avoid 
confusion (at least for some people).

Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Homepage (and lectures in applied cryptography, secure communication and 
commerce): http://amir.herzberg.name

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-30 Thread Amir Herzberg
At 04:20 30/12/2003, David Wagner wrote:
Ed Reed wrote:
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software),
I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.
[skip]
I'm not sure I agree with your last statement. Consider a typical PC 
running some insecure OS and/or applications, which, as you said in an earlier 
post, is the typical situation and threat. Since the OS is insecure and/or 
(usually) gives administrator privileges to insecure applications, an 
attacker may be able to gain control and then modify some code (e.g. 
install trapdoor). With existing systems, this is hard to prevent. However, 
it may be possible to detect this by some secure monitoring hardware, which 
e.g. checks for signatures by the organization's IT department on any 
installed software. A reasonable response when such a violation is 
detected/suspected is to report to the IT department (the `owner` of the machine).
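A toy sketch of the kind of check such monitoring hardware might run. Hashes stand in for the IT department's signatures to keep the example self-contained; `audit` and the manifest format are invented for illustration:

```python
import hashlib

def audit(files, manifest):
    """Return the names of files whose contents don't match the manifest.

    files:    mapping of filename -> file contents (bytes)
    manifest: mapping of filename -> approved SHA-256 hex digest
              (in a real system this list would be signed by the
              organization's IT department, not merely hashed)
    """
    return [name for name, data in files.items()
            if manifest.get(name) != hashlib.sha256(data).hexdigest()]

# Example: one approved binary, later tampered with.
approved = {"login": b"original login binary"}
manifest = {name: hashlib.sha256(data).hexdigest()
            for name, data in approved.items()}
installed = {"login": b"original login binary with a trapdoor"}

print(audit(installed, manifest))  # ['login'] -- report to the IT department
```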

On the other hand I fully agree with your other comments in this area and 
in particular with...
...
Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an Owner
Override would not disturb these applications.


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-30 Thread Jerrold Leichter
| Rick Wash  wrote:
| There are many legitimate uses of remote attestation that I would like to
| see.  For example, as a sysadmin, I'd love to be able to verify that my
| servers are running the appropriate software before I trust them to access
| my files for me.  Remote attestation is a good technical way of doing that.
|
| This is a good example, because it brings out that there are really
| two different variants of remote attestation.  Up to now, I've been
| lumping them together, but I shouldn't have been.  In particular, I'm
| thinking of owner-directed remote attestation vs. third-party-directed
| remote attestation.  The difference is who wants to receive assurance of
| what software is running on a computer; the former mechanism allows one
| to convince the owner of that computer, while the latter mechanism allows
| one to convince third parties.
|
| Finally, I'll come back to the topic you raised by noting that your
| example application is one that could be supported with owner-directed
| remote attestation.  You don't need third-party-directed remote
| attestation to support your desired use of remote attestation.  So, TCPA
| or Palladium could easily fall back to only owner-directed attestation
| (not third-party-attestation), and you'd still be able to verify the
| software running on your own servers without incurring new risks of DRM,
| software lock-in, or whatever
All of this is fine as long as there is a one-to-one association between
machines and owners of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.  We can call the group the owner of the
machine, and create a single key pair that all of them know.  But this is
brittle - shared secrets always are.  Any member of the group could then
modify the machine and, using his access to the private key, fake the "all
clear" indication.  Each participant should have his own key pair, since
attestation using a particular key pair only indicates security with respect
to those who don't know the private key of the pair - and a member of a
development team for the secure kernel *should* mistrust his fellow team
members!

So, again, there are simple instances where it will prove useful to be able
to maintain multiple sets of independent key pairs.

Now, in the shared distribution machine case, on one level team members should
be mutually suspicious, but on another they *do* consider themselves joint
owners of the machine - so it doesn't bother them that there are key pairs
to which they don't have access.  After all, those key pairs are assigned to
*other* owners of the machine!  But exactly the same mechanism could be used
to assign a key pair to Virgin Records - who we *don't* want to consider an
owner of the machine.

As long as, by owner, you mean a single person, or a group of people who
completely trust each other (with respect to the security problem we are trying
to solve); and as long as each machine has only one owner; then, yes, one
key pair will do.  But as soon as owner can encompass mutually suspicious
parties, you need to have multiple independent key pairs - and then how you
use them, and to whom you grant them, becomes a matter of choice and policy,
not technical possibility.

BTW, even with a single owner, multiple independent key pairs may be useful.
Suppose I have reason to suspect that my private key has been leaked.  What
can I do?  If there is only one key pair around, I have to rebuild my machine
from scratch.  But if I had the foresight to generate *two* key pairs, one of
which I use regularly - and the other of which I sealed away in a safe - then
I can go to the safe, get out my backup key pair, and re-certify my machine.
In fact, it would probably be prudent for me to generate a whole bunch of
such backup key pairs, just in case.

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.  People come up with uses for all kinds of
surprising things.  In this case, it's not even very hard.

An argument that feature X has uses, but also imposes significant and non-
obvious costs, is another thing entirely.  Elucidating the costs is valuable.
But ultimately individuals will make their own analysis of the cost/benefit
ratio, and their calculations will be different from yours.  Carl Ellison, I
think, argued that TCPA will probably never have large penetration because the
dominant purchasing factor for consumers is always initial cost, and the
extra hardware will ensure that TCPA-capable machines will always be more
expensive.  Maybe he's right.

Even if he isn't, as long as people believe that they have control over the
costs associated with 

Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-30 Thread Eric S. Johansson
Scott Nelson wrote:

slowdown = (d * b) / s

where: d = stamp delay in seconds
       s = spam size in bytes
       b = bandwidth in bytes per second


I don't understand this equation at all.

It's the rate limiting factor that counts, not a combination of
stamp speed + bandwidth.
well, stamp speed is a method of rate limiting.  This equation/formula 
gives you the ratio of performance degradation.  So,

Given d=15, b=49152 (aka 384kbps) and s=1000

the slowdown ratio or factor is 737.28 times over what an unimpeded 
spammer can send.  But as you increase spam size, the slowdown factor 
declines.
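Eric's formula and worked example, as a quick sanity check (the 10K spam size in the second call is taken from Scott's "typical 10K spam" figure below):

```python
def slowdown(d, b, s):
    """Ratio by which stamps slow a spammer: (stamp delay * bandwidth) / spam size.

    d: stamp delay in seconds
    b: bandwidth in bytes per second
    s: spam size in bytes
    """
    return (d * b) / s

print(slowdown(15, 49152, 1000))   # 737.28, the worked example above
print(slowdown(15, 49152, 10240))  # 72.0: a 10K spam shrinks the factor to ~73
```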

Assuming 128Kbps up, without a stamp it takes about .6 seconds to
send a typical 10K spam.
If it takes 15 seconds to generate the stamp, then it will take
15 seconds to send a stamped spam.  It won't even take 15.6 seconds,
because the calculation can be done in parallel with the sending.
actually, it would take 15 seconds, but only because you can be sending one 
stamped piece of spam at the same time as you're generating the next 
stamp.  But using your spam size of 10K, the slowdown factor becomes roughly 
73 times.  So they would need 73 machines running full tilt all the time 
to regain their old throughput.  It's entirely possible that one 
evolutionary response to stamps would be to generate larger pieces of 
spam, but that would also slow them down, so we still win, kind of, sort of...


assuming unlimited bandwidth, if a stamp spammer compromises roughly the 
same number of PCs as were compromised during the last worm attack 
(350,000) at 15 seconds per stamp, you end up with 1.4 million stamps 
per minute or 2 billion stamps per day.  When you compare that to the 
amount of spam generated per day (high hundred billion to low trillion), 
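The arithmetic behind those figures (350,000 machines is the worm estimate quoted above):

```python
machines = 350_000        # PCs compromised in the last big worm attack
stamp_seconds = 15        # one stamp per machine every 15 seconds

per_minute = machines * 60 // stamp_seconds
per_day = machines * 86_400 // stamp_seconds

print(per_minute)  # 1400000 -- 1.4 million stamps per minute
print(per_day)     # 2016000000 -- about 2 billion stamps per day
```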



Not according to the best estimates I have.
The average email address receives 20-30 spams a day (almost twice 
what it was last year) and there are only 200-400 million 
email addresses, which works out to less than 10 billion spams per day.
actually, I'm hearing that there are roughly one billion addresses but 
unfortunately have lost the source.  The numbers for spam I'm hearing 
are on the order of 76 billion to 2 trillion per day:

2 trillion/day: http://www.pacificresearch.org/press/clip/2003/clip_03-05-08.html
76 billion: http://www.marketinglaw.co.uk/open.asp?A=703

If you have a better source (and I am sure there are some), I would like 
to hear it.


But there's a much easier way to do the math.

If 1% of the machines on the internet are compromised,
and a stamp takes 15 seconds to generate, then spammers can send
50-60 spams to each person.
(86400 seconds per day / 15 seconds per stamp * 1% of everybody = 57.6)
unfortunately, I think you're making some assumptions that are not fully 
warranted.  I will try to do some research and figure out the number of 
machines compromised.  The best number I had seen to date was about 350,000.
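For reference, the arithmetic in Scott's parenthetical works out as follows (the 1% figure, and the implicit assumption that the machine population is comparable to the address population, are his, not established data):

```python
stamps_per_machine_per_day = 86_400 / 15   # one 15-second stamp at a time
compromised_fraction = 0.01                # Scott's 1%-of-machines assumption

# If machines and addresses are comparable in number, each address
# would see roughly this many stamped spams per day:
spams_per_address = stamps_per_machine_per_day * compromised_fraction
print(round(spams_per_address, 1))  # 57.6
```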

You can reduce that by factoring in the average amount of time
that a compromised machine is on per day.
I fully expect that stamps will rise in price to several minutes,
if camram actually gets any traction.
well, that might be the case, but I have a `who cares` attitude about 
that.  For the most part I rarely send mail to strangers, and the stamp 
generation process runs in the background.  So if it takes several minutes 
to queue up and send a piece of mail a few times a month, what's the 
problem? (yes, I know I'm being cavalier)

Custom hardware?
I can buy a network ready PC at Fry's for $199.
If it takes that machine 30 seconds to generate a stamp, and I leave
it running 24/7, and replace it after 5 months, then the cost
of a hashstamp is still less than 1/500 of a snail-mail stamp.
Granted it's a significant increase in costs over current email,
and therefore potentially a vast improvement, 
but it's still not expensive.
wrong unit of costs.  The stamps still take 15 seconds (give or take), 
which means approximately 5760 stamps per day.  Hardware acceleration is 
an attack against stamps: using dedicated hardware to shrink the cost 
in time of a given size stamp.  So, if an evil someone can build an 
ASIC to shrink the cost of a stamp by 100 times, then a mercenary 
somebody else can build the same functionality and performance as well. 
Plop it onto a USB interface chip, sell it for $15, and balance is restored.
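Scott's cost estimate can be reproduced (the $0.37 snail-mail stamp price and the 150-day lifetime are my assumptions filling in his sketch; electricity and bandwidth are ignored):

```python
pc_cost = 199.00          # Fry's network-ready PC, per Scott's figure
stamp_seconds = 30        # assumed time per stamp on that machine
lifetime_days = 150       # "replace it after 5 months"
snail_stamp = 0.37        # assumed 2003 US first-class postage

stamps = lifetime_days * 86_400 // stamp_seconds
cost_per_stamp = pc_cost / stamps

print(stamps)                              # 432000 stamps over the PC's life
print(cost_per_stamp < snail_stamp / 500)  # True: under 1/500 of a stamp
```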

---eric

--
Speech recognition in use.  Incorrect endings, words, and case is
closer than it appears


Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-30 Thread Alan Brown
On Tue, 30 Dec 2003, Eric S. Johansson wrote:

  But using your spam size of 10K, the slowdown factor becomes roughly
 73 times.  So they would need 73 machines running full tilt all the time
 to regain their old throughput.

Believe me, the professionals have enough 0wned machines that this is
trivial.

On the flipside, it means the machines are burned faster.

 unfortunately, I think you're making some assumptions that are not fully
 warranted.  I will try to do some research and figure out the number of
 machines compromised.  The best number I had seen to date was about 350,000.

It's at least an order of magnitude higher than this, possibly 2 orders,
thanks to rampaging worms with spamware installation payloads
compromising cablemodem- and adsl- connected Windows machines worldwide.

AB






Electronic-voting firm reveals hacker break-in

2003-12-30 Thread R. A. Hettinga
http://seattletimes.nwsource.com/cgi-bin/PrintStory.pl?document_id=2001825724&zsection_id=268448455&slug=votehere300&date=20031230

Tuesday, December 30, 2003, 12:00 A.M. Pacific

The Seattle Times:
Electronic-voting firm reveals hacker break-in

By Monica Soto Ouchi
Seattle Times technology reporter

Bellevue-based VoteHere, which sells software designed to make electronic
voting more secure, said yesterday a hacker it thinks was politically
motivated broke into its computer system and stole nonsensitive internal
documents.

The break-in occurred in October but was only publicly acknowledged
yesterday by Chief Executive Jim Adler.

The incident occurred after the hacker exploited a vulnerability in the
company's corporate software. "VoteHere was a couple days behind updating
a security patch," spokeswoman Stacey Fields said.

VoteHere said it identified the hacker within 24 hours of the break-in and
that it believes the person is affiliated with anti-electronic voting
organizations.

The Washington Cyber Crime Task Force - an affiliation of FBI, U.S. Secret
Service and local law enforcement - is investigating.

No one has been arrested, Fields said.

The breach comes amid growing concern about the security and reliability of
electronic voting.

Bev Harris, who runs a small Renton public-relations firm, helped energize
citizens and computer scientists concerned with the potential for election
fraud after earlier this year discovering an open, unprotected Web site
that revealed source code for Diebold voting machines.

The most vocal opponents have called for electronic-voting systems to be
backed up by voter-verifiable paper audit trails, a move adopted by
California's secretary of state.

VoteHere sells two electronic-voting products. One, encryption-security
software for electronic-voting machines, detects when ballots are
compromised by adding, deleting or changing a vote.

The other is Internet voting software for private and public elections.

Adler said the hacker didn't access sensitive materials because the
company's business model rests upon releasing its source code for all to
see.

VoteHere deploys the same encryption technology used to keep credit-card
data private during online transactions. The secret is the key data, a
10-digit number that unlocks the information.

"We're a bunch of cryptographers that decided all the algorithms must be
public for the system to be trustworthy," Adler said.

"There's no secret in any of this."

VoteHere released some of its source code earlier this year to be
scrutinized by VerifiedVoting.org, a grass-roots organization pressing for
accountability in election systems.

David Dill, the group's founder and a Stanford University computer-science
professor, said he has yet to find a volunteer with the expertise to verify
the company's systems.

"What I think we need, before I'm confident in a system like VoteHere, is a
near consensus among experts in cryptography and election administration
that the system is trustworthy," Dill said.

"At this point, people haven't looked at it enough to gain a consensus."


-- 
-
R. A. Hettinga mailto: [EMAIL PROTECTED]
The Internet Bearer Underwriting Corporation http://www.ibuc.com/
44 Farquhar Street, Boston, MA 02131 USA
... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience. -- Edward Gibbon, 'Decline and Fall of the Roman Empire'



why penny black etc. are not very useful

2003-12-30 Thread Perry E. Metzger

In my opinion, the various hashcash-to-stop-spam style schemes are not
very useful, because spammers now routinely use automation to break
into vast numbers of home computers and use them to send their
spam. They're not paying for CPU time or other resources, so they
won't care if it takes more effort to send. No amount of research into
interesting methods to force people to spend CPU time to send mail
will injure the spammers.

By the way, this of course points out that most spammers these days,
regardless of their protestations about being legitimate
businessmen, are in fact already multiple felons even to a
libertarian like me.  The stats places like Spamhaus produce show that
all the biggest spammers are indeed based in the US even if they use
foreign machines in their work, and throwing them in jail would
probably help.  The fact that the FBI and similar agencies rarely or
never arrest anyone for breaking the law in the course of spamming
just points out that the problem isn't a lack of laws or technology
but raging incompetence and disinterest on the part of law enforcement.

However, as this isn't a spam list, I'll get off of that rant right
now.

I've heard all sorts of other claims about how technology could help
with spam, and they're usually well intentioned but misguided. Two in
particular come to mind:

1. We need public key authentication of all mail. Well, I'll point
out that large integers are cheap and plentiful. Authenticated
spam is pretty much as bad as non-Authenticated spam. If we use
the authentication to only accept mail from people we already know
we want to talk to, we've drastically reduced the usefulness of
mail.

2. The problem is SMTP -- we need to replace it. Every time I hear
this, the speaker rarely has any actual improvements to offer over
what SMTP already does, or, more often, doesn't understand what
SMTP does.

Anyway, enough ranting.

Perry



[ISN] Oh Dan Geer, where art thou?

2003-12-30 Thread R. A. Hettinga

--- begin forwarded text


Date: Tue, 30 Dec 2003 09:30:58 -0600 (CST)
From: InfoSec News [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [ISN] Oh Dan Geer, where art thou?
Sender: [EMAIL PROTECTED]
Reply-To: InfoSec News [EMAIL PROTECTED]
Status:

http://napps.nwfusion.com/weblogs/security/003879.html

By Ellen Messmer
Network World Fusion
12/22/03

Remember Dan Geer - Dr. Dan Geer to you - who was fired from security
firm @stake in late September for sounding off against Microsoft as a
national security threat in the report "CyberSecurity: The Cost of
Monopoly"? (If not, check out the 9/29/03 Security Notes column).
Well, Geer is back in action as the chief scientist for Verdasys, a
security start-up that makes a product called Digital Guardian. And he
vows to continue to be as outspoken as he has been in the past, come
hell or high water.

Geer's previous employer, @stake, has declined to discuss the
particulars about how Geer suddenly departed his post as chief
technical officer the very week the Microsoft-bashing report he
authored appeared under the sponsorship of the Computer and
Communications Industry Association.

Whether you agree with the conclusions of that report or not, it can
certainly be counted as one of the better-argued essays on the dangers
of software monoculture and the possibility of security becoming the
means for vendor product lock-in. However, @stake, which counts
Microsoft as a client customer, apparently didn't find it amusing.
Geer went missing from his job the week the report was published,
with @stake only willing to say it was all a private personnel matter.

Of course, nothing like this stays private for too long, and word got
out from some of Geer's pals that he had been axed at @stake. Geer,
who started his new job as Verdasys' chief scientist last week, had
this to say about the Microsoft-as-monoculture episode: "I was fired
for saying the emperor is naked."

Geer, the main author of the report that had six other contributors,
acknowledges he didn't exactly brief @stake on what he was going to
say about Microsoft. He went straight to CCIA, which has long sought
to have Microsoft brought to heel under anti-trust laws, to back it as
a major trade organization with a megaphone to reach the press.

He added that it's ironic that "three weeks after I'm shot for saying
the emperor has no clothes, the National Science Foundation awards
Mike Reiter a multi-million NSF grant to study software monoculture."

(Mike Reiter is professor of electrical and computer engineering at
Carnegie-Mellon and associate director of its CyLab to advance
cybersecurity. "We are looking at computers the way a physician would
look at genetically related patients, each susceptible to the same
disorder," Reiter is quoted as saying in NSF's November 25 press
release about the grant he and his colleagues were awarded. They are
trying to find a way to keep computers that are basically the same
from being infected by the same thing, like the Code Red and Blaster
worms. Sounds like a search for safe sex for computers, and we wish
them well in their quixotic quest.)

Geer is still somewhat bitter about his experience with @stake, where
he says his job was "to make @stake look bigger than it actually is.
And I was successful at it. But now it's time to move on."

Besides assisting Waltham, Mass.-based Verdasys in developing its
data-integrity products, Geer's official job description now says
he'll have a role in customer and market evangelism. So expect the
outspoken and erudite Geer -- who cut his teeth at MIT's Project
Athena, where Kerberos and the X Window System were developed -- to be
seen at conferences and at customer locations pulling for Verdasys.

"The future is at the data layer," Geer says with his Verdasys hat on.
Putting limits on file use -- which Verdasys has nailed, says Geer --
is the right place to be right now.

As a scientist, one idea Geer hopes to pursue is studying file use on
a statistical basis for lifetimes and transit patterns, perhaps to be
able to detect anomalies. Geer earlier was on the Verdasys board of
advisors, which also includes Bob Blakley, chief scientist for
security and privacy at IBM Tivoli Software and Dennis Devlin, vice
president and chief security officer at Thomson Corp. The privately
funded company was started earlier this year by its CEO Seth Birnbaum.

But just because Geer has a day job (though he'll still also be an
independent risk management consultant for Geer Risk Services), don't
expect him to suddenly go soft. He says he frets just as much about
the problems of open-source code as he does about Microsoft's more
proprietary software.

"The most interesting question right now is the sanctity of the
open-source code pool and attempts to subvert it," he says, referring
to those who may want to insert Trojan horses or do other damage by
breaking into Web sites. He said there needs to be a lot more work on
that subject.

Whatever happens, don't expect this loose cannon of the Internet 

Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-30 Thread Jerrold Leichter
(The use of memory speed leads to an interesting notion:  Functions that are
designed to be differentially expensive on different kinds of fielded hardware.
On a theoretical basis, of course, all hardware is interchangeable; but in
practice, something differentially expensive to calculate on an x86 will remain
expensive for many years to come.)

In fact, such things are probably pretty easy to do - as was determined during
arguments over the design of Java.  The original Java specs pinned down
floating point arithmetic exactly:  A conforming implementation was required
to use IEEE single- and double-precision arithmetic, and give answers
identical at the bit level to a reference implementation.  This is easy to do
on a SPARC.  It's extremely difficult to do on an x86, because x86 FP
arithmetic is done to a higher precision.  The hardware provides only one way
to round an intermediate result to true IEEE single or double precision:
Store to memory, then read back.  This imposes a huge cost.  No one could find
any significantly better way to get the bit-for-bit same results on an x86.
(The Java standards were ultimately loosened up.)

So one should be able to define a highly FP-intensive, highly numerically
unstable calculation, all of whose final bits are considered to be part of
the answer.  This would be extremely difficult to calculate rapidly on an
x86.

Conversely, one could define the answer - possibly to the same problem - as
that produced using the higher intermediate precision of the x86.  This would
be very hard to compute quickly on machines whose FP hardware doesn't provide
exactly the same length intermediate results as the x86.
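The flavor of such a precision-pinned, unstable calculation can be sketched in a few lines. This is Python, simulating reduced (single) precision via a store/reload through struct, in the spirit of the x86 store-to-memory rounding trick; the logistic map is my choice of unstable iteration, not one from the post:

```python
import struct

def to_f32(x):
    # Round an IEEE double to single precision by storing and reloading --
    # the same trick the x86 needs to discard its extra intermediate bits.
    return struct.unpack('f', struct.pack('f', x))[0]

def logistic(x, steps, round32):
    # Chaotic map: per-step rounding differences grow exponentially,
    # so the final bits encode the precision of every intermediate result.
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        if round32:
            x = to_f32(x)
    return x

full = logistic(0.3, 60, round32=False)    # double precision throughout
clipped = logistic(0.3, 60, round32=True)  # rounded to single each step

print(full != clipped)  # True: the two answers disagree
```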

One can probably find problems that are linked to other kinds of hardware. For
example, the IBM PowerPC chip doesn't have generic extended precision values,
but does have a fused multiply/add with extended intermediate values.

Some machines provide fast transfers between FP and integer registers; others
require you to go to memory.  Vector-like processing - often of a specialized,
limited sort intended for graphics - is available on some architectures and
not others.  Problems requiring more than 32 bits of address space will pick
out the 64-bit machines.  (Imagine requiring lookups in a table with 2^33
entries.  8 Gig of real memory isn't unreasonable today - a few thousand
dollars - and is becoming cheaper all the time.  But using it effectively on
the 32-bit machines out there is very hard, typically requiring changes to
the memory mapping or segment registers and such, at a cost equivalent to
hundreds or even thousands of instructions.)

-- Jerry



Re: [camram-spam] Re: Microsoft publicly announces Penny Black PoW postage project

2003-12-30 Thread Richard Clayton
On Tue, 30 Dec 2003, Eric S. Johansson wrote:

  But using your spam size of 10K, the slowdown factor becomes roughly
 73 times.  So they would need 73 machines running full tilt all the time
 to regain their old throughput.

Believe me, the professionals have enough 0wned machines that this is
trivial.

On the flipside, it means the machines are burned faster.

only if the professionals are dumb enough to use the machines that are
making the stamps to actually send the email (since it is only the
latter which are, in practice, traceable)

 unfortunately, I think you're making some assumptions that are not fully
 warranted.  I will try to do some research and figure out the number of
 machines compromised.  The best number I had seen to date was about 350,000.

It's at least an order of magnitude higher than this, possibly 2 orders,
thanks to rampaging worms with spamware installation payloads
compromising cablemodem- and adsl- connected Windows machines worldwide.

the easynet.nl list (recently demised) listed nearly 700K machines that
had been detected (allegedly) sending spam... so since their detection
was not universal it would certainly be more than 700K :(


and in these schemes, where does our esteemed moderator get _his_ stamps
from ? remember that not all bulk email is spam by any means...  or do
we end up with whitelists all over the place and the focus of attacks
moves to the ingress to the mailing lists :(

moan
I never understand why people think spam is a technical problem :( let
alone a cryptographic one :-(
/moan

-- 
richard  Richard Clayton

They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety. Benjamin Franklin
