Re: I'll show you mine if you show me, er, mine

2005-02-24 Thread Hal Finney
 its multiplications as above you'd still get the
correct encryptions of c_i, but the other pair would be junk and you
wouldn't learn the r_i values:

E(c_1)   junk     junk     E(c_4)   junk     ...
junk     E(c_2)   E(c_3)   junk     E(c_5)   ...

Now you can still decrypt it and verify your password.  But for someone
who is impersonating you and doesn't know your password, they're going
to get a mix of c_i and r_i values that won't add up to zero, and that
won't give them any clue about what the real password is, other than
that they guessed wrong.
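
Here's a toy reminder of the ElGamal property this all leans on:
the scheme is multiplicatively homomorphic, so whoever holds a
ciphertext can scale the hidden plaintext by a constant without
knowing it.  The parameters and names below are mine, just for
illustration, not taken from the paper:

    import random

    # Toy ElGamal over Z_p^* -- small, insecure parameters, demo only.
    p = 2**127 - 1                      # a Mersenne prime; fine for a demo
    g = 3
    x = random.randrange(2, p - 1)      # private key
    y = pow(g, x, p)                    # public key

    def encrypt(m):
        k = random.randrange(2, p - 1)
        return pow(g, k, p), m * pow(y, k, p) % p     # (g^k, m * y^k)

    def decrypt(a, b):
        return b * pow(a, p - 1 - x, p) % p           # b / a^x

    c = 42                              # stands in for a password-derived c_i
    a, b = encrypt(c)

    # Homomorphic step: multiplying the second component by a constant r
    # turns an encryption of c into an encryption of c*r, without knowing c.
    r = 1000003
    assert decrypt(a, b) == c
    assert decrypt(a, b * r % p) == c * r % p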

I'm not 100% sure this will work, i.e. that the attacker can't create
a bogus pair (F/g^k, junk) which would allow him to determine what value
the server multiplied by.  At a minimum I see that if junk = F/g^k then
it will be
obvious what the constant was, so the server would have to check for that.
This is why it's good to have provable security!

This way of doing things would also be quite inefficient; there are
two ElGamal encryptions going back and forth (typically 2048 bits each)
for every bit of your password.  I'll bet the actual paper has a much
more clever scheme which improves the efficiency and has a nice proof
of security.  I'm looking forward to seeing it.

Hal Finney



Re: Off-the-Record Messaging (IM plugin)

2004-12-16 Thread Hal Finney
 Nikita Borisov and Ian Goldberg have released
 Off-the-Record Messaging (http://www.xelerance.com/mirror/otr/),

It looks like Ian Goldberg's site might be a more authoritative source,
http://www.cypherpunks.ca/otr/ .

One interesting feature is authentication + deniability.  You know who
you are talking to, but afterwards anyone who captured a transcript can't
prove who said it.  Usually we do authentication with digital signatures,
but the problem is that a signature binds you to what you say and can be
used against you afterwards.

OTR does it by signing the key exchange, which creates a MAC key for each
direction.  (A MAC is a keyed hash which is then applied to each message.)
Each message gets MAC'd, and this way you know that the messages are
authentic and haven't been tampered with.

This already protects you against the person you're talking to; both of
you know the MAC keys in each direction (one knows them in order to MAC
new messages, the other in order to verify the MACs), so either party can
forge messages that appear to come from the other and create a bogus
transcript.  That means that neither person can publish a transcript and
credibly claim that it authentically represents what was said.

Then there's another trick: when you are through with the MAC keys, you
publish them in the clear.  This does not compromise secrecy; all of
the data is encrypted with a different key.  But it means that now, anyone
could in retrospect forge a transcript showing you saying anything at all.
And that of course means that no such transcript has any credibility in
terms of providing cryptographic evidence of what you said.
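
A toy sketch of the MAC point, using HMAC-SHA256 for concreteness (I'm
not claiming this is OTR's exact construction or key schedule, just the
symmetric property that makes the deniability work):

    import hmac, hashlib

    # mac_key is the per-direction key that, in OTR, falls out of the signed
    # key exchange; here it is just a placeholder byte string.
    mac_key = b"per-direction MAC key from the signed key exchange"

    def tag(msg: bytes) -> bytes:
        return hmac.new(mac_key, msg, hashlib.sha256).digest()

    genuine = b"meet me at noon"
    t = tag(genuine)
    assert hmac.compare_digest(t, tag(genuine))   # receiver verifies, same key

    # Anyone who later learns mac_key -- your conversant, or the whole world
    # once the key is published -- can produce equally valid tags on anything:
    forged = b"something you never said"
    assert hmac.compare_digest(tag(forged), tag(forged))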

Hal



Re: Your source code, for sale

2004-11-08 Thread Hal Finney
Ben Laurie writes:
 How do you make the payment already gone without using a third party?

Of course there has to be a third party in the form of the currency
issuer.  If it is someone like e-gold, they could do as I suggested and
add a feature where the buyer could transfer funds irrevocably into
an escrow account which would be jointly controlled by the buyer and
the seller.  This way the payment is already gone from the POV of the
buyer and if the seller completes the transaction, the buyer has less
incentive to cheat him.

In the case of an ecash mint, a simple method would be for the seller to
give the buyer a proto-coin, that is, the value to be signed at the mint,
but in blinded form.  The buyer could take this to the mint and pay to
get it signed.  The resulting value is no good to the buyer because he
doesn't know the blinding factors, so from his POV the money (he paid
to get it signed) is already gone.  He can prove to the seller that
he did it by using the Guillou-Quisquater protocol to prove in ZK that
he knows the mint's signature on the value the seller gave him.
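
The post doesn't pin down which signature scheme the mint uses, but
assuming the classic Chaum-style RSA blind signature, the mechanics of
the proto-coin look roughly like this (tiny, insecure parameters, demo
only):

    # Toy Chaum-style RSA blind signature.
    p, q = 10007, 10009
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))       # mint's private exponent

    coin = 123456789 % n                    # the value the seller wants signed
    r = 31337                               # seller's blinding factor, coprime to n
    blinded = coin * pow(r, e, n) % n       # the "proto-coin" handed to the buyer

    # The buyer pays the mint to sign the blinded value; without r the result
    # is worthless to him, so from his point of view the money is already gone.
    signed_blinded = pow(blinded, d, n)

    # Only the seller, who knows r, can unblind it into a signature on the coin.
    sig = signed_blinded * pow(r, -1, n) % n
    assert pow(sig, e, n) == coin           # a valid mint signature on the coin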

The seller thereby knows that the buyer's costs are sunk, and so the
seller is motivated to complete the transaction.  The buyer has nothing
to lose and might as well pay the seller by giving him the signed value
from the mint, which the seller can unblind and then (provably,
verifiably) deposit.

Hal



Re: Your source code, for sale

2004-11-05 Thread Hal Finney
Enzo Michelangeli writes:
 In the world of international trade, where mutual distrust between buyer
 and seller is often the rule and there is no central authority to enforce
 the law, this is traditionally achieved by interposing not less than three
 trusted third parties: the shipping line, the opening bank and the
 negotiating bank.

Interesting.  In the e-gold case, both parties have the same bank,
e-gold ltd.  The corresponding protocol would be for the buyer to instruct
e-gold to set aside some money which would go to the seller once the
seller supplied a certain receipt.  That receipt would be an email return
receipt showing that the seller had sent the buyer the content with hash
so-and-so, using a cryptographic email return-receipt protocol.

  You could imagine a trusted third party who would inspect the code and
  certify it, saying the source code with hash XXX appears to be
  legitimate Cisco source code.  Then they could send you the code bit
  by bit and incrementally show that it matches the specified hash,
  using a crypto protocol for gradual release of secrets.  You could
  simultaneously do a gradual release of some payment information in the
  other direction.

 But it's hard to assess the value of partially-released code. If the
 gradual transfer bits-against-cents is aborted, what is left to the buyer
 is likely to be unusable, whereas the partial payment still represents
 good value.

Actually you can arrange it so that neither the partially-released code
nor the partially-transferred ecash is of any value until the whole
transfer finishes.  For example, send the whole thing first in encrypted
form, then release the encryption keys bit-by-bit.  If someone aborts
the protocol early, the best each side can do is a brute force search
over the untransferred bits to try to find the key to unlock the data
they received.
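
Here's a toy version of that last idea; the 20-bit key, the SHA-256
keystream and the numbers are all just illustrative, chosen so the
brute-force loop finishes instantly rather than taking real effort:

    import hashlib, secrets

    # "Send it encrypted, then release the key bit by bit."
    KEY_BITS = 20
    key = secrets.randbits(KEY_BITS)

    def keystream(k: int, length: int) -> bytes:
        # toy stream cipher: stretch the key with SHA-256 (demo only)
        return hashlib.sha256(k.to_bytes(8, "big")).digest()[:length]

    plaintext = b"the source code"
    known_hash = hashlib.sha256(plaintext).digest()   # the hash the buyer holds
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

    revealed = 8                                      # key bits released before an abort
    prefix = key >> (KEY_BITS - revealed)

    # What's left for the aborted party: a search over 2^(KEY_BITS - revealed) keys.
    for low in range(1 << (KEY_BITS - revealed)):
        guess = (prefix << (KEY_BITS - revealed)) | low
        candidate = bytes(a ^ b for a, b in zip(ciphertext, keystream(guess, len(ciphertext))))
        if hashlib.sha256(candidate).digest() == known_hash:
            print("recovered after", low + 1, "guesses out of", 1 << (KEY_BITS - revealed))
            break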

 A more general issue is that source code is not a commodity, and
 intellectual property is not real property: so the traditional cash on
 delivery paradigm just doesn't work, and looking for protocols
 implementing it is kind of moot. If the code is treated as trade secret,
 rather than licensed, an anonymous buyer may make copies and resell them
 on the black market more than recovering his initial cost, at the same
 time undercutting your legitimate sales (see e.g. the cases of RC4 and
 RC2). This can cause losses orders of magnitude larger than refusing to pay
 for his copy.

That's a good point.  Maybe you could use some kind of DRM or trusted
computing concept to try to force the buyer to lock up his received data.
For source code that would be pretty difficult, though, since it needs
to be handled in flexible ways.

Hal



RE: Your source code, for sale

2004-11-04 Thread Hal Finney
Tyler Durden writes:
 So my newbie-style question is, is there an eGold that can be verified, but 
 not accessed, until a 'release' code is sent?

 In other words, say I'm buying some hacker-ed code and pay in egold. I don't 
 want them to be able to 'cash' the gold until I have the code. Meanwhile, 
 they will want to see that the gold is at least there, even if they can't 
 cash it yet.

 Is there a way to send a 'release' to an eGold (or other) payment? Better 
 yet, a double simultaneous release feature makes things even more 
 interesting.

I've been thinking about how to do this kind of thing with ecash.
One project I'm hoping to work on next year is a P2P gambling game (like
poker or something) using my rpow.net which is a sort of play-money ecash.
You'd like to be able to do bets and have some kind of reasonable
assurance that the other guy would pay up if he loses.

In the case of your problem there is the issue of whether the source
code you are buying is legitimate.  Only once you have inspected it and
satisfied yourself that it will suit your needs would you be willing
to pay.  But attaining that assurance will require examining the code in
such detail that maybe you will decide that you don't need to pay.

You could imagine a trusted third party who would inspect the code and
certify it, saying the source code with hash XXX appears to be legitimate
Cisco source code.  Then they could send you the code bit by bit and
incrementally show that it matches the specified hash, using a crypto
protocol for gradual release of secrets.  You could simultaneously do
a gradual release of some payment information in the other direction.

If you don't have a TTP, one idea for using ecash is Markus Jakobsson's
Ripping Coins for a Fair Exchange.  Basically you withdraw ecash from
your account and in effect rip it in half and give half to the seller.
Now he gives you the product and you give him the other half of the coin.
The idea is that once you have given him the ripped ecash (torn
would be a better word because ripping means something else today),
you are out the value of the cash.  You have no more incentive to cheat,
as giving him the other half won't cost you anything additional.

(Even without ecash, a service like egold could mimic this functionality.
You'd create an escrow account with two passwords, one known to each
party.  Only with both passwords could data be withdrawn from the account.
Then the buyer would transfer funds into this account.  After receiving
the goods, the buyer would send his password to the seller.)

The problem is that if the source code you are purchasing is bogus,
or if the other side doesn't come through, you're screwed because you've
lost the value of the torn cash.  The other side doesn't gain anything
by this fraud, but they harm you, and if they are malicious that might
be enough.  And likewise you might be malicious and harm them by refusing
to give them your half of the coin even after you have received the goods.
Again, this doesn't benefit you, you're still out the money, but maybe
you like causing trouble.

Another idea along these lines is gradual payment for gradual release
of the goods.  You pay 10% of the amount and they give you 10% of the
source code.  You pay another 10% and you get the next 10% of the source,
and so on.  (Or it could be nonlinear; maybe they give out half the code
for free, but the final 10% requires a large payment.)  The idea is that
you can sample and make sure they do appear to have the real thing with
a fairly small investment.

If there is some mechanism for the seller to have a reputation (like
Advogato's perhaps, with some spoofing immunity) then the problem is
easier; the seller won't want to screw buyers because it hurts his rep.
In that case it may be reasonable to ask the buyer to pay in advance,
perhaps using the partial payment system just discussed.

These various ideas all have tradeoffs, and in general this kind of
problem is hard to solve because of the complexity of what constitutes a
successful transaction.  A reputation system helps a great deal to resolve
the issues, but opens up problems of its own.  The betting problem I
want to work on is relatively easy because there is no ambiguity about
who wins, but even then it is hard to make sure that neither party can
maliciously harm the other.

Hal F.



Re: Federal program to monitor everyone on the road

2004-10-01 Thread Hal Finney
There was a brief mention of this technology at the Crypto conference.
I provided some pointers in a comment to an Ed Felten blog entry at
http://www.freedom-to-tinker.com/archives/000677.html#comments (scroll
down to the 3rd comment).

Dan Boneh et al. presented a proposal for a group signature scheme so that
the data collected would not be personally identifiable.  The problem is
that the data needs to be authenticated; otherwise rogue transmitters
could send false data and perhaps cause traffic flow problems or even
serious accidents.  So they want to use some cryptographic method.
Putting a single common key into the whole system would make it too easy
for rogues to extract, could not be revoked, and would leave us back at
the rogue transmitter problem.  Using individual certified keys is the
default solution but has privacy problems: everyone would be constantly
transmitting a cryptographically verifiable record of their driving
patterns, speed, lane changes and who knows what else.

With the group signature, everybody has a unique key but their
transmissions are not bound to that key.  And if a key gets scraped
out and goes rogue, it can be revoked.  This is supposed to provide
flexibility, authentication, and privacy.

In practice I am skeptical that society will choose to protect privacy at
the expense of security.  One optional feature of group signatures is a
trusted party who can penetrate the anonymity and learn the identity of
the author of a particular message.  I suspect that any vehicle based
embedded communications system will retain that capability, a sort of
license plate in the virtual realm.  The ability to track the paths of
bank robbers and terrorists would be too inviting for society to give up,
especially if the data is only available to government agents.

Hal



Re: potential new IETF WG on anonymous IPSec

2004-09-09 Thread Hal Finney
 The IETF has been discussing setting up a working group
 for anonymous IPSec.  They will have a BOF at the next IETF
 in DC in November.  They're also setting up a mailing list you
 might be interested in if you haven't heard about it already.
 ...
   http://www.postel.org/anonsec

To clarify, this is not really anonymous in the usual sense.  Rather it
is a proposal for an extension to IPsec to allow for unauthenticated
connections.  Presently IPsec relies on either pre-shared secrets or a
trusted third party CA to authenticate the connection.  The new proposal
would let connections go forward using a straight Diffie-Hellman type
exchange without authentication.  It also proposes, as an option,
authenticating less of each IP packet, covering smaller subsets of it.
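
For concreteness, an unauthenticated exchange of this kind boils down to
plain Diffie-Hellman.  A toy sketch (demo parameters, not whatever group
the proposal would actually specify):

    import secrets

    # Unauthenticated Diffie-Hellman: both ends derive a shared secret,
    # but neither proves its identity to the other.
    p = 2**127 - 1
    g = 3

    a = secrets.randbelow(p - 3) + 2        # initiator's ephemeral secret
    b = secrets.randbelow(p - 3) + 2        # responder's ephemeral secret
    A = pow(g, a, p)                        # exchanged in the clear
    B = pow(g, b, p)                        # exchanged in the clear

    assert pow(B, a, p) == pow(A, b, p)     # same key on both ends
    # This defeats off-path attacks like TCP sequence-number guessing, but an
    # active man-in-the-middle can still substitute his own A and B.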

The point has nothing to do with anonymity; rather it is an attempt
to secure against weaknesses in TCP which have begun to be exploited.
Sequence number guessing attacks are more successful today because of
increasing bandwidth, and there have been several instances where they
have caused disruption on the net.  While workarounds are in place, a
better solution is desirable.

This new effort is Joe Touch's proposal to weaken IPsec so that it uses
fewer resources and is easier to deploy.  He calls the weaker version
AnonSec.  But it is not anonymous; all the parties know the addresses
of their counterparts.  Rather, it allows for a degree of security on
connections between communicators who don't share any secrets or CAs.
I don't think anonymous is the right word for this, and I hope the
IETF comes up with a better one as they go forward.

Hal Finney



Seth Schoen's Hard to Verify Signatures

2004-09-08 Thread Hal Finney
Seth Schoen of the EFF proposed an interesting cryptographic primitive
called a hard to verify signature in his blog at
http://vitanuova.loyalty.org/weblog/nb.cgi/view/vitanuova/2004/09/02 .
The idea is to have a signature which is fast to make but slow to verify,
with the verification speed under the signer's control.  He proposes
that this could be useful with trusted computing to discourage certain
objectionable applications.

The method Seth describes is to include a random value in the signature
but not to include it in the message.  He shows a sample signature
with 3 decimal digits hidden.  The only way to verify it is to try all
possibilities for the random values.  By controlling how much data is
hidden in this way, the signer can control how long it will take to
verify the signature.
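
A toy version of the idea, with a tiny RSA key standing in for whatever
signature scheme would really be used, and three decimal digits withheld:

    import hashlib

    # Sign hash(message || r) but withhold the 3-digit r, so the verifier
    # has to try all 1000 possibilities.  Tiny, insecure RSA parameters.
    p, q = 10007, 10009
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def digest(msg: bytes, r: int) -> int:
        return int.from_bytes(hashlib.sha256(msg + str(r).encode()).digest(), "big") % n

    message = b"I wrote this"
    hidden = 742                                     # the withheld digits
    signature = pow(digest(message, hidden), d, n)   # fast to create

    # Verification: brute-force the hidden digits (500 tries on average).
    check = pow(signature, e, n)
    recovered = next(r for r in range(1000) if check == digest(message, r))
    print("verified; hidden value was", recovered)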

This idea is nice and simple, but one disadvantage is that it is
probabilistic.  It works on average, but occasionally someone might
choose an n-digit value which happens to be low (assuming the values are
chosen at random).  Then they don't get as much protection.  They could
fix this by eliminating too-low values, but then verifiers might exploit
that by testing the low values last.

Another problem is that this method is inherently parallelizable, so
that someone with N computers could solve it N times faster, by having
each computer test a subset of the values.

An alternative is based on the paper, Time-lock puzzles and
timed release Crypto, by Rivest, Shamir and Wagner, from 1996,
http://theory.lcs.mit.edu/~rivest/RivestShamirWagner-timelock.pdf or .ps.
They are looking more at the problem of encrypting data such that it can
be decrypted only after a chosen amount of computing time, but many of
their techniques are applicable to signatures.

The first solution they consider is essentially the same as Seth's,
doing an encryption where n bits of the encryption key are unknown, and
letting people search for the decryption key.  They identify the problems
I noted above (which I stole from their paper).  They also point out BTW
that this concept was used by Ralph Merkle in his paper which basically
foreshadowed the invention of public key cryptography.  Merkle had to
fight for years to get his paper published; otherwise he would be known
as the father of the field rather than just a pioneer or co-inventor.

The next method they describe can be put into signature terms as follows.
Choose the number of modular squarings, t, that you want the verifier
to have to perform.  Suppose you choose t = 1 billion.  Now you will
sign your value using an RSA key whose exponent e = 2^t + 1.  (You also
need to make sure that this value is relatively prime to p-1 and q-1,
but that is easily arranged, for example by letting p and q be strong
primes.)  The way you sign, even using such a large e, is to compute
phi = (p-1)*(q-1) and to compute e' = e mod phi, which can be done using
about 30 squarings of 2 mod phi.  You then compute the secret exponent
d as the multiplicative inverse mod phi of e', in the standard way that
is done for RSA keys.  Using this method you can sign about as quickly
as for regular RSA keys.

However, the verifier has a much harder problem.  He does not know phi,
hence cannot reduce e.  To verify, he has to raise the signature to
the e power as in any RSA signature, which for the exponent I described
will require t = 1 billion modular squaring operations.  This will be a
slow process, and the signer can control it by changing the size of t,
without changing his own work factor materially.
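
Here is a toy run of that scheme, with tiny primes and t scaled way down
so it finishes quickly; the mechanics are exactly as described above:

    import math

    p, q = 10007, 10009
    n, phi = p * q, (p - 1) * (q - 1)

    t = 100_000                              # squarings demanded of the verifier
    e_red = (pow(2, t, phi) + 1) % phi       # e mod phi, via ~log2(t) squarings
    while math.gcd(e_red, phi) != 1:         # strong primes would make this loop
        t += 1                               # unnecessary; here we just nudge t
        e_red = (pow(2, t, phi) + 1) % phi
    d = pow(e_red, -1, phi)                  # private exponent

    m = 1234567 % n                          # message representative
    s = pow(m, d, n)                         # signing is cheap

    # Verifier knows only n and t: compute s^(2^t) by t sequential squarings,
    # then one extra multiply for the "+1" in the exponent e = 2^t + 1.
    x = s
    for _ in range(t):
        x = x * x % n
    assert x * s % n == m
    print("verified after", t, "squarings")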

The authors also point out that modular squaring is an intrinsically
sequential process which cannot benefit from applying multiple computers.
So the only speed variations relevant will be those between individual
computers.

Another idea I had for a use of hard to verify signatures would be if
you published something anonymously but did not want to be known as
the author of it until far in the future, perhaps after your death.
Then you could create a HTV signature on it, perhaps not identifying
the key, just the signature value.  Only in the future when computing
power is cheap would it be possible to try verifying the signature under
different keys and see which one worked.

Hal Finney



Re: Remailers an unsolveable paradox?

2004-09-01 Thread Hal Finney
Spam is the least of the problems for remailers when it comes to
abuse.  You should be more concerned about possible liability for
illegal messages.

In a way, spam has actually made the remailer operator's life easier
as people today are used to receiving annoying and obscene email.
Ten years ago, when I ran a remailer, people were genuinely shocked
to receive unsolicited pornography.  Yes, it's hard to believe today,
but in those quaint times, when the Internet was in black and white,
most users got only a few email messages a day and they were all from
their friends, family and co-workers.

As far as spam goes, next-generation remailers should incorporate hashcash,
www.hashcash.org, to make sending an anonymous message relatively costly.
Let it take a minute or more to generate the stamp necessary for a
message to enter the remailer system and spam will not be a problem,
while legitimate users will have no barrier.
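
A minimal sketch of the stamp mechanics (real hashcash stamps have a
specific ASCII format and originally used SHA-1; the address below is a
placeholder, and the difficulty is kept low so the demo runs in a moment
rather than a minute):

    import hashlib
    from itertools import count

    # Find a nonce so that the hash of resource:nonce starts with `bits` zero bits.
    def mint(resource: str, bits: int = 18) -> str:
        for nonce in count():
            stamp = f"{resource}:{nonce}"
            h = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
            if h >> (256 - bits) == 0:
                return stamp               # ~2^bits hashes of work, on average

    def check(stamp: str, bits: int = 18) -> bool:
        h = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        return h >> (256 - bits) == 0      # a single hash to verify

    stamp = mint("remailer@example.org")   # costly for the sender
    assert check(stamp)                    # nearly free for the remailer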

Hal



RPOW - Reusable Proofs of Work

2004-08-15 Thread Hal Finney
I'd like to invite members of this list to try out my new
hashcash-based server, rpow.net.

This system receives hashcash as a Proof of Work (POW) token, and in
exchange creates RSA-signed tokens which I call Reusable Proof of Work
(RPOW) tokens.  RPOWs can then be transferred from person to person and
exchanged for new RPOWs at each step.  Each RPOW or POW token can only
be used once but since it gives birth to a new one, it is as though the
same token can be handed from person to person.

Because RPOWs are only created from equal-value POWs or RPOWs, they are
as rare and valuable as the hashcash that was used to create them.
But they are reusable, unlike hashcash.

The new concept in the server is the security model.  The RPOW server
is running on a high-security processor card, the IBM 4758 Secure
Cryptographic Coprocessor, validated to FIPS-140 level 4.  This card
has the capability to deliver a signed attestation of the software
configuration on the board, which any (sufficiently motivated) user
can verify against the published source code of the system.  This lets
everyone see that the system has no back doors and will only create RPOW
tokens when supplied with POW/RPOW tokens of equal value.

What creates trust in RPOWs as actually embodying their claimed value
is the knowledge that they were in fact created from a POW (hashcash)
token of equal value.

I have a lot more information about the system at rpow.net, along with
downloadable source code.  There is also a crude web interface which
lets you exchange POWs for RPOWs without downloading the client.

This system is in early beta right now so I'd appreciate any feedback
if anyone has a chance to try it out.  Please keep in mind that if there
are problems I may need to reload the server code, which will invalidate
any RPOW tokens which people have previously created.  So don't go too
crazy hoarding up RPOWs quite yet.

Thanks very much -

Hal Finney



Re: Email tapping by ISPs, forwarder addresses, and crypto proxies

2004-07-07 Thread Hal Finney
 there is still the
problem that the forwarding server could read your email, so you have
only moved the threat from the ISP to another operator.  The difference
I suppose is that the forwarder would be selling privacy services, hence
different ones would compete to get a good reputation.  Any cheating might
be detected by insider whistle blowers or perhaps some kind of audit.

Hal Finney