Bram writes:
> Alice first picks primes p and q, both congruent to 3 mod 4, and finds
> their product, N. She will be able to reuse N any number of times so long
> as she keeps p and q secret.

So if we use Wagner et al's simpler version of this, where the user is
asked to compute s^{2^t} mod n: I considered this myself also, and it
works (I had read the timelock paper, which is where I got it from).

There are other properties it may be useful for hashcash to have for
some applications:

(1) safe to allow user chosen challenges
(2) efficiently verifiable by the challenger
(3) efficiently auditable (verifiable by any third party without any
    private key information)
(4) trap door for challenger (challenger can produce tokens more cheaply)
!(4) no trap door for challenger (challenger can't produce tokens more cheaply)
(5) efficiently amortizable (tokens can be efficiently combined to form
    a small combined token proving that the combined token cost the
    combined amount of CPU time to produce)
(6) non parallelizable
(7) fixed upper bound on work (eg. known solution challenges, we know
    the user won't have to spend more than a certain amount of time)

((4) and !(4) are opposites; (3) implies (2).)

s^{2^t} exhibits properties (1), (2), (4) and (6); it could exhibit
!(4) if p and q were deleted, or if n were computed using some
multiparty computation.
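
For concreteness, here is a minimal Python sketch of that cost
function (the toy primes and parameters are my own, purely
illustrative): the user has to do t sequential squarings mod n, while
a challenger who knows p and q can take the trap door route and
reduce the exponent mod phi(n) first.

p, q = 2003, 2011              # toy primes; real ones would be large and secret
n = p * q
phi = (p - 1) * (q - 1)
s, t = 2, 100000

def user_solve(s, t, n):
    # t sequential modular squarings; inherently serial -- property (6)
    x = s % n
    for _ in range(t):
        x = (x * x) % n
    return x

def challenger_solve(s, t, n, phi):
    # trap door -- property (4): reduce 2^t mod phi(n), then one modexp
    return pow(s, pow(2, t, phi), n)

assert user_solve(s, t, n) == challenger_solve(s, t, n, phi)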

It seems difficult to create a cost function which exhibits (3), (4)
and (6), and even harder to add property (1) to the list.

My original hashcash scheme [1] exhibits properties (1), (2), (3).

Here's a description of an amortizable hashcash scheme exhibiting
(3), !(4) and (5):

The motivation for property (5) (amortizable tokens) is distributed
systems (eg. cypherspace / eternity server systems), where you want a
decentralised method to vote on the popularity of documents, or
reliability, honesty etc. of other nodes.

The basic idea is that when you ask for a hash collision of 20 bits,
50% of the time you get a 21 bit (or larger) collision.  So it turns
out that if you keep the biggest collision you've seen so far on a
string, that is in itself an approximation of the number of
collisions you've received on that string.

There are a few twists to prevent cheating by under-voting (spending
a bit more time to find a collision of exactly 20 bits, so that your
token never raises the running maximum and you never contribute).
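
To see why the largest collision works as a counter: each further
matching bit beyond 20 happens with probability 1/2, so the best of
roughly 2^k submissions is around 20+k bits.  A throwaway Python
simulation of just that statistic (my own illustration):

import random

def best_of(k, base=20):
    # each valid token has >= base matching bits; every further bit
    # matches with probability 1/2 (a geometric tail)
    best = base
    for _ in range(k):
        extra = 0
        while random.random() < 0.5:
            extra += 1
        best = max(best, base + extra)
    return best

for k in (1, 16, 256, 4096):
    print(k, best_of(k))    # grows roughly as base + log2(k)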

So the server computes:

X = hash( X' )
and sends X to the user.

The user computes a hash collision on X with 20 bits (or whatever):

Y = hash( X || C ) 

where Y has 20 (or more) common lsbits with X, and C is a counter.

The server checks how many msbits are common between Y and X', and
publishes the amortizable hashcash token (X', C, n), where n is the
number of common lsbits.

To verify, compute X = hash( X' ) and Y = hash( X || C ), and check
that n lsbits of X and Y are equal; the amortized value of the token
is how many msbits of Y and X' are equal.  Each server keeps the
largest value amortized hashcash token on each string X'.
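
Here is a rough Python sketch of the above, assuming SHA-256 for
hash() and byte concatenation for ||; the bit-counting helpers, the
encoding of the counter and the example targets are my own
illustrative choices.

import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def common_lsbits(a, b):
    # number of equal low-order bits of a and b
    x = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    count = 0
    while count < 8 * len(a) and (x & 1) == 0:
        count += 1
        x >>= 1
    return count

def common_msbits(a, b):
    # number of equal high-order bits of a and b
    x = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    return 8 * len(a) - x.bit_length()

# challenger: keep X' secret, publish X = hash(X')
X_prime = b"some challenger chosen string"
X = H(X_prime)

def mint(X, target=20):
    # user: search counters C until hash(X || C) shares >= target lsbits with X
    C = 0
    while common_lsbits(H(X + str(C).encode()), X) < target:
        C += 1
    return C

def amortized_value(X_prime, C, n):
    # check the token (X', C, n), return its amortized value
    X = H(X_prime)
    Y = H(X + str(C).encode())
    if common_lsbits(Y, X) < n:
        return None                      # invalid token
    return common_msbits(Y, X_prime)     # msbits Y and X' have in common

C = mint(X, target=12)                   # 12 bits just to keep the demo quick
print(amortized_value(X_prime, C, 12))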

To relate the token to a resource name you'd need to use
X_r = hash( X || r ), and have the user find lsbit collisions between
X_r and Y = hash( X_r || C ).  The amortized token is then
(r, X', C, n), where r is the resource name (eg. filename,
servername).
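
A sketch of the resource-bound variant, reusing H, common_lsbits and
common_msbits from the previous sketch (I'm assuming the amortized
value is still measured against X', as in the base scheme):

def mint_for_resource(X, r, target=20):
    # user: collide against X_r = hash(X || r) instead of X
    X_r = H(X + r)
    C = 0
    while common_lsbits(H(X_r + str(C).encode()), X_r) < target:
        C += 1
    return C

def amortized_value_for_resource(X_prime, r, C, n):
    # check the token (r, X', C, n), return its amortized value
    X_r = H(H(X_prime) + r)
    Y = H(X_r + str(C).encode())
    if common_lsbits(Y, X_r) < n:
        return None
    return common_msbits(Y, X_prime)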

Then people simply vote with their CPU time on reliability,
non-disruptiveness, uptime, etc., both during the existing protocols
and during separate voting protocols.

The servers distribute and globally amortize amortizable hashcash
tokens on files.


It would be nice to add property (6) (non-parallelizable) to the list:
(3), !(4) and (5), to give an efficiently amortizable,
non-parallelizable scheme.  Option (1) is nice in some scenarios
(eg. store and forward communications where the server is not
available to create challenges, or the client is offline).  (7) isn't
a big deal as you can choose parameters to give low probability of the
computation taking an unreasonably long time.

It would be nice if the amortization function could be made
non-statistical also.  The amortization function above is statistical
in the sense that each packet makes a random contribution to the
amortized value: either no contribution (if the token is lower value
than the current largest), or a jump to a new maximum.  If the server
got lucky it could achieve a very high value amortized token for
little work.


The RSALABs authors in their paper [2] on denial of service
prevention suggest that, to reduce the standard deviation of the
amount of time the user must spend, multiple sub-puzzles be issued,
with their expected running times summing to the target cost.  When
this is done the sub-puzzles have to be linked somehow, to prevent
sub-puzzles from different runs being combined.
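
One way the linking could be done (a sketch of my own, not
necessarily how the RSALABs paper does it) is to derive every
sub-puzzle from a single fresh per-run nonce, reusing H and
common_lsbits from the earlier sketch, so that sub-solutions minted
under a different nonce don't verify:

import os

def make_challenge(k):
    # one fresh nonce per run; all k sub-puzzles are derived from it
    nonce = os.urandom(16)
    return nonce, [H(nonce + str(i).encode()) for i in range(k)]

def solve_sub(sub, nbits):
    # smaller per-sub-puzzle target, so the k expected running times
    # sum to the full target cost
    C = 0
    while common_lsbits(H(sub + str(C).encode()), sub) < nbits:
        C += 1
    return C

def verify_run(nonce, counters, nbits):
    subs = [H(nonce + str(i).encode()) for i in range(len(counters))]
    return all(common_lsbits(H(s + str(c).encode()), s) >= nbits
               for s, c in zip(subs, counters))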

They also give an example of a cost function exhibiting property (7)
(known solution).


Adam

[1] http://www.cypherspace.org/~adam/hashcash/

[2] RSALABs paper on denial of service prevention (somewhere on
www.rsasecurity.com)
