Quoting Martin Geisler <[EMAIL PROTECTED]>:
> Hi everybody
>
> >> I am confused about the notion of security via adversary traces
> >> presented in those papers. It is described via two properties:
> >>
> >> * Identity Property: a public state P can only lead to one other
> >>   public state P', regardless of the secret state.
> >>
> >> * Commutative Property: computing on secrets leads to the same
> >>   state as opening everything and computing on open values.
> >>
> >> I think you write that this is a new idea -- have you then looked
> >> into how this relates to the more standard notion of Ideal
> >> World/Real World simulation arguments in the UC framework?
> >
> > Yes, this is a new formulation of the security guarantees in the
> > programming language community. I have not compared this to UC; it
> > would be nice to do so.
>

I guess the intuition here is that the identity property gives you privacy:
the public history develops "independently" of the secret state. And the
commutative property gives you correctness: the secret computation develops as
if it had been done correctly in the open.
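As a toy sketch of what these two properties mean operationally (illustrative Python of my own, not VIFF code; the additive sharing scheme and all names here are assumptions), the commutative property for a single addition on shared values can be checked like this:

```python
# Toy additive secret sharing mod a prime; illustrative only, not VIFF code.
import random

P = 2**31 - 1  # a Mersenne prime used as the toy field modulus

def share(secret, n=3):
    """Split a secret into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def open_shares(shares):
    """Open: reveal the secret by summing all shares."""
    return sum(shares) % P

def add_shared(xs, ys):
    """Compute on secrets: each party adds its two local shares."""
    return [(x + y) % P for x, y in zip(xs, ys)]

a, b = 17, 25
xs, ys = share(a), share(b)

# Commutative property: computing on secrets and then opening gives the
# same result as opening everything first and computing on open values.
assert open_shares(add_shared(xs, ys)) == (open_shares(xs) + open_shares(ys)) % P
```

The identity property would then say that the public transcript of this protocol looks the same for every choice of `a` and `b`.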

One problem I see with this is that it doesn't seem to take into account what
happens if you open some of the secret data before the computation is done.
There are many examples where this is the right thing to do, the binary search
in the double auction for instance. Such an action opens the possibility that
the history of the secret data now influences the public part (in some limited
way of course). At least superficially, it seems this would violate the
identity property, but nevertheless the underlying protocol is secure.

Another issue is that it is known from the history of the UC
definition that requiring correctness and privacy as separate properties is not
the right thing to do, or rather it is not always enough. An example: consider
an electronic election where people submit votes encrypted under some public
key, where the secret key is shared among some servers. Then the servers do
some secure computation to get the result of the election. Now, if you don't take
special precautions, a voter could copy the ciphertext sent by some other voter
and submit it as his own vote. This is actually both correct and private, but
it is clearly not what we want: a voter should not be able to always vote the
same as someone else. What is missing is a requirement that you actually know
which input you are contributing; this is implied by the UC definition.
This does not necessarily mean that the Identity+Commutative properties are
weaker than UC; most likely they are incomparable. But it is an issue to be
aware of.
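To make the vote-copying issue concrete, here is a toy sketch (my own construction, not taken from the paper): exponential ElGamal with tiny parameters and, crucially, no proof of plaintext knowledge, so Mallory can re-randomize Alice's ciphertext and thereby always vote the same as Alice without learning her vote:

```python
# Toy exponential-ElGamal election; tiny parameters and no zero-knowledge
# proof of plaintext knowledge - which is exactly what enables the copy.
import random

p = 467                          # small prime (toy parameter)
g = 2                            # element of large order mod p
x = random.randrange(1, p - 1)   # the servers' combined secret key
h = pow(g, x, p)                 # election public key

def encrypt(vote_bit):
    """Encrypt a 0/1 vote as (g^r, g^vote * h^r)."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (pow(g, vote_bit, p) * pow(h, r, p)) % p)

def copy_ballot(ct):
    """Re-randomize someone else's ciphertext WITHOUT knowing the vote."""
    c1, c2 = ct
    s = random.randrange(1, p - 1)
    return ((c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p)

def tally(ballots):
    """Multiply ciphertexts (homomorphic sum), decrypt, solve a small dlog."""
    c1 = c2 = 1
    for a, b in ballots:
        c1, c2 = (c1 * a) % p, (c2 * b) % p
    m = (c2 * pow(c1, p - 1 - x, p)) % p   # m = g^(sum of votes)
    count = 0
    while pow(g, count, p) != m:
        count += 1
    return count

alice = encrypt(1)            # Alice votes 1
mallory = copy_ballot(alice)  # Mallory submits a disguised copy
assert tally([alice, mallory]) == 2   # Mallory's vote always equals Alice's
```

The tally is correct and Mallory learns nothing about Alice's vote, so both properties hold, yet the outcome is clearly not what we want.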

>
> > In the paper on page two, lower left, we write that each server
> > party executes identical copies of the server program in lock-step.
> > Based on this assumption it is reasonable to consider the server as
> > having a single well-defined state. However in Viff this is no
> > longer true due to parallelism. But it would be very nice if we
> > could consider the server as having a single well-defined state.
>
> I don't like this since it introduces a wide gap between the model and
> the real world and I believe such a gap makes it easier to come up
> with inefficient solutions.
>
> Sure you can implement a synchronous network on top of an asynchronous
> network, you can run expensive agreement protocols to ensure that
> everybody has seen every message and that they are all delivered in
> the same order.

I think what people most often do in real life is not to emulate a synchronous
network via subprotocols (which is indeed expensive), but rather to make a
physical assumption on the maximum delivery time of the network. This can make
good sense on a LAN, for instance. Then you just make every synchronous round
last the maximum delivery time plus a little extra to compensate for clock
drift, and then you can assume that if you did not receive something from a
player, he must be corrupt.
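A minimal sketch of that round structure (the names and timing constants are invented for illustration; this is not VIFF's actual networking code):

```python
# Timeout-based synchronous round on top of an asynchronous network.
import time
from collections import namedtuple

Message = namedtuple("Message", ["sender", "payload"])

MAX_DELIVERY = 0.2   # assumed physical bound on network delay (e.g. a LAN)
CLOCK_DRIFT = 0.05   # extra slack to compensate for clock drift

def run_round(inbox, players, round_start):
    """Wait out one round; anyone who stayed silent is declared corrupt."""
    deadline = round_start + MAX_DELIVERY + CLOCK_DRIFT
    time.sleep(max(0.0, deadline - time.time()))
    received = {m.sender: m.payload for m in inbox}
    corrupt = [p for p in players if p not in received]
    return received, corrupt

# Example: player 3 never delivers anything within the round.
start = time.time()
inbox = [Message(1, "share-a"), Message(2, "share-b")]
received, corrupt = run_round(inbox, [1, 2, 3], start)
assert corrupt == [3]
```

On the Internet, `MAX_DELIVERY` would have to be set so conservatively that every round wastes a lot of real time, which is the drawback mentioned above.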

This is just to say that the lock-step assumption is not necessarily
meaningless in real life. But its scope is limited, of course. If you try the
same on the Internet, each round has to last for MUCH more real time, and the
protocol may become much slower than an asynchronous solution.

I would say that it's OK to do this modeling in steps: do a first approximation
that may be limited in scope, and then take it further afterwards. After all
there should also be something for Sigurd to do :-)

regards, Ivan
_______________________________________________
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk
