[ Hi Janus, I'm resending this mail to the list since it looks like it
was meant to go there (if people who know Danish write to me in
English, I assume they meant to send the mail to the public). ]

--
Janus


On 08/12/2007, at 2:50, Martin Geisler wrote:

> Janus Dam Nielsen <[EMAIL PROTECTED]> writes:
>
>> On 07/12/2007, at 14:34, Martin Geisler wrote:
>>
>>> I see what you mean about the observer pattern since each variable
>>> in the program is observed by the operations that depend on it. So
>>>
>>>   x = add(a, b)
>>>   y = mul(a, b)
>>>   z = add(x, y)
>>>
>>> would create an add operation that observes the a and b variables
>>> and a mul operation that does the same. Finally, an add operation
>>> would observe x and y and notify z.
>>>
>>> When a and b get a value (from incoming network traffic) they notify
>>> their listeners. The add and mul operations could then proceed by
>>> having x and y, respectively, notify their listeners, which in this
>>> example would be the add operation defining z.
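
A minimal sketch of this wiring with plain Twisted Deferreds (the add
and mul helpers below are made up for illustration and are not VIFF's
actual Share operations):

    from twisted.internet import defer

    def add(d1, d2):
        # The returned Deferred "observes" d1 and d2: it fires with
        # their sum once both inputs have received a value.
        result = defer.gatherResults([d1, d2])
        result.addCallback(lambda vals: vals[0] + vals[1])
        return result

    def mul(d1, d2):
        result = defer.gatherResults([d1, d2])
        result.addCallback(lambda vals: vals[0] * vals[1])
        return result

    # a and b will eventually get their values from incoming network
    # traffic; until then every operation downstream just waits.
    a = defer.Deferred()
    b = defer.Deferred()

    x = add(a, b)
    y = mul(a, b)
    z = add(x, y)

    def report(value):
        print("z = %d" % value)
    z.addCallback(report)

    # Simulate the network delivering the inputs; this triggers the
    # whole chain of notifications and prints z = 11.
    a.callback(2)
    b.callback(3)

Calling a.callback() and b.callback() plays the role of the incoming
network traffic: a and b notify their observers, x and y fire, and
finally z does.
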
>>
>> By the way, I really like this way of looking at the computations as
>> just observers of the results; it has some huge advantages in a
>> parallel world! Good work, Martin!
>
> Thanks!
>
> I think of the calculation as a big network of operators. The values
> flow through this network and are transformed as they pass through an
> operator node. The network is actually a directed acyclic graph, like
> this random example where x and y are multiplied, the product opened,
> and the square root taken. At the same time (in parallel), a and b are
> created using PRSS and added, and b is multiplied by 2 and opened.
>
>    ...        ...   ...
>     |          |     |
>   sqrt()       +   open()
>     |         / \    |
>   open()     /   \   *
>     |       /     \ / \
>     *      a       b   2
>    / \     |       |
>   x   y  prss()  prss()
>
> More stuff might happen later with the top three outgoing edges.
>
> This is the kind of structure that VIFF builds and evaluates using the
> Twisted Deferreds.
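
A sketch of how that example graph could be written down with plain
Deferreds (the node() helper and the stand-ins for prss() and open()
are assumptions for illustration, not VIFF's real API):

    import math
    from twisted.internet import defer

    def node(f, *inputs):
        # An operator node: observe the input Deferreds and fire the
        # output once all of them have a value.
        out = defer.gatherResults(list(inputs))
        out.addCallback(lambda vals: f(*vals))
        return out

    # Leaves: x and y arrive from the network, a and b come from prss().
    x, y, a, b = [defer.Deferred() for _ in range(4)]

    # Left branch: multiply x and y, open the product, take the root.
    product = node(lambda u, v: u * v, x, y)
    opened1 = node(lambda v: v, product)      # stand-in for open()
    root    = node(math.sqrt, opened1)

    # Right branch, built at the same time: a + b, and b * 2 followed
    # by another open().
    summed  = node(lambda u, v: u + v, a, b)
    doubled = node(lambda v: 2 * v, b)
    opened2 = node(lambda v: v, doubled)

    # Nothing runs until values reach the leaves; then they flow up
    # through the graph, notifying each observer in turn.
    for leaf, value in [(x, 9), (y, 4), (a, 1), (b, 5)]:
        leaf.callback(value)

Since Deferred callbacks pass their value on to the next observer, b
can feed both the addition and the multiplication by 2, which is what
gives the graph its fan-out.
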
>
> As you say, it is then easy to evaluate such a graph in parallel as
> long as you keep track of the leaf nodes. By parallel I don't
> necessarily mean having multiple threads running on a single party --
> one could also evaluate the graph in parallel by starting the next
> operations before the first one has finished.
Why not just create a thread for each operation? We are moving into the
world of multicore processors, and it might even become easier to
reason about from a formal perspective.

>
> In our example, the multiplication could be done and the shares sent
> out. Before the other parties send us their shares, we would go on to
> the first prss() and then the second prss() operation.
>
> This is what VIFF does: it keeps evaluating all the nodes that can be
> evaluated. This is done in a single thread, which simplifies things
> because no locking or synchronization is needed.
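
A minimal single-threaded demonstration of this interleaving, written
with plain Twisted (reactor.callLater stands in for the network
traffic; none of this is VIFF's real API):

    from twisted.internet import defer, reactor

    def incoming_share(value, delay):
        # A share that "arrives from the network" after some delay.
        d = defer.Deferred()
        reactor.callLater(delay, d.callback, value)
        return d

    def start_prss(name):
        # Stands in for starting the next prss() operation locally
        # while we wait for the other parties' shares.
        print("starting " + name)

    def report(value):
        print("product opened: %d" % value)
        reactor.stop()

    # The multiplication: our own share has been "sent out", and the
    # result can only fire when the other parties' shares have arrived.
    product = defer.gatherResults([incoming_share(3, 0.2),
                                   incoming_share(4, 0.1)])
    product.addCallback(lambda vals: vals[0] * vals[1])
    product.addCallback(report)

    # Meanwhile the single thread goes on to the operations that can
    # already be started -- no locks, no extra threads.
    start_prss("first prss()")
    start_prss("second prss()")

    reactor.run()

The two prss() stand-ins are started before the product arrives, yet
everything runs in the one thread driven by the reactor, so no locking
is needed.
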
>
> --
> Martin Geisler

_______________________________________________
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk
