Re: [viff-devel] Optimizing preprocessing

2009-10-22 Thread Janus Dam Nielsen


Computing directly on the field elements is hacking the abstractions
of VIFF. Computation on field elements or rather the representation of
a Share can be useful as an optimization, however this optimization
should be confined within applications or runtimes, and should not
progress over interface boundaries as I fear you are suggesting.

I think we are in agreement: public methods on the runtimes will keep
returning Shares. Methods used internally in runtimes can return other
things as needed. To me it sounds like a better API to require
preprocessing functions to return a list of Deferreds:

  [D(?), D(?), ...],

instead of a Deferred list of tuples containing Deferreds :-)

I think it will simplify the interface nicely, at least for consumers.
Using simpler types also leads to less memory usage which has a positive
effect on performance, as Marcel notes. So let's go for it.


So this makes 2 votes in favour of it and 1 against it. Maybe we should
have a meeting to discuss it. What do you think?

I can agree on this as well, as long as we don't make field elements
canonical.


You mean that the preprocessing infrastructure should not be restricted
to FieldElements but can handle other items such as the tuples used by
the Orlandi runtime. Is that correct? It was never my plan to introduce
this restriction. I just want to get rid of the Deferreds in the
preprocessing pool because I think that they are unnecessary there.

Then by all means go ahead and speed up the preprocessing. :)



Janus Dam Nielsen

Research and Innovation Specialist, PhD.
CENTRE FOR IT-SECURITY

THE ALEXANDRA INSTITUTE LTD.

T +45 42 22 93 56
E janus.niel...@alexandra.dk
W alexandra.dk




Re: [viff-devel] Optimizing preprocessing

2009-10-22 Thread Marcel Keller

Janus Dam Nielsen wrote:


On 21/10/2009, at 20.28, Marcel Keller wrote:

Martin Geisler wrote:
Janus Dam Nielsen writes:

Hi Marcel,

I am not opposed to your suggestion. However I would like to point out
that in VIFF you compute on shares and not field elements!

Well, we've actually made the outer runtime interfaces in such a way
that add, mul, xor, etc. accept both integers, FieldElements and
Shares. The methods then wrap their input as needed -- or they *don't*
wrap it if that leads to a short cut (e.g., constant multiplication).


I agree (see also my answer).
I would still like to stress that Shares are the basic values in VIFF.
The interface is designed in such a way that we can do various
optimizations, but computing on field elements (a particular
representation of a share) is an optimization, which I am very happy
that the interface allows us to do. But it is still an optimization.


I agree that the design is maybe not optimal semantically, IMHO because
Shares are Deferreds. This leads to the situation that whenever a
callback returns a Share, the Deferred code converts it into the result
carried by the Share. However, a list of Shares returned by a callback
is not converted.
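
A minimal Twisted-only illustration of that behaviour, with plain
Deferreds standing in for Shares (the names here are just illustrative,
not VIFF code):

  # Minimal Twisted sketch; plain Deferreds stand in for Shares.
  from twisted.internet.defer import Deferred, succeed

  def show(result, label):
      print "%s: %r" % (label, result)
      return result

  # Case 1: a callback returns a Deferred (a Share is a Deferred).  Twisted
  # chains it, so the next callback receives the value carried by it.
  d1 = Deferred()
  d1.addCallback(lambda _: succeed(42))
  d1.addCallback(show, "chained")        # receives 42, not a Deferred

  # Case 2: a callback returns a *list* of Deferreds.  Twisted does not look
  # inside containers, so the next callback receives the list itself.
  d2 = Deferred()
  d2.addCallback(lambda _: [succeed(1), succeed(2)])
  d2.addCallback(show, "not chained")    # receives [<Deferred>, <Deferred>]

  d1.callback(None)
  d2.callback(None)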



Computing directly on the field elements is hacking the abstractions
of VIFF. Computation on field elements or rather the representation of
a Share can be useful as an optimization, however this optimization
should be confined within applications or runtimes, and should not
progress over interface boundaries as I fear you are suggesting.

I think we are in agreement: public methods on the runtimes will keep
returning Shares. Methods used internally in runtimes can return other
things as needed. To me it sounds like a better API to require
preprocessing functions to return a list of Deferreds:
 [D(?), D(?), ...],
instead of a Deferred list of tuples containing Deferreds :-)
I think it will simplify the interface nicely, at least for consumers.
Using simpler types also leads to less memory usage which has a positive
effect on performance, as Marcel notes. So let's go for it.


So this makes 2 votes in favour of it and 1 against it. Maybe we 
should have a meeting to discuss it. What do you think?
I can agree on this as well, as long as we don't make field elements 
canonical.


You mean that the preprocessing infrastructure should not be restricted 
to FieldElements but can handle other items such as the tuples used by 
the Orlandi runtime. Is that correct? It was never my plan to introduce 
this restriction. I just want to get rid of the Deferreds in the 
preprocessing pool because I think that they are unnecessary there.



Re: [viff-devel] Optimizing preprocessing

2009-10-22 Thread Janus Dam Nielsen


On 21/10/2009, at 20.28, Marcel Keller wrote:

Martin Geisler wrote:

Janus Dam Nielsen  writes:

Hi Marcel,

I am not opposed to your suggestion. However I would like to point out
that in VIFF you compute on shares and not field elements!

Well, we've actually made the outer runtime interfaces in such a way
that add, mul, xor, etc. accept both integers, FieldElements and
Shares. The methods then wrap their input as needed -- or they *don't*
wrap it if that leads to a short cut (e.g., constant multiplication).


I agree (see also my answer).
I would still like to stress that Shares are the basic values in VIFF.
The interface is designed in such a way that we can do various
optimizations, but computing on field elements (a particular
representation of a share) is an optimization, which I am very happy
that the interface allows us to do. But it is still an optimization.



Computing directly on the field elements is hacking the abstractions
of VIFF. Computation on field elements or rather the representation of
a Share can be useful as an optimization, however this optimization
should be confined within applications or runtimes, and should not
progress over interface boundaries as I fear you are suggesting.

I think we are in agreement: public methods on the runtimes will keep
returning Shares. Methods used internally in runtimes can return other
things as needed. To me it sounds like a better API to require
preprocessing functions to return a list of Deferreds:

  [D(?), D(?), ...],

instead of a Deferred list of tuples containing Deferreds :-)

I think it will simplify the interface nicely, at least for consumers.
Using simpler types also leads to less memory usage which has a positive
effect on performance, as Marcel notes. So let's go for it.


So this makes 2 votes in favour of it and 1 against it. Maybe we should
have a meeting to discuss it. What do you think?

I can agree on this as well, as long as we don't make field elements
canonical.




Janus Dam Nielsen

Research and Innovation Specialist, PhD.
CENTRE FOR IT-SECURITY

THE ALEXANDRA INSTITUTE LTD.

T +45 42 22 93 56
E janus.niel...@alexandra.dk
W alexandra.dk




Re: [viff-devel] Optimizing preprocessing

2009-10-21 Thread Marcel Keller

Martin Geisler wrote:

Janus Dam Nielsen  writes:


Hi Marcel,

I am not opposed to your suggestion. However I would like to point out
that in VIFF you compute on shares and not field elements!


Well, we've actually made the outer runtime interfaces in such a way
that add, mul, xor, etc. accept both integers, FieldElements and
Shares. The methods then wrap their input as needed -- or they *don't*
wrap it if that leads to a short cut (e.g., constant multiplication).


I agree (see also my answer).


Computing directly on the field elements is hacking the abstractions
of VIFF. Computation on field elements or rather the representation of
a Share can be useful as an optimization, however this optimization
should be confined within applications or runtimes, and should not
progress over interface boundaries as I fear you are suggesting.


I think we are in agreement: public methods on the runtimes will keep
returning Shares. Methods used internally in runtimes can return other
things as needed. To me it sounds like a better API to require
preprocessing functions to return a list of Deferreds:

  [D(?), D(?), ...],

instead of a Deferred list of tuples containing Deferreds :-)

I think it will simplify the interface nicely, at least for consumers.
Using simpler types also leads to less memory usage which has a positive
effect on performance, as Marcel notes. So let's go for it.


So this makes 2 votes in favour of it and 1 against it. Maybe we should 
have a meeting to discuss it. What do you think?


Best regards,
Marcel


Re: [viff-devel] Optimizing preprocessing

2009-10-16 Thread Martin Geisler
Janus Dam Nielsen  writes:

> Hi Marcel,
>
> I am not opposed to your suggestion. However I would like to point out
> that in VIFF you compute on shares and not field elements!

Well, we've actually made the outer runtime interfaces in such a way
that add, mul, xor, etc. accept both integers, FieldElements and
Shares. The methods then wrap their input as needed -- or they *don't*
wrap it if that leads to a short cut (e.g., constant multiplication).

> Computing directly on the field elements is hacking the abstractions
> of VIFF. Computation on field elements or rather the representation of
> a Share can be useful as an optimization, however this optimization
> should be confined within applications or runtimes, and should not
> progress over interface boundaries as I fear you are suggesting.

I think we are in agreement: public methods on the runtimes will keep
returning Shares. Methods used internally in runtimes can return other
things as needed. To me it sounds like a better API to require
preprocessing functions to return a list of Deferreds:

  [D(?), D(?), ...],

instead of a Deferred list of tuples containing Deferreds :-)

I think it will simplify the interface nicely, at least for consumers.
Using simpler types also leads to less memory usage which has a positive
effect on performance, as Marcel notes. So let's go for it.

-- 
Martin Geisler

VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multiparty Computation) to Python. See: http://viff.dk/.


Re: [viff-devel] Optimizing preprocessing

2009-10-09 Thread Marcel Keller

Hi Janus,

I am not opposed to your suggestion. However I would like to point out
that in VIFF you compute on shares and not field elements! Computing
directly on the field elements is hacking the abstractions of VIFF.


I don't think so. For as long as I have worked with VIFF, it has treated
field elements as if they were shares with threshold 0. You can multiply
and add field elements and shares. Furthermore, if you open two shares
and multiply them, VIFF doesn't multiply locally, but executes the
multiplication for shares of higher threshold. You have to do the
following to achieve local multiplication:


a = open(share_a)                      # open both shares...
b = open(share_b)
c = gather_shares([a, b])              # ...wait until both opened values are available...
c.addCallback(lambda (a, b): a * b)    # ...and multiply the plain field elements locally

Therefore, I think that VIFF computes on field elements as well as on 
shares almost equally.


Computation on field elements or rather the representation of a Share 
can be useful as an optimization, however this optimization should be 
confined within applications or runtimes, and should not progress over 
interface boundaries as I fear you are suggesting.


I believe that if you implemented an OrlandiTuple class and overloaded
its operators with the current _plus, _minus, and _const_mul functions,
the OrlandiRuntime could reuse the add, sub, and lin_comb functions from
PassiveRuntime and maybe even the mul function from BasicActiveRuntime.
The OrlandiTuple would then be something equivalent to a FieldElement
in the older runtimes.
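
Roughly what I have in mind, only as a sketch: the attribute layout and
the way _plus, _minus and _const_mul are called below are placeholders
for whatever the Orlandi runtime actually uses today.

  # Hypothetical sketch, not the actual Orlandi runtime code.  It only
  # illustrates wrapping the Orlandi share representation in an object
  # with FieldElement-like operators, delegating to the existing helpers.
  class OrlandiTuple(object):
      def __init__(self, parts):
          self.parts = parts              # e.g. (share, randomness, commitment)

      def __add__(self, other):
          return OrlandiTuple(_plus(self.parts, other.parts))

      def __sub__(self, other):
          return OrlandiTuple(_minus(self.parts, other.parts))

      def __rmul__(self, constant):       # multiplication by a public constant
          return OrlandiTuple(_const_mul(constant, self.parts))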


Best regards,
Marcel



Re: [viff-devel] Optimizing preprocessing

2009-10-09 Thread Janus Dam Nielsen

Hi Marcel,

I am not opposed to your suggestion. However I would like to point out
that in VIFF you compute on shares and not field elements! Computing
directly on the field elements is hacking the abstractions of VIFF.
Computation on field elements or rather the representation of a Share
can be useful as an optimization, however this optimization should be
confined within applications or runtimes, and should not progress over
interface boundaries as I fear you are suggesting.



On 08/10/2009, at 20.11, Marcel Keller wrote:


Dear friends of VIFF,

I have a proposal to optimize preprocessing in VIFF, which I would
like to put up for discussion.


Notation:
- D(x): a Deferred object whose callback function will be called with x
- S(x): same for a Share object
- F: a FieldElement object
- [x, ...]: a Python list
- (x, ...): a Python tuple

Current situation:
The two generate_triples() functions in the active runtimes return
n, D([(S(F), S(F), S(F)), ...]),
where n denotes the length of the list. Runtime.preprocess() puts one
D(S(F), S(F), S(F)) per program counter in the preprocessing pool and
uses util.deep_wait() to return a Deferred whose callback is called
when all field elements are available.


The get_triple() functions return
D(S(F), S(F), S(F)),
either taken from the preprocessing pool, or by calling
generate_triples() and adding an extra callback to extract the first
triple.


My proposal:
The generate_triples() functions return
[D([F, F, F]), ...],
and Runtime.preprocess() puts one [F, F, F] per program counter in the
preprocessing pool. There is no need for deep_wait, we can just use
gatherResults on the Deferreds returned by the generator functions. The
generator functions can use gatherResults to get D([F, F, F]) from
[S(F), S(F), S(F)].


The get_triple() functions return
[F, F, F], True            if there is a triple in the pool, and
[S(F), S(F), S(F)], False  otherwise.
For the multiplication in BasicActiveRuntime it doesn't make a
difference whether FieldElements or Shares are returned. Future
applications which require Shares can wrap the FieldElements in Shares
if the Boolean is True.


Benchmarks:
- 1 multiplications in parallel with active security using PRSS
 - 4% faster preprocessing
 - 5% faster online operation
 - 50 MB less memory used
- 10 AES blocks in parallel using masked exponentiation with active
security using PRSS
  - 10% faster preprocessing
 - 10% faster online operation
 - 140 MB less memory used

Further considerations:
- I don't see any problem in replacing FieldElement with anything
that does not contain Deferreds in any form.
- Probably, it would also be possible to leave the generator
functions as they are and use something similar to deep_wait().
However, I consider that an overhead.
- One could also always wrap the FieldElements in Shares, but this
would again be an overhead.


Best regards,
Marcel




Janus Dam Nielsen

Research and Innovation Specialist, PhD.
CENTRE FOR IT-SECURITY

THE ALEXANDRA INSTITUTE LTD.

T +45 42 22 93 56
E janus.niel...@alexandra.dk
W alexandra.dk




[viff-devel] Optimizing preprocessing

2009-10-08 Thread Marcel Keller

Dear friends of VIFF,

I have a proposal to optimize preprocessing in VIFF, which I would like
to put up for discussion.


Notation:
- D(x): a Deferred object whose callback function will be called with x
- S(x): same for a Share object
- F: a FieldElement object
- [x, ...]: a Python list
- (x, ...): a Python tuple

Current situation:
The two generate_triples() functions in the active runtimes return
n, D([(S(F), S(F), S(F)), ...]),
where n denotes the length of the list. Runtime.preprocess() puts one 
D(S(F), S(F), S(F)) per program counter in the preprocessing pool and 
uses util.deep_wait() to return a Deferred whose callback is called when 
all field elements are available.


The get_triple() functions return
D(S(F), S(F), S(F)),
either taken from the preprocessing pool, or by calling 
generate_triples() and adding an extra callback to extract the first triple.


My proposal:
The generate_triples() functions return
[D([F, F, F]), ...],
and Runtime.preprocess() puts one [F, F, F] per program counter in the 
preprocessing pool. There is no need for deep_wait, we can just use 
gatherResults on the Deferreds returned by the generator functions. The
generator functions can use gatherResults to get D([F, F, F]) from 
[S(F), S(F), S(F)].
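
To make the proposed shape concrete, here is a rough sketch. It is only
illustrative: the signature and the make_share_triple helper are
placeholders, not the real runtime code.

  # Illustrative sketch only; make_share_triple stands in for whatever the
  # runtime does today to produce one multiplication triple as three Shares.
  from twisted.internet.defer import gatherResults

  def generate_triples(self, field, count):
      result = []
      for _ in xrange(count):
          a, b, c = make_share_triple(self, field)   # three Shares, i.e. Deferreds
          # gatherResults turns [S(F), S(F), S(F)] into one D([F, F, F]).
          result.append(gatherResults([a, b, c]))
      return result                                  # [D([F, F, F]), ...]

  # Runtime.preprocess() can then simply call gatherResults(result) to get
  # a Deferred that fires when every triple in the batch is available.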


The get_triple() functions return
[F, F, F], True  if there is a triple in the pool, and
[S(F), S(F), S(F)], False  otherwise.
For the multiplication in BasicActiveRuntime it doesn't make a 
difference whether FieldElements or Shares are returned. Future 
applications which require Shares can wrap the FieldElements in Shares 
if the Boolean is True.
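
Again only a sketch of the intended contract; the pool lookup and the
fallback helper below are illustrative, not the actual implementation.

  # Illustrative sketch only; the pool handling and make_share_triple are
  # placeholders for the real preprocessing machinery.
  def get_triple(self, field):
      triples = self._pool.get(field, [])
      if triples:
          return triples.pop(), True                 # [F, F, F] from the pool
      a, b, c = make_share_triple(self, field)       # generate on the fly
      return [a, b, c], False                        # [S(F), S(F), S(F)]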


Benchmarks:
- 1 multiplications in parallel with active security using PRSS
  - 4% faster preprocessing
  - 5% faster online operation
  - 50 MB less memory used
- 10 AES blocks in parallel using masked exponentiation with active
security using PRSS
  - 10% faster preprocessing
  - 10% faster online operation
  - 140 MB less memory used

Further considerations:
- I don't see any problem in replacing FieldElement with anything that
does not contain Deferreds in any form.
- Probably, it would also be possible to leave the generator functions
as they are and use something similar to deep_wait(). However, I
consider that an overhead.
- One could also always wrap the FieldElements in Shares, but this would 
again be an overhead.


Best regards,
Marcel
___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk