Re: [viff-devel] OrlandiRuntime implementation

2009-11-04 Thread Marcel Keller

Claudio Orlandi wrote:

On Wed, Nov 4, 2009 at 5:46 AM, Marcel Keller mkel...@cs.au.dk wrote:

Hi Claudio and Jesper,

In the code review of the OrlandiRuntime we found two points that we
want to discuss with you.

Step 3 of the triple generation protocol says: Coin-flip a subset
\fancy_T \subset \fancy_M of size \lambda(2d + 1)M. The current
implementation loops over the elements in \fancy_M and decides
fifty-fifty for every element whether it should be in \fancy_T until
there are enough elements in \fancy_T. However, I think this choice
should be biased according to the sizes of \fancy_M and \fancy_T, i.e.,
every element of \fancy_M should be put in \fancy_T with probability
\lambda(2d + 1)M / ((1 + \lambda)(2d + 1)M) = \lambda / (1 + \lambda).
Furthermore, I think the probability should be updated whenever
\fancy_M is processed completely and \fancy_T still is not big enough.
Maybe it would be easier to choose a random element of the remaining
ones in \fancy_M \lambda(2d + 1)M times instead.
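
A minimal sketch of that last variant (hypothetical names; in the real
protocol the random choices must of course come from the joint
coin-flip, not from a local RNG):

import random

def coinflip_test_set(triples, lam, d, M):
    # Pick exactly lam*(2d+1)*M triples for the test set T; the rest
    # stays in fancy_M \ T.  Sketch only: `random` stands in for the
    # jointly flipped coins.
    t_size = lam * (2 * d + 1) * M
    chosen = set(random.sample(range(len(triples)), t_size))
    test_set = [triples[i] for i in chosen]
    rest = [triples[i] for i in range(len(triples)) if i not in chosen]
    return test_set, rest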


So you could loop the elements of M and include them in T with
probability \lambda/(1+\lambda), but then you're not guaranteed that
you have enough triplets left in M.


Exactly, that's why I suggested the solution which in that case would 
restart with an adapted probability.
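
A sketch of that restarting variant (again with a local RNG standing in
for the joint coin-flip): on the first pass the inclusion probability
is t_size/len(pool) = \lambda/(1+\lambda), and it is adapted on every
further pass over the remaining elements.

import random

def biased_test_set(triples, t_size):
    # Assumes t_size <= len(triples), so the pool never runs dry.
    test_set, pool = [], list(triples)
    while len(test_set) < t_size:
        # still-needed elements over still-available elements
        p = float(t_size - len(test_set)) / len(pool)
        kept = []
        for t in pool:
            if len(test_set) < t_size and random.random() < p:
                test_set.append(t)
            else:
                kept.append(t)
        pool = kept  # restart over the remainder if T is not yet full
    return test_set, pool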


Choosing the right number of random elements from M and including them
in T is the right choice (or any solution that guarantees the size of
M \ T to be 2(2d+1)M) --- I don't know if the typo is in your email or
in Step 3, but the size of the set M should be (1+\lambda)2(2d+1)M ---
you're missing a 2.


I think the 2 is missing in your report and the code.


In step 6, the implementation generates a distributed random number in
Z_p to determine a partition where one element should be put if there
is still space there. I suggest using the randomness more efficiently
to save communication. E.g., if a random number in Z_p is smaller than
M^l with l \le log_M(p), one can extract l random numbers in Z_M. The
same method could probably be used in step 3 as well.
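
A sketch of the extraction (hypothetical helper; r is the jointly
generated number, M and l plain integers): if r >= M^l the value is
rejected and a fresh one is needed; otherwise its l base-M digits are
independent and uniform in Z_M.

def extract_digits(r, M, l):
    # r is uniform in Z_p, with M**l <= p.
    if r >= M ** l:
        return None  # rejected; caller must generate a fresh r
    digits = []
    for _ in range(l):
        digits.append(r % M)  # each digit is uniform in Z_M
        r //= M
    return digits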


Right. Actually if you want to save communication here you can also
use a PRG using the distributed random number as the seed.
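
A sketch of the PRG variant (the thread does not fix a concrete PRG;
SHA-256 in counter mode is just one possible choice, assumed here):

import hashlib

def prg_mod_M(seed, M, count):
    # Expand one jointly flipped seed into `count` values in Z_M by
    # hashing seed || counter.  Note that reducing mod M is slightly
    # biased unless rejection sampling is added on top.
    out = []
    for i in range(count):
        h = hashlib.sha256(("%d,%d" % (seed, i)).encode()).digest()
        out.append(int.from_bytes(h, "big") % M)
    return out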


Best regards,
Marcel



If you have time, there are a lot of things that need to be done here.
I already told Janus about some of them, but he said he can't work on
them. Here is something nice we should do to reduce the online time by
a factor of (at least) 3:

In the protocol there are Pedersen commitments with three base elements
g, h_1, h_2. To commit to x using randomness r_1, r_2 one computes
g^x h_1^r_1 h_2^r_2. The protocol currently does it in the following way:
1) compute a = g^x
2) compute b = h_1^r_1
3) compute c = h_2^r_2
4) compute d = a*b*c

The complexity is dominated by the three exponentiations, which one
implements using some ladder (square-and-multiply).

There is no need to do three exponentiations if one does something a bit smarter:
- precompute
g001 = g^0 h_1^0 h_2^1
g010 = g^0 h_1^1 h_2^0
...
g111 = g^1 h_1^1 h_2^1

Now, to compute g^x h_1^r_1 h_2^r_2 one does the ladder by looking
simultaneously at the i-th bits of x, r_1, r_2. The complexity of this
is just one ladder, so virtually the same as just one of the above
exponentiations.
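
A sketch of this single ladder, with modular arithmetic standing in for
the elliptic curve operations the real code uses:

def commit(g, h1, h2, x, r1, r2, p):
    # g^x * h1^r1 * h2^r2 mod p with one square-and-multiply ladder
    # over the 8 precomputed products g^b2 * h1^b1 * h2^b0.
    table = [pow(g, (b >> 2) & 1, p) * pow(h1, (b >> 1) & 1, p)
             * pow(h2, b & 1, p) % p for b in range(8)]
    acc = 1
    nbits = max(x.bit_length(), r1.bit_length(), r2.bit_length())
    for i in range(nbits - 1, -1, -1):
        acc = acc * acc % p  # one squaring per bit, as in one ladder
        b = ((x >> i & 1) << 2) | ((r1 >> i & 1) << 1) | (r2 >> i & 1)
        if b:
            acc = acc * table[b] % p  # at most one multiply per bit
    return acc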

One can go further and precompute more elements, for instance
g000
...
g333
And now you look at 2 bits each of x, r_1, r_2 per multiplication in
the square-and-multiply. This saves another factor 2 in time, but you
need to store a bigger table of size 8^2 = 64.
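
And a sketch of the windowed generalization (w is the window width in
bits; for w = 2 the table holds all 64 products g^e2 h_1^e1 h_2^e0 with
each e_i in {0,...,3}):

def commit_window(g, h1, h2, x, r1, r2, p, w=2):
    mask = (1 << w) - 1
    # 2**(3*w) precomputed products, one per combination of w-bit digits
    table = [pow(g, b >> 2 * w & mask, p) * pow(h1, b >> w & mask, p)
             * pow(h2, b & mask, p) % p for b in range(1 << 3 * w)]
    nbits = max(x.bit_length(), r1.bit_length(), r2.bit_length())
    acc = 1
    for j in range((nbits + w - 1) // w - 1, -1, -1):
        for _ in range(w):
            acc = acc * acc % p  # w squarings per window
        b = ((x >> j * w & mask) << 2 * w) \
            | ((r1 >> j * w & mask) << w) | (r2 >> j * w & mask)
        if b:
            acc = acc * table[b] % p  # one multiplication per window
    return acc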

What is the best strategy given our architecture? How many powers
should we precompute?


The commitments are computed by an external elliptic curve library, so I 
can't answer anything here. Janus should know more about it.


Regards,
Marcel

___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk


Re: [viff-devel] OrlandiRuntime implementation

2009-11-04 Thread Claudio Orlandi
On Wed, Nov 4, 2009 at 10:15 AM, Marcel Keller mkel...@cs.au.dk wrote:
 Claudio Orlandi wrote:

 On Wed, Nov 4, 2009 at 5:46 AM, Marcel Keller mkel...@cs.au.dk wrote:

 Hi Claudio and Jesper,

 In the code review of the OrlandiRuntime we found two points that we
 want to discuss with you.

 Step 3 of the triple generation protocol says: Coin-flip a subset
 \fancy_T \subset \fancy_M of size \lambda(2d + 1)M. The current
 implementation loops over the elements in \fancy_M and decides
 fifty-fifty for every element whether it should be in \fancy_T until
 there are enough elements in \fancy_T. However, I think this choice
 should be biased according to the sizes of \fancy_M and \fancy_T,
 i.e., every element of \fancy_M should be put in \fancy_T with
 probability \lambda(2d + 1)M / ((1 + \lambda)(2d + 1)M) = \lambda /
 (1 + \lambda). Furthermore, I think the probability should be updated
 whenever \fancy_M is processed completely and \fancy_T still is not
 big enough. Maybe it would be easier to choose a random element of the
 remaining ones in \fancy_M \lambda(2d + 1)M times instead.

 So you could loop the elements of M and include them in T with
 probability \lambda/(1+\lambda), but then you're not guaranteed that
 you have enough triplets left in M.

 Exactly, that's why I suggested the solution which in that case would
 restart with an adapted probability.

 Choosing the right number of random elements from M and including them
 in T is the right choice (or any solution that guarantees the size of
 M \ T to be 2(2d+1)M) --- I don't know if the typo is in your email or
 in Step 3, but the size of the set M should be (1+\lambda)2(2d+1)M ---
 you're missing a 2.

 I think the 2 is missing in your report and the code.


Wow. Then it really shouldn't work!
The factors are:
1+lambda : because in the test we check a fraction lambda/(1+lambda) of
the triplets
2 : because there is a test to check the correctness of the triplets
that burns one triplet to ensure the other one is good
2d+1 : because we do the Shamir secret-sharing multiplication
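
For concreteness, this accounting with arbitrary illustrative
parameters (a sketch; the ordering of the test set and TripleTest is
discussed further below):

lam, d, M = 2, 1, 5                         # illustrative only
size_M = (1 + lam) * 2 * (2 * d + 1) * M    # |fancy_M| = 90
size_T = lam * 2 * (2 * d + 1) * M          # test set T checks 60
pairs = size_M - size_T                     # 2(2d+1)M = 30 remain
after_triple_test = pairs // 2              # (2d+1)M = 15 survive
final = after_triple_test // (2 * d + 1)    # M = 5 output triples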

I guess at some point I should also give a look at the code...

 In step 6, the implementation generates a distributed random number
 in Z_p to determine a partition where one element should be put if
 there is still space there. I suggest using the randomness more
 efficiently to save communication. E.g., if a random number in Z_p is
 smaller than M^l with l \le log_M(p), one can extract l random numbers
 in Z_M. The same method could probably be used in step 3 as well.

 Right. Actually if you want to save communication here you can also
 use a PRG using the distributed random number as the seed.

 Best regards,
 Marcel


 If you have time, there are a lot of things that need to be done here.
 I already told Janus about some of them, but he said he can't work on
 them. Here is something nice we should do to reduce the online time by
 a factor of (at least) 3:

 In the protocol there are Pedersen commitments with three base
 elements g, h_1, h_2. To commit to x using randomness r_1, r_2 one
 computes g^x h_1^r_1 h_2^r_2. The protocol currently does it in the
 following way:
 1) compute a = g^x
 2) compute b = h_1^r_1
 3) compute c = h_2^r_2
 4) compute d = a*b*c

 The complexity is dominated by the three exponentiations, which one
 implements using some ladder (square-and-multiply).

 There is no need to do three exponentiations if one does something a
 bit smarter:
 - precompute
 g001 = g^0 h_1^0 h_2^1
 g010 = g^0 h_1^1 h_2^0
 ...
 g111 = g^1 h_1^1 h_2^1

 Now, to compute g^x h_1^r_1 h_2^r_2 one does the ladder by looking
 simultaneously at the i-th bits of x, r_1, r_2. The complexity of this
 is just one ladder, so virtually the same as just one of the above
 exponentiations.

 One can go further and precompute more elements, for instance
 g000
 ...
 g333
 And now you look at 2 bits each of x, r_1, r_2 per multiplication in
 the square-and-multiply. This saves another factor 2 in time, but you
 need to store a bigger table of size 8^2 = 64.

 What is the best strategy given our architecture? How many powers
 should we precompute?

 The commitments are computed by an external elliptic curve library, so I
 can't answer anything here. Janus should know more about it.


Actually, I think that the elliptic curve library offers methods for
square, multiply, and scalar multiplication, not for the commitments as
an object... I seem to remember that Janus implemented them.

 Regards,
 Marcel


___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk


Re: [viff-devel] OrlandiRuntime implementation

2009-11-04 Thread Marcel Keller

Claudio Orlandi wrote:

On Wed, Nov 4, 2009 at 10:15 AM, Marcel Keller mkel...@cs.au.dk wrote:

Claudio Orlandi wrote:

On Wed, Nov 4, 2009 at 5:46 AM, Marcel Keller mkel...@cs.au.dk wrote:

Hi Claudio and Jesper,

In the code review of the OrlandiRuntime we found two points that we
want to discuss with you.

Step 3 of the triple generation protocol says: Coin-flip a subset
\fancy_T \subset \fancy_M of size \lambda(2d + 1)M. The current
implementation loops over the elements in \fancy_M and decides
fifty-fifty for every element whether it should be in \fancy_T until
there are enough elements in \fancy_T. However, I think this choice
should be biased according to the sizes of \fancy_M and \fancy_T, i.e.,
every element of \fancy_M should be put in \fancy_T with probability
\lambda(2d + 1)M / ((1 + \lambda)(2d + 1)M) = \lambda / (1 + \lambda).
Furthermore, I think the probability should be updated whenever
\fancy_M is processed completely and \fancy_T still is not big enough.
Maybe it would be easier to choose a random element of the remaining
ones in \fancy_M \lambda(2d + 1)M times instead.

So you could loop the elements of M and include them in T with
probability \lambda/(1+\lambda), but then you're not guaranteed that
you have enough triplets left in M.

Exactly, that's why I suggested the solution which in that case would
restart with an adapted probability.


Choosing the right number of random elements from M and including them
in T is the right choice (or any solution that guarantees the size of
M \ T to be 2(2d+1)M) --- I don't know if the typo is in your email or
in Step 3, but the size of the set M should be (1+\lambda)2(2d+1)M ---
you're missing a 2.

I think the 2 is missing in your report and the code.



Wow. Then it really shouldn't work!
The factors are:
1+lambda : because in the test we check a fraction lambda/(1+lambda) of
the triplets
2 : because there is a test to check the correctness of the triplets
that burns one triplet to ensure the other one is good
2d+1 : because we do the Shamir secret-sharing multiplication


Isn't the burning already done in TripleTest, so you really have just 
(lambda+1)(2d+1)M triples when doing coin-flipping for the test set?



I guess at some point I should also give a look at the code...


In step 6, the implementation generates a distributed random number in
Z_p to determine a partition where one element should be put if there
is still space there. I suggest using the randomness more efficiently
to save communication. E.g., if a random number in Z_p is smaller than
M^l with l \le log_M(p), one can extract l random numbers in Z_M. The
same method could probably be used in step 3 as well.

Right. Actually if you want to save communication here you can also
use a PRG using the distributed random number as the seed.


Best regards,
Marcel


If you have time, there are a lot of things that need to be done here.
I already told Janus about some of them, but he said he can't work on
them. Here is something nice we should do to reduce the online time by
a factor of (at least) 3:

In the protocol there are Pedersen commitments with three base elements
g, h_1, h_2. To commit to x using randomness r_1, r_2 one computes
g^x h_1^r_1 h_2^r_2. The protocol currently does it in the following way:
1) compute a = g^x
2) compute b = h_1^r_1
3) compute c = h_2^r_2
4) compute d = a*b*c

The complexity is dominated by the three exponentiations, which one
implements using some ladder (square-and-multiply).

There is no need to do three exponentiations if one does something a
bit smarter:
- precompute
g001 = g^0 h_1^0 h_2^1
g010 = g^0 h_1^1 h_2^0
...
g111 = g^1 h_1^1 h_2^1

Now, to compute g^x h_1^r_1 h_2^r_2 one does the ladder by looking
simultaneously at the i-th bits of x, r_1, r_2. The complexity of this
is just one ladder, so virtually the same as just one of the above
exponentiations.

One can go further and precompute more elements, for instance
g000
...
g333
And now you look at 2 bits each of x, r_1, r_2 per multiplication in
the square-and-multiply. This saves another factor 2 in time, but you
need to store a bigger table of size 8^2 = 64.

What is the best strategy given our architecture? How many powers
should we precompute?

The commitments are computed by an external elliptic curve library, so I
can't answer anything here. Janus should know more about it.



Actually, I think that the elliptic curve library offers methods for
square, multiply, and scalar multiplication, not for the commitments as
an object... I seem to remember that Janus implemented them.


In any case, the commitments are completely implemented in C and I 
didn't have a look at that code.


___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk


Re: [viff-devel] OrlandiRuntime implementation

2009-11-04 Thread Claudio Orlandi
On Wed, Nov 4, 2009 at 3:18 PM, Marcel Keller mkel...@cs.au.dk wrote:
 Claudio Orlandi wrote:

 On Wed, Nov 4, 2009 at 10:15 AM, Marcel Keller mkel...@cs.au.dk wrote:

 Claudio Orlandi wrote:

 On Wed, Nov 4, 2009 at 5:46 AM, Marcel Keller mkel...@cs.au.dk wrote:

 Hi Claudio and Jesper,

 In the code review of the OrlandiRuntime we found two points that we
 want to discuss with you.

 Step 3 of the triple generation protocol says: Coin-flip a subset
 \fancy_T \subset \fancy_M of size \lambda(2d + 1)M. The current
 implementation loops over the elements in \fancy_M and decides
 fifty-fifty for every element whether it should be in \fancy_T until
 there are enough elements in \fancy_T. However, I think this choice
 should be biased according to the sizes of \fancy_M and \fancy_T,
 i.e., every element of \fancy_M should be put in \fancy_T with
 probability \lambda(2d + 1)M / ((1 + \lambda)(2d + 1)M) = \lambda /
 (1 + \lambda). Furthermore, I think the probability should be updated
 whenever \fancy_M is processed completely and \fancy_T still is not
 big enough. Maybe it would be easier to choose a random element of the
 remaining ones in \fancy_M \lambda(2d + 1)M times instead.

 So you could loop the elements of M and include them in T with
 probability \lambda/(1+\lambda), but then you're not guaranteed that
 you have enough triplets left in M.

 Exactly, that's why I suggested the solution which in that case would
 restart with an adapted probability.

 Choosing the right number of random elements from M and including them
 in T is the right choice (or any solution that guarantees the size of
 M \ T to be 2(2d+1)M) --- I don't know if the typo is in your email or
 in Step 3, but the size of the set M should be (1+\lambda)2(2d+1)M ---
 you're missing a 2.

 I think the 2 is missing in your report and the code.


 Wow. Then it really shouldn't work!
 The factors are:
 1+lambda : because in the test we check a fraction lambda/(1+lambda)
 of the triplets
 2 : because there is a test to check the correctness of the triplets
 that burns one triplet to ensure the other one is good
 2d+1 : because we do the Shamir secret-sharing multiplication

 Isn't the burning already done in TripleTest, so you really have just
 (lambda+1)(2d+1)M triples when doing coin-flipping for the test set?


Oh that's fine. It would also be fine to do the test set first and the
TripleTest after. Sorry for the confusion.

 I guess at some point I should also give a look at the code...

 In step 6, the implementation generates a distributed random number
 in Z_p to determine a partition where one element should be put if
 there is still space there. I suggest using the randomness more
 efficiently to save communication. E.g., if a random number in Z_p is
 smaller than M^l with l \le log_M(p), one can extract l random numbers
 in Z_M. The same method could probably be used in step 3 as well.

 Right. Actually if you want to save communication here you can also
 use a PRG using the distributed random number as the seed.

 Best regards,
 Marcel

 If you have time, there are a lot of things that need to be done here.
 I already told Janus about some of them, but he said he can't work on
 them. Here is something nice we should do to reduce the online time by
 a factor of (at least) 3:

 In the protocol there are Pedersen commitments with three base
 elements g, h_1, h_2. To commit to x using randomness r_1, r_2 one
 computes g^x h_1^r_1 h_2^r_2. The protocol currently does it in the
 following way:
 1) compute a = g^x
 2) compute b = h_1^r_1
 3) compute c = h_2^r_2
 4) compute d = a*b*c

 The complexity is dominated by the three exponentiations, which one
 implements using some ladder (square-and-multiply).

 There is no need to do three exponentiations if one does something a
 bit smarter:
 - precompute
 g001 = g^0 h_1^0 h_2^1
 g010 = g^0 h_1^1 h_2^0
 ...
 g111 = g^1 h_1^1 h_2^1

 Now, to compute g^x h_1^r_1 h_2^r_2 one does the ladder by looking
 simultaneously at the i-th bits of x, r_1, r_2. The complexity of this
 is just one ladder, so virtually the same as just one of the above
 exponentiations.

 One can go further and precompute more elements, for instance
 g000
 ...
 g333
 And now you look at 2 bits each of x, r_1, r_2 per multiplication in
 the square-and-multiply. This saves another factor 2 in time, but you
 need to store a bigger table of size 8^2 = 64.

 What is the best strategy given our architecture? How many powers
 should we precompute?

 The commitments are computed by an external elliptic curve library, so I
 can't answer anything here. Janus should know more about it.


 Actually, I think that the elliptic curve library offers methods for
 square, multiply, and scalar multiplication, not for the commitments
 as an object... I seem to remember that Janus implemented them.

 In any case, the commitments are completely implemented in C and I didn't
 have a look at that code.


___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk