### Re: [viff-devel] OrlandiRuntime implementation

On Wed, Nov 4, 2009 at 10:15 AM, Marcel Keller mkel...@cs.au.dk wrote:
Claudio Orlandi wrote:

On Wed, Nov 4, 2009 at 5:46 AM, Marcel Keller mkel...@cs.au.dk wrote:

Hi Claudio and Jesper,

In the code review of the OrlandiRuntime we found two points, we want to
discuss with you.

Step 3 of the triple generation protocol says: Coin-flip a subset \fancy_T
\subset \fancy_M of size \lambda(2d + 1)M. The current implementation loops
over the elements in \fancy_M and decides fifty-fifty for every element
whether it should be in \fancy_T, until there are enough elements in
\fancy_T. I however think that this choice should be biased according to the
sizes of \fancy_M and \fancy_T, i.e. every element of \fancy_M should be put
in \fancy_T with probability \lambda(2d + 1)M / ((1 + \lambda)(2d + 1)M) =
\lambda / (1 + \lambda). Furthermore, I think the probability should be
updated whenever \fancy_M is processed completely and \fancy_T still is not
big enough. Maybe it would be easier to choose \lambda(2d + 1)M times a
random element of the remaining ones in \fancy_M instead.
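Marcel's last suggestion can be sketched in a few lines of Python (the names are illustrative, not from the VIFF code; in the protocol the randomness would of course come from a distributed coin-flip rather than the local `random` module):

```python
import random

def choose_test_set(candidates, test_size, rng=random):
    """Pick exactly `test_size` random elements of M for the test set T,
    so that the size of M \\ T is guaranteed, unlike with per-element
    fifty-fifty decisions."""
    chosen = set(rng.sample(range(len(candidates)), test_size))
    T = [candidates[i] for i in chosen]
    rest = [candidates[i] for i in range(len(candidates)) if i not in chosen]
    return T, rest
```

This guarantees |T| = \lambda(2d + 1)M and leaves the remaining triples untouched.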

So you could loop over the elements of M and include them in T with
probability \lambda/(1+\lambda), but then you're not guaranteed that
you have enough triplets left in M.

Exactly, that's why I suggested the solution which in that case would

Choosing the right number of

random elements from M and including them in T is the right choice (or
any solution that guarantees the size of M \ T to be 2(2d+1)M). --- I
don't know if the typo is in your email or in Step 3, but the size of
the set M should be (1+\lambda)2(2d+1)M --- you're missing a 2.

I think the 2 is missing in your report and the code.

Wow. Then it really shouldn't work!
The factors are:
1+lambda : because in the test we check a fraction lambda/(1+lambda) of
the triplets
2 : because there is a test to check the correctness of the triplets
that burns one triplet to ensure the other one is good
2d+1 : because we do the Shamir secret sharing multiplication

I guess at some point I should also give a look at the code...

In step 6, the implementation generates a distributed random number in Z_p
to determine a partition where one element should be put if there is still
space there. I suggest using the randomness more efficiently to save
communication. E.g., if a random number in Z_p is smaller than M^l with l
\le log_M(p), one can extract l random numbers in Z_M. The same method
could probably be used in step 3 as well.
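The extraction trick above, sketched in Python (assuming r is a fresh shared random number in Z_p; the function name is illustrative):

```python
def extract_base_M_digits(r, M, l, p):
    """If r < M**l (with M**l <= p), the base-M digits of r are l
    independent uniform values in Z_M.  Otherwise the parties must retry
    with fresh randomness, which happens with probability (p - M**l) / p."""
    assert M ** l <= p  # i.e. l <= log_M(p)
    if r >= M ** l:
        return None  # retry with a new distributed random number
    digits = []
    for _ in range(l):
        digits.append(r % M)
        r //= M
    return digits
```

One shared number in Z_p thus yields up to l values in Z_M instead of one.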

Right. Actually if you want to save communication here you can also
use a PRG using the distributed random number as the seed.

Best regards,
Marcel

If you have time, there are a lot of things that need to be done here.
Janus already knows some of them, but said he can't work on them. Here
is something nice we should do to reduce the online time by a factor of
(at least) 3:

In the protocol there are Pedersen commitments with three base elements
g, h_1, h_2. To commit to x using randomness r_1, r_2 one computes
g^x h_1^r_1 h_2^r_2. The protocol currently does it in the following way:
1) compute a = g^x
2) compute b = h_1^r_1
3) compute c = h_2^r_2
4) compute d = a*b*c

The complexity is dominated by the three exponentiations, which one
implements using some ladder (square and multiply).

There is no need to do three exponentiations if one does something a bit
smarter:
- precompute
g001 = g^0 h_1^0 h_2^1
g010 = g^0 h_1^1 h_2^0
...
g111 = g^1 h_1^1 h_2^1

Now, to compute g^x h_1^r_1 h_2^r_2 one does the ladder by looking
simultaneously at the i-th bits of x, r_1, r_2. The complexity of this
is just one ladder, so virtually the same as just one of the above
exponentiations.

One can go further and precompute more elements, for instance
g000
...
g333
Now you look at 2 bits of each of x, r_1, r_2 for every squaring in the
square-and-multiply. This saves another factor of 2 in time, but you need
to store a bigger table of size 4^3 = 64.
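The one-bit-per-step version of this trick (simultaneous exponentiation, often attributed to Straus/Shamir) can be sketched as follows, with multiplication modulo a small prime standing in for the elliptic curve group; all names and parameters are illustrative:

```python
Q = 1009  # toy group Z_Q^*; the real code would use the curve group

def mul(a, b):
    return a * b % Q

def precompute(g, h1, h2):
    """Table of g^b0 * h1^b1 * h2^b2 for all bit triples (b0, b1, b2)."""
    table = {}
    for b0 in (0, 1):
        for b1 in (0, 1):
            for b2 in (0, 1):
                v = 1
                for base, bit in ((g, b0), (h1, b1), (h2, b2)):
                    if bit:
                        v = mul(v, base)
                table[(b0, b1, b2)] = v
    return table

def commit(x, r1, r2, table, nbits):
    """g^x * h1^r1 * h2^r2 with a single square-and-multiply ladder,
    reading one bit of each of x, r1, r2 per step."""
    acc = 1
    for i in reversed(range(nbits)):
        acc = mul(acc, acc)  # one squaring per bit position
        bits = ((x >> i) & 1, (r1 >> i) & 1, (r2 >> i) & 1)
        if bits != (0, 0, 0):
            acc = mul(acc, table[bits])
    return acc
```

One can check it against three separate exponentiations: `commit(10, 20, 30, precompute(3, 5, 7), 6)` equals `pow(3, 10, Q) * pow(5, 20, Q) * pow(7, 30, Q) % Q`.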

What is the best strategy given our architecture? How many powers
should we precompute?

The commitments are computed by an external elliptic curve library, so I

Actually I think that the elliptic curve library offers methods for
squaring and multiplication, not for the commitments as an object... I
seem to remember that Janus implemented them.

Regards,
Marcel

___
viff-devel mailing list (http://viff.dk/)
viff-devel@viff.dk
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk



### Re: [viff-devel] OrlandiRuntime implementation

On Wed, Nov 4, 2009 at 3:18 PM, Marcel Keller mkel...@cs.au.dk wrote:
Claudio Orlandi schrieb:

[...]
Wow. Then it really shouldn't work!
The factors are:
1+lambda : because in the test we check a fraction lambda/(1+lambda) of
the triplets
2 : because there is a test to check the correctness of the triplets
that burns one triplet to ensure the other one is good
2d+1 : because we do the Shamir secret sharing multiplication

Isn't the burning already done in TripleTest, so you really have just
(lambda+1)(2d+1)M triples when doing coin-flipping for the test set?

Oh that's fine. It would also be fine to do the test set first and the
TripleTest after. Sorry for the confusion.

[...]
The commitments are computed by an external elliptic curve library, so I

Actually I think that the elliptic curve library offers methods for
squaring and multiplication, not for the commitments as an object... I
seem to remember that Janus implemented them.

In any case, the commitments are completely implemented in C and I didn't
have a look at that code.


### Re: [viff-devel] Orlandi preprocessing

Correct!

On Thu, Oct 22, 2009 at 11:17 AM, Marcel Keller mkel...@cs.au.dk wrote:
Hi Janus,

I remember you saying today that the preprocessing in the OrlandiRuntime is
more efficient per item the more items are requested. Is that correct? I ask
because in my optimizations, I limited the number of items preprocessed per
call in order to save memory. I would of course drop that limit because it
doesn't really affect the other cases.

Best regards,
Marcel



### Re: [viff-devel] Homomorphic encryption

Hi Marc,

Let me see if I understood the way you measured: it takes 496 msec on
average to do an encryption with your code, right?

Claudio

On Fri, Jul 10, 2009 at 10:18 AM, Marc Makkesmmak...@science.uva.nl wrote:
Hi Janus,

I think I've reached the stage where you can test my code, but it still
lacks some basic checks, is still prone to timing attacks, and is basically
the same as VIFF's current implementation, with some additional speedups.
Consequently, the code should be used for testing purposes only.

I'm achieving the following speeds on my Atom N270 (1.6 GHz), testing
with a key size of 2048 bits.

VIFF code:
--
Encrypting:
10 loops, best of 3: 4.42 sec per loop
Decrypting:
10 loops, best of 3: 925 msec per loop

My code:

Encrypting:
10 loops, best of 3: 496 msec per loop
Decrypting:
10 loops, best of 3: 143 msec per loop

For encryption it's almost a 9-fold speedup, and for decryption 6.5-fold,
with respect to the current implementation.
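The "10 loops, best of 3" figures look like output from Python's timeit module; a measurement of that shape can be reproduced as sketched below (the timed statement is a stand-in, not Marc's encryption code):

```python
import timeit

# Time a modular exponentiation as a stand-in for one encryption.
setup = "m = 2**61 - 1; x = 123456789"
stmt = "pow(x, 65537, m)"

# Best of 3 runs of 10 loops each, reported per loop, as timeit does.
per_loop = min(timeit.repeat(stmt, setup, repeat=3, number=10)) / 10
print("10 loops, best of 3: %.3g sec per loop" % per_loop)
```

Taking the best of several runs, as timeit does, reduces the influence of other processes on the machine.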

In the tarball you'll find a small makefile as well as a test.py file.
It shows the basic use of all functions. If you have any comments, issues
or questions, please let me know.

Happy testing,

-Marc


--
Claudio Orlandi

PhD student,
Department of Computer Science, Turing-223
Aarhus Universitet, Denmark
http://www.daimi.au.dk/~orlandi



### Re: [viff-devel] Speed of ElGamal encryption

Could everyone specify the size of the field and the size of the
secret keys used?
Otherwise it's quite hard to interpret the reported performance.

Regards,
Claudio

On Sun, Sep 21, 2008 at 4:59 PM, Adam Langley [EMAIL PROTECTED] wrote:
On Sun, Sep 21, 2008 at 3:23 AM, Martin Geisler [EMAIL PROTECTED] wrote:
Calling an ElGamal function in NaCl would be very cool and probably a bit
faster, since you won't have to do all the tuple packing and unpacking
that you do in the Python version.

NaCl has support for a primitive called a 'box'. The boxing function
takes these inputs:
* The message
* A nonce
* The recipient's public key
* The sender's private key

Note that requiring the sender's private key makes this different from
most public-key encryption functions. The unboxing function,
symmetrically, requires the sender's public key. (This boxing function
may be viewed as an encrypt+sign operation.)

need. In this case, the underlying primitive is not ElGamal, but
Diffie-Hellman. The two keys are combined with ECDH, and the nonce
(which both sides must know, but need not be secret) diversifies the
long-term shared key into a per-message key.
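A toy sketch of that structure, with Diffie-Hellman over Z_p^* standing in for ECDH and a SHA-256 output used as a one-time keystream; none of this is NaCl's actual API, cipher, or parameters:

```python
import hashlib
import secrets

P = 2**127 - 1  # toy DH modulus, demonstration only (not a secure choice)
G = 3

def keypair():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def box(msg, nonce, their_pk, my_sk):
    """Combine the two keys (DH), diversify with the nonce, encrypt."""
    shared = pow(their_pk, my_sk, P)          # ECDH in the real construction
    key = hashlib.sha256(shared.to_bytes(16, "big") + nonce).digest()
    assert len(msg) <= len(key)               # keystream limit of this toy
    return bytes(m ^ k for m, k in zip(msg, key))

unbox = box  # symmetric: the receiver uses its own sk and the sender's pk
```

Both sides derive the same `shared` value (g^(ab) mod P), so unboxing is the same XOR with the same per-message key.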

Based on timings for the x86-64 ECDH implementation, which I wrote,
4*10^6 operations should take about 880 seconds for a short message.

AGL





### Re: [viff-devel] [PATCH 0 of 4] Insecure ElGamal based two player runtime

It seems OK to me.
I just think that we can improve efficiency (and security) a bit if we
do it like this:

P1 computes:
- A1 = Enc(a1), B1 = Enc(b1)
- Send A1, B1 to P2
P2 computes:
- C1 = A1^b2 * B1^a2 * Enc(r)   // r random in [0, 2p^2 + 2^k], k a security parameter
- c2 = a2*b2 - (r mod p) mod p
- Send C1 to P1
P1 computes:
- c1 = Dec(C1) + a1*b1 mod p

Now c1 + c2 = c = ab = (a1+a2)(b1+b2)
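A runnable sketch of this protocol over an additively homomorphic scheme (a toy Paillier instance with tiny, insecure parameters; all names and the share values are illustrative):

```python
import random
from math import gcd

# --- toy Paillier (illustration only; real code needs large primes) ---
PP, QQ = 1789, 1861
N = PP * QQ
N2 = N * N
LAM = (PP - 1) * (QQ - 1) // gcd(PP - 1, QQ - 1)
MU = pow(LAM % N, -1, N)  # with g = N + 1, L(g^LAM mod N^2) = LAM mod N

def enc(m):
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(N + 1, m, N2) * pow(r, N, N2) % N2

def dec(c):
    return (pow(c, LAM, N2) - 1) // N * MU % N

# --- the two-party multiplication, shares mod a field prime p ---
p, k = 101, 8
a1, b1 = 12, 34          # P1's additive shares of a and b
a2, b2 = 56, 78          # P2's shares
# P1:
A1, B1 = enc(a1), enc(b1)
# P2 (r statistically masks a1*b2 + a2*b1):
r = random.randrange(0, 2 * p * p + 2 ** k)
C1 = pow(A1, b2, N2) * pow(B1, a2, N2) * enc(r) % N2
c2 = (a2 * b2 - r) % p
# P1:
c1 = (dec(C1) + a1 * b1) % p
assert (c1 + c2) % p == (a1 + a2) * (b1 + b2) % p
```

The range condition 2p^2 + 2^k < N ensures that a1*b2 + a2*b1 + r never wraps around mod N, so P1 decrypts the true integer value.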

Efficiency: in this way we reduce:
- Encryptions: from 6 to 3
- Decryptions: from 2 to 1
- Communication: from 4 to 3 ciphertexts
- Generated random numbers: from 2 to 1
- Key pairs needed: from 2 to 1.

Security:
- original: computational for both players.
- modified: computational for P1, statistical in k for P2.

Problems:
- it doesn't scale to n > 2
- it might be complicated to implement in VIFF, given that this protocol
is quite asymmetric while VIFF is highly symmetric.

Claudio

On Sun, Jun 29, 2008 at 2:15 PM, Martin Geisler [EMAIL PROTECTED] wrote:
Claudio Orlandi [EMAIL PROTECTED] writes:

Hi Claudio

if you are interested just in passive security for the 2 party case
you can implement the following protocol for multiplication.

You never commented on my implementation of your multiplication
protocol -- is there anything I should know security-wise before
including it in VIFF proper?

I did a simple benchmark with 10 multiplications, and a multiplication
takes about *3 seconds* when I run both players on the same laptop. I
have not yet tested on the DAIMI machines we normally compare with.

The updated code is here:

--
Martin Geisler




### Re: [viff-devel] Paillier based two player runtime

 Cool -- that sounds like a good opportunity to finally sit down and
create a slow-but-simple elliptic curve library for VIFF.

I suggest you use an existing library instead. Some of the algorithms are
quite involved...
I'm sure you can find good C/C++ stuff out there, and as far as I
understood, you can embed it into Python, right? There is a list here:
http://en.wikipedia.org/wiki/Elliptic_Curve_Cryptography but I have no
clue about what is good and what is not.
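The embedding Claudio mentions is straightforward with Python's ctypes; a sketch calling the C math library (libm) in place of a curve library, just to show the mechanism:

```python
import ctypes
import ctypes.util

# Load a shared C library and declare one function's signature;
# a C elliptic curve library would be wrapped the same way.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))
```

Declaring `argtypes`/`restype` makes ctypes convert Python floats to C doubles and back; without them the call would silently misinterpret arguments.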

Claudio



### [viff-devel] Elliptic curves

 From reading the Wikipedia page linked below it seems very simple to
implement. But if it should be fast, then a library is of course much
better than a home-grown Python version.

It's also about security. I would like an implementation that deals,
at least, with the most common side-channel attacks.
Other issues are which curve to use, which kind of point representation, ...

Yes, one can do that. But then people would need to install the
library on their machine to use VIFF. If the library provided binaries
for Windows then it's no problem, but for a smaller library there
might not be much Windows support.

So Michael used mostly pairing-friendly curves, which is really what we
don't want here. Anyway, he suggested having a look at the MIRACL
library. The problem with it is that it's not open source; it's free
only if you use it for fun...