Hi

> Okay, thanks for testing! But maybe something else changed since
> viff-0.4, so could you try revisions 563bcc37fe47 and f993269c2660?

True. Will do so later this week.

> It is quite easy in a Mercurial clone: if you have all local
> modifications committed, then just do a 'hg update 563bcc37fe47' to
> change the working copy to that revision. Do a 'hg update tip' to get
> back to the tip.
>


<snip memory profiling suggestion>

> There they trace the boot sequence and each process is a separate bar.
> It would be very cool if we could produce something similar!

I'm sure you can do that :-) I'm still working on getting to grips with
Python and getting the immediate things done. I'm still learning, so
bringing in more tools right now would just confuse me further :-)

<snip gigantic computation>

>> Saying 4 bytes/field-element (say using 32-bit integers as could be
>> done easily in C), and 400,000 field-elements (100,000 shares from
>> each of the three parties and 100,000 reconstructed secrets) this
>> amounts to around an astonishing 400kB (per party).
>
> Right, 100,000 32-bit values would be 400 KB. If you allocated them as
> integers using the array module, then they would take up close to 400 KB
> in Python. But each Share holds a FieldElement which holds two integers.
> So some of the massive overhead is simply due to more objects being
> tossed around. Then there is a per object malloc overhead, reference
> counters, list overallocation, etc... :-/

But from 400kB to more than 400MB? It's a factor of a thousand!
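
Just to convince myself where the blow-up comes from, here is a rough
back-of-the-envelope sketch (the Element class below is a made-up
stand-in, not the real FieldElement):

    import sys
    from array import array

    # Packed 32-bit integers: about 4 bytes per element plus a small header.
    packed = array('i', range(100000))
    print(sys.getsizeof(packed))      # roughly 400 KB

    # Made-up stand-in for a FieldElement holding two integers.
    class Element(object):
        def __init__(self, value, modulus):
            self.value = value
            self.modulus = modulus

    # One heap object per element: instance dict, refcount and malloc
    # overhead per object, plus a pointer per entry in the list itself.
    elements = [Element(i, 2**31 - 1) for i in range(100000)]
    print(sys.getsizeof(elements))                           # the pointer list alone
    print(sum(sys.getsizeof(e.__dict__) for e in elements))  # per-instance dicts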

> By the way, there is now an open ticket at Twisted about the memory
> consumption of Deferred:
>
>   http://twistedmatrix.com/trac/ticket/3245
>
> I made a patch and I hope they will eventually include it. Another guy
> reported that using __slots__ cut the memory usage in half.

Cool. Do you see a benefit from your patch, and how big is it?
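
For reference, I guess the __slots__ trick is basically this (the
attribute names are just my guess, not what Deferred actually stores):

    class Slotted(object):
        # With __slots__ each instance stores only these attributes and
        # drops the per-instance __dict__, which is where most of the
        # per-object memory goes.
        __slots__ = ('callbacks', 'result', 'called')

        def __init__(self):
            self.callbacks = []
            self.result = None
            self.called = False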

>> One positive thing here is that I've gotten an idea of how to
>> potentially remove /some/ of that overhead (honest-but-curious only).
>
> Have you tried the code I sent you? I posted it here as well:
>
>   http://article.gmane.org/gmane.comp.python.twisted/16519
>
> and got two comments that said that similar functionality already
> existed in Twisted :-) There is a DeferredQueue and DeferredSemaphore in
> twisted.internet.defer which could be used to limit concurrency.

Not yet, but I will. I'll also look into the Twisted solutions.
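
If I read the docs right, limiting concurrency with DeferredSemaphore
would look roughly like this (do_operation is just a placeholder for
whatever we schedule):

    from twisted.internet import defer

    def do_operation(i):
        # Placeholder for one multiplication/opening/etc.
        return defer.succeed(i)

    sem = defer.DeferredSemaphore(10)   # at most 10 operations in flight
    ds = [sem.run(do_operation, i) for i in range(100000)]
    all_done = defer.DeferredList(ds)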

>> The observation is that we don't need to store all shares and
>> reconstruct at the end. We can just keep track of the sum of the
>> shares (times their lambdas) we've received. Whenever a new one
>> arrives we add it to that accumulator. This might not make a big
>> difference in a 3-party setting, but if many (e.g. 10) parties are
>> involved, then this seems like a good idea to reduce memory
>> requirements.
>
> Do you mean that we can sort of recombine as we go? I can see how that
> makes sense if we know which recombination vector to use -- and if
> people are honest-but-curious (as you write) then that is no problem.

Exactly. And of course it doesn't work with active cheating, because the
parties need the individual shares in order to check for inconsistencies
later on.
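
Concretely, I imagine something like this (plain integers instead of
field elements, and all the names are made up):

    class Accumulator(object):
        def __init__(self, lambdas):
            self.lambdas = lambdas  # recombination coefficient per player
            self.total = 0          # running sum, would be a field element
            self.seen = 0

        def add_share(self, player, share):
            # Fold the share into the running sum instead of storing it.
            self.total += self.lambdas[player] * share
            self.seen += 1

        def result(self):
            assert self.seen == len(self.lambdas)
            return self.total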

> In the honest-but-curious setting we could even make it so that only t+1
> players ever send their shares. If there are S such subsets, then we
> could choose subset number "hash(program_counter) % S" for each
> operation -- that would distribute the load evenly over all parties.

How about just sending to the next t players, i.e. player P_i sends to
P_{i+1},...,P_{i+t}, where the additions are implicitly mod the number of
players? Since everyone always sends the same amount, this also
distributes the load evenly :-)
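
In other words something like (0-indexed players, just a sketch):

    def recipients(i, t, n):
        # Player i sends its share to the next t players, wrapping around.
        return [(i + k) % n for k in range(1, t + 1)]

    # With n = 7 and t = 2, player 6 sends to players 0 and 1.
    print(recipients(6, 2, 7))   # [0, 1]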

Alternatively, if certain players are more powerful or have better
connections, they could do more work, thereby taking the load off weaker
players. The exact details for a protocol run could be set up through
config files.
Just brainstorming a bit.


Regards,
  Tomas




