Well, there are many more or less interesting conclusions to draw from your
benchmark, Martin. Not surprisingly, matrix multiplication turns out to be
expensive. It is worth considering using a non-naive algorithm for this
multiplication, but I am not convinced there is very much to gain.
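For reference, the naive schoolbook algorithm performs n^3 multiplications for an n-by-n product; a non-naive alternative such as Strassen's lowers the exponent but usually only pays off for fairly large matrices. A minimal sketch of the naive product over plain Python lists (illustrative only, not VIFF's matrix.py):

```python
def matmul(a, b):
    """Naive O(n^3) product of two n x n matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

Each entry of the result costs n multiplications, which is what makes the `__mul__` calls show up so prominently in a profile.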
Mikkel Krøigård [EMAIL PROTECTED] writes:
Well, there are many more or less interesting conclusions to draw
from your benchmark, Martin. Not surprisingly, matrix multiplication
turns out to be expensive.
Hmm... I did see that there were a bunch of calls to __mul__ in
matrix.py, but I thought they came from the initialization of
Martin Geisler [EMAIL PROTECTED] writes:
Strangely the time for preprocessing has not improved... It stayed
at an average time of about *20 ms* for a multiplication triple both
before and after the change -- I don't understand that :-(
I do now! :-)
It turned out that the preprocessing was
Mikkel Krøigård [EMAIL PROTECTED] writes:
It only used 0.5 seconds of its own time -- the 21 seconds are the
total time spent in the child-calls made by inc_pc_wrapper. Since
it wraps all important functions, it's clear that the cumulative
time will be big:
ncalls  tottime  percall  cumtime  percall filename:lineno(function)
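The distinction is easy to reproduce with the stdlib profiler: a thin wrapper has a near-zero tottime of its own but inherits the cumtime of everything it calls. The wrapper below is only an illustration of that effect, not VIFF's inc_pc_wrapper:

```python
import cProfile
import io
import pstats

def expensive():
    # Stand-in for a costly child call.
    return sum(i * i for i in range(100_000))

def wrapper():
    # Thin wrapper: almost no time of its own, but its cumtime
    # includes everything expensive() does.
    return expensive()

pr = cProfile.Profile()
pr.enable()
wrapper()
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats()
report = buf.getvalue()
# Both functions appear in the report; wrapper's tottime is tiny
# while its cumtime covers the work done in expensive().
print(report)
```

Sorting by "cumulative" puts wrappers at the top, which is exactly why the 21 seconds attributed to inc_pc_wrapper look alarming but are mostly its children's time.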
Martin Geisler [EMAIL PROTECTED] writes:
I'm thinking that there might be some unfortunate overhead in the
preprocessing book-keeping. We should try running benchmark.py under
a profiler to see where the time is spent.
There is now support for a --profile flag, and running benchmark.py
with
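For readers following along, here is a generic sketch of how such a --profile switch can be wired up with the stdlib cProfile module; this is only an illustration under assumed names (main as the entry point), not the actual code in benchmark.py:

```python
import cProfile
import io
import pstats
import sys

def main():
    # Stand-in for the real benchmark work.
    return sum(range(1000))

if __name__ == "__main__":
    if "--profile" in sys.argv:
        pr = cProfile.Profile()
        result = pr.runcall(main)   # run main() under the profiler
        buf = io.StringIO()
        pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
        print(buf.getvalue())       # top 5 entries by cumulative time
    else:
        result = main()
```

Alternatively, any script can be profiled without code changes via `python -m cProfile -o out.prof script.py` and inspected with pstats afterwards.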
Hi everybody,
Ivan told me how we can implement pseudo-random zero-sharing over a
degree 2t polynomial. It even uses most of the stuff we already have,
so I went ahead and implemented it.
I then made a prss_generate_triple method which uses PRSS-based
methods instead of the single_ and
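For readers unfamiliar with zero-sharing: the goal is for the players to hold points on a random degree-2t polynomial whose constant term is 0, so the shares reconstruct to zero. The centralized toy sketch below is only an illustration of that invariant; in the real PRSS construction the coefficients are derived non-interactively from shared PRF keys, and the field, player count and threshold here are arbitrary choices:

```python
import random

P = 2**31 - 1  # a prime modulus (illustrative choice)

def zero_share(t, n, rng=random):
    """Deal shares of 0: a random degree-2t polynomial with f(0) = 0,
    evaluated at the points 1..n (one point per player)."""
    coeffs = [0] + [rng.randrange(P) for _ in range(2 * t)]  # c_0 = 0
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [f(i) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0) from all n shares at points 1..n."""
    xs = list(range(1, len(shares) + 1))
    secret = 0
    for i, (xi, yi) in enumerate(zip(xs, shares)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Multiply by the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = zero_share(t=1, n=5)  # degree 2t = 2, so any 3+ points suffice
print(reconstruct(shares))  # 0
```

The degree-2t point matters because multiplying two degree-t sharings yields a degree-2t sharing; adding a zero-sharing of the same degree re-randomizes it without changing the secret.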