Breaking out the discussion started by Suneel, I see opportunities for a
bit of judicious refactoring of the codebase.

I'm no expert on Beam etc., so I'll stay out of that decision, other
than to say I agree that it is probably a bad idea to create submodules
too early.

At the moment I'm simply trying to code up the "hello world" of the Pirk
APIs, and I'm struggling.  I'll happily admit that I don't have the
background here to make it simple, but as I wander around the code under
main/java I see
 - core implementation code (e.g. Querier, Response, etc)
 - performance / test classes (e.g. PaillierBenchmark)
 - examples, drivers, CLIs (e.g. QuerierDriver)
 - providers for Hadoop, standalone, Spark

I'm left somewhat unclear on how to strip this down to the library of
types I need to interact with in order to use Pirk programmatically.  I
think I'm getting far more in the target JAR than I need, and that the
core types are not yet finessed to offer a usable API -- or am I being
unfair?

Does it make sense to move some of this CLI / test material out of the
way?  Should the providers be a bit more pluggable rather than hard-coded?
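On the pluggability point, one common Java approach is a small service
provider interface discovered via java.util.ServiceLoader, so that the
Hadoop/Spark/standalone backends could live in separate JARs rather than
being hard-coded.  A minimal sketch follows -- note that
ResponderProvider, platformName, and computeResponse are hypothetical
names for illustration, not Pirk's actual API:

```java
import java.util.ServiceLoader;

// Hypothetical SPI for illustration only -- not Pirk's actual API.
// Each backend (hadoop, spark, standalone, ...) would implement this.
interface ResponderProvider {
    String platformName();      // e.g. "hadoop", "spark", "standalone"
    void computeResponse();     // run the PIR response computation
}

// A built-in implementation, used here as the fallback.
class StandaloneProvider implements ResponderProvider {
    public String platformName() { return "standalone"; }
    public void computeResponse() {
        System.out.println("Running response computation in-process");
    }
}

public class ProviderLookup {
    // Look up a provider by name.  External JARs would register
    // implementations via META-INF/services/...ResponderProvider,
    // and ServiceLoader picks them up from the classpath.
    static ResponderProvider find(String platform) {
        for (ResponderProvider p : ServiceLoader.load(ResponderProvider.class)) {
            if (p.platformName().equals(platform)) {
                return p;
            }
        }
        // Nothing registered on the classpath: fall back to standalone.
        return new StandaloneProvider();
    }

    public static void main(String[] args) {
        // No provider JARs are on the classpath in this sketch, so the
        // lookup falls back to the standalone implementation.
        ResponderProvider p = find("spark");
        System.out.println("Selected provider: " + p.platformName());
        p.computeResponse();
    }
}
```

With something like this, the core JAR would depend only on the
interface, and each heavyweight backend could be an optional dependency.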

I will continue with my quest, and will raise usage questions here as I
go if people are prepared to tolerate a PIR newbie!

Regards,
Tim
