Hi Tim,

Good points.
Yes, there are currently no examples included in the codebase (other than in a roundabout way via the tests, which doesn't count), and the website doesn't have a step-by-step walkthrough of how to get up and running from a user perspective. We can certainly add these.

In terms of what you can look at right now to get going: take a look at the performQuery() method of org.apache.pirk.test.distributed.testsuite.DistTestSuite -- it walks you through the basic steps.

What are you thinking in terms of making the providers more pluggable? Perhaps a Responder/Querier core plus Responder/Querier modules/providers? Right now, the 'providers' fall under the algorithm, i.e., algorithm -> provider: org.apache.pirk.responder.wideskies.spark and org.apache.pirk.responder.wideskies.mapreduce sit under 'wideskies'. This could be changed to provider -> algorithm, so that we would have a module for Spark, and all algorithm implementations for Spark would fall under the Spark module (and leverage the core). Thoughts?

Agree with doing some judicious refactoring of the codebase...

Thanks!

Ellison Anne

On Mon, Jul 18, 2016 at 1:24 PM, Tim Ellison <[email protected]> wrote:

> Breaking out the discussion started by Suneel, I see opportunities for a
> bit of judicious refactoring of the codebase.
>
> I'm no expert on Beam etc., so I'm going to keep out of that decision,
> other than to say I agree that it is probably a bad idea to create
> submodules too early.
>
> At the moment I'm simply trying to code up the "hello world" of the Pirk
> APIs, and I'm struggling. I'll happily admit that I don't have the
> background here to make it simple, but as I wander around the code under
> main/java I see
> - core implementation code (e.g. Querier, Response, etc.)
> - performance/test classes (e.g. PaillierBenchmark)
> - examples, drivers, CLIs (e.g. QuerierDriver)
> - providers for Hadoop, standalone, Spark
>
> I'm left somewhat confused about stripping this down to a library of
> types that I need to interact with to programmatically use Pirk. I
> think I'm getting far more in the target JAR than I need, and that the
> core types are not yet finessed to offer a usable API -- or am I being
> unfair?
>
> Does it make sense to move some of this CLI / test material out of the
> way? Should the providers be a bit more pluggable rather than hard-coded?
>
> I will continue with my quest, and will raise usage questions here as I
> go if people are prepared to tolerate a PIR newbie!
>
> Regards,
> Tim
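
For a quick picture of the basic steps that performQuery() walks through, here is a rough sketch. The types and method names below are placeholders defined in the snippet itself -- they are not the actual Pirk classes or signatures, just the shape of the querier-encrypt / responder-compute / querier-decrypt flow:

    // Rough sketch only: placeholder types, not the real Pirk API.
    // Flow: (1) the querier encrypts its selectors, (2) the responder runs
    // the PIR computation over the data without learning the selectors,
    // (3) the querier decrypts the response locally.
    import java.util.List;

    interface EncryptedQuery {}        // opaque query sent to the responder
    interface EncryptedResponse {}     // opaque result returned to the querier

    interface Querier {
        EncryptedQuery encrypt(List<String> selectors);
        List<String> decrypt(EncryptedResponse response);
    }

    interface Responder {
        EncryptedResponse respond(EncryptedQuery query, Iterable<String> records);
    }

    class HelloPirk {
        static List<String> run(Querier querier, Responder responder,
                                List<String> selectors, Iterable<String> records) {
            EncryptedQuery query = querier.encrypt(selectors);               // step 1
            EncryptedResponse response = responder.respond(query, records);  // step 2
            return querier.decrypt(response);                                // step 3
        }
    }

The real flow in Pirk involves more setup (key generation, schemas, configuration), but this is the skeleton to look for when reading performQuery().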

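On the pluggability point, one possible shape -- purely a hypothetical sketch, none of these types exist in the codebase today -- is a small provider SPI in the core, with each framework module (standalone, mapreduce, spark) contributing an implementation that is discovered at runtime, e.g. via java.util.ServiceLoader:

    // Hypothetical sketch of a pluggable responder SPI; names are illustrative only.
    import java.util.Properties;
    import java.util.ServiceLoader;

    interface ResponderProvider {
        String name();                                  // e.g. "standalone", "mapreduce", "spark"
        void run(Properties config) throws Exception;   // run the algorithm(s) this provider supports
    }

    final class ResponderLauncher {
        static void launch(String providerName, Properties config) throws Exception {
            // Each provider module registers itself via META-INF/services;
            // the core only depends on the SPI, not on Hadoop or Spark.
            for (ResponderProvider provider : ServiceLoader.load(ResponderProvider.class)) {
                if (provider.name().equalsIgnoreCase(providerName)) {
                    provider.run(config);
                    return;
                }
            }
            throw new IllegalArgumentException("No responder provider named: " + providerName);
        }
    }

With something like that in place, the provider -> algorithm layout follows naturally: a Spark module would hold the Spark provider plus its algorithm implementations, and the core would stay free of framework dependencies.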