The embedded lookup tables are computed on the Querier side because they are
specific to the (encrypted) query vectors. The 'embedded' part is key here -
if you compute them in the Responder, then you have to repeat that
computation each time you run the query, instead of just pulling the
(one-time) pre-computed lookup table from the Query object.
Note that in Spark, there is an option to compute the lookup table in a
distributed form (i.e., not embedded in the Query).
Thus, computation of lookup tables can happen on the Responder side (there
is such an implementation for Spark), but the embedded lookup table is a
one-time, Querier-side computation.
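As a rough sketch of the idea (class and parameter names below are hypothetical, not Pirk's actual API): for each encrypted query element, the Querier can precompute every power element^b mod N^2 up to the data partition size, so the Responder replaces repeated modPow calls with table lookups:

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a Querier-side embedded exponent table.
// For one encrypted query element, precompute element^b mod N^2 for
// b in [0, 2^dataPartitionBitSize), so the Responder only does lookups.
public class EmbeddedExpTableSketch {
    public static Map<Integer, BigInteger> buildTable(BigInteger element,
                                                      BigInteger nSquared,
                                                      int dataPartitionBitSize) {
        Map<Integer, BigInteger> table = new HashMap<>();
        int maxValue = 1 << dataPartitionBitSize; // exclusive bound on partition values
        BigInteger power = BigInteger.ONE;        // element^0 mod N^2
        for (int b = 0; b < maxValue; b++) {
            table.put(b, power);
            power = power.multiply(element).mod(nSquared);
        }
        return table;
    }

    public static void main(String[] args) {
        BigInteger nSquared = BigInteger.valueOf(346921); // toy modulus, not secure
        BigInteger element = BigInteger.valueOf(12345).mod(nSquared);
        Map<Integer, BigInteger> table = buildTable(element, nSquared, 8);
        // The lookup agrees with a direct modular exponentiation.
        System.out.println(
            table.get(200).equals(element.modPow(BigInteger.valueOf(200), nSquared)));
        // prints true
    }
}
```

Building this table once per query element is what makes embedding pay off: the Responder's per-row work drops from a modPow to a map lookup.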
On Wed, Oct 12, 2016 at 9:36 AM, Tim Ellison <t.p.elli...@gmail.com> wrote:
> On 29/09/16 11:29, Ellison Anne Williams wrote:
> > In general, I am in favor of an abstract class.
> > However, note that in the distributed case, the 'table' is generated in a
> > distributed fashion and then used as such too ('split' and distributed).
> > FWIW - In preliminary testing, the lookup tables ended up not performing
> > any better at scale than the local caching mechanism that is currently in
> > place and used by default (in
> > org.apache.pirk.responder.wideskies.common.ComputeEncryptedRow).
> I'm trying to figure out why the Query is responsible for maintaining
> the expTable / expFile* info? These tables are only used by the
> responders, so doesn't it make sense to move the logic over there?
> The responders should decide whether they want to use caches to
> calculate the response, not the person asking the query.
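For contrast with the embedded table, the Responder-side caching that the thread mentions (the mechanism used by default in ComputeEncryptedRow) can be sketched roughly like this — hypothetical names, not Pirk's actual implementation; it memoizes modPow results so the caching decision stays entirely with the Responder:

```java
import java.math.BigInteger;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a Responder-side modPow cache: repeated
// (element, exponent) pairs across rows are computed only once.
public class ModPowCacheSketch {
    private final Map<String, BigInteger> cache = new ConcurrentHashMap<>();
    private final BigInteger nSquared;

    public ModPowCacheSketch(BigInteger nSquared) {
        this.nSquared = nSquared;
    }

    public BigInteger modPow(BigInteger element, BigInteger exponent) {
        String key = element + ":" + exponent;
        // Compute on first use, then serve every later request from the cache.
        return cache.computeIfAbsent(key, k -> element.modPow(exponent, nSquared));
    }
}
```

This keeps the optimization local to the Responder, which matches the point above: whether to cache is the Responder's choice, not something the Querier has to ship inside the Query object.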