[ https://issues.apache.org/jira/browse/PIRK-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439972#comment-15439972 ]

ASF GitHub Bot commented on PIRK-4:
-----------------------------------

Github user tellison commented on a diff in the pull request:

    https://github.com/apache/incubator-pirk/pull/74#discussion_r76494100
  
    --- Diff: src/main/java/org/apache/pirk/responder/wideskies/common/ComputeEncryptedRow.java ---
    @@ -321,4 +316,80 @@ public static void loadCacheFromHDFS(FileSystem fs, String hdfsFileName, Query q
     
         return returnPairs;
       }
    +
    +  /**
    +   * Method to compute the encrypted row elements for a query from extracted data partitions in the form of ArrayList<<BigInteger>>
    +   * <p/>
    +   * For each row (as indicated by key = hash(selector)), iterates over the dataPartitions and calculates the column values.
    +   * <p/>
    +   * Uses a static LRU cache for the modular exponentiation
    +   * <p/>
    +   * Caller is responsible for keeping track of the colIndex and the maxHitsPerSelector values
    +   * <p/>
    +   * Emits {@code Tuple2<<colNum, colVal>>}
    +   */
    +  public static List<Tuple2<Long,BigInteger>> computeEncRow(List<BigInteger> dataPartitions, Query query, int rowIndex, int colIndex)
    +      throws IOException
    +  {
    +    List<Tuple2<Long,BigInteger>> returnPairs = new ArrayList<Tuple2<Long,BigInteger>>();
    +
    +    // Pull the corresponding encrypted row query
    +    BigInteger rowQuery = query.getQueryElement(rowIndex);
    +
    +    // Initialize the column counter
    +    long colCounter = colIndex;
    +
    +    logger.debug("dataPartitions.size() = " + dataPartitions.size() + " rowIndex = " + rowIndex + " colCounter = " + colCounter);
    +
    +    // Update the associated column values
    +    for (int i = 0; i < dataPartitions.size(); ++i)
    +    {
    +      BigInteger part = dataPartitions.get(i);
    +
    +      BigInteger exp = null;
    +      try
    +      {
    +        exp = expCache.get(new Tuple3<BigInteger,BigInteger,BigInteger>(rowQuery, part, query.getNSquared()));
    +      } catch (ExecutionException e)
    +      {
    +        e.printStackTrace();
    --- End diff --
    
    Really continue on exception?
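The reviewer's concern is well founded: after the catch block, `exp` remains null and any later use would throw a NullPointerException. As context for what `expCache` is doing, here is a minimal stdlib-only sketch of an LRU cache for modular exponentiation keyed on (base, exponent, modulus). The class name, key encoding, and capacity are illustrative assumptions, not Pirk's actual implementation (which appears to use a loading cache whose `get` throws `ExecutionException`):

```java
import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU cache for BigInteger.modPow results.
// Names and capacity are hypothetical, for discussion only.
public class ModPowCache
{
  private final Map<String,BigInteger> cache;

  public ModPowCache(final int maxEntries)
  {
    // accessOrder=true gives least-recently-used iteration order;
    // removeEldestEntry evicts the LRU entry once capacity is exceeded
    cache = new LinkedHashMap<String,BigInteger>(16, 0.75f, true)
    {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String,BigInteger> eldest)
      {
        return size() > maxEntries;
      }
    };
  }

  public BigInteger modPow(BigInteger base, BigInteger exp, BigInteger modulus)
  {
    String key = base + "|" + exp + "|" + modulus;
    BigInteger cached = cache.get(key);
    if (cached == null)
    {
      cached = base.modPow(exp, modulus); // compute on cache miss
      cache.put(key, cached);
    }
    return cached;
  }
}
```

One way to address the review comment would be to rethrow rather than continue, e.g. wrapping the `ExecutionException` in the `IOException` the method already declares, so the failure surfaces at the call site instead of as a later NullPointerException.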


> Add Streaming Implementation for Apache Storm
> ---------------------------------------------
>
>                 Key: PIRK-4
>                 URL: https://issues.apache.org/jira/browse/PIRK-4
>             Project: PIRK
>          Issue Type: Task
>          Components: Responder
>            Reporter: Chris Harris
>            Assignee: Chris Harris
>
> Per the Pirk Roadmap, this is a feature to add support for Apache Storm



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)