The Reducer/Combiner Aggregators hold the logic to aggregate across an
entire batch; however, they do not have the logic to aggregate between
batches.
For that to happen, the state must read the previous TransactionId and
value from the datastore, determine whether the incoming data is in the
right sequence, and then increment the value within the datastore.

I am asking about this second part: where the logic goes to read the
previous data from the datastore and add it to the new incoming aggregate
data.
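To make the sequencing concrete, here is a minimal, self-contained sketch of that read-compare-write cycle. The class and method names (OpaqueCounter, applyBatch) are illustrative only, not Trident's actual API; Trident performs this bookkeeping inside OpaqueMap using OpaqueValue, but the decision logic is the same in spirit:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of opaque-state update logic, not Trident's real classes.
class OpaqueCounter {
    // Stored per key: the txid that last wrote, the value *before* that
    // write, and the current value. On a replay of the same txid we can
    // roll back to prev and re-apply, which is what makes opaque state
    // safe against partially-failed batches.
    static class Stored {
        final long txid, prev, curr;
        Stored(long txid, long prev, long curr) {
            this.txid = txid; this.prev = prev; this.curr = curr;
        }
    }

    private final Map<String, Stored> store = new HashMap<>();

    long applyBatch(String key, long txid, long batchCount) {
        Stored s = store.get(key);
        long base;
        if (s == null) {
            base = 0;          // first batch ever seen for this key
        } else if (txid == s.txid) {
            base = s.prev;     // replayed batch: undo the earlier write
        } else {
            base = s.curr;     // new batch: build on the current value
        }
        store.put(key, new Stored(txid, base, base + batchCount));
        return base + batchCount;
    }
}
```

The key point is that the per-batch aggregate (batchCount) comes from the Reducer/Combiner Aggregator, while this read/compare/write against the stored (txid, prev, curr) triple is what bridges batches.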


On Mon, Apr 21, 2014 at 6:58 PM, Cody A. Ray <[email protected]> wrote:

> It's the ReducerAggregator/CombinerAggregator's job to implement this logic.
> Look at Count and Sum, which are built into Trident. You can also implement
> your own aggregator.
>
> -Cody
>
>
> On Mon, Apr 21, 2014 at 2:57 PM, Raphael Hsieh <[email protected]> wrote:
>
>> If I am using an opaque spout and doing a persistent aggregate to a
>> MemcachedState, how is it aggregating/incrementing the values across all
>> batches?
>>
>> I want to implement an IBackingMap so that I can use an external
>> datastore. However, I'm unsure where the logic goes that reads the
>> previous data and aggregates it with the new data.
>>
>> From what I've been told, I need to implement the IBackingMap and the
>> multiPut/multiGet functions. So logically, I think it makes sense that I
>> would put this update logic in the multiPut function. However, the
>> OpaqueMap class already has multiGet logic to check the TxId of
>> the batch.
>> Instead of using the OpaqueMap class, should I just write my own
>> implementation?
>>
>> Thanks
>> --
>> Raphael Hsieh
>>
>>
>>
>>
>
>
>
> --
> Cody A. Ray, LEED AP
> [email protected]
> 215.501.7891
>
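For reference, the split being discussed in the quoted thread can be sketched as follows: the backing map only knows how to fetch and store values, while the txid bookkeeping lives in the wrapper (OpaqueMap in Trident). The interface below is redeclared locally so the snippet compiles on its own; it mirrors Trident's storm.trident.state.map.IBackingMap, but the in-memory store is just an illustration, not MemcachedState:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Local stand-in mirroring Trident's storm.trident.state.map.IBackingMap,
// redeclared here so the sketch is self-contained.
interface IBackingMap<T> {
    List<T> multiGet(List<List<Object>> keys);
    void multiPut(List<List<Object>> keys, List<T> vals);
}

// A deliberately "dumb" backing map: no txid checks, no sequencing.
// In Trident, OpaqueMap.build(...) wraps a map like this and owns all of
// the cross-batch logic, so the implementor only writes storage plumbing.
class InMemoryBackingMap<T> implements IBackingMap<T> {
    private final Map<List<Object>, T> store = new HashMap<>();

    @Override
    public List<T> multiGet(List<List<Object>> keys) {
        List<T> result = new ArrayList<>();
        for (List<Object> key : keys) {
            result.add(store.get(key)); // null when the key is unseen
        }
        return result;
    }

    @Override
    public void multiPut(List<List<Object>> keys, List<T> vals) {
        for (int i = 0; i < keys.size(); i++) {
            store.put(keys.get(i), vals.get(i));
        }
    }
}
```

Under this division of labor, the answer suggested by the quoted thread is that multiPut should stay a plain write: the read-previous-then-update sequencing belongs to the OpaqueMap wrapper, not to the IBackingMap implementation.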



-- 
Raphael Hsieh
Amazon.com
Software Development Engineer I
(978) 764-9014
