Hi Matti,

Any reason why you wouldn't want to implement both the cache and the
flushing logic in your IBackingMap?

Anyway, your IBackingMap implementation will receive batches of data, which
you can bulk upsert into the database (for example, with a MERGE statement,
or INSERT ... ON DUPLICATE KEY UPDATE in MySQL), so there's no need to do
slower single-record inserts/updates. Take a look at the storm-mysql
project on GitHub, which has TridentState implementations for MySQL.
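To illustrate the idea, here's a minimal, self-contained sketch of a time-based
cache with a flush callback, the kind of thing your IBackingMap's multiPut()
could delegate to. Note this is plain Java, not real Storm code: TimedFlushCache,
FlushCallback, and the key/value types are all hypothetical stand-ins, and the
flush callback is where you'd do the bulk upsert (e.g. JDBC addBatch() +
executeBatch()).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch -- not part of Storm's API.
// K stands in for your grouping fields (ad_id, zone); V for the aggregate.
class TimedFlushCache<K, V> {

    // The flush callback is where a real implementation would issue the
    // bulk upsert (MERGE / INSERT ... ON DUPLICATE KEY UPDATE via JDBC).
    interface FlushCallback<K, V> {
        void flush(Map<K, V> batch);
    }

    private final Map<K, V> cache = new HashMap<>();
    private final long flushIntervalMillis;
    private final FlushCallback<K, V> callback;
    private long lastFlush = System.currentTimeMillis();

    TimedFlushCache(long flushIntervalMillis, FlushCallback<K, V> callback) {
        this.flushIntervalMillis = flushIntervalMillis;
        this.callback = callback;
    }

    // Would be called from IBackingMap.multiPut() with the batch's
    // keys and freshly aggregated values.
    void putAll(List<K> keys, List<V> vals) {
        for (int i = 0; i < keys.size(); i++) {
            cache.put(keys.get(i), vals.get(i));
        }
        maybeFlush();
    }

    // Flush and clear the cache once the interval (e.g. 10 s) has elapsed.
    private void maybeFlush() {
        long now = System.currentTimeMillis();
        if (now - lastFlush >= flushIntervalMillis) {
            callback.flush(new HashMap<>(cache));
            cache.clear();
            lastFlush = now;
        }
    }
}
```

Since the cache lives inside the IBackingMap itself, the flush also clears its
state there, so nothing downstream (like ProcessResultFunction) needs to signal
it. One caveat: flushing only happens when new batches arrive, so with this
sketch a quiet stream can leave data unflushed longer than the interval.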

Best regards,

Danijel


On Mon, Feb 24, 2014 at 10:30 AM, Matti Dahlbom <[email protected]> wrote:

> I'd like to cache aggregation results in-memory (in my IBackingMap
> implementation) for, say, 10 seconds and then do an upsert to my final data
> storage (MySQL in this case). This is of course to avoid unnecessarily
> writing 1000 times a second, as our load is rather big.
>
> My question is, which entity should take care of the "flushing" and how? I
> have a topology as follows:
>
>         Stream stream = topology.newStream("logspout", spout)
>             .parallelismHint(8)
>             .each(new Fields("json"),
>                   new ProcessJsonFunction(),
>                   new Fields("ad_id", "zone", "analytics"))
>             .groupBy(new Fields("ad_id", "zone"))
>             .persistentAggregate(AnalyticsBackingMap.FACTORY,
>                                  new Fields("analytics"),
>                                  new AnalyticsSum(),
>                                  new Fields("analytics_aggregate"))
>             .newValuesStream()
>             .each(new Fields("analytics_aggregate"),
>                   new ProcessResultFunction(),
>                   new Fields());
>
> I was thinking I'd put the flushing logic into ProcessResultFunction. But
> then I would have to be able to signal my IBackingMap implementation to
> clear its state, since I'm dealing with time windows. And I cannot figure
> out a proper way to do this. :o
>
> - Matti
>



-- 
Danijel Schiavuzzi

E: [email protected]
W: www.schiavuzzi.com
T: +385989035562
Skype: danijels7