Hello!

You can actually use our data streamer (with allowOverwrite set to false).

Once you call flush() on it and it returns, you can be confident that
all 10,000 entries are readable from the cache. Note that this guarantee
only holds within a single cache.
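A minimal sketch of that suggestion, assuming a running Ignite cluster and an existing cache (the cache name "myCache" and key/value types are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            try (IgniteDataStreamer<Integer, String> streamer =
                     ignite.dataStreamer("myCache")) {
                // Only add new entries; do not overwrite existing ones.
                streamer.allowOverwrite(false);

                for (int i = 0; i < 10_000; i++)
                    streamer.addData(i, "record-" + i);

                // flush() blocks until all buffered entries have been
                // delivered, so after it returns every one of the 10,000
                // entries is readable from the cache.
                streamer.flush();
            }
        }
    }
}
```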

Regards,
-- 
Ilya Kasnacheev


Tue, 27 Oct 2020 at 13:18, ssansoy <s.san...@cmcmarkets.com>:

> Hi, thanks for the reply. I appreciate the suggestion, and if we were
> building a new solution around this, we would likely take that tack.
> Unfortunately, the entire platform we are looking to migrate over to
> Ignite has dependencies in places on updates arriving as a complete batch
> (e.g. whatever was in an update transaction).
>
> We've experimented with putting a queue in the client, as you say, with a
> timeout that gathers all sequentially arriving updates from the continuous
> query and groups them together after, e.g., 50 ms of quiet. However, this
> is quite fragile and timing-sensitive, and not something we can
> comfortably put into production.
>
> Are there any locking or signaling mechanisms (or anything else, really)
> that might help us here? E.g. we buffer the updates in the client and
> await some signal that the updates are complete. This signal would need
> to be fired after the continuous query has seen all the updates. E.g. the
> writer will:
>
> Write 10,000 records to the cache
> Notify something
>
> The client app will:
> Receive 10,000 updates, 1 at a time, queueing them up locally
> Upon that notification, drain this queue and process the records.
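>
> A runnable sketch of the buffer-and-signal mechanics described above,
> using plain JDK primitives in place of Ignite (in a real deployment the
> signal could itself be a marker entry in a second cache watched by
> another continuous query; the class and method names here are
> hypothetical):
>
> ```java
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.LinkedBlockingQueue;
>
> public class BatchDrain {
>     /** Buffers 'total' updates, waits for the completion signal,
>         then drains them as one batch. Returns the batch size. */
>     static int runBatch(int total) throws InterruptedException {
>         BlockingQueue<Integer> buffer = new LinkedBlockingQueue<>();
>         CountDownLatch batchComplete = new CountDownLatch(1);
>
>         // Writer side: publish every record, then fire the signal.
>         Thread writer = new Thread(() -> {
>             for (int i = 0; i < total; i++)
>                 buffer.add(i);
>             batchComplete.countDown(); // the "notify something" step
>         });
>         writer.start();
>
>         // Client side: block until the signal, then drain in one go.
>         batchComplete.await();
>         writer.join();
>         List<Integer> batch = new ArrayList<>();
>         buffer.drainTo(batch);
>         return batch.size();
>     }
>
>     public static void main(String[] args) throws Exception {
>         System.out.println("drained " + runBatch(10_000) + " records");
>     }
> }
> ```
>
> The CountDownLatch stands in for whatever cluster-visible signal the
> writer would emit after its final put.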
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
