Hello Christo and thank you for your feedback. 

At the moment we have a relational database that is not distributed or
partitioned. We have given some thought to configuring at least some
database tables as an Ignite cache store, and it is not something we
exclude. For now, though, due to some network I/O performance issues on the
cloud platform that will host the solution, we cannot say much more.

However, there is a strong chance that at least one or two of the tables
involved in the flow I described will be loaded into the cache. For this
reason I would like to ask the following:

1) What is the best policy for loading a table of about ~1,000,000 rows
into the cache and then querying it with ANSI SQL? This table will grow
periodically (after an ETL procedure) until it reaches an upper limit of
about ~3,000,000 rows. So I am looking for the optimal way to stream it
into the cache when a future deployment starts up, and to refresh it after
each ETL run (see the first sketch below).

2) If we use the MapReduce & ForkJoin approach, how can we combine it with
affinity? There are examples for distributed closures, but I do not see any
for ComputeTask/ComputeJobAdapter etc. Ideally each job should run an ANSI
SQL query against the table that is loaded and maintained in the cache, but
only over the rows that it keeps locally (see the second sketch below).
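
To make question 1 more concrete, here is a rough sketch of the kind of
load-and-query code I have in mind. The Person value class and the
"personCache" name are placeholders I made up for illustration (the real
schema will come from the ETL output), and I am assuming that
IgniteDataStreamer is an acceptable way to do both the initial load and the
periodic refresh; please correct me if a CacheStore-based loadCache() would
be preferable.

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class InitialLoadSketch {
    /** Placeholder value class standing in for one of our real tables. */
    static class Person {
        @QuerySqlField(index = true)
        long id;

        @QuerySqlField
        String name;

        Person(long id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
            cfg.setIndexedTypes(Long.class, Person.class); // enable ANSI SQL over the cache

            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

            // Initial bulk load (and, I assume, the refresh after each ETL run)
            // through a data streamer; in reality this loop would iterate over a
            // JDBC ResultSet coming from the relational database.
            try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("personCache")) {
                streamer.allowOverwrite(true); // let refreshed rows replace existing ones

                for (long id = 0; id < 1_000_000; id++)
                    streamer.addData(id, new Person(id, "name-" + id));
            }

            // Later on, plain SQL over the loaded data.
            List<List<?>> rows = cache.query(
                new SqlFieldsQuery("select id, name from Person where id < ?").setArgs(100L)).getAll();

            System.out.println("Matched rows: " + rows.size());
        }
    }
}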
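
And here is roughly what I imagine for question 2: a ComputeTask that sends
one ComputeJobAdapter to every node holding a piece of the cache, where each
job runs a local ANSI SQL query over only the rows stored on its own node,
and the reduce step combines the partial results. It reuses the hypothetical
"personCache"/Person names from the sketch above and assumes the cache has
no backups (otherwise I suspect backup copies could be counted twice). I do
not know whether this is the idiomatic way to combine
ComputeTask/ComputeJobAdapter with affinity, which is exactly what I am
asking.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeTaskAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

/** Runs the given SQL on every data node, each over its local rows only, then sums the counts. */
public class LocalSqlCountTask extends ComputeTaskAdapter<String, Long> {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String sql) {
        Map<ComputeJob, ClusterNode> jobs = new HashMap<>();

        // One job for every node that keeps a piece of "personCache".
        for (ClusterNode node : ignite.cluster().forDataNodes("personCache").nodes())
            jobs.put(new LocalSqlCountJob(sql), node);

        return jobs;
    }

    @Override
    public Long reduce(List<ComputeJobResult> results) {
        long total = 0;

        for (ComputeJobResult res : results)
            total += res.<Long>getData();

        return total;
    }

    private static class LocalSqlCountJob extends ComputeJobAdapter {
        @IgniteInstanceResource
        private transient Ignite ignite;

        private final String sql;

        LocalSqlCountJob(String sql) {
            this.sql = sql;
        }

        @Override
        public Long execute() {
            // setLocal(true) restricts the query to the rows this node stores.
            List<List<?>> rows = ignite.cache("personCache")
                .query(new SqlFieldsQuery(sql).setLocal(true))
                .getAll();

            return (Long) rows.get(0).get(0);
        }
    }
}

The idea would then be to invoke it with something like
ignite.compute().execute(new LocalSqlCountTask(), "select count(*) from Person"),
but again, I am not sure whether this is the intended pattern or whether
Affinity should be used more explicitly (e.g. mapping jobs per partition).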

 


