Hi Cheyenne,

Thank you very much.

The load cannot be done in parallel over a single JDBC connection. To make
it parallel, each node must read its own subset of the records.
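
For context, this is roughly what a plain single-connection load looks like
(a minimal sketch; the ZooKeeper URL, table, cache name, and Long/String
types are all made up):

import java.sql.*;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class SequentialLoad {
    // One JDBC connection on one node: every row flows through this
    // single loop, so the rest of the cluster sits idle during the load.
    static void load(Ignite ignite) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM MY_TABLE");
             IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            while (rs.next())
                streamer.addData(rs.getLong(1), rs.getString(2));
        }
    }
}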

My approach is as follows:

1. Create a cluster-wide singleton distributed custom service.

2. In the service's init() method, get the information for all regions
(the key ranges whose records have to be read).

3. In the execute() method, fan the regions out as jobs using
ignite.compute().call(), so that each node reads one region's data.

4. Scan each region (bounded by its start row and end row) with a scan
query and load the results into the cache. A sketch of the whole flow
follows below.
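
Putting steps 2-4 together, a rough sketch (assumptions: table MY_TABLE,
cache myCache, column family "cf" with qualifier "q", String keys/values,
HBase 1.x client API; error handling omitted):

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class RegionLoaderService implements Service {
    @IgniteInstanceResource
    private transient Ignite ignite;

    // Step 2: start/end row keys of every region, resolved in init().
    private transient List<byte[][]> regions;

    @Override public void init(ServiceContext ctx) throws Exception {
        regions = new ArrayList<>();
        try (org.apache.hadoop.hbase.client.Connection hc =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator loc = hc.getRegionLocator(TableName.valueOf("MY_TABLE"))) {
            Pair<byte[][], byte[][]> keys = loc.getStartEndKeys();
            for (int i = 0; i < keys.getFirst().length; i++)
                regions.add(new byte[][] {keys.getFirst()[i], keys.getSecond()[i]});
        }
    }

    @Override public void execute(ServiceContext ctx) {
        // Step 3: one job per region; Ignite load-balances the jobs
        // across the cluster, so regions are scanned in parallel.
        Collection<IgniteCallable<Void>> jobs = new ArrayList<>();
        for (byte[][] r : regions)
            jobs.add(new RegionScanJob(r[0], r[1]));
        ignite.compute().call(jobs);
    }

    @Override public void cancel(ServiceContext ctx) { /* nothing to clean up */ }

    /** Step 4: scans one region's key range and streams it into the cache. */
    private static class RegionScanJob implements IgniteCallable<Void> {
        @IgniteInstanceResource
        private transient Ignite ignite;

        private final byte[] startRow;
        private final byte[] stopRow;

        RegionScanJob(byte[] startRow, byte[] stopRow) {
            this.startRow = startRow;
            this.stopRow = stopRow;
        }

        @Override public Void call() throws Exception {
            try (org.apache.hadoop.hbase.client.Connection hc =
                     ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = hc.getTable(TableName.valueOf("MY_TABLE"));
                 IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("myCache")) {
                Scan scan = new Scan(startRow, stopRow); // scans [start, stop)
                try (ResultScanner rs = table.getScanner(scan)) {
                    for (Result row : rs)
                        streamer.addData(Bytes.toString(row.getRow()),
                            Bytes.toString(row.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));
                }
            }
            return null;
        }
    }
}

Deploying it as the cluster-wide singleton from step 1:

ignite.services().deployClusterSingleton("regionLoader", new RegionLoaderService());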


Hope this gives a clear idea.


Please let me know if you have any questions.


Thanks.

On 13 October 2016 at 13:34, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:

> Check out this post for loading data from MySQL to Ignite
> https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto
>
> and this one (recommended) on how to UPSERT to Phoenix on Ignite put,
> delete, etc.
> https://apacheignite.readme.io/docs/persistent-store#cachestore-example
>
> Just replace the MySQL things with Phoenix things (e.g. the JDBC driver,
> INSERT to UPSERT, etc.). If after reading you still have issues, feel free
> to ask in this thread for more help.
>
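
To make the quoted suggestion concrete, here is a minimal sketch of a
Phoenix-backed CacheStore (the PERSON table, column names, ZooKeeper URL,
and Long/String types are all made up; loadCache is covered by the docs
linked above):

import java.sql.*;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class PhoenixPersonStore extends CacheStoreAdapter<Long, String> {
    private static final String URL = "jdbc:phoenix:zk-host:2181";

    @Override public String load(Long key) {
        try (Connection conn = DriverManager.getConnection(URL);
             PreparedStatement st = conn.prepareStatement(
                 "SELECT NAME FROM PERSON WHERE ID = ?")) {
            st.setLong(1, key);
            try (ResultSet rs = st.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try (Connection conn = DriverManager.getConnection(URL);
             PreparedStatement st = conn.prepareStatement(
                 // Phoenix has no INSERT/UPDATE; UPSERT covers both.
                 "UPSERT INTO PERSON (ID, NAME) VALUES (?, ?)")) {
            st.setLong(1, entry.getKey());
            st.setString(2, entry.getValue());
            st.executeUpdate();
            conn.commit(); // Phoenix connections default to autoCommit=false
        }
        catch (SQLException e) {
            throw new CacheWriterException(e);
        }
    }

    @Override public void delete(Object key) {
        try (Connection conn = DriverManager.getConnection(URL);
             PreparedStatement st = conn.prepareStatement(
                 "DELETE FROM PERSON WHERE ID = ?")) {
            st.setObject(1, key);
            st.executeUpdate();
            conn.commit();
        }
        catch (SQLException e) {
            throw new CacheWriterException(e);
        }
    }
}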
