After spending some time on Phoenix MapReduce, I was able to get the
Phoenix input splits for a given SQL query.

As the query plan and the Phoenix input splits are not Serializable, I am
not able to use them directly for the Ignite load.

Is there any way to read using a scan on the Phoenix table (not on the
HBase table)?
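
For comparison, a plain SELECT through the Phoenix JDBC driver is already a
Phoenix-level scan (Phoenix parallelizes it into HBase scans internally),
though it pulls every row through a single client connection. A minimal
sketch of loading that way, assuming a hypothetical PERSON table, a
hypothetical cache named personCache, and a hypothetical ZooKeeper quorum:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class PhoenixJdbcLoader {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start();
             // "personCache" is a hypothetical cache name; it must exist.
             IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache");
             // "zk-host:2181" is a hypothetical ZooKeeper quorum.
             Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement();
             // PERSON is a hypothetical table; Phoenix runs this SELECT as
             // parallel HBase scans on the server side.
             ResultSet rs = stmt.executeQuery("SELECT ID, NAME FROM PERSON")) {
            while (rs.next())
                streamer.addData(rs.getLong("ID"), rs.getString("NAME"));
        }
    }
}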

Thanks.


On 13 October 2016 at 15:13, Anil <anilk...@gmail.com> wrote:

> Hi Cheyenne,
>
> Thank you very much.
>
> The load cannot be done in parallel with a single JDBC connection. To make
> it parallel, each node must read its own set of records.
>
> Following is my approach; a rough sketch follows the list below.
>
> 1. Create Cluster wide singleton distributed custom service
>
> 2. Get the information for all regions (each region's records have to be
> read) in the init() method of the custom service
>
> 3. Broadcast the regions using ignite.compute().call() in the execute()
> method of the custom service, so that each node reads one region's data
>
> 4. Scan each region (with its start row and end row) using a scan and load
> the results into the cache
>
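> For reference, a rough sketch of steps 1-4 (the class, table, and cache
> names are hypothetical, and error handling is omitted):
>
> import java.util.ArrayList;
> import java.util.Collection;
> import java.util.List;
>
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HRegionLocation;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.RegionLocator;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.lang.IgniteCallable;
> import org.apache.ignite.resources.IgniteInstanceResource;
> import org.apache.ignite.services.Service;
> import org.apache.ignite.services.ServiceContext;
>
> // Step 1: deploy as a cluster-wide singleton, e.g.
> // ignite.services().deployClusterSingleton("loader", new RegionLoaderService());
> public class RegionLoaderService implements Service {
>     @IgniteInstanceResource
>     private transient Ignite ignite;
>
>     private transient List<byte[][]> regions;
>
>     // Step 2: collect the [startRow, endRow) boundaries of every region.
>     @Override public void init(ServiceContext ctx) throws Exception {
>         regions = new ArrayList<>();
>         try (Connection hc = ConnectionFactory.createConnection(HBaseConfiguration.create());
>              RegionLocator locator = hc.getRegionLocator(TableName.valueOf("PERSON"))) {
>             for (HRegionLocation loc : locator.getAllRegionLocations())
>                 regions.add(new byte[][] {
>                     loc.getRegionInfo().getStartKey(), loc.getRegionInfo().getEndKey() });
>         }
>     }
>
>     // Step 3: one callable per region; compute().call() spreads them across the nodes.
>     @Override public void execute(ServiceContext ctx) {
>         Collection<IgniteCallable<Void>> jobs = new ArrayList<>();
>         for (byte[][] r : regions)
>             jobs.add(new RegionScanJob(r[0], r[1]));
>         ignite.compute().call(jobs);
>     }
>
>     @Override public void cancel(ServiceContext ctx) {
>         // Nothing to clean up in this sketch.
>     }
> }
>
> // Step 4: each job scans its own region and puts the rows into the cache.
> class RegionScanJob implements IgniteCallable<Void> {
>     // "0" is the Phoenix default column family; NAME is a hypothetical column.
>     private static final byte[] CF = Bytes.toBytes("0");
>     private static final byte[] NAME = Bytes.toBytes("NAME");
>
>     @IgniteInstanceResource
>     private transient Ignite ignite;
>
>     private final byte[] startRow;
>     private final byte[] stopRow;
>
>     RegionScanJob(byte[] startRow, byte[] stopRow) {
>         this.startRow = startRow;
>         this.stopRow = stopRow;
>     }
>
>     @Override public Void call() throws Exception {
>         try (Connection hc = ConnectionFactory.createConnection(HBaseConfiguration.create());
>              Table table = hc.getTable(TableName.valueOf("PERSON"));
>              ResultScanner scanner = table.getScanner(new Scan(startRow, stopRow))) {
>             for (Result r : scanner)
>                 ignite.cache("personCache").put(
>                     Bytes.toString(r.getRow()), Bytes.toString(r.getValue(CF, NAME)));
>         }
>         return null;
>     }
> }
>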
>
> Hope this gives a clear idea.
>
>
> Please let me know if you have any questions.
>
>
> Thanks.
>
>
>
>
> On 13 October 2016 at 13:34, Cheyenne Forbes <cheyenne.osanu.forbes@gmail.com> wrote:
>
>> Check out this post on loading data from MySQL into Ignite:
>> https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto
>>
>> and this one (recommended) on how to map Ignite put, delete, etc. to
>> Phoenix UPSERT and DELETE:
>> https://apacheignite.readme.io/docs/persistent-store#cachestore-example
>>
>> Just replace the MySQL parts with their Phoenix equivalents (e.g. the JDBC
>> driver, INSERT with UPSERT, etc.). If you still have issues after reading,
>> feel free to ask in this thread for more help.
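>>
>> A minimal write-through sketch along those lines, assuming a hypothetical
>> PERSON (ID, NAME) table, a hypothetical ZooKeeper quorum, and the Phoenix
>> JDBC driver on the classpath:
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.PreparedStatement;
>> import java.sql.ResultSet;
>>
>> import javax.cache.Cache;
>> import javax.cache.integration.CacheLoaderException;
>> import javax.cache.integration.CacheWriterException;
>>
>> import org.apache.ignite.cache.store.CacheStoreAdapter;
>>
>> public class PhoenixPersonStore extends CacheStoreAdapter<Long, String> {
>>     // Hypothetical ZooKeeper quorum for the Phoenix connection.
>>     private static final String URL = "jdbc:phoenix:zk-host:2181";
>>
>>     @Override public String load(Long key) {
>>         try (Connection conn = DriverManager.getConnection(URL);
>>              PreparedStatement ps =
>>                  conn.prepareStatement("SELECT NAME FROM PERSON WHERE ID = ?")) {
>>             ps.setLong(1, key);
>>             try (ResultSet rs = ps.executeQuery()) {
>>                 return rs.next() ? rs.getString(1) : null;
>>             }
>>         }
>>         catch (Exception e) {
>>             throw new CacheLoaderException(e);
>>         }
>>     }
>>
>>     // Ignite put -> Phoenix UPSERT (Phoenix has no INSERT statement).
>>     @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
>>         try (Connection conn = DriverManager.getConnection(URL);
>>              PreparedStatement ps =
>>                  conn.prepareStatement("UPSERT INTO PERSON (ID, NAME) VALUES (?, ?)")) {
>>             ps.setLong(1, entry.getKey());
>>             ps.setString(2, entry.getValue());
>>             ps.executeUpdate();
>>             conn.commit(); // Phoenix buffers mutations until commit
>>         }
>>         catch (Exception e) {
>>             throw new CacheWriterException(e);
>>         }
>>     }
>>
>>     // Ignite remove -> Phoenix DELETE.
>>     @Override public void delete(Object key) {
>>         try (Connection conn = DriverManager.getConnection(URL);
>>              PreparedStatement ps =
>>                  conn.prepareStatement("DELETE FROM PERSON WHERE ID = ?")) {
>>             ps.setLong(1, (Long) key);
>>             ps.executeUpdate();
>>             conn.commit();
>>         }
>>         catch (Exception e) {
>>             throw new CacheWriterException(e);
>>         }
>>     }
>> }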
>>
>
>
