Re: cannot drop a table in Phoenix

2016-10-18 Thread Divya Gehlot
Hi Mich,
Which version of Phoenix are you using ?


Thanks,
Divya

On 17 October 2016 at 23:41, Mich Talebzadeh wrote:

> Hi,
>
> I have a table marketDataHbase created on HBase, as seen below:
>
> [image: Inline images 1]
>
>
> Trying to drop it, but Phoenix cannot find it:
>
> 0: jdbc:phoenix:rhes564:2181> drop table "marketDataHbase";
> Error: ERROR 1012 (42M03): Table undefined. tableName=marketDataHbase
> (state=42M03,code=1012)
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03):
> Table undefined. tableName=marketDataHbase
>
> Any ideas what causes it?
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>


Re: Does Phoenix support select by version?

2016-10-18 Thread Divya Gehlot
Hi Yang ,

Can you share the details in the forum? It would be useful for everybody.


Thanks,
Divya

On 19 October 2016 at 10:56, Yang Zhang  wrote:

> William and James Taylor helped me solve this problem
>
> Thanks
>
> 2016-10-19 10:20 GMT+08:00 Yang Zhang :
>
>> Hi
>>
>> I saw that Phoenix can create a table that defines HBase's VERSIONS.
>> An example on Phoenix's website is
>>
>> CREATE TABLE IF NOT EXISTS "my_case_sensitive_table"
>> ( "id" char(10) not null primary key, "value" integer)
>> DATA_BLOCK_ENCODING='NONE',VERSIONS=5,MAX_FILESIZE=200 split on
>> (?, ?, ?)
>>
>>
>> But I don't find any version support in the SELECT grammar, so what's the
>> advantage of defining such a table?
>>
>
>


problem with index

2016-10-18 Thread jinzhuan
I have a running phoenix table with index.


When too much data is written at the same time, index writes cannot keep up,
which finally causes a region server to go down.


Then I dropped the index and restarted the region server.


Here comes the problem:
the region server recovers edits and finds index data; it tries to replay these
index writes, but since the index is dropped, it cannot replay the updates, so
it loops forever.


Maybe log replay can be improved to handle the case where the index table is gone?






jinzhuan

Re: Does Phoenix support select by version?

2016-10-18 Thread Yang Zhang
William and James Taylor helped me solve this problem

Thanks

2016-10-19 10:20 GMT+08:00 Yang Zhang :

> Hi
>
> I saw that Phoenix can create a table that defines HBase's VERSIONS.
> An example on Phoenix's website is
>
> CREATE TABLE IF NOT EXISTS "my_case_sensitive_table"
> ( "id" char(10) not null primary key, "value" integer)
> DATA_BLOCK_ENCODING='NONE',VERSIONS=5,MAX_FILESIZE=200 split on
> (?, ?, ?)
>
>
> But I don't find any version support in the SELECT grammar, so what's the
> advantage of defining such a table?
>


Re: cannot drop a table in Phoenix

2016-10-18 Thread Yang Zhang
Hi

I met the same problem before.
This may happen when you define your table with a duplicate column,
such as: CREATE TABLE test (id INTEGER PRIMARY KEY, c VARCHAR, c VARCHAR).
You can try to delete your table's metadata rows from SYSTEM.CATALOG,
maybe like: DELETE FROM SYSTEM.CATALOG WHERE TABLE_NAME='your table name'

This issue exists in previous versions of Phoenix.
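To make that cleanup concrete, here is a sketch of the kind of statements involved. The table name is the one from this thread; the column names are from SYSTEM.CATALOG as I understand it, and directly deleting catalog rows is risky (inspect first, and take a backup), so treat this as an illustration rather than a recommended procedure:

```sql
-- Inspect the catalog rows for the table. SYSTEM.CATALOG stores Phoenix
-- metadata; note that unquoted identifiers are upper-cased by Phoenix, so a
-- table created as "marketDataHbase" is stored case-sensitively.
SELECT TABLE_SCHEM, TABLE_NAME, COLUMN_NAME
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'marketDataHbase';

-- Only if stale or duplicate rows are confirmed, remove them (at your own risk):
DELETE FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'marketDataHbase';
```

After deleting catalog rows, clients may still hold cached metadata, so reconnecting (or restarting region servers) may be needed before the table can be recreated.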



2016-10-18 1:38 GMT+08:00 Dong iL, Kim :

> try adding the system tablespace.
>
> On Tue, 18 Oct 2016 at 12:41 AM Mich Talebzadeh wrote:
>
>> Hi,
>>
>> I have a table marketDataHbase created on HBase, as seen below:
>>
>> [image: Inline images 1]
>>
>>
>> Trying to drop it, but Phoenix cannot find it:
>>
>> 0: jdbc:phoenix:rhes564:2181> drop table "marketDataHbase";
>> Error: ERROR 1012 (42M03): Table undefined. tableName=marketDataHbase
>> (state=42M03,code=1012)
>> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03):
>> Table undefined. tableName=marketDataHbase
>>
>> Any ideas what causes it?
>>
>> Thanks
>>
>> Dr Mich Talebzadeh
>>
>


Does Phoenix support select by version?

2016-10-18 Thread Yang Zhang
Hi

I saw that Phoenix can create a table that defines HBase's VERSIONS.
An example on Phoenix's website is

CREATE TABLE IF NOT EXISTS "my_case_sensitive_table"
( "id" char(10) not null primary key, "value" integer)
DATA_BLOCK_ENCODING='NONE',VERSIONS=5,MAX_FILESIZE=200 split on (?,
?, ?)


But I don't find any version support in the SELECT grammar, so what's the
advantage of defining such a table?
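For what it's worth, Phoenix does not expose per-cell version scans in its SELECT grammar, but point-in-time reads are possible through the CurrentSCN JDBC connection property, which is one reason the VERSIONS setting can still be useful. A hedged sketch follows; the ZooKeeper host and timestamp value are illustrative:

```sql
-- Phoenix has no SELECT-level syntax for HBase cell versions, but you can open
-- a connection "as of" an earlier timestamp via the CurrentSCN JDBC property,
-- e.g. a connection URL like:
--
--   jdbc:phoenix:localhost:2181;CurrentSCN=1476800000000
--
-- Queries on that connection then see the data as it existed at that
-- timestamp, provided VERSIONS (and KEEP_DELETED_CELLS / TTL) retain the
-- older cells:
SELECT "id", "value"
FROM "my_case_sensitive_table"
WHERE "id" = 'row0000001';
```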


Re: Using Apache perf with Hbase 1.1

2016-10-18 Thread Pradheep Shanmugam
Hi,

All the regions are online according to the :16010/master-status web UI.
Phoenix is not able to connect properly due to the error "Caused by:
java.io.IOException: hconnection-0xa59a583 closed".

Thanks,
Pradheep

From: Mujtaba Chohan
Reply-To: "user@phoenix.apache.org"
Date: Tuesday, October 18, 2016 at 5:36 PM
To: "user@phoenix.apache.org"
Subject: Re: Using Apache perf with Hbase 1.1

Cannot get all table regions

Check that there are no offline regions. See related thread 
here.

On Tue, Oct 18, 2016 at 2:11 PM, Pradheep Shanmugam wrote:
Hi,

I am trying to connect Pherf to an HBase cluster running HBase 1.1 using the
command below. Could you please help me connect Pherf to an HBase cluster running v1.1?

java -Xms512m -Xmx3072m  -cp 
"/home/ambari/pherf/phoenix/bin/../phoenix-pherf/config:/etc/hbase/conf:/home/ambari/pherf/phoenix/bin/../phoenix-client/target/phoenix-server-client-4.7.0-HBase-1.1.jar:/home/ambari/pherf/phoenix/bin/../phoenix-pherf/target/phoenix-pherf-4.8.1-HBase-1.1.jar"
 -Dlog4j.configuration=file:/home/ambari/pherf/phoenix/bin/log4j.properties 
org.apache.phoenix.pherf.Pherf -drop all -l -q -z hbase-perf-rs1 -schemaFile 
'.*user_defined_schema.sql' -scenarioFile '.*user_defined_scenario.xml'
And I get the exception below:

Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/cli/ParseException
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
at java.lang.Class.getMethod0(Class.java:2856)
at java.lang.Class.getMethod(Class.java:1668)
at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.cli.ParseException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 6 more

When I tried to connect using the command below,
java -Xms512m -Xmx3072m  -cp 
"/home/ambari/pherf/phoenix/bin/../phoenix-pherf/config:/etc/hbase/conf:/home/ambari/pherf/phoenix/bin/../phoenix-client/target/phoenix-4.9.0-HBase-1.2-SNAPSHOT-client.jar:/home/ambari/pherf/phoenix/bin/../phoenix-pherf/target/phoenix-pherf-4.9.0-HBase-1.2-SNAPSHOT-minimal.jar"
 -Dlog4j.configuration=file:/home/ambari/pherf/phoenix/bin/log4j.properties 
org.apache.phoenix.pherf.Pherf -drop all -l -q -z hbase-perf-rs1 -schemaFile 
'.*user_defined_schema.sql' -scenarioFile '.*user_defined_scenario.xml'

I got the error below. Could it be because I am connecting to HBase 1.1
with a 1.2 client?

java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:457)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:549)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:542)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:477)
at 
org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:370)
at 
org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:60)
at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:218)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:341)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:206)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:290)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:270)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:269)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1476)
at 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getTables(PhoenixDatabaseMetaData.java:1149)
at 
org.apache.phoenix.pherf.util.PhoenixUtil.getTableMetaData(PhoenixUtil.java:220)
at org.apache.phoenix.pherf.util.PhoenixUtil.deleteTables(PhoenixUtil.java:192)
at org.apache.phoenix.pherf.Pherf.run(Pherf.java:234)
at 

Re: Using Apache perf with Hbase 1.1

2016-10-18 Thread Mujtaba Chohan
>
> Cannot get all table regions
>

Check that there are no offline regions. See the related thread.

Using Apache perf with Hbase 1.1

2016-10-18 Thread Pradheep Shanmugam
Hi,

I am trying to connect Pherf to an HBase cluster running HBase 1.1 using the
command below. Could you please help me connect Pherf to an HBase cluster running v1.1?

java -Xms512m -Xmx3072m  -cp 
"/home/ambari/pherf/phoenix/bin/../phoenix-pherf/config:/etc/hbase/conf:/home/ambari/pherf/phoenix/bin/../phoenix-client/target/phoenix-server-client-4.7.0-HBase-1.1.jar:/home/ambari/pherf/phoenix/bin/../phoenix-pherf/target/phoenix-pherf-4.8.1-HBase-1.1.jar"
 -Dlog4j.configuration=file:/home/ambari/pherf/phoenix/bin/log4j.properties 
org.apache.phoenix.pherf.Pherf -drop all -l -q -z hbase-perf-rs1 -schemaFile 
'.*user_defined_schema.sql' -scenarioFile '.*user_defined_scenario.xml'
And I get the exception below:

Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/cli/ParseException
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
at java.lang.Class.getMethod0(Class.java:2856)
at java.lang.Class.getMethod(Class.java:1668)
at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.cli.ParseException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 6 more

When I tried to connect using the command below,
java -Xms512m -Xmx3072m  -cp 
"/home/ambari/pherf/phoenix/bin/../phoenix-pherf/config:/etc/hbase/conf:/home/ambari/pherf/phoenix/bin/../phoenix-client/target/phoenix-4.9.0-HBase-1.2-SNAPSHOT-client.jar:/home/ambari/pherf/phoenix/bin/../phoenix-pherf/target/phoenix-pherf-4.9.0-HBase-1.2-SNAPSHOT-minimal.jar"
 -Dlog4j.configuration=file:/home/ambari/pherf/phoenix/bin/log4j.properties 
org.apache.phoenix.pherf.Pherf -drop all -l -q -z hbase-perf-rs1 -schemaFile 
'.*user_defined_schema.sql' -scenarioFile '.*user_defined_scenario.xml'

I got the error below. Could it be because I am connecting to HBase 1.1
with a 1.2 client?

java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:457)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:549)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:542)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:477)
at 
org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:370)
at 
org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:60)
at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:218)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:341)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:206)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:290)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:270)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:269)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1476)
at 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getTables(PhoenixDatabaseMetaData.java:1149)
at 
org.apache.phoenix.pherf.util.PhoenixUtil.getTableMetaData(PhoenixUtil.java:220)
at org.apache.phoenix.pherf.util.PhoenixUtil.deleteTables(PhoenixUtil.java:192)
at org.apache.phoenix.pherf.Pherf.run(Pherf.java:234)
at org.apache.phoenix.pherf.Pherf.main(Pherf.java:188)
Caused by: java.io.IOException: hconnection-0xa59a583 closed
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1685)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.isTableOnlineState(ZooKeeperRegistry.java:122)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableDisabled(ConnectionManager.java:979)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1148)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.relocateRegion(ConnectionManager.java:1136)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:957)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:535)


Re: Region start row and end row

2016-10-18 Thread Anil
After spending some time on Phoenix MapReduce, I was able to get the
Phoenix input splits for a given SQL query.

As the query plan and Phoenix input splits are not Serializable, I am not
able to use them directly for the Ignite load.

Is there any way to read using a scan on a Phoenix table (not on the HBase
table)?

Thanks.
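As a possible alternative to raw HBase region scans, a Phoenix-level range query bounded on the primary key can read one region's slice at a time, entirely through SQL. The table and column names below are hypothetical, and the start/end values stand in for the region boundary row keys obtained from the input splits:

```sql
-- Read one region's worth of rows through Phoenix (not the HBase API) by
-- bounding the query on the leading primary-key column with the region's
-- start and end row keys. Each Ignite node can run one such slice in
-- parallel with the others.
SELECT *
FROM my_table
WHERE pk >= 'region_start_key'
  AND pk <  'region_end_key';
```

Since only the boundary strings (not the non-Serializable split objects) need to be shipped to each node, this sidesteps the serialization problem described above.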


On 13 October 2016 at 15:13, Anil  wrote:

> Hi Cheyenne,
>
> Thank you very much.
>
> Load cannot be done in parallel with one JDBC connection. To make it
> parallel, each node must read a set of records.
>
> Following is my approach.
>
> 1. Create Cluster wide singleton distributed custom service
>
> 2. Get all region information (for each record that has to be read) in the
> init() method of the custom service
>
> 3. Broadcast region(s) using ignite.compute().call() in execute() method
> of custom service. so that each node reads a region data.
>
> 4. Scan a particular region (with start row and end row) using scan query
> and load into cache
>
>
> Hope this gives a clear idea.
>
>
> Please let me know if you have any questions.
>
>
> Thanks.
>
>
>
>
> On 13 October 2016 at 13:34, Cheyenne Forbes wrote:
>
>> Check out this post for loading data from MySQL to Ignite
>> https://dzone.com/articles/apache-ignite-how-to-read-data-from-persistent-sto
>>
>> and this one (recommended) on how to UPSERT to Phoenix on Ignite PUT,
>> delete, etc.
>> https://apacheignite.readme.io/docs/persistent-store#cachestore-example
>>
>> Just replace the MySQL things with Phoenix things (e.g. the JDBC driver,
>> INSERT to UPSERT, etc.). If after reading you still have issues, feel free
>> to ask in this thread for more help.
>>
>
>