Re: org.apache.phoenix.join.MaxServerCacheSizeExceededException

2016-02-10 Thread rafa
Hi Nanda,

It seems the server is taking the default value for
phoenix.query.maxServerCacheBytes

https://phoenix.apache.org/tuning.html

phoenix.query.maxServerCacheBytes

   - Maximum size (in bytes) of the raw results of a relation before being
   compressed and sent over to the region servers.
   - Attempting to serialize the raw results of a relation with a size
   bigger than this setting will result in a
   MaxServerCacheSizeExceededException.
   - *Default: 104,857,600*
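
As far as I know this is a client-side property, so the override has to be
visible to the Phoenix JDBC client itself (for example via the hbase-site.xml
on the client's classpath), not only to the region servers. A rough sketch,
with a purely illustrative value of about 200 MB:

phoenix.query.maxServerCacheBytes=209715200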

Regards,

rafa


On Wed, Feb 10, 2016 at 1:37 PM, Nanda  wrote:

> Hi ,
>
> I am using HDP 2.3.0 with Phoenix 4.4 and I quite often get the below
> exception:
>
> Caused by: java.sql.SQLException: Encountered exception in sub plan [0]
> execution.
> at
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1223)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> com.brocade.nva.dataaccess.AbstractDAO.getResultSet(AbstractDAO.java:388)
> ~[nvp-data-access-1.0-SNAPSHOT.jar:na]
> at
> com.brocade.nva.dataaccess.HistoryDAO.getSummaryTOP10ReportDetails(HistoryDAO.java:306)
> ~[nvp-data-access-1.0-SNAPSHOT.jar:na]
> ... 75 common frames omitted
> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
> Size of hash cache (104857651 bytes) exceeds the maximum allowed size
> (104857600 bytes)
> at
> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ~[na:1.8.0_40]
> at
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
> ~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_40]
>
>
> Below are the params I am using:
>
> server side properties:
> phoenix.coprocessor.maxServerCacheTimeToLiveMs=18
> phoenix.groupby.maxCacheSize=1572864000
> phoenix.query.maxGlobalMemoryPercentage=60
> phoenix.query.maxGlobalMemorySize=409600
> phoenix.stats.guidepost.width=524288000
>
>
> client side properties are:
> hbase.client.scanner.timeout.period=18
> phoenix.query.spoolThresholdBytes=1048576000
> phoenix.query.timeoutMs=18
> phoenix.query.threadPoolSize=240
> phoenix.query.maxGlobalMemoryPercentage=60
> phoenix.query.maxServerCacheBytes=1048576810
>
>
> and my hbase heap is set to 4GB
>
> Is there some property I need to set explicitly for this?
>
> Thanks,
> Nanda
>
>


Multiple versions for single row key

2016-02-10 Thread kannan.ramanathan
Hello,

HBase tables support multiple versions (default is 3) for a single row key. I am 
trying to see how efficiently this can be achieved in Phoenix (I don't want to 
create a view on an existing HBase table, just want to go with a new Phoenix table).

Is it better to create a separate secondary key column in the Phoenix table, which 
is of course not unique, and set up a secondary index on this column for faster 
querying?
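
To make it concrete, the alternative I am weighing this against is modelling the 
version explicitly in the row key, so each version is just another Phoenix row 
(a rough sketch, table and column names are purely illustrative):

CREATE TABLE DOCS (
    DOC_ID     VARCHAR   NOT NULL,
    VERSION_TS TIMESTAMP NOT NULL,
    PAYLOAD    VARCHAR
    CONSTRAINT PK PRIMARY KEY (DOC_ID, VERSION_TS DESC)
);

-- latest version of a key becomes a short range scan thanks to the DESC sort
SELECT * FROM DOCS WHERE DOC_ID = 'doc-123' LIMIT 1;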

Also, is this JIRA related to what I am asking?

https://issues.apache.org/jira/browse/PHOENIX-590

Sorry if this has been answered before and thanks in advance.

Regards
Kannan.



Re: org.apache.phoenix.join.MaxServerCacheSizeExceededException

2016-02-10 Thread Nanda
I have already overridden the property, but it still takes the default value,
because of which none of my joins are working.

TIA
NANDA


OutOfOrderScannerNextException with phoenix 4.6-HBase-1.0-cdh5.5

2016-02-10 Thread Alex Kamil
I'm getting the below exception from a SELECT DISTINCT query over a tenant-specific
connection with phoenix 4.6-HBase-1.0-cdh5.5.

The exception disappears if I either switch to a non-tenant connection, or
remove DISTINCT from the query.

Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0;
request=scanner_id: 2326 number_of_rows: 100 close_scanner: false
next_call_seq: 0 client_handles_partials: true
client_handles_heartbeats: true

I'm using phoenix 4.6 for cloudera cdh5.5.1 community edition:
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.6-HBase-1.0-cdh5.5


Below are the test case, error log and hbase-site.xml settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

public class Test {
    public static void main(String[] args) {
        Connection conn = null;
        // Tenant id passed to Phoenix via the "TenantId" connection property
        String tenant = "SYSTEMTENANT";
        String url = "my.ip";
        System.out.println("trying to initialize tenant-specific connection to hbaseUrl=" + url + " for tenant=" + tenant);
        Properties connProps = new Properties();
        connProps.setProperty("TenantId", tenant);
        String query = "SELECT DISTINCT ROWKEY,VS FROM TABLE1 ORDER BY VS DESC";
        try {
            conn = DriverManager.getConnection("jdbc:phoenix:" + url, connProps);
            Statement st = conn.createStatement();
            ResultSet resultSet = st.executeQuery(query);
            while (resultSet.next()) {
                String rowKey = resultSet.getString(1);
                String versionSerial = resultSet.getString(2);
                System.out.println("rowkey=" + rowKey + ", versionserial=" + versionSerial);
            }
        } catch (SQLException e) {
            // logger.error(e);
            e.printStackTrace();
        }
    }
}
Stack trace:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry
of OutOfOrderScannerNextException: was there a rpc timeout?
   at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
   at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:558)
   at org.apache.phoenix.iterate.MergeSortResultIterator.getIterators(MergeSortResultIterator.java:48)
   at org.apache.phoenix.iterate.MergeSortResultIterator.minIterator(MergeSortResultIterator.java:84)
   at org.apache.phoenix.iterate.MergeSortResultIterator.next(MergeSortResultIterator.java:111)
   at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
   at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:771)
   at Test.main(Test.java:26)
Caused by: java.util.concurrent.ExecutionException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry
of OutOfOrderScannerNextException: was there a rpc timeout?
   at java.util.concurrent.FutureTask.report(Unknown Source)
   at java.util.concurrent.FutureTask.get(Unknown Source)
   at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:554)
   ... 6 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed
after retry of OutOfOrderScannerNextException: was there a rpc timeout?
   at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
   at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:61)
   at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:107)
   at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:125)
   at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:83)
   at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:62)
   at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:78)
   at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:109)
   at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:100)
   at java.util.concurrent.FutureTask.run(Unknown Source)
   at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  

Re: Spark Phoenix Plugin

2016-02-10 Thread Benjamin Kim
Hi Pierre,

I am getting this error now.

Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.: 
org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;

I even tried to use sqlline.py to do some queries too. It resulted in the same 
error. I followed the installation instructions. Is there something missing?

Thanks,
Ben


> On Feb 9, 2016, at 10:20 AM, Ravi Kiran  wrote:
> 
> Hi Pierre,
> 
>   Try your luck for building the artifacts from
> https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it
> helps.
> 
> Regards
> Ravi .
> 
> On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim  > wrote:
> Hi Pierre,
> 
> I found this article about how Cloudera’s version of HBase is very different 
> from Apache HBase, so Phoenix must be compiled against Cloudera’s repo and versions. 
> But I’m not having any success with it.
> 
> http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo
>  
> 
> 
> There’s also a Chinese site that does the same thing.
> 
> https://www.zybuluo.com/xtccc/note/205739 
> 
> 
> I keep getting errors like the ones below.
> 
> [ERROR] 
> /opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
>  cannot find symbol
> [ERROR] symbol:   class Region
> [ERROR] location: class org.apache.hadoop.hbase.regionserver.LocalIndexMerger
> …
> 
> Have you tried this also?
> 
> As a last resort, we will have to abandon Cloudera’s HBase for Apache’s HBase.
> 
> Thanks,
> Ben
> 
> 
>> On Feb 8, 2016, at 11:04 PM, pierre lacave > > wrote:
>> 
>> Haven't met that one.
>> 
>> According to SPARK-1867, the real issue is hidden.
>> 
>> I'd proceed by elimination, maybe try in local[*] mode first.
>> 
>> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867 
>> 
>> On Tue, 9 Feb 2016, 04:58 Benjamin Kim > > wrote:
>> Pierre,
>> 
>> I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But, now, I 
>> get this error:
>> 
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
>> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 
>> (TID 3, prod-dc1-datanode151.pdc1i.gradientx.com): 
>> java.lang.IllegalStateException: unread block data
>> 
>> It happens when I do:
>> 
>> df.show()
>> 
>> Getting closer…
>> 
>> Thanks,
>> Ben
>> 
>> 
>> 
>>> On Feb 8, 2016, at 2:57 PM, pierre lacave >> > wrote:
>>> 
>>> This is the wrong client jar; try the one named 
>>> phoenix-4.7.0-HBase-1.1-client-spark.jar
>>> 
>>> 
>>> On Mon, 8 Feb 2016, 22:29 Benjamin Kim >> > wrote:
>>> Hi Josh,
>>> 
>>> I tried again by putting the settings within spark-defaults.conf.
>>> 
>>> spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>> spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>> 
>>> I still get the same error using the code below.
>>> 
>>> import org.apache.phoenix.spark._
>>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
>>> "TEST.MY_TEST", "zkUrl" -> “zk1,zk2,zk3:2181"))
>>> 
>>> Can you tell me what else you’re doing?
>>> 
>>> Thanks,
>>> Ben
>>> 
>>> 
 On Feb 8, 2016, at 1:44 PM, Josh Mahonin > wrote:
 
 Hi Ben,
 
 I'm not sure about the format of those command line options you're 
 passing. I've had success with spark-shell just by setting the 
 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options 
 on the spark config, as per the docs [1].
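
 For example, something along these lines (the jar path is just a placeholder
 for wherever your Phoenix client jar actually lives):

 spark-shell \
   --conf spark.driver.extraClassPath=/path/to/phoenix-client-spark.jar \
   --conf spark.executor.extraClassPath=/path/to/phoenix-client-spark.jar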
 
 I'm not sure if there's anything special needed for CDH or not though. I 
 also have a docker image I've been toying with which has a working 
 Spark/Phoenix setup using the Phoenix 4.7.0 RC and Spark 1.6.0. It might 
 be a useful reference for you as well [2].
 
 Good luck,
 
 Josh
 
 [1] https://phoenix.apache.org/phoenix_spark.html 
 
 [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark 
 
 
 On Mon, Feb 8, 2016 at 4:29 PM, Benjamin Kim 

Cannot support join operations in scans with limit

2016-02-10 Thread Alex Kamil
We're getting the below exception in sqlline with phoenix 4.6 (community edition)
and hbase1.0_cdh5.5. Any ideas?


0: jdbc:phoenix:localhost> select * from table2 B join (select id from
table1) A on A.id=B.otherid;
java.lang.RuntimeException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry of
OutOfOrderScannerNextException: was there a rpc timeout?
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
at sqlline.SqlLine.print(SqlLine.java:1653)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)


In the regionserver log:


2016-02-10 22:39:13,579 DEBUG org.apache.hadoop.hbase.ipc.RpcServer:
B.defaultRpcServer.handler=0,queue=0,port=60020: callId: 2858 service:
ClientService methodName: Scan size: 24 connection: 172.17.66.31:63811

java.io.IOException: Cannot support join operations in scans with limit

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2068)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)

at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)

at
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)

at java.lang.Thread.run(Thread.java:744)

Caused by: java.lang.UnsupportedOperationException: Cannot support join
operations in scans with limit

at
org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:120)

at
org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:270)

at
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)

at
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2292)

at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)


To Reproduce:

CREATE TABLE table1( id bigint not null, message varchar CONSTRAINT pk
PRIMARY KEY (id) );
CREATE TABLE table2( id bigint not null, message varchar, otherid bigint
CONSTRAINT pk PRIMARY KEY (id) );
upsert into table1(id,message) values(123,'m1');
upsert into table1(id,message) values(456,'m2');
upsert into table2(id,message,otherid) values(789,'m1',123);
0: jdbc:phoenix:localhost> select * from table2 B join (select id from
table1) A on A.id=B.otherid;
java.lang.RuntimeException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: Failed after retry of
OutOfOrderScannerNextException: was there a rpc timeout?
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
at sqlline.SqlLine.print(SqlLine.java:1653)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost>


How to set the logging level of Apache Phoenix

2016-02-10 Thread Simon Lee
My application runs on a Tomcat server. The phoenix-client-4.4.0-HBase-1.1.jar is 
deployed to the $CATALINA_HOME/lib folder so the application can access HBase 
tables via Phoenix. Tomcat's catalina.out shows a lot of Phoenix debug messages. 
How do I change Phoenix's log level from DEBUG to INFO?


9:52:53.585 [phoenix-1-thread-0] DEBUG o.a.p.iterate.BaseResultIterators - 
Guideposts: ]

19:52:53.589 [phoenix-1-thread-0] DEBUG o.a.p.iterate.BaseResultIterators - 
Getting iterators for ResultIterators 
[name=PARALLEL,id=846567fb-66b2-4eac-9de5-4e55b8e31c7e,scans=[[{"timeRange":[0,1455069173375],"batch":-1,"startRow":"204-MBB-097\\x80\\x00\\x00e\\x80\\x00\\x01R\\xA5Iu\\xE9","stopRow":"204-MBB-097\\x80\\x00\\x00e\\x80\\x00\\x01R\\xBF\\x09A\\xE9","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":-1,"maxVersions":1,"filter":"FilterList
 AND (2/2): [FirstKeyOnlyFilter, ]","caching":2147483647}]]]

19:52:53.706 [phoenix-1-thread-1] DEBUG o.a.p.iterate.ParallelIterators - Id: 
846567fb-66b2-4eac-9de5-4e55b8e31c7e, Time: 114ms, Scan: 
{"timeRange":[0,1455069173375],"batch":-1,"startRow":"204-MBB-097\\x80\\x00\\x00e\\x80\\x00\\x01R\\xA5Iu\\xE9","stopRow":"204-MBB-097\\x80\\x00\\x00e\\x80\\x00\\x01R\\xBF\\x09A\\xE9","loadColumnFamiliesOnDemand":null,"totalColumns":1,"cacheBlocks":true,"families":{"0":["ALL"]},"maxResultSize":2097152,"maxVersions":1,"filter":"FilterList
 AND (2/2): [FirstKeyOnlyFilter, ]","caching":2147483647}

...


19:52:53.718 [http-bio-8080-exec-2] DEBUG o.a.phoenix.jdbc.PhoenixStatement - 
Explain plan:

19:52:53.904 [http-bio-8080-exec-2] DEBUG o.a.phoenix.jdbc.PhoenixStatement - 
Execute query:   SELECT ...
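
Presumably an explicit logger override for org.apache.phoenix is what is needed,
but I am not sure which logging config the webapp actually picks up. A sketch of
what I mean, assuming an SLF4J-to-Log4j binding (log4j.properties):

# raise all Phoenix loggers from DEBUG to INFO
log4j.logger.org.apache.phoenix=INFO

If the binding is Logback instead, I assume the equivalent would be a
<logger name="org.apache.phoenix" level="INFO"/> entry in logback.xml.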


org.apache.phoenix.join.MaxServerCacheSizeExceededException

2016-02-10 Thread Nanda
Hi ,

I am using HDP 2.3.0 with Phoenix 4.4 and I quite often get the below
exception:

Caused by: java.sql.SQLException: Encountered exception in sub plan [0]
execution.
at
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:156)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1223)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
com.brocade.nva.dataaccess.AbstractDAO.getResultSet(AbstractDAO.java:388)
~[nvp-data-access-1.0-SNAPSHOT.jar:na]
at
com.brocade.nva.dataaccess.HistoryDAO.getSummaryTOP10ReportDetails(HistoryDAO.java:306)
~[nvp-data-access-1.0-SNAPSHOT.jar:na]
... 75 common frames omitted
Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
Size of hash cache (104857651 bytes) exceeds the maximum allowed size
(104857600 bytes)
at
org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:109)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:82)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:338)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:135)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
~[na:1.8.0_40]
at
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:172)
~[phoenix-core-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_40]


Below are the params I am using:

server side properties:
phoenix.coprocessor.maxServerCacheTimeToLiveMs=18
phoenix.groupby.maxCacheSize=1572864000
phoenix.query.maxGlobalMemoryPercentage=60
phoenix.query.maxGlobalMemorySize=409600
phoenix.stats.guidepost.width=524288000


client side properties are:
hbase.client.scanner.timeout.period=18
phoenix.query.spoolThresholdBytes=1048576000
phoenix.query.timeoutMs=18
phoenix.query.threadPoolSize=240
phoenix.query.maxGlobalMemoryPercentage=60
phoenix.query.maxServerCacheBytes=1048576810


and my hbase heap is set to 4GB

Is there some property I need to set explicitly for this?

Thanks,
Nanda