RE: error if result size is over some limit

2016-11-23 Thread Bulvik, Noam
It is hard to quantify the size of the data that is causing the issue.
The change in criteria was done to retrieve fewer rows.

-Original Message-
From: Josh Elser [mailto:josh.el...@gmail.com]
Sent: Wednesday, November 23, 2016 7:07 PM
To: user@phoenix.apache.org
Subject: Re: error if result size is over some limit

Hi Noam,

Can you quantify the query you run that shows this error? Also, when you change 
the criteria to retrieve less data, do you mean that you're fetching fewer rows?

Bulvik, Noam wrote:
> I am using Phoenix 4.5.2 and in my table the data is in an ARRAY.
>
> When I issue a query, sometimes the query gets the error below. If I
> change the criteria for the query to retrieve less data, then I get
> results without problems, so it is not corrupted data.
>
> When I set a LIMIT in the query it does not help; only changing the
> criteria to limit the data does.
>
> Any idea which parameter I need to change in order to prevent this
> from happening?
>
> Thanks.
>
> org.apache.phoenix.exception.PhoenixIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> CEM_CDR_SMS,83689\x002926453\x00Undefined\x00\x80\x00\x01W=\xB9\x15\x08\x00\x00\x00\x00SMSTP_DELIVER_WRONG_TRANSIT,1479819771390.5720714e04cf18b12c7eaf09b44ed145.:
> null
>
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:327)
>
> at org.apache.phoenix.iterate.RegionScannerResultIterator.next(RegionScannerResultIterator.java:50)
>
> at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:240)
>
> at org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:193)
>
> at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:239)
>
> at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:220)
>
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
>
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1308)
>
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1663)
>
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)
>
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1702)
>
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1303)
>
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2124)
>
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
>
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
>
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.lang.ArrayIndexOutOfBoundsException
>
> at org.apache.phoenix.schema.KeyValueSchema.writeVarLengthField(KeyValueSchema.java:152)
>
> at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:118)
>
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.replaceArrayIndexElement(BaseScannerRegionObserver.java:386)
>
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:310)
>
> ... 18 more [SQL State=08000, DB Errorcode=101]
>
>





Re: spark 2.0.2 connect phoenix query server error

2016-11-23 Thread Divya Gehlot
Can you try with the driver below?
"driver" -> "org.apache.phoenix.jdbc.PhoenixDriver",


Thanks,
Divya
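
As a minimal sketch of that suggestion (Java against Spark 2.x; the ZooKeeper
quorum "zk-host:2181" and table IMOS below are placeholders, not taken from
this thread), the thick driver goes with a jdbc:phoenix: URL pointing at
ZooKeeper rather than at the query server:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixJdbcReadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("phoenix-jdbc-sketch")
                .getOrCreate();

        // Thick driver: connects straight to HBase/ZooKeeper instead of the
        // Phoenix Query Server, so the URL uses the jdbc:phoenix: prefix.
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
                .option("url", "jdbc:phoenix:zk-host:2181")   // hypothetical quorum
                .option("dbtable", "IMOS")
                .load();

        df.show();
    }
}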

On 22 November 2016 at 11:14, Dequn Zhang  wrote:

> Hello, since Spark 2.x cannot use the Phoenix Spark interpreter to load data,
> I want to use JDBC. But when I try to get a *thin connection* I get the
> following error, while a *direct connection* works fine. I ran it in
> spark-shell, Scala 2.11.8, so can anyone suggest a solution?
>
>Phoenix : 4.8.1-HBase-1.2
>
> scala>
>> val jdbcDf = spark.read
>> .format("jdbc")
>> .option("driver","org.apache.phoenix.queryserver.client.Driver")
>> .option("url","jdbc:phoenix:thin:url=http://192.168.6.131:8765;serialization=PROTOBUF")
>> .option("dbtable","imos")
>> .load()
>>
>> java.sql.SQLException: While closing connection
>>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>>   at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:167)
>>   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
>>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
>>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:345)
>>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
>>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
>>   ... 53 elided
>> Caused by: java.lang.RuntimeException: response code 500
>>   at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
>>   at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:227)
>>   at org.apache.calcite.avatica.remote.RemoteMeta.closeConnection(RemoteMeta.java:78)
>>   at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:153)
>>   ... 59 more
>
>


Re: spark 2.0.2 connect phoenix query server error

2016-11-23 Thread Dequn Zhang
Thank you for your reply. I could not find where Spark or Phoenix saves
the log, and I also tried to change the log4j level in
$SPARK_HOME/conf/log4j.properties and $PHOENIX_HOME/bin/config, but
unfortunately that didn't work either. Can you help me further on how to find
the missing stacktrace info?

2016-11-24 1:03 GMT+08:00 Josh Elser :

> Hi Dequn,
>
> There should be more to this stacktrace than you provided as the actual
> cause is not included. Can you please include the entire stacktrace? If you
> are not seeing this client-side, please check the Phoenix Query Server log
> file to see if there is more there.
>
> Dequn Zhang wrote:
>
>> Hello, since Spark 2.x cannot use the Phoenix Spark interpreter to load
>> data, I want to use JDBC. But when I try to get a *thin connection*,
>> I get the following error, while a *direct connection* works fine.
>>
>> I ran it in spark-shell, Scala 2.11.8, so can anyone suggest a solution?
>>
>> Phoenix : 4.8.1-HBase-1.2
>>
>> scala>
>> val jdbcDf = spark.read
>> .format("jdbc")
>> .option("driver","org.apache.phoenix.queryserver.client.Driver")
>> .option("url","jdbc:phoenix:thin:url=http://192.168.6.131:8765;serialization=PROTOBUF")
>> .option("dbtable","imos")
>> .load()
>>
>> java.sql.SQLException: While closing connection
>>    at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>>    at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>>    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:167)
>>    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
>>    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
>>    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:345)
>>    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
>>    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
>>    ... 53 elided
>> Caused by: java.lang.RuntimeException: response code 500
>>    at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
>>    at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:227)
>>    at org.apache.calcite.avatica.remote.RemoteMeta.closeConnection(RemoteMeta.java:78)
>>    at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:153)
>>    ... 59 more
>>
>>


Re: phoenix-Hbase-client jar web application issue

2016-11-23 Thread pradeep arumalla
Thanks Josh, looking into it.

On Wed, Nov 23, 2016 at 1:48 PM, Josh Elser  wrote:

> Hi Pradeep,
>
> No, this is one you will likely have to work around on your own by
> building a custom Phoenix client jar that does not include the
> javax-servlet classes. They are getting transitively pulled into Phoenix
> via Hadoop (IIRC). If your web application already has the classes present,
> you can build your own client jar that excludes dependencies that you
> already have bundled.
>
> Take a look at [1] for an example of how we build the phoenix-client jar.
> You can easily add some exclusions to the maven-shade-plugin.
>
> - Josh
>
> [1] https://github.com/apache/phoenix/blob/v4.8.1-HBase-1.2/phoe
> nix-client/pom.xml
>
> pradeep arumalla wrote:
>
>> Hello group, when I try to connect to HBase using
>> *phoenix-4.8.1-HBase-1.2-client.jar* in my web application on Tomcat,
>> I see the jar is getting rejected because the javax/servlet/Servlet.class
>> package is present in the jar. Is there another jar to use? Please advise.
>>
>> INFO:
>> validateJarFile(/Users/bill/Downloads/apache-tomcat-7.0.72/wtpwebapps/DataVisualization/WEB-INF/lib/*phoenix-4.8.1-HBase-1.2-client.jar*)
>> - jar not loaded. See Servlet Spec 3.0, section 10.7.2. Offending class:
>> javax/servlet/Servlet.class
>>
>> Nov 23, 2016 10:46:41 AM
>> org.apache.catalina.loader.WebappClassLoaderBase validateJarFile
>>
>>
>> thanks
>>
>> Pradeep
>>
>>


-- 
Thanks
Pradeep


Re: Phoenix LIKE operator with bind parameters question/error

2016-11-23 Thread Bartłomiej Niemienionek
Hi

Any other thoughts or ideas to get the LIKE operator working with bind parameters?

Regards
Bartek
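
One way to narrow this down (a sketch only, using the thick Phoenix driver and
a placeholder connection URL, neither of which comes from this thread) is to
bind the LIKE parameter through a plain JDBC PreparedStatement; if that works,
the problem is more likely in the Avatica/DBeaver layer than in LIKE itself:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LikeBindSketch {
    public static void main(String[] args) throws Exception {
        // jdbc:phoenix:zk-host:2181 is a placeholder for your ZooKeeper quorum
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT 1 FROM TEST_TABLE WHERE NAME LIKE ?")) {
            ps.setString(1, "abc%");  // the wildcard goes into the bound value
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}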

On 18.11.2016 at 22:20, "Bartłomiej Niemienionek" 
wrote:

> Thank you for your answer, but it seems that your solution is not working
> in my case.
> As you can see in my example, :abc also works fine when using "=", but it
> does not work with the LIKE operator.
> It seems like parameters passed to the LIKE operator are treated in a
> slightly different way.
>
> I am using DBeaver to test my queries, and whether I use the (:1) notation
> or a parameter name (:abc), the query is translated to use the "?" sign
> for bind parameters.
>
>
> Regards,
> Bartek
>
> 2016-11-18 17:09 GMT+01:00 James Taylor :
>
>> Use : instead, like this :1
>>
>> Thanks,
>> James
>>
>> On Fri, Nov 18, 2016 at 5:28 AM Bartłomiej Niemienionek <
>> b.niemienio...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> I am trying to use Phoenix and I am facing some problems with the LIKE
>>> operator when it is used in a prepared statement with bind parameters.
>>>
>>> I don't know if this is some kind of known issue.
>>>
>>>
>>>
>>> CREATE TABLE TEST_TABLE (
>>>NAME VARCHAR(100) NOT NULL PRIMARY KEY,
>>>VAL VARCHAR(100)
>>> );
>>>
>>> These are working fine:
>>>
>>> SELECT 1 FROM TEST_TABLE WHERE NAME = :abc;
>>> SELECT 1 FROM TEST_TABLE WHERE NAME = 'abc';
>>> SELECT 1 FROM TEST_TABLE WHERE NAME LIKE 'abc';
>>>
>>> Here I get an error:
>>>
>>> SELECT 1 FROM TEST_TABLE WHERE NAME LIKE :abc;
>>>
>>> SQL Error [0]: Error -1 (0) : while preparing SQL: SELECT 1 FROM
>>> TEST_TABLE WHERE NAME LIKE ?
>>>
>>>   org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) :
>>> while preparing SQL: SELECT 1 FROM TEST_TABLE WHERE NAME LIKE ?
>>>
>>>
>>>
>>> I am using the Phoenix driver, version 4.9.0.
>>>
>>>
>>>
>>> Regards,
>>>
>>> bjn
>>>
>>
>


Re: phoenix-Hbase-client jar web application issue

2016-11-23 Thread Josh Elser

Hi Pradeep,

No, this is one you will likely have to work around on your own by 
building a custom Phoenix client jar that does not include the 
javax-servlet classes. They are getting transitively pulled into Phoenix 
via Hadoop (IIRC). If your web application already has the classes 
present, you can build your own client jar that excludes dependencies 
that you already have bundled.


Take a look at [1] for an example of how we build the phoenix-client 
jar. You can easily add some exclusions to the maven-shade-plugin.


- Josh

[1] 
https://github.com/apache/phoenix/blob/v4.8.1-HBase-1.2/phoenix-client/pom.xml


pradeep arumalla wrote:

Hello group, when I try to connect to HBase using
*phoenix-4.8.1-HBase-1.2-client.jar* in my web application on Tomcat,
I see the jar is getting rejected because the javax/servlet/Servlet.class
package is present in the jar. Is there another jar to use? Please advise.

INFO:
validateJarFile(/Users/bill/Downloads/apache-tomcat-7.0.72/wtpwebapps/DataVisualization/WEB-INF/lib/*phoenix-4.8.1-HBase-1.2-client.jar*)
- jar not loaded. See Servlet Spec 3.0, section 10.7.2. Offending class:
javax/servlet/Servlet.class

Nov 23, 2016 10:46:41 AM
org.apache.catalina.loader.WebappClassLoaderBase validateJarFile


thanks

Pradeep



RE: Phoenix-Spark plug in cannot select by column family name

2016-11-23 Thread Long, Xindian
Thanks, I just filed a Jira Issue

https://issues.apache.org/jira/browse/PHOENIX-3506

Xindian
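
For reference, a minimal sketch of the workaround discussed further down in
this thread: select the columns without the family prefix, which relies on CI
and FA being unique across families. The zkUrl value is a placeholder, and the
Spark 1.x API style mirrors the quoted snippet below:

import java.util.HashMap;
import java.util.Map;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class PhoenixSparkSelectSketch {
    public static void selectWithoutFamilyPrefix(JavaSparkContext sc, String zkUrl) {
        Map<String, String> options = new HashMap<String, String>();
        options.put("zkUrl", zkUrl);                 // e.g. "zk-host:2181" (placeholder)
        options.put("table", "TESTING.ENDPOINTS");

        SQLContext sqlContext = new SQLContext(sc);
        DataFrame df = sqlContext.read()
                .format("org.apache.phoenix.spark")
                .options(options)
                .load();

        // Works only while CI and FA exist in exactly one column family each;
        // referencing them as "I.CI" / "I.FA" is what PHOENIX-3506 asks for.
        df.select("CI", "FA").show();
    }
}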


From: James Taylor [mailto:jamestay...@apache.org]
Sent: Thursday, November 10, 2016 3:08 PM
To: user
Subject: Re: Phoenix-Spark plug in cannot select by column family name

Please file a JIRA, though, Xindian. It's a reasonable request to add the 
ability to prefix column references with the column family name just like you 
can do in JDBC.

On Thu, Nov 10, 2016 at 12:05 PM, Chris Tarnas 
> wrote:
From my experience you will need to make sure that the column names are unique, 
even across families, otherwise Spark will throw errors.

Chris Tarnas
Biotique Systems, Inc
c...@biotiquesystems.com

On Nov 10, 2016, at 10:14 AM, Long, Xindian 
> wrote:

It works with no column family prefix, but I expect that I should not need to
make sure column names are unique across different column families.

Xindian


From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, November 08, 2016 5:46 PM
To: user
Subject: Re: Phoenix-Spark plug in cannot select by column family name

Have you tried without the column family name? Unless the column names are not 
unique across all column families, you don't need to include the column family 
name.

Thanks,
James

On Tue, Nov 8, 2016 at 2:19 PM, Long, Xindian 
> wrote:
I have a table with multiple column families that may have the same column names.
I want to use the phoenix-spark plugin to select some of the fields, but it
returns an AnalysisException (details in the attached file).

public void testSpark(JavaSparkContext sc, String tableStr, String dataSrcUrl) {
    //SparkContextBuilder.buildSparkContext("Simple Application", "local");

    // One JVM can only have one Spark Context now
    Map<String, String> options = new HashMap<String, String>();
    SQLContext sqlContext = new SQLContext(sc);

    options.put("zkUrl", dataSrcUrl);
    options.put("table", tableStr);
    log.info("Phoenix DB URL: " + dataSrcUrl + " tableStr: " + tableStr);

    DataFrame df = null;
    try {
        df = sqlContext.read().format("org.apache.phoenix.spark").options(options).load();
        df.explain(true);
        df.show();

        df = df.select("I.CI", "I.FA");

        //df = df.select("\"I\".\"CI\"", "\"I\".\"FA\""); // This gives the same exception too

    } catch (Exception ex) {
        log.error("sql error: ", ex);
    }

    try {
        log.info("Count By phoenix spark plugin: " + df.count());
    } catch (Exception ex) {
        log.error("dataframe error: ", ex);
    }
}


I can see in the log that there is something like

10728 [INFO] main  org.apache.phoenix.mapreduce.PhoenixInputFormat  - Select Statement: SELECT
"RID","I"."CI","I"."FA","I"."FPR","I"."FPT","I"."FR","I"."LAT","I"."LNG","I"."NCG","I"."NGPD","I"."VE","I"."VMJ","I"."VMR","I"."VP","I"."CSRE","I"."VIB","I"."IIICS","I"."LICSCD","I"."LEDC","I"."ARM","I"."FBM","I"."FTB","I"."NA2FR","I"."NA2PT","S"."AHDM","S"."ARTJ","S"."ATBM","S"."ATBMR","S"."ATBR","S"."ATBRR","S"."CS","S"."LAMT","S"."LTFCT","S"."LBMT","S"."LDTI","S"."LMT","S"."LMTN","S"."LMTR","S"."LPET","S"."LPORET","S"."LRMT","S"."LRMTP","S"."LRMTR","S"."LSRT","S"."LSST","S"."MHDMS0","S"."MHDMS1","S"."RFD","S"."RRN","S"."RRR","S"."TD","S"."TSM","S"."TC","S"."TPM","S"."LRMCT","S"."SS13FSK34","S"."LERMT","S"."LEMDMT","S"."AGTBRE","S"."SRM","S"."LTET","S"."TPMS","S"."TPMSM","S"."TM","S"."TMF","S"."TMFM","S"."NA2TLS","S"."NA2IT","S"."CWR","S"."BPR","S"."LR","S"."HLB","S"."NA2UFTBFR","S"."DT","S"."NA28ARE","S"."RM","S"."LMTB","S"."LRMTB","S"."RRB","P"."BADUC","P"."UAN","P"."BAPS","P"."BAS","P"."UAS","P"."BATBBR","P"."BBRI","P"."BLBR","P"."ULHT","P"."BLPST","P"."BLPT","P"."UTI","P"."UUC"
 FROM TESTING.ENDPOINTS

But obviously, the column family is left out of the DataFrame column name
somewhere in the process.
Is there a fix for this problem?

Thanks

Xindian






phoenix-Hbase-client jar web application issue

2016-11-23 Thread pradeep arumalla
Hello group, when I try to connect to HBase using
*phoenix-4.8.1-HBase-1.2-client.jar* in my web application on Tomcat,
I see the jar is getting rejected because the javax/servlet/Servlet.class
package is present in the jar. Is there another jar to use? Please advise.

INFO:
validateJarFile(/Users/bill/Downloads/apache-tomcat-7.0.72/wtpwebapps/DataVisualization/WEB-INF/lib/*phoenix-4.8.1-HBase-1.2-client.jar*)
- jar not loaded. See Servlet Spec 3.0, section 10.7.2. Offending class:
javax/servlet/Servlet.class

Nov 23, 2016 10:46:41 AM org.apache.catalina.loader.WebappClassLoaderBase
validateJarFile


thanks

Pradeep


Re: error if result size is over some limit

2016-11-23 Thread Josh Elser

Hi Noam,

Can you quantify the query you run that shows this error? Also, when you 
change the criteria to retrieve less data, do you mean that you're 
fetching fewer rows?


Bulvik, Noam wrote:

I am using Phoenix 4.5.2 and in my table the data is in an ARRAY.

When I issue a query, sometimes the query gets the error below. If I
change the criteria for the query to retrieve less data, then I get
results without problems, so it is not corrupted data.

When I set a LIMIT in the query it does not help; only changing the
criteria to limit the data does.

Any idea which parameter I need to change in order to prevent this from
happening?

Thanks.

org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException:
CEM_CDR_SMS,83689\x002926453\x00Undefined\x00\x80\x00\x01W=\xB9\x15\x08\x00\x00\x00\x00SMSTP_DELIVER_WRONG_TRANSIT,1479819771390.5720714e04cf18b12c7eaf09b44ed145.:
null

at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)

at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)

at
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:327)

at
org.apache.phoenix.iterate.RegionScannerResultIterator.next(RegionScannerResultIterator.java:50)

at
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:240)

at
org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:193)

at
org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:239)

at
org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:220)

at
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1308)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1663)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1702)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1303)

at
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2124)

at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)

at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)

at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.ArrayIndexOutOfBoundsException

at
org.apache.phoenix.schema.KeyValueSchema.writeVarLengthField(KeyValueSchema.java:152)

at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:118)

at
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.replaceArrayIndexElement(BaseScannerRegionObserver.java:386)

at
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:310)

... 18 more [SQL State=08000, DB Errorcode=101]






Re: Salting an secondary index

2016-11-23 Thread Josh Elser
IIRC, the SALT_BUCKETS configuration from the data table is implicitly
applied to any index tables you create from that data table.
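
A minimal sketch of that setup (Java/JDBC; the table, index, and connection
URL below are placeholders, not taken from this thread):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SaltedIndexSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Salted data table
            stmt.execute("CREATE TABLE IF NOT EXISTS EVENTS ("
                    + " ID VARCHAR NOT NULL PRIMARY KEY,"
                    + " VAL VARCHAR) SALT_BUCKETS = 8");
            // Global secondary index; per the note above, it picks up the data
            // table's SALT_BUCKETS configuration implicitly.
            stmt.execute("CREATE INDEX IF NOT EXISTS EVENTS_VAL_IDX ON EVENTS (VAL)");
        }
    }
}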


Pradheep Shanmugam wrote:

Hi,

I have an HBase table created using Phoenix which is salted.
Since the queries on the table required a secondary index, I created an
index using Phoenix.
Can this index also be salted, which may place the rows randomly on
different region servers (RSs)?
Even if the index is not salted, will the index be useful when we salt
the actual table?
Please advise.

Thanks,
Pradheep


Re: spark 2.0.2 connect phoenix query server error

2016-11-23 Thread Josh Elser

Hi Dequn,

There should be more to this stacktrace than you provided as the actual 
cause is not included. Can you please include the entire stacktrace? If 
you are not seeing this client-side, please check the Phoenix Query 
Server log file to see if there is more there.


Dequn Zhang wrote:

Hello, since Spark 2.x cannot use the Phoenix Spark interpreter to load
data, I want to use JDBC. But when I try to get a *thin connection*,
I get the following error, while a *direct connection* works fine.
I ran it in spark-shell, Scala 2.11.8, so can anyone suggest a solution?

Phoenix : 4.8.1-HBase-1.2

scala>
val jdbcDf = spark.read
.format("jdbc")
.option("driver","org.apache.phoenix.queryserver.client.Driver")
.option("url","jdbc:phoenix:thin:url=http://192.168.6.131:8765;serialization=PROTOBUF")
.option("dbtable","imos")
.load()

java.sql.SQLException: While closing connection
   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
   at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:167)
   at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:345)
   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
   ... 53 elided
Caused by: java.lang.RuntimeException: response code 500
   at org.apache.calcite.avatica.remote.RemoteService.apply(RemoteService.java:45)
   at org.apache.calcite.avatica.remote.JsonService.apply(JsonService.java:227)
   at org.apache.calcite.avatica.remote.RemoteMeta.closeConnection(RemoteMeta.java:78)
   at org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:153)
   ... 59 more



Salting an secondary index

2016-11-23 Thread Pradheep Shanmugam
Hi,

I have an HBase table created using Phoenix which is salted.
Since the queries on the table required a secondary index, I created an
index using Phoenix.
Can this index also be salted, which may place the rows randomly on
different region servers (RSs)?
Even if the index is not salted, will the index be useful when we salt
the actual table?
Please advise.

Thanks,
Pradheep