phoenix client for web service using playframework

2018-04-20 Thread Lian Jiang
Hi,

I am using HDP 2.6 HBase and Phoenix. I created a Play REST service using
HBase as the backend. However, I am having trouble getting a working Phoenix
client.

I tried the phoenix-client.jar shipped with HDP, but it has multiple conflicts
with Play, for example logback vs. log4j, guava 13 vs. guava 22, etc.

The logging exception is:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/lianjia/repo/hbaseWS/lib/phoenix-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/lianjia/.ivy2/cache/ch.qos.logback/logback-classic/jars/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

java.lang.ClassCastException: org.slf4j.impl.Log4jLoggerFactory cannot be cast to ch.qos.logback.classic.LoggerContext
at play.api.libs.logback.LogbackLoggerConfigurator.configure(LogbackLoggerConfigurator.scala:84)
at play.api.libs.logback.LogbackLoggerConfigurator.configure(LogbackLoggerConfigurator.scala:66)
at play.api.inject.guice.GuiceApplicationBuilder.$anonfun$configureLoggerFactory$1(GuiceApplicationBuilder.scala:122)
at scala.Option.map(Option.scala:146)
at play.api.inject.guice.GuiceApplicationBuilder.configureLoggerFactory(GuiceApplicationBuilder.scala:121)
at play.api.inject.guice.GuiceApplicationBuilder.applicationModule(GuiceApplicationBuilder.scala:100)
at play.api.inject.guice.GuiceBuilder.injector(GuiceInjectorBuilder.scala:185)
at play.api.inject.guice.GuiceApplicationBuilder.build(GuiceApplicationBuilder.scala:137)
at play.api.inject.guice.GuiceApplicationLoader.load(GuiceApplicationLoader.scala:21)
at play.core.server.ProdServerStart$.start(ProdServerStart.scala:51)
at play.core.server.ProdServerStart$.main(ProdServerStart.scala:25)
at play.core.server.ProdServerStart.main(ProdServerStart.scala)



Then I tried:

libraryDependencies += "org.apache.phoenix" % "phoenix-core" % "4.13.1-HBase-1.1"

libraryDependencies += "org.apache.phoenix" % "phoenix-server-client" % "4.7.0-HBase-1.1"

libraryDependencies += "org.apache.phoenix" % "phoenix-queryserver-client" % "4.13.1-HBase-1.1"

None of them worked: "No suitable driver found".

Any ideas would be highly appreciated!
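For what it's worth, one common way out of this kind of conflict (a sketch only; the exact conflicting module names depend on the HDP build) is to exclude the logging bindings from the Phoenix artifact in build.sbt, so that Play's logback remains the single SLF4J binding:

```scala
// build.sbt -- hypothetical sketch: strip the SLF4J/log4j bindings bundled
// with Phoenix so they don't clash with Play's logback
libraryDependencies += "org.apache.phoenix" % "phoenix-core" % "4.13.1-HBase-1.1" excludeAll(
  ExclusionRule(organization = "org.slf4j", name = "slf4j-log4j12"),
  ExclusionRule(organization = "log4j", name = "log4j")
)
```

The guava 13 vs. guava 22 clash usually cannot be solved by exclusion alone; a shaded/relocated Phoenix jar is typically needed for that.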


Re: Split and distribute regions of SYSTEM.STATS table

2018-04-20 Thread James Taylor
Thanks for bringing this to our attention. There's a bug here in that the
SYSTEM.STATS table has a custom split policy that prevents splitting from
occurring (PHOENIX-4700). We'll get a fix out in 4.14, but in the meantime
it's safe to split the table, as long as all stats for a given table are on
the same region.
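For reference, a manual split can be issued from the HBase shell. A sketch (the split point below is a placeholder; choose one at a table-name boundary so all stats rows for a given table stay in one region):

```
hbase(main):001:0> split 'SYSTEM.STATS', 'MY_SCHEMA.MY_TABLE'
```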

James

On Fri, Apr 20, 2018 at 1:37 PM, James Taylor 
wrote:

> Thanks for bringing this to our attention. There's a bug here in that the
> SYSTEM.STATS
>
> On Wed, Apr 18, 2018 at 9:59 AM, Batyrshin Alexander <0x62...@gmail.com>
> wrote:
>
>>  Hello,
>> I've discovered that SYSTEM.STATS has only 1 region with size 3.25 GB. Is
>> it ok to split it and distribute over different region servers?
>
>
>



Re: hint to use a global index is not working - need to find out why

2018-04-20 Thread James Taylor
Ron - Salting is only recommended when your primary key is monotonically
increasing; it's mainly used to prevent write hotspotting. Also, I think Ron
forgot to mention, but I was working with him a bit earlier on this, and I
couldn't repro the issue either (in current 4.x or in the 4.7 release).
Here's the unit test I put together, which hints a non-covered global index:

    @Test
    public void testIndexHintWithNonCoveredColumnSelected() throws Exception {
        String schemaName = "";
        String dataTableName = "T_FOO";
        String indexTableName = "I_FOO";
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
        try (Connection conn = DriverManager.getConnection(getUrl())) {
            conn.createStatement().execute("CREATE TABLE " + dataTableFullName
                    + "(k INTEGER PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)");
            conn.createStatement().execute("CREATE INDEX " + indexTableName
                    + " ON " + dataTableName + "(v1)");
            PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
            QueryPlan plan;
            stmt.executeQuery("SELECT v1, v2 FROM " + dataTableFullName + " WHERE v1='a'");
            plan = stmt.getQueryPlan();
            assertEquals("Expected (" + dataTableFullName + ") but was "
                    + plan.getSourceRefs(), 1, plan.getSourceRefs().size());

            stmt.executeQuery("SELECT /*+ INDEX(" + dataTableFullName + " "
                    + indexTableName + ") */ v1, v2 FROM " + dataTableFullName
                    + " WHERE v1='a'");
            plan = stmt.getQueryPlan();
            assertEquals("Expected (" + dataTableFullName + "," + indexTableFullName
                    + ") but was " + plan.getSourceRefs(), 2, plan.getSourceRefs().size());
        }
    }



On Fri, Apr 20, 2018 at 1:11 PM, Taylor, Ronald (Ronald) <
ronald.tay...@cchmc.org> wrote:

> Hello Sergey,
>
>
>
> Per your request, here are the commands that I used to create the table
> and its indexes.  Hopefully you can find something in here that provides a
> guide as to what we are doing wrong.
>
>
>
> BTW – as I said, we are novices with Phoenix here. One thing we are doing
> is playing around a bit with salting numbers. We believed that the data in
> our test table was small enough to fit on one region server ( < 10 GB), so
> we used a high salt number (24) to try to force HBase to use more than one
> region server, to parallelize over more than one node. Did we get that
> concept right?
>
>
>
> Ron
>
>
>
> %
>
>
> CREATE TABLE variantjoin_RT_salted24 (
> chrom VARCHAR,
> genomic_range VARCHAR,
> reference VARCHAR,
> alternate VARCHAR,
> annotations VARCHAR,
> consequence VARCHAR,
> chrom_int INTEGER,
> onekg_maf DOUBLE,
> coding VARCHAR,
> esp_aa_maf DOUBLE,
> esp_ea_maf DOUBLE,
> exac_maf DOUBLE,
> filter VARCHAR,
> gene VARCHAR,
> impact VARCHAR,
> polyphen VARCHAR,
> sift VARCHAR,
> viva_maf DOUBLE,
> variant_id INTEGER PRIMARY KEY,
> genomic_range_start INTEGER,
> genomic_range_end INTEGER
> ) SALT_BUCKETS = 24, IMMUTABLE_ROWS=false;
>
>
>
>
>
> 0: jdbc:phoenix:> !describe variantjoin_RTsalted24
>
>
> 0: jdbc:phoenix:> !describe variantjoin_RTsalted24
> ++--+-+--+-+
> | TABLE_CAT  | TABLE_SCHEM  |   TABLE_NAME| COLUMN_NAME  | |
> ++--+-+--+-+
> ||  | VARIANTJOIN_RTSALTED24  | CHROM| |
> ||  | VARIANTJOIN_RTSALTED24  | GENOMIC_RANGE| |
> ||  | VARIANTJOIN_RTSALTED24  | REFERENCE| |
> ||  | VARIANTJOIN_RTSALTED24  | ALTERNATE| |
> ||  | VARIANTJOIN_RTSALTED24  | ANNOTATIONS  | |
> ||  | VARIANTJOIN_RTSALTED24  | CONSEQUENCE  | |
> ||  | VARIANTJOIN_RTSALTED24  | CHROM_INT| |
> ||  | VARIANTJOIN_RTSALTED24  | ONEKG_MAF| |
> ||  | VARIANTJOIN_RTSALTED24  | CODING   | |
> ||  | VARIANTJOIN_RTSALTED24  | ESP_AA_MAF   | |
> ||  | VARIANTJOIN_RTSALTED24  | ESP_EA_MAF   | |
> ||  | VARIANTJOIN_RTSALTED24  | EXAC_MAF | |
> ||  | VARIANTJOIN_RTSALTED24  | FILTER   | |
> ||  | VARIANTJOIN_RTSALTED24  | GENE | |
> ||  | VARIANTJOIN_RTSALTED24  | IMPACT   | |
> ||  | VARIANTJOIN_RTSALTED24  | POLYPHEN | |

Re: Bind Params with Union throw AvaticaSqlException

2018-04-20 Thread Lew Jackman
We have a bit more of a stack trace for our bind parameter exception. Not sure
if this is very revealing, but we have:

java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): Parameter value unbound. Parameter at index 1 is unbound
org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:681)
org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:707)
org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:208)
org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1195)
org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1166)
org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:124)
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.Server.handle(Server.java:499)
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HttpChannel.handle(HttpChannel.java:311)
org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HttpConnection.onFillable(HttpConnection.java:257)
org.apache.phoenix.shaded.org.eclipse.jetty.server.io.AbstractConnection$2.run(AbstractConnection.java:544)
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.io.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
java.lang.Thread.run(Thread.java:745)
  

-- Original Message --
From: "Lew Jackman" 
To: user@phoenix.apache.org
Subject: Re: Bind Params with Union throw AvaticaSqlException
Date: Fri, 13 Apr 2018 01:30:43 GMT

This is Phoenix 4.9, we are upgrading to 4.11 very shortly but are on 4.9 this 
week.

I'll try to get a better stack trace, thanks much



Re: hint to use a global index is not working - need to find out why

2018-04-20 Thread Taylor, Ronald (Ronald)
Hello Sergey,

Per your request, here are the commands that I used to create the table and its 
indexes.  Hopefully you can find something in here that provides a guide as to 
what we are doing wrong.

BTW – as I said, we are novices with Phoenix here. One thing we are doing is 
playing around a bit with salting numbers. We believed that the data in our 
test table was small enough to fit on one region server ( < 10 GB), so we used 
a high salt number (24) to try to force HBase to use more than one region 
server, to parallelize over more than one node. Did we get that concept right?

Ron

%

CREATE TABLE variantjoin_RT_salted24 (
chrom VARCHAR,
genomic_range VARCHAR,
reference VARCHAR,
alternate VARCHAR,
annotations VARCHAR,
consequence VARCHAR,
chrom_int INTEGER,
onekg_maf DOUBLE,
coding VARCHAR,
esp_aa_maf DOUBLE,
esp_ea_maf DOUBLE,
exac_maf DOUBLE,
filter VARCHAR,
gene VARCHAR,
impact VARCHAR,
polyphen VARCHAR,
sift VARCHAR,
viva_maf DOUBLE,
variant_id INTEGER PRIMARY KEY,
genomic_range_start INTEGER,
genomic_range_end INTEGER
) SALT_BUCKETS = 24, IMMUTABLE_ROWS=false;
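As an aside, the salting idea above can be sketched in a few lines (an illustration of the general technique only; Phoenix uses its own internal hash function, and the hash below is purely illustrative):

```java
// Sketch: a salt byte derived from the row key, modulo SALT_BUCKETS,
// is prepended to the key so consecutive writes spread across buckets
// (and therefore across regions), preventing write hotspotting.
public class SaltSketch {
    static final int SALT_BUCKETS = 24;

    // Illustrative hash; NOT the hash Phoenix actually uses.
    static int saltByte(byte[] rowKey) {
        int h = 0;
        for (byte b : rowKey) {
            h = 31 * h + b;
        }
        return Math.abs(h % SALT_BUCKETS);
    }

    public static void main(String[] args) {
        // Consecutive variant_id values land in different buckets.
        for (int id = 1; id <= 5; id++) {
            byte[] key = String.valueOf(id).getBytes();
            System.out.println("variant_id " + id + " -> bucket " + saltByte(key));
        }
    }
}
```

Note that salting spreads writes across buckets; whether those buckets end up on different region servers still depends on how HBase balances the resulting regions.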



0: jdbc:phoenix:> !describe variantjoin_RTsalted24


0: jdbc:phoenix:> !describe variantjoin_RTsalted24
++--+-+--+-+
| TABLE_CAT  | TABLE_SCHEM  |   TABLE_NAME| COLUMN_NAME  | |
++--+-+--+-+
||  | VARIANTJOIN_RTSALTED24  | CHROM| |
||  | VARIANTJOIN_RTSALTED24  | GENOMIC_RANGE| |
||  | VARIANTJOIN_RTSALTED24  | REFERENCE| |
||  | VARIANTJOIN_RTSALTED24  | ALTERNATE| |
||  | VARIANTJOIN_RTSALTED24  | ANNOTATIONS  | |
||  | VARIANTJOIN_RTSALTED24  | CONSEQUENCE  | |
||  | VARIANTJOIN_RTSALTED24  | CHROM_INT| |
||  | VARIANTJOIN_RTSALTED24  | ONEKG_MAF| |
||  | VARIANTJOIN_RTSALTED24  | CODING   | |
||  | VARIANTJOIN_RTSALTED24  | ESP_AA_MAF   | |
||  | VARIANTJOIN_RTSALTED24  | ESP_EA_MAF   | |
||  | VARIANTJOIN_RTSALTED24  | EXAC_MAF | |
||  | VARIANTJOIN_RTSALTED24  | FILTER   | |
||  | VARIANTJOIN_RTSALTED24  | GENE | |
||  | VARIANTJOIN_RTSALTED24  | IMPACT   | |
||  | VARIANTJOIN_RTSALTED24  | POLYPHEN | |
||  | VARIANTJOIN_RTSALTED24  | SIFT | |
||  | VARIANTJOIN_RTSALTED24  | VIVA_MAF | |
||  | VARIANTJOIN_RTSALTED24  | VARIANT_ID   | |
||  | VARIANTJOIN_RTSALTED24  | GENOMIC_RANGE_START  | |
||  | VARIANTJOIN_RTSALTED24  | GENOMIC_RANGE_END| |
++--+-+--+-+
0: jdbc:phoenix:>


0: jdbc:phoenix:> create index vj2_chrom on variantjoin_RTsalted24 (chrom);
create index vj2_chrom on variantjoin_RTsalted24 (chrom);
No rows affected (13.322 seconds)
0: jdbc:phoenix:>

reference   - no index at present
alternate   - no index at present
annotations - JSONB blob, no index at present


0: jdbc:phoenix:> create index vj2_genomic_range on variantjoin_RTsalted24 
(genomic_range);
create index vj2_genomic_range on variantjoin_RTsalted24 (genomic_range);
No rows affected (11.953 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_chrom_int on variantjoin_RTsalted24 
(chrom_int);
create index vj2_chrom_int on variantjoin_RTsalted24 (chrom_int);
No rows affected (12.518 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_onekg_maf on variantjoin_RTsalted24 
(onekg_maf);
create index vj2_onekg_maf on variantjoin_RTsalted24 (onekg_maf);
No rows affected (13.727 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_coding_maf on variantjoin_RTsalted24 
(onekg_maf);
create index vj2_coding_maf on variantjoin_RTsalted24 (onekg_maf);
No rows affected (12.45 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_consequence on variantjoin_RTsalted24 
(consequence);
create index vj2_consequence on variantjoin_RTsalted24 (consequence);
No rows affected (13.906 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_esp_aa_maf on variantjoin_RTsalted24 
(esp_aa_maf);
create index vj2_esp_ea_maf on variantjoin_RTsalted24 (esp_ea_maf);
No rows affected (12.355 seconds)
0: jdbc:phoenix:>

0: jdbc:phoenix:> create index vj2_exac_maf on variantjoin_RTsalted24 
(exac_maf);

Re: using an array field as an index - PHOENIX-1544

2018-04-20 Thread James Taylor
Hi Ron,
Best place to ask questions about a JIRA is on the JIRA itself. If you see
no activity on it for a long time, it often means that no one in the
community has had time to fix it. This probably indicates that it's not as
high priority as other JIRAs that have been fixed. This, of course, is
dependent on the use cases of each contributor to the project. If it's an
important one for your use case, I'd recommend contributing a patch.
Thanks,
James

On Fri, Apr 20, 2018 at 8:29 AM, Taylor, Ronald (Ronald) <
ronald.tay...@cchmc.org> wrote:

> Hello James,
>
>
>
> I did a search using PHOENIX-1544 and could not find any updates to your
> June 2015 post on the Phoenix list, so I wanted to ask: what is the current
> status for indexing array fields over immutable (or even mutable) tables?
> We could certainly use such.
>
>
>
> Ron
>
>
>
> On 2015/06/21 23:10:04, James Taylor wrote:
>
> > Hey Leon,
> > I filed PHOENIX-1544 a while back for indexing arrays over immutable
> > tables. If you're interested in contributing a patch, that'd be great.
> > I'm happy to help you along the way.
> > Thanks,
> > James
> >
> > On Sun, Jun 21, 2015 at 12:48 AM, Leon Prouger wrote:
> > > Hey James, thank you for replying.
> > > Yes you're right, this option is pretty useless in our case. We've been
> > > thinking to create a separate table which will model the array with a
> > > one-to-many relation, then index it and perform a join with the main
> > > table for every query. Like:
> > >
> > > Main table: PK(id), data columns
> > > Array table: PK(id, array cell), index of the array cell
> > >
> > > But I wonder, is the join going to be faster than a full scan of the
> > > main table?
> > >
> > > Are there any plans for implementing an array index? Maybe it can be
> > > done for immutable tables only.
> > >
> > > On Wed, Jun 17, 2015 at 10:05 PM James Taylor wrote:
> > >> Hey Leon,
> > >> You can have an array in an index, but it has to be at the end of the
> > >> PK constraint, which is not very useful and likely not what you want -
> > >> it'd essentially be equivalent to having the array at the end of your
> > >> primary key constraint.
> > >>
> > >> The other alternative I can think of that may be more useful is to use
> > >> functional indexing [1] on specific array elements. You'd need to know
> > >> the position of the element that you're indexing and querying against
> > >> in advance, though.
> > >>
> > >> [1] http://phoenix.apache.org/secondary_indexing.html#Functional_Index
> > >>
> > >> On Wed, Jun 17, 2015 at 4:43 AM, Leon Prouger wrote:
> > >> > Hey folks,
> > >> > Maybe I'm asking too much, but I couldn't find a straight answer. Is
> > >> > it possible to index an array type with Phoenix?
> > >> > If I can't, has anybody tried any alternatives? Like keeping another
> > >> > table for the array many-to-one relation?
>
> Ronald C. Taylor, Ph.D.
> Divisions of Immunobiology and Biomedical Informatics
> Cincinnati Children's Hospital Medical Center
> Office phone: 513-803-4880
> Cell phone: 509-783-7308
> Email: ronald.tay...@cchmc.org


Re: using an array field as an index - PHOENIX-1544

2018-04-20 Thread Taylor, Ronald (Ronald)
Hello James,

I did a search using PHOENIX-1544 and could not find any updates to your June 
2015 post on the Phoenix list, so I wanted to ask: what is the current status 
for indexing array fields over immutable (or even mutable) tables? We could 
certainly use such.

Ron

On 2015/06/21 23:10:04, James Taylor wrote:
> Hey Leon,
> I filed PHOENIX-1544 a while back for indexing arrays over immutable
> tables. If you're interested in contributing a patch, that'd be great.
> I'm happy to help you along the way.
> Thanks,
> James
>
> On Sun, Jun 21, 2015 at 12:48 AM, Leon Prouger wrote:
> > Hey James, thank you for replying.
> > Yes you're right, this option is pretty useless in our case. We've been
> > thinking to create a separate table which will model the array with a
> > one-to-many relation, then index it and perform a join with the main
> > table for every query. Like:
> >
> > Main table: PK(id), data columns
> > Array table: PK(id, array cell), index of the array cell
> >
> > But I wonder, is the join going to be faster than a full scan of the
> > main table?
> >
> > Are there any plans for implementing an array index? Maybe it can be
> > done for immutable tables only.
> >
> > On Wed, Jun 17, 2015 at 10:05 PM James Taylor wrote:
> >> Hey Leon,
> >> You can have an array in an index, but it has to be at the end of the
> >> PK constraint, which is not very useful and likely not what you want -
> >> it'd essentially be equivalent to having the array at the end of your
> >> primary key constraint.
> >>
> >> The other alternative I can think of that may be more useful is to use
> >> functional indexing [1] on specific array elements. You'd need to know
> >> the position of the element that you're indexing and querying against
> >> in advance, though.
> >>
> >> [1] http://phoenix.apache.org/secondary_indexing.html#Functional_Index
> >>
> >> On Wed, Jun 17, 2015 at 4:43 AM, Leon Prouger wrote:
> >> > Hey folks,
> >> > Maybe I'm asking too much, but I couldn't find a straight answer. Is
> >> > it possible to index an array type with Phoenix?
> >> > If I can't, has anybody tried any alternatives? Like keeping another
> >> > table for the array many-to-one relation?

Ronald C. Taylor, Ph.D.
Divisions of Immunobiology and Biomedical Informatics
Cincinnati Children's Hospital Medical Center
Office phone: 513-803-4880
Cell phone: 509-783-7308
Email: ronald.tay...@cchmc.org




Re: Re: phoenix query server java.lang.ClassCastException for BIGINT ARRAY column

2018-04-20 Thread Lu Wei
I did some digging, and the reason is that I started PQS using JSON
serialization rather than PROTOBUF.

When I switch to PROTOBUF serialization, the 'select * from testarray' query
works fine.

JSON has no distinct numeric types, so a JSON array [100] is parsed into an
array containing an Integer value. When items are fetched from the SQL result
set, the Integer 100 has to be converted to long (the type defined in the
table), so a ClassCastException is thrown.

I guess we had better use PROTOBUF rather than JSON serialization for PQS.
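The failure mode described above can be reproduced in plain Java, independent of Avatica/PQS (a minimal illustration; the variable names are made up):

```java
// Minimal illustration: a JSON-style untyped number arrives boxed as Integer;
// casting the Object to Long (the declared BIGINT column type) fails.
public class CastDemo {
    public static void main(String[] args) {
        Object parsed = 100; // autoboxed to java.lang.Integer, as a JSON parser would

        try {
            Long value = (Long) parsed; // what a BIGINT accessor effectively attempts
            System.out.println(value);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }

        // The safe path widens through Number instead of casting directly:
        Long ok = ((Number) parsed).longValue();
        System.out.println(ok); // 100
    }
}
```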


From: sergey.solda...@gmail.com  on behalf of Sergey 
Soldatov 
Sent: Friday, April 20, 2018 5:22:47 AM
To: user@phoenix.apache.org
Subject: Re: Re: phoenix query server java.lang.ClassCastException for BIGINT ARRAY column

Definitely, someone who is maintaining the CDH branch should take a look. I don't
observe that behavior on the master branch:

0: jdbc:phoenix:thin:url=http://localhost:876> create table if not exists 
testarray(id bigint not null, events bigint array constraint pk primary key 
(id));
No rows affected (2.4 seconds)
0: jdbc:phoenix:thin:url=http://localhost:876> upsert into testarray values (1, 
array[1,2]);
1 row affected (0.056 seconds)
0: jdbc:phoenix:thin:url=http://localhost:876> select * from testarray;
+-+-+
| ID  | EVENTS  |
+-+-+
| 1   | [1, 2]  |
+-+-+
1 row selected (0.068 seconds)
0: jdbc:phoenix:thin:url=http://localhost:876>


Thanks,
Sergey

On Thu, Apr 19, 2018 at 12:57 PM, Lu Wei wrote:

By the way, all the queries were issued in sqlline-thin.py.




From: Lu Wei
Sent: April 19, 2018 6:51:15
To: user@phoenix.apache.org
Subject: Re: phoenix query server java.lang.ClassCastException for BIGINT ARRAY column


## Version:
phoenix: 4.13.2-cdh5.11.2
hive: 1.1.0-cdh5.11.2

to reproduce:

-- create table

create table if not exists testarray(id bigint not null, events bigint array 
constraint pk primary key (id))


-- upsert data:

upsert into testarray values (1, array[1,2]);


-- query:

select id from testarray;   -- fine

select * from testarray;-- error


From: sergey.solda...@gmail.com on behalf of Sergey Soldatov
Sent: April 19, 2018 6:37:06
To: user@phoenix.apache.org
Subject: Re: phoenix query server java.lang.ClassCastException for BIGINT ARRAY column

Could you please be more specific? Which version of phoenix are you using? Do 
you have a small script to reproduce? At first glance it looks like a PQS bug.

Thanks,
Sergey

On Thu, Apr 19, 2018 at 8:17 AM, Lu Wei wrote:

Hi there,

I have a Phoenix table containing a BIGINT ARRAY column. But when querying the
query server (through sqlline-thin.py), there is an exception:

java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

BTW, when querying through sqlline.py, everything works fine. And the data in the
HBase table are of Long type, so why does the Integer-to-Long cast happen?



## Table schema:

create table if not exists gis_tracking3(tracking_object_id bigint not null, 
lat double, lon double, speed double, bearing double, time timestamp not null, 
events bigint array constraint pk primary key (tracking_object_id, time))


## when querying events[1], it works fine:

0: jdbc:phoenix:thin:url=http://10.10.13.87:8> select  events[1]+1 from 
gis_tracking3;
+--+
| (ARRAY_ELEM(EVENTS, 1) + 1)  |
+--+
| 11   |
| 2223 |
| null |
| null |
| 10001|
+--+



## when querying events, it throws an exception:

0: jdbc:phoenix:thin:url=http://10.10.13.87:8> select  events from 
gis_tracking3;
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
  at 
org.apache.phoenix.shaded.org.apache.calcite.avatica.util.AbstractCursor$LongAccessor.getLong(AbstractCursor.java:550)
  at 
org.apache.phoenix.shaded.org.apache.calcite.avatica.util.AbstractCursor$ArrayAccessor.convertValue(AbstractCursor.java:1310)
  at 
org.apache.phoenix.shaded.org.apache.calcite.avatica.util.AbstractCursor$ArrayAccessor.getObject(AbstractCursor.java:1289)
  at 
org.apache.phoenix.shaded.org.apache.calcite.avatica.util.AbstractCursor$ArrayAccessor.getArray(AbstractCursor.java:1342)
  at