Re: MutationState size is bigger than maximum allowed number of bytes

2018-09-19 Thread Jaanai Zhang
Are you configuring these properties on the server side? Your “UPSERT SELECT”
statement will be executed on the server side.
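
If the limits do need raising for a server-side UPSERT SELECT, here is a minimal sketch of what the override could look like in the RegionServer-side hbase-site.xml. The property names are the ones quoted below in this thread; the values here are purely illustrative, not recommendations:

```xml
<!-- Sketch only: size these to your heap and workload. -->
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>500000</value>
</property>
<property>
  <name>phoenix.mutate.maxSizeBytes</name>
  <value>1073741824</value> <!-- 1 GiB -->
</property>
```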


   Jaanai Zhang
   Best regards!



Batyrshin Alexander <0x62...@gmail.com> wrote on Thu, Sep 20, 2018 at 7:48 AM:

> I've tried to copy one table to another via an UPSERT SELECT statement and
> got these errors:
>
> Phoenix-4.14-hbase-1.4
>
> 0: jdbc:phoenix:> !autocommit on
> Autocommit status: true
> 0: jdbc:phoenix:>
> 0: jdbc:phoenix:> UPSERT INTO TABLE_V2 ("c", "id", "gt")
> . . . . . . . . > SELECT "c", "id", "gt" FROM TABLE;
> Error: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed 
> number of bytes (state=LIM02,code=730)
> java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:377)
> at org.apache.phoenix.execute.MutationState.join(MutationState.java:478)
> at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:104)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.peek(ConcatResultIterator.java:112)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:100)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
> at org.apache.phoenix.trace.TracingIterator.next(TracingIterator.java:56)
> at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1301)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
>
>
> Config:
>
> <property>
>   <name>phoenix.mutate.batchSize</name>
>   <value>200</value>
> </property>
> <property>
>   <name>phoenix.mutate.maxSize</name>
>   <value>25</value>
> </property>
> <property>
>   <name>phoenix.mutate.maxSizeBytes</name>
>   <value>10485760</value>
> </property>
>
>
> Also mentioned this at https://issues.apache.org/jira/browse/PHOENIX-4671
>


Re: Encountering IllegalStateException while querying Phoenix

2018-09-19 Thread Jaanai Zhang
Are you sure you restarted the RS process? You can check whether
"phoenix-server.jar" exists in the classpath of HBase with the "jinfo"
command.
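
A sketch of that check from the RegionServer host; the pid lookup and the grep pattern are assumptions to adapt to your deployment:

```shell
# Find the RegionServer JVM, then look for the Phoenix server jar on its classpath.
RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
jinfo "$RS_PID" | tr ':' '\n' | grep -i 'phoenix.*server.*\.jar'
```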

   Jaanai Zhang
   Best regards!



William Shen wrote on Thu, Sep 20, 2018 at 6:01 AM:

> For anyone else interested: we ended up identifying that one of the RSs had
> actually failed to load the UngroupedAggregateRegionObserver because of a
> strange XML parsing issue that was not occurring prior to this incident and
> was not happening on any other RS.
>
> Failed to load coprocessor 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver
> java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
> jar:file:/opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/jars/hadoop-common-2.6.0-cdh5.9.2.jar!/core-default.xml;
>  lineNumber: 196; columnNumber: 47; The string "--" is not permitted within 
> comments.
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2656)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
>   at org.apache.hadoop.conf.Configuration.set(Configuration.java:1144)
>   at org.apache.hadoop.conf.Configuration.set(Configuration.java:1116)
>   at 
> org.apache.phoenix.util.PropertiesUtil.cloneConfig(PropertiesUtil.java:81)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.start(UngroupedAggregateRegionObserver.java:219)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:414)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:255)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:208)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:364)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:226)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:723)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:631)
>   at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6145)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6449)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6421)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6377)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6328)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.xml.sax.SAXParseException; systemId: 
> jar:file:/opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/jars/hadoop-common-2.6.0-cdh5.9.2.jar!/core-default.xml;
>  lineNumber: 196; columnNumber: 47; The string "--" is not permitted within 
> comments.
>   at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
>   at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
>   at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
>   at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2491)
>   at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2479)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2550)
>   ... 27 more
>
>
>
> On Wed, Sep 19, 2018 at 2:15 PM William Shen 
> wrote:
>
>> Hi there,
>>
>> I have encountered the following exception while trying to query from
>> Phoenix (was able to generate the exception doing a simple SELECT
>> count(1)). I have verified (MD5) that each region server has the correct
>> phoenix jars. Would appreciate any guidance on how to proceed further in
>> troubleshooting this (or what could've caused this). Thank you!
>>
>> java.lang.IllegalStateException: Expected single, aggregated KeyValue
>> from coprocessor, but instead received
>> keyvalues={\x03\x80\x00\x00\x00\x00\x8D\xB8Y\x80\x00\x00\x00\x01c$\xE7\x00\x04\x80\x00\x00\x00\x01\x0C\x95N\x80\x00\x00\x00\x01\xCCU\xF1/SL:_0/1525817954352/Put/vlen=1/seqid=0/value=x}

MutationState size is bigger than maximum allowed number of bytes

2018-09-19 Thread Batyrshin Alexander
I've tried to copy one table to another via an UPSERT SELECT statement and got
these errors:

Phoenix-4.14-hbase-1.4

0: jdbc:phoenix:> !autocommit on
Autocommit status: true
0: jdbc:phoenix:>
0: jdbc:phoenix:> UPSERT INTO TABLE_V2 ("c", "id", "gt")
. . . . . . . . > SELECT "c", "id", "gt" FROM TABLE;
Error: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed 
number of bytes (state=LIM02,code=730)
java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than 
maximum allowed number of bytes
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:377)
at org.apache.phoenix.execute.MutationState.join(MutationState.java:478)
at 
org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:104)
at 
org.apache.phoenix.iterate.ConcatResultIterator.peek(ConcatResultIterator.java:112)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:100)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at org.apache.phoenix.trace.TracingIterator.next(TracingIterator.java:56)
at 
org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1301)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

Config:

<property>
  <name>phoenix.mutate.batchSize</name>
  <value>200</value>
</property>
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>25</value>
</property>
<property>
  <name>phoenix.mutate.maxSizeBytes</name>
  <value>10485760</value>
</property>

Also mentioned this at https://issues.apache.org/jira/browse/PHOENIX-4671
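
As a side note, these limits can also be overridden per connection on the client side via JDBC properties. A minimal sketch, where the property names come from the config above but the values and the `jdbc:phoenix:<zk>` URL are illustrative assumptions:

```java
import java.util.Properties;

public class MutateLimitProps {
    // Build client-side overrides for Phoenix's mutation buffer limits.
    static Properties mutateLimitProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.mutate.maxSize", "500000");          // max rows buffered
        props.setProperty("phoenix.mutate.maxSizeBytes", "1073741824"); // 1 GiB buffer cap
        return props;
    }

    public static void main(String[] args) {
        Properties p = mutateLimitProps();
        // These would then be passed to:
        // DriverManager.getConnection("jdbc:phoenix:<zk>", p);
        System.out.println(p.getProperty("phoenix.mutate.maxSizeBytes")); // prints 1073741824
    }
}
```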

Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Indeed. I see that this exception was thrown around the time of the Docker restart.
Thank you for the response.


> On 20 Sep 2018, at 02:34, Sergey Soldatov  wrote:
> 
> That might be a misleading message. Actually, that means that JVM shutdown 
> has been triggered (so runtime has executed the shutdown hook for the driver 
> and that's the only place where we set this message) and after that, another 
> thread was trying to create a new connection. 
> 
> Thanks,
> Sergey 
> 
> On Wed, Sep 19, 2018 at 11:17 AM Batyrshin Alexander <0x62...@gmail.com>
> wrote:
> Version:
> 
> Phoenix-4.14.0-HBase-1.4
> 
> Full trace is:
> 
> java.lang.IllegalStateException: Phoenix driver closed because server is 
> shutting down
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:270)
> at 
> x.persistence.phoenix.ConnectionManager.get(ConnectionManager.scala:12)
> at 
> x.persistence.phoenix.PhoenixDao.$anonfun$count$1(PhoenixDao.scala:58)
> at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
> at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
> at scala.util.Success.$anonfun$map$1(Try.scala:251)
> at scala.util.Success.map(Try.scala:209)
> at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
> at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
> at 
> scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 
> 
> 
> 
> > On 19 Sep 2018, at 20:13, Josh Elser wrote:
> > 
> > What version of Phoenix are you using? Is this the full stack trace you see 
> > that touches Phoenix (or HBase) classes?
> > 
> > On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
> >> Is there any reason for this exception? Which server exactly is shutting
> >> down if we use a quorum of ZooKeepers?
> >> java.lang.IllegalStateException: Phoenix driver closed because server is 
> >> shutting down at 
> >> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> >>  at 
> >> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285) 
> >> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) 
> >> at java.sql.DriverManager.getConnection(DriverManager.java:664) at 
> >> java.sql.DriverManager.getConnection(DriverManager.java:270)
> 



Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Sergey Soldatov
That might be a misleading message. Actually, that means that JVM shutdown
has been triggered (so runtime has executed the shutdown hook for the
driver and that's the only place where we set this message) and after that,
another thread was trying to create a new connection.
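
The ordering can be sketched with a toy analogue. This only mimics the described behavior; it is not PhoenixDriver's actual code, and all names are illustrative:

```java
// Toy analogue of the race: the JVM's shutdown hook closes the driver,
// then another thread tries to open a connection and hits the closed check.
public class DriverShutdownSketch {
    private static volatile boolean closed = false;

    // Stand-in for the shutdown hook PhoenixDriver registers with the runtime.
    static void shutdownHook() {
        closed = true;
    }

    // Stand-in for PhoenixDriver.connect(): fails once the hook has run.
    static void connect() {
        if (closed) {
            throw new IllegalStateException(
                    "Phoenix driver closed because server is shutting down");
        }
    }

    public static void main(String[] args) {
        connect();        // succeeds before shutdown begins
        shutdownHook();   // JVM shutdown triggers the hook
        try {
            connect();    // a late connection attempt races the shutdown
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```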

Thanks,
Sergey

On Wed, Sep 19, 2018 at 11:17 AM Batyrshin Alexander <0x62...@gmail.com>
wrote:

> Version:
>
> Phoenix-4.14.0-HBase-1.4
>
> Full trace is:
>
> java.lang.IllegalStateException: Phoenix driver closed because server is
> shutting down
> at
> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> at
> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
> at
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:270)
> at
> x.persistence.phoenix.ConnectionManager.get(ConnectionManager.scala:12)
> at
> x.persistence.phoenix.PhoenixDao.$anonfun$count$1(PhoenixDao.scala:58)
> at
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
> at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
> at scala.util.Success.$anonfun$map$1(Try.scala:251)
> at scala.util.Success.map(Try.scala:209)
> at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
> at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
> at
> scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
>
> > On 19 Sep 2018, at 20:13, Josh Elser  wrote:
> >
> > What version of Phoenix are you using? Is this the full stack trace you
> see that touches Phoenix (or HBase) classes?
> >
> > On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
> >> Is there any reason for this exception? Which server exactly is
> shutting down if we use a quorum of ZooKeepers?
> >> java.lang.IllegalStateException: Phoenix driver closed because server
> is shutting down at
> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
> at
> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) at
> java.sql.DriverManager.getConnection(DriverManager.java:664) at
> java.sql.DriverManager.getConnection(DriverManager.java:270)
>
>


Re: Encountering IllegalStateException while querying Phoenix

2018-09-19 Thread William Shen
For anyone else interested: we ended up identifying that one of the RSs had
actually failed to load the UngroupedAggregateRegionObserver because of a
strange XML parsing issue that was not occurring prior to this incident and
was not happening on any other RS.

Failed to load coprocessor
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId:
jar:file:/opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/jars/hadoop-common-2.6.0-cdh5.9.2.jar!/core-default.xml;
lineNumber: 196; columnNumber: 47; The string "--" is not permitted
within comments.
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2656)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1144)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1116)
at 
org.apache.phoenix.util.PropertiesUtil.cloneConfig(PropertiesUtil.java:81)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.start(UngroupedAggregateRegionObserver.java:219)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:414)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:255)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:208)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:364)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:226)
at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:723)
at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:631)
at sun.reflect.GeneratedConstructorAccessor23.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6145)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6449)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6421)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6377)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6328)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.xml.sax.SAXParseException; systemId:
jar:file:/opt/cloudera/parcels/CDH-5.9.2-1.cdh5.9.2.p0.3/jars/hadoop-common-2.6.0-cdh5.9.2.jar!/core-default.xml;
lineNumber: 196; columnNumber: 47; The string "--" is not permitted
within comments.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2491)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2479)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2550)
... 27 more



On Wed, Sep 19, 2018 at 2:15 PM William Shen 
wrote:

> Hi there,
>
> I have encountered the following exception while trying to query from
> Phoenix (was able to generate the exception doing a simple SELECT
> count(1)). I have verified (MD5) that each region server has the correct
> phoenix jars. Would appreciate any guidance on how to proceed further in
> troubleshooting this (or what could've caused this). Thank you!
>
> java.lang.IllegalStateException: Expected single, aggregated KeyValue from
> coprocessor, but instead received
> keyvalues={\x03\x80\x00\x00\x00\x00\x8D\xB8Y\x80\x00\x00\x00\x01c$\xE7\x00\x04\x80\x00\x00\x00\x01\x0C\x95N\x80\x00\x00\x00\x01\xCCU\xF1/SL:_0/1525817954352/Put/vlen=1/seqid=0/value=x}
>
> . Ensure aggregating coprocessors are loaded correctly on server
>
> at org.apache.phoenix.util.TupleUtil.getAggregateValue(TupleUtil.java:88)
>
> at
> org.apache.phoenix.expression.aggregator.ClientAggregators.aggregate(ClientAggregators.java:54)
>
> at
> 

Encountering IllegalStateException while querying Phoenix

2018-09-19 Thread William Shen
Hi there,

I have encountered the following exception while trying to query from
Phoenix (was able to generate the exception doing a simple SELECT
count(1)). I have verified (MD5) that each region server has the correct
phoenix jars. Would appreciate any guidance on how to proceed further in
troubleshooting this (or what could've caused this). Thank you!

java.lang.IllegalStateException: Expected single, aggregated KeyValue from
coprocessor, but instead received
keyvalues={\x03\x80\x00\x00\x00\x00\x8D\xB8Y\x80\x00\x00\x00\x01c$\xE7\x00\x04\x80\x00\x00\x00\x01\x0C\x95N\x80\x00\x00\x00\x01\xCCU\xF1/SL:_0/1525817954352/Put/vlen=1/seqid=0/value=x}

. Ensure aggregating coprocessors are loaded correctly on server

at org.apache.phoenix.util.TupleUtil.getAggregateValue(TupleUtil.java:88)

at
org.apache.phoenix.expression.aggregator.ClientAggregators.aggregate(ClientAggregators.java:54)

at
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:74)

at
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)

at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:779)

at sqlline.BufferedRows.<init>(BufferedRows.java:37)

at sqlline.SqlLine.print(SqlLine.java:1660)

at sqlline.Commands.execute(Commands.java:833)

at sqlline.Commands.sql(Commands.java:732)

at sqlline.SqlLine.dispatch(SqlLine.java:813)

at sqlline.SqlLine.begin(SqlLine.java:686)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:291)

- Will
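
Since the error message says to "Ensure aggregating coprocessors are loaded correctly on server", one hedged way to check is from the HBase shell; the table name here is a placeholder:

```shell
# The table description lists any attached coprocessors; a healthy Phoenix
# table should show org.apache.phoenix.coprocessor.* entries.
echo "describe 'MY_TABLE'" | hbase shell
```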


Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Version:

Phoenix-4.14.0-HBase-1.4

Full trace is:

java.lang.IllegalStateException: Phoenix driver closed because server is 
shutting down
at 
org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
at 
org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at 
x.persistence.phoenix.ConnectionManager.get(ConnectionManager.scala:12)
at 
x.persistence.phoenix.PhoenixDao.$anonfun$count$1(PhoenixDao.scala:58)
at 
scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:655)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:289)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




> On 19 Sep 2018, at 20:13, Josh Elser  wrote:
> 
> What version of Phoenix are you using? Is this the full stack trace you see 
> that touches Phoenix (or HBase) classes?
> 
> On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
>> Is there any reason for this exception? Which server exactly is shutting
>> down if we use a quorum of ZooKeepers?
>> java.lang.IllegalStateException: Phoenix driver closed because server is 
>> shutting down at 
>> org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
>>  at 
>> org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285) at 
>> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) at 
>> java.sql.DriverManager.getConnection(DriverManager.java:664) at 
>> java.sql.DriverManager.getConnection(DriverManager.java:270)



Re: IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Josh Elser
What version of Phoenix are you using? Is this the full stack trace you 
see that touches Phoenix (or HBase) classes?


On 9/19/18 12:42 PM, Batyrshin Alexander wrote:
Is there any reason for this exception? Which server exactly is shutting
down if we use a quorum of ZooKeepers?


java.lang.IllegalStateException: Phoenix driver closed because server is 
shutting down at 
org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290) 
at 
org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285) at 
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220) at 
java.sql.DriverManager.getConnection(DriverManager.java:664) at 
java.sql.DriverManager.getConnection(DriverManager.java:270)


IllegalStateException: Phoenix driver closed because server is shutting down

2018-09-19 Thread Batyrshin Alexander
Is there any reason for this exception? Which server exactly is shutting down
if we use a quorum of ZooKeepers?

java.lang.IllegalStateException: Phoenix driver closed because server is 
shutting down
at 
org.apache.phoenix.jdbc.PhoenixDriver.throwDriverClosedException(PhoenixDriver.java:290)
at 
org.apache.phoenix.jdbc.PhoenixDriver.checkClosed(PhoenixDriver.java:285)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:220)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)

Re: Missing content in phoenix after writing from Spark

2018-09-19 Thread Saif Addin
Thanks. We hit the next problem: we do need appending, but the current
documentation says only Overwrite mode is available, right? In that case we'll
have to fall back to RDD writing, correct?

On Mon, Sep 17, 2018 at 8:45 PM Josh Elser  wrote:

> As I said earlier, the expectation is that you use the
> phoenix-client.jar and phoenix-spark2.jar for the phoenix-spark
> integration with spark2.
>
> You do not need to reference all of these jars by hand. We create the
> jars with all of the necessary dependencies bundled to specifically
> avoid creating this problem for users.
>
> On 9/17/18 3:27 PM, Saif Addin wrote:
> > Thanks for your patience; sorry, maybe I sent incomplete information. We
> > are loading the following jars and still getting (executor 1):
> > java.lang.NoClassDefFoundError: Could not initialize class
> > org.apache.phoenix.query.QueryServicesOptions
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-client/2.1.0/hbase-client-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-common/2.1.0/hbase-common-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-hadoop-compat/2.1.0/hbase-hadoop-compat-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-mapreduce/2.1.0/hbase-mapreduce-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/thirdparty/hbase-shaded-miscellaneous/2.1.0/hbase-shaded-miscellaneous-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-protocol/2.1.0/hbase-protocol-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-protocol-shaded/2.1.0/hbase-protocol-shaded-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/thirdparty/hbase-shaded-protobuf/2.1.0/hbase-shaded-protobuf-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/thirdparty/hbase-shaded-netty/2.1.0/hbase-shaded-netty-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-server/2.1.0/hbase-server-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-hadoop2-compat/2.1.0/hbase-hadoop2-compat-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-metrics/2.1.0/hbase-metrics-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-metrics-api/2.1.0/hbase-metrics-api-2.1.0.jar
> >
> http://central.maven.org/maven2/org/apache/hbase/hbase-zookeeper/2.1.0/hbase-zookeeper-2.1.0.jar
> >
> >
> http://central.maven.org/maven2/org/apache/phoenix/phoenix-spark/5.0.0-HBase-2.0/phoenix-spark-5.0.0-HBase-2.0.jar
> >
> http://central.maven.org/maven2/org/apache/phoenix/phoenix-core/5.0.0-HBase-2.0/phoenix-core-5.0.0-HBase-2.0.jar
> >
> http://central.maven.org/maven2/org/apache/phoenix/phoenix-queryserver/5.0.0-HBase-2.0/phoenix-queryserver-5.0.0-HBase-2.0.jar
> >
> http://central.maven.org/maven2/org/apache/phoenix/phoenix-queryserver-client/5.0.0-HBase-2.0/phoenix-queryserver-client-5.0.0-HBase-2.0.jar
> >
> >
> http://central.maven.org/maven2/org/apache/twill/twill-zookeeper/0.13.0/twill-zookeeper-0.13.0.jar
> >
> http://central.maven.org/maven2/org/apache/twill/twill-discovery-core/0.13.0/twill-discovery-core-0.13.0.jar
> >
> > Not sure which one I could be missing?
> >
> > On Fri, Sep 14, 2018 at 7:34 PM Josh Elser wrote:
> >
> > Uh, you're definitely not using the right JARs :)
> >
> > You'll want the phoenix-client.jar for the Phoenix JDBC driver and
> the
> > phoenix-spark.jar for the Phoenix RDD.
> >
> > On 9/14/18 1:08 PM, Saif Addin wrote:
> >  > Hi, I am attempting to make connection with Spark but no success
> > so far.
> >  >
> >  > For writing into Phoenix, I am trying this:
> >  >
> >  > tdd.toDF("ID", "COL1", "COL2",
> >  > "COL3").write.format("org.apache.phoenix.spark").option("zkUrl",
> >  > "zookeper-host-url:2181").option("table",
> >  > htablename).mode("overwrite").save()
> >  >
> >  > But getting:
> >  > java.sql.SQLException: ERROR 103 (08004): Unable to establish
> > connection.
> >  > For reading, on the other hand, attempting this:
> >  >
> >  > val hbConf = HBaseConfiguration.create()
> >  > val hbaseSitePath = "/etc/hbase/conf/hbase-site.xml"
> >  > hbConf.addResource(new Path(hbaseSitePath))
> >  >
> >  > spark.sqlContext.phoenixTableAsDataFrame("VISTA_409X68",
> > Array("ID"),
> >  > conf = hbConf)
> >  >
> >  > Gets me
> >  > java.lang.NoClassDefFoundError: Could not initialize class
> >  > org.apache.phoenix.query.QueryServicesOptions
> >  > I have added phoenix-queryserver-5.0.0-HBase-2.0.jar and
> >  > phoenix-queryserver-client-5.0.0-HBase-2.0.jar
> >  > Any thoughts? I have an hbase-site.xml file with more
> > configuration but
> >  > not sure how to get it to be read in the saving instance.
> >  > Thanks
> >  >
> >  > On