Re: PSQ processlist

2019-09-10 Thread Aleksandr Saraseka
Thank you Josh, this is very helpful.
Another question: can we kill a long-running query in PQS somehow?

On Mon, Sep 9, 2019 at 5:09 PM Josh Elser  wrote:

> Not unique to PQS, see:
>
> https://issues.apache.org/jira/browse/PHOENIX-2715
>
> On 9/9/19 9:02 AM, Aleksandr Saraseka wrote:
> > Hello.
> > Does Phoenix Query Server have any possibility to track running queries
> > ? Like user connects with thin client and run some long running query,
> > can I understand who and what is running ?
> >


-- 
Aleksandr Saraseka
DBA
380997600401
 *•* asaras...@eztexting.com *•* eztexting.com
Re: PSQ processlist

2019-09-10 Thread Josh Elser
As you might already know, JDBC is "stateful" in what it does: you have
a Connection, which creates Statements, and the combination of the two
tracks the queries being run.


However, HTTP is a stateless protocol. As such, PQS has to cache things 
in memory in order to make this approach work.


To this point, there are Connection and Statement caches; once entries
go unused, they are evicted from the cache and closed[1]. I know that
Phoenix is not capable of interrupting/freeing all resources used by a
Phoenix query (e.g. you cannot interrupt an HBase RPC once it's
running), but it's likely that Phoenix cleans up the client-side
state to the best of its ability when the Statement/Connection are closed.


Maybe someone knows the answer to that off the top of their head. 
Otherwise, hopefully this information is a starting point for you to 
look at the code and/or run some experiments.


[1] https://phoenix.apache.org/server.html "Configurations relating to 
the server connection cache."
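For readers who want to experiment, here is a minimal, self-contained sketch of the TTL-style eviction described above. It is purely illustrative (Python, while PQS itself is Java, and its real caches are configured via the properties documented on the page linked in [1]): a connection that goes unused for longer than its TTL is dropped from the cache and closed, freeing its client-side state.

```python
import time


class ExpiringConnectionCache:
    """Toy TTL cache: entries unused for longer than `ttl_seconds` are
    evicted and their close() hook is called. This mirrors the *idea* of
    PQS's server-side connection cache, not its actual implementation."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock            # injectable for testing
        self._entries = {}            # conn_id -> (connection, last_access)

    def get(self, conn_id, factory):
        """Return the cached connection for conn_id, creating it with
        `factory` if absent, and refresh its last-access time."""
        self.evict_expired()
        conn, _ = self._entries.get(conn_id, (None, None))
        if conn is None:
            conn = factory()
        self._entries[conn_id] = (conn, self.clock())
        return conn

    def evict_expired(self):
        """Close and drop every entry idle longer than the TTL."""
        now = self.clock()
        stale = [k for k, (_, t) in self._entries.items()
                 if now - t > self.ttl]
        for conn_id in stale:
            conn, _ = self._entries.pop(conn_id)
            conn.close()  # best-effort cleanup of client-side state
```

In a real deployment you would tune the cache settings documented on the PQS page rather than roll your own; this sketch just makes the eviction/close lifecycle concrete.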


On 9/10/19 11:28 AM, Aleksandr Saraseka wrote:

Thank you Josh, this is very helpful.
Another question - can we kill long running query in PQS somehow ?


Why we change index state to PENDING_DISABLE on RegionMovedException

2019-09-10 Thread Alexander Batyrshin
As far as I know, RegionMovedException is not a problem at all; it's just a
notification that we need to update the meta information about the table's
regions and retry.
Why do we do the extra work of changing the index state?

2019-09-10 22:35:00,764 WARN  [hconnection-0x4a63b6ea-shared--pool10-t961] client.AsyncProcess: #41, table=IDX_TABLE, attempt=1/1 failed=1ops, last exception: org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=prod023 port=60020 startCode=1568139705179. As of locationSeqNum=93740117. on prod027,60020,1568142287280, tracking started Tue Sep 10 22:35:00 MSK 2019; not retrying 1 - final failure
2019-09-10 22:35:00,789 INFO  [RpcServer.default.FPBQ.Fifo.handler=170,queue=10,port=60020] index.PhoenixIndexFailurePolicy: Successfully update INDEX_DISABLE_TIMESTAMP for IDX_TABLE due to an exception while writing updates. indexState=PENDING_DISABLE
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: disableIndexOnFailure=true, Failed to write to multiple index tables: [IDX_TABLE]
    at org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
    at org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
    at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
    at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
    at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:614)
    at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:589)
    at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:572)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1048)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1711)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1745)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1044)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3677)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3138)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3080)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:916)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:844)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2406)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2380)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)

Re: Why we change index state to PENDING_DISABLE on RegionMovedException

2019-09-10 Thread Vincent Poon
Normally you'd be right: this should get retried at the HBase layer and would
be transparent. However, as part of PHOENIX-4130, we have the HBase client
try the write only once, so there's no chance to retry. We did that to
avoid tying up RPC handlers on the server.
Instead, we retry the entire Phoenix mutation from the client side. The
index is put into "PENDING_DISABLE" so that, if the next write succeeds, it
can flip back to "ACTIVE".
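The described behavior can be modeled as a small state machine. The state names below match Phoenix's, but everything else is illustrative: the real policy lives in PhoenixIndexFailurePolicy, and the actual disable decision is time-based rather than the simple failure count used here.

```python
# Index states, as described in the reply above (a simplification of
# Phoenix's actual index state machine).
ACTIVE, PENDING_DISABLE, DISABLE = "ACTIVE", "PENDING_DISABLE", "DISABLE"


class IndexState:
    """Toy model: a failed index write parks the index in
    PENDING_DISABLE; a subsequent successful write flips it back to
    ACTIVE; persistent failures past a (here: count-based, purely
    illustrative) threshold disable the index for rebuild."""

    def __init__(self, fail_threshold=3):
        self.state = ACTIVE
        self.failures = 0
        self.fail_threshold = fail_threshold

    def on_write_result(self, success):
        if success:
            # The next successful write heals a transient failure.
            self.state, self.failures = ACTIVE, 0
        else:
            self.failures += 1
            if self.failures >= self.fail_threshold:
                self.state = DISABLE        # needs an index rebuild
            else:
                self.state = PENDING_DISABLE
        return self.state
```

The key design point this sketch shows: PENDING_DISABLE is deliberately "soft", so a single transient failure (like the RegionMovedException above) never forces a full index rebuild.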

On Tue, Sep 10, 2019 at 2:29 PM Alexander Batyrshin <0x62...@gmail.com>
wrote:

> As I know RegionMovedException is not a problem at all, its just
> notification that we need to update meta information about table regions
> and retry.
> Why we do extra work with changing state of index?


Re: Why we change index state to PENDING_DISABLE on RegionMovedException

2019-09-10 Thread Geoffrey Jacoby
Just wanted to add that in the new index architecture recently introduced
in Phoenix 4.14.3 and the forthcoming 4.15, the index stays in the ACTIVE
state even if there's a write failure, and the index is transparently
repaired the next time someone reads from the affected key range. From the
client's perspective, indexes will always be in sync. Indexes created with
the older index framework will still work, but will need to be upgraded to
the new framework with the IndexUpgradeTool in order to benefit from the
new behavior.
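The read-repair idea can be sketched as follows. This is a simplified illustration of the design direction in PHOENIX-5156, not the actual code: index rows are written "unverified" first and only marked verified after the data-table write commits, so a reader that hits an unverified row consults the data table (the source of truth) and either re-verifies the row or drops it as stale.

```python
class ReadRepairIndex:
    """Toy two-phase write with read repair (illustrative only)."""

    def __init__(self, data_table):
        self.data = data_table   # key -> value (the data table)
        self.index = {}          # value -> (key, verified) (the index)

    def write(self, key, value, index_write_fails=False):
        # Phase 1: the unverified index row goes in first.
        self.index[value] = (key, False)
        if index_write_fails:
            return               # simulate a crash: data write never happens
        self.data[key] = value
        # Phase 2: mark the index row verified.
        self.index[value] = (key, True)

    def read_by_value(self, value):
        entry = self.index.get(value)
        if entry is None:
            return None
        key, verified = entry
        if not verified:
            # Read repair: check the data table, the source of truth.
            if self.data.get(key) == value:
                self.index[value] = (key, True)   # repair: re-verify
            else:
                del self.index[value]             # stale row: drop it
                return None
        return key
```

Because repair happens lazily at read time, a write failure never leaves the index observably out of sync with the data table, which is why the index can stay ACTIVE throughout.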

We'll be updating the docs on the website soon to reflect that; in the
meantime you can look at PHOENIX-5156 and PHOENIX-5211 if you'd like more
details.

Geoffrey

On Tue, Sep 10, 2019 at 3:02 PM Vincent Poon  wrote:

> Normally you're right, this should get retried at the HBase layer and
> would be transparent.  However as part of PHOENIX-4130, we have the hbase
> client only try the write once, so there's no chance to retry.  We did that
> to avoid tying up rpc handlers on the server.
> Instead, we retry the entire Phoenix mutation from the client side.  The
> index is put into "PENDING_DISABLE", so that if the next write succeeds, it
> can flip back to "ACTIVE".


Re: Why we change index state to PENDING_DISABLE on RegionMovedException

2019-09-10 Thread Alexander Batyrshin
It looks promising, but we need to do a write-performance evaluation of the old
indexes versus the new ones before we can go with this update.

> On 11 Sep 2019, at 01:15, Geoffrey Jacoby  wrote:
> 
> Just wanted to add that in the new index architecture recently introduced in 
> Phoenix 4.14.3 and the forthcoming 4.15, the index stays in ACTIVE state even 
> if there's a write failure, and the index will be transparently repaired the 
> next time someone reads from the affected keyrange. From the client 
> perspective indexes will always be in sync. Indexes created using the older 
> index framework will still work, but will need to be upgraded to the new 
> framework with the IndexUpgradeTool in order to benefit from the new 
> behavior. 
> 
> We'll be updating the docs on the website soon to reflect that; in the 
> meantime you can look at PHOENIX-5156 and PHOENIX-5211 if you'd like more 
> details. 
> 
> Geoffrey