[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431704#comment-16431704
 ] 

James Taylor commented on PHOENIX-4658:
---

+1 on the patch. Thanks, [~brfrn169]. Will get this committed soon.

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query the table with a descending sort
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> Following the above steps, we hit the following exception:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-09 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431669#comment-16431669
 ] 

Samarth Jain commented on PHOENIX-4366:
---

My motivation was simply to get hold of the column-encoding-related values once 
in preScannerOpen and reuse them across the board, instead of fetching them 
from the scan context every time. I made this change on the assumption that 
every region gets its own co-processor instance. Or is it one instance per 
region server? If the former, why is it problematic to store these values as 
member variables, since their scope should be limited to the table's region?
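A minimal sketch of the hazard in question (toy classes, not the actual HBase/Phoenix APIs): even when each region does get its own observer instance, that one instance is shared by every scan on the region, so a field set in preScannerOpen for one scan can be overwritten when a second scan opens before the first finishes iterating.

```java
import java.util.Map;

// Hypothetical stand-ins for HBase/Phoenix types; this is not the real API.
class ToyScan {
    final Map<String, String> attributes;
    ToyScan(Map<String, String> attributes) { this.attributes = attributes; }
}

// Models a coprocessor: one instance per region, shared by all scans on it.
class ToyRegionObserver {
    // Scan-derived value cached as a member variable (the pattern in question).
    private String encodingScheme;

    void preScannerOpen(ToyScan scan) {
        // Fetched once here so per-row calls need not re-read the scan context.
        encodingScheme = scan.attributes.get("ENCODING");
    }

    String encodingSeenByScanner() {
        // Returns whatever the *most recent* preScannerOpen stored.
        return encodingScheme;
    }
}

class CoprocessorStateDemo {
    public static void main(String[] args) {
        ToyRegionObserver observer = new ToyRegionObserver();

        // Scan A opens with one qualifier-encoding scheme...
        observer.preScannerOpen(new ToyScan(Map.of("ENCODING", "TWO_BYTE_QUALIFIERS")));
        // ...then scan B opens on the same region before scan A finishes.
        observer.preScannerOpen(new ToyScan(Map.of("ENCODING", "NON_ENCODED_QUALIFIERS")));

        // Scan A's iteration now reads scan B's value: the field was clobbered.
        System.out.println("scan A sees: " + observer.encodingSeenByScanner());
        // prints "scan A sees: NON_ENCODED_QUALIFIERS"
    }
}
```

Passing the value through the scanner created for each scan (as the patch does) avoids this interleaving entirely.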

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
>   at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
>   at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 

[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-09 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431664#comment-16431664
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

Ping [~jamestaylor] [~an...@apache.org]. Could you please review the patch?

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query the table with a descending sort
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> Following the above steps, we hit the following exception:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-09 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431662#comment-16431662
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Ping [~sergey.soldatov] [~an...@apache.org]. Could you please review the patch?

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create a index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 

Reminder: PhoenixCon 2018 call for abstracts ends in one week!

2018-04-09 Thread Josh Elser

Hi all,

There's just one week left to submit abstracts to PhoenixCon 2018, held 
in San Jose, CA on June 18th.


We need all of you -- developers, users, admins -- to submit talks to make 
this event a great success. No talk is too small.


Please reach out if there are any questions!

You can submit your abstract at https://easychair.org/conferences/?conf=pc18

- Josh (on behalf of the Phoenix PMC)


[jira] [Commented] (PHOENIX-2715) Query Log

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431614#comment-16431614
 ] 

Josh Elser commented on PHOENIX-2715:
-

+1 this is fantastic.

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit trail and as a source of "ground truth" for 
> performance optimization, e.g. which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> Capturing which queries are being run is the first piece. Tying this data 
> into tracing results and perhaps client-side metrics (PHOENIX-1819) would 
> make it very useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4666) Add a subquery cache that persists beyond the life of a query

2018-04-09 Thread Marcell Ortutay (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431546#comment-16431546
 ] 

Marcell Ortutay edited comment on PHOENIX-4666 at 4/10/18 12:20 AM:


An update on this: I've implemented a basic version that re-uses the RHS 
results in a subquery cache. I made a few changes to the original hacky 
implementation that I wanted to get some feedback on.

My code is here: 
[https://github.com/ortutay/phoenix/tree/PHOENIX-4666-subquery-cache] ; please 
note this is a work in progress.

I've changed the following things:
 # In my first implementation, I stored a mapping of subquery hash -> 
ServerCache client side. This works in the single client use case but doesn't 
work if you have a cluster of PQS servers (which is our situation at 23andMe). 
So instead I replaced this with an RPC mechanism. The client will send an RPC 
to each region server, and check if the subquery results are available.
 # Originally I planned to only return a boolean in the RPC check. However, I 
ran into an issue. It turns out that the serialize() method is involved in the 
generation of key ranges that are used in the query [1]. This serialize() 
method is in the addHashCache() code path. In order to make sure this code is 
hit, I am creating a CachedSubqueryResultIterator which is passed to the 
addHashCache() code path. This ensures that all side effects, like the key 
range generation, are the same between cached / uncached code paths.

Would love to get feedback on this approach. For (2) there is an alternate 
approach that also caches the key ranges. This is more efficient but has the 
downside of needing specialized code.

Work still left to do is eviction logic, and hint to enable, and general 
cleanup/testing.

[1]https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/join/HashCacheClient.java#L131


was (Author: ortutay):
An update on this: I've implemented a basic version that re-uses the RHS 
results in a subquery cache. I made a few changes to the original hacky 
implementation that I wanted to get some feedback on.

My code is here: 
[https://github.com/ortutay/phoenix/tree/PHOENIX-4666-subquery-cache] ; please 
note this is a work in progress.

I've changed the following things:
 # In my first implementation, I stored a mapping of subquery hash -> 
ServerCache client side. This works in the single client use case but doesn't 
work if you have a cluster of PQS servers (which is our situation at 23andMe). 
So instead I replaced this with an RPC mechanism. The client will send an RPC 
to each region server, and check if the subquery results are available.
 # Originally I planned to only return a boolean in the RPC check. However, I 
ran into an issue. It turns out that the serialize() method is involved in the 
generation of key ranges that are used in the query [1]. This serialize() 
method is in the addHashCache() code path. In order to make sure this code is 
hit, I am creating a CachedSubqueryResultIterator which is passed to the 
addHashCache() code path. This ensures that all side effects, like the key 
range generation, are the same between cached / uncached code paths.

Would love to get feedback on this approach. For (2) there is an alternate 
approach that also caches the key ranges. This is more efficient but has the 
downside of needing specialized code.

Work still left to do is eviction logic, and hint to enable, and general 
cleanup/testing.

[1]https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/join/HashCacheClient.java#L131

> Add a subquery cache that persists beyond the life of a query
> -
>
> Key: PHOENIX-4666
> URL: https://issues.apache.org/jira/browse/PHOENIX-4666
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Marcell Ortutay
>Assignee: Marcell Ortutay
>Priority: Major
>
> The user list thread for additional context is here: 
> [https://lists.apache.org/thread.html/e62a6f5d79bdf7cd238ea79aed8886816d21224d12b0f1fe9b6bb075@%3Cuser.phoenix.apache.org%3E]
> 
> A Phoenix query may contain expensive subqueries, and moreover those 
> expensive subqueries may be used across multiple different queries. While 
> whole result caching is possible at the application level, it is not possible 
> to cache subresults in the application. This can cause bad performance for 
> queries in which the subquery is the most expensive part of the query, and 
> the application is powerless to do anything at the query level. It would be 
> good if Phoenix provided a way to cache subquery results, as it would provide 
> a significant performance gain.
> An illustrative example:
>     SELECT * FROM table1 JOIN (SELECT id_1 FROM 

[jira] [Commented] (PHOENIX-4666) Add a subquery cache that persists beyond the life of a query

2018-04-09 Thread Marcell Ortutay (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431546#comment-16431546
 ] 

Marcell Ortutay commented on PHOENIX-4666:
--

An update on this: I've implemented a basic version that re-uses the RHS 
results in a subquery cache. I made a few changes to the original hacky 
implementation that I wanted to get some feedback on.

My code is here: 
[https://github.com/ortutay/phoenix/tree/PHOENIX-4666-subquery-cache] ; please 
note this is a work in progress.

I've changed the following things:
 # In my first implementation, I stored a mapping of subquery hash -> 
ServerCache client side. This works in the single client use case but doesn't 
work if you have a cluster of PQS servers (which is our situation at 23andMe). 
So instead I replaced this with an RPC mechanism. The client will send an RPC 
to each region server, and check if the subquery results are available.
 # Originally I planned to only return a boolean in the RPC check. However, I 
ran into an issue. It turns out that the serialize() method is involved in the 
generation of key ranges that are used in the query [1]. This serialize() 
method is in the addHashCache() code path. In order to make sure this code is 
hit, I am creating a CachedSubqueryResultIterator which is passed to the 
addHashCache() code path. This ensures that all side effects, like the key 
range generation, are the same between cached / uncached code paths.

Would love to get feedback on this approach. For (2) there is an alternate 
approach that also caches the key ranges. This is more efficient but has the 
downside of needing specialized code.

Work still left to do is eviction logic, and hint to enable, and general 
cleanup/testing.

[1]https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/join/HashCacheClient.java#L131
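To make the described mechanism concrete, here is a rough, self-contained sketch (all names here are hypothetical, not the patch's actual code): the subquery's normalized SQL is hashed into a stable cache key, each region server is asked via a stub "do you still hold results for this key?" RPC, and the client re-sends the RHS hash cache only to servers that report a miss.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.stream.Collectors;

class SubqueryCacheSketch {

    // Stable cache key: SHA-256 of the subquery's normalized SQL text, so the
    // same subquery hashes to the same key across different outer queries.
    static String cacheKey(String subquerySql) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(subquerySql.trim().toLowerCase(Locale.ROOT)
                            .getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Stand-in for the per-region-server "is this subquery cached?" RPC.
    interface CacheCheckRpc {
        boolean hasSubqueryResults(String key);
    }

    // The client re-sends the RHS hash cache only to servers reporting a miss.
    static List<String> serversNeedingResend(Map<String, CacheCheckRpc> servers,
                                             String key) {
        return servers.entrySet().stream()
                .filter(e -> !e.getValue().hasSubqueryResults(key))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String key = cacheKey("SELECT id_1 FROM large_table WHERE x = 10");
        Map<String, CacheCheckRpc> servers = new HashMap<>();
        servers.put("rs1", k -> true);   // rs1 still holds the cached RHS
        servers.put("rs2", k -> false);  // rs2 evicted it (e.g. TTL expired)
        System.out.println("resend RHS to: " + serversNeedingResend(servers, key));
    }
}
```

The sketch deliberately omits the serialize()/key-range subtlety discussed above; in the real code path the cached results would still flow through addHashCache() so those side effects stay identical.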

> Add a subquery cache that persists beyond the life of a query
> -
>
> Key: PHOENIX-4666
> URL: https://issues.apache.org/jira/browse/PHOENIX-4666
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Marcell Ortutay
>Assignee: Marcell Ortutay
>Priority: Major
>
> The user list thread for additional context is here: 
> [https://lists.apache.org/thread.html/e62a6f5d79bdf7cd238ea79aed8886816d21224d12b0f1fe9b6bb075@%3Cuser.phoenix.apache.org%3E]
> 
> A Phoenix query may contain expensive subqueries, and moreover those 
> expensive subqueries may be used across multiple different queries. While 
> whole result caching is possible at the application level, it is not possible 
> to cache subresults in the application. This can cause bad performance for 
> queries in which the subquery is the most expensive part of the query, and 
> the application is powerless to do anything at the query level. It would be 
> good if Phoenix provided a way to cache subquery results, as it would provide 
> a significant performance gain.
> An illustrative example:
>     SELECT * FROM table1 JOIN (SELECT id_1 FROM large_table WHERE x = 10) 
> expensive_result ON table1.id_1 = expensive_result.id_2 AND table1.id_1 = 
> \{id}
> In this case, the subquery "expensive_result" is expensive to compute, but it 
> doesn't change between queries. The rest of the query does because of the 
> \{id} parameter. This means the application can't cache it, but it would be 
> good if there was a way to cache expensive_result.
> Note that there is currently a coprocessor based "server cache", but the data 
> in this "cache" is not persisted across queries. It is deleted after a TTL 
> expires (30sec by default), or when the query completes.
> This issue is fairly high priority for us at 23andMe, and we'd be happy to 
> provide a patch with some guidance from Phoenix maintainers. We are currently 
> putting together a design document for a solution, and we'll post it to this 
> Jira ticket for review in a few days.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431529#comment-16431529
 ] 

James Taylor edited comment on PHOENIX-4366 at 4/10/18 12:08 AM:
-

Please review, [~sergey.soldatov]. Gets rid of state in region observer and 
factory class. [~samarthjain] - any idea why these were made member variables 
of the RegionObserver and initialized in preScannerOpen (as opposed to getting 
them when you need them) in your initial commit of column encoding: 
https://github.com/apache/phoenix/commit/3c7ff99bfb958774c3e2ba5d3714ccfc46bd2367#diff-0a0c4ad0076e138eb2bdf5daced0e909


was (Author: jamestaylor):
Please review, [~sergey.soldatov]. Gets rid of state in region observer and 
factory class. [~samarthjain] - any idea why these were made member variables 
of the RegionObserver and initialized in preScannerOpen (as opposed to getting 
them when you need them)?

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
>   at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
>   at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)

[jira] [Commented] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431529#comment-16431529
 ] 

James Taylor commented on PHOENIX-4366:
---

Please review, [~sergey.soldatov]. Gets rid of state in region observer and 
factory class. [~samarthjain] - any idea why these were made member variables 
of the RegionObserver and initialized in preScannerOpen (as opposed to getting 
them when you need them)?

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
> at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5735)
> at 
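The {noformat} block above ends in QualifierEncodingScheme$1.decode throwing UnsupportedOperationException from EncodedColumnQualiferCellsList.add. One plausible reading of that trace can be sketched as a toy model (Python for brevity; the scheme names, decode behavior, and add() logic here are illustrative stand-ins, not the actual Phoenix implementation): a cell list that positions cells by decoded qualifier breaks when handed a qualifier written under a scheme that does not support decoding.

```python
# Toy model (NOT the Phoenix code): decode() invoked on a scheme without
# decode support raises, mirroring the UnsupportedOperationException above.
class NonEncodedScheme:
    """Illustrative stand-in for a scheme whose qualifiers are not decodable."""
    def decode(self, qualifier_bytes):
        raise NotImplementedError("non-encoded qualifiers cannot be decoded")

class TwoByteScheme:
    """Illustrative stand-in for a fixed-width numeric qualifier scheme."""
    def decode(self, qualifier_bytes):
        return int.from_bytes(qualifier_bytes, "big")

def add_cell(scheme, qualifier_bytes, cells):
    # Mirrors the idea of positioning a cell by its decoded qualifier.
    cells[scheme.decode(qualifier_bytes)] = qualifier_bytes

cells = {}
add_cell(TwoByteScheme(), b"\x00\x0b", cells)   # works: decodes to 11
try:
    add_cell(NonEncodedScheme(), b"COL3", cells)
except NotImplementedError as e:
    print("decode failed:", e)
```

The point of the sketch is only that the failure is a scheme mismatch at decode time, not a scan-level problem.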

[jira] [Updated] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4366:
--
Attachment: PHOENIX-4366_v1.patch

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
> at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5735)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5891)
> at 

[jira] [Assigned] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4366:
-

Assignee: James Taylor

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f30c9d449ed6c60a1cda6898f766bd0.: null
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.UnsupportedOperationException
> at org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
> at org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560)
> at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5735)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5891)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5669)

[jira] [Commented] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431486#comment-16431486
 ] 

Hudson commented on PHOENIX-4683:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #83 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/83/])
PHOENIX-4683 Cap timeouts for stats precompact hook logic (vincentpoon: rev 
775c046ea66a9d0da30aa2e482a01ebf5b9a1a43)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriterUtils.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionCoprocessorEnvironment.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java


> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config 
> which in turn contains the RS server rpc timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.
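The capping described above can be sketched as follows (Python for brevity; `cap_timeout` and the one-minute cap are illustrative names and values, not the actual Phoenix constants or Configuration API):

```python
# Illustrative sketch: clamp the long server-wide RPC timeout to a smaller
# cap before the stats collector uses it in the precompact hook.
RS_RPC_TIMEOUT_MS = 20 * 60 * 1000   # the 20-minute RS rpc timeout from the env config
COMPACTION_CAP_MS = 60 * 1000        # hypothetical cap for the compaction hook

def cap_timeout(configured_ms: int, cap_ms: int) -> int:
    """Return the effective timeout: never longer than the cap."""
    return min(configured_ms, cap_ms)

effective = cap_timeout(RS_RPC_TIMEOUT_MS, COMPACTION_CAP_MS)
print(effective)  # -> 60000: the 20-minute timeout is reduced to the cap
```

The actual patch works on a copied coprocessor environment config; the sketch only shows the clamping idea.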



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431469#comment-16431469
 ] 

Hudson commented on PHOENIX-4683:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1849 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1849/])
PHOENIX-4683 Cap timeouts for stats precompact hook logic (vincentpoon: rev 
a9ddf1709cb898c2e2508bd4e931b1b8d272e8a6)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriterUtils.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionCoprocessorEnvironment.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java


> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config 
> which in turn contains the RS server rpc timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431394#comment-16431394
 ] 

Josh Elser commented on PHOENIX-4534:
-

Great digging, Sergey!

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}
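The null index value in the repro above is consistent with HBase delete-marker visibility: a delete marker at timestamp T masks puts with timestamps <= T until the marker is purged by a major compaction, so a re-upsert that lands at (or below) the delete's timestamp stays hidden. A toy model of that rule (simplified; real HBase tracks versions and several marker types per cell):

```python
# Toy model of HBase delete-marker visibility, not the index codepath itself.
def visible_value(puts, delete_ts):
    """puts: list of (timestamp, value). A delete marker at delete_ts masks
    every put with timestamp <= delete_ts; return the newest unmasked value."""
    live = [(ts, v) for ts, v in puts if delete_ts is None or ts > delete_ts]
    if not live:
        return None
    return max(live)[1]  # newest surviving put wins

print(visible_value([(100, 0.5)], None))  # 0.5: no delete, value visible
print(visible_value([(100, 0.5)], 100))   # None: put masked by delete at same ts
```

Under this model, only a re-upsert with a strictly newer timestamp than the delete becomes visible again.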





[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431377#comment-16431377
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

Just confirmed that with v3 patch applied 
PartialIndexRebuilderIT#testWriteWhileRebuilding passed.

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}





[jira] [Commented] (PHOENIX-4682) UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw exceptions

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431372#comment-16431372
 ] 

Hudson commented on PHOENIX-4682:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1824 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1824/])
PHOENIX-4682 UngroupedAggregateRegionObserver preCompactScannerOpen hook 
(vincentpoon: rev 2da904ebcb84d03231cccae298d78b0add1012ba)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
Revert "PHOENIX-4682 UngroupedAggregateRegionObserver (vincentpoon: rev 
49610d188a34e078514cfc61560e2389933b80b0)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
PHOENIX-4682 UngroupedAggregateRegionObserver preCompactScannerOpen hook 
(vincentpoon: rev 701c447d366977eaa3d28d99940faf2bff958085)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java


> UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw 
> exceptions
> ---
>
> Key: PHOENIX-4682
> URL: https://issues.apache.org/jira/browse/PHOENIX-4682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4682.master.v1.patch, 
> PHOENIX-4682.v2.0.98.patch, PHOENIX-4682.v2.master.patch, 
> PHOENIX-4682.v3.5.x.patch, PHOENIX-4682.v3.master.patch
>
>
> TableNotFoundException in the preCompactScannerOpen hook can lead to RS abort.
> Some tables might have the phoenix coprocessor loaded but not be actual 
> Phoenix tables (i.e. do not have a row in SYSTEM.CATALOG). We should ignore these 
> Exceptions instead of throwing them.
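The "ignore instead of throw" behavior described above can be sketched as follows (names and signatures are illustrative, not the actual Phoenix/HBase hook API):

```python
# Sketch: if the table has no Phoenix metadata, hand back the compaction
# scanner unmodified instead of propagating an exception that could abort
# the region server.
class TableNotFoundError(Exception):
    """Stands in for HBase's TableNotFoundException."""

def pre_compact_scanner_open(scanner, lookup_phoenix_table, wrap_with_stats):
    try:
        ptable = lookup_phoenix_table()
    except TableNotFoundError:
        return scanner  # no SYSTEM.CATALOG row: leave compaction alone
    return wrap_with_stats(scanner, ptable)

def missing_table():
    raise TableNotFoundError("no SYSTEM.CATALOG row for this table")

result = pre_compact_scanner_open("raw-scanner", missing_table,
                                  lambda s, t: ("stats-wrapped", s))
print(result)  # -> raw-scanner: the hook degrades gracefully
```

Tables that do resolve to Phoenix metadata still get the stats-collecting wrapper; only the lookup failure path changes.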





[jira] [Commented] (PHOENIX-4616) Move join query optimization out from QueryCompiler into QueryOptimizer

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431373#comment-16431373
 ] 

Hudson commented on PHOENIX-4616:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1824 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1824/])
PHOENIX-4616 Move join query optimization out from QueryCompiler into 
(maryannxue: rev 49fca494bf9e13918db558e8276676e3dfda9d74)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
PHOENIX-4616 Move join query optimization out from QueryCompiler into 
(maryannxue: rev 0b1b219ef0e803d7ff254408c24b4bb67a5d88f9)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java


> Move join query optimization out from QueryCompiler into QueryOptimizer
> ---
>
> Key: PHOENIX-4616
> URL: https://issues.apache.org/jira/browse/PHOENIX-4616
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4616.patch
>
>
> Currently we do optimization for join queries inside QueryCompiler, which 
> makes the APIs and code logic confusing, so we need to move join optimization 
> logic into QueryOptimizer.
>  Similarly, but probably with a different approach, we need to optimize UNION 
> ALL queries and derived table sub-queries in QueryOptimizer.optimize().
> Please also refer to this comment:
> https://issues.apache.org/jira/browse/PHOENIX-4585?focusedCommentId=16367616=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16367616





[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Affects Version/s: 4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}





[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Fix Version/s: 4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}





[jira] [Commented] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431374#comment-16431374
 ] 

Hudson commented on PHOENIX-4683:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1824 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1824/])
PHOENIX-4683 Cap timeouts for stats precompact hook logic (vincentpoon: rev 
28c11fe3f4647f192849adb9c2567cff9c405bbb)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriterUtils.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionCoprocessorEnvironment.java


> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config 
> which in turn contains the RS server rpc timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431360#comment-16431360
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

Looks like those changes were made in HBase 1.4 as well, and that's the reason 
the master branch has a number of index failures related to the 
upsert/delete/upsert of a row. [~elserj] that's one of the problems I mentioned 
earlier. 

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}





[jira] [Commented] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431344#comment-16431344
 ] 

Sergey Soldatov commented on PHOENIX-4685:
--

I believe it's all related. Actually, I mentioned both. The last problem: when 
we open a region, we create a separate connection to get the admin, so we 
easily hit the 60-connection ZooKeeper client limit when, for example, we 
create a table with 60+ salt buckets. That may also be why we run out of 
threads: for each connection we create a number of threads.  
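The reuse direction implied by the comment can be sketched as follows (illustrative, not the actual Phoenix fix): cache one shared connection per process instead of opening a fresh one on each region open, so ZK client connections (and the threads each carries) do not scale with the number of salt buckets/regions.

```python
# Sketch: lazily create and cache a single shared connection.
import threading

_lock = threading.Lock()
_shared = None

def get_connection(factory):
    """Return the process-wide connection, creating it on first use."""
    global _shared
    with _lock:
        if _shared is None:
            _shared = factory()
    return _shared

opened = []
def factory():
    opened.append(object())  # track how many real connections get opened
    return opened[-1]

conns = [get_connection(factory) for _ in range(60)]  # e.g. 60 region opens
print(len(opened), all(c is conns[0] for c in conns))  # -> 1 True
```

With this pattern, sixty region opens cost one ZK client connection instead of sixty.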

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685_jstack
>
>
> Currently, trying to write data to an indexed table fails with an OOME 
> (unable to create native threads), but it works fine on the 4.7.x branches. 
> We found many threads created for meta lookups and shared pools, with no 
> space left to create new threads. This happens even with short-circuit 
> writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 

[jira] [Commented] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431316#comment-16431316
 ] 

Vincent Poon commented on PHOENIX-4683:
---

Pushed to:

4.x-HBase-0.98

4.x-HBase-1.1

4.x-HBase-1.2

4.x-HBase-1.3

5.x-HBase-2.0

> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config 
> which in turn contains the RS server rpc timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Resolved] (PHOENIX-4682) UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw exceptions

2018-04-09 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4682.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0

> UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw 
> exceptions
> ---
>
> Key: PHOENIX-4682
> URL: https://issues.apache.org/jira/browse/PHOENIX-4682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4682.master.v1.patch, 
> PHOENIX-4682.v2.0.98.patch, PHOENIX-4682.v2.master.patch, 
> PHOENIX-4682.v3.5.x.patch, PHOENIX-4682.v3.master.patch
>
>
> TableNotFoundException in the preCompactScannerOpen hook can lead to an RS abort.
> Some tables might have the Phoenix coprocessor loaded but not be actual 
> Phoenix tables (i.e. not have a row in SYSTEM.CATALOG).  We should ignore these 
> exceptions instead of throwing them.
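The shape of the fix described above is to catch the exception inside the hook and let the compaction continue. This is a hedged sketch with hypothetical names (`SafePreCompactHook`, `StatsCollector`, and a stand-in exception class), not the real coprocessor code:

```java
public class SafePreCompactHook {
    // Stand-in for org.apache.hadoop.hbase.TableNotFoundException.
    static class TableNotFoundException extends RuntimeException {}

    interface StatsCollector { void collect(); }

    // Returns true if stats were collected, false if skipped.
    static boolean preCompactScannerOpen(StatsCollector stats) {
        try {
            stats.collect();
            return true;
        } catch (TableNotFoundException e) {
            // Not a Phoenix table (no row in SYSTEM.CATALOG): skip stats
            // collection instead of propagating and aborting the RS.
            return false;
        }
    }

    public static void main(String[] args) {
        boolean collected = preCompactScannerOpen(() -> {
            throw new TableNotFoundException();
        });
        System.out.println(collected); // false, but compaction proceeds
    }
}
```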





[jira] [Commented] (PHOENIX-4682) UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw exceptions

2018-04-09 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431312#comment-16431312
 ] 

Vincent Poon commented on PHOENIX-4682:
---

Pushed to:

4.x-HBase-0.98

4.x-HBase-1.1

4.x-HBase-1.2

4.x-HBase-1.3

5.x-HBase-2.0

> UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw 
> exceptions
> ---
>
> Key: PHOENIX-4682
> URL: https://issues.apache.org/jira/browse/PHOENIX-4682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4682.master.v1.patch, 
> PHOENIX-4682.v2.0.98.patch, PHOENIX-4682.v2.master.patch, 
> PHOENIX-4682.v3.5.x.patch, PHOENIX-4682.v3.master.patch
>
>
> TableNotFoundException in the preCompactScannerOpen hook can lead to an RS abort.
> Some tables might have the Phoenix coprocessor loaded but not be actual 
> Phoenix tables (i.e. not have a row in SYSTEM.CATALOG).  We should ignore these 
> exceptions instead of throwing them.





[jira] [Resolved] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4683.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0

> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config, 
> which in turn contains the RS RPC timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Commented] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431275#comment-16431275
 ] 

Vincent Poon commented on PHOENIX-4683:
---

Port for the 5.x branch. There's no longer a getTable() on the env, so instead, 
in the stats classes, I create the connection with the env config.

> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config, 
> which in turn contains the RS RPC timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Updated] (PHOENIX-4683) Cap timeouts for stats precompact hook logic

2018-04-09 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4683:
--
Attachment: PHOENIX-4683.v5.5.x.patch

> Cap timeouts for stats precompact hook logic
> 
>
> Key: PHOENIX-4683
> URL: https://issues.apache.org/jira/browse/PHOENIX-4683
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4683.v1.0.98.patch, PHOENIX-4683.v2.0.98.patch, 
> PHOENIX-4683.v3.0.98.patch, PHOENIX-4683.v4.0.98.patch, 
> PHOENIX-4683.v5.0.98.patch, PHOENIX-4683.v5.5.x.patch
>
>
> In UngroupedAggregateRegionObserver#preCompact we call 
> DefaultStatisticsCollector.createCompactionScanner.  It uses the env config, 
> which in turn contains the RS RPC timeout of 20 minutes.  That's too 
> long for a compaction hook.
> Like in PHOENIX-4169, we should cap the timeout so the compaction doesn't get 
> blocked.





[jira] [Updated] (PHOENIX-4682) UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw exceptions

2018-04-09 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4682:
--
Attachment: PHOENIX-4682.v3.5.x.patch

> UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw 
> exceptions
> ---
>
> Key: PHOENIX-4682
> URL: https://issues.apache.org/jira/browse/PHOENIX-4682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4682.master.v1.patch, 
> PHOENIX-4682.v2.0.98.patch, PHOENIX-4682.v2.master.patch, 
> PHOENIX-4682.v3.5.x.patch, PHOENIX-4682.v3.master.patch
>
>
> TableNotFoundException in the preCompactScannerOpen hook can lead to an RS abort.
> Some tables might have the Phoenix coprocessor loaded but not be actual 
> Phoenix tables (i.e. not have a row in SYSTEM.CATALOG).  We should ignore these 
> exceptions instead of throwing them.





[jira] [Commented] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431098#comment-16431098
 ] 

Josh Elser commented on PHOENIX-4685:
-

[~sergey.soldatov], I remember you mentioning something like this around 
CoprocessorConnections (or was it ZK related?..)

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685_jstack
>
>
> Currently, trying to write data to an indexed table fails with an OOME, being 
> unable to create native threads, but it works fine with the 4.7.x branches. 
> Found many threads created for meta lookups and shared pools, with no space 
> left to create new threads. This is happening even with short-circuit writes 
> enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
>  ... 25 more
> Caused by: 

[jira] [Commented] (PHOENIX-2715) Query Log

2018-04-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431090#comment-16431090
 ] 

Andrew Purtell commented on PHOENIX-2715:
-

Awesome, thanks so much

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who's run them, the time taken, and so on. This 
> serves both as an audit and also as a source of "ground truth" for performance 
> optimization. For instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.





[jira] [Commented] (PHOENIX-4672) Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB

2018-04-09 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431071#comment-16431071
 ] 

Lev Bronshtein commented on PHOENIX-4672:
-

I think we're all ok with this change

> Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB
> -
>
> Key: PHOENIX-4672
> URL: https://issues.apache.org/jira/browse/PHOENIX-4672
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4672.diff
>
>
> The HTTP-specific kerberos credentials implemented in PHOENIX-4533 introduce 
> some ambiguity: It is presently 
> {{phoenix.queryserver.kerberos.http.principal}}, but it should be 
> {{phoenix.queryserver.http.kerberos.principal}} to match the rest of Hadoop, 
> HBase, and Phoenix configuration kerberos principal properties.
> Need to update docs too.
> FYI [~lbronshtein]





[jira] [Comment Edited] (PHOENIX-4672) Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB

2018-04-09 Thread Lev Bronshtein (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431071#comment-16431071
 ] 

Lev Bronshtein edited comment on PHOENIX-4672 at 4/9/18 7:05 PM:
-

I think we're all ok with this change


was (Author: lbronshtein):
I think we're all ok with this chnage

> Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB
> -
>
> Key: PHOENIX-4672
> URL: https://issues.apache.org/jira/browse/PHOENIX-4672
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4672.diff
>
>
> The HTTP-specific kerberos credentials implemented in PHOENIX-4533 introduce 
> some ambiguity: It is presently 
> {{phoenix.queryserver.kerberos.http.principal}}, but it should be 
> {{phoenix.queryserver.http.kerberos.principal}} to match the rest of Hadoop, 
> HBase, and Phoenix configuration kerberos principal properties.
> Need to update docs too.
> FYI [~lbronshtein]





[jira] [Commented] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431067#comment-16431067
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4528:
--

Uploaded a WIP patch for the 5.x branch. Might need changes in HBase to get user 
permissions here, since the existing shaded classes don't have a method to get a 
user's permissions.

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.master.001.patch, 
> PHOENIX-4528.repro-test.diff, PHOENIX-4528_5.x-HBase-2.0.patch
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes in the tableName as a parameter and checks user permissions only at 
> the table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be created 
> as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is enabled. 
> View creation on this table would fail if permissions are granted to just 
> {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the same 
> permissions are granted at the table level too.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]
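The namespace-then-table check described above can be modeled with plain JDK collections. This is an illustrative sketch only: the class and the "ns" / "ns:table" grant keys are invented for the example; the real code consults the HBase AccessController.

```java
import java.util.HashSet;
import java.util.Set;

public class ViewPermissionCheck {
    // Grants are keyed by "namespace" or "namespace:table".
    static boolean hasAccess(Set<String> grants, String namespace, String table) {
        // A namespace-level grant covers every table in it, so check it first,
        // then fall back to the table-level grant.
        return grants.contains(namespace)
                || grants.contains(namespace + ":" + table);
    }

    public static void main(String[] args) {
        Set<String> grants = new HashSet<>();
        grants.add("TEST_SCHEMA"); // permission granted only at namespace level
        // With the table-only check from the bug, this would fail; with the
        // namespace check added, view creation is allowed.
        System.out.println(hasAccess(grants, "TEST_SCHEMA", "TEST_TABLE")); // true
    }
}
```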





[jira] [Updated] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4528:
-
Attachment: PHOENIX-4528_5.x-HBase-2.0.patch

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.master.001.patch, 
> PHOENIX-4528.repro-test.diff, PHOENIX-4528_5.x-HBase-2.0.patch
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes in the tableName as a parameter and checks user permissions only at 
> the table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be created 
> as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is enabled. 
> View creation on this table would fail if permissions are granted to just 
> {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the same 
> permissions are granted at the table level too.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]





[jira] [Updated] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4528:
-
Fix Version/s: 5.0.0

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.master.001.patch, 
> PHOENIX-4528.repro-test.diff
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes in the tableName as a parameter and checks user permissions only at 
> the table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be created 
> as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is enabled. 
> View creation on this table would fail if permissions are granted to just 
> {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the same 
> permissions are granted at the table level too.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]





[jira] [Updated] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4528:
-
Fix Version/s: (was: 5.0.0)

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.master.001.patch, 
> PHOENIX-4528.repro-test.diff
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes in the tableName as a parameter and checks user permissions only at 
> the table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be created 
> as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is enabled. 
> View creation on this table would fail if permissions are granted to just 
> {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the same 
> permissions are granted at the table level too.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]





[jira] [Commented] (PHOENIX-4653) Upgrading from namespace enabled cluster to latest version failing with UpgradeInProgressException

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431064#comment-16431064
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4653:
--

[~jamestaylor] [~elserj] Can you please review? We need to check whether the mutex 
table exists even when namespace mapping is enabled, because when namespaces were 
enabled in older versions the system tables still need to be upgraded, so we need 
to create the SYSTEM.MUTEX table.
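The check being proposed amounts to "create SYSTEM.MUTEX if it is missing, regardless of the namespace-mapping setting". A minimal sketch, with a plain Set standing in for the HBase admin's table catalog (the class and method names here are assumptions, not the patch's actual names):

```java
import java.util.HashSet;
import java.util.Set;

public class MutexTableUpgrade {
    // Ensure SYSTEM.MUTEX exists before the upgrade tries to acquire the
    // upgrade mutex; clusters upgraded from old namespace-enabled versions
    // may not have it yet.
    static void ensureMutexTable(Set<String> existingTables) {
        if (!existingTables.contains("SYSTEM.MUTEX")) {
            // Stands in for admin.createTable(...) in the real code.
            existingTables.add("SYSTEM.MUTEX");
        }
    }

    public static void main(String[] args) {
        Set<String> tables = new HashSet<>(); // old cluster: no mutex table
        ensureMutexTable(tables);
        System.out.println(tables.contains("SYSTEM.MUTEX")); // true
    }
}
```

Without this, acquireUpgradeMutex fails against the missing table and surfaces as the UpgradeInProgressException shown in the description.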

>  Upgrading from namespace enabled cluster to latest version failing with 
> UpgradeInProgressException
> ---
>
> Key: PHOENIX-4653
> URL: https://issues.apache.org/jira/browse/PHOENIX-4653
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4653.patch
>
>
> Currently the SYSTEM.MUTEX table is never created when namespaces were already 
> enabled in an older version and we try to upgrade to the latest version, so the 
> upgrade fails with the following error.
> {noformat}
> Error: Cluster is being concurrently upgraded from 4.7.x to 5.0.x. Please 
> retry establishing connection. (state=INT12,code=2010)
> org.apache.phoenix.exception.UpgradeInProgressException: Cluster is being 
> concurrently upgraded from 4.7.x to 5.0.x. Please retry establishing 
> connection.
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3301)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2680)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2524)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.close(Commands.java:906)
>   at sqlline.Commands.quit(Commands.java:870)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}





[jira] [Updated] (PHOENIX-4653) Upgrading from namespace enabled cluster to latest version failing with UpgradeInProgressException

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4653:
-
Attachment: PHOENIX-4653.patch

>  Upgrading from namespace enabled cluster to latest version failing with 
> UpgradeInProgressException
> ---
>
> Key: PHOENIX-4653
> URL: https://issues.apache.org/jira/browse/PHOENIX-4653
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4653.patch
>
>
> Currently the SYSTEM.MUTEX table is never created when namespaces were already 
> enabled in an older version and we try to upgrade to the latest version, so the 
> upgrade fails with the following error.
> {noformat}
> Error: Cluster is being concurrently upgraded from 4.7.x to 5.0.x. Please 
> retry establishing connection. (state=INT12,code=2010)
> org.apache.phoenix.exception.UpgradeInProgressException: Cluster is being 
> concurrently upgraded from 4.7.x to 5.0.x. Please retry establishing 
> connection.
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.acquireUpgradeMutex(ConnectionQueryServicesImpl.java:3301)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:2680)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2524)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2417)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.close(Commands.java:906)
>   at sqlline.Commands.quit(Commands.java:870)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}





[jira] [Updated] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-04-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4605:
--
Attachment: PHOENIX-4605_wip1.patch

> Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using 
> boolean
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4605_wip1.patch
>
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.





[jira] [Updated] (PHOENIX-4605) Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using boolean

2018-04-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4605:
--
Attachment: (was: PHOENIX-4605_wip1.patch)

> Add TRANSACTION_PROVIDER and DEFAULT_TRANSACTION_PROVIDER instead of using 
> boolean
> --
>
> Key: PHOENIX-4605
> URL: https://issues.apache.org/jira/browse/PHOENIX-4605
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>
> We should deprecate QueryServices.DEFAULT_TABLE_ISTRANSACTIONAL_ATTRIB and 
> instead have a QueryServices.DEFAULT_TRANSACTION_PROVIDER now that we'll have 
> two transaction providers: Tephra and Omid. Along the same lines, we should 
> add a TRANSACTION_PROVIDER column to SYSTEM.CATALOG  and stop using the 
> IS_TRANSACTIONAL table property. For backwards compatibility, we can assume 
> the provider is Tephra if the existing properties are set to true.





[jira] [Resolved] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-672.

Resolution: Fixed

Ok, relevant ITs are passing for me. Pushed this up to 5.x-HBase-2.0.

Sorry for the noise, folks!

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672.addendum-5.x.patch, 
> PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430901#comment-16430901
 ] 

Karan Mehta commented on PHOENIX-672:
-

Thanks [~elserj]!

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672.addendum-5.x.patch, 
> PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430857#comment-16430857
 ] 

Josh Elser commented on PHOENIX-672:


Attached an addendum for posterity. UTs have passed, running the relevant ITs.

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672.addendum-5.x.patch, 
> PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Updated] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-672:
---
Attachment: PHOENIX-672.addendum-5.x.patch

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672.addendum-5.x.patch, 
> PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430844#comment-16430844
 ] 

Josh Elser commented on PHOENIX-672:


Comparing the two commits, it looks like ChangePermsStatement.java and 
TablesNotInSyncException.java were simply missed (they are brand-new files, 
probably an overlooked git-add).

Trying my hand at fixing.
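
The kind of check involved here — spotting brand-new files present in one commit but absent from another — can be sketched like this. A hedged illustration, assuming the two file lists were obtained separately (e.g. via `git ls-tree -r --name-only <commit>`); the helper name is made up.

```python
def missing_files(source_commit_files, target_commit_files):
    """Files in the source commit that never made it into the target commit,
    i.e. the symptom of an overlooked `git add` when porting a change."""
    return sorted(set(source_commit_files) - set(target_commit_files))
```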

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430798#comment-16430798
 ] 

Josh Elser commented on PHOENIX-672:


[~rajeshbabu], actually, maybe this was you, and not Karan (oops)

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>Priority: Major
>  Labels: namespaces, security
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch, PHOENIX-672_5.x-HBase-2.0
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.





[jira] [Reopened] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2018-04-09 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reopened PHOENIX-672:


[~karanmehta93], this is breaking compilation on 5.x-HBase-2.0.
{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on 
project phoenix-core: Compilation failure: Compilation failure:
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:[102,32]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: package org.apache.phoenix.parse
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNodeFactory.java:[928,12]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.ParseNodeFactory
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/target/generated-sources/antlr3/org/apache/phoenix/parse/PhoenixSQLParser.java:[1408,22]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.PhoenixSQLParser
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/target/generated-sources/antlr3/org/apache/phoenix/parse/PhoenixSQLParser.java:[1547,22]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.PhoenixSQLParser
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[174,32]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: package org.apache.phoenix.parse
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4450,44]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.schema.MetaDataClient
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4492,75]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.schema.MetaDataClient
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4500,88]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.schema.MetaDataClient
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4555,74]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.schema.MetaDataClient
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4566,73]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.schema.MetaDataClient
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:[1177,65]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.jdbc.PhoenixStatement
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:[1177,20]
 org.apache.phoenix.jdbc.PhoenixStatement.ExecutableChangePermsStatement is not 
abstract and does not override abstract method getOperation() in 
org.apache.phoenix.parse.BindableStatement
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:[1188,54]
 cannot find symbol
[ERROR]   symbol: method getOperation()
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/parse/ParseNodeFactory.java:[930,20]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.ParseNodeFactory
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/target/generated-sources/antlr3/org/apache/phoenix/parse/PhoenixSQLParser.java:[1409,17]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.PhoenixSQLParser
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/target/generated-sources/antlr3/org/apache/phoenix/parse/PhoenixSQLParser.java:[1548,17]
 cannot find symbol
[ERROR]   symbol:   class ChangePermsStatement
[ERROR]   location: class org.apache.phoenix.parse.PhoenixSQLParser
[ERROR] 
/Users/jelser/projects/phoenix-copy.git/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java:[4533,23]
 cannot find symbol
[ERROR]   symbol:   class TablesNotInSyncException
[ERROR]   location: class 

[jira] [Updated] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4685:
-
Description: 
Trying to write data to an indexed table currently fails with an OOME because the 
JVM is unable to create new native threads, though it works fine on the 4.7.x 
branches. Many threads are created for meta lookups and shared pools, leaving no 
room for new ones. This happens even with short-circuit writes enabled.
{noformat}
2018-04-08 13:06:04,747 WARN  
[RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
index.PhoenixIndexFailurePolicy: handleFailure failed
java.io.IOException: java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
at 
org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
 ... 25 more
Caused by: java.lang.Exception: java.lang.OutOfMemoryError: unable to create 
new native thread
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy$1.run(PhoenixIndexFailurePolicy.java:266)
at 
org.apache.phoenix.index.PhoenixIndexFailurePolicy$1.run(PhoenixIndexFailurePolicy.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
... 32 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at 

[jira] [Updated] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4685:
-
Attachment: PHOENIX-4685_jstack

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685_jstack
>
>
> Trying to write data to an indexed table currently fails with an OOME because 
> the JVM is unable to create new native threads, though it works fine on the 
> 4.7.x branches. Many threads are created for meta lookups and shared pools, 
> leaving no room for new ones.





[jira] [Created] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4685:


 Summary: Parallel writes continuously to indexed table failing 
with OOME very quickly in 5.x branch
 Key: PHOENIX-4685
 URL: https://issues.apache.org/jira/browse/PHOENIX-4685
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0


Trying to write data to an indexed table currently fails with an OOME because the 
JVM is unable to create new native threads, though it works fine on the 4.7.x 
branches. Many threads are created for meta lookups and shared pools, leaving no 
room for new ones.
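
The stack trace above is the classic symptom of creating threads (or a fresh thread pool) per operation instead of reusing one shared, bounded executor: the native-thread count grows with load until the OS refuses to create more. A minimal sketch of the reuse pattern, in Python for brevity — this is not the Phoenix code, just an illustration of the design choice.

```python
# Submitting work to one shared, bounded pool keeps the native-thread count
# flat no matter how many operations arrive. Spawning a new executor per
# request would instead leak threads until hitting the OS limit (the OOME above).
from concurrent.futures import ThreadPoolExecutor

SHARED_POOL = ThreadPoolExecutor(max_workers=8)  # bounded, created once

def handle_write(payload):
    # Reuse the shared pool for each incoming write (hypothetical workload:
    # double the payload); per-request pools are the anti-pattern.
    return SHARED_POOL.submit(lambda: payload * 2)

results = [handle_write(i).result() for i in range(100)]
```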






[jira] [Updated] (PHOENIX-2715) Query Log

2018-04-09 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2715:
---
Attachment: PHOENIX-2715_master_V2.patch

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization, for instance which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece; having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.





[jira] [Commented] (PHOENIX-2715) Query Log

2018-04-09 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430121#comment-16430121
 ] 

Ankit Singhal commented on PHOENIX-2715:


Thanks [~apurtell], these were great suggestions and are easy to implement. The 
latest patch now has all these changes incorporated.

bq. I'm noticing that I get ~4 entries in system.log for every one query I run 
in sqlline against a user table.
Yes, this happens only with sqlline, as it queries the meta table for primary-key 
and column information.

bq. Pruning out system table queries (or maybe having an option to prune them) 
would be a nice-to-have follow-on – my guess is that it's hard to identify 
these queries.
I fixed this in the latest patch, but it may not cover complex queries that 
explicitly run against SYSTEM tables, such as those involving joins, derived 
tables, or bind nodes.
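
The pruning heuristic discussed here can be sketched as a check on the query's primary table. An illustrative Python model with a hypothetical function name, not the patch's actual code; as noted, a primary-table check alone cannot catch SYSTEM tables buried in joins, derived tables, or bind nodes.

```python
def should_log_query(primary_table_name):
    """Return False for queries whose primary table lives in the SYSTEM schema,
    so routine metadata lookups are pruned from the query log."""
    if "." in primary_table_name:
        schema = primary_table_name.split(".")[0].upper()
    else:
        schema = ""  # default schema: always logged
    return schema != "SYSTEM"
```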



> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization, for instance which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece; having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.


