[jira] [Commented] (PHOENIX-3005) Use DistinctPrefixFilter for DISTINCT index scans

2016-06-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337583#comment-15337583
 ] 

James Taylor commented on PHOENIX-3005:
---

No, it should work the same for indexes. Can you post an example of a query 
that uses an index but does not use the DistinctPrefixFilter as expected?

> Use DistinctPrefixFilter for DISTINCT index scans
> -
>
> Key: PHOENIX-3005
> URL: https://issues.apache.org/jira/browse/PHOENIX-3005
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
>
> Currently the optimization in PHOENIX-258 is not used for DISTINCT index 
> scans. We should add that as well.
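
For illustration, a hypothetical schema and query of the kind being discussed 
(table, index, and column names are invented; the fragment assumes the standard 
java.sql imports and a local Phoenix JDBC URL):

{code}
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
    Statement stmt = conn.createStatement();
    stmt.execute("CREATE TABLE T (K1 VARCHAR NOT NULL, K2 VARCHAR NOT NULL, V VARCHAR "
        + "CONSTRAINT PK PRIMARY KEY (K1, K2))");
    stmt.execute("CREATE INDEX IDX ON T (V, K2)");
    // A DISTINCT on the leading column of the index should be able to skip
    // within IDX via DistinctPrefixFilter, just as the PHOENIX-258 optimization
    // does for a DISTINCT on the leading primary-key column of a table.
    try (ResultSet rs = stmt.executeQuery("SELECT DISTINCT V FROM T")) {
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
    }
}
{code}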



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3005) Use DistinctPrefixFilter for DISTINCT index scans

2016-06-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337561#comment-15337561
 ] 

Lars Hofhansl commented on PHOENIX-3005:


The reason this currently fails is that the OrderPreservingTracker does take 
indexes into account - that's probably by design? ([~giacomotaylor])

> Use DistinctPrefixFilter for DISTINCT index scans
> -
>
> Key: PHOENIX-3005
> URL: https://issues.apache.org/jira/browse/PHOENIX-3005
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
> Fix For: 4.8.0
>
>
> Currently the optimization in PHOENIX-258 is not used for DISTINCT index 
> scans. We should add that as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3005) Use DistinctPrefixFilter for DISTINCT index scans

2016-06-17 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-3005:
--

 Summary: Use DistinctPrefixFilter for DISTINCT index scans
 Key: PHOENIX-3005
 URL: https://issues.apache.org/jira/browse/PHOENIX-3005
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Lars Hofhansl


Currently the optimization in PHOENIX-258 is not used for DISTINCT index scans. 
We should add that as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337056#comment-15337056
 ] 

Hudson commented on PHOENIX-2276:
-

FAILURE: Integrated in Phoenix-master #1266 (See 
[https://builds.apache.org/job/Phoenix-master/1266/])
PHOENIX-2276 Fix test failure (samarth: rev 
ac3c4ba072774deaf6bc9588d708bf3750262b3d)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java


> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337055#comment-15337055
 ] 

Hudson commented on PHOENIX-3000:
-

FAILURE: Integrated in Phoenix-master #1266 (See 
[https://builds.apache.org/job/Phoenix-master/1266/])
PHOENIX-3000 Reduce memory consumption during DISTINCT aggregation. (larsh: rev 
3581d29071d5023110cc8afc2ec58840e84dc861)
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/DistinctValueWithCountServerAggregator.java


> Reduce memory consumption during DISTINCT aggregation
> -
>
> Key: PHOENIX-3000
> URL: https://issues.apache.org/jira/browse/PHOENIX-3000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 3000-v2.txt, 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on the ptr 
> handed to us from HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we might attempt holding the entire table 
> in memory in the extreme case.
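
The description above boils down to not retaining the pointer that HBase hands 
to the aggregator. A minimal sketch of that idea (the class and field below are 
illustrative, not the actual DistinctValueWithCountServerAggregator code):

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

public class CopyOnAggregateSketch {
    private final Map<ImmutableBytesWritable, Integer> valueVsCount = new HashMap<>();

    public void aggregate(ImmutableBytesWritable ptr) {
        // ptr points into a shared buffer (ultimately an HFile block), so copy
        // just the value bytes before retaining anything as a map key.
        byte[] copy = new byte[ptr.getLength()];
        System.arraycopy(ptr.get(), ptr.getOffset(), copy, 0, ptr.getLength());
        valueVsCount.merge(new ImmutableBytesWritable(copy), 1, Integer::sum);
    }
}
{code}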



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336918#comment-15336918
 ] 

Hudson commented on PHOENIX-2276:
-

FAILURE: Integrated in Phoenix-master #1265 (See 
[https://builds.apache.org/job/Phoenix-master/1265/])
PHOENIX-2276 Addednum2 to fix test failures (samarth: rev 
0acca8e81e3dccc029e3c822e3666c8014e4ba11)
* phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java


> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3004) Allow users to configure allowed Kerberos realms for PQS

2016-06-17 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-3004:
---

 Summary: Allow users to configure allowed Kerberos realms for PQS
 Key: PHOENIX-3004
 URL: https://issues.apache.org/jira/browse/PHOENIX-3004
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 4.9.0


Through some great testing by [~kliew], we got an early warning that PQS' 
support for SPNEGO authentication didn't work for clients coming from a realm 
other than the server's.

I put a fix in CALCITE-1282 to allow configuration, but we'll need to update an 
API call to configure the Avatica server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-17 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3000:
---
Fix Version/s: 4.8.0

> Reduce memory consumption during DISTINCT aggregation
> -
>
> Key: PHOENIX-3000
> URL: https://issues.apache.org/jira/browse/PHOENIX-3000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 3000-v2.txt, 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on the ptr 
> handed to us from HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we might attempt holding the entire table 
> in memory in the extreme case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-17 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-3000.

Resolution: Fixed
  Assignee: Lars Hofhansl

Committed to master and 4.x*

> Reduce memory consumption during DISTINCT aggregation
> -
>
> Key: PHOENIX-3000
> URL: https://issues.apache.org/jira/browse/PHOENIX-3000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: 3000-v2.txt, 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on the ptr 
> handed to us from HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we might attempt holding the entire table 
> in memory in the extreme case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336779#comment-15336779
 ] 

James Taylor commented on PHOENIX-2940:
---

bq. Replacing with byte[] or ImmutableBytesPtr?
I'd do {{ImmutableBytesPtr}} since this is the key the underlying cache is 
using and you can easily access this from {{PTable.getName().getBytesPtr()}}. 
Otherwise you end up creating a new {{ImmutableBytesPtr}} with every call to 
get. There are only a few callers of ConnectionQueryService.getTableStats(), so 
I'm hoping it's not too bad.

bq. I could remove it and leave a big-fat-warning to not reuse the number 12 
(and we'd just need to be aware of it for a few releases in code-reviews to 
prevent someone from trying to be smart). How does that strike you?
I knew I was asking the right person about what would happen if we updated the 
protobuf. :-) I think a warning would suffice that just says: do not try to 
reuse numbers to fill in gaps, as these have already been in use and have been 
removed. Either that or renaming to OBSOLETE1_. Whatever you think is better.
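
To make the key-reuse point concrete, a rough sketch of the two API shapes 
(ImmutableBytesWritable is used here as a stand-in for ImmutableBytesPtr; the 
cache and method names are invented):

{code}
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

public class StatsCacheKeySketch {
    private final ConcurrentHashMap<ImmutableBytesWritable, Object> statsCache =
            new ConcurrentHashMap<>();

    // byte[]-keyed API: every lookup must allocate a fresh wrapper object
    // before it can consult the cache.
    public Object getTableStats(byte[] tableName) {
        return statsCache.get(new ImmutableBytesWritable(tableName));
    }

    // Pointer-keyed API: a caller that already holds the pointer (e.g. via
    // PTable.getName().getBytesPtr()) can pass it straight through.
    public Object getTableStats(ImmutableBytesWritable tableName) {
        return statsCache.get(tableName);
    }
}
{code}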


> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch, PHOENIX-2940.004.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336774#comment-15336774
 ] 

Josh Elser commented on PHOENIX-2940:
-

bq. Replacing with byte[] or ImmutableBytesPtr? I see byte[] primarily in use 
by ConnectionQueryServices. Unless I hear otherwise from ya, I'll go the 
'consistency with what's already there' route 

Ugh, I wasn't reading what you originally said closely enough.

> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch, PHOENIX-2940.004.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336772#comment-15336772
 ] 

Hadoop QA commented on PHOENIX-2940:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12811416/PHOENIX-2940.004.patch
  against master branch at commit d2960b3b49f087b852628cefdf65a230bad33ffa.
  ATTACHMENT ID: 12811416

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 7 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+// Avoid querying the stats table because we're holding the 
rowLock here. Issuing an RPC to a remote
+long scn = context.getConnection().getSCN() == null ? Long.MAX_VALUE : 
context.getConnection().getSCN();
+PTableStats tableStats = 
context.getConnection().getQueryServices().getTableStats(table.getName().getBytes(),
 scn);
+GuidePostsInfo gpsInfo = 
tableStats.getGuidePosts().get(SchemaUtil.getEmptyColumnFamily(table));
+tableStats = useStats() ? 
context.getConnection().getQueryServices().getTableStats(physicalTableName, 
currentSCN) : PTableStats.EMPTY_STATS;
+ * Removes cache {@link PTableStats} for the given table. If no cached 
stats are present, this does nothing.
+  return 
connection.getQueryServices().getTableStats(Bytes.toBytes(physicalName), 
getCurrentScn());
+return 
connection.getQueryServices().getTableStats(table.getName().getBytes(), 
getCurrentScn());
+this.schemaName, parentTableName, indexes, isImmutableRows, 
physicalNames, defaultFamilyName,
+table.getBaseColumnCount(), table.rowKeyOrderOptimizable(), 
table.isTransactional(), table.getUpdateCacheFrequency(),

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SpooledSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.NotQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PointInTimeQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.MutableRollbackIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.IndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.IndexExpressionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CastAndCoerceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.LocalIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DeleteIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.HashJoinLocalIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SaltedViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AggregateQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.GroupByIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CaseStatementIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CsvBulkLoadToolIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.TxWriteFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.HashJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TenantSpecificViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UserDefinedFunctionsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ScanQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexToolIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/400//testReport/

[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336758#comment-15336758
 ] 

Josh Elser commented on PHOENIX-2940:
-

{quote}
Change ConnectionQueryServices.invalidateStats(), 
ConnectionQueryServicesImpl.addTableStats(), 
ConnectionQueryServicesImpl.getTableStats(), and TableStatsCache.put() to all 
be consistent and use ImmutableBytesPtr as the arg as it's possible you'd want 
to get the stats without having a PTable.
Remove TableStatsCache.put(PTable).
{quote}

Replacing with {{byte[]}} or {{ImmutableBytesPtr}}? I see {{byte[]}} primarily 
in use by {{ConnectionQueryServices}}. Unless I hear otherwise from ya, I'll go 
the 'consistency with what's already there' route :)

bq. Would it be possible to remove repeated PTableStats guidePosts = 12 from 
phoenix-protocol/src/main/PTable.proto without affecting b/w compat?

Older client talking to newer server: The server would send a PTable from the 
cache without the stats field, so the client would just think that it's 
missing. The old client would construct a PTableStatsImpl with an empty list of 
guideposts.

Newer client talking to older server: The client would ignore the stats sent in 
the PTable protobuf and query it on its own.

So, the only concern I can think of is preventing any future use of the 
identifier {{12}} in PTable. If that were to happen in some later Phoenix 
release, it could break older clients. The protobuf 2 docs actually have a 
section on this:

{quote}
Non-required fields can be removed, as long as the tag number is not used again 
in your updated message type. You may want to rename the field instead, perhaps 
adding the prefix "OBSOLETE_", or make the tag reserved, so that future users 
of your .proto can't accidentally reuse the number. 
{quote}

I could remove it and leave a big-fat-warning to not reuse the number 12 (and 
we'd just need to be aware of it for a few releases in code-reviews to prevent 
someone from trying to be smart). How does that strike you?

> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch, PHOENIX-2940.004.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336644#comment-15336644
 ] 

James Taylor commented on PHOENIX-2952:
---

+1, [~ram_krish]. Please commit to master and the three 4.x branches ASAP so 
this can make the 4.8.0 release. Thanks, [~ryvius] and [~ram_krish]!

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>Assignee: Joseph Sun
>  Labels: test
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2952.patch
>
>
> execute sql.
> {code}
> select 
> 

[jira] [Updated] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2952:
--
Assignee: Joseph Sun

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>Assignee: Joseph Sun
>  Labels: test
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2952.patch
>
>
> execute sql.
> {code}
> select 
> 

[jira] [Updated] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2952:
--
Fix Version/s: 4.8.0

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2952.patch
>
>
> execute sql.
> {code}
> select 
> 

[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336628#comment-15336628
 ] 

James Taylor commented on PHOENIX-2940:
---

Couple minor API recommendations, [~elserj]:
- Change ConnectionQueryServices.invalidateStats(), 
ConnectionQueryServicesImpl.addTableStats(), 
ConnectionQueryServicesImpl.getTableStats(), and TableStatsCache.put() to all 
be consistent and use ImmutableBytesPtr as the arg as it's possible you'd want 
to get the stats without having a PTable.
- Remove TableStatsCache.put(PTable).
- Would it be possible to remove {{repeated PTableStats guidePosts = 12}} from 
{{phoenix-protocol/src/main/PTable.proto}} without affecting b/w compat?


> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch, PHOENIX-2940.004.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2940:

Attachment: PHOENIX-2940.004.patch

.004 applies the final set of recommendations from James. This is what I'll be 
using for my cluster testing.

> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch, PHOENIX-2940.004.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336542#comment-15336542
 ] 

Josh Elser commented on PHOENIX-2940:
-

{quote}
bq. Not sure I understand this new logic, as it seems like we don't need the 
if (PTableStats.EMPTY_STATS == stats) block. Before, we were relying on the 
PTable meta data cache to find stats, but now we're relying on this new cache 
plus querying directly if not found. So clearing the table from the cache won't 
have any impact on stats.

Clearing the table from the cache also does invalidate any stats for that table 
from the cache as well. We could expose a method to directly clear the stats 
cache to QueryServices which would be a little more direct. I can add a comment 
to clarify at the very minimum.
{quote}

I think you're right, this is just pointless. Removing this.

> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2940) Remove STATS RPCs from rowlock

2016-06-17 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336453#comment-15336453
 ] 

Josh Elser commented on PHOENIX-2940:
-

bq. +1 to add ConnectionQueryServices.invalidateStats(tableName). I still don't 
completely understand why we need that code, though, given that the underlying 
cache will fetch the stats if they're not cached. When do we "fail to acquire 
stats"? Is it for when we collect stats synchronously (which is really a 
test-only case)? If that's the case, then how about just invalidating the stats 
before making the server-side call to update them?

Testing would definitely benefit from such a method. I'm thinking that we could 
fail to acquire stats for transient "unhealthy" HBase reasons (e.g. 
SYSTEM.STATS region(s) aren't online), which would cache an EMPTY_STATS 
instance. It would correct itself after {{phoenix.stats.updateFrequency}} 
(15 mins), though. At the moment I can't think of a reason the user would need 
to ask us to invalidate the stats before that timeout hits.

I have a little test setup with JMeter to do some concurrency testing on the 
read side. I'll do some quick comparisons and aim to commit later today.
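
A rough sketch of the caching behaviour described above, assuming a Guava-style 
loading cache whose expiry mirrors {{phoenix.stats.updateFrequency}} (the class, 
EMPTY_STATS placeholder, and scan method are illustrative, not the actual 
Phoenix stats cache):

{code}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class StatsCacheExpirySketch {
    static final Object EMPTY_STATS = new Object(); // stand-in for PTableStats.EMPTY_STATS

    // Entries expire after the stats update frequency, so even a "bad" cached
    // EMPTY_STATS (e.g. cached while SYSTEM.STATS was briefly unavailable)
    // corrects itself the next time it is loaded after the expiry.
    private final LoadingCache<String, Object> statsCache = CacheBuilder.newBuilder()
            .expireAfterWrite(15, TimeUnit.MINUTES)
            .build(new CacheLoader<String, Object>() {
                @Override
                public Object load(String tableName) {
                    Object stats = scanStatsTable(tableName);
                    return stats != null ? stats : EMPTY_STATS;
                }
            });

    private Object scanStatsTable(String tableName) {
        return null; // stand-in for the scan/RPC against SYSTEM.STATS
    }

    public Object getTableStats(String tableName) {
        return statsCache.getUnchecked(tableName);
    }
}
{code}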

> Remove STATS RPCs from rowlock
> --
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
>  Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch, 
> PHOENIX-2940.003.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-17 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336402#comment-15336402
 ] 

Samarth Jain commented on PHOENIX-2276:
---

Sorry for the churn here. Looking at the test failures.

> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3001) Dropping local index and recreation it with following split may cause RS failure

2016-06-17 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336353#comment-15336353
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3001:
--

+1 [~sergey.soldatov]. The test failures seem unrelated.

> Dropping local index and recreation it with following split may cause RS 
> failure
> 
>
> Key: PHOENIX-3001
> URL: https://issues.apache.org/jira/browse/PHOENIX-3001
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Critical
> Attachments: PHOENIX-3001.patch
>
>
> If local index was dropped and recreated during the next split RS crashes 
> with the following exception :
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.isSatisfiedMidKeyCondition(LocalIndexStoreFileScanner.java:158)
> at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseekToProperKey(LocalIndexStoreFileScanner.java:236)
> at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seekOrReseek(LocalIndexStoreFileScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.seek(LocalIndexStoreFileScanner.java:89)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:281)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:243)
> at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preCompactScannerOpen(IndexHalfStoreFileReaderGenerator.java:212)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$6.call(RegionCoprocessorHost.java:499)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompactScannerOpen(RegionCoprocessorHost.java:494)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:349)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:68)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1239)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1904)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:525)
> at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:562)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The reason is that in isSatisfiedMidKeyCondition we are only looking at the 
> current index maintainers, so for deleted rows we are unable to build the 
> row key.
> FYI [~rajeshbabu].
> [~an...@apache.org] It looks like a blocker for the release



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-1702) Allow Phoenix to be used from JMeter

2016-06-17 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-1702:
---

Assignee: Josh Elser

> Allow Phoenix to be used from JMeter
> 
>
> Key: PHOENIX-1702
> URL: https://issues.apache.org/jira/browse/PHOENIX-1702
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nick Dimiduk
>Assignee: Josh Elser
>Priority: Minor
> Attachments: PHOENIX-1702.00.patch
>
>
> Here's a simple patch to allow Phoenix to be invoked from jmeter. Perhaps it 
> should be made more robust re: additional data types. I've been using this 
> combination for some performance work recently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3002) Upgrading to 4.8 doesn't recreate local indexes

2016-06-17 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336203#comment-15336203
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3002:
--

[~samarthjain] Looking at it. 

> Upgrading to 4.8 doesn't recreate local indexes
> ---
>
> Key: PHOENIX-3002
> URL: https://issues.apache.org/jira/browse/PHOENIX-3002
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
>
> [~rajeshbabu] - I noticed that when upgrading to 4.8, local indexes created 
> with 4.7 or before aren't getting recreated with the new local indexes 
> implementation.  I am not seeing the metadata rows for the recreated indices 
> in SYSTEM.CATALOG.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3003) DatabaseMetaData.getTypeInfo() returns an empty ResultSet

2016-06-17 Thread JIRA
Roger Bjärevall created PHOENIX-3003:


 Summary: DatabaseMetaData.getTypeInfo() returns an empty ResultSet
 Key: PHOENIX-3003
 URL: https://issues.apache.org/jira/browse/PHOENIX-3003
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Roger Bjärevall


I am one of the developers working on DbVisualizer, and we are currently 
testing DbVisualizer with Phoenix. We've run into an issue where 
DatabaseMetaData.getTypeInfo() returns an empty ResultSet.

The type information enables some features (create/alter table) in 
DbVisualizer, and without proper types these features will not work.

Are there any plans to implement getTypeInfo() in the near future?
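
For reference, the standard JDBC call in question; a minimal sketch of how a 
tool would consume it (the connection URL is invented):

{code}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class TypeInfoSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            DatabaseMetaData md = conn.getMetaData();
            // Per this report the Phoenix driver currently returns an empty ResultSet;
            // tools expect one row per supported SQL type.
            try (ResultSet rs = md.getTypeInfo()) {
                while (rs.next()) {
                    System.out.println(rs.getString("TYPE_NAME") + " -> " + rs.getInt("DATA_TYPE"));
                }
            }
        }
    }
}
{code}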



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated PHOENIX-2952:

Attachment: PHOENIX-2952.patch

[~giacomotay...@gmail.com]
I think the fix makes sense. I tested with a case where the array has more 
elements than Short.MAX_VALUE. In that case we store the offset info as ints 
and negate the number of elements, and the getArrayLength() API doesn't handle 
that case, so taking abs() here should work. 
Adding a patch that also includes a test case so it can be used for regression. 
As for arrays with more than Integer.MAX_VALUE elements - I don't think we 
currently support arrays that big, right?
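
A tiny illustration of the point above (the values are invented; this is not 
the actual Phoenix serialization code): a negative serialized element count 
signals that offsets were written as ints, so the logical array length is its 
absolute value.

{code}
int serializedElementCount = -70000;                // e.g. an array of 70,000 elements
boolean intOffsets = serializedElementCount < 0;    // negative count => int-sized offsets
int arrayLength = Math.abs(serializedElementCount); // 70000, what ARRAY_LENGTH should return
{code}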

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
> Attachments: PHOENIX-2952.patch
>
>
> execute sql.
> {code}
> select 
> 

[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335780#comment-15335780
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-2952:
-

[~giacomotay...@gmail.com]
I think the fix makes sense. I tested with a case where the array has more 
elements than Short.MAX_VALUE. In that case we store the offset info as ints 
and negate the number of elements, and the getArrayLength() API doesn't handle 
that case, so taking abs() here should work. 

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> 

[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335629#comment-15335629
 ] 

Hudson commented on PHOENIX-2276:
-

FAILURE: Integrated in Phoenix-master #1264 (See 
[https://builds.apache.org/job/Phoenix-master/1264/])
PHOENIX-2276 addendum for fixing test failures (samarth: rev 
d2960b3b49f087b852628cefdf65a230bad33ffa)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseTenantSpecificViewIndexIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
* phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/ExplainTable.java


> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335587#comment-15335587
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-2952:
-

[~giacomotaylor]
Ya I too think so. Let me have a look at the code and the fix.

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> 

[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-17 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1533#comment-1533
 ] 

James Taylor commented on PHOENIX-2952:
---

Can you take a look, [~ram_krish]? Sounds like it happens when the array length 
is larger than a short?

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
>