[jira] [Updated] (PHOENIX-418) Support approximate COUNT DISTINCT

2017-08-25 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-418:
---
Attachment: PHOENIX-418-v8.patch

> Support approximate COUNT DISTINCT
> --
>
> Key: PHOENIX-418
> URL: https://issues.apache.org/jira/browse/PHOENIX-418
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Ethan Wang
>  Labels: gsoc2016
> Attachments: PHOENIX-418-v1.patch, PHOENIX-418-v2.patch, 
> PHOENIX-418-v3.patch, PHOENIX-418-v4.patch, PHOENIX-418-v5.patch, 
> PHOENIX-418-v6.patch, PHOENIX-418-v7.patch, PHOENIX-418-v8.patch
>
>
> Support an "approximation" of count distinct to prevent having to hold on to 
> all distinct values (since this will not scale well when the number of 
> distinct values is huge). The Apache Drill folks have had some interesting 
> discussions on this 
> [here](http://mail-archives.apache.org/mod_mbox/incubator-drill-dev/201306.mbox/%3CJIRA.12650169.1369931282407.88049.1370645900553%40arcas%3E).
>  They recommend using [Welford's 
> method](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance_Online_algorithm).
>  I'm open to having a config option that chooses between exact and approximate. I 
> don't have experience implementing an approximate algorithm, so I'm not 
> sure how much state is required to keep on the server and return to the 
> client (other than realizing it'd be much less than returning all distinct 
> values and their counts).
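
A rough sketch of the approach taken in the attached patches (per the QA output later 
in this digest, the function is exposed as APPROX_COUNT_DISTINCT and backed by a 
HyperLogLog sketch from stream-lib). The class and precision values below are 
illustrative only, not taken from the patch:

{code}
import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;

// Each server-side aggregator keeps a fixed-size HyperLogLog sketch instead of the
// full set of distinct values; per-region sketches are merged before the final
// cardinality estimate is returned to the client.
public class ApproxDistinctSketch {
    // Illustrative precision values, not the ones chosen in the patch.
    private final HyperLogLogPlus hll = new HyperLogLogPlus(16, 25);

    // Called once per evaluated column value on the server side.
    public void aggregate(byte[] value) {
        hll.offer(value);
    }

    // Serialized sketch shipped to the client; its size is bounded by the precision,
    // not by the number of distinct values seen.
    public byte[] toBytes() throws java.io.IOException {
        return hll.getBytes();
    }

    // The client merges the per-region sketches and reads off the estimate.
    public static long merge(byte[]... partials) throws Exception {
        HyperLogLogPlus merged = new HyperLogLogPlus(16, 25);
        for (byte[] partial : partials) {
            merged.addAll(HyperLogLogPlus.Builder.build(partial));
        }
        return merged.cardinality();
    }
}
{code}

Usage from SQL then looks like SELECT APPROX_COUNT_DISTINCT(col) FROM my_table, as in 
the query quoted in the QA report further down.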



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4128) CNF org/apache/commons/math3/exception/OutOfRangeException

2017-08-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142612#comment-16142612
 ] 

James Taylor commented on PHOENIX-4128:
---

That's a strange one, [~mujtabachohan]. When do you see that? What statement is 
being parsed?

> CNF org/apache/commons/math3/exception/OutOfRangeException 
> ---
>
> Key: PHOENIX-4128
> URL: https://issues.apache.org/jira/browse/PHOENIX-4128
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Mujtaba Chohan
>Priority: Minor
> Fix For: 4.12.0
>
>
> {noformat}
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: T: 
> org/apache/commons/math3/exception/OutOfRangeException
> > at 
> > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:92)
> > at 
> > org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:58)
> > at 
> > org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2163)
> > at 
> > org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropColumn(MetaDataEndpointImpl.java:3243)
> > at 
> > org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16317)
> > at 
> > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6082)
> > at 
> > org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3533)
> > at 
> > org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3515)
> > at 
> > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
> ...
> > Caused by: java.lang.NoClassDefFoundError: 
> > org/apache/commons/math3/exception/OutOfRangeException
> > at 
> > org.apache.phoenix.parse.ParseNodeFactory.namedTable(ParseNodeFactory.java:404)
> > at 
> > org.apache.phoenix.parse.PhoenixSQLParser.upsert_node(PhoenixSQLParser.java:5239)
> > at 
> > org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:827)
> > at 
> > org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:519)
> > at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
> > at 
> > org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1497)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4131) UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4131:
-

 Summary: UngroupedAggregateRegionObserver.preClose() and 
doPostScannerOpen() can deadlock
 Key: PHOENIX-4131
 URL: https://issues.apache.org/jira/browse/PHOENIX-4131
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


On my local test run I saw that the tests were not completing because the mini 
cluster couldn't shut down. So I took a jstack and discovered the following 
deadlock:

{code}
"RS:0;samarthjai-wsm4:59006" #16265 prio=5 os_prio=31 tid=0x7fafa6327000 
nid=0x37b3f runnable [0x7000115f5000]
   java.lang.Thread.State: RUNNABLE
at java.lang.Object.wait(Native Method)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preClose(UngroupedAggregateRegionObserver.java:1201)
- locked <0x00072bc406b8> (a java.lang.Object)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$4.call(RegionCoprocessorHost.java:494)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preClose(RegionCoprocessorHost.java:490)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2843)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2805)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2423)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1052)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:157)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:141)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:334)
at 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:139)
at java.lang.Thread.run(Thread.java:748)
{code}

{code}
"RpcServer.FifoWFPBQ.default.handler=3,queue=0,port=59006" #16246 daemon prio=5 
os_prio=31 tid=0x7fafae856000 nid=0x1abdb waiting for monitor entry 
[0x7000102bc000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:734)
- waiting to lock <0x00072bc406b8> (a java.lang.Object)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:236)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:281)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
- locked <0x00072b625a90> (a 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
{code}

preClose() holds the object monitor and is waiting for scanReferencesCount to go 
down to 0. doPostScannerOpen() is blocked trying to acquire the same lock so that 
it can decrement scanReferencesCount.
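
For reference, the conventional monitor pattern that avoids this shape is to wait() 
inside the loop (which releases the monitor) and notifyAll() whenever the count 
changes; whether that is the right fix here is a separate question. A minimal sketch 
with illustrative names, not the actual Phoenix code:

{code}
class ScanRefCoordination {
    private final Object lock = new Object();
    private int scanReferencesCount = 0;

    // Close path: waits for scanners to drain without holding the monitor the whole
    // time, because wait() releases it until a notify arrives.
    void awaitScannersDrained() throws InterruptedException {
        synchronized (lock) {
            while (scanReferencesCount > 0) {
                lock.wait();
            }
        }
    }

    void scannerOpened() {
        synchronized (lock) {
            scanReferencesCount++;
        }
    }

    void scannerClosed() {
        synchronized (lock) {
            scanReferencesCount--;
            lock.notifyAll(); // wake the close path once the count may have reached zero
        }
    }
}
{code}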

I think this bug was introduced in PHOENIX-3111 to solve other deadlocks. FYI, 
[~rajeshbabu], [~sergey.soldatov], [~enis], [~lhofhansl].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-418) Support approximate COUNT DISTINCT

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142533#comment-16142533
 ] 

Hadoop QA commented on PHOENIX-418:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12883785/PHOENIX-418-v7.patch
  against master branch at commit 435441ea8ba336e1967b03cf84f1868c5ef14790.
  ATTACHMENT ID: 12883785

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   String query = "SELECT APPROX_COUNT_DISTINCT(a.i1||a.i2||b.i2) 
FROM " + tableName + " a, " + tableName
+   final private void prepareTableWithValues(final Connection conn, final 
int nRows) throws Exception {
+   final PreparedStatement stmt = conn.prepareStatement("upsert 
into " + tableName + " VALUES (?, ?)");
+@BuiltInFunction(name=DistinctCountHyperLogLogAggregateFunction.NAME, 
nodeClass=DistinctCountHyperLogLogAggregateParseNode.class, args= {@Argument()} 
)
+public DistinctCountHyperLogLogAggregateFunction(List 
childExpressions, CountAggregateFunction delegate){
+   private HyperLogLogPlus hll = new 
HyperLogLogPlus(DistinctCountHyperLogLogAggregateFunction.NormalSetPrecision, 
DistinctCountHyperLogLogAggregateFunction.SparseSetPrecision);
+   private HyperLogLogPlus hll = new 
HyperLogLogPlus(DistinctCountHyperLogLogAggregateFunction.NormalSetPrecision, 
DistinctCountHyperLogLogAggregateFunction.SparseSetPrecision);
+   protected final ImmutableBytesWritable valueByteArray = new 
ImmutableBytesWritable(ByteUtil.EMPTY_BYTE_ARRAY);
+public DistinctCountHyperLogLogAggregateParseNode(String name, 
List children, BuiltInFunctionInfo info) {
+public FunctionExpression create(List children, 
StatementContext context) throws SQLException {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1305//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1305//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1305//console

This message is automatically generated.

> Support approximate COUNT DISTINCT
> --
>
> Key: PHOENIX-418
> URL: https://issues.apache.org/jira/browse/PHOENIX-418
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Ethan Wang
>  Labels: gsoc2016
> Attachments: PHOENIX-418-v1.patch, PHOENIX-418-v2.patch, 
> PHOENIX-418-v3.patch, PHOENIX-418-v4.patch, PHOENIX-418-v5.patch, 
> PHOENIX-418-v6.patch, PHOENIX-418-v7.patch
>
>
> Support an "approximation" of count distinct to prevent having to hold on to 
> all distinct values (since this will not scale well when the number of 
> distinct values is huge). The Apache Drill folks have had some interesting 
> discussions on this 
> [here](http://mail-archives.apache.org/mod_mbox/incubator-drill-dev/201306.mbox/%3CJIRA.12650169.1369931282407.88049.1370645900553%40arcas%3E).
>  They recommend using [Welford's 
> method](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance_Online_algorithm).
>  I'm open to having a config option that chooses between exact and approximate. I 
> don't have experience implementing an approximate algorithm, so I'm not 
> sure how much state is required to keep on the server and return to the 
> client (other than realizing it'd be much less than returning all distinct 
> values and their counts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2017-08-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142508#comment-16142508
 ] 

Lars Hofhansl commented on PHOENIX-4130:


As [~jamestaylor] points out, setting the failed index timestamp in #2 above is 
also a cross-regionserver call. We should ensure that we do not wait too long 
on that - but if that fails, we're truly f*cked, so some wait is appropriate. 
Perhaps 2 or 3 minutes by default.


> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # The handler thread is then done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to 
> the caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2017-08-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142501#comment-16142501
 ] 

Lars Hofhansl commented on PHOENIX-4130:


[~vik.karma], [~abhishek.chouhan], FYI.

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # The handler thread is then done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to 
> the caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4130) Avoid server retries for mutable indexes

2017-08-25 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-4130:
--

 Summary: Avoid server retries for mutable indexes
 Key: PHOENIX-4130
 URL: https://issues.apache.org/jira/browse/PHOENIX-4130
 Project: Phoenix
  Issue Type: Improvement
Reporter: Lars Hofhansl


Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
during which I suggested that we can possibly eliminate retry loops happening 
at the server that cause the handler threads to be stuck potentially for quite 
a while (at least multiple seconds to ride over common scenarios like splits).
Instead we can do the retries at the Phoenix client.

So:
# The index updates are not retried on the server. (retries = 0)
# A failed index update would set the failed index timestamp but leave the 
index enabled.
# The handler thread is then done; it throws an appropriate exception back to 
the client.
# The Phoenix client can now retry. When those retries fail, the index is 
disabled (if the policy dictates that) and the exception is thrown back to 
the caller.

So no more waiting is needed on the server, handler threads are freed 
immediately.
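
A rough sketch of what the client side could look like under this proposal (the 
retry count, backoff, and exception handling below are assumptions for illustration, 
not part of this issue):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ClientSideIndexRetry {
    private static final int MAX_RETRIES = 3;      // assumed policy knob
    private static final long BACKOFF_MS = 1000L;  // assumed backoff

    public static void upsertWithRetry(String url, String upsertSql, Object[] binds)
            throws Exception {
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            for (int attempt = 1; ; attempt++) {
                try (PreparedStatement stmt = conn.prepareStatement(upsertSql)) {
                    for (int i = 0; i < binds.length; i++) {
                        stmt.setObject(i + 1, binds[i]);
                    }
                    stmt.executeUpdate();
                    // With retries = 0 on the server, an index write failure surfaces
                    // here immediately instead of tying up a handler thread.
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    if (attempt >= MAX_RETRIES) {
                        // Give up: per step 4 the index would be disabled (policy
                        // permitting) and the exception rethrown to the caller.
                        throw e;
                    }
                    Thread.sleep(BACKOFF_MS * attempt);
                }
            }
        }
    }
}
{code}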




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142375#comment-16142375
 ] 

James Taylor commented on PHOENIX-4127:
---

This looks like it helps. +1 to checking it in.

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called System.exit() or Runtime.halt(). We have a shutdown hook in 
> BaseTest which does exactly that. Such failed runs happen more frequently on 
> the 4.x-HBase-0.98 branch, and sometimes on the master and 1.1 branches. 
> I know the hook was added because we were seeing test runs hang. Since we 
> have changed a lot of tests since then, I would like to remove it and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}
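
If removing the hook outright turns out to be too risky, a middle ground (sketch only, 
not part of the attached patches; the property name below is made up) would be to 
register it only when explicitly requested:

{code}
private static void maybeInstallHaltHook() {
    // Default: let the forked JVM exit normally so surefire does not flag the fork.
    if (!Boolean.getBoolean("phoenix.test.haltOnShutdown")) {
        return;
    }
    Runtime.getRuntime().addShutdownHook(new Thread() {
        @Override
        public void run() {
            logger.info("SHUTDOWN: halting JVM now");
            Runtime.getRuntime().halt(0);
        }
    });
}
{code}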



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142354#comment-16142354
 ] 

Hudson commented on PHOENIX-3953:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1752 (See 
[https://builds.apache.org/job/Phoenix-master/1752/])
PHOENIX-3953 Clear INDEX_DISABLED_TIMESTAMP and disable index on (jtaylor: rev 
435441ea8ba336e1967b03cf84f1868c5ef14790)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java


> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.
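
The general shape of hooking this into compaction (sketch only; the actual change is 
in the attached patches, and markIndexesDisabledForRegion() below is a hypothetical 
placeholder for the real SYSTEM.CATALOG update):

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class ExampleCompactionObserver extends BaseRegionObserver {
    @Override
    public void postCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store,
            StoreFile resultFile) throws IOException {
        // After compaction the delete markers/puts that a partial rebuild relies on may
        // be gone, so clear INDEX_DISABLED_TIMESTAMP and mark the index as disabled;
        // from this point a manual rebuild is required.
        markIndexesDisabledForRegion(c.getEnvironment());
    }

    // Hypothetical placeholder for the real metadata update.
    private void markIndexesDisabledForRegion(RegionCoprocessorEnvironment env)
            throws IOException {
    }
}
{code}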



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4124) Two new tests without Apache licenses

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142353#comment-16142353
 ] 

Hudson commented on PHOENIX-4124:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1752 (See 
[https://builds.apache.org/job/Phoenix-master/1752/])
PHOENIX-4124 Two new tests without Apache licenses (jtaylor: rev 
7d3368b62e91aa572cbb5c1e7083336d3f0c6538)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/util/IndexScrutinyIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/RunUntilFailure.java


> Two new tests without Apache licenses
> -
>
> Key: PHOENIX-4124
> URL: https://issues.apache.org/jira/browse/PHOENIX-4124
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Geoffrey Jacoby
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4124.patch, PHOENIX-4124_v2.patch
>
>
> It appears that new test files, IndexScrutinyIT and RunUntilFailure, were 
> recently checked in without Apache licenses. This is causing the release 
> audit Jenkins job to fail on unrelated JIRAs. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4122) Prevent IllegalMonitorStateException from occurring when releasing row locks

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142352#comment-16142352
 ] 

Hudson commented on PHOENIX-4122:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1752 (See 
[https://builds.apache.org/job/Phoenix-master/1752/])
PHOENIX-4122 Prevent IllegalMonitorStateException from occurring when (jtaylor: 
rev 8b7a836a5e5502c1c217c6e3b6ab7298b980cb81)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java


> Prevent IllegalMonitorStateException from occurring when releasing row locks
> 
>
> Key: PHOENIX-4122
> URL: https://issues.apache.org/jira/browse/PHOENIX-4122
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4122.patch
>
>
> In this iteration, after the RS process was killed, indexes were disabled and 
> soon their status was changed to the INACTIVE state.
> After all regions were re-assigned, we observed that two RS were aborted with 
> the below exception in the logs:
> {code}
> org.apache.phoenix.hbase.index.Indexer threw 
> java.lang.IllegalMonitorStateException
> java.lang.IllegalMonitorStateException
> at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
> at 
> java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockImpl.release(LockManager.java:223)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockContext.releaseRowLock(LockManager.java:166)
> at 
> org.apache.phoenix.hbase.index.LockManager.unlockRow(LockManager.java:131)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1023)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1653)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1019)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2837)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2410)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2365)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:791)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:705)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:306)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3361)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-24 07:47:19,916 FATAL [4,queue=4,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> {code}
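
For context, ReentrantLock.unlock() throws IllegalMonitorStateException when the 
calling thread does not own the lock, e.g. when a row lock is released twice or on a 
path that never acquired it. The generic defensive pattern looks like the sketch below 
(illustrative only, not the actual fix in PHOENIX-4122.patch):

{code}
import java.util.concurrent.locks.ReentrantLock;

class RowLockReleaseSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile boolean acquired;

    boolean tryAcquire() {
        acquired = lock.tryLock();
        return acquired;
    }

    void release() {
        // Only unlock what this thread actually holds; releasing unconditionally is
        // what produces the IllegalMonitorStateException in the stack trace above.
        if (acquired && lock.isHeldByCurrentThread()) {
            lock.unlock();
            acquired = false;
        }
    }
}
{code}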



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4122) Prevent IllegalMonitorStateException from occurring when releasing row locks

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142196#comment-16142196
 ] 

Samarth Jain commented on PHOENIX-4122:
---

+1

> Prevent IllegalMonitorStateException from occurring when releasing row locks
> 
>
> Key: PHOENIX-4122
> URL: https://issues.apache.org/jira/browse/PHOENIX-4122
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4122.patch
>
>
> In this iteration, after the RS process was killed, indexes were disabled and 
> soon their status was changed to the INACTIVE state.
> After all regions were re-assigned, we observed that two RS were aborted with 
> the below exception in the logs:
> {code}
> org.apache.phoenix.hbase.index.Indexer threw 
> java.lang.IllegalMonitorStateException
> java.lang.IllegalMonitorStateException
> at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
> at 
> java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockImpl.release(LockManager.java:223)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockContext.releaseRowLock(LockManager.java:166)
> at 
> org.apache.phoenix.hbase.index.LockManager.unlockRow(LockManager.java:131)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1023)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1653)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1019)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2837)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2410)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2365)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:791)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:705)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:306)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3361)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-24 07:47:19,916 FATAL [4,queue=4,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4122) Prevent IllegalMonitorStateException from occurring when releasing row locks

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142127#comment-16142127
 ] 

Samarth Jain commented on PHOENIX-4122:
---

Taking a look right now.

> Prevent IllegalMonitorStateException from occurring when releasing row locks
> 
>
> Key: PHOENIX-4122
> URL: https://issues.apache.org/jira/browse/PHOENIX-4122
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4122.patch
>
>
> In this iteration, after the RS process was killed, indexes were disabled and 
> soon their status was changed to the INACTIVE state.
> After all regions were re-assigned, we observed that two RS were aborted with 
> the below exception in the logs:
> {code}
> org.apache.phoenix.hbase.index.Indexer threw 
> java.lang.IllegalMonitorStateException
> java.lang.IllegalMonitorStateException
> at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
> at 
> java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockImpl.release(LockManager.java:223)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockContext.releaseRowLock(LockManager.java:166)
> at 
> org.apache.phoenix.hbase.index.LockManager.unlockRow(LockManager.java:131)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1023)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1653)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1019)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2837)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2410)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2365)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:791)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:705)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:306)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3361)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-24 07:47:19,916 FATAL [4,queue=4,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4122) Prevent IllegalMonitorStateException from occurring when releasing row locks

2017-08-25 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142126#comment-16142126
 ] 

Thomas D'Silva commented on PHOENIX-4122:
-

+1

> Prevent IllegalMonitorStateException from occurring when releasing row locks
> 
>
> Key: PHOENIX-4122
> URL: https://issues.apache.org/jira/browse/PHOENIX-4122
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4122.patch
>
>
> In this iteration, after the RS process was killed, indexes were disabled and 
> soon their status was changed to the INACTIVE state.
> After all regions were re-assigned, we observed that two RS were aborted with 
> the below exception in the logs:
> {code}
> org.apache.phoenix.hbase.index.Indexer threw 
> java.lang.IllegalMonitorStateException
> java.lang.IllegalMonitorStateException
> at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
> at 
> java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockImpl.release(LockManager.java:223)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockContext.releaseRowLock(LockManager.java:166)
> at 
> org.apache.phoenix.hbase.index.LockManager.unlockRow(LockManager.java:131)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1023)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1653)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1019)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2837)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2410)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2365)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:791)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:705)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:306)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3361)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-24 07:47:19,916 FATAL [4,queue=4,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142084#comment-16142084
 ] 

Samarth Jain commented on PHOENIX-3953:
---

Thanks for the explanation, [~jamestaylor]. +1 to the patch.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142076#comment-16142076
 ] 

Hadoop QA commented on PHOENIX-4127:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12883770/PHOENIX-4127_4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
a5fb51948225ee7ee9ece76d233f4a150006e4a9.
  ATTACHMENT ID: 12883770

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
53 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1300//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1300//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1300//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1300//console

This message is automatically generated.

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called System.exit() or Runtime.halt(). We have a shutdown hook in 
> BaseTest which does exactly that. Such failed runs happen more frequently on 
> the 4.x-HBase-0.98 branch, and sometimes on the master and 1.1 branches. 
> I know the hook was added because we were seeing test runs hang. Since we 
> have changed a lot of tests since then, I would like to remove it and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4126) Reset FAIL_WRITE flag in MutableIndexFailureIT

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142075#comment-16142075
 ] 

Hudson commented on PHOENIX-4126:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1751 (See 
[https://builds.apache.org/job/Phoenix-master/1751/])
PHOENIX-4126 Reset FAIL_WRITE flag in MutableIndexFailureIT (samarth: rev 
a5fb51948225ee7ee9ece76d233f4a150006e4a9)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java


> Reset FAIL_WRITE flag in MutableIndexFailureIT
> --
>
> Key: PHOENIX-4126
> URL: https://issues.apache.org/jira/browse/PHOENIX-4126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4126.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142057#comment-16142057
 ] 

Hadoop QA commented on PHOENIX-4127:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12883770/PHOENIX-4127_4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
a5fb51948225ee7ee9ece76d233f4a150006e4a9.
  ATTACHMENT ID: 12883770

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
53 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1299//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1299//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1299//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1299//console

This message is automatically generated.

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called System.exit() or Runtime.halt(). We have a shutdown hook in 
> BaseTest which does exactly that. Such failed runs happen more frequently on 
> the 4.x-HBase-0.98 branch, and sometimes on the master and 1.1 branches. 
> I know the hook was added because we were seeing test runs hang. Since we 
> have changed a lot of tests since then, I would like to remove it and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4103) Convert ColumnProjectionOptimizationIT to extend ParallelStatsDisabledIT

2017-08-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142039#comment-16142039
 ] 

James Taylor commented on PHOENIX-4103:
---

Yes, I've filed PHOENIX-4096 to make a PhoenixConnection read-only when an SCN 
is specified.

> Convert ColumnProjectionOptimizationIT to extend ParallelStatsDisabledIT
> 
>
> Key: PHOENIX-4103
> URL: https://issues.apache.org/jira/browse/PHOENIX-4103
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4103.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4096) Disallow DML operations on connections with CURRENT_SCN set

2017-08-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4096:
-

Assignee: James Taylor

> Disallow DML operations on connections with CURRENT_SCN set
> ---
>
> Key: PHOENIX-4096
> URL: https://issues.apache.org/jira/browse/PHOENIX-4096
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>
> We should make a connection read-only if CURRENT_SCN is set. It's really a 
> bad idea to go back in time and update data, and it won't work with secondary 
> indexing, potentially leading to your index and table getting out of sync.
> For testing purposes, where we need to control the timestamp, we should rely 
> on the EnvironmentEdgeManager instead to control the current time.
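
For illustration, the behaviour being proposed (not necessarily what current releases 
do) would look like this from JDBC, using the standard CurrentSCN connection property:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;

public class ScnReadOnlyExample {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", "1503680000000"); // point-in-time timestamp
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Queries against the historical snapshot are fine...
            try (ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM T")) {
                rs.next();
            }
            // ...but under this proposal any UPSERT or DELETE issued on this
            // connection would be rejected rather than writing into the past.
        }
    }
}
{code}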



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4129) Handle partial rebuild of an index on a view

2017-08-25 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4129:
-

 Summary: Handle partial rebuild of an index on a view
 Key: PHOENIX-4129
 URL: https://issues.apache.org/jira/browse/PHOENIX-4129
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


I don't think we have any tests of partially rebuilding an index on a view. We 
should test this and verify it works correctly. We'll need to be careful since 
the underlying physical table for the index does not have an entry in the 
SYSTEM.CATALOG table. We'll need to track disablement and rebuild at the view 
level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142031#comment-16142031
 ] 

Hadoop QA commented on PHOENIX-3953:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12883781/PHOENIX-3953_v2.patch
  against master branch at commit a5fb51948225ee7ee9ece76d233f4a150006e4a9.
  ATTACHMENT ID: 12883781

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1303//console

This message is automatically generated.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4122) Prevent IllegalMonitorStateException from occurring when releasing row locks

2017-08-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142027#comment-16142027
 ] 

James Taylor commented on PHOENIX-4122:
---

[~samarthjain] - would you have any spare cycles to review this?

> Prevent IllegalMonitorStateException from occurring when releasing row locks
> 
>
> Key: PHOENIX-4122
> URL: https://issues.apache.org/jira/browse/PHOENIX-4122
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4122.patch
>
>
> In this iteration, after the RS process was killed, indexes were disabled and 
> soon their status was changed to the INACTIVE state.
> After all regions were re-assigned, we observed that two RS were aborted with 
> the below exception in the logs:
> {code}
> org.apache.phoenix.hbase.index.Indexer threw 
> java.lang.IllegalMonitorStateException
> java.lang.IllegalMonitorStateException
> at 
> java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
> at 
> java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockImpl.release(LockManager.java:223)
> at 
> org.apache.phoenix.hbase.index.LockManager$RowLockContext.releaseRowLock(LockManager.java:166)
> at 
> org.apache.phoenix.hbase.index.LockManager.unlockRow(LockManager.java:131)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1023)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1653)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1019)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2837)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2410)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2365)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:791)
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:705)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:306)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3361)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> 2017-08-24 07:47:19,916 FATAL [4,queue=4,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142028#comment-16142028
 ] 

Samarth Jain commented on PHOENIX-3953:
---

Or is the indexer coprocessor only added when there is a mutable index on a 
table? 

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4128) CNF org/apache/commons/math3/exception/OutOfRangeException

2017-08-25 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-4128:
---

 Summary: CNF 
org/apache/commons/math3/exception/OutOfRangeException 
 Key: PHOENIX-4128
 URL: https://issues.apache.org/jira/browse/PHOENIX-4128
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Mujtaba Chohan
Priority: Minor
 Fix For: 4.12.0


{noformat}
Error: org.apache.hadoop.hbase.DoNotRetryIOException: T: 
org/apache/commons/math3/exception/OutOfRangeException
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:92)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:58)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2163)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropColumn(MetaDataEndpointImpl.java:3243)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16317)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6082)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3533)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3515)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
...
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/commons/math3/exception/OutOfRangeException
>   at 
> org.apache.phoenix.parse.ParseNodeFactory.namedTable(ParseNodeFactory.java:404)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.upsert_node(PhoenixSQLParser.java:5239)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:827)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:519)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1497)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-418) Support approximate COUNT DISTINCT

2017-08-25 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-418:
---
Attachment: PHOENIX-418-v7.patch

> Support approximate COUNT DISTINCT
> --
>
> Key: PHOENIX-418
> URL: https://issues.apache.org/jira/browse/PHOENIX-418
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Ethan Wang
>  Labels: gsoc2016
> Attachments: PHOENIX-418-v1.patch, PHOENIX-418-v2.patch, 
> PHOENIX-418-v3.patch, PHOENIX-418-v4.patch, PHOENIX-418-v5.patch, 
> PHOENIX-418-v6.patch, PHOENIX-418-v7.patch
>
>
> Support an "approximation" of count distinct to prevent having to hold on to 
> all distinct values (since this will not scale well when the number of 
> distinct values is huge). The Apache Drill folks have had some interesting 
> discussions on this 
> [here](http://mail-archives.apache.org/mod_mbox/incubator-drill-dev/201306.mbox/%3CJIRA.12650169.1369931282407.88049.1370645900553%40arcas%3E).
>  They recommend using  [Welford's 
> method](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance_Online_algorithm).
>  I'm open to having a config option that uses exact versus approximate. I 
> don't have experience implementing an approximate implementation, so I'm not 
> sure how much state is required to keep on the server and return to the 
> client (other than realizing it'd be much less than returning all distinct 
> values and their counts).
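
The attached patch series implements the approximation with HyperLogLog and exposes it as an APPROX_COUNT_DISTINCT aggregate (both names show up in the QA report further down this thread). As a rough illustration of how a client might invoke it over JDBC; the connection URL, table, and column here are hypothetical:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ApproxCountDistinctExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL and table; adjust for your cluster and schema.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT APPROX_COUNT_DISTINCT(HOST) FROM WEB_STAT")) {
            while (rs.next()) {
                // Approximate: bounded HyperLogLog state on the server instead of
                // holding every distinct value in memory.
                System.out.println("approx distinct hosts: " + rs.getLong(1));
            }
        }
    }
}
{code}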



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16142024#comment-16142024
 ] 

Samarth Jain commented on PHOENIX-3953:
---

Thanks for the updated patch, James. Do we want to do this only for mutable 
indexes?

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3953:
--
Attachment: PHOENIX-3953_v2.patch

Thanks for the review, [~samarthjain]. Yes, you're absolutely right - I should 
have been checking for compaction of the data table. Please take a look at this 
v2 patch.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4103) Convert ColumnProjectionOptimizationIT to extend ParallelStatsDisabledIT

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141955#comment-16141955
 ] 

Samarth Jain commented on PHOENIX-4103:
---

Will do, [~jamestaylor]. I think we should disable DML and DDL statements across 
the board when CURRENT_SCN is set. There are several edge cases that are hard 
to reason about when we allow DML and DDL operations with an SCN (as we 
discussed offline). Let me know if there is already a JIRA for that; otherwise I 
will file one.
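
For context, a connection gets pinned to a point in time by setting the CurrentSCN connection property; the proposal is to reject DML/DDL on such connections rather than try to reason about all the edge cases. A minimal sketch, with the connection URL and table name hypothetical:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class CurrentScnExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // "CurrentSCN" pins the connection to an explicit timestamp; reads on this
        // connection only see cells written at or before that point in time.
        props.setProperty("CurrentSCN", Long.toString(System.currentTimeMillis() - 60_000L));
        // Hypothetical connection URL and table.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM WEB_STAT")) {
            while (rs.next()) {
                System.out.println("rows as of SCN: " + rs.getLong(1));
            }
            // Under the proposal above, any DML or DDL issued on this connection would
            // be rejected outright rather than applied at the pinned timestamp.
        }
    }
}
{code}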

> Convert ColumnProjectionOptimizationIT to extend ParallelStatsDisabledIT
> 
>
> Key: PHOENIX-4103
> URL: https://issues.apache.org/jira/browse/PHOENIX-4103
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4103.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4125) Shutdown and start metrics system when mini cluster is shutdown and started

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141950#comment-16141950
 ] 

Samarth Jain commented on PHOENIX-4125:
---

I realized that with PHOENIX-3752, tracing no longer depends on the Hadoop 
metrics system. That change was intentionally not made to the 4.x-HBase-0.98 
branch. So for branches other than 4.x-HBase-0.98, we can just disable the 
metrics system (I hope there is a way to do that). If there isn't (and I know 
JMXCacheBuster keeps restarting the metrics system), we can always keep tearing 
it down whenever we tear down our test driver (in the various @AfterClass 
methods). 
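
A minimal sketch of that teardown, assuming the metrics system in question is Hadoop's DefaultMetricsSystem singleton (the test class name here is hypothetical):

{code}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.junit.AfterClass;

public class SomeMiniClusterIT {

    // ... tests that spin up and tear down the mini cluster ...

    @AfterClass
    public static void tearDownMetricsSystem() {
        // Stop the singleton Hadoop metrics system along with the test driver so its
        // sources and sinks don't pile up across mini cluster restarts. JMXCacheBuster
        // may restart it, so this may need to run after every teardown, not just once.
        DefaultMetricsSystem.shutdown();
    }
}
{code}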

> Shutdown and start metrics system when mini cluster is shutdown and started
> ---
>
> Key: PHOENIX-4125
> URL: https://issues.apache.org/jira/browse/PHOENIX-4125
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>
> While doing analysis for PHOENIX-4110, I noticed that the metrics system is 
> keeping a lot of garbage around even though we frequently restart the mini 
> cluster. This is because the metrics system is a singleton and is only 
> shut down when the JVM is shut down. We should figure out a way to either 
> disable it, start it only for the tests (primarily Tracing related) that use 
> it, or tie its lifecycle to the mini-cluster's lifecycle.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-4127:
-

Assignee: Samarth Jain

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called system.exit() or system.halt(). We have a shutdown hook in 
> BaseTest which does that. Such failed runs happen more frequently on the  
> 4.x-HBase-0.98 branch. And sometimes on the master and 1.1 branch. 
> I know the code was added because we were seeing test runs hang. Since we 
> changed a lot of tests since then, I would like to remove that hook and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4127:
--
Attachment: PHOENIX-4127.patch

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called system.exit() or system.halt(). We have a shutdown hook in 
> BaseTest which does that. Such failed runs happen more frequently on the  
> 4.x-HBase-0.98 branch. And sometimes on the master and 1.1 branch. 
> I know the code was added because we were seeing test runs hang. Since we 
> changed a lot of tests since then, I would like to remove that hook and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4127:
--
Attachment: PHOENIX-4127_4.x-HBase-0.98.patch

> Shutdownhook that halts JVM is causing test runs to fail
> 
>
> Key: PHOENIX-4127
> URL: https://issues.apache.org/jira/browse/PHOENIX-4127
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4127_4.x-HBase-0.98.patch, PHOENIX-4127.patch
>
>
> When all the unit tests actually pass, surefire still reports that there was 
> an error in the fork. This is because it doesn't like the fact that a forked 
> JVM called system.exit() or system.halt(). We have a shutdown hook in 
> BaseTest which does that. Such failed runs happen more frequently on the  
> 4.x-HBase-0.98 branch. And sometimes on the master and 1.1 branch. 
> I know the code was added because we were seeing test runs hang. Since we 
> changed a lot of tests since then, I would like to remove that hook and see 
> if it helps.
> {code}
> private static String checkClusterInitialized(ReadOnlyProps serverProps) 
> throws Exception {
> if (!clusterInitialized) {
> url = setUpTestCluster(config, serverProps);
> clusterInitialized = true;
> Runtime.getRuntime().addShutdownHook(new Thread() {
> @Override
> public void run() {
> logger.info("SHUTDOWN: halting JVM now");
> Runtime.getRuntime().halt(0);
> }
> });
> }
> return url;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4127) Shutdownhook that halts JVM is causing test runs to fail

2017-08-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4127:
-

 Summary: Shutdownhook that halts JVM is causing test runs to fail
 Key: PHOENIX-4127
 URL: https://issues.apache.org/jira/browse/PHOENIX-4127
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


Even when all the unit tests pass, surefire still reports that there was an 
error in the fork. This is because it doesn't like that a forked JVM called 
System.exit() or Runtime.halt(), and we have a shutdown hook in BaseTest which 
does exactly that. Such failed runs happen more frequently on the 4.x-HBase-0.98 
branch, and sometimes on the master and 1.1 branches.

I know the code was added because we were seeing test runs hang. Since we have 
changed a lot of tests since then, I would like to remove that hook and see if 
it helps.

{code}
private static String checkClusterInitialized(ReadOnlyProps serverProps) throws Exception {
    if (!clusterInitialized) {
        url = setUpTestCluster(config, serverProps);
        clusterInitialized = true;
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                logger.info("SHUTDOWN: halting JVM now");
                Runtime.getRuntime().halt(0);
            }
        });
    }
    return url;
}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4080) The error message for version mismatch is not accurate.

2017-08-25 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141891#comment-16141891
 ] 

Geoffrey Jacoby commented on PHOENIX-4080:
--

[~aertoria], can you confirm that the failing tests aren't related to your 
change? (I wouldn't expect them to be, but that should be checked.) If so, I'll 
commit this. 

> The error message for version mismatch is not accurate.
> ---
>
> Key: PHOENIX-4080
> URL: https://issues.apache.org/jira/browse/PHOENIX-4080
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.11.0
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4080.patch, PHOENIX-4080-v2.patch
>
>
> When accessing a cluster running 4.10 with a 4.11 client, the error message reads:
> The following servers require an updated phoenix.jar to be put in the 
> classpath of HBase: region=SYSTEM.CATALOG
> It should say phoenix-[version]-server.jar rather than phoenix.jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-08-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141849#comment-16141849
 ] 

Josh Elser commented on PHOENIX-3655:
-

I'm finding it difficult to weigh in because the scope of what you want to do 
is very broad, [~rahulshrivastava]. We've touched on many things already here:

* Existing metrics being collected in Avatica already
* Existing metrics being collected by the PhoenixDriver (inside PQS) already
* HTrace to do runtime analysis of specific actions
* The use of the new hbase-metrics-api for aggregation of metrics data

Re-reading this issue's description, as well as the supplemental PDF, we 
immediately dive into how to do the metrics work, rather than first deciding what 
kind of information we want (and basing how to implement collection/reporting on 
that). I think it would make sense to take a step back and consider what the 
high-level goals are: list the things we want to measure, consider what the tools 
already at our disposal do, and identify what gaps keep us from observing the 
things we want to measure.
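
To ground the second bullet above: PQS embeds the regular Phoenix JDBC driver, so the driver-level global client metrics are already reachable in-process. A rough sketch, assuming the global client metrics entry point in PhoenixRuntime; the accessors on the returned metric objects vary by release, so this just prints them:

{code}
import java.util.Collection;

import org.apache.phoenix.monitoring.GlobalMetric;
import org.apache.phoenix.util.PhoenixRuntime;

public class GlobalMetricsDump {
    public static void main(String[] args) {
        // JVM-wide client metrics accumulated by the Phoenix driver that PQS embeds;
        // a PQS-side reporter could periodically read these and push them to a sink.
        Collection<GlobalMetric> metrics = PhoenixRuntime.getGlobalPhoenixClientMetrics();
        for (GlobalMetric metric : metrics) {
            System.out.println(metric);
        }
    }
}
{code}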

> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.12.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs in a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the Phoenix 
> driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an external 
> sink, i.e. file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml.
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metric definitions: https://phoenix.apache.org/metrics.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4126) Reset FAIL_WRITE flag in MutableIndexFailureIT

2017-08-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-4126.
---
   Resolution: Fixed
Fix Version/s: 4.12.0

> Reset FAIL_WRITE flag in MutableIndexFailureIT
> --
>
> Key: PHOENIX-4126
> URL: https://issues.apache.org/jira/browse/PHOENIX-4126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4126.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4126) Reset FAIL_WRITE flag in MutableIndexFailureIT

2017-08-25 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4126:
--
Attachment: PHOENIX-4126.patch

> Reset FAIL_WRITE flag in MutableIndexFailureIT
> --
>
> Key: PHOENIX-4126
> URL: https://issues.apache.org/jira/browse/PHOENIX-4126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4126.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4126) Reset FAIL_WRITE flag in MutableIndexFailureIT

2017-08-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4126:
-

 Summary: Reset FAIL_WRITE flag in MutableIndexFailureIT
 Key: PHOENIX-4126
 URL: https://issues.apache.org/jira/browse/PHOENIX-4126
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4125) Shutdown and start metrics system when mini cluster is shutdown and started

2017-08-25 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4125:
-

 Summary: Shutdown and start metrics system when mini cluster is 
shutdown and started
 Key: PHOENIX-4125
 URL: https://issues.apache.org/jira/browse/PHOENIX-4125
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


While doing analysis for PHOENIX-4110, I noticed that the metrics system is 
keeping a lot of garbage around even though we frequently restart the mini 
cluster. This is because the metrics system is a singleton and is only shut down 
when the JVM is shut down. We should figure out a way to either disable it, 
start it only for the tests (primarily Tracing related) that use it, or tie its 
lifecycle to the mini-cluster's lifecycle.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4118) Disable tests in ImmutableIndexIT till PHOENIX-2582 is fixed

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141773#comment-16141773
 ] 

Hudson commented on PHOENIX-4118:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1750 (See 
[https://builds.apache.org/job/Phoenix-master/1750/])
PHOENIX-4118 Disable tests in ImmutableIndexIT till PHOENIX-2582 is (samarth: 
rev dd008f4fe4e67836a6fb362843c9618e4e518182)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java


> Disable tests in ImmutableIndexIT till PHOENIX-2582 is fixed
> 
>
> Key: PHOENIX-4118
> URL: https://issues.apache.org/jira/browse/PHOENIX-4118
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4118.patch
>
>
> In a couple of test runs like 
> https://builds.apache.org/job/PreCommit-PHOENIX-Build/1292//testReport/, I 
> have seen that the tests that check that the immutable index has all of the 
> data table rows flap. I think we need to fix PHOENIX-2582 first, since the 
> catch-up query that runs the index-populating UPSERT SELECT is a bit arbitrary 
> and dependent on getting the timing right.
> [~jamestaylor]/[~tdsilva] - let me know if you have concerns.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2582) Prevent need of catch up query when creating non transactional index

2017-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141774#comment-16141774
 ] 

Hudson commented on PHOENIX-2582:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1750 (See 
[https://builds.apache.org/job/Phoenix-master/1750/])
PHOENIX-4118 Disable tests in ImmutableIndexIT till PHOENIX-2582 is (samarth: 
rev dd008f4fe4e67836a6fb362843c9618e4e518182)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java


> Prevent need of catch up query when creating non transactional index
> 
>
> Key: PHOENIX-2582
> URL: https://issues.apache.org/jira/browse/PHOENIX-2582
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>
> If we create an index while we are upserting rows to the table, it's possible 
> we can miss writing the corresponding rows to the index table. 
> If a region server is writing a batch of rows and we create an index just 
> before the batch is written, we will miss writing that batch to the index 
> table. This is because we run the initial UPSERT SELECT to populate the index 
> with an SCN that we get from the server, which will be before the timestamp 
> at which the batch of rows is written. 
> We need to figure out if there is a way to determine that all pending batches 
> have been written before running the UPSERT SELECT to do the initial index 
> population.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-418) Support approximate COUNT DISTINCT

2017-08-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141767#comment-16141767
 ] 

Josh Elser commented on PHOENIX-418:


bq. So we should add that text to our NOTICE file, right?

Correct, specifically to {{dev/release_files/NOTICE}} only (not the top-level 
NOTICE file as that is for our src-release which would not actually contain 
this JAR).

> Support approximate COUNT DISTINCT
> --
>
> Key: PHOENIX-418
> URL: https://issues.apache.org/jira/browse/PHOENIX-418
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Ethan Wang
>  Labels: gsoc2016
> Attachments: PHOENIX-418-v1.patch, PHOENIX-418-v2.patch, 
> PHOENIX-418-v3.patch, PHOENIX-418-v4.patch, PHOENIX-418-v5.patch, 
> PHOENIX-418-v6.patch
>
>
> Support an "approximation" of count distinct to prevent having to hold on to 
> all distinct values (since this will not scale well when the number of 
> distinct values is huge). The Apache Drill folks have had some interesting 
> discussions on this 
> [here](http://mail-archives.apache.org/mod_mbox/incubator-drill-dev/201306.mbox/%3CJIRA.12650169.1369931282407.88049.1370645900553%40arcas%3E).
>  They recommend using  [Welford's 
> method](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance_Online_algorithm).
>  I'm open to having a config option that uses exact versus approximate. I 
> don't have experience implementing an approximate implementation, so I'm not 
> sure how much state is required to keep on the server and return to the 
> client (other than realizing it'd be much less than returning all distinct 
> values and their counts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-418) Support approximate COUNT DISTINCT

2017-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141430#comment-16141430
 ] 

Hadoop QA commented on PHOENIX-418:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12883652/PHOENIX-418-v6.patch
  against master branch at commit dd008f4fe4e67836a6fb362843c9618e4e518182.
  ATTACHMENT ID: 12883652

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
56 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   String query = "SELECT APPROX_COUNT_DISTINCT(a.i1||a.i2||b.i2) 
FROM " + tableName + " a, " + tableName
+   final private void prepareTableWithValues(final Connection conn, final 
int nRows) throws Exception {
+   final PreparedStatement stmt = conn.prepareStatement("upsert 
into " + tableName + " VALUES (?, ?)");
+@BuiltInFunction(name=DistinctCountHyperLogLogAggregateFunction.NAME, 
nodeClass=DistinctCountHyperLogLogAggregateParseNode.class, args= {@Argument()} 
)
+public DistinctCountHyperLogLogAggregateFunction(List 
childExpressions, CountAggregateFunction delegate){
+   private HyperLogLogPlus hll = new 
HyperLogLogPlus(DistinctCountHyperLogLogAggregateFunction.NormalSetPrecision, 
DistinctCountHyperLogLogAggregateFunction.SparseSetPrecision);
+   private HyperLogLogPlus hll = new 
HyperLogLogPlus(DistinctCountHyperLogLogAggregateFunction.NormalSetPrecision, 
DistinctCountHyperLogLogAggregateFunction.SparseSetPrecision);
+   protected final ImmutableBytesWritable valueByteArray = new 
ImmutableBytesWritable(ByteUtil.EMPTY_BYTE_ARRAY);
+public DistinctCountHyperLogLogAggregateParseNode(String name, 
List children, BuiltInFunctionInfo info) {
+public FunctionExpression create(List children, 
StatementContext context) throws SQLException {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DerivedTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SequenceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CustomEntityDataIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexReplicationIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CreateTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.OrderByIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.FirstValuesFunctionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatementHintsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RowValueConstructorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.InListIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IsNullIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.RollbackIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatsCollectorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DynamicUpsertIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryExecWithoutSCNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RenewLeaseIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TopNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.EvaluationOfORIT

[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-08-25 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141303#comment-16141303
 ] 

Samarth Jain commented on PHOENIX-3953:
---

Thanks for the patch, [~jamestaylor]. I have a couple of comments:

- With this patch, we are setting the index_disable_timestamp to 0 when the 
major compaction is running for the *index* table. I think we would need to do 
the same when major compaction is running on the data table too. I am not too 
sure what would happen if major compaction runs on a disabled index; it may or 
may not be ok. But major compaction running on a data table while a global 
mutable index on it is disabled is definitely not good.

- I am not too sure that adding this logic to stats collection is the best 
choice. In fact, I was planning on filing a JIRA for a config that controls 
whether we collect stats for tables via major compaction. I think it would make 
sense to refactor this logic into its own method and call it from the preCompact 
hook of UngroupedAggregateRegionObserver (see the sketch below).
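
Roughly what that refactoring might look like as an HBase coprocessor hook. This is a sketch, not the attached patch; the observer class and the disableIndexesForCompactedDataTable helper are hypothetical, and the real logic would live in UngroupedAggregateRegionObserver:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.Store;

public class DisableIndexOnCompactionObserver extends BaseRegionObserver {

    @Override
    public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c,
            Store store, InternalScanner scanner, ScanType scanType) throws IOException {
        // COMPACT_DROP_DELETES is the case that can discard the delete markers and puts
        // that the partial index rebuild depends on, i.e. a major compaction.
        if (scanType == ScanType.COMPACT_DROP_DELETES) {
            disableIndexesForCompactedDataTable(c.getEnvironment()); // hypothetical helper
        }
        return scanner;
    }

    private void disableIndexesForCompactedDataTable(RegionCoprocessorEnvironment env) {
        // Would clear INDEX_DISABLE_TIMESTAMP and flip the index state to DISABLE for the
        // global mutable indexes of the data table that owns this region (details elided).
    }
}
{code}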

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4119) Upsert uses timestamp of server holding table metadata

2017-08-25 Thread Mark Christiaens (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16141291#comment-16141291
 ] 

Mark Christiaens commented on PHOENIX-4119:
---

Badly synchronized clocks were how we ran into this issue. Properly enabling 
NTP helped suppress the error, but perfectly synchronized clocks are physically 
impossible, so the issue could still pop up.

> Upsert uses timestamp of server holding table metadata
> --
>
> Key: PHOENIX-4119
> URL: https://issues.apache.org/jira/browse/PHOENIX-4119
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Mark Christiaens
>
> When doing an upsert, I noticed that the resulting put-command to HBase uses 
> a timestamp obtained from the server managing the corresponding table's 
> metadata (cfr. 
> https://issues.apache.org/jira/browse/PHOENIX-1674?focusedCommentId=14349982=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14349982).
>   
> Now, if the row that is updated resides on a second server, that second 
> server could have some clock skew.  After the upsert, if you fetch that 
> updated row without specifying an explicit query timestamp, then you might 
> not see the effects of the upsert until the second server's clock has caught 
> up with the first server's clock.
> I think that the desired behavior is that the puts performed occur with a 
> timestamp derived from the region server where their rows are stored.  (Of 
> course, that still leaves the problem of region migration from one server to 
> the next but it's a step in the right direction).  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)