Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #556

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[samarth] PHOENIX-4143 ConcurrentMutationsIT flaps

--
[...truncated 313.95 KB...]
at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:495)
at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:490)
at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
at org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Exception encountered when invoking run on a nested suite - java.io.IOException: Cannot create directory 

 *** ABORTED ***
  java.lang.RuntimeException: java.io.IOException: Cannot create directory 

  at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:525)
  at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:442)
  at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:424)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:495)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:490)
  at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
  at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
  ...
  Cause: java.io.IOException: Cannot create directory 

  at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
  at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
  at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
  at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
  at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
  ...
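Note on the failure above: the suite aborts because MiniDFSCluster cannot format the NameNode storage directory on the build slave, typically because a stale or permission-broken data directory is reused between runs. A minimal, hypothetical sketch of the usual guard (not Phoenix code; the directory name is ours) is to point MiniDFSCluster at a fresh, unique base directory through its standard test.build.data property before starting the cluster:

import java.io.File;
import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class FreshDataDirExample {
    public static void main(String[] args) throws IOException {
        // Create a unique, writable base directory so NameNode formatting
        // cannot collide with leftovers from a previous test run.
        File baseDir = new File(System.getProperty("java.io.tmpdir"),
                "dfs-test-" + UUID.randomUUID());
        if (!baseDir.mkdirs()) {
            throw new IOException("Cannot create directory " + baseDir);
        }
        // MiniDFSCluster resolves its storage directories under this
        // system property by default.
        System.setProperty(MiniDFSCluster.PROP_TEST_BUILD_DATA,
                baseDir.getAbsolutePath());

        MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration()).build();
        try {
            System.out.println("Mini DFS up at " + cluster.getFileSystem().getUri());
        } finally {
            cluster.shutdown();
        }
    }
}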
4743 [RpcServer.reader=1,bindAddress=asf927.gq1.ygridcore.net,port=43567] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 67.195.81.163 
port: 42955 with version info: version: "1.1.9" url: 
"git://diocles.local/Volumes/hbase-1.1.9/hbase" revision: 
"0d1feabed5295495ed2257d31fab9e6553e8a9d7" user: "ndimiduk" date: "Mon Feb 20 
22:35:28 PST 2017" src_checksum: "b68339108ddccd1dfc44a76646588a58"
5688 [RpcServer.reader=1,bindAddress=asf927.gq1.ygridcore.net,port=43617] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 67.195.81.163 
port: 42416 with version info: version: "1.1.9" url: 
"git://diocles.local/Volumes/hbase-1.1.9/hbase" revision: 
"0d1feabed5295495ed2257d31fab9e6553e8a9d7" user: "ndimiduk" date: "Mon Feb 20 
22:35:28 PST 2017" src_checksum: "b68339108ddccd1dfc44a76646588a58"
5920 [RpcServer.reader=2,bindAddress=asf927.gq1.ygridcore.net,port=43617] INFO  
SecurityLogger.org.apache.hadoop.hbase.Serve

Build failed in Jenkins: Phoenix | Master #1762

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[samarth] PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and

[samarth] PHOENIX-4143 ConcurrentMutationsIT flaps

--
[...truncated 331.06 KB...]
- Can create schema RDD and execute query on case sensitive table (no config)
88574 [RpcServer.reader=9,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
88574 [RpcServer.reader=9,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 44684 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
88598 [RpcServer.reader=7,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
88610 [RpcServer.reader=7,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 58978 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
88742 [RpcServer.reader=0,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
88743 [RpcServer.reader=0,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 44690 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
88767 [RpcServer.reader=8,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
88768 [RpcServer.reader=8,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 58984 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
- Can create schema RDD and execute constrained query
91533 [RpcServer.reader=1,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
91534 [RpcServer.reader=1,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 44702 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
91549 [RpcServer.reader=9,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
91551 [RpcServer.reader=9,bindAddress=qnode1.quenda.co,port=33917] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 58996 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 UTC 2017" src_checksum: "791334e624f150dd04641b4a3245a4b9" 
version_major: 1 version_minor: 3
- Can create schema RDD with predicate that will never match
92078 [RpcServer.reader=2,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Auth successful for jenkins 
(auth:SIMPLE)
92098 [RpcServer.reader=2,bindAddress=qnode1.quenda.co,port=40555] INFO  
SecurityLogger.org.apache.hadoop.hbase.Server  - Connection from 127.0.0.1 
port: 44708 with version info: version: "1.3.1" url: 
"git://qnode1/home/jenkins/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/phoenix/hbase"
 revision: "2b60f4ecd98da8b4c74f044cc1ec0d221d960399" user: "jenkins" date: 
"Thu Jul 20 16:30:37 U

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #555

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[samarth] PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and

--
[...truncated 100.23 KB...]
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.653 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
225.249 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 289.91 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
2,076.694 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3039, Failures: 0, Errors: 0, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.DistinctCountIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.CreateTableIT
[INFO] Running org.apache.phoenix.end2end.CreateSchemaIT
[INFO] Running org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Running org.apache.phoenix.end2end.DerivedTableIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.457 s 
- in org.apache.phoenix.end2end.CreateSchemaIT
[INFO] Running org.apache.phoenix.end2end.ExtendedQueryExecIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.339 s 
- in org.apache.phoenix.end2end.ExtendedQueryExecIT
[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.46 s - 
in org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.519 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.ProductMetricsIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.453 s 
- in org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.934 
s - in org.apache.phoenix.end2end.DerivedTableIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.751 
s - in org.apache.phoenix.end2end.DistinctCountIT
[INFO] Running org.apache.phoenix.end2end.ReadIsolationLevelIT
[INFO] Running org.apache.phoenix.end2end.RowValueConstructorIT
[INFO] Running org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.503 s 
- in org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.232 s 
- in org.apache.phoenix.end2end.ReadIsolationLevelIT
[INFO] Running org.apache.phoenix.end2end.SequenceIT
[INFO] Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.609 
s - in org.apache.phoenix.end2end.ProductMetricsIT
[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.033 
s - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
[INFO] Running org.apache.phoenix.end2end.ToNumberFunctionIT
[INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.374 
s - in org.apache.phoenix.end2end.SequenceIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.621 s 
- in org.apache.phoenix.end2end.ToNumberFunctionIT
[INFO] Running org.apache.phoenix.end2end.TruncateFunctionIT
[INFO] Running org.apache.phoenix.end2end.TopNIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.836 s 
- in org.apache.phoenix.end2end.TruncateFunctionIT
[INFO] Running org.apache.phoenix.end2end.VariableLengthPKIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.366 s 
- in org.apache.phoenix.end2end.TopNIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.583 s 
- in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.146 s 
- in org.apache.phoenix.rpc.UpdateCacheWithScnIT
[INFO] Running org.apache.phoenix.end2end.UpsertValuesIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.742 
s - in org.apache.phoenix.end2end.VariableLengthPKIT
[INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.286 
s - in org.apache.phoenix.end2end.RowValueConstructorIT
[INFO] Tests run: 19, Failures: 0, E

Build failed in Jenkins: Phoenix | Master #1761

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[jtaylor] Revert "PHOENIX-3815 Only disable indexes on which write failures

--
[...truncated 96.98 KB...]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.098 s 
- in org.apache.phoenix.end2end.index.SaltedIndexIT
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.259 s 
- in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 308.894 
s - in org.apache.phoenix.end2end.index.DropColumnIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.1 s - 
in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.655 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.326 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.216 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.492 s 
- in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.405 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.054 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 407.122 
s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.415 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.945 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 968.97 
s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.303 s 
- in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.299 s 
- in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 440.531 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.115 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
196.16 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 261.427 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 658.978 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 699.374 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,866.914 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3043, Failures: 0, Errors: 0, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.pho

phoenix git commit: PHOENIX-4143 ConcurrentMutationsIT flaps

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 ed9acd505 -> a92c01ebe


PHOENIX-4143 ConcurrentMutationsIT flaps


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a92c01eb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a92c01eb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a92c01eb

Branch: refs/heads/4.x-HBase-0.98
Commit: a92c01ebe9de1e8d57840564fbde4ee324014691
Parents: ed9acd5
Author: Samarth Jain 
Authored: Thu Aug 31 17:46:53 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:46:53 2017 -0700

--
 .../phoenix/end2end/ConcurrentMutationsIT.java | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a92c01eb/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
index 83b9913..6d327f7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
@@ -64,16 +64,24 @@ public class ConcurrentMutationsIT extends ParallelStatsDisabledIT {
     private final Object lock = new Object();
     private long scn = 100;
 
-    private static void addDelayingCoprocessor(Connection conn, String tableName) throws SQLException, IOException {
+    private static void addDelayingCoprocessor(Connection conn, String tableName) throws Exception {
         int priority = QueryServicesOptions.DEFAULT_COPROCESSOR_PRIORITY + 100;
         ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
         HTableDescriptor descriptor = services.getTableDescriptor(Bytes.toBytes(tableName));
         descriptor.addCoprocessor(DelayingRegionObserver.class.getName(), null, priority, null);
-        HBaseAdmin admin = services.getAdmin();
-        try {
+        int numTries = 10;
+        try (HBaseAdmin admin = services.getAdmin()) {
             admin.modifyTable(Bytes.toBytes(tableName), descriptor);
-        } finally {
-            admin.close();
+            while (!admin.getTableDescriptor(Bytes.toBytes(tableName)).equals(descriptor)
+                    && numTries > 0) {
+                numTries--;
+                if (numTries == 0) {
+                    throw new Exception(
+                            "Check to detect if delaying co-processor was added failed after "
+                                    + numTries + " retries.");
+                }
+                Thread.sleep(1000);
+            }
         }
     }
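
The fix follows a common recipe for flapping tests: apply an asynchronous change (admin.modifyTable), then poll until it becomes visible or a bounded retry budget runs out. A small, hypothetical helper isolating that pattern (RetryUtil and waitUntil are our names, not a Phoenix API):

import java.util.concurrent.Callable;

// Hypothetical sketch, not Phoenix code: poll a condition until it holds or
// the retry budget is exhausted, mirroring the loop added around modifyTable().
public final class RetryUtil {
    private RetryUtil() {}

    public static void waitUntil(Callable<Boolean> condition, int maxTries,
            long sleepMillis) throws Exception {
        for (int attempt = 1; attempt <= maxTries; attempt++) {
            if (condition.call()) {
                return; // the asynchronous change is now visible
            }
            Thread.sleep(sleepMillis);
        }
        throw new Exception("Condition not met after " + maxTries + " tries.");
    }
}

With such a helper the patched method would reduce to a single call like waitUntil(() -> admin.getTableDescriptor(name).equals(descriptor), 10, 1000) (Java 8 syntax). It also sidesteps a quirk of the committed loop, which decrements numTries before the check and therefore reports "0 retries" in its failure message.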
 



phoenix git commit: PHOENIX-4143 ConcurrentMutationsIT flaps

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 d1ee8159c -> 64975baab


PHOENIX-4143 ConcurrentMutationsIT flaps


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/64975baa
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/64975baa
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/64975baa

Branch: refs/heads/4.x-HBase-1.1
Commit: 64975baab035173dd83c2fcb924eb705bc907d9a
Parents: d1ee815
Author: Samarth Jain 
Authored: Thu Aug 31 17:46:23 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:46:23 2017 -0700

--
 .../phoenix/end2end/ConcurrentMutationsIT.java | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/64975baa/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
index 83b9913..6d327f7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
@@ -64,16 +64,24 @@ public class ConcurrentMutationsIT extends ParallelStatsDisabledIT {
     private final Object lock = new Object();
     private long scn = 100;
 
-    private static void addDelayingCoprocessor(Connection conn, String tableName) throws SQLException, IOException {
+    private static void addDelayingCoprocessor(Connection conn, String tableName) throws Exception {
         int priority = QueryServicesOptions.DEFAULT_COPROCESSOR_PRIORITY + 100;
         ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
         HTableDescriptor descriptor = services.getTableDescriptor(Bytes.toBytes(tableName));
         descriptor.addCoprocessor(DelayingRegionObserver.class.getName(), null, priority, null);
-        HBaseAdmin admin = services.getAdmin();
-        try {
+        int numTries = 10;
+        try (HBaseAdmin admin = services.getAdmin()) {
             admin.modifyTable(Bytes.toBytes(tableName), descriptor);
-        } finally {
-            admin.close();
+            while (!admin.getTableDescriptor(Bytes.toBytes(tableName)).equals(descriptor)
+                    && numTries > 0) {
+                numTries--;
+                if (numTries == 0) {
+                    throw new Exception(
+                            "Check to detect if delaying co-processor was added failed after "
+                                    + numTries + " retries.");
+                }
+                Thread.sleep(1000);
+            }
         }
     }
 



phoenix git commit: PHOENIX-4143 ConcurrentMutationsIT flaps

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master c2e85f213 -> 1aabbfa0d


PHOENIX-4143 ConcurrentMutationsIT flaps


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1aabbfa0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1aabbfa0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1aabbfa0

Branch: refs/heads/master
Commit: 1aabbfa0d04261d1cebba845deaebf4c4048ae62
Parents: c2e85f2
Author: Samarth Jain 
Authored: Thu Aug 31 17:45:35 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:45:35 2017 -0700

--
 .../phoenix/end2end/ConcurrentMutationsIT.java | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1aabbfa0/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
index 83b9913..6d327f7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
@@ -64,16 +64,24 @@ public class ConcurrentMutationsIT extends ParallelStatsDisabledIT {
     private final Object lock = new Object();
     private long scn = 100;
 
-    private static void addDelayingCoprocessor(Connection conn, String tableName) throws SQLException, IOException {
+    private static void addDelayingCoprocessor(Connection conn, String tableName) throws Exception {
         int priority = QueryServicesOptions.DEFAULT_COPROCESSOR_PRIORITY + 100;
         ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
         HTableDescriptor descriptor = services.getTableDescriptor(Bytes.toBytes(tableName));
         descriptor.addCoprocessor(DelayingRegionObserver.class.getName(), null, priority, null);
-        HBaseAdmin admin = services.getAdmin();
-        try {
+        int numTries = 10;
+        try (HBaseAdmin admin = services.getAdmin()) {
             admin.modifyTable(Bytes.toBytes(tableName), descriptor);
-        } finally {
-            admin.close();
+            while (!admin.getTableDescriptor(Bytes.toBytes(tableName)).equals(descriptor)
+                    && numTries > 0) {
+                numTries--;
+                if (numTries == 0) {
+                    throw new Exception(
+                            "Check to detect if delaying co-processor was added failed after "
+                                    + numTries + " retries.");
+                }
+                Thread.sleep(1000);
+            }
         }
     }
 



phoenix git commit: PHOENIX-4143 ConcurrentMutationsIT flaps

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 aa2cce6c4 -> d877abe74


PHOENIX-4143 ConcurrentMutationsIT flaps


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d877abe7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d877abe7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d877abe7

Branch: refs/heads/4.x-HBase-1.2
Commit: d877abe745c1fbc1a5e2bcb80430792041e9d2d4
Parents: aa2cce6
Author: Samarth Jain 
Authored: Thu Aug 31 17:45:59 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:45:59 2017 -0700

--
 .../phoenix/end2end/ConcurrentMutationsIT.java | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d877abe7/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
index 83b9913..6d327f7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsIT.java
@@ -64,16 +64,24 @@ public class ConcurrentMutationsIT extends ParallelStatsDisabledIT {
     private final Object lock = new Object();
     private long scn = 100;
 
-    private static void addDelayingCoprocessor(Connection conn, String tableName) throws SQLException, IOException {
+    private static void addDelayingCoprocessor(Connection conn, String tableName) throws Exception {
         int priority = QueryServicesOptions.DEFAULT_COPROCESSOR_PRIORITY + 100;
         ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
         HTableDescriptor descriptor = services.getTableDescriptor(Bytes.toBytes(tableName));
         descriptor.addCoprocessor(DelayingRegionObserver.class.getName(), null, priority, null);
-        HBaseAdmin admin = services.getAdmin();
-        try {
+        int numTries = 10;
+        try (HBaseAdmin admin = services.getAdmin()) {
             admin.modifyTable(Bytes.toBytes(tableName), descriptor);
-        } finally {
-            admin.close();
+            while (!admin.getTableDescriptor(Bytes.toBytes(tableName)).equals(descriptor)
+                    && numTries > 0) {
+                numTries--;
+                if (numTries == 0) {
+                    throw new Exception(
+                            "Check to detect if delaying co-processor was added failed after "
+                                    + numTries + " retries.");
+                }
+                Thread.sleep(1000);
+            }
         }
     }
 



phoenix git commit: PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 b43dc09f0 -> ed9acd505


PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ed9acd50
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ed9acd50
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ed9acd50

Branch: refs/heads/4.x-HBase-0.98
Commit: ed9acd5051a63b439f651d89fd7c95b490d62e37
Parents: b43dc09
Author: Samarth Jain 
Authored: Thu Aug 31 17:23:11 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:23:11 2017 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   | 44 
 .../UngroupedAggregateRegionObserver.java   | 35 ++--
 .../java/org/apache/phoenix/query/BaseTest.java |  7 
 3 files changed, 32 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ed9acd50/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index dfdc28d..85cc706 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -555,10 +555,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
     private PTable buildTable(byte[] key, ImmutableBytesPtr cacheKey, HRegion region,
             long clientTimeStamp) throws IOException, SQLException {
         Scan scan = MetaDataUtil.newTableRowsScan(key, MIN_TABLE_TIMESTAMP, clientTimeStamp);
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             PTable oldTable = (PTable)metaDataCache.getIfPresent(cacheKey);
             long tableTimeStamp = oldTable == null ? MIN_TABLE_TIMESTAMP-1 : oldTable.getTimeStamp();
             PTable newTable;
@@ -580,8 +578,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             metaDataCache.put(cacheKey, newTable);
         }
         return newTable;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -598,13 +594,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PFunction> functions = new ArrayList<PFunction>();
         PFunction function = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for(int i = 0; i< keys.size(); i++) {
                 function = null;
                 function =
@@ -621,8 +614,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 functions.add(function);
             }
             return functions;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -639,13 +630,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PSchema> schemas = new ArrayList<PSchema>();
         PSchema schema = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for (int i = 0; i < keys.size(); i++) {
                 schema = null;
                 schema = getSchema(scanner, clientTimeStamp);
@@ -654,8 +642,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 schemas.add(schema);
             }
             return schemas;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -1704,14 +1690,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         // TableName systemCatalogTableName = region.getTableDesc().getTableName();
         // HTableInterface hTable = env.getTable(systemCatalogTableName);
         // These deprec
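
The recurring edit in this commit is mechanical: each manually closed RegionScanner becomes a try-with-resources, so the scanner is released on every exit path, including exceptions; that matters when preClose() is blocked waiting for outstanding scanners. A generic before/after sketch (the Scanner interface here is a stand-in, not the HBase type):

import java.io.Closeable;
import java.io.IOException;

public class ScannerCloseExample {
    // Stand-in for an HBase RegionScanner; only Closeable matters here.
    interface Scanner extends Closeable {
        boolean next() throws IOException;
    }

    // Before: cleanup lives in a separate finally block, which is easy to
    // drop or misplace when the surrounding code is refactored.
    static void scanOld(Scanner scanner) throws IOException {
        try {
            while (scanner.next()) { /* consume rows */ }
        } finally {
            scanner.close();
        }
    }

    // After: the compiler emits the finally block; close() also runs when
    // the body throws, with any close() failure recorded as suppressed.
    static void scanNew(Scanner scanner) throws IOException {
        try (Scanner s = scanner) {
            while (s.next()) { /* consume rows */ }
        }
    }
}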

phoenix git commit: PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 67800c127 -> aa2cce6c4


PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/aa2cce6c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/aa2cce6c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/aa2cce6c

Branch: refs/heads/4.x-HBase-1.2
Commit: aa2cce6c466794122b5b2ab332823cc687a7e708
Parents: 67800c1
Author: Samarth Jain 
Authored: Thu Aug 31 17:22:07 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:22:07 2017 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   | 44 
 .../UngroupedAggregateRegionObserver.java   | 35 ++--
 .../java/org/apache/phoenix/query/BaseTest.java |  7 
 3 files changed, 32 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/aa2cce6c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 4378c47..aac5619 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -556,10 +556,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
     private PTable buildTable(byte[] key, ImmutableBytesPtr cacheKey, Region region,
             long clientTimeStamp) throws IOException, SQLException {
         Scan scan = MetaDataUtil.newTableRowsScan(key, MIN_TABLE_TIMESTAMP, clientTimeStamp);
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             PTable oldTable = (PTable)metaDataCache.getIfPresent(cacheKey);
             long tableTimeStamp = oldTable == null ? MIN_TABLE_TIMESTAMP-1 : oldTable.getTimeStamp();
             PTable newTable;
@@ -581,8 +579,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             metaDataCache.put(cacheKey, newTable);
         }
         return newTable;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -599,13 +595,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PFunction> functions = new ArrayList<PFunction>();
         PFunction function = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for(int i = 0; i< keys.size(); i++) {
                 function = null;
                 function =
@@ -622,8 +615,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 functions.add(function);
             }
             return functions;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -640,13 +631,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PSchema> schemas = new ArrayList<PSchema>();
         PSchema schema = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for (int i = 0; i < keys.size(); i++) {
                 schema = null;
                 schema = getSchema(scanner, clientTimeStamp);
@@ -655,8 +643,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 schemas.add(schema);
             }
             return schemas;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -1706,14 +1692,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         // TableName systemCatalogTableName = region.getTableDesc().getTableName();
         // HTableInterface hTable = env.getTable(systemCatalogTableName);
         // These deprecate

phoenix git commit: PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 6e5f3152f -> d1ee8159c


PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d1ee8159
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d1ee8159
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d1ee8159

Branch: refs/heads/4.x-HBase-1.1
Commit: d1ee8159c3f67b158c2acd2f3a64126ad86970fd
Parents: 6e5f315
Author: Samarth Jain 
Authored: Thu Aug 31 17:22:31 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:22:31 2017 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   | 44 
 .../UngroupedAggregateRegionObserver.java   | 35 ++--
 .../java/org/apache/phoenix/query/BaseTest.java |  7 
 3 files changed, 32 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d1ee8159/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 85a6603..1de3af4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -556,10 +556,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
     private PTable buildTable(byte[] key, ImmutableBytesPtr cacheKey, Region region,
             long clientTimeStamp) throws IOException, SQLException {
         Scan scan = MetaDataUtil.newTableRowsScan(key, MIN_TABLE_TIMESTAMP, clientTimeStamp);
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             PTable oldTable = (PTable)metaDataCache.getIfPresent(cacheKey);
             long tableTimeStamp = oldTable == null ? MIN_TABLE_TIMESTAMP-1 : oldTable.getTimeStamp();
             PTable newTable;
@@ -581,8 +579,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             metaDataCache.put(cacheKey, newTable);
         }
         return newTable;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -599,13 +595,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PFunction> functions = new ArrayList<PFunction>();
         PFunction function = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for(int i = 0; i< keys.size(); i++) {
                 function = null;
                 function =
@@ -622,8 +615,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 functions.add(function);
             }
             return functions;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -640,13 +631,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PSchema> schemas = new ArrayList<PSchema>();
         PSchema schema = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for (int i = 0; i < keys.size(); i++) {
                 schema = null;
                 schema = getSchema(scanner, clientTimeStamp);
@@ -655,8 +643,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 schemas.add(schema);
             }
             return schemas;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -1706,14 +1692,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         // TableName systemCatalogTableName = region.getTableDesc().getTableName();
         // HTableInterface hTable = env.getTable(systemCatalogTableName);
         // These deprecate

phoenix git commit: PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/master 3f58452f4 -> c2e85f213


PHOENIX-4131 UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c2e85f21
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c2e85f21
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c2e85f21

Branch: refs/heads/master
Commit: c2e85f2131669c381e61cc3d6982ab66e4ed63b9
Parents: 3f58452
Author: Samarth Jain 
Authored: Thu Aug 31 17:21:36 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 17:21:36 2017 -0700

--
 .../coprocessor/MetaDataEndpointImpl.java   | 44 
 .../UngroupedAggregateRegionObserver.java   | 35 ++--
 .../java/org/apache/phoenix/query/BaseTest.java |  7 
 3 files changed, 32 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c2e85f21/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 4378c47..aac5619 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -556,10 +556,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
     private PTable buildTable(byte[] key, ImmutableBytesPtr cacheKey, Region region,
             long clientTimeStamp) throws IOException, SQLException {
         Scan scan = MetaDataUtil.newTableRowsScan(key, MIN_TABLE_TIMESTAMP, clientTimeStamp);
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             PTable oldTable = (PTable)metaDataCache.getIfPresent(cacheKey);
             long tableTimeStamp = oldTable == null ? MIN_TABLE_TIMESTAMP-1 : oldTable.getTimeStamp();
             PTable newTable;
@@ -581,8 +579,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             metaDataCache.put(cacheKey, newTable);
         }
         return newTable;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -599,13 +595,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PFunction> functions = new ArrayList<PFunction>();
         PFunction function = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for(int i = 0; i< keys.size(); i++) {
                 function = null;
                 function =
@@ -622,8 +615,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 functions.add(function);
             }
             return functions;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -640,13 +631,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         ScanRanges scanRanges = ScanRanges.createPointLookup(keyRanges);
         scanRanges.initializeScan(scan);
         scan.setFilter(scanRanges.getSkipScanFilter());
-
-        RegionScanner scanner = region.getScanner(scan);
-
         Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
         List<PSchema> schemas = new ArrayList<PSchema>();
         PSchema schema = null;
-        try {
+        try (RegionScanner scanner = region.getScanner(scan)) {
             for (int i = 0; i < keys.size(); i++) {
                 schema = null;
                 schema = getSchema(scanner, clientTimeStamp);
@@ -655,8 +643,6 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 schemas.add(schema);
             }
             return schemas;
-        } finally {
-            scanner.close();
         }
     }
 
@@ -1706,14 +1692,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
         // TableName systemCatalogTableName = region.getTableDesc().getTableName();
         // HTableInterface hTable = env.getTable(systemCatalogTableName);
         // These deprecated calls work a

phoenix git commit: Revert "PHOENIX-3815 Only disable indexes on which write failures occurred (Vincent Poon)"

2017-08-31 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 e0b1b4a01 -> 67800c127


Revert "PHOENIX-3815 Only disable indexes on which write failures occurred (Vincent Poon)"

This reverts commit 8924fb1084f17012b2673e17d6dfcac0c0e5cbaf.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/67800c12
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/67800c12
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/67800c12

Branch: refs/heads/4.x-HBase-1.2
Commit: 67800c127d55c2aa0c72f9910c8c25b8a7e5dd56
Parents: e0b1b4a
Author: James Taylor 
Authored: Thu Aug 31 16:47:38 2017 -0700
Committer: James Taylor 
Committed: Thu Aug 31 16:47:38 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java|  25 +-
 .../phoenix/hbase/index/write/IndexWriter.java  |   4 +-
 .../write/ParallelWriterIndexCommitter.java | 235 ++
 .../hbase/index/write/RecoveryIndexWriter.java  |   1 +
 .../TrackingParallelWriterIndexCommitter.java   | 245 ---
 .../recovery/StoreFailuresInCachePolicy.java|   1 -
 .../TrackingParallelWriterIndexCommitter.java   | 238 ++
 .../index/PhoenixIndexFailurePolicy.java|  15 --
 .../hbase/index/write/TestIndexWriter.java  |  89 ++-
 .../index/write/TestParalleIndexWriter.java |   4 +-
 .../write/TestParalleWriterIndexCommitter.java  |   4 +-
 11 files changed, 574 insertions(+), 287 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/67800c12/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index da8f315..f8697b1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -46,9 +46,7 @@ import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
-import org.apache.phoenix.hbase.index.write.IndexWriter;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
-import org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
@@ -173,7 +171,7 @@ public class MutableIndexFailureIT extends BaseTest {
     @Test
     public void testIndexWriteFailure() throws Exception {
         String secondIndexName = "B_" + FailingRegionObserver.FAIL_INDEX_NAME;
-        String thirdIndexName = "C_" + "IDX";
+        //String thirdIndexName = "C_" + INDEX_NAME;
         //String thirdFullIndexName = SchemaUtil.getTableName(schema, thirdIndexName);
         Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
         props.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, String.valueOf(isNamespaceMapped));
@@ -197,8 +195,8 @@ public class MutableIndexFailureIT extends BaseTest {
         // check the drop index.
         conn.createStatement().execute(
                 "CREATE "  + (!localIndex ? "LOCAL " : "") + " INDEX " + secondIndexName + " ON " + fullTableName + " (v2) INCLUDE (v1)");
-        conn.createStatement().execute(
-                "CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + thirdIndexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
+        //conn.createStatement().execute(
+        //        "CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + thirdIndexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
 
         query = "SELECT * FROM " + fullIndexName;
         rs = conn.createStatement().executeQuery(query);
@@ -248,10 +246,6 @@ public class MutableIndexFailureIT extends BaseTest {
         } else {
             String indexState = rs.getString("INDEX_STATE");
             assertTrue(PIndexState.DISABLE.toString().equals(indexState) || PIndexState.INACTIVE.toString().equals(indexState));
-            // non-failing index should remain active
-            ResultSet thirdRs = conn.createStatement().executeQuery(getSysCatQuery(thirdIndexName));
-            assertTrue(thirdRs.next());
-            assertEquals(PIndexState.ACTIVE.getSerializedValue(), thirdRs.getString(1));
         }
         assertFalse(rs.next());
 
@@ -312,7 +306,10 @@ public class MutableIndexFailureIT extends BaseTest {
         waitFor

phoenix git commit: Revert "PHOENIX-3815 Only disable indexes on which write failures occurred (Vincent Poon)"

2017-08-31 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 7d8b84302 -> 3f58452f4


Revert "PHOENIX-3815 Only disable indexes on which write failures occurred (Vincent Poon)"

This reverts commit 7cf7b3abb10fc6a1ace0cb77615f054a8a6378f7.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3f58452f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3f58452f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3f58452f

Branch: refs/heads/master
Commit: 3f58452f4995490df81b74bb40dcc4e4d2a7329e
Parents: 7d8b843
Author: James Taylor 
Authored: Thu Aug 31 16:46:40 2017 -0700
Committer: James Taylor 
Committed: Thu Aug 31 16:46:40 2017 -0700

--
 .../end2end/index/MutableIndexFailureIT.java|  25 +-
 .../phoenix/hbase/index/write/IndexWriter.java  |   4 +-
 .../write/ParallelWriterIndexCommitter.java | 235 ++
 .../hbase/index/write/RecoveryIndexWriter.java  |   1 +
 .../TrackingParallelWriterIndexCommitter.java   | 245 ---
 .../recovery/StoreFailuresInCachePolicy.java|   1 -
 .../TrackingParallelWriterIndexCommitter.java   | 238 ++
 .../index/PhoenixIndexFailurePolicy.java|  15 --
 .../hbase/index/write/TestIndexWriter.java  |  89 ++-
 .../index/write/TestParalleIndexWriter.java |   4 +-
 .../write/TestParalleWriterIndexCommitter.java  |   4 +-
 11 files changed, 574 insertions(+), 287 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3f58452f/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index 4ccd99c..a1e2b9e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -46,9 +46,7 @@ import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.execute.CommitException;
-import org.apache.phoenix.hbase.index.write.IndexWriter;
 import org.apache.phoenix.hbase.index.write.IndexWriterUtils;
-import org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter;
 import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
@@ -173,7 +171,7 @@ public class MutableIndexFailureIT extends BaseTest {
     @Test
     public void testIndexWriteFailure() throws Exception {
         String secondIndexName = "B_" + FailingRegionObserver.FAIL_INDEX_NAME;
-        String thirdIndexName = "C_" + "IDX";
+        //String thirdIndexName = "C_" + INDEX_NAME;
         //String thirdFullIndexName = SchemaUtil.getTableName(schema, thirdIndexName);
         Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
         props.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, String.valueOf(isNamespaceMapped));
@@ -197,8 +195,8 @@ public class MutableIndexFailureIT extends BaseTest {
         // check the drop index.
         conn.createStatement().execute(
                 "CREATE "  + (!localIndex ? "LOCAL " : "") + " INDEX " + secondIndexName + " ON " + fullTableName + " (v2) INCLUDE (v1)");
-        conn.createStatement().execute(
-                "CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + thirdIndexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
+        //conn.createStatement().execute(
+        //        "CREATE " + (localIndex ? "LOCAL " : "") + " INDEX " + thirdIndexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
 
         query = "SELECT * FROM " + fullIndexName;
         rs = conn.createStatement().executeQuery(query);
@@ -248,10 +246,6 @@ public class MutableIndexFailureIT extends BaseTest {
         } else {
             String indexState = rs.getString("INDEX_STATE");
             assertTrue(PIndexState.DISABLE.toString().equals(indexState) || PIndexState.INACTIVE.toString().equals(indexState));
-            // non-failing index should remain active
-            ResultSet thirdRs = conn.createStatement().executeQuery(getSysCatQuery(thirdIndexName));
-            assertTrue(thirdRs.next());
-            assertEquals(PIndexState.ACTIVE.getSerializedValue(), thirdRs.getString(1));
         }
         assertFalse(rs.next());
 
@@ -312,7 +306,10 @@ public class MutableIndexFailureIT extends BaseTest {
         waitForIndexRebuild(c

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #554

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[samarth] PHOENIX-4141 Addendum to fix test failure

--
[...truncated 100.19 KB...]
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.743 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.368 s 
- in org.apache.phoenix.tx.TransactionIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
231.093 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 299.697 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
2,096.202 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 3039, Failures: 0, Errors: 0, Skipped: 5
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (ClientManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.CreateTableIT
[INFO] Running org.apache.phoenix.end2end.DerivedTableIT
[INFO] Running org.apache.phoenix.end2end.CreateSchemaIT
[INFO] Running org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Running org.apache.phoenix.end2end.DistinctCountIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.489 s 
- in org.apache.phoenix.end2end.CreateSchemaIT
[INFO] Running org.apache.phoenix.end2end.ExtendedQueryExecIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.396 s 
- in org.apache.phoenix.end2end.ExtendedQueryExecIT
[INFO] Running org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.352 s 
- in org.apache.phoenix.end2end.CustomEntityDataIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.609 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Running org.apache.phoenix.end2end.ProductMetricsIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.303 s 
- in org.apache.phoenix.end2end.FunkyNamesIT
[INFO] Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.748 
s - in org.apache.phoenix.end2end.DerivedTableIT
[INFO] Running org.apache.phoenix.end2end.ReadIsolationLevelIT
[INFO] Running org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.618 
s - in org.apache.phoenix.end2end.DistinctCountIT
[INFO] Running org.apache.phoenix.end2end.RowValueConstructorIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.54 s - 
in org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.136 s 
- in org.apache.phoenix.end2end.ReadIsolationLevelIT
[INFO] Running org.apache.phoenix.end2end.SequenceIT
[INFO] Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.965 
s - in org.apache.phoenix.end2end.ProductMetricsIT
[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.556 
s - in org.apache.phoenix.end2end.SequenceBulkAllocationIT
[INFO] Running org.apache.phoenix.end2end.ToNumberFunctionIT
[INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.31 s 
- in org.apache.phoenix.end2end.SequenceIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.063 s 
- in org.apache.phoenix.end2end.ToNumberFunctionIT
[INFO] Running org.apache.phoenix.end2end.TruncateFunctionIT
[INFO] Running org.apache.phoenix.end2end.TopNIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.908 s 
- in org.apache.phoenix.end2end.TruncateFunctionIT
[INFO] Running org.apache.phoenix.end2end.VariableLengthPKIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.408 s 
- in org.apache.phoenix.end2end.TopNIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.785 s 
- in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheWithScnIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.216 s 
- in org.apache.phoenix.rpc.UpdateCacheWithScnIT
[INFO] Running org.apache.phoenix.end2end.UpsertValuesIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.487 
s - in org.apache.phoenix.end2end.VariableLengthPKIT
[INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.971 
s - in org.apache.phoenix.end2end.RowValueCo

phoenix git commit: Fix broken compilation due to Calcite interface change

2017-08-31 Thread maryannxue
Repository: phoenix
Updated Branches:
  refs/heads/calcite c0961ebfe -> bea429aad


Fix broken compilation due to Calcite interface change


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bea429aa
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bea429aa
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bea429aa

Branch: refs/heads/calcite
Commit: bea429aada2e68ef529ae6cf14bd63111ef8c27c
Parents: c0961eb
Author: maryannxue 
Authored: Thu Aug 31 12:15:52 2017 -0700
Committer: maryannxue 
Committed: Thu Aug 31 12:15:52 2017 -0700

--
 .../apache/phoenix/calcite/PhoenixSchema.java   | 10 +++-
 .../phoenix/calcite/PhoenixSqlConformance.java  | 24 ++--
 .../phoenix/calcite/ToExpressionTest.java   |  8 ++-
 3 files changed, 7 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bea429aa/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
index b17e0aa..4ef0898 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSchema.java
@@ -24,6 +24,7 @@ import org.apache.calcite.schema.FunctionParameter;
 import org.apache.calcite.schema.Schema;
 import org.apache.calcite.schema.SchemaFactory;
 import org.apache.calcite.schema.SchemaPlus;
+import org.apache.calcite.schema.SchemaVersion;
 import org.apache.calcite.schema.Table;
 import org.apache.calcite.schema.impl.TableFunctionImpl;
 import org.apache.calcite.schema.impl.ViewTable;
@@ -551,15 +552,10 @@ public class PhoenixSchema implements Schema {
 }
 
 @Override
-public boolean contentsHaveChangedSince(long lastCheck, long now) {
-return lastCheck != now;
-}
-
-@Override
-public Schema snapshot(long now) {
+public Schema snapshot(SchemaVersion version) {
 return new PhoenixSchema(name, schemaName, parentSchema, pc, typeFactory);
 }
-
+
 public void defineIndexesAsMaterializations(SchemaPlus parentSchema) {
 SchemaPlus schema = parentSchema.getSubSchema(this.name);
 SchemaPlus viewSqlSchema =
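
The PhoenixSchema hunk tracks the Calcite interface change directly: the old pair contentsHaveChangedSince(long, long) / snapshot(long) was collapsed into a single snapshot(SchemaVersion), so a schema now simply returns a snapshot for the caller-supplied version instead of answering staleness queries. A minimal sketch of the new shape (the caching remark is an assumption for illustration, not something PhoenixSchema does):

    import org.apache.calcite.schema.Schema;
    import org.apache.calcite.schema.SchemaVersion;

    // Post-change contract: the framework hands in the version it wants
    // and receives a snapshot. PhoenixSchema rebuilds itself wholesale; a
    // caching implementation could instead key cached state off 'version'.
    @Override
    public Schema snapshot(SchemaVersion version) {
        return new PhoenixSchema(name, schemaName, parentSchema, pc, typeFactory);
    }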

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bea429aa/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSqlConformance.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSqlConformance.java b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSqlConformance.java
index 9e45198..bf311c7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSqlConformance.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/calcite/PhoenixSqlConformance.java
@@ -17,9 +17,9 @@
  */
 package org.apache.phoenix.calcite;
 
-import org.apache.calcite.sql.validate.SqlConformance;
+import org.apache.calcite.sql.validate.SqlAbstractConformance;
 
-public class PhoenixSqlConformance implements SqlConformance {
+public class PhoenixSqlConformance extends SqlAbstractConformance {
 
 public static final PhoenixSqlConformance INSTANCE =
 new PhoenixSqlConformance();
@@ -37,11 +37,6 @@ public class PhoenixSqlConformance implements SqlConformance {
 }
 
 @Override
-public boolean isSortByAliasObscures() {
-return false;
-}
-
-@Override
 public boolean isFromRequired() {
 return false;
 }
@@ -52,16 +47,6 @@ public class PhoenixSqlConformance implements SqlConformance {
 }
 
 @Override
-public boolean isMinusAllowed() {
-return false;
-}
-
-@Override
-public boolean isApplyAllowed() {
-return false;
-}
-
-@Override
 public boolean isInsertSubsetColumnsAllowed() {
 return true;
 }
@@ -90,9 +75,4 @@ public class PhoenixSqlConformance implements SqlConformance {
 public boolean allowExtend() {
 return true;
 }
-
-@Override
-public boolean isLimitStartCountAllowed() {
-return false;
-}
 }
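
All of the PhoenixSqlConformance hunks follow from one change: the class now extends SqlAbstractConformance instead of implementing SqlConformance directly, so every override that merely restated a default answer (isSortByAliasObscures, isMinusAllowed, isApplyAllowed, isLimitStartCountAllowed) could be deleted. A sketch of the idiom, assuming Calcite's abstract base supplies the defaults (class name invented for illustration):

    import org.apache.calcite.sql.validate.SqlAbstractConformance;

    // Only answers that differ from Calcite's defaults need stating;
    // everything else is inherited, which is what let the all-false
    // overrides above be removed wholesale.
    public class MinimalConformance extends SqlAbstractConformance {
        @Override
        public boolean isInsertSubsetColumnsAllowed() {
            return true; // non-default answer, so it stays
        }
    }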

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bea429aa/phoenix-core/src/test/java/org/apache/phoenix/calcite/ToExpressionTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/calcite/ToExpressionTest.java b/phoenix-core/src/test/java/org/apache/phoenix/calcite/ToExpressionTest.java
index 5bfc77b..648cd37 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/calcite/ToExpressionTest.java
+++ b/phoenix-core/src/test/java/

phoenix git commit: PHOENIX-4141 Addendum to fix test failure

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 8924fb108 -> e0b1b4a01


PHOENIX-4141 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e0b1b4a0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e0b1b4a0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e0b1b4a0

Branch: refs/heads/4.x-HBase-1.2
Commit: e0b1b4a01058f2f736711cf0bfe1805e04f782d3
Parents: 8924fb1
Author: Samarth Jain 
Authored: Thu Aug 31 10:22:54 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 10:22:54 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e0b1b4a0/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index 591f028..39d97a1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -65,6 +65,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 
 private static List<List<Object>> result;
 private String tableName;
+
 private MyClock clock;
 
 @Before
@@ -90,7 +91,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getRandomDir();
 
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
 tmpDir, null, FIELD1, FIELD2, FIELD3);
@@ -110,7 +111,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getRandomDir();
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
 tmpDir, FIELD3 + " > 0001", FIELD1, FIELD2, FIELD3);
 
@@ -130,7 +131,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getRandomDir();
 // Running limit with order by on non pk column
 String inputQuery = "SELECT * FROM " + tableName + " ORDER BY FIELD2 LIMIT 1";
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
@@ -156,6 +157,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // verify the result, should match the values at the corresponding timestamp
 Properties props = new Properties();
 props.setProperty("CurrentSCN", Long.toString(clock.time));
+
 StringBuilder selectQuery = new StringBuilder("SELECT * FROM " + tableName);
 if (condition != null) {
 selectQuery.append(" WHERE " + condition);
@@ -178,7 +180,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 }
 
 assertFalse(
-"Should only have stored " + result.size() + "rows in the table for the timestamp!",
+"Should only have stored" + result.size() + "rows in the table for the timestamp!",
 rs.next());
 } finally {
 deleteSnapshotAndTable(tableName);
@@ -240,6 +242,10 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 Connection conn = DriverManager.getConnection(getUrl());
 HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
 admin.deleteSnapshot(SNAPSHOT_NAME);
+
+conn.createStatement().execute("DROP TABLE " + tableName);
+conn.close();
+
 }
 
 public static class TableSnapshotMapper extends
@@ -257,4 +263,4 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 }
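
The substance of this addendum: each M/R job now restores the snapshot into a freshly generated directory rather than a fixed per-snapshot path, so repeated or concurrent runs cannot collide on the restore directory. Condensed, the configuration pattern is (a sketch using only calls that appear in the diff above):

    // A unique temp dir per invocation keeps parallel test runs from
    // racing on the snapshot-restore path.
    final Configuration conf = getUtility().getConfiguration();
    Job job = Job.getInstance(conf);
    Path tmpDir = getUtility().getRandomDir();   // fresh directory every call
    PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class,
            SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);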

phoenix git commit: PHOENIX-4141 Addendum to fix test failure

2017-08-31 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 2d4daa629 -> 6e5f3152f


PHOENIX-4141 Addendum to fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6e5f3152
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6e5f3152
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6e5f3152

Branch: refs/heads/4.x-HBase-1.1
Commit: 6e5f3152f6940af714088ff4ab344690d22015ab
Parents: 2d4daa6
Author: Samarth Jain 
Authored: Thu Aug 31 10:13:27 2017 -0700
Committer: Samarth Jain 
Committed: Thu Aug 31 10:13:37 2017 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 42 +++-
 1 file changed, 23 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e5f3152/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index 591f028..92a2bda 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -70,7 +70,6 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 @Before
 public void injectMyClock() {
 clock = new MyClock(1000);
-// Use our own clock to prevent race between partial rebuilder and compaction
 EnvironmentEdgeManager.injectEdge(clock);
 }
 
@@ -90,7 +89,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getDataTestDir(SNAPSHOT_NAME);
 
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
 tmpDir, null, FIELD1, FIELD2, FIELD3);
@@ -110,7 +109,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getDataTestDir(SNAPSHOT_NAME);
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
 tmpDir, FIELD3 + " > 0001", FIELD1, FIELD2, FIELD3);
 
@@ -130,7 +129,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // configure Phoenix M/R job to read snapshot
 final Configuration conf = getUtility().getConfiguration();
 Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getDataTestDirOnTestFS(SNAPSHOT_NAME);
+Path tmpDir = getUtility().getDataTestDir(SNAPSHOT_NAME);
 // Running limit with order by on non pk column
 String inputQuery = "SELECT * FROM " + tableName + " ORDER BY FIELD2 LIMIT 1";
 PhoenixMapReduceUtil.setInput(job, PhoenixIndexDBWritable.class, SNAPSHOT_NAME, tableName,
@@ -156,6 +155,7 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 // verify the result, should match the values at the corresponding timestamp
 Properties props = new Properties();
 props.setProperty("CurrentSCN", Long.toString(clock.time));
+
 StringBuilder selectQuery = new StringBuilder("SELECT * FROM " + tableName);
 if (condition != null) {
 selectQuery.append(" WHERE " + condition);
@@ -178,26 +178,13 @@ public class TableSnapshotReadsMapReduceIT extends ParallelStatsDisabledIT {
 }
 
 assertFalse(
-"Should only have stored " + result.size() + "rows in the table for the timestamp!",
+"Should only have stored" + result.size() + "rows in the table for the timestamp!",
 rs.next());
 } finally {
 deleteSnapshotAndTable(tableName);
 }
 }
 
-private static class MyClock extends EnvironmentEdge {
-public volatile long time;
-
-public MyClock(long time) {
-this.time = time;
-}
-
-@Override
-public long currentTime() {
-return time;
-}
-}
-
 private void upsertData(String tableName) throws SQLException {
 Connection conn = DriverMa
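
The MyClock class deleted above is Phoenix's controllable-clock test idiom: inject an EnvironmentEdge so that anything asking EnvironmentEdgeManager for the time sees a test-owned value rather than the wall clock. A self-contained sketch of the pattern (imports assume Phoenix's org.apache.phoenix.util variants, as used by this test; the usage block is illustrative):

    import org.apache.phoenix.util.EnvironmentEdge;
    import org.apache.phoenix.util.EnvironmentEdgeManager;

    public class ClockInjectionSketch {
        // A controllable clock: volatile so the test thread's updates are
        // visible to whatever code reads the time.
        private static class MyClock extends EnvironmentEdge {
            public volatile long time;
            public MyClock(long time) { this.time = time; }
            @Override
            public long currentTime() { return time; }
        }

        public static void main(String[] args) {
            MyClock clock = new MyClock(1000);
            EnvironmentEdgeManager.injectEdge(clock);
            try {
                clock.time += 1000; // advance time deterministically
                System.out.println(EnvironmentEdgeManager.currentTimeMillis());
            } finally {
                EnvironmentEdgeManager.reset(); // restore the real clock
            }
        }
    }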

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #553

2017-08-31 Thread Apache Jenkins Server
See 


Changes:

[jtaylor] Revert "PHOENIX-3815 Only disable indexes on which write failures

--
[...truncated 99.11 KB...]
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 153.453 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitForwardScanIT
[INFO] Running org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 325.593 
s - in org.apache.phoenix.end2end.index.DropColumnIT
[INFO] Running org.apache.phoenix.end2end.index.SaltedIndexIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.492 s 
- in org.apache.phoenix.end2end.index.SaltedIndexIT
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.309 s 
- in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 159.974 
s - in org.apache.phoenix.end2end.index.MutableIndexSplitReverseScanIT
[INFO] Running org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Tests run: 67, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 476.324 
s - in org.apache.phoenix.end2end.index.IndexExpressionIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.705 s 
- in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,144.992 s - in org.apache.phoenix.end2end.SortMergeJoinIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.676 s 
- in org.apache.phoenix.end2end.index.txn.RollbackIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.266 s 
- in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.167 s 
- in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.658 s 
- in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.305 s 
- in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.692 s 
- in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.331 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.246 s 
- in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.828 s 
- in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 497.911 
s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.668 s 
- in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.183 s 
- in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.74 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
225.321 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 292.044 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 304, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
2,088.489 s - in org.apache.phoenix.end2end.index.IndexIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshotWithLimit:140->configureJob:154
[ERROR]   TableSnapshotReadsMapReduceIT.testMapReduceSnapshots:99->configureJob:154
[ERROR]

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #392

2017-08-31 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on qnode3 (ubuntu) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins3748959351702264559.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 128341
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
core id : 6
core id : 7
physical id : 0
MemTotal:   32865152 kB
MemFree:10523404 kB
Filesystem  Size  Used Avail Use% Mounted on
none 16G 0   16G   0% /dev
tmpfs   3.2G  343M  2.8G  11% /run
/dev/nbd046G   29G   16G  65% /
tmpfs16G 0   16G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs16G 0   16G   0% /sys/fs/cgroup
/dev/sda1   235G  123G  101G  55% /home
tmpfs   3.2G 0  3.2G   0% /run/user/9997
tmpfs   3.2G 0  3.2G   0% /run/user/999
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.

main:
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common
 [exec] 
~/jenkins-slave/workspace/Phoenix_Compile_Compat_wHBase/hbase/hbase-common

main:
[mkdir] Created dir: 

 [exec] tar: hadoop-snappy-nativelibs.tar: Cannot open: No such file or 
directory
 [exec] tar: Error is not recoverable: exiting now
 [exec] Result: 2

main:
[mkdir] Created dir: 

 [copy] Copying 20 files to 

[mkdir] Created dir: 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 17 files to 

[mkdir] Created dir: 


main:
[mkdir] Created dir: 

 [copy] Copying 1 file to 

[mkdir] Created dir: 


HBase pom.xml:

Got HBase version as 0.98.25-SNAPSHOT
Cloning into 'phoenix'...
Switched to a new branch '4.x-HBase-0.98'
Branch 4.x-HBase-0.98 set up to track remote branch 4.x-HBase-0.98 from origin.
ANTLR Parser Generator  Version 3.5.2
Output file 

 does not exist: must build 

PhoenixSQL.g


===
Verify