See <https://builds.apache.org/job/Phoenix-4.0/368/changes>

Changes:

[jtaylor] PHOENIX-1328 Update ANALYZE syntax to collect stats on index tables and all tables (ramkrishna.s.vasudevan)

------------------------------------------
[...truncated 2502 lines...]
        at org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:77)
        at org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:426)
        at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:164)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Join.OrderTable,,1412797740998.9c8a321c8166832a0f8609b832f8014f.: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:83)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:51)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:158)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1845)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3092)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:81)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:100)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:106)
        at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:365)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:400)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:161)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:140)
        ... 8 more

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
        at org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:57)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:583)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:578)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Join.OrderTable,,1412797740998.9c8a321c8166832a0f8609b832f8014f.: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:83)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:51)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:158)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1845)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3092)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:81)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:100)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:106)
        at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:365)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:400)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:161)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:140)
        ... 8 more

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:285)
        at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:316)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:164)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:282)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:187)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:182)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:109)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:738)
        at org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:54)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:583)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:578)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.DoNotRetryIOException: Join.OrderTable,,1412797740998.9c8a321c8166832a0f8609b832f8014f.: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:83)
        at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:51)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:158)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1845)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3092)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
        at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 21196 bytes could not be allocated from remaining memory of 23752 bytes from global pool of 40000 bytes after waiting for 0ms.
        at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:81)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:100)
        at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:106)
        at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.<init>(SpillableGroupByCache.java:150)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver$GroupByCacheFactory.newCache(GroupedAggregateRegionObserver.java:365)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:400)
        at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:161)
        at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:140)
        ... 8 more

        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1452)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1656)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1714)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:29900)
        at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:308)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:164)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:282)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:187)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:182)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:109)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:738)
        at org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:54)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:583)
        at org.apache.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:578)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
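
All of the errors above share a single root cause: GroupedAggregateRegionObserver tries to reserve memory for a SpillableGroupByCache, and the GlobalMemoryManager rejects the request because the 40000-byte global pool cannot satisfy it within the 0ms it is willing to wait. As a rough illustration of those failure semantics only, here is a minimal sketch of a bounded pool that fails fast once its wait budget expires. This is an assumption-laden stand-in, not Phoenix's actual GlobalMemoryManager; the class and method shapes below are invented for the example.

import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of a bounded global memory pool, in the spirit of the
 * GlobalMemoryManager frames in the trace above. Illustrates the failure
 * semantics ("could not be allocated ... after waiting for 0ms"); it is
 * NOT Phoenix's implementation.
 */
public class BoundedMemoryPool {
    private final long maxBytes;   // e.g. 40000 in the failing build
    private long usedBytes;

    public BoundedMemoryPool(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Reserve nBytes, waiting up to waitMs for other users to free memory. */
    public synchronized void allocate(long nBytes, long waitMs)
            throws InsufficientMemoryException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitMs;
        while (maxBytes - usedBytes < nBytes) {
            long remainingWait = deadline - System.currentTimeMillis();
            if (remainingWait <= 0) {
                // Fail fast instead of blocking indefinitely; mirrors the
                // wording of the error message in the trace above.
                throw new InsufficientMemoryException(
                    "Requested memory of " + nBytes
                    + " bytes could not be allocated from remaining memory of "
                    + (maxBytes - usedBytes) + " bytes from global pool of "
                    + maxBytes + " bytes after waiting for " + waitMs + "ms.");
            }
            wait(remainingWait);   // woken by free()
        }
        usedBytes += nBytes;
    }

    /** Return nBytes to the pool and wake up blocked allocators. */
    public synchronized void free(long nBytes) {
        usedBytes = Math.max(0, usedBytes - nBytes);
        notifyAll();
    }

    /** Stand-in for org.apache.phoenix.memory.InsufficientMemoryException. */
    public static class InsufficientMemoryException extends Exception {
        public InsufficientMemoryException(String msg) { super(msg); }
    }
}

The design point this mirrors is that allocation failure is an exception rather than an indefinite block: with a 0ms wait budget, any transient memory pressure surfaces immediately as InsufficientMemoryException, which the coprocessor then wraps into the DoNotRetryIOException seen above.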

Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.046 sec - in org.apache.phoenix.end2end.UpsertBigValuesIT
Running org.apache.phoenix.end2end.SortOrderFIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.043 sec - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.069 sec - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.899 sec - in org.apache.phoenix.end2end.InListIT
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.647 sec - in org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.886 sec - in org.apache.phoenix.end2end.SortOrderFIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.1 sec - in org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.LastValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.849 sec - in org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.329 sec - in org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.LpadFunctionIT
Running org.apache.phoenix.end2end.RoundFloorCeilFunctionsEnd2EndIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.613 sec - in org.apache.phoenix.end2end.LastValueFunctionIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.715 sec - in org.apache.phoenix.end2end.LpadFunctionIT
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.738 sec - in org.apache.phoenix.end2end.RoundFloorCeilFunctionsEnd2EndIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.529 sec - in org.apache.phoenix.end2end.QueryMoreIT
Tests run: 96, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.886 sec - in org.apache.phoenix.end2end.HashJoinIT

Results :

Tests in error: 
  LocalIndexIT.testLocalIndexScanJoinColumnsFromDataTable:423 » PhoenixIO org.ap...
  SubqueryIT.testComparisonSubquery:891 » SQL Encountered exception in sub plan ...
  SubqueryIT.testInSubquery:722 » SQL Encountered exception in sub plan [0] exec...
  SubqueryIT.testExistsSubquery:825 » SQL Encountered exception in sub plan [0] ...
  SubqueryIT.testComparisonSubquery:891 » SQL Encountered exception in sub plan ...
  SubqueryIT.testInSubquery:722 » SQL Encountered exception in sub plan [0] exec...
  SubqueryIT.testExistsSubquery:825 » SQL Encountered exception in sub plan [0] ...
  SubqueryIT.testComparisonSubquery:891 » SQL Encountered exception in sub plan ...
  SubqueryIT.testInSubquery:722 » SQL Encountered exception in sub plan [0] exec...
  SubqueryIT.testExistsSubquery:825 » SQL Encountered exception in sub plan [0] ...

Tests run: 511, Failures: 0, Errors: 10, Skipped: 1
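
Every error in this list is the same InsufficientMemoryException surfacing through different tests: LocalIndexIT directly, and SubqueryIT via a failed hash-join sub plan. The 40000-byte global pool strongly suggests the test configuration pins Phoenix's memory manager deliberately low to exercise memory-pressure paths. For anyone reproducing this outside the test harness, here is a hedged sketch of raising the cap on the client side; the property name phoenix.query.maxGlobalMemorySize and the JDBC URL are assumptions here, and on a real cluster the server-side pool used by the coprocessors would be sized in hbase-site.xml on the region servers, not via connection properties.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixMemoryConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed property name; the 40000-byte "global pool" in the trace
        // suggests the build's test config sets this to a tiny value.
        props.setProperty("phoenix.query.maxGlobalMemorySize", "1048576");
        // Hypothetical ZooKeeper quorum; adjust for your environment.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println("Connected with a 1 MB client-side memory pool cap");
        }
    }
}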

[INFO] [failsafe:integration-test {execution: NeedTheirOwnClusterTests}]
[INFO] Failsafe report directory: <https://builds.apache.org/job/Phoenix-4.0/ws/phoenix-core/target/failsafe-reports>
[INFO] parallel='none', perCoreThreadCount=true, threadCount=0, useUnlimitedThreads=false, threadCountSuites=0, threadCountClasses=0, threadCountMethods=0, parallelOptimized=true

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Running org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Running org.apache.phoenix.hbase.index.balancer.IndexLoadBalancerIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.458 sec - in org.apache.phoenix.hbase.index.covered.EndToEndCoveredColumnsIndexBuilderIT
Running org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.784 sec - in org.apache.phoenix.hbase.index.covered.example.FailWithoutRetriesIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.779 sec - in org.apache.phoenix.hbase.index.FailForUnsupportedHBaseVersionsIT
Running org.apache.phoenix.end2end.KeyOnlyIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.721 sec - in org.apache.phoenix.end2end.KeyOnlyIT
Running org.apache.phoenix.end2end.ParallelIteratorsIT
Running org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.522 sec - in org.apache.phoenix.hbase.index.covered.example.EndToEndCoveredIndexingIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.165 sec - in org.apache.phoenix.end2end.ParallelIteratorsIT
Running org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.883 sec - in org.apache.phoenix.hbase.index.covered.example.EndtoEndIndexingWithCompressionIT
Running org.apache.phoenix.end2end.index.DropIndexDuringUpsertIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.516 sec - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
Running org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.01 sec - in org.apache.phoenix.end2end.ContextClassloaderIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.612 sec - in org.apache.phoenix.end2end.index.MutableIndexReplicationIT
Running org.apache.phoenix.end2end.StatsCollectorIT
Running org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.679 sec - in org.apache.phoenix.end2end.StatsCollectorIT
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.526 sec - in org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
Running org.apache.phoenix.end2end.MultiCfQueryExecIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.425 sec - in org.apache.phoenix.hbase.index.balancer.IndexLoadBalancerIT
Running org.apache.phoenix.mapreduce.CsvBulkLoadToolIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.681 sec - in org.apache.phoenix.end2end.MultiCfQueryExecIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.593 sec - in org.apache.phoenix.end2end.index.DropIndexDuringUpsertIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.177 sec - in org.apache.phoenix.end2end.index.MutableIndexFailureIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.658 sec - in org.apache.phoenix.mapreduce.CsvBulkLoadToolIT

Results :

Tests run: 93, Failures: 0, Errors: 0, Skipped: 0

[INFO] [failsafe:verify {execution: ClientManagedTimeTests}]
[INFO] Failsafe report directory: <https://builds.apache.org/job/Phoenix-4.0/ws/phoenix-core/target/failsafe-reports>
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] There are test failures.

Please refer to <https://builds.apache.org/job/Phoenix-4.0/ws/phoenix-core/target/failsafe-reports> for the individual test results.
[INFO] ------------------------------------------------------------------------
[INFO] For more information, run Maven with the -e switch
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15 minutes 2 seconds
[INFO] Finished at: Wed Oct 08 19:55:55 UTC 2014
[INFO] Final Memory: 101M/930M
[INFO] ------------------------------------------------------------------------
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Sending artifact delta relative to Phoenix | 4.0 #367
Archived 725 artifacts
Archive block size is 32768
Received 5557 blocks and 397142481 bytes
Compression is 31.4%
Took 3 min 29 sec
Recording test results
