See <https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/685/display/redirect?page=changes>

Changes:

[abhishek.chouhan] PHOENIX-5529 Creating a grand-child view on a table with an index fails


------------------------------------------
[...truncated 141.09 KB...]
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 27 more


[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.628 s - in org.apache.phoenix.end2end.DropIndexedColsIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.917 s - in org.apache.phoenix.end2end.TenantSpecificViewIndexSaltedIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 146.592 s - in org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT
[ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 134.26 s <<< FAILURE! - in org.apache.phoenix.end2end.TenantSpecificViewIndexIT
[ERROR] testMultiCFViewLocalIndex(org.apache.phoenix.end2end.TenantSpecificViewIndexIT)  Time elapsed: 10.767 s  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA2.N000002: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2106)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8350)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2170)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2152)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2376)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
        at org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:124)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:197)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1762)
        ... 9 more
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:228)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:202)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1298)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1197)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 18 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:139)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:179)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
        at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1711)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:55)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1227)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1194)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 27 more

        at org.apache.phoenix.end2end.TenantSpecificViewIndexIT.createViewAndIndexesWithTenantId(TenantSpecificViewIndexIT.java:160)
        at org.apache.phoenix.end2end.TenantSpecificViewIndexIT.testMultiCFViewIndex(TenantSpecificViewIndexIT.java:127)
        at org.apache.phoenix.end2end.TenantSpecificViewIndexIT.testMultiCFViewLocalIndex(TenantSpecificViewIndexIT.java:92)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA2.N000002: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2106)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8350)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2170)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2152)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2376)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
        at org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:124)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:197)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1762)
        ... 9 more
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:228)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:202)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1298)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1197)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 18 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:139)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:179)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
        at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1711)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:55)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1227)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1194)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 27 more

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: org.apache.hadoop.hbase.DoNotRetryIOException: SCHEMA2.N000002: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2106)
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8350)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2170)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2152)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2376)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:164)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:159)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
        at org.apache.hadoop.hbase.client.HTableWrapper.getScanner(HTableWrapper.java:215)
        at org.apache.phoenix.util.ViewUtil.findRelatedViews(ViewUtil.java:124)
        at org.apache.phoenix.util.ViewUtil.dropChildViews(ViewUtil.java:197)
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1762)
        ... 9 more
Caused by: java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:220)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.loadCache(ClientSmallReversedScanner.java:228)
        at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:202)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1298)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1197)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 18 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:128)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:139)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:179)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:153)
        at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1711)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:55)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1227)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1194)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:303)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
        ... 27 more


[INFO] Running org.apache.phoenix.end2end.ViewMetadataIT
[INFO] Running org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 508.739 s - in org.apache.phoenix.end2end.AlterTableWithViewsIT
[WARNING] Tests run: 24, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 267.109 s - in org.apache.phoenix.end2end.index.ViewIndexIT
[INFO] Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 352.35 s - in org.apache.phoenix.end2end.ViewMetadataIT
[INFO] Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 582.672 s - in org.apache.phoenix.end2end.ViewIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   DropTableWithViewsIT.testDropTableWithChildViews:127 » PhoenixIO org.apache.ha...
[ERROR]   TenantSpecificViewIndexIT.testMultiCFViewLocalIndex:92->testMultiCFViewIndex:127->createViewAndIndexesWithTenantId:160 » PhoenixIO
[INFO] 
[ERROR] Tests run: 218, Failures: 0, Errors: 2, Skipped: 2
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.22.0:verify (ParallelStatsEnabledTest) @ phoenix-core ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Phoenix 4.16.0-HBase-1.3-SNAPSHOT:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  2.872 s]
[INFO] Phoenix Core ....................................... FAILURE [  03:18 h]
[INFO] Phoenix - Pherf .................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  03:18 h
[INFO] Finished at: 2020-02-15T00:21:04Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-failsafe-plugin:2.22.0:verify (ParallelStatsEnabledTest) on project phoenix-core: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/ws/phoenix-core/target/failsafe-reports> for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :phoenix-core
Build step 'Invoke top-level Maven targets' marked build as failure
[JIRA] Updating issue PHOENIX-5529
Recording test results
Not sending mail to unregistered user Rajeshbabu Chintaguntla
Not sending mail to unregistered user s.ka...@salesforce.com
Not sending mail to unregistered user abhishek.chou...@salesforce.com
