See <https://builds.apache.org/job/Tajo-trunk-nightly/279/changes>
Changes:
[hyunsik] TAJO-609: PlannerUtil::getRelationLineage ignores PartitionedTableScanNode.
[hyunsik] TAJO-601: Improve distinct aggregation query processing.
[jhjung] TAJO-575: Worker's env.jsp has wrong URL which go to worker's index.jsp. (hyoungjunkim via jaehwa)
[jhjung] TAJO-578: Update configuration for tajo-site.xml. (jaehwa)
[hyunsik] TAJO-610: Refactor Column class.
------------------------------------------
[...truncated 43231 lines...]
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:509)
java.util.TimerThread.run(Timer.java:462)
Thread 4 (Signal Dispatcher):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
Thread 3 (Finalizer):
State: WAITING
Blocked count: 40
Waited count: 21
Waiting on java.lang.ref.ReferenceQueue$Lock@9e8c34
Stack:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
Thread 2 (Reference Handler):
State: WAITING
Blocked count: 23
Waited count: 23
Waiting on java.lang.ref.Reference$Lock@106df95
Stack:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
Thread 1 (main):
State: TIMED_WAITING
Blocked count: 1195
Waited count: 2687
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.tajo.client.TajoClient.getQueryResultAndWait(TajoClient.java:261)
org.apache.tajo.client.TajoClient.executeQueryAndGetResult(TajoClient.java:177)
org.apache.tajo.LocalTajoTestingUtility.execute(LocalTajoTestingUtility.java:110)
org.apache.tajo.TpchTestBase.execute(TpchTestBase.java:103)
org.apache.tajo.QueryTestCaseBase.executeFile(QueryTestCaseBase.java:207)
org.apache.tajo.QueryTestCaseBase.executeQuery(QueryTestCaseBase.java:193)
org.apache.tajo.engine.query.TestUnionQuery.testUnion10(TestUnionQuery.java:125)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
org.junit.rules.RunRules.evaluate(RunRules.java:18)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
2014-02-21 02:56:54,860 INFO worker.TaskAttemptContext (TaskAttemptContext.java:setState(106)) - Query status of ta_1392951319112_0118_000006_000000_00 is changed to TA_SUCCEEDED
2014-02-21 02:56:54,861 WARN worker.Task (Task.java:run(658)) - Last retry, killing
2014-02-21 02:56:54,861 INFO worker.Task (Task.java:run(436)) - Task Counter - total:263, succeeded: 263, killed: 0, failed: 0
2014-02-21 02:56:54,861 INFO worker.TaskRunner (TaskRunner.java:run(333)) - Request GetTask: eb_1392951319112_0118_000006,container_1392951319112_0118_01_000253
2014-02-21 02:56:54,862 INFO worker.TajoWorker (TajoWorker.java:run(678)) - ============================================
2014-02-21 02:56:54,862 INFO master.DefaultTaskScheduler (DefaultTaskScheduler.java:handle(262)) - TaskRequest: container_1392951319112_0118_01_000253,eb_1392951319112_0118_000006
2014-02-21 02:56:54,862 INFO worker.TajoWorker (TajoWorker.java:run(679)) - TajoWorker received SIGINT Signal
2014-02-21 02:56:54,862 INFO querymaster.SubQuery (SubQuery.java:transition(975)) - [eb_1392951319112_0118_000006] Task Completion Event (Total: 1, Success: 1, Killed: 0, Failed: 0
2014-02-21 02:56:54,863 INFO querymaster.SubQuery (SubQuery.java:transition(1015)) - subQuery completed - eb_1392951319112_0118_000006 (total=1, success=1, killed=0)
2014-02-21 02:56:54,863 INFO master.DefaultTaskScheduler (DefaultTaskScheduler.java:stop(146)) - Task Scheduler stopped
2014-02-21 02:56:54,863 INFO master.DefaultTaskScheduler (DefaultTaskScheduler.java:run(105)) - TaskScheduler schedulingThread stopped
2014-02-21 02:56:54,863 INFO worker.TaskRunner (TaskRunner.java:run(363)) - Received ShouldDie flag:eb_1392951319112_0118_000006,container_1392951319112_0118_01_000253
2014-02-21 02:56:54,864 INFO worker.TaskRunner (TaskRunner.java:stop(227)) - Stop TaskRunner: eb_1392951319112_0118_000006
2014-02-21 02:56:54,864 INFO worker.TaskRunnerManager (TaskRunnerManager.java:stopTask(89)) - Stop Task:eb_1392951319112_0118_000006,container_1392951319112_0118_01_000253
2014-02-21 02:56:54,863 INFO worker.TajoWorker (TajoWorker.java:run(680)) - ============================================
2014-02-21 02:56:54,864 INFO worker.TajoWorker (TajoWorker.java:run(652)) - Worker Resource Heartbeat Thread stopped.
2014-02-21 02:56:54,866 INFO querymaster.Query (Query.java:handle(641)) - Processing q_1392951319112_0118 of type SUBQUERY_COMPLETED
2014-02-21 02:56:54,867 INFO querymaster.SubQuery (SubQuery.java:calculateShuffleOutputNum(712)) - ============>>>>> Unexpected Case! <<<<<================
2014-02-21 02:56:54,867 INFO worker.TajoResourceAllocator (TajoResourceAllocator.java:run(188)) - ContainerProxy stopped:container_1392951319112_0118_01_000253,eb_1392951319112_0118_000006
2014-02-21 02:56:54,867 INFO master.ContainerProxy (TajoContainerProxy.java:stopContainer(113)) - Release TajoWorker Resource: eb_1392951319112_0118_000006,container_1392951319112_0118_01_000253, state:RUNNING
2014-02-21 02:56:54,867 INFO querymaster.SubQuery (SubQuery.java:calculateShuffleOutputNum(716)) - Table's volume is approximately 1 MB
2014-02-21 02:56:54,868 INFO querymaster.SubQuery (SubQuery.java:calculateShuffleOutputNum(719)) - The determined number of partitions is 1
2014-02-21 02:56:54,868 INFO querymaster.SubQuery (SubQuery.java:initTaskScheduler(611)) - org.apache.tajo.master.DefaultTaskScheduler is chosen for the task scheduling
2014-02-21 02:56:54,868 INFO querymaster.SubQuery (SubQuery.java:getNonLeafTaskNum(748)) - Table's volume is approximately 1 MB
2014-02-21 02:56:54,868 INFO querymaster.SubQuery (SubQuery.java:getNonLeafTaskNum(751)) - The determined number of non-leaf tasks is 1
2014-02-21 02:56:54,869 INFO querymaster.SubQuery (SubQuery.java:transition(581)) - 1 objects are scheduled
2014-02-21 02:56:54,869 INFO master.DefaultTaskScheduler (DefaultTaskScheduler.java:start(90)) - Start TaskScheduler
2014-02-21 02:56:54,869 INFO querymaster.SubQuery (SubQuery.java:allocateContainers(793)) - Request Container for eb_1392951319112_0118_000007 containers=1
2014-02-21 02:56:54,870 INFO querymaster.Query (Query.java:executeNextBlock(562)) - Scheduling SubQuery:eb_1392951319112_0118_000007
2014-02-21 02:56:54,870 INFO worker.TajoResourceAllocator (TajoResourceAllocator.java:run(208)) - Start TajoWorkerAllocationThread
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:115)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:100)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.register(NioClientBoss.java:42)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:121)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.channel.Channels.connect(Channels.java:634)
    at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.apache.tajo.rpc.NettyClientBase.connect(NettyClientBase.java:65)
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:57)
    at org.apache.tajo.rpc.AsyncRpcClient.<init>(AsyncRpcClient.java:77)
    at org.apache.tajo.rpc.RpcConnectionPool.makeConnection(RpcConnectionPool.java:64)
    at org.apache.tajo.rpc.RpcConnectionPool.getConnection(RpcConnectionPool.java:78)
    at org.apache.tajo.master.TajoContainerProxy.releaseWorkerResource(TajoContainerProxy.java:160)
    at org.apache.tajo.master.TajoContainerProxy.releaseWorkerResource(TajoContainerProxy.java:144)
    at org.apache.tajo.master.TajoContainerProxy.stopContainer(TajoContainerProxy.java:123)
    at org.apache.tajo.worker.TajoResourceAllocator$StopContainerRunner.run(TajoResourceAllocator.java:189)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2014-02-21 02:56:54,876 ERROR master.ContainerProxy (TajoContainerProxy.java:releaseWorkerResource(170)) - java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
java.io.IOException: java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:60)
    at org.apache.tajo.rpc.AsyncRpcClient.<init>(AsyncRpcClient.java:77)
    at org.apache.tajo.rpc.RpcConnectionPool.makeConnection(RpcConnectionPool.java:64)
    at org.apache.tajo.rpc.RpcConnectionPool.getConnection(RpcConnectionPool.java:78)
    at org.apache.tajo.master.TajoContainerProxy.releaseWorkerResource(TajoContainerProxy.java:160)
    at org.apache.tajo.master.TajoContainerProxy.releaseWorkerResource(TajoContainerProxy.java:144)
    at org.apache.tajo.master.TajoContainerProxy.stopContainer(TajoContainerProxy.java:123)
    at org.apache.tajo.worker.TajoResourceAllocator$StopContainerRunner.run(TajoResourceAllocator.java:189)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:115)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:100)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.register(NioClientBoss.java:42)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:121)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.channel.Channels.connect(Channels.java:634)
    at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.apache.tajo.rpc.NettyClientBase.connect(NettyClientBase.java:65)
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:57)
    ... 13 more
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:115)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:100)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.register(NioClientBoss.java:42)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:121)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.channel.Channels.connect(Channels.java:634)
    at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.apache.tajo.rpc.NettyClientBase.connect(NettyClientBase.java:65)
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:57)
    at org.apache.tajo.rpc.AsyncRpcClient.<init>(AsyncRpcClient.java:77)
    at org.apache.tajo.rpc.RpcConnectionPool.makeConnection(RpcConnectionPool.java:64)
    at org.apache.tajo.rpc.RpcConnectionPool.getConnection(RpcConnectionPool.java:78)
    at org.apache.tajo.worker.TajoResourceAllocator$TajoWorkerAllocationThread.run(TajoResourceAllocator.java:230)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2014-02-21 02:56:54,878 ERROR worker.TajoResourceAllocator (TajoResourceAllocator.java:run(236)) - java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
java.io.IOException: java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:60)
    at org.apache.tajo.rpc.AsyncRpcClient.<init>(AsyncRpcClient.java:77)
    at org.apache.tajo.rpc.RpcConnectionPool.makeConnection(RpcConnectionPool.java:64)
    at org.apache.tajo.rpc.RpcConnectionPool.getConnection(RpcConnectionPool.java:78)
    at org.apache.tajo.worker.TajoResourceAllocator$TajoWorkerAllocationThread.run(TajoResourceAllocator.java:230)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:115)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:100)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.register(NioClientBoss.java:42)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:121)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)
    at org.jboss.netty.channel.Channels.connect(Channels.java:634)
    at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.apache.tajo.rpc.NettyClientBase.connect(NettyClientBase.java:65)
    at org.apache.tajo.rpc.NettyClientBase.init(NettyClientBase.java:57)
    ... 10 more
2014-02-21 02:56:54,882 INFO rpc.NettyServerBase (NettyServerBase.java:shutdown(126)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:36066) shutdown
2014-02-21 02:56:54,883 INFO worker.TajoWorkerManagerService (TajoWorkerManagerService.java:stop(95)) - TajoWorkerManagerService stopped
2014-02-21 02:56:54,885 INFO rpc.NettyServerBase (NettyServerBase.java:shutdown(126)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:36065) shutdown
2014-02-21 02:56:54,885 INFO querymaster.QueryMasterManagerService (QueryMasterManagerService.java:stop(110)) - QueryMasterManagerService stopped
2014-02-21 02:56:54,886 INFO querymaster.QueryMaster (QueryMaster.java:run(425)) - QueryMaster heartbeat thread stopped
2014-02-21 02:56:54,886 INFO master.TajoAsyncDispatcher (TajoAsyncDispatcher.java:stop(122)) - AsyncDispatcher stopped:querymaster_1392951320469
2014-02-21 02:56:54,887 INFO querymaster.QueryMaster (QueryMaster.java:stop(160)) - QueryMaster stop
2014-02-21 02:56:54,887 INFO worker.TajoWorkerClientService (TajoWorkerClientService.java:stop(107)) - TajoWorkerClientService stopping
2014-02-21 02:56:54,889 INFO rpc.NettyServerBase (NettyServerBase.java:shutdown(126)) - Rpc (QueryMasterClientProtocol) listened on 0:0:0:0:0:0:0:0:36064) shutdown
2014-02-21 02:56:54,889 INFO worker.TajoWorkerClientService (TajoWorkerClientService.java:stop(111)) - TajoWorkerClientService stopped
2014-02-21 02:56:54,889 INFO worker.TajoWorker (TajoWorker.java:stop(347)) - TajoWorker main thread exiting
Results :
Tests run: 195, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Tajo Main ......................................... SUCCESS [19.996s]
[INFO] Tajo Project POM .................................. SUCCESS [4.301s]
[INFO] Tajo Common ....................................... SUCCESS [10.595s]
[INFO] Tajo Algebra ...................................... SUCCESS [3.254s]
[INFO] Tajo Rpc .......................................... SUCCESS [18.642s]
[INFO] Tajo Catalog Common ............................... SUCCESS [5.571s]
[INFO] Tajo Catalog Client ............................... SUCCESS [1.572s]
[INFO] Tajo Catalog Server ............................... SUCCESS [10.135s]
[INFO] Tajo Storage ...................................... SUCCESS [42.937s]
[INFO] Tajo Core PullServer .............................. SUCCESS [1.318s]
[INFO] Tajo Client ....................................... SUCCESS [4.098s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [0.612s]
[INFO] Tajo Core Backend ................................. FAILURE [2:02.832s]
[INFO] Tajo Core ......................................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4:06.790s
[INFO] Finished at: Fri Feb 21 02:56:55 UTC 2014
[INFO] Final Memory: 40M/229M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project tajo-core-backend: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test failed: The forked VM terminated without saying properly goodbye. VM crash or System.exit called ? -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-backend
Build step 'Execute shell' marked build as failure
Recording test results