[ 
https://issues.apache.org/jira/browse/TAJO-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293093#comment-14293093
 ] 

Hudson commented on TAJO-1318:
------------------------------

SUCCESS: Integrated in Tajo-master-build #564 (See 
[https://builds.apache.org/job/Tajo-master-build/564/])
TAJO-1318: Unit test failure after miniDFS cluster restart. (jinho) (jhkim: rev 
5ba8e383c61e6074e84dbd07a3b150370ccbaea1)
* tajo-core/src/main/java/org/apache/tajo/worker/TajoWorker.java
* CHANGES
* 
tajo-storage/tajo-storage-common/src/main/java/org/apache/tajo/storage/StorageManager.java
* tajo-core/src/test/java/org/apache/tajo/TajoTestingCluster.java


> Unit test failure after miniDFS cluster restart
> -----------------------------------------------
>
>                 Key: TAJO-1318
>                 URL: https://issues.apache.org/jira/browse/TAJO-1318
>             Project: Tajo
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.10
>            Reporter: Jinho Kim
>            Assignee: Jinho Kim
>            Priority: Blocker
>             Fix For: 0.10
>
>         Attachments: TAJO-1318.patch
>
>
> The cause is a disconnected HDFS client left in the StorageManager cache.
> We should clear the cache when stopping the worker.
> {noformat}
> 2015-01-26 11:06:53,216 INFO: org.apache.tajo.master.exec.DDLExecutor 
> (createTable(249)) - Table default.testddlbyexecutequery is created (982)
> 2015-01-26 11:06:53,438 INFO: org.apache.tajo.master.GlobalEngine 
> (updateQuery(196)) - SQL: create table testgetfinishedquerylist (deptname 
> text, score int4)
> 2015-01-26 11:06:53,440 INFO: org.apache.tajo.master.GlobalEngine 
> (createLogicalPlan(239)) - Non Optimized Query: 
> -----------------------------
> Query Block Graph
> -----------------------------
> |-#ROOT
> -----------------------------
> Optimization Log:
> -----------------------------
> 2015-01-26 11:06:53,440 INFO: org.apache.tajo.master.GlobalEngine 
> (createLogicalPlan(241)) - =============================================
> 2015-01-26 11:06:53,440 INFO: org.apache.tajo.master.GlobalEngine 
> (createLogicalPlan(242)) - Optimized Query: 
> -----------------------------
> Query Block Graph
> -----------------------------
> |-#ROOT
> -----------------------------
> Optimization Log:
> -----------------------------
> 2015-01-26 11:06:53,441 INFO: org.apache.tajo.master.GlobalEngine 
> (createLogicalPlan(243)) - =============================================
> 2015-01-26 11:06:53,442 ERROR: org.apache.tajo.master.GlobalEngine 
> (updateQuery(216)) - Call From asf901.gq1.ygridcore.net/67.195.81.145 to 
> localhost:33961 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
> java.net.ConnectException: Call From asf901.gq1.ygridcore.net/67.195.81.145 
> to localhost:33961 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>       at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
>       at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1415)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>       at com.sun.proxy.$Proxy24.mkdirs(Unknown Source)
>       at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>       at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>       at com.sun.proxy.$Proxy24.mkdirs(Unknown Source)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:508)
>       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2587)
>       at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2558)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:820)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)
>       at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:816)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:809)
>       at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>       at 
> org.apache.tajo.storage.FileStorageManager.createTable(FileStorageManager.java:678)
>       at 
> org.apache.tajo.master.exec.DDLExecutor.createTable(DDLExecutor.java:246)
>       at 
> org.apache.tajo.master.exec.DDLExecutor.createTable(DDLExecutor.java:208)
>       at org.apache.tajo.master.exec.DDLExecutor.execute(DDLExecutor.java:88)
>       at 
> org.apache.tajo.master.GlobalEngine.updateQuery(GlobalEngine.java:212)
>       at 
> org.apache.tajo.master.TajoMasterClientService$TajoMasterClientProtocolServiceHandler.updateQuery(TajoMasterClientService.java:313)
>       at 
> org.apache.tajo.ipc.TajoMasterClientProtocol$TajoMasterClientProtocolService$2.callBlockingMethod(TajoMasterClientProtocol.java:545)
>       at 
> org.apache.tajo.rpc.BlockingRpcServer$ServerHandler.messageReceived(BlockingRpcServer.java:103)
>       at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>       at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>       at 
> org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>       at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>       at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>       at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>       at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>       at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>       at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>       at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>       at 
> org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>       at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
>       at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
>       at 
> org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
>       at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
>       at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>       at 
> org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>       at 
> org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: java.net.ConnectException: Connection refused
>       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>       at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
>       at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>       at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
>       at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
>       at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
>       at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1382)
>       ... 52 more
> {noformat}
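
The fix described above — dropping cached storage clients on worker shutdown so a restarted miniDFS cluster never hands a test a stale, disconnected client — can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`WorkerCacheSketch`, `getClient`, `stopWorker`), not the actual TajoWorker/StorageManager code from the patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a worker caches storage clients keyed by URI
// and clears the cache on stop, so a restarted miniDFS cluster
// cannot hand back a stale, disconnected client.
public class WorkerCacheSketch {
    static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    static Object getClient(String uri) {
        // Reuse a cached client, or create a fresh one on a miss.
        return CACHE.computeIfAbsent(uri, u -> new Object());
    }

    static void stopWorker() {
        // Clearing on stop forces fresh clients after a cluster restart.
        CACHE.clear();
    }

    public static void main(String[] args) {
        Object first = getClient("hdfs://localhost:33961");
        stopWorker(); // simulate worker shutdown between test cases
        Object second = getClient("hdfs://localhost:33961");
        // A fresh client is created after the stop, not the stale one:
        System.out.println(first != second);
    }
}
```

Without the clear, the second lookup would return the same cached client that was connected to the old (now stopped) cluster, which is exactly the `Connection refused` failure in the log above.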



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)