[
https://issues.apache.org/jira/browse/IMPALA-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yida Wu resolved IMPALA-11958.
------------------------------
Resolution: Fixed
Should be resolved by increasing the timeout interval in IMPALA-11934.
> tmp-file-mgr-test aborted in Ozone build
> ----------------------------------------
>
> Key: IMPALA-11958
> URL: https://issues.apache.org/jira/browse/IMPALA-11958
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Reporter: Zoltán Borók-Nagy
> Assignee: Yida Wu
> Priority: Major
> Labels: broken-build
>
> The following exception was thrown during tmp-file-mgr-test's
> TmpFileMgrTest.TestDirectoryLimitParsingRemotePath:
> {noformat}
> hdfsCreateDirectory(hdfs://localhost:/tmp/impala-scratch): FileSystem#mkdirs error:
> ConnectException: Connection refusedjava.net.ConnectException: Call From impala-ec2-centos79-m6i-4xlarge-ondemand-15fa.vpc.cloudera.com/127.0.0.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:892)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:812)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1617)
> at org.apache.hadoop.ipc.Client.call(Client.java:1559)
> at org.apache.hadoop.ipc.Client.call(Client.java:1456)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:666)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
> at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2487)
> at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2463)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1478)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1475)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1492)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1467)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2395)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
> at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:205)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:727)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:840)
> at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:430)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1678)
> at org.apache.hadoop.ipc.Client.call(Client.java:1503)
> ... 23 more
> hdfsExists: invokeMethod((Lorg/apache/hadoop/fs/Path;)Z) error:
> IllegalArgumentException: Pathname /tmp:/impala-scratch from hdfs://localhost/tmp:/impala-scratch is not a valid DFS filename.java.lang.IllegalArgumentException: Pathname /tmp:/impala-scratch from hdfs://localhost/tmp:/impala-scratch is not a valid DFS filename.
> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:256)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1746)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1743)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1758)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
> hdfsCreateDirectory(hdfs://localhost/tmp:/impala-scratch): FileSystem#mkdirs error:
> IllegalArgumentException: Pathname /tmp:/impala-scratch from hdfs://localhost/tmp:/impala-scratch is not a valid DFS filename.java.lang.IllegalArgumentException: Pathname /tmp:/impala-scratch from hdfs://localhost/tmp:/impala-scratch is not a valid DFS filename.
> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:256)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1478)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1475)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1492)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1467)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2395)
> hdfsExists: invokeMethod((Lorg/apache/hadoop/fs/Path;)Z) error:
> IllegalArgumentException: Pathname /tmp:1/impala-scratch from hdfs://localhost/tmp:1/impala-scratch is not a valid DFS filename.java.lang.IllegalArgumentException: Pathname /tmp:1/impala-scratch from hdfs://localhost/tmp:1/impala-scratch is not a valid DFS filename.
> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:256)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1746)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1743)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1758)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742)
> hdfsCreateDirectory(hdfs://localhost/tmp:1/impala-scratch): FileSystem#mkdirs error:
> IllegalArgumentException: Pathname /tmp:1/impala-scratch from hdfs://localhost/tmp:1/impala-scratch is not a valid DFS filename.java.lang.IllegalArgumentException: Pathname /tmp:1/impala-scratch from hdfs://localhost/tmp:1/impala-scratch is not a valid DFS filename.
> at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:256)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1478)
> at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1475)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1492)
> at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1467)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2395){noformat}
> This aborted the backend test here:
> {noformat}
> *** Check failure stack trace: ***
> @ 0x3f2bfdc google::LogMessage::Fail()
> @ 0x3f2d88c google::LogMessage::SendToLog()
> @ 0x3f2b93a google::LogMessage::Flush()
> @ 0x3f2f4f8 google::LogMessageFatal::~LogMessageFatal()
> @ 0x1ecee1b impala::io::ScanRange::DoRead()
> @ 0x1eac353 impala::io::DiskQueue::DiskThreadLoop()
> @ 0x1eac6d0 boost::detail::function::void_function_obj_invoker0<>::invoke()
> @ 0x298238a impala::Thread::SuperviseThread()
> @ 0x29854fc boost::detail::thread_data<>::run()
> @ 0x2be5130 thread_proxy
> @ 0x7f8baaf0dea4 start_thread
> @ 0x7f8ba7907b0c __clone{noformat}
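The IllegalArgumentException stanzas above show scratch pathnames such as /tmp:/impala-scratch and /tmp:1/impala-scratch, i.e. the colon-separated limit/priority suffix of a scratch-directory spec appears to have ended up inside the DFS pathname, which HDFS rejects because DFS filenames may not contain ':'. A minimal sketch of the parsing hazard (a hypothetical parser and spec format for illustration, not Impala's actual TmpFileMgr code): splitting the spec on every ':' is unsafe for remote URIs, whose scheme and port also contain colons.

```python
def parse_scratch_dir(spec):
    """Split a scratch-dir spec "<path>[:<bytes_limit>[:<priority>]]".

    Hypothetical sketch: naive spec.split(':') would mangle a remote
    URI like "hdfs://localhost:20500/tmp:1" at the scheme's colon.
    Instead, treat only colons AFTER the last '/' as option separators.
    """
    last_slash = spec.rfind('/')
    # First ':' that can only belong to the suffix, not the URI.
    opt_start = spec.find(':', last_slash + 1)
    if opt_start == -1:
        return spec, []  # no limit/priority suffix
    return spec[:opt_start], spec[opt_start + 1:].split(':')
```

Under this rule, "hdfs://localhost:20500/tmp:1" parses to the path "hdfs://localhost:20500/tmp" with suffix ["1"], rather than being truncated at the scheme's colon and producing a pathname that still contains ':'.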
--
This message was sent by Atlassian Jira
(v8.20.10#820010)