[ https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606955#comment-17606955 ]
Matthias Pohl commented on FLINK-29315:
---------------------------------------
FYI: In today's release call we decided to try to fix this before continuing
with an rc1 for 1.16.0. There is, however, the option to disable this test and
not block the rc1 creation on this issue, given that the test is actually
passing on the AzureCI machines.
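
If we end up disabling it, a minimal sketch of what that could look like with a JUnit 4 assumption, so the test is skipped rather than failed on the affected machines (illustrative only: the class name and the {{canSpawnLs()}} helper are made up for this comment, and a plain {{@Ignore("FLINK-29315")}} on the test method would work just as well):
{code:java}
// Illustrative sketch only (FLINK-29315), not the actual HDFSTest code.
import org.junit.Assume;
import org.junit.Test;

public class BlobServerRecoverySkipSketch {

    // Hypothetical helper (not in HDFSTest): probes whether forking "ls" is allowed,
    // which is what the Hadoop permission check in the stack trace below attempts.
    private static boolean canSpawnLs() {
        try {
            Process p = new ProcessBuilder("ls").start();
            p.waitFor();
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    @Test
    public void testBlobServerRecovery() throws Exception {
        // Skip (rather than fail) on machines where the process cannot be forked.
        Assume.assumeTrue("environment cannot spawn 'ls' (FLINK-29315)", canSpawnLs());
        // ... the actual BLOB server recovery assertions would run here ...
    }
}
{code}
The assumption-based variant keeps the coverage on machines where forking processes is allowed.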
> HDFSTest#testBlobServerRecovery fails on CI
> -------------------------------------------
>
> Key: FLINK-29315
> URL: https://issues.apache.org/jira/browse/FLINK-29315
> Project: Flink
> Issue Type: Technical Debt
> Components: Connectors / FileSystem, Tests
> Affects Versions: 1.16.0, 1.15.2
> Reporter: Chesnay Schepler
> Priority: Blocker
> Fix For: 1.16.0, 1.17.0
>
>
> The test started failing 2 days ago on different branches. I suspect
> something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures:
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures (2 failures)
> Sep 15 09:11:22     java.lang.AssertionError: Test failed Error while running command to get file permissions : java.io.IOException: Cannot run program "ls": error=1, Operation not permitted
> Sep 15 09:11:22         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22         at org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22         at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22         at org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22         at org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22         at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22         at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22         at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22         at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22         at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22         at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22         at org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Sep 15 09:11:22         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22         ... 67 more
> Sep 15 09:11:22
> Sep 15 09:11:22     java.lang.NullPointerException: <no message>
> {code}
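> For context on where this trips: the stack trace shows the failure during the {{MiniDFSCluster}} bring-up in {{HDFSTest.createHDFS}}, i.e. before any Flink/BLOB-server code runs, which fits the suspicion that the CI environment rather than the test logic changed. A stripped-down bring-up of the same shape looks roughly like this (illustrative sketch only; {{MiniDfsBringUpSketch}} and the temp-dir setting are made up, not taken from the test):
> {code:java}
> // Illustrative only: a minimal MiniDFSCluster bring-up of the same shape as
> // HDFSTest.createHDFS. The DataNode disk check already forks "ls" during build(),
> // so on the affected machines this should fail before any Flink code is reached.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hdfs.MiniDFSCluster;
>
> public class MiniDfsBringUpSketch {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Hypothetical base dir for the mini cluster's data; any writable temp dir works.
>         conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
>                 System.getProperty("java.io.tmpdir") + "/minidfs");
>         MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
>         try {
>             System.out.println("MiniDFSCluster is up: " + cluster.getFileSystem().getUri());
>         } finally {
>             cluster.shutdown();
>         }
>     }
> }
> {code}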