[
https://issues.apache.org/jira/browse/ARROW-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965027#comment-15965027
]
Leif Walsh commented on ARROW-805:
----------------------------------
Living proof:
{noformat}
ubuntu@49b5b4f128cb:~/build$ ARROW_HDFS_TEST_PORT=9000 ARROW_HDFS_TEST_USER=hdfs ARROW_HDFS_TEST_HOST=impala debug/io-hdfs-test
[==========] Running 18 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 9 tests from TestHdfsClient/0, where TypeParam = arrow::io::JNIDriver
[ RUN ] TestHdfsClient/0.ConnectsAgain
17/04/11 22:06:02 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
17/04/11 22:06:03 WARN shortcircuit.DomainSocketFactory: The short-circuit
local reads feature cannot be used because libhadoop cannot be loaded.
[ OK ] TestHdfsClient/0.ConnectsAgain (1837 ms)
[ RUN ] TestHdfsClient/0.CreateDirectory
[ OK ] TestHdfsClient/0.CreateDirectory (211 ms)
[ RUN ] TestHdfsClient/0.GetCapacityUsed
[ OK ] TestHdfsClient/0.GetCapacityUsed (156 ms)
[ RUN ] TestHdfsClient/0.GetPathInfo
[ OK ] TestHdfsClient/0.GetPathInfo (393 ms)
[ RUN ] TestHdfsClient/0.AppendToFile
[ OK ] TestHdfsClient/0.AppendToFile (232 ms)
[ RUN ] TestHdfsClient/0.ListDirectory
[ OK ] TestHdfsClient/0.ListDirectory (228 ms)
[ RUN ] TestHdfsClient/0.ReadableMethods
[ OK ] TestHdfsClient/0.ReadableMethods (230 ms)
[ RUN ] TestHdfsClient/0.LargeFile
[ OK ] TestHdfsClient/0.LargeFile (263 ms)
[ RUN ] TestHdfsClient/0.RenameFile
[ OK ] TestHdfsClient/0.RenameFile (192 ms)
[----------] 9 tests from TestHdfsClient/0 (3742 ms total)
[----------] 9 tests from TestHdfsClient/1, where TypeParam = arrow::io::PivotalDriver
[ RUN ] TestHdfsClient/1.ConnectsAgain
[ OK ] TestHdfsClient/1.ConnectsAgain (150 ms)
[ RUN ] TestHdfsClient/1.CreateDirectory
[ OK ] TestHdfsClient/1.CreateDirectory (156 ms)
[ RUN ] TestHdfsClient/1.GetCapacityUsed
[ OK ] TestHdfsClient/1.GetCapacityUsed (127 ms)
[ RUN ] TestHdfsClient/1.GetPathInfo
[ OK ] TestHdfsClient/1.GetPathInfo (196 ms)
[ RUN ] TestHdfsClient/1.AppendToFile
[ OK ] TestHdfsClient/1.AppendToFile (225 ms)
[ RUN ] TestHdfsClient/1.ListDirectory
[ OK ] TestHdfsClient/1.ListDirectory (230 ms)
[ RUN ] TestHdfsClient/1.ReadableMethods
[ OK ] TestHdfsClient/1.ReadableMethods (195 ms)
[ RUN ] TestHdfsClient/1.LargeFile
[ OK ] TestHdfsClient/1.LargeFile (251 ms)
[ RUN ] TestHdfsClient/1.RenameFile
[ OK ] TestHdfsClient/1.RenameFile (189 ms)
[----------] 9 tests from TestHdfsClient/1 (1719 ms total)
[----------] Global test environment tear-down
[==========] 18 tests from 2 test cases ran. (5462 ms total)
[ PASSED ] 18 tests.
{noformat}
> listing empty HDFS directory returns an error instead of returning empty list
> -----------------------------------------------------------------------------
>
> Key: ARROW-805
> URL: https://issues.apache.org/jira/browse/ARROW-805
> Project: Apache Arrow
> Issue Type: Bug
> Affects Versions: 0.2.0, 0.3.0
> Reporter: Leif Walsh
> Assignee: Leif Walsh
> Fix For: 0.3.0
>
>
> https://github.com/apache/arrow/blob/master/cpp/src/arrow/io/hdfs.cc#L409-L410
> {code}
>   if (entries == nullptr) {
>     // If the directory is empty, entries is NULL but errno is 0. Non-zero
>     // errno indicates error
>     //
>     // Note: errno is thread-local
>     if (errno == 0) { num_entries = 0; }
>     { return Status::IOError("HDFS: list directory failed"); }
>   }
> {code}
> I think that should have an else:
> {code}
>   if (entries == nullptr) {
>     // If the directory is empty, entries is NULL but errno is 0. Non-zero
>     // errno indicates error
>     //
>     // Note: errno is thread-local
>     if (errno == 0) {
>       num_entries = 0;
>     } else {
>       return Status::IOError("HDFS: list directory failed");
>     }
>   }
> {code}
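> The fix can be exercised outside of HDFS. The sketch below is a minimal, self-contained illustration of the corrected control flow, using a hypothetical {{FakeListDirectory}} stand-in for {{hdfsListDirectory}} (which returns NULL both for an empty directory, leaving errno at 0, and on failure, setting errno non-zero):
> {code}
> #include <cassert>
> #include <cerrno>
> #include <cstddef>
> 
> // Hypothetical stand-in for hdfsListDirectory: returns nullptr in both
> // the empty-directory case (errno stays 0) and the failure case
> // (errno set non-zero).
> const char** FakeListDirectory(bool fail) {
>   errno = fail ? EIO : 0;
>   return nullptr;
> }
> 
> // Corrected control flow from the patch above: a NULL entries pointer
> // with errno == 0 is an empty listing (success), not an error.
> bool ListDirectory(bool fail, int* num_entries) {
>   errno = 0;
>   const char** entries = FakeListDirectory(fail);
>   if (entries == nullptr) {
>     if (errno == 0) {
>       *num_entries = 0;  // empty directory: succeed with zero entries
>       return true;
>     } else {
>       return false;  // genuine I/O error
>     }
>   }
>   return true;
> }
> 
> int main() {
>   int n = -1;
>   assert(ListDirectory(/*fail=*/false, &n) && n == 0);  // empty dir is OK
>   assert(!ListDirectory(/*fail=*/true, &n));            // errors still surface
>   return 0;
> }
> {code}
> Without the {{else}}, the buggy version falls through to the error return even after setting {{num_entries = 0}}, which is exactly the empty-directory failure reported here.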
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)