[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-643567930


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 48s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 49s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 47s |   |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8d98f2261e89 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 785b1def959 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/3/testReport/ |
   | Max. process+thread count | 311 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-12 Thread GitBox


ishaniahuja commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-643562997


   namespace enabled, current REST version (2019-12-12), private endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

   namespace enabled, current REST version (2019-12-12), private endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

   namespace enabled, current REST version (2018-11-09), public endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24

   namespace disabled, current REST version (2018-11-09), public endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 244
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24






[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-643549888


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m  8s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 10s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  62m 20s |   |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7a1debdabf22 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 785b1def959 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/2/testReport/ |
   | Max. process+thread count | 319 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HADOOP-17063) S3A deleteObjects hanging/retrying forever

2020-06-12 Thread Dyno (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134609#comment-17134609
 ] 

Dyno commented on HADOOP-17063:
---

Switching to the magic committer looks to be working. Thanks for your help, [~ste...@apache.org].

> S3A deleteObjects hanging/retrying forever
> --
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> org.apache.spark.scheduler.Task.run(Task.scala:123) 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes (spark-operator); 
> sometimes we see this hang with the stack trace above. It looks like the 
> putObject call never returns, and we have to kill the executor to make the 
> job move forward. 
>  
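
The workaround the reporter mentions (switching to the S3A magic committer) is enabled through configuration. A hedged sketch of the relevant `core-site.xml` properties follows; the property names are taken from the Hadoop S3A committer documentation, so verify them against the 3.2.x release in use before relying on this:

```xml
<!-- Hedged sketch: enable the S3A "magic" committer. Verify these
     property names against the Hadoop 3.2.x S3A committer docs. -->
<property>
  <name>fs.s3a.committer.name</name>
  <value>magic</value>
</property>
<property>
  <name>fs.s3a.committer.magic.enabled</name>
  <value>true</value>
</property>
```

In many Spark setups the committer bindings (the `spark-hadoop-cloud` module or equivalent) are also needed so that Parquet commits are routed through the S3A committer factory rather than the rename-based path.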



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] umamaheswararao commented on pull request #2066: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath.

2020-06-12 Thread GitBox


umamaheswararao commented on pull request #2066:
URL: https://github.com/apache/hadoop/pull/2066#issuecomment-643491971


   Thank you @jojochuang for review!






[GitHub] [hadoop] umamaheswararao merged pull request #2066: HDFS-15387. FSUsage#DF should consider ViewFSOverloadScheme in processPath.

2020-06-12 Thread GitBox


umamaheswararao merged pull request #2066:
URL: https://github.com/apache/hadoop/pull/2066


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-643439735


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | +0 :ok: |  reexec  |   1m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
23 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 10s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 34s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 43s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 10s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 49s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 55s |  the patch passed  |
   | -1 :x: |  javac  |  17m 55s |  root generated 1 new + 1857 unchanged - 1 
fixed = 1858 total (was 1858)  |
   | -0 :warning: |  checkstyle  |   2m 57s |  root: The patch generated 21 new 
+ 160 unchanged - 22 fixed = 181 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 9 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 36s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed  |
   | -1 :x: |  findbugs  |   2m 17s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 33s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   1m 20s |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 133m 10s |   |
   
   
   | Reason | Tests |
   | ---: | :--- |
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Dead store to sourceCounters in 
org.apache.hadoop.fs.statistics.impl.CounterIOStatisticsImpl.copy(IOStatistics) 
 At 
CounterIOStatisticsImpl.java:org.apache.hadoop.fs.statistics.impl.CounterIOStatisticsImpl.copy(IOStatistics)
  At CounterIOStatisticsImpl.java:[line 85] |
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeCentralMountTableConfig |
   |   | hadoop.fs.statistics.TestDynamicIOStatistics |
   |   | hadoop.fs.contract.localfs.TestLocalFSContractStreamIOStatistics |
   |   | hadoop.fs.viewfs.TestHCFSMountTableConfigLoader |
   |   | hadoop.fs.s3a.TestS3AUnbuffer |
   |   | hadoop.fs.s3a.s3guard.TestObjectChangeDetectionAttributes |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2069 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 6edb1f978072 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7c4de59fc10 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/2/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/2/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2069/2/artifact/out/whitespace-eol.txt
 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-643423265


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 33s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 33s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 20 new + 7 unchanged - 0 fixed = 27 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | -1 :x: |  findbugs  |   1m  3s |  hadoop-tools/hadoop-azure generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 28s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  62m  2s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Unread field:AbfsRestOperation.java:[line 144] |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9cc83b566ff8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7c4de59fc10 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/1/testReport/ |
   | Max. process+thread count | 414 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HADOOP-17046) Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes.

2020-06-12 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134441#comment-17134441
 ] 

Hudson commented on HADOOP-17046:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18348 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18348/])
HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations (github: 
rev e15408477017753ea1a0896c8f54daeadee40d10)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcWritable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/SCMAdminProtocolPBClientImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* (add) 
hadoop-common-project/hadoop-common/src/main/proto/ProtobufRpcEngine2.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceManagerAdministrationProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/factories/impl/pb/RpcServerFactoryPBImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockNamenode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestRPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/server/HSAdminServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ClientSCMProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpcServerHandoff.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ApplicationClientProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/DistributedSchedulingAMProtocolPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeLifelineProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ApplicationMasterProtocolPBClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceTrackerPBClientImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* (edit) 

[jira] [Updated] (HADOOP-17046) Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes.

2020-06-12 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-17046:
---
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~aajisaka] and [~ayushtkn] for the reviews on the PR.
Merged to trunk, branch-3.3, and branch-3.3.0.


> Support downstreams' existing Hadoop-rpc implementations using non-shaded 
> protobuf classes.
> ---
>
> Key: HADOOP-17046
> URL: https://issues.apache.org/jira/browse/HADOOP-17046
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> After upgrade/shade of protobuf to 3.7 version, existing Hadoop-Rpc 
> client-server implementations using ProtobufRpcEngine will not work.
> So, this Jira proposes to keep existing ProtobuRpcEngine as-is (without 
> shading and with protobuf-2.5.0 implementation) to support downstream 
> implementations.
> Use new ProtobufRpcEngine2 to use shaded protobuf classes within Hadoop and 
> later projects who wish to upgrade their protobufs to 3.x.
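
The split described above can be illustrated with a short Java fragment. This is a hedged sketch, not standalone-runnable (it requires Hadoop 3.3+ on the classpath); `MyProtocolPB` is a hypothetical protocol interface introduced only for illustration, while `RPC.setProtocolEngine` and the two engine classes are as described in HADOOP-17046:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.ProtobufRpcEngine2;
import org.apache.hadoop.ipc.RPC;

public class RpcEngineSelection {
  interface MyProtocolPB { }  // hypothetical placeholder protocol

  static void configure(Configuration conf, boolean usesShadedProtobuf) {
    if (usesShadedProtobuf) {
      // Code migrated to Hadoop's shaded protobuf 3.x registers the new engine.
      RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine2.class);
    } else {
      // Downstream code still built against unshaded protobuf 2.5 keeps the
      // legacy engine, which HADOOP-17046 leaves in place.
      RPC.setProtocolEngine(conf, MyProtocolPB.class, ProtobufRpcEngine.class);
    }
  }
}
```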






[jira] [Commented] (HADOOP-12231) setXIncludeAware error keeps getting logged while calling get from Configuration

2020-06-12 Thread Jeff Evans (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17134430#comment-17134430
 ] 

Jeff Evans commented on HADOOP-12231:
-

Does anyone have a way to suppress this?

> setXIncludeAware error keeps getting logged while calling get from Configuration
> 
>
> Key: HADOOP-12231
> URL: https://issues.apache.org/jira/browse/HADOOP-12231
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
> Environment: Oracle XDK parser
>Reporter: Krishnamoorthy Dharmalingam
>Priority: Trivial
>
> [junit] [ERROR] Configuration - Failed to set setXIncludeAware(true) for 
> parser oracle.xml.jaxp.JXDocumentBuilderFactory@14673fc2: 
> java.lang.UnsupportedOperationException: setXIncludeAware is not supported 
> on this JAXP implementation or earlier: class 
> oracle.xml.jaxp.JXDocumentBuilderFactory
> java.lang.UnsupportedOperationException: setXIncludeAware is not supported 
> on this JAXP implementation or earlier: class 
> oracle.xml.jaxp.JXDocumentBuilderFactory
> [junit]  at 
> javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:584)
> [junit]  at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2216)
> [junit]  at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2185)
> [junit]  at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2102)
> [junit]  at 
> org.apache.hadoop.conf.Configuration.get(Configuration.java:851)
>
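One way to stop the repeated error is to guard the call and fall back when the JAXP implementation rejects it. This is a hedged sketch against the standard `javax.xml.parsers` API, not Hadoop's actual patch; `trySetXIncludeAware` is an invented helper name:

```java
import javax.xml.parsers.DocumentBuilderFactory;

public class XIncludeGuard {
    // Enable XInclude only when the JAXP implementation supports it; otherwise
    // fall back quietly instead of surfacing an error on every config lookup.
    static boolean trySetXIncludeAware(DocumentBuilderFactory factory) {
        try {
            factory.setXIncludeAware(true);
            return true;
        } catch (UnsupportedOperationException e) {
            // e.g. oracle.xml.jaxp.JXDocumentBuilderFactory does not support it
            return false;
        }
    }

    public static void main(String[] args) {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        boolean enabled = trySetXIncludeAware(factory);
        System.out.println("XInclude enabled: " + enabled);
    }
}
```

With the default JDK (Xerces-based) factory the call succeeds; with the Oracle XDK factory from the stack trace above it would return false instead of throwing.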







[GitHub] [hadoop] hadoop-yetus commented on pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026#issuecomment-643405454


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  5s |  https://github.com/apache/hadoop/pull/2026 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2026 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2026/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] vinayakumarb merged pull request #2026: HADOOP-17046. Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes

2020-06-12 Thread GitBox


vinayakumarb merged pull request #2026:
URL: https://github.com/apache/hadoop/pull/2026


   









[GitHub] [hadoop] hadoop-yetus commented on pull request #2071: YARN-10313. Add hadoop process id to the suffix of hadoop-unjar directory

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2071:
URL: https://github.com/apache/hadoop/pull/2071#issuecomment-643399198


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  22m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 48s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 10s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   2m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 18s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 51s |  
hadoop-common-project/hadoop-common: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   2m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 17s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 128m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2071/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6a41096f7281 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7c4de59fc10 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2071/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2071/1/testReport/ |
   | Max. process+thread count | 2661 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2071/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   









[GitHub] [hadoop] ishaniahuja opened a new pull request #2072: HADOOP-17058. ABFS Support for AppendBlob in Hadoop ABFS Driver

2020-06-12 Thread GitBox


ishaniahuja opened a new pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072


   The PR adds support for AppendBlob in the Hadoop ABFS driver. It also 
updates the existing test cases for AppendBlob-based files. The change has 
been tested against HBase runs, along with the integration and unit tests. 
   
   Here are the test results for the integration tests:

   fs.azure.appendblob.key=abfs://abfs-testcontainer  (this causes all files 
created by the tests to be AppendBlob-based)
namespace enabled, current rest version(2019-12-12), private endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
namespace enabled, current rest version(2019-12-12), private endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
namespace enabled, current rest version(2018-11-09), public endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 41
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   namespace disabled, current rest version( 2018-11-09), public endpoint
   Tests run: 83, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 440, Failures: 0, Errors: 0, Skipped: 244
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24









[GitHub] [hadoop] Chandson opened a new pull request #2071: YARN-10313. Add hadoop process id to the suffix of hadoop-unjar directory

2020-06-12 Thread GitBox


Chandson opened a new pull request #2071:
URL: https://github.com/apache/hadoop/pull/2071


   
   









[GitHub] [hadoop] steveloughran commented on a change in pull request #2014: HADOOP-16854. ABFS: Fix for the OutOfMemoryException in AbfsOutputStream

2020-06-12 Thread GitBox


steveloughran commented on a change in pull request #2014:
URL: https://github.com/apache/hadoop/pull/2014#discussion_r439326950



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java
##
@@ -207,6 +213,92 @@ public Void call() throws Exception {
 assertEquals((long) TEST_BUFFER_SIZE * FLUSH_TIMES, fileStatus.getLen());
   }
 
+  @Test
+  public void testShouldUseOlderAbfsOutputStreamConf() throws IOException {
+AzureBlobFileSystem fs = getFileSystem();
+Path testPath = new Path(methodName.getMethodName() + "1");
+getFileSystem().getAbfsStore().getAbfsConfiguration()
+.setShouldUseOlderAbfsOutputStream(true);
+try (FSDataOutputStream stream = fs.create(testPath)) {
+  Assertions.assertThat(stream.getWrappedStream()).describedAs("When the "
+  + "shouldUseOlderAbfsOutputStream is set the wrapped stream inside "
+  + "the FSDataOutputStream object should be of class "
+  + "AbfsOutputStreamOld.").isInstanceOf(AbfsOutputStreamOld.class);
+}
+testPath = new Path(methodName.getMethodName());
+getFileSystem().getAbfsStore().getAbfsConfiguration()
+.setShouldUseOlderAbfsOutputStream(false);
+try (FSDataOutputStream stream = fs.create(testPath)) {
+  Assertions.assertThat(stream.getWrappedStream()).describedAs("When the "
+  + "shouldUseOlderAbfsOutputStream is set the wrapped stream inside "
+  + "the FSDataOutputStream object should be of class "
+  + "AbfsOutputStream.").isInstanceOf(AbfsOutputStream.class);
+}
+  }
+
+  @Test
+  public void testWriteWithMultipleOutputStreamAtTheSameTime()
+  throws IOException, InterruptedException, ExecutionException {
+AzureBlobFileSystem fs = getFileSystem();
+String testFilePath = methodName.getMethodName();
+Path[] testPaths = new Path[CONCURRENT_STREAM_OBJS_TEST_OBJ_COUNT];
+createNStreamsAndWriteDifferentSizesConcurrently(fs, testFilePath,
+CONCURRENT_STREAM_OBJS_TEST_OBJ_COUNT, testPaths);
+assertSuccessfulWritesOnAllStreams(fs,
+CONCURRENT_STREAM_OBJS_TEST_OBJ_COUNT, testPaths);
+  }
+
+  private void assertSuccessfulWritesOnAllStreams(final FileSystem fs,
+  final int numConcurrentObjects, final Path[] testPaths)
+  throws IOException {
+for (int i = 0; i < numConcurrentObjects; i++) {
+  FileStatus fileStatus = fs.getFileStatus(testPaths[i]);
+  int numWritesMadeOnStream = i + 1;
+  long expectedLength = TEST_BUFFER_SIZE * numWritesMadeOnStream;
+  assertThat(fileStatus.getLen(), is(equalTo(expectedLength)));
+}
+  }
+
+  private void createNStreamsAndWriteDifferentSizesConcurrently(
+  final FileSystem fs, final String testFilePath,
+  final int numConcurrentObjects, final Path[] testPaths)
+  throws ExecutionException, InterruptedException {
+final byte[] b = new byte[TEST_BUFFER_SIZE];
+new Random().nextBytes(b);
+final ExecutorService es = Executors.newFixedThreadPool(40);
+final List<Future<Void>> futureTasks = new ArrayList<>();
+for (int i = 0; i < numConcurrentObjects; i++) {
+  Path testPath = new Path(testFilePath + i);
+  testPaths[i] = testPath;
+  int numWritesToBeDone = i + 1;
+  futureTasks.add(es.submit(() -> {
+try (FSDataOutputStream stream = fs.create(testPath)) {
+  makeNWritesToStream(stream, numWritesToBeDone, b, es);
+}
+return null;
+  }));
+}
+for (Future<Void> futureTask : futureTasks) {
+  futureTask.get();
+}
+es.shutdownNow();
+  }
+
+  private void makeNWritesToStream(final FSDataOutputStream stream,
+  final int numWrites, final byte[] b, final ExecutorService es)
+  throws ExecutionException, InterruptedException, IOException {
+final List<Future<Void>> futureTasks = new ArrayList<>();
+for (int i = 0; i < numWrites; i++) {
+  futureTasks.add(es.submit(() -> {
+stream.write(b);
+return null;
+  }));
+}
+for (Future<Void> futureTask : futureTasks) {

Review comment:
   see if you can use org.apache.hadoop.fs.impl.FutureIOSupport here. And 
somewhere there's a method to block waiting for futures to complete without 
doing it sequentially; I believe it is faster
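FutureIOSupport is Hadoop-internal, but the idea in the review — block once for a whole batch of futures instead of calling get() on each in turn — can be shown with plain JDK futures. This is an illustrative sketch, not the reviewed test code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AwaitAll {
    // Sum the results of several async tasks, blocking once for the whole
    // batch via allOf() rather than sequentially on each future.
    static int runAndSum(int tasks) {
        ExecutorService es = Executors.newFixedThreadPool(4);
        try {
            List<CompletableFuture<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < tasks; i++) {
                final int n = i;
                futures.add(CompletableFuture.supplyAsync(() -> n, es));
            }
            // Single collective barrier; also propagates the first failure.
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
            return futures.stream().mapToInt(CompletableFuture::join).sum();
        } finally {
            es.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAndSum(4)); // 0 + 1 + 2 + 3 = 6
    }
}
```

Total wall time is bounded by the slowest task either way; the collective wait mainly simplifies error handling and avoids one blocking call per future.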

##
File path: hadoop-tools/hadoop-azure/pom.xml
##
@@ -172,6 +172,12 @@
   com.google.guava
   guava
 
+

Review comment:
   1. Can you add it to the hadoop-project pom and then refer to it here? 
That's how we guarantee consistent versions.
   2. Do we really need to add a new JAR into production just for annotations? 
If that is all it is for, maybe we could somehow avoid doing that.
   
   Which annotations is it actually for? VisibleForTesting is in Guava.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsByteBufferPool.java
##
@@ -0,0 +1,160 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under 

[GitHub] [hadoop] steveloughran commented on a change in pull request #2056: HADOOP-17065. Adding Network Counters in ABFS

2020-06-12 Thread GitBox


steveloughran commented on a change in pull request #2056:
URL: https://github.com/apache/hadoop/pull/2056#discussion_r439319323



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -57,7 +57,23 @@
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times connection was made with Data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests was sent to the data store."),
+  GET_RESPONSE("get_response",
+  "Total number of times response was recorded after sending requests."),

Review comment:
   "a response"

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -57,7 +57,23 @@
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times connection was made with Data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests was sent to the data store."),
+  GET_RESPONSE("get_response",

Review comment:
   responses

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -57,7 +57,23 @@
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times connection was made with Data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests was sent to the data store."),
+  GET_RESPONSE("get_response",
+  "Total number of times response was recorded after sending requests."),
+  BYTES_SEND("bytes_send",
+  "Total bytes sent through http requests."),

Review comment:
   how about "bytes sent from Azure Datalake"

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -57,7 +57,23 @@
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times connection was made with Data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests was sent to the data store."),

Review comment:
   "were sent"

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -57,7 +57,23 @@
   FILES_DELETED("files_deleted",
   "Total number of files deleted from the object store."),
   ERROR_IGNORED("error_ignored",
-  "Errors caught and ignored.");
+  "Errors caught and ignored."),
+
+  //Network statistics.
+  CONNECTIONS_MADE("connections_made",
+  "Total number of times connection was made with Data store."),
+  SEND_REQUESTS("send_requests",
+  "Total number of times http requests was sent to the data store."),
+  GET_RESPONSE("get_response",
+  "Total number of times response was recorded after sending requests."),
+  BYTES_SEND("bytes_send",

Review comment:
   "bytes sent"

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java
##
@@ -0,0 +1,245 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.junit.Test;
+
+import 

[GitHub] [hadoop] mehakmeet commented on a change in pull request #2063: HADOOP-17020. RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-06-12 Thread GitBox


mehakmeet commented on a change in pull request #2063:
URL: https://github.com/apache/hadoop/pull/2063#discussion_r439321862



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
##
@@ -518,7 +520,12 @@ public boolean delete(Path p, boolean recursive) throws 
IOException {
 }
 return new FileStatus[] {
 new DeprecatedRawLocalFileStatus(localf,
-getDefaultBlockSize(f), this) };
+defaultBlockSize, this) };
+  }
+
+  @Override
+  public boolean exists(Path f) throws IOException {
+return pathToFile(f).exists();

Review comment:
   This is a new method(Overriden) for this patch.











[GitHub] [hadoop] steveloughran commented on pull request #2063: HADOOP-17020. RawFileSystem could localize default block size to avoid sync bottleneck in config

2020-06-12 Thread GitBox


steveloughran commented on pull request #2063:
URL: https://github.com/apache/hadoop/pull/2063#issuecomment-643181214


   Looks good. One question, though: is the change on L520 meant to be part of 
this patch? I can see it is an optimisation, but it is not listed in the JIRA.
   
   If you want it in, we need to change its title to, say, "improve RawFS 
performance" and list the changes in the commit message and JIRA.
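The optimisation under discussion — reading the default block size once at construction instead of going back to the synchronized configuration on every call — can be sketched generically. `Conf` and `LocalFs` below are made-up stand-ins, not Hadoop's Configuration or RawLocalFileSystem:

```java
public class BlockSizeCache {
    // Stand-in for a configuration object whose lookups are synchronized,
    // which becomes a contention point when hit on every metadata call.
    static class Conf {
        private final long blockSize = 33_554_432L; // 32 MB default
        synchronized long getLong(String key, long dflt) { return blockSize; }
    }

    static class LocalFs {
        private final long defaultBlockSize; // cached once, then read lock-free

        LocalFs(Conf conf) {
            this.defaultBlockSize = conf.getLong("fs.local.block.size", 33_554_432L);
        }

        long getDefaultBlockSize() { return defaultBlockSize; }
    }

    public static void main(String[] args) {
        LocalFs fs = new LocalFs(new Conf());
        System.out.println(fs.getDefaultBlockSize());
    }
}
```

The trade-off is that a configuration change after the filesystem is initialised is no longer observed, which is usually acceptable for a per-instance default.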









[GitHub] [hadoop] hadoop-yetus commented on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-06-12 Thread GitBox


hadoop-yetus commented on pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#issuecomment-643180766


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  21m  2s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m 56s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 12 unchanged - 1 fixed = 16 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  5s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  82m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2038 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f96e0e2d9d79 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fed6fecd3a9 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/2/testReport/ |
   | Max. process+thread count | 344 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   









[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-06-12 Thread GitBox


mukund-thakur commented on a change in pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#discussion_r439283239



##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java
##
@@ -168,6 +168,65 @@ public void testCostOfListLocatedStatusOnNonEmptyDir() 
throws Throwable {
 }
   }
 
+  @Test
+  public void testCostOfListFilesOnFile() throws Throwable {
+describe("Performing listFiles() on a file");
+Path file = path(getMethodName() + ".txt");
+S3AFileSystem fs = getFileSystem();
+touch(fs, file);
+resetMetricDiffs();
+fs.listFiles(file, true);
+if (!fs.hasMetadataStore()) {
+  metadataRequests.assertDiffEquals(1);
+} else {
+  if (fs.allowAuthoritative(file)) {
+listRequests.assertDiffEquals(0);
+  } else {
+listRequests.assertDiffEquals(1);
+  }
+}
+  }
+
+  @Test
+  public void testCostOfListFilesOnEmptyDir() throws Throwable {
+describe("Performing listFiles() on an empty dir");
+Path dir = path(getMethodName());
+S3AFileSystem fs = getFileSystem();
+fs.mkdirs(dir);
+resetMetricDiffs();
+fs.listFiles(dir, true);
+if (!fs.hasMetadataStore()) {
+  verifyOperationCount(2, 1);
+} else {
+  if (fs.allowAuthoritative(dir)) {
+verifyOperationCount(0, 0);
+  } else {
+verifyOperationCount(0, 1);
+  }
+}
+  }
+
+  @Test
+  public void testCostOfListFilesOnNonEmptyDir() throws Throwable {

Review comment:
   Done. There are no cost changes.











[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-06-12 Thread GitBox


mukund-thakur commented on a change in pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#discussion_r439283351



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##
@@ -4181,79 +4181,114 @@ public LocatedFileStatus next() throws IOException {
 Path path = qualify(f);
 LOG.debug("listFiles({}, {})", path, recursive);
 try {
-  // if a status was given, that is used, otherwise
-  // call getFileStatus, which triggers an existence check
-  final S3AFileStatus fileStatus = status != null
-  ? status
-  : (S3AFileStatus) getFileStatus(path);
-  if (fileStatus.isFile()) {
+  // if a status was given and it is a file.
+  if (status != null && status.isFile()) {
 // simple case: File
 LOG.debug("Path is a file");
 return new Listing.SingleStatusRemoteIterator(
-toLocatedFileStatus(fileStatus));
-  } else {
-// directory: do a bulk operation
-String key = maybeAddTrailingSlash(pathToKey(path));
-String delimiter = recursive ? null : "/";
-LOG.debug("Requesting all entries under {} with delimiter '{}'",
-key, delimiter);
-final RemoteIterator<S3AFileStatus> cachedFilesIterator;
-final Set<Path> tombstones;
-boolean allowAuthoritative = allowAuthoritative(f);
-if (recursive) {
-  final PathMetadata pm = metadataStore.get(path, true);
-  // shouldn't need to check pm.isDeleted() because that will have
-  // been caught by getFileStatus above.
-  MetadataStoreListFilesIterator metadataStoreListFilesIterator =
-  new MetadataStoreListFilesIterator(metadataStore, pm,
-  allowAuthoritative);
-  tombstones = metadataStoreListFilesIterator.listTombstones();
-  // if all of the below is true
-  //  - authoritative access is allowed for this metadatastore for 
this directory,
-  //  - all the directory listings are authoritative on the client
-  //  - the caller does not force non-authoritative access
-  // return the listing without any further s3 access
-  if (!forceNonAuthoritativeMS &&
-  allowAuthoritative &&
-  metadataStoreListFilesIterator.isRecursivelyAuthoritative()) {
-S3AFileStatus[] statuses = S3Guard.iteratorToStatuses(
-metadataStoreListFilesIterator, tombstones);
-cachedFilesIterator = listing.createProvidedFileStatusIterator(
-statuses, ACCEPT_ALL, acceptor);
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
-  cachedFilesIterator = metadataStoreListFilesIterator;
-} else {
-  DirListingMetadata meta =
-  S3Guard.listChildrenWithTtl(metadataStore, path, ttlTimeProvider,
-  allowAuthoritative);
-  if (meta != null) {
-tombstones = meta.listTombstones();
-  } else {
-tombstones = null;
-  }
-  cachedFilesIterator = listing.createProvidedFileStatusIterator(
-  S3Guard.dirMetaToStatuses(meta), ACCEPT_ALL, acceptor);
-  if (allowAuthoritative && meta != null && meta.isAuthoritative()) {
-// metadata listing is authoritative, so return it directly
-return 
listing.createLocatedFileStatusIterator(cachedFilesIterator);
-  }
+toLocatedFileStatus(status));
+  }
+  // Assuming the path to be a directory
+  // do a bulk operation.
+  RemoteIterator<S3ALocatedFileStatus> listFilesAssumingDir =
+  getListFilesAssumingDir(path,
+  recursive,
+  acceptor,
+  collectTombstones,
+  forceNonAuthoritativeMS);
+  // If there are no list entries present, we
+  // fallback to file existence check as the path
+  // can be a file or empty directory.
+  if (!listFilesAssumingDir.hasNext()) {
+final S3AFileStatus fileStatus = (S3AFileStatus) getFileStatus(path);

Review comment:
   Done
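The restructured flow in the hunk above — list optimistically, and only fall back to a per-path status probe when the listing comes back empty — can be sketched independently of S3A. `Store` and its two methods are invented for illustration; they stand in for a bulk LIST call and a single HEAD-style probe:

```java
import java.util.List;
import java.util.Map;

public class ListWithFallback {
    interface Store {
        List<String> list(String prefix); // bulk LIST-style call
        boolean isFile(String path);      // single HEAD-style probe
    }

    // Optimistic listing: the extra existence check is paid only when the
    // listing is empty (the path may be a file or an empty directory).
    static List<String> listFiles(Store store, String path) {
        List<String> entries = store.list(path);
        if (!entries.isEmpty()) {
            return entries;           // common case: one round trip
        }
        if (store.isFile(path)) {
            return List.of(path);     // fallback: path was a plain file
        }
        return List.of();             // empty directory (or nothing at all)
    }

    public static void main(String[] args) {
        Store store = new Store() {
            final Map<String, List<String>> dirs =
                Map.of("dir/", List.of("dir/a", "dir/b"), "empty/", List.of());
            public List<String> list(String p) { return dirs.getOrDefault(p, List.of()); }
            public boolean isFile(String p) { return p.equals("file.txt"); }
        };
        System.out.println(listFiles(store, "dir/"));      // [dir/a, dir/b]
        System.out.println(listFiles(store, "file.txt"));  // [file.txt]
        System.out.println(listFiles(store, "empty/"));    // []
    }
}
```

Compared with probing the path up front, this saves a request in the common non-empty-directory case and defers the cost to the rarer file and empty-directory cases.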







