[GitHub] [hadoop] hadoop-yetus commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-661641026


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 30s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 38s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  10m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 42s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-yarn in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-mapreduce-project in trunk failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-sls in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-cos in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   5m 49s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 40s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   1m 57s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   2m 52s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |  11m  8s |  hadoop-yarn-project/hadoop-yarn in trunk 
has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 43s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   3m 53s |  hadoop-mapreduce-project in trunk has 2 
extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 48s |  hadoop-tools/hadoop-sls in trunk has 1 
extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 39s |  hadoop-cloud-storage-project/hadoop-cos 
in trunk has 1 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   9m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 45s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 40s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 40s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 41s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  10m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  7s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 39s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-yarn in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-mapreduce-project in the patch 
failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-sls in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-cos in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   5m 49s 

[GitHub] [hadoop] mehakmeet commented on a change in pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-20 Thread GitBox


mehakmeet commented on a change in pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#discussion_r457841220



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
##
@@ -285,6 +291,98 @@ public void testWithNullStreamStatistics() throws 
IOException {
 }
   }
 
+  /**
+   * Testing readAhead counters in AbfsInputStream with 30 seconds timeout.
+   */
+  @Test(timeout = TIMEOUT_30_SECONDS)
+  public void testReadAheadCounters() throws IOException {
+describe("Test to check correct values for readAhead counters in "
++ "AbfsInputStream");
+
+AzureBlobFileSystem fs = getFileSystem();
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+Path readAheadCountersPath = path(getMethodName());
+
+/*
+ * Setting the block size for readAhead as 4KB.
+ */
+abfss.getAbfsConfiguration().setReadBufferSize(CUSTOM_BLOCK_BUFFER_SIZE);
+
+AbfsOutputStream out = null;
+AbfsInputStream in = null;
+
+try {
+
+  /*
+   * Creating a file of 1MB size.
+   */
+  out = createAbfsOutputStreamWithFlushEnabled(fs, readAheadCountersPath);
+  out.write(defBuffer);
+  out.close();
+
+  in = abfss.openFileForRead(readAheadCountersPath, fs.getFsStatistics());
+
+  /*
+   * Reading 1KB after each i * KB positions. Hence the reads are from 0
+   * to 1KB, 1KB to 2KB, and so on.. for 5 operations.
+   */
+  for (int i = 0; i < 5; i++) {
+in.seek(ONE_KB * i);
+in.read(defBuffer, ONE_KB * i, ONE_KB);
+  }
+  AbfsInputStreamStatisticsImpl stats =
+  (AbfsInputStreamStatisticsImpl) in.getStreamStatistics();
+
+  /*
+   * Since, readAhead is done in background threads. Sometimes, the
+   * threads aren't finished in the background and could result in
+   * inaccurate results. So, we wait till we have the accurate values
+   * with a limit of 30 seconds as that's when the test times out.
+   *
+   */
+  while (stats.getRemoteBytesRead() < CUSTOM_READ_AHEAD_BUFFER_SIZE
+  || stats.getReadAheadBytesRead() < CUSTOM_BLOCK_BUFFER_SIZE) {
+Thread.sleep(THREAD_SLEEP_10_SECONDS);
+  }
+
+  /*
+   * Verifying the counter values of readAheadBytesRead and 
remoteBytesRead.
+   *
+   * readAheadBytesRead : Since, we read 1KBs 5 times, that means we go
+   * from 0 to 5KB in the file. The bufferSize is set to 4KB, and since
+   * we have 8 blocks of readAhead buffer. We would have 8 blocks of 4KB
+   * buffer. Our read is till 5KB, hence readAhead would ideally read 2
+   * blocks of 4KB which is equal to 8KB. But, sometimes to get more than
+   * one block from readAhead buffer we might have to wait for background
+   * threads to fill the buffer and hence we might do remote read which
+   * would be faster. Therefore, readAheadBytesRead would be equal to or
+   * greater than 4KB.
+   *
+   * remoteBytesRead : Since, the bufferSize is set to 4KB and the number
+   * of blocks or readAheadQueueDepth is equal to 8. We would read 8 * 4
+   * KB buffer on the first read, which is equal to 32KB. But, if we are 
not
+   * able to read some bytes that were in the buffer after doing
+   * readAhead, we might use remote read again. Thus, the bytes read
+   * remotely could also be greater than 32Kb.
+   *
+   */
+  assertTrue(String.format("actual value of %d is not greater than or "

Review comment:
   It would be good to add it in this patch. Thanks for the tip.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-20 Thread GitBox


aajisaka commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-661635176


   Thank you @iwasakims for the PR.
   
   How about fixing `DLS_DEAD_LOCAL_STORE` instead of ignoring the warning?
   For example, in FSEditLogLoader#incrOpCount
   ```diff
   -  holder.held++;
   +  holder.held = holder.held + 1;
   ```
   fixes the warning and removes unnecessary operations.
   
   I ran `javap -c -p` to the class and found the unnecessary operations even 
in OpenJDK 11.0.7.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#issuecomment-661623929


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  20m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m  8s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 58s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 100m 17s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 172m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2156 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 378e6c9b5303 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | unit | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #1816: HDFS-10648. Expose Balancer metrics through Metrics2.

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #1816:
URL: https://github.com/apache/hadoop/pull/1816#issuecomment-661618939


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 20s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  2s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 58s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   | -0 :warning: |  patch  |   3m 17s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 32 unchanged - 1 
fixed = 32 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  95m 27s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 165m 56s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1816/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1816 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0bb1fa43469f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1816/4/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1816/4/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | javadoc | 

[GitHub] [hadoop] snvijaya commented on pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


snvijaya commented on pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#issuecomment-661608767


   Javadoc failure because of JDK11 support. Being addressed in 
https://issues.apache.org/jira/browse/HADOOP-16862



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2160: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2160:
URL: https://github.com/apache/hadoop/pull/2160#issuecomment-661587226


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 54s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  2s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 47s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 14s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 12s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 24s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  6s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 21s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 163m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2160 |
   | JIRA Issue | HDFS-15478 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9e79b4bff7dd 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/testReport/ |
   | Max. process+thread count | 2322 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/1/console |
   | versions | git=2.17.1 maven=3.6.0 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#issuecomment-661554559


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  49m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 19s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  10m 18s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 27s |  root in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  compile  |   3m 40s |  root in trunk failed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  checkstyle  |   4m 10s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 19s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   2m  2s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  1s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  22m  1s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 2045 new + 0 unchanged - 0 
fixed = 2045 total (was 0)  |
   | +1 :green_heart: |  compile  |  19m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  19m  2s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1488 new + 451 unchanged - 0 
fixed = 1939 total (was 451)  |
   | -0 :warning: |  checkstyle  |   3m 44s |  root: The patch generated 118 
new + 68 unchanged - 0 fixed = 186 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 27s |  hadoop-project has no data from 
findbugs  |
   | -1 :x: |  findbugs  |   2m 36s |  hadoop-common-project/hadoop-common 
generated 6 new + 2 unchanged - 0 fixed = 8 total (was 2)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 27s |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  |   9m 37s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 178m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  The class name com.hadoop.compression.lzo.LzoCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzoCodec  At LzoCodec.java:[lines 29-48] |
   |  |  Write to static field com.hadoop.compression.lzo.LzoCodec.warned from 
instance method 
com.hadoop.compression.lzo.LzoCodec.createOutputStream(OutputStream, 
Compressor)  At LzoCodec.java:from instance method 
com.hadoop.compression.lzo.LzoCodec.createOutputStream(OutputStream, 
Compressor)  At LzoCodec.java:[line 46] |
   |  |  The class name com.hadoop.compression.lzo.LzopCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzopCodec 

[jira] [Issue Comment Deleted] (HADOOP-15338) Java 11 runtime support

2020-07-20 Thread Shubing Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubing Zheng updated HADOOP-15338:
---
Comment: was deleted

(was: miner version?)

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-07-20 Thread Shubing Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161651#comment-17161651
 ] 

Shubing Zheng commented on HADOOP-15338:


miner version?

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161649#comment-17161649
 ] 

Íñigo Goiri commented on HADOOP-12549:
--

I don't have full context on this but I'm pretty sure this change will be 
controversial.
[~eyang] may be able to review this properly.

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#issuecomment-661486156


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 47s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 27s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m 25s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 54s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 1 new + 523 unchanged - 0 fixed = 524 total (was 523)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  98m  5s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 182m 49s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.TestDFSInputStream |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2156 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fb3c36b5aed7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | 

[GitHub] [hadoop] umamaheswararao opened a new pull request #2160: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-20 Thread GitBox


umamaheswararao opened a new pull request #2160:
URL: https://github.com/apache/hadoop/pull/2160


   https://issues.apache.org/jira/browse/HDFS-15478



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2158: [HADOOP-17124] [COMMON] Support LZO using aircompressor

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2158:
URL: https://github.com/apache/hadoop/pull/2158#issuecomment-661454168


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 25s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 58s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m  7s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 19s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   2m 10s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 41s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  18m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 42s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 42s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 40s |  root: The patch generated 118 
new + 68 unchanged - 0 fixed = 186 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 32s |  hadoop-project has no data from 
findbugs  |
   | -1 :x: |  findbugs  |   2m 23s |  hadoop-common-project/hadoop-common 
generated 6 new + 2 unchanged - 0 fixed = 8 total (was 2)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  |   9m 18s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 160m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  The class name com.hadoop.compression.lzo.LzoCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzoCodec  At LzoCodec.java:[lines 29-48] |
   |  |  Write to static field com.hadoop.compression.lzo.LzoCodec.warned from 
instance method 
com.hadoop.compression.lzo.LzoCodec.createOutputStream(OutputStream, 
Compressor)  At LzoCodec.java:from instance method 
com.hadoop.compression.lzo.LzoCodec.createOutputStream(OutputStream, 
Compressor)  At LzoCodec.java:[line 46] |
   |  |  The class name com.hadoop.compression.lzo.LzopCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzopCodec  At LzopCodec.java:[lines 29-48] |
   |  |  Write to static field com.hadoop.compression.lzo.LzopCodec.warned from 
instance method 
com.hadoop.compression.lzo.LzopCodec.createOutputStream(OutputStream, 
Compressor)  At LzopCodec.java:from instance method 

[GitHub] [hadoop] dbtsai opened a new pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-20 Thread GitBox


dbtsai opened a new pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159


   See https://issues.apache.org/jira/browse/HADOOP-17124 for details.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17141) Add Capability To Get Text Length

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161592#comment-17161592
 ] 

Hadoop QA commented on HADOOP-17141:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
51s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
35s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
15s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 
extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
15s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-common in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2157: HADOOP-17141: Add Capability To Get Text Length

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2157:
URL: https://github.com/apache/hadoop/pull/2157#issuecomment-661404259


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 19s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 51s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 13s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 15s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 36s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   2m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 24s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 160m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2157 |
   | JIRA Issue | HADOOP-17141 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e31abc98e297 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/testReport/ |
   | Max. process+thread count | 2523 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2157/1/console |
   | versions | git=2.17.1 maven=3.6.0 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#issuecomment-661379886


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2150/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2150 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 978ee2d7e818 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2150/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2150/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2150/2/testReport/ |
   | Max. process+thread count | 294 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2150/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please 

[GitHub] [hadoop] dbtsai closed pull request #2158: [HADOOP-17124] [COMMON] Support LZO using aircompressor

2020-07-20 Thread GitBox


dbtsai closed pull request #2158:
URL: https://github.com/apache/hadoop/pull/2158


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dbtsai opened a new pull request #2158: [HADOOP-17124] [COMMON] Support LZO using aircompressor

2020-07-20 Thread GitBox


dbtsai opened a new pull request #2158:
URL: https://github.com/apache/hadoop/pull/2158


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16753) Refactor HAAdmin

2020-07-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161528#comment-17161528
 ] 

Chen Liang edited comment on HADOOP-16753 at 7/20/20, 9:04 PM:
---

I have just backported this change to branch-3.2/3.1 and branch-2.10 (clean 
cherry-pick except for a couple import diff).


was (Author: vagarychen):
I have just backported this change to branch-3.2 and branch-2.10 (clean 
cherry-pick except for a couple import diff).

> Refactor HAAdmin
> 
>
> Key: HADOOP-16753
> URL: https://issues.apache.org/jira/browse/HADOOP-16753
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16753.001.patch, HADOOP-16753.002.patch, 
> HADOOP-16753.003.patch, HADOOP-16753.004.patch
>
>
> https://issues.apache.org/jira/browse/YARN-9985?focusedCommentId=16991414=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16991414
> We should move HDFS-specific haadmin options from HAAdmin to DFSHAAdmin to 
> remove unnecessary if-else statements from RMAdmin command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#issuecomment-661331824


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m 39s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m 15s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 58s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 4 new + 524 unchanged - 0 fixed = 528 total (was 524)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 114m 35s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 206m 25s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2156 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3254a6045f44 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f2033de2342 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2156/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | checkstyle | 

[jira] [Comment Edited] (HADOOP-16753) Refactor HAAdmin

2020-07-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161528#comment-17161528
 ] 

Chen Liang edited comment on HADOOP-16753 at 7/20/20, 8:54 PM:
---

I have just backported this change to branch-3.2 and branch-2.10 (clean 
cherry-pick except for a couple import diff).


was (Author: vagarychen):
I have just backported this change to branch-3.2, branch-3.1 and branch-2.10 
(clean cherry-pick except for a couple import diff).

> Refactor HAAdmin
> 
>
> Key: HADOOP-16753
> URL: https://issues.apache.org/jira/browse/HADOOP-16753
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16753.001.patch, HADOOP-16753.002.patch, 
> HADOOP-16753.003.patch, HADOOP-16753.004.patch
>
>
> https://issues.apache.org/jira/browse/YARN-9985?focusedCommentId=16991414=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16991414
> We should move HDFS-specific haadmin options from HAAdmin to DFSHAAdmin to 
> remove unnecessary if-else statements from RMAdmin command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


snvijaya commented on pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#issuecomment-661326219


   Thanks @steveloughran . Have addressed the comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16753) Refactor HAAdmin

2020-07-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161528#comment-17161528
 ] 

Chen Liang commented on HADOOP-16753:
-

I backported this change to branch-3.2, branch-3.1 and branch-2.10

> Refactor HAAdmin
> 
>
> Key: HADOOP-16753
> URL: https://issues.apache.org/jira/browse/HADOOP-16753
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16753.001.patch, HADOOP-16753.002.patch, 
> HADOOP-16753.003.patch, HADOOP-16753.004.patch
>
>
> https://issues.apache.org/jira/browse/YARN-9985?focusedCommentId=16991414=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16991414
> We should move HDFS-specific haadmin options from HAAdmin to DFSHAAdmin to 
> remove unnecessary if-else statements from RMAdmin command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16753) Refactor HAAdmin

2020-07-20 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161528#comment-17161528
 ] 

Chen Liang edited comment on HADOOP-16753 at 7/20/20, 8:49 PM:
---

I have just backported this change to branch-3.2, branch-3.1 and branch-2.10 
(clean cherry-pick except for a couple import diff).


was (Author: vagarychen):
I backported this change to branch-3.2, branch-3.1 and branch-2.10

> Refactor HAAdmin
> 
>
> Key: HADOOP-16753
> URL: https://issues.apache.org/jira/browse/HADOOP-16753
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16753.001.patch, HADOOP-16753.002.patch, 
> HADOOP-16753.003.patch, HADOOP-16753.004.patch
>
>
> https://issues.apache.org/jira/browse/YARN-9985?focusedCommentId=16991414=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16991414
> We should move HDFS-specific haadmin options from HAAdmin to DFSHAAdmin to 
> remove unnecessary if-else statements from RMAdmin command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a change in pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


snvijaya commented on a change in pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#discussion_r457683413



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java
##
@@ -181,6 +186,64 @@ public void testDeleteIdempotency() throws Exception {
 .describedAs(
 "Delete is considered idempotent by default and should return 
success.")
 .isEqualTo(HTTP_OK);
+
+// Case 2: Mock instance of Http Operation response. This will return
+// HTTP:Bad Request
+AbfsHttpOperation http400Op = mock(AbfsHttpOperation.class);
+when(http400Op.getStatusCode()).thenReturn(HTTP_BAD_REQUEST);
+
+// Mock delete response to 400
+when(op.getResult()).thenReturn(http400Op);
+
+Assertions.assertThat(testClient.deleteIdempotencyCheckOp(op)
+.getResult()
+.getStatusCode())
+.describedAs(
+"Idempotency check to happen only for HTTP 404 response.")
+.isEqualTo(HTTP_BAD_REQUEST);
+
+  }
+
+  @Test
+  public void testDeleteIdempotencyTriggerHttp404() throws Exception {
+
+final AzureBlobFileSystem fs = getFileSystem();
+AbfsClient client = TestAbfsClient.createTestClientFromCurrentContext(
+fs.getAbfsStore().getClient(),
+this.getConfiguration());
+
+// Case 1: Not a retried case should throw error back
+intercept(AbfsRestOperationException.class,
+() -> client.deletePath(
+"/NonExistingPath",
+false,
+null));
+
+// mock idempotency check to mimick retried case

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on a change in pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


snvijaya commented on a change in pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#discussion_r457682497



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -170,7 +170,7 @@ String getSasToken() {
* Executes the REST operation with retry, by issuing one or more
* HTTP operations.
*/
-  void execute() throws AzureBlobFileSystemException {
+   public void execute() throws AzureBlobFileSystemException {

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


steveloughran commented on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-661318534


   not going to suggest any changes of my own. One thing to know in future is 
that one of the hadoop retry policies takes a map of other policies and 
dynamically chooses the correct one based on the exception raised. Not needed 
here with only two exceptions treated as recoverable, but if you want to do 
more complex things, especially handle throttling, worth looking at
   
   see org.apache.hadoop.fs.s3a.S3ARetryPolicy for it in action



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658296547







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-658763916


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 16s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 50s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 48s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  5s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s |  hadoop-tools/hadoop-azure: The 
patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux aa5b85613b9c 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 80046d1c8a4 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/8/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/8/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/8/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/8/testReport/ |
   | Max. process+thread count | 309 (vs. ulimit of 5500) |
   | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-654233656


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 50s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 15s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  91m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fc5b43c34b3f 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 639acb6d892 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/testReport/ |
   | Max. process+thread count | 312 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-654957447


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 40s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hadoop-tools/hadoop-azure: The 
patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 4 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  72m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux b503ac2358ea 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2bbd00dff49 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/2/testReport/ |
   | Max. process+thread count | 338 (vs. ulimit of 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-655071803


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  9s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  65m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux cfd8da72cceb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4f26454a7d1 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 

[GitHub] [hadoop] steveloughran commented on a change in pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-20 Thread GitBox


steveloughran commented on a change in pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150#discussion_r457668405



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java
##
@@ -181,6 +186,64 @@ public void testDeleteIdempotency() throws Exception {
 .describedAs(
 "Delete is considered idempotent by default and should return 
success.")
 .isEqualTo(HTTP_OK);
+
+// Case 2: Mock instance of Http Operation response. This will return
+// HTTP:Bad Request
+AbfsHttpOperation http400Op = mock(AbfsHttpOperation.class);
+when(http400Op.getStatusCode()).thenReturn(HTTP_BAD_REQUEST);
+
+// Mock delete response to 400
+when(op.getResult()).thenReturn(http400Op);
+
+Assertions.assertThat(testClient.deleteIdempotencyCheckOp(op)
+.getResult()
+.getStatusCode())
+.describedAs(
+"Idempotency check to happen only for HTTP 404 response.")
+.isEqualTo(HTTP_BAD_REQUEST);
+
+  }
+
+  @Test
+  public void testDeleteIdempotencyTriggerHttp404() throws Exception {
+
+final AzureBlobFileSystem fs = getFileSystem();
+AbfsClient client = TestAbfsClient.createTestClientFromCurrentContext(
+fs.getAbfsStore().getClient(),
+this.getConfiguration());
+
+// Case 1: Not a retried case should throw error back
+intercept(AbfsRestOperationException.class,
+() -> client.deletePath(
+"/NonExistingPath",
+false,
+null));
+
+// mock idempotency check to mimick retried case

Review comment:
   check spelling of mimic

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java
##
@@ -170,7 +170,7 @@ String getSasToken() {
* Executes the REST operation with retry, by issuing one or more
* HTTP operations.
*/
-  void execute() throws AzureBlobFileSystemException {
+   public void execute() throws AzureBlobFileSystemException {

Review comment:
   tag this as @VisibleForTesting too





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17132) ABFS: Fix For Idempotency code

2020-07-20 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17132:
---
Fix Version/s: 3.4.0
Affects Version/s: 3.4.0
   Status: Patch Available  (was: Open)

> ABFS: Fix For Idempotency code
> --
>
> Key: HADOOP-17132
> URL: https://issues.apache.org/jira/browse/HADOOP-17132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
> Fix For: 3.4.0
>
>
> Trigger to handle the idempotency code introduced in 
> https://issues.apache.org/jira/browse/HADOOP-17015 is incomplete. 
> This PR is to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17132) ABFS: Fix For Idempotency code

2020-07-20 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17132:
---
Description: 
Trigger to handle the idempotency code introduced in 
https://issues.apache.org/jira/browse/HADOOP-17015 is incomplete. 

This PR is to fix the issue.

  was:
Trigger to handle the idempotency code introduced in 
https://issues.apache.org/jira/browse/HADOOP-17137 is incomplete. 

This PR is to fix the issue.


> ABFS: Fix For Idempotency code
> --
>
> Key: HADOOP-17132
> URL: https://issues.apache.org/jira/browse/HADOOP-17132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> Trigger to handle the idempotency code introduced in 
> https://issues.apache.org/jira/browse/HADOOP-17015 is incomplete. 
> This PR is to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17132) ABFS: Fix For Idempotency code

2020-07-20 Thread Sneha Vijayarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161510#comment-17161510
 ] 

Sneha Vijayarajan commented on HADOOP-17132:


Thanks. Fixed description.

> ABFS: Fix For Idempotency code
> --
>
> Key: HADOOP-17132
> URL: https://issues.apache.org/jira/browse/HADOOP-17132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> Trigger to handle the idempotency code introduced in 
> https://issues.apache.org/jira/browse/HADOOP-17015 is incomplete. 
> This PR is to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17132) ABFS: Fix For Idempotency code

2020-07-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161507#comment-17161507
 ] 

Steve Loughran commented on HADOOP-17132:
-

don't think that's the correct JIRA to point to

> ABFS: Fix For Idempotency code
> --
>
> Key: HADOOP-17132
> URL: https://issues.apache.org/jira/browse/HADOOP-17132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> Trigger to handle the idempotency code introduced in 
> https://issues.apache.org/jira/browse/HADOOP-17137 is incomplete. 
> This PR is to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17140) KMSClientProvider Sends HTTP GET with null "Content-Type" Header

2020-07-20 Thread Axton Grams (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161506#comment-17161506
 ] 

Axton Grams commented on HADOOP-17140:
--

Let me work on a patch and UT.  This look like it was fixed in a later version 
under https://issues.apache.org/jira/browse/HDFS-13682.

 

Commit: 
[https://github.com/apache/hadoop/commit/32f867a6a907c05a312657139d295a92756d98ef#diff-69fcf6a48cb828203cb2b3073a035345L543-L549]

 

Unfortunately, the fix for the HTTP 400 error referenced above did not include 
a UT, so I can't reuse the UT provided with that commit.

 

It appears there is no existing unit test for KMSClientProvider class. 

> KMSClientProvider Sends HTTP GET with null "Content-Type" Header
> 
>
> Key: HADOOP-17140
> URL: https://issues.apache.org/jira/browse/HADOOP-17140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.3
>Reporter: Axton Grams
>Priority: Major
>
> Hive Server uses 'org.apache.hadoop.crypto.key.kms.KMSClientProvider' when 
> interacting with HDFS TDE zones. This triggers a call to the KMS server. If 
> the request method is a GET, the HTTP Header Content-Type is sent with a null 
> value.
> When using Ranger KMS, the embedded Tomcat server returns a HTTP 400 error 
> with the following error message:
> {quote}HTTP Status 400 - Bad Content-Type header value: ''
>  The request sent by the client was syntactically incorrect.
> {quote}
> This only occurs with HTTP GET method calls. 
> This is a captured HTTP request:
>  
> {code:java}
> GET /kms/v1/key/xxx/_metadata?doAs=yyy=yyy HTTP/1.1
> Cookie: 
> hadoop.auth="u=hive=hive/domain@domain.com=kerberos-dt=123789456=xxx="
> Content-Type:
> Cache-Control: no-cache
> Pragma: no-cache
> User-Agent: Java/1.8.0_241
> Host: kms.domain.com:9292
> Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
> Connection: keep-alive{code}
>  
> Note the empty 'Content-Type' header.
> And the corresponding response:
>  
> {code:java}
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1034
> Date: Thu, 16 Jul 2020 04:23:18 GMT
> Connection: close{code}
>  
> This is the stack trace from the Hive server:
>  
> {code:java}
> Caused by: java.io.IOException: HTTP status [400], message [Bad Request]
> at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:608)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:597)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:566)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:861)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.compareKeyStrength(Hadoop23Shims.java:1506)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.comparePathKeyStrength(Hadoop23Shims.java:1442)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.comparePathKeyStrength(SemanticAnalyzer.java:1990)
> ... 38 more{code}
>  
> This looks to occur in 
> [https://github.com/hortonworks/hadoop-release/blob/HDP-2.6.5.165-3-tag/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L591-L599]
> {code:java}
>   if (authRetryCount > 0) {
> String contentType = conn.getRequestProperty(CONTENT_TYPE);
> String requestMethod = conn.getRequestMethod();
> URL url = conn.getURL();
> conn = createConnection(url, requestMethod);
> conn.setRequestProperty(CONTENT_TYPE, contentType);
> return call(conn, jsonOutput, expectedResponse, klass,
> authRetryCount - 1);
>   }{code}
>  I think when a GET method is received, the Content-Type header is not 
> defined, then in line 592:
> {code:java}
>  String contentType = conn.getRequestProperty(CONTENT_TYPE);
> {code}
> The code attempts to retrieve the CONTENT_TYPE Request Property, which 
> returns null.
> Then in line 596:
> {code:java}
> conn.setRequestProperty(CONTENT_TYPE, contentType);
> {code}
> The null content type is used to construct the HTTP call to the KMS server.
> A null Content-Type header is not allowed/considered malformed by the 
> receiving KMS server.
> I propose this code be updated to inspect the value returned by 
> conn.getRequestProperty(CONTENT_TYPE), and not use a null value to construct 
> the new KMS connection.
> Proposed pseudo-patch:
> {code:java}
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
> +++ 
> 

[GitHub] [hadoop] steveloughran commented on pull request #2145: HADOOP-17133. Implement HttpServer2 metrics

2020-07-20 Thread GitBox


steveloughran commented on pull request #2145:
URL: https://github.com/apache/hadoop/pull/2145#issuecomment-661310541


   BTW, on the topic of metrics and HttpFS, look at #2069 . That's adding the 
ability of client-side code to collect stats exclusive for the specific 
operation (input stream, output stream, listLocatedStatus) with the integration 
with the core IO classes so that things pass through all the way. 
   
   Ultimate Goal: a hive/spark/impala job will be able to report an aggregate 
summary of the IO operations performed during a query, ideally even including 
throttling/retry events. Nowhere near that level of collection yet, but the 
public API should be ready for review. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-661237432


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  5s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 30s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  18m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   1m 28s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
in trunk has 2 extant findbugs warnings.  |
   | -0 :warning: |  patch  |   1m 27s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  20m 48s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 2046 unchanged - 1 
fixed = 2047 total (was 2047)  |
   | +1 :green_heart: |  compile  |  18m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  18m 23s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1940 unchanged - 1 
fixed = 1941 total (was 1941)  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 28 new 
+ 240 unchanged - 25 fixed = 268 total (was 265)  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 37s |  hadoop-common-project/hadoop-common 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   | -1 :x: |  findbugs  |   1m 42s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m  2s |  hadoop-common in the patch passed.  

[GitHub] [hadoop] steveloughran commented on a change in pull request #2145: HADOOP-17133. Implement HttpServer2 metrics

2020-07-20 Thread GitBox


steveloughran commented on a change in pull request #2145:
URL: https://github.com/apache/hadoop/pull/2145#discussion_r457662507



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2Metrics.java
##
@@ -0,0 +1,163 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.http;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.eclipse.jetty.server.handler.StatisticsHandler;

Review comment:
   should be in its own block





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17137) ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17137:

Affects Version/s: 3.3.0

> ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic
> -
>
> Key: HADOOP-17137
> URL: https://issues.apache.org/jira/browse/HADOOP-17137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Tess in ITestAbfsNetworkStatistics have asserts to a  static number of 
> network calls made from the start of fileystem instance creation. But this 
> number of calls are dependent on the certain configs settings which allow 
> creation of container or account is HNS enabled to avoid GetAcl call.
>  
> The tests need to be modified to ensure that count asserts are made for the 
> requests made by the tests alone.
>  
> {code:java}
> [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[INFO] 
> Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] Tests 
> run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.148 s <<< 
> FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] 
> testAbfsHttpResponseStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 4.148 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> get_responses expected:<8> but was:<7> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics(ITestAbfsNetworkStatistics.java:207)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> [ERROR] 
> testAbfsHttpSendStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 2.987 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> connections_made expected:<6> but was:<5> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpSendStatistics(ITestAbfsNetworkStatistics.java:91)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> 

[GitHub] [hadoop] belugabehr opened a new pull request #2157: HADOOP-17141: Add Capability To Get Text Length

2020-07-20 Thread GitBox


belugabehr opened a new pull request #2157:
URL: https://github.com/apache/hadoop/pull/2157


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17137) ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17137:

Component/s: test

> ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic
> -
>
> Key: HADOOP-17137
> URL: https://issues.apache.org/jira/browse/HADOOP-17137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Tess in ITestAbfsNetworkStatistics have asserts to a  static number of 
> network calls made from the start of fileystem instance creation. But this 
> number of calls are dependent on the certain configs settings which allow 
> creation of container or account is HNS enabled to avoid GetAcl call.
>  
> The tests need to be modified to ensure that count asserts are made for the 
> requests made by the tests alone.
>  
> {code:java}
> [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[INFO] 
> Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] Tests 
> run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.148 s <<< 
> FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] 
> testAbfsHttpResponseStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 4.148 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> get_responses expected:<8> but was:<7> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics(ITestAbfsNetworkStatistics.java:207)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> [ERROR] 
> testAbfsHttpSendStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 2.987 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> connections_made expected:<6> but was:<5> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpSendStatistics(ITestAbfsNetworkStatistics.java:91)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> 

[jira] [Updated] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17139:

Affects Version/s: 3.3.0
   3.2.1

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Priority: Major
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17137) ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic

2020-07-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161503#comment-17161503
 ] 

Steve Loughran commented on HADOOP-17137:
-

have a look at org.apache.hadoop.fs.s3a.S3ATestUtils.MetricDiff to see what we 
do there: wrap up recording the previous value with the actual assertion to run 
after. Works pretty well.

> ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic
> -
>
> Key: HADOOP-17137
> URL: https://issues.apache.org/jira/browse/HADOOP-17137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Tess in ITestAbfsNetworkStatistics have asserts to a  static number of 
> network calls made from the start of fileystem instance creation. But this 
> number of calls are dependent on the certain configs settings which allow 
> creation of container or account is HNS enabled to avoid GetAcl call.
>  
> The tests need to be modified to ensure that count asserts are made for the 
> requests made by the tests alone.
>  
> {code:java}
> [INFO] Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[INFO] 
> Running org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] Tests 
> run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.148 s <<< 
> FAILURE! - in org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] 
> testAbfsHttpResponseStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 4.148 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> get_responses expected:<8> but was:<7> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics(ITestAbfsNetworkStatistics.java:207)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> [ERROR] 
> testAbfsHttpSendStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 2.987 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> connections_made expected:<6> but was:<5> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpSendStatistics(ITestAbfsNetworkStatistics.java:91)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at 

[jira] [Updated] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17139:

Priority: Minor  (was: Major)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Priority: Minor
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17139:

Component/s: fs/s3

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Priority: Major
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2020-07-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161499#comment-17161499
 ] 

Steve Loughran commented on HADOOP-17139:
-

yeah, it was broken. Worked well for a source file, but missed the *small* 
detail that you needed to handle directories too.

Cut it out in an emergency & never got round to doing it as I didn't think it 
was that critical a path for work. 

the commented out impl uses the AWS transfer manager to do the upload. It 
splits the upload up into parts and uploads > 1 part in parallel, does 
multipart as needed. But we should have a test to verify that (if we can do a 
test which doesn't involve a 5GB file)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Priority: Major
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17141) Add Capability To Get Text Length

2020-07-20 Thread David Mollitor (Jira)
David Mollitor created HADOOP-17141:
---

 Summary: Add Capability To Get Text Length
 Key: HADOOP-17141
 URL: https://issues.apache.org/jira/browse/HADOOP-17141
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: David Mollitor
Assignee: David Mollitor






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-20 Thread GitBox


steveloughran commented on a change in pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#discussion_r457655642



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
##
@@ -285,6 +291,98 @@ public void testWithNullStreamStatistics() throws 
IOException {
 }
   }
 
+  /**
+   * Testing readAhead counters in AbfsInputStream with 30 seconds timeout.
+   */
+  @Test(timeout = TIMEOUT_30_SECONDS)
+  public void testReadAheadCounters() throws IOException {
+describe("Test to check correct values for readAhead counters in "
++ "AbfsInputStream");
+
+AzureBlobFileSystem fs = getFileSystem();
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+Path readAheadCountersPath = path(getMethodName());
+
+/*
+ * Setting the block size for readAhead as 4KB.
+ */
+abfss.getAbfsConfiguration().setReadBufferSize(CUSTOM_BLOCK_BUFFER_SIZE);
+
+AbfsOutputStream out = null;
+AbfsInputStream in = null;
+
+try {
+
+  /*
+   * Creating a file of 1MB size.
+   */
+  out = createAbfsOutputStreamWithFlushEnabled(fs, readAheadCountersPath);
+  out.write(defBuffer);
+  out.close();
+
+  in = abfss.openFileForRead(readAheadCountersPath, fs.getFsStatistics());
+
+  /*
+   * Reading 1KB after each i * KB positions. Hence the reads are from 0
+   * to 1KB, 1KB to 2KB, and so on.. for 5 operations.
+   */
+  for (int i = 0; i < 5; i++) {
+in.seek(ONE_KB * i);
+in.read(defBuffer, ONE_KB * i, ONE_KB);
+  }
+  AbfsInputStreamStatisticsImpl stats =
+  (AbfsInputStreamStatisticsImpl) in.getStreamStatistics();
+
+  /*
+   * Since, readAhead is done in background threads. Sometimes, the
+   * threads aren't finished in the background and could result in
+   * inaccurate results. So, we wait till we have the accurate values
+   * with a limit of 30 seconds as that's when the test times out.
+   *
+   */
+  while (stats.getRemoteBytesRead() < CUSTOM_READ_AHEAD_BUFFER_SIZE
+  || stats.getReadAheadBytesRead() < CUSTOM_BLOCK_BUFFER_SIZE) {
+Thread.sleep(THREAD_SLEEP_10_SECONDS);
+  }
+
+  /*
+   * Verifying the counter values of readAheadBytesRead and 
remoteBytesRead.
+   *
+   * readAheadBytesRead : Since, we read 1KBs 5 times, that means we go
+   * from 0 to 5KB in the file. The bufferSize is set to 4KB, and since
+   * we have 8 blocks of readAhead buffer. We would have 8 blocks of 4KB
+   * buffer. Our read is till 5KB, hence readAhead would ideally read 2
+   * blocks of 4KB which is equal to 8KB. But, sometimes to get more than
+   * one block from readAhead buffer we might have to wait for background
+   * threads to fill the buffer and hence we might do remote read which
+   * would be faster. Therefore, readAheadBytesRead would be equal to or
+   * greater than 4KB.
+   *
+   * remoteBytesRead : Since, the bufferSize is set to 4KB and the number
+   * of blocks or readAheadQueueDepth is equal to 8. We would read 8 * 4
+   * KB buffer on the first read, which is equal to 32KB. But, if we are 
not
+   * able to read some bytes that were in the buffer after doing
+   * readAhead, we might use remote read again. Thus, the bytes read
+   * remotely could also be greater than 32Kb.
+   *
+   */
+  assertTrue(String.format("actual value of %d is not greater than or "

Review comment:
   Try using AssertJ.assertThat here, it lets you declare the specific 
"isGreaterThan" assertion; it's describedAs() does the string formatting too. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-20 Thread GitBox


steveloughran commented on a change in pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#discussion_r457655267



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
##
@@ -285,6 +291,98 @@ public void testWithNullStreamStatistics() throws 
IOException {
 }
   }
 
+  /**
+   * Testing readAhead counters in AbfsInputStream with 30 seconds timeout.
+   */
+  @Test(timeout = TIMEOUT_30_SECONDS)
+  public void testReadAheadCounters() throws IOException {
+describe("Test to check correct values for readAhead counters in "
++ "AbfsInputStream");
+
+AzureBlobFileSystem fs = getFileSystem();
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+Path readAheadCountersPath = path(getMethodName());
+
+/*
+ * Setting the block size for readAhead as 4KB.
+ */
+abfss.getAbfsConfiguration().setReadBufferSize(CUSTOM_BLOCK_BUFFER_SIZE);
+
+AbfsOutputStream out = null;
+AbfsInputStream in = null;
+
+try {
+
+  /*
+   * Creating a file of 1MB size.
+   */
+  out = createAbfsOutputStreamWithFlushEnabled(fs, readAheadCountersPath);
+  out.write(defBuffer);
+  out.close();
+
+  in = abfss.openFileForRead(readAheadCountersPath, fs.getFsStatistics());
+
+  /*
+   * Reading 1KB after each i * KB positions. Hence the reads are from 0
+   * to 1KB, 1KB to 2KB, and so on.. for 5 operations.
+   */
+  for (int i = 0; i < 5; i++) {
+in.seek(ONE_KB * i);
+in.read(defBuffer, ONE_KB * i, ONE_KB);
+  }
+  AbfsInputStreamStatisticsImpl stats =
+  (AbfsInputStreamStatisticsImpl) in.getStreamStatistics();
+
+  /*
+   * Since, readAhead is done in background threads. Sometimes, the
+   * threads aren't finished in the background and could result in
+   * inaccurate results. So, we wait till we have the accurate values
+   * with a limit of 30 seconds as that's when the test times out.
+   *
+   */
+  while (stats.getRemoteBytesRead() < CUSTOM_READ_AHEAD_BUFFER_SIZE
+  || stats.getReadAheadBytesRead() < CUSTOM_BLOCK_BUFFER_SIZE) {
+Thread.sleep(THREAD_SLEEP_10_SECONDS);
+  }
+
+  /*
+   * Verifying the counter values of readAheadBytesRead and 
remoteBytesRead.

Review comment:
   nice explanation





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161462#comment-17161462
 ] 

Hadoop QA commented on HADOOP-12549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
16s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
14s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 
extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 46s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/17055/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-12549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008031/HADOOP-12549.002.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 27cbe7744136 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6cbd8854ee5 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| findbugs | 

[GitHub] [hadoop] mukund-thakur commented on pull request #2148: HADOOP-17131 Moving listing to use operation callback

2020-07-20 Thread GitBox


mukund-thakur commented on pull request #2148:
URL: https://github.com/apache/hadoop/pull/2148#issuecomment-661245111


   Tested raw and guarded config in ap-south-1 bucket. All good. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-661239583


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 43s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 38s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  18m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   1m 21s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
in trunk has 2 extant findbugs warnings.  |
   | -0 :warning: |  patch  |   1m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  20m 46s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 2056 unchanged - 1 
fixed = 2057 total (was 2057)  |
   | +1 :green_heart: |  compile  |  18m 33s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  18m 33s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1949 unchanged - 1 
fixed = 1950 total (was 1950)  |
   | -0 :warning: |  checkstyle  |   2m 48s |  root: The patch generated 28 new 
+ 240 unchanged - 25 fixed = 268 total (was 265)  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 45s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 36s |  hadoop-common-project/hadoop-common 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   | -1 :x: |  findbugs  |   1m 36s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 47s |  hadoop-common in the patch passed.  |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-661237432


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 52s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
34 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  5s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 30s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 36s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  18m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 25s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   1m 28s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
in trunk has 2 extant findbugs warnings.  |
   | -0 :warning: |  patch  |   1m 27s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  20m 48s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 2046 unchanged - 1 
fixed = 2047 total (was 2047)  |
   | +1 :green_heart: |  compile  |  18m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  18m 23s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1940 unchanged - 1 
fixed = 1941 total (was 1941)  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 28 new 
+ 240 unchanged - 25 fixed = 268 total (was 265)  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 37s |  hadoop-common-project/hadoop-common 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   | -1 :x: |  findbugs  |   1m 42s |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m  2s |  hadoop-common in the patch passed.  |
   | 

[GitHub] [hadoop] iwasakims commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-20 Thread GitBox


iwasakims commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-661236852


   The patch cleared all spotbugs warnings on my local.
   
   ```
   $ mvn clean install findbugs:findbugs -DskipTests -DskipShade
   $ find . -name findbugsXml.xml | xargs -n 1 
/opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17140) KMSClientProvider Sends HTTP GET with null "Content-Type" Header

2020-07-20 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161424#comment-17161424
 ] 

Hemanth Boyina commented on HADOOP-17140:
-

thanks [~agrams] for the report , can you provide a patch with a UT 

> KMSClientProvider Sends HTTP GET with null "Content-Type" Header
> 
>
> Key: HADOOP-17140
> URL: https://issues.apache.org/jira/browse/HADOOP-17140
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.3
>Reporter: Axton Grams
>Priority: Major
>
> Hive Server uses 'org.apache.hadoop.crypto.key.kms.KMSClientProvider' when 
> interacting with HDFS TDE zones. This triggers a call to the KMS server. If 
> the request method is a GET, the HTTP Header Content-Type is sent with a null 
> value.
> When using Ranger KMS, the embedded Tomcat server returns a HTTP 400 error 
> with the following error message:
> {quote}HTTP Status 400 - Bad Content-Type header value: ''
>  The request sent by the client was syntactically incorrect.
> {quote}
> This only occurs with HTTP GET method calls. 
> This is a captured HTTP request:
>  
> {code:java}
> GET /kms/v1/key/xxx/_metadata?doAs=yyy=yyy HTTP/1.1
> Cookie: 
> hadoop.auth="u=hive=hive/domain@domain.com=kerberos-dt=123789456=xxx="
> Content-Type:
> Cache-Control: no-cache
> Pragma: no-cache
> User-Agent: Java/1.8.0_241
> Host: kms.domain.com:9292
> Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
> Connection: keep-alive{code}
>  
> Note the empty 'Content-Type' header.
> And the corresponding response:
>  
> {code:java}
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1034
> Date: Thu, 16 Jul 2020 04:23:18 GMT
> Connection: close{code}
>  
> This is the stack trace from the Hive server:
>  
> {code:java}
> Caused by: java.io.IOException: HTTP status [400], message [Bad Request]
> at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:608)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:597)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:566)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:861)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.compareKeyStrength(Hadoop23Shims.java:1506)
> at 
> org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.comparePathKeyStrength(Hadoop23Shims.java:1442)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.comparePathKeyStrength(SemanticAnalyzer.java:1990)
> ... 38 more{code}
>  
> This looks to occur in 
> [https://github.com/hortonworks/hadoop-release/blob/HDP-2.6.5.165-3-tag/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L591-L599]
> {code:java}
>   if (authRetryCount > 0) {
> String contentType = conn.getRequestProperty(CONTENT_TYPE);
> String requestMethod = conn.getRequestMethod();
> URL url = conn.getURL();
> conn = createConnection(url, requestMethod);
> conn.setRequestProperty(CONTENT_TYPE, contentType);
> return call(conn, jsonOutput, expectedResponse, klass,
> authRetryCount - 1);
>   }{code}
>  I think when a GET method is received, the Content-Type header is not 
> defined, then in line 592:
> {code:java}
>  String contentType = conn.getRequestProperty(CONTENT_TYPE);
> {code}
> The code attempts to retrieve the CONTENT_TYPE Request Property, which 
> returns null.
> Then in line 596:
> {code:java}
> conn.setRequestProperty(CONTENT_TYPE, contentType);
> {code}
> The null content type is used to construct the HTTP call to the KMS server.
> A null Content-Type header is not allowed/considered malformed by the 
> receiving KMS server.
> I propose this code be updated to inspect the value returned by 
> conn.getRequestProperty(CONTENT_TYPE), and not use a null value to construct 
> the new KMS connection.
> Proposed pseudo-patch:
> {code:java}
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
> @@ -593,7 +593,9 @@ public HttpURLConnection run() throws Exception {
>  String requestMethod = conn.getRequestMethod();
>  URL url = conn.getURL();
>  conn = createConnection(url, requestMethod);
> -conn.setRequestProperty(CONTENT_TYPE, contentType);
> +if (conn.getRequestProperty(CONTENT_TYPE) != null) {
> + 

[GitHub] [hadoop] szetszwo opened a new pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-20 Thread GitBox


szetszwo opened a new pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156


   Please see https://issues.apache.org/jira/browse/HDFS-15479



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#issuecomment-661226453


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 54s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 21s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  99m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2154 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5ff1c5366f44 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6cbd8854ee5 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/1/testReport/ |
   | Max. process+thread count | 315 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please 

[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161415#comment-17161415
 ] 

Hudson commented on HADOOP-17119:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18455 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18455/])
HADOOP-17119. Jetty upgrade to 9.4.x causes MR app fail with (ayushsaxena: rev 
f2033de2342d20d5f540775dfe4848d452c68957)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java


> Jetty upgrade to 9.4.x causes MR app fail with IOException
> --
>
> Key: HADOOP-17119
> URL: https://issues.apache.org/jira/browse/HADOOP-17119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch
>
>
> I think we should catch IOException here instead of BindException in 
> HttpServer2#bindForPortRange
> {code:java}
>  for(Integer port : portRanges) {
>   if (port == startPort) {
> continue;
>   }
>   Thread.sleep(100);
>   listener.setPort(port);
>   try {
> bindListener(listener);
> return;
>   } catch (BindException ex) {
> // Ignore exception. Move to next port.
> ioException = ex;
>   }
> }
> {code}
> Stacktrace:
> {code:java}
>  HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142
> java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
>   at 
> org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
>   at 
> org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440)
>   at 
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
>   ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17138:
--
Status: Patch Available  (was: Open)

> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims opened a new pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-20 Thread GitBox


iwasakims opened a new pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155


   Please refer to comments of 
[HADOOP-17138](https://issues.apache.org/jira/browse/HADOOP-17138) for the 
description.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161402#comment-17161402
 ] 

Masatake Iwasaki commented on HADOOP-17138:
---

{noformat}
M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
{noformat}

The {{onSuccess}} in ThrottledAsyncChecker accepts null and set it as valid 
value of {{LastCheckResult.result}}.


> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException

2020-07-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-17119:
--
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Jetty upgrade to 9.4.x causes MR app fail with IOException
> --
>
> Key: HADOOP-17119
> URL: https://issues.apache.org/jira/browse/HADOOP-17119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch
>
>
> I think we should catch IOException here instead of BindException in 
> HttpServer2#bindForPortRange
> {code:java}
>  for(Integer port : portRanges) {
>   if (port == startPort) {
> continue;
>   }
>   Thread.sleep(100);
>   listener.setPort(port);
>   try {
> bindListener(listener);
> return;
>   } catch (BindException ex) {
> // Ignore exception. Move to next port.
> ioException = ex;
>   }
> }
> {code}
> Stacktrace:
> {code:java}
>  HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142
> java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
>   at 
> org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
>   at 
> org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440)
>   at 
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
>   ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17119) Jetty upgrade to 9.4.x causes MR app fail with IOException

2020-07-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161379#comment-17161379
 ] 

Ayush Saxena commented on HADOOP-17119:
---

Committed to trunk and branch-3.3
Thanx [~BilwaST] for the contribution, [~weichiu] and [~surendralilhore] for 
the reviews!!!

> Jetty upgrade to 9.4.x causes MR app fail with IOException
> --
>
> Key: HADOOP-17119
> URL: https://issues.apache.org/jira/browse/HADOOP-17119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-17119.001.patch, HADOOP-17119.002.patch
>
>
> I think we should catch IOException here instead of BindException in 
> HttpServer2#bindForPortRange
> {code:java}
>  for(Integer port : portRanges) {
>   if (port == startPort) {
> continue;
>   }
>   Thread.sleep(100);
>   listener.setPort(port);
>   try {
> bindListener(listener);
> return;
>   } catch (BindException ex) {
> // Ignore exception. Move to next port.
> ioException = ex;
>   }
> }
> {code}
> Stacktrace:
> {code:java}
>  HttpServer.start() threw a non Bind IOException | HttpServer2.java:1142
> java.io.IOException: Failed to bind to x/xxx.xx.xx.xx:27101
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
>   at 
> org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
>   at 
> org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1190)
>   at 
> org.apache.hadoop.http.HttpServer2.bindForPortRange(HttpServer2.java:1258)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1282)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1139)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:451)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:440)
>   at 
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:148)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1378)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$7.run(MRAppMaster.java:1998)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1994)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1890)
> Caused by: java.net.BindException: Address already in use
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85)
>   at 
> org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
>   ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-12549:
--
Attachment: HADOOP-12549.002.patch

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-12549:
--
Attachment: (was: HADOOP-12549.002.patch)

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161357#comment-17161357
 ] 

Hadoop QA commented on HADOOP-12549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HADOOP-12549 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008028/HADOOP-12549.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/17054/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161356#comment-17161356
 ] 

Hudson commented on HADOOP-17136:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18453 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18453/])
HADOOP-17136. ITestS3ADirectoryPerformance.testListOperations failing (github: 
rev bb459d4dd607d3e4d259e3c8cc47b93062d78e4d)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADirectoryPerformance.java


> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-07-20 Thread GitBox


steveloughran commented on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-661144333


   1. S3A DT will have exactly one list it builds up, the s3 one.
   2. When anything asks for a new set of AWS creds via  
AWSCredentialProviderList shareCredentials, with no dt: return the s3 list. 
   3. with DT: ask all the way through each DT plugin
   3. And aggregate these for the final chain
   
   Every DT should, by default, provide its initial set of AWS credential 
providers to all authentication clients. It is only when a client knows that it 
wants to provide a different (possibly empty) list of credential providers. 
that there's any need to differ.
   
   +define the "official" names of credential chains needed.Proposed: by AWS 
service rather than use made in app.
   
   troublespot`shareCredentials()` uses reference counting to decide when to go 
through the provider list and close them. This matters for providers which use 
close() to release resouces like threads
   
   * we MUST call close on all providers when they are no longer needed 
anywhere,
   * we MUST NOT close() a provider which may be needed elsewhere
   
   If we have different AWSProviderList lists with different sharing, we can't 
use a simple reference count for the list any more.
   
   If a new provider list is generated, then all providers in it MUST be new 
*or* we do some complicated and hard to test/maintain tricks to reference count 
providers in there.
   
   If we go with "needs new providers" then we will need some two pass 
collection for non-default lists, something like
   
   1. ask all entries if they wish to provide a new provider list
   1. if all say no: return old one
   2. If any one says yes: ask every provider to generate a new provider with 
its own lifecycle. It may just be the empty list for an unsupported service
   
   




This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17136:

Priority: Minor  (was: Major)

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17136:

Affects Version/s: (was: 3.1.3)
   3.4.0

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.1.4
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17136.
-
Resolution: Fixed

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17136:

Component/s: test

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17136:

Fix Version/s: (was: 3.1.4)
   3.4.0

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #2153: HADOOP-17136 ITestS3ADirectoryPerformance.testListOperations failing because of HADOOP-17022

2020-07-20 Thread GitBox


steveloughran merged pull request #2153:
URL: https://github.com/apache/hadoop/pull/2153


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2153: HADOOP-17136 ITestS3ADirectoryPerformance.testListOperations failing because of HADOOP-17022

2020-07-20 Thread GitBox


steveloughran commented on pull request #2153:
URL: https://github.com/apache/hadoop/pull/2153#issuecomment-661127926


   LGTM
   
   +1, merging



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161341#comment-17161341
 ] 

Chao Sun commented on HADOOP-12549:
---

Attaching patch v2 on [~qwertymaniac]'s behalf.

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-12549:
--
Attachment: HADOOP-12549.002.patch

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet opened a new pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-20 Thread GitBox


mehakmeet opened a new pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154


   tested by: mvn -T 1C -Dparallel-tests=abfs clean verify
   Region: East US, West US
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161339#comment-17161339
 ] 

Hadoop QA commented on HADOOP-12549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-12549 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770469/HADOOP-12549.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/17053/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.



> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161330#comment-17161330
 ] 

Masatake Iwasaki commented on HADOOP-17138:
---

DLS_DEAD_LOCAL_STORE around autoboxed Integer increment/decrement should be 
false positive.
 [https://github.com/spotbugs/spotbugs/issues/571]

NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION and 
NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE are related to the 
{{@Nullable}} annotation of the argument of 
[FutureCallback#onSuccess|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/package-info.java#L29]
 of guava 27.0. It should mean overriding {{@ParametersAreNonnullByDefault}} 
[used in the 
package|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/package-info.java#L29].

> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-20 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161330#comment-17161330
 ] 

Masatake Iwasaki edited comment on HADOOP-17138 at 7/20/20, 3:35 PM:
-

DLS_DEAD_LOCAL_STORE around autoboxing Integer increment/decrement should be 
false positive.
https://github.com/spotbugs/spotbugs/issues/571

NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION and 
NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE are related to the 
{{@Nullable}} annotation of the argument of 
[FutureCallback#onSuccess|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/FutureCallback.java#L34]
 of guava 27.0. It should mean overriding {{@ParametersAreNonnullByDefault}} 
[used in the 
package|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/package-info.java#L29].


was (Author: iwasakims):
DLS_DEAD_LOCAL_STORE around autoboxed Integer increment/decrement should be 
false positive.
 [https://github.com/spotbugs/spotbugs/issues/571]

NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION and 
NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE are related to the 
{{@Nullable}} annotation of the argument of 
[FutureCallback#onSuccess|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/package-info.java#L29]
 of guava 27.0. It should mean overriding {{@ParametersAreNonnullByDefault}} 
[used in the 
package|https://github.com/google/guava/blob/v27.0/guava/src/com/google/common/util/concurrent/package-info.java#L29].

> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161325#comment-17161325
 ] 

Chao Sun commented on HADOOP-12549:
---

cc [~weichiu], [~elgoiri], [~hexiaoqiao] to see if you have any objection to 
this.

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-20 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161323#comment-17161323
 ] 

Chao Sun commented on HADOOP-12549:
---

+1. We recently encountered an issue related to this. I think it is much 
resilient to have this rather than having to rely on {{hdfs-default.xml}} which 
often not get loaded. 

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all form of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17140) KMSClientProvider Sends HTTP GET with null "Content-Type" Header

2020-07-20 Thread Axton Grams (Jira)
Axton Grams created HADOOP-17140:


 Summary: KMSClientProvider Sends HTTP GET with null "Content-Type" 
Header
 Key: HADOOP-17140
 URL: https://issues.apache.org/jira/browse/HADOOP-17140
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.7.3
Reporter: Axton Grams


Hive Server uses 'org.apache.hadoop.crypto.key.kms.KMSClientProvider' when 
interacting with HDFS TDE zones. This triggers a call to the KMS server. If the 
request method is a GET, the HTTP Header Content-Type is sent with a null value.

When using Ranger KMS, the embedded Tomcat server returns a HTTP 400 error with 
the following error message:
{quote}HTTP Status 400 - Bad Content-Type header value: ''
 The request sent by the client was syntactically incorrect.
{quote}
This only occurs with HTTP GET method calls. 

This is a captured HTTP request:

 
{code:java}
GET /kms/v1/key/xxx/_metadata?doAs=yyy=yyy HTTP/1.1
Cookie: 
hadoop.auth="u=hive=hive/domain@domain.com=kerberos-dt=123789456=xxx="
Content-Type:
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Java/1.8.0_241
Host: kms.domain.com:9292
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive{code}
 

Note the empty 'Content-Type' header.

And the corresponding response:

 
{code:java}
HTTP/1.1 400 Bad Request
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1034
Date: Thu, 16 Jul 2020 04:23:18 GMT
Connection: close{code}
 

This is the stack trace from the Hive server:

 
{code:java}
Caused by: java.io.IOException: HTTP status [400], message [Bad Request]
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:608)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:597)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:566)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:861)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.compareKeyStrength(Hadoop23Shims.java:1506)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.comparePathKeyStrength(Hadoop23Shims.java:1442)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.comparePathKeyStrength(SemanticAnalyzer.java:1990)
... 38 more{code}
 

This looks to occur in 
[https://github.com/hortonworks/hadoop-release/blob/HDP-2.6.5.165-3-tag/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L591-L599]
{code:java}
  if (authRetryCount > 0) {
String contentType = conn.getRequestProperty(CONTENT_TYPE);
String requestMethod = conn.getRequestMethod();
URL url = conn.getURL();
conn = createConnection(url, requestMethod);
conn.setRequestProperty(CONTENT_TYPE, contentType);
return call(conn, jsonOutput, expectedResponse, klass,
authRetryCount - 1);
  }{code}
 I think when a GET method is received, the Content-Type header is not defined, 
then in line 592:
{code:java}
 String contentType = conn.getRequestProperty(CONTENT_TYPE);
{code}
The code attempts to retrieve the CONTENT_TYPE Request Property, which returns 
null.

Then in line 596:
{code:java}
conn.setRequestProperty(CONTENT_TYPE, contentType);
{code}
The null content type is used to construct the HTTP call to the KMS server.

A null Content-Type header is not allowed/considered malformed by the receiving 
KMS server.

I propose this code be updated to inspect the value returned by 
conn.getRequestProperty(CONTENT_TYPE), and not use a null value to construct 
the new KMS connection.

Proposed pseudo-patch:
{code:java}
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -593,7 +593,9 @@ public HttpURLConnection run() throws Exception {
 String requestMethod = conn.getRequestMethod();
 URL url = conn.getURL();
 conn = createConnection(url, requestMethod);
-conn.setRequestProperty(CONTENT_TYPE, contentType);
+if (conn.getRequestProperty(CONTENT_TYPE) != null) {
+  conn.setRequestProperty(CONTENT_TYPE, contentType);
+}
 return call(conn, jsonOutput, expectedResponse, klass,
 authRetryCount - 1);
   }{code}
This should not impact any other use of this class and should only address 
cases where a null is returned for Content-Type.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2148: HADOOP-17131 Moving listing to use operation callback

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2148:
URL: https://github.com/apache/hadoop/pull/2148#issuecomment-661094908


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  docker  |   9m 51s |  Docker failed to build 
yetus/hadoop:cce5a6f6094.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2148 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2148/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


steveloughran commented on a change in pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#discussion_r457442637



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/IOStatisticsSupport.java
##
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Support for working with IOStatistics.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public final class IOStatisticsSupport {
+
+  private IOStatisticsSupport() {
+  }
+
+  /**
+   * Take a snapshot of the current statistics state.
+   * 
+   * This is not an atomic option.
+   * 
+   * The instance can be serialized, and its
+   * {@code toString()} method lists all the values.
+   * @param statistics statistics
+   * @return a snapshot of the current values.
+   */
+  public static IOStatisticsSnapshot
+  snapshotIOStatistics(IOStatistics statistics) {
+
+IOStatisticsSnapshot stats = new IOStatisticsSnapshot(statistics);
+stats.snapshot(statistics);

Review comment:
   you are right. cut this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


hadoop-yetus removed a comment on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-657806727


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  25m 18s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
33 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  3s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 44s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  16m 35s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 41s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  20m 41s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 1 new + 2046 unchanged - 1 
fixed = 2047 total (was 2047)  |
   | +1 :green_heart: |  compile  |  18m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  18m  2s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 1 new + 1940 unchanged - 1 
fixed = 1941 total (was 1941)  |
   | -0 :warning: |  checkstyle  |   2m 54s |  root: The patch generated 20 new 
+ 224 unchanged - 24 fixed = 244 total (was 248)  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 14 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  13m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with 
JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 0 new + 0 unchanged 
- 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   2m 28s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 48s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 31s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 200m 59s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.MeanStatistic.sum; locked 50% of time  
Unsynchronized access at MeanStatistic.java:50% of time  Unsynchronized access 
at MeanStatistic.java:[line 202] |

[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-07-20 Thread GitBox


steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-661077889


   The latest update records the min/max and mean times to initiate the 
(long-haul) http get request
   
   ```
   2020-07-17 17:56:30,433 [JUnit] INFO  scale.ITestS3AInputStreamPerformance 
(ITestS3AInputStreamPerformance.java:dumpIOStatistics(135)) -
   Aggregate Stream Statistics counters=((stream_aborted=2) 
(stream_read_bytes=47870126) (stream_read_bytes_backwards_on_seek=12713984) 
   (stream_read_bytes_discarded_in_abort=43889622) 
(stream_read_bytes_read_in_close=252395) 
(stream_read_bytes_skipped_on_seek=55054163)
   (stream_read_close_operations=0) (stream_read_closed=12) 
(stream_read_exceptions=0) (stream_read_fully_operations=8)
   (stream_read_opened=14) (stream_read_operations=3415) 
(stream_read_operations_incomplete=3362)
   (stream_read_seek_backward_operations=4) 
(stream_read_seek_bytes_read=45092691) 
   (stream_read_seek_forward_operations=175) (stream_read_seek_operations=179) 
   (stream_read_seek_policy_changed=8) (stream_read_total_bytes=93215212) 
   (stream_read_version_mismatches=0)); 
gauges=((stream_read_gauge_input_policy=6)); 
   minimums=((op_http_get_request.min=29));
   maximums=((op_http_get_request.max=753));
   means=((op_http_get_request.mean=MeanStatistic{sum=2420, samples=14, 
mean=172.85714285714286})); 
   ```
   
   S3A also collects it for listings, and pass that all the way back through 
LocationStatusFetcher
   
   ```
   2020-07-20 14:48:40,563 [JUnit-testLocatedFileStatusFourThreads[raw]] INFO  
s3a.ITestLocatedFileStatusFetcher 
(ITestLocatedFileStatusFetcher.java:assertListCount(184))
   - Statistics of fetcher: counters=((op_http_list_request=4)); gauges=(); 
   minimums=((op_http_list_request.min=29));
   maximums=((op_http_list_request.max=114));
   means=((op_http_list_request.mean=sum=274, samples=4, mean=68.50)); 
   ```
   
   also goes through LineReader and the codec in/out streams.
   
   This means that applications using the MR classes can now ask for FS 
performance values.
   
   1. o.a.h.fs.statistics API is ready for review; this is the bit we need to 
keep stable.
   2. o.a.h.fs.statistics.impl is also up for looking at. This is where we can 
be agile about change, but we should still look for obvious issues.
   3. S3A stats migration is complete. It's a big part of this patch, but can 
be reviewed independently. It's just driven the work, especially the .impl 
package
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #2153: HADOOP-17136 ITestS3ADirectoryPerformance.testListOperations failing because of HADOOP-17022

2020-07-20 Thread GitBox


mukund-thakur commented on pull request #2153:
URL: https://github.com/apache/hadoop/pull/2153#issuecomment-661029958


   CC @steveloughran  Please take a look whenever you are free. Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2153: HADOOP-17136 ITestS3ADirectoryPerformance.testListOperations failing because of HADOOP-17022

2020-07-20 Thread GitBox


hadoop-yetus commented on pull request #2153:
URL: https://github.com/apache/hadoop/pull/2153#issuecomment-660971078


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 29s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2153/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2153 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7596e5bbb06c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9f407bcc88a |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2153/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2153/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2153/1/testReport/ |
   | Max. process+thread count | 455 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2153/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub 

[jira] [Commented] (HADOOP-17107) hadoop-azure parallel tests not working on recent JDKs

2020-07-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161156#comment-17161156
 ] 

Hudson commented on HADOOP-17107:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #18452 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18452/])
HADOOP-17107. hadoop-azure parallel tests not working on recent JDKs (github: 
rev 9f407bcc88a315dd72ba4c2e9935f3a94d2e0174)
* (edit) hadoop-tools/hadoop-azure/pom.xml


> hadoop-azure parallel tests not working on recent JDKs
> --
>
> Key: HADOOP-17107
> URL: https://issues.apache.org/jira/browse/HADOOP-17107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, fs/azure
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> recent JDKs are failing to run the wasb or abfs parallel test runs -unable to 
> instantiate the javascript engine.
> Maybe it's been cut from the JVM or the ant script task can't bind to it.
> Fix is as HADOOP-14696 -use our own plugin to set up the test dirs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-20 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17138:
--
Description: 
Spotbugs 4.0.6 generated additional warnings.
{noformat}
$ find . -name findbugsXml.xml | xargs -n 1 
/opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  At 
Server.java:[line 3729]
M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  At 
Server.java:[line 3717]
H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
 overrides the nullness annotation of parameter $L1 in an incompatible way  At 
DatasetVolumeChecker.java:[line 322]
H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
 overrides the nullness annotation of parameter result in an incompatible way  
At DatasetVolumeChecker.java:[lines 358-376]
M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
 EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
380-397]
M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
291-309]
M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
SLSRunner.java:[line 816]
H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class  At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
  At TestTimelineReaderHBaseDown.java:[line 190]
M V EI_EXPOSE_REP EI: 
org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
CosNInputStream.java:[line 87]
{noformat}

  was:
Spotbugs 4.0.6 generated additional warnings.
{noformat}
$ mvn clean install findbugs:findbugs -DskipTests -DskipShade
$ find . -name findbugsXml.xml | xargs -n 1 
/opt/spotbugs-4.0.6/bin/convertXmlToText 
M D DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  At 
Server.java:[line 3729]
M D DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  At 
Server.java:[line 3717]
H D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
 overrides the nullness annotation of parameter $L1 in an incompatible way  At 
DatasetVolumeChecker.java:[line 322]
H D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
 overrides the nullness annotation of parameter result in an incompatible way  
At DatasetVolumeChecker.java:[lines 358-376]
M D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
M D DLS: Dead store to $L8 in 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
 EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
M D NP: result must be non-null but is marked as nullable  At 
LocatedFileStatusFetcher.java:[lines 380-397]
M D NP: result must be non-null but is marked as nullable  At 
LocatedFileStatusFetcher.java:[lines 291-309]
M D DLS: Dead store to $L6 in 
org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
SLSRunner.java:[line 816]
H C UMAC: Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class  At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
M D DLS: Dead store to entities in 

[jira] [Updated] (HADOOP-16682) Remove unnecessary ABFS toString() invocations

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16682:

Summary: Remove unnecessary ABFS toString() invocations  (was: Remove 
unnecessary toString() invocations)

> Remove unnecessary ABFS toString() invocations
> --
>
> Key: HADOOP-16682
> URL: https://issues.apache.org/jira/browse/HADOOP-16682
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Jeetesh Mangwani
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.1
>
>
> Remove unnecessary toString() invocations from the hadoop-azure module
> For example:
> permission.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L386
> path.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L795



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2118: HADOOP-17107. hadoop-azure parallel tests not working on recent JDKs

2020-07-20 Thread GitBox


steveloughran commented on pull request #2118:
URL: https://github.com/apache/hadoop/pull/2118#issuecomment-660926974


   thanks. will backport to 3.3; if people need it for 3.2 then ping me after 
doing a test run locally



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16682) Remove unnecessary toString() invocations

2020-07-20 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161092#comment-17161092
 ] 

Steve Loughran commented on HADOOP-16682:
-

CP'd to branch-3.3

Remember to set fix version when closing a JIRA so the release notes are 
correctly generated, ideally affects versions and component too. thx

> Remove unnecessary toString() invocations
> -
>
> Key: HADOOP-16682
> URL: https://issues.apache.org/jira/browse/HADOOP-16682
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Jeetesh Mangwani
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 3.3.1
>
>
> Remove unnecessary toString() invocations from the hadoop-azure module
> For example:
> permission.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L386
> path.toString() in the line here: 
> https://github.com/apache/hadoop/blob/04a6c095cf6d09b6ad417f1f7b7c64fbfdc9d5e4/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L795



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



  1   2   >