[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2022-05-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=765218&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-765218
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 02/May/22 23:49
Start Date: 02/May/22 23:49
Worklog Time Spent: 10m 
  Work Description: raymondlam12 commented on code in PR #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r863255244


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -336,95 +347,141 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOException
*
* @throws IOException if an error occurs.
*/
-  public void processResponse(final byte[] buffer, final int offset, final int length) throws IOException {
+  public void processResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
 
 // get the response
 long startTime = 0;
-if (this.isTraceEnabled) {
+if (isTraceEnabled) {
   startTime = System.nanoTime();
 }
 
-this.statusCode = this.connection.getResponseCode();
+statusCode = connection.getResponseCode();
 
-if (this.isTraceEnabled) {
-  this.recvResponseTimeMs = elapsedTimeMs(startTime);
+if (isTraceEnabled) {
+  recvResponseTimeMs = elapsedTimeMs(startTime);
 }
 
-this.statusDescription = this.connection.getResponseMessage();
+statusDescription = connection.getResponseMessage();
 
-this.requestId = this.connection.getHeaderField(HttpHeaderConfigurations.X_MS_REQUEST_ID);
-if (this.requestId == null) {
-  this.requestId = AbfsHttpConstants.EMPTY_STRING;
+requestId = connection.getHeaderField(HttpHeaderConfigurations.X_MS_REQUEST_ID);
+if (requestId == null) {
+  requestId = AbfsHttpConstants.EMPTY_STRING;
 }
 // dump the headers
 AbfsIoUtils.dumpHeadersToDebugLog("Response Headers",
 connection.getHeaderFields());
 
-if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(this.method)) {
+if (AbfsHttpConstants.HTTP_METHOD_HEAD.equals(method)) {
   // If it is HEAD, and it is ERROR
   return;
 }
 
-if (this.isTraceEnabled) {
+if (isTraceEnabled) {
   startTime = System.nanoTime();
 }
 
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
+  if (isTraceEnabled) {
+recvResponseTimeMs += elapsedTimeMs(startTime);
+  }
+  bytesReceived = totalBytesRead;
+}
+  }
+
+  /**
+   * Detects if the Http response indicates an error or success response.
+   * Parses the response and returns the number of bytes read from the
+   * response.
+   *
+   * @param buffer a buffer to hold the response entity body.
+   * @param offset an offset in the buffer where the data will begin.
+   * @param length the number of bytes to be written to the buffer.
+   * @return number of bytes read from response InputStream.
+   * @throws IOException if an error occurs.
+   */
+  public long parseResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
 if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
   processStorageErrorResponse();
-  if (this.isTraceEnabled) {
-this.recvResponseTimeMs += elapsedTimeMs(startTime);
-  }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+  return connection.getHeaderFieldLong(
+  HttpHeaderConfigurations.CONTENT_LENGTH, 0);
 } else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
-
-  try (InputStream stream = this.connection.getInputStream()) {
+  try (InputStream stream = connection.getInputStream()) {
 if (isNullInputStream(stream)) {
-  return;
+  return 0;
 }
-boolean endOfStream = false;
 
-// this is a list operation and need to retrieve the data
-// need a better solution
-if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+// In case of a ListStatus call, the request is a GET and the
+// caller doesn't provide a buffer because the length cannot be
+// pre-determined.
+if (AbfsHttpConstants.HTTP_METHOD_GET.equals(method)
+&& buffer == null) {
   parseListFilesResponse(stream);
 } else {
-  if (buffer != null) {
-while (totalBytesRead < length) {
-  int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-  if (bytesRead == -1) {
-endOfStream = true;
-break;
-  }
-  totalBytesRead += bytesRead;
-}
-  }
-
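The loop removed in this hunk (fill the caller's buffer up to `length` bytes, then consume whatever remains so the connection can be released) is the logic this PR extracts into a separate method. A minimal standalone sketch of that read-then-drain pattern, with illustrative names rather than the actual ABFS code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {
  private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;

  /** Fill buffer[offset..offset+length), then drain the rest; return total bytes consumed. */
  static long readAndDrain(InputStream stream, byte[] buffer, int offset, int length)
      throws IOException {
    long totalBytesRead = 0;
    boolean endOfStream = false;
    while (totalBytesRead < length) {
      // read() may return fewer bytes than requested, so loop until full or EOF
      int bytesRead = stream.read(buffer, offset + (int) totalBytesRead,
          length - (int) totalBytesRead);
      if (bytesRead == -1) {
        endOfStream = true;
        break;
      }
      totalBytesRead += bytesRead;
    }
    if (!endOfStream) {
      // read and discard any remaining body so the HTTP connection can be pooled
      byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
      int bytesRead;
      while ((bytesRead = stream.read(b)) >= 0) {
        totalBytesRead += bytesRead;
      }
    }
    return totalBytesRead;
  }

  public static void main(String[] args) throws IOException {
    byte[] body = "hello world".getBytes();
    byte[] buffer = new byte[5];
    long consumed = readAndDrain(new ByteArrayInputStream(body), buffer, 0, buffer.length);
    System.out.println(consumed + " " + new String(buffer));
  }
}
```

Note the sketch counts the drained bytes in the total, which is roughly what the extracted `readDataFromStream` does; the original inline code additionally probed one byte with `stream.read()` before entering the discard loop.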

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-11-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=677521&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-677521
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 05/Nov/21 20:05
Start Date: 05/Nov/21 20:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-912612360






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 677521)
Time Spent: 1h 50m  (was: 1h 40m)

> ABFS: Refactor HTTP request handling code
> -
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-11-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=677032&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-677032
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 05/Nov/21 13:00
Start Date: 05/Nov/21 13:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-912612360


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  0s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  77m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ee6ce4f77d9f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aff57f5b308c41e7a3e3b878b573e4ec222998a6 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/testReport/ |
   | Max. process+thread count | 548 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/console |
   | versions | 

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650177
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 18:11
Start Date: 13/Sep/21 18:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918449188


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 51s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  79m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 42a6b218d54d 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 41d43efe5a1c185076f57d43b7ee87c5dcc7d6d8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/testReport/ |
   | Max. process+thread count | 571 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/3/console |
   | versions | git=2.25.1 

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650028&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650028
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:52
Start Date: 13/Sep/21 13:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918213470


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with 
JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 15 unchanged - 0 
fixed = 17 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  
hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new 
+ 15 unchanged - 0 fixed = 17 total (was 15)  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  76m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ccd54cc5e5bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650021&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650021
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:36
Start Date: 13/Sep/21 13:36
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-918198704


   > LGTM; some minor changes. Main one is using/adding a statistic to StreamStatisticNames
   
   Hi @steveloughran, thanks for taking the time to review this PR.
   After analyzing where the metric is gathered and how metrics are grouped in StreamStatistics and StoreStatistics, I feel the new statistic is probably best defined within AbfsStatistics. I have added my explanation for this above. Kindly share your inputs on this.




Issue Time Tracking
---

Worklog Id: (was: 650021)
Time Spent: 1h 10m  (was: 1h)




[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650010
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:20
Start Date: 13/Sep/21 13:20
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707326581



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -74,6 +76,7 @@
   // metrics
   private int bytesSent;
   private long bytesReceived;
+  private long bytesDiscarded;

Review comment:
   Have added the javadocs. Will keep a PR checklist point on this. Thanks.






Issue Time Tracking
---

Worklog Id: (was: 650010)
Time Spent: 1h  (was: 50m)




[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650008&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650008
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:17
Start Date: 13/Sep/21 13:17
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707324115



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int length) throws IOException
   startTime = System.nanoTime();
 }
 
-if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-  processStorageErrorResponse();
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
   if (this.isTraceEnabled) {
 this.recvResponseTimeMs += elapsedTimeMs(startTime);
   }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-} else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
+  this.bytesReceived = totalBytesRead;
+}
+  }
 
+  public long parseResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+  processStorageErrorResponse();
+  return this.connection.getHeaderFieldLong(
+  HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+} else {
   try (InputStream stream = this.connection.getInputStream()) {
 if (isNullInputStream(stream)) {
-  return;
+  return 0;
 }
-boolean endOfStream = false;
 
-// this is a list operation and need to retrieve the data
-// need a better solution
-if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+&& buffer == null) {
   parseListFilesResponse(stream);
 } else {
-  if (buffer != null) {
-while (totalBytesRead < length) {
-  int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-  if (bytesRead == -1) {
-endOfStream = true;
-break;
-  }
-  totalBytesRead += bytesRead;
-}
-  }
-  if (!endOfStream && stream.read() != -1) {
-// read and discard
-int bytesRead = 0;
-byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
-while ((bytesRead = stream.read(b)) >= 0) {
-  totalBytesRead += bytesRead;
-}
-  }
+  return readDataFromStream(stream, buffer, offset, length);
 }
-  } catch (IOException ex) {
-LOG.warn("IO/Network error: {} {}: {}",
-method, getMaskedUrl(), ex.getMessage());
-LOG.debug("IO Error: ", ex);
-throw ex;
-  } finally {
-if (this.isTraceEnabled) {
-  this.recvResponseTimeMs += elapsedTimeMs(startTime);
+  }
+}
+
+return 0;
+  }
+
+  public long readDataFromStream(final InputStream stream,
+  final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+// consume the input stream to release resources
+int totalBytesRead = 0;
+boolean endOfStream = false;
+
+if (buffer != null) {

Review comment:
   Ideally never. Other than List and Read, the server should not send any content the client isn't prepared to receive.
   In the case of List, the buffer is not provided by the caller as the size is not known, and the response is parsed and returned before reaching this method.
   The buffer passed in here cannot be null in the read flow either, as the null check happens before the HttpRequest can be raised.
   However, this is an existing protective check in the code, hence it is retained.
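The try/finally restructuring shown in this hunk guarantees that the elapsed-time metric and `bytesReceived` are updated even when `parseResponse` throws partway through. A minimal sketch of that accumulate-in-finally pattern, with illustrative names rather than the actual ABFS classes:

```java
import java.util.concurrent.TimeUnit;

public class TimedParseDemo {
  private long recvResponseTimeMs;
  private long bytesReceived;

  static long elapsedTimeMs(long startNanos) {
    return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
  }

  long process(boolean traceEnabled) {
    long startTime = traceEnabled ? System.nanoTime() : 0;
    long totalBytesRead = 0;
    try {
      totalBytesRead = parse();          // may throw mid-read
    } finally {
      if (traceEnabled) {
        // recorded whether parse() succeeded or threw
        recvResponseTimeMs += elapsedTimeMs(startTime);
      }
      // stays 0 if parse() threw before reading anything
      bytesReceived = totalBytesRead;
    }
    return bytesReceived;
  }

  // stand-in for parseResponse(buffer, offset, length)
  long parse() { return 42; }
}
```

The earlier version duplicated the timing update on both the error and success branches; hoisting it into a single finally block removes that duplication.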






Issue Time Tracking
---

Worklog Id: (was: 650008)
Time Spent: 50m  (was: 40m)


[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650005
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:11
Start Date: 13/Sep/21 13:11
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319307



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int length) throws IOException
   startTime = System.nanoTime();
 }
 
-if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-  processStorageErrorResponse();
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
   if (this.isTraceEnabled) {
 this.recvResponseTimeMs += elapsedTimeMs(startTime);
   }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-} else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
+  this.bytesReceived = totalBytesRead;
+}
+  }
 
+  public long parseResponse(final byte[] buffer,

Review comment:
   Have added the javadocs.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int length) throws IOException
   startTime = System.nanoTime();
 }
 
-if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-  processStorageErrorResponse();
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
   if (this.isTraceEnabled) {
 this.recvResponseTimeMs += elapsedTimeMs(startTime);
   }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-} else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
+  this.bytesReceived = totalBytesRead;
+}
+  }
 
+  public long parseResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+  processStorageErrorResponse();
+  return this.connection.getHeaderFieldLong(
+  HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+} else {
   try (InputStream stream = this.connection.getInputStream()) {
 if (isNullInputStream(stream)) {
-  return;
+  return 0;
 }
-boolean endOfStream = false;
 
-// this is a list operation and need to retrieve the data
-// need a better solution
-if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+&& buffer == null) {
   parseListFilesResponse(stream);
 } else {
-  if (buffer != null) {
-while (totalBytesRead < length) {
-  int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-  if (bytesRead == -1) {
-endOfStream = true;
-break;
-  }
-  totalBytesRead += bytesRead;
-}
-  }
-  if (!endOfStream && stream.read() != -1) {
-// read and discard
-int bytesRead = 0;
-byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
-while ((bytesRead = stream.read(b)) >= 0) {
-  totalBytesRead += bytesRead;
-}
-  }
+  return readDataFromStream(stream, buffer, offset, length);
 }
-  } catch (IOException ex) {
-LOG.warn("IO/Network error: {} {}: {}",
-method, getMaskedUrl(), ex.getMessage());
-LOG.debug("IO Error: ", ex);
-throw ex;
-  } finally {
-if (this.isTraceEnabled) {
-  this.recvResponseTimeMs += elapsedTimeMs(startTime);
+  }
+}
+
+return 0;
+  }
+
+  public long readDataFromStream(final InputStream stream,

Review comment:
   Have added the javadocs.
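For reference, the `readDataFromStream` logic being extracted in the hunk above can be sketched as a standalone method. This is a reconstruction from the diff, not the committed code; note that the single probe byte read by `stream.read()` is discarded without being counted, mirroring the original loop.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadSketch {
    private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;

    public static long readDataFromStream(final InputStream stream,
            final byte[] buffer, final int offset, final int length)
            throws IOException {
        long totalBytesRead = 0;
        boolean endOfStream = false;
        if (buffer != null) {
            // fill the caller's buffer up to `length` bytes
            while (totalBytesRead < length) {
                int bytesRead = stream.read(buffer,
                        (int) (offset + totalBytesRead),
                        (int) (length - totalBytesRead));
                if (bytesRead == -1) {
                    endOfStream = true;
                    break;
                }
                totalBytesRead += bytesRead;
            }
        }
        // probe one byte; if the server sent more than expected, drain it
        // so the underlying connection can be released/reused
        if (!endOfStream && stream.read() != -1) {
            // the probe byte itself is not added to totalBytesRead
            byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
            int bytesRead;
            while ((bytesRead = stream.read(b)) >= 0) {
                totalBytesRead += bytesRead;
            }
        }
        return totalBytesRead;
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[4];
        // the "server" sends 10 bytes but the caller asked for only 4:
        // 4 land in the buffer, 1 probe byte is dropped, 5 are drained
        long n = readDataFromStream(
                new ByteArrayInputStream(new byte[10]), buf, 0, 4);
        System.out.println(n); // 9
    }
}
```

The uncounted probe byte is a subtlety of the original code that carries over into the refactored method unchanged.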




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 650005)
Time Spent: 40m  (was: 0.5h)

> ABFS: Refactor HTTP 

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=650004=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-650004
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 13/Sep/21 13:11
Start Date: 13/Sep/21 13:11
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r707319027



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsStatistic.java
##
@@ -75,6 +75,8 @@
   "Total bytes uploaded."),
   BYTES_RECEIVED("bytes_received",
   "Total bytes received."),
+  BYTES_DISCARDED_AT_SOCKET_READ("bytes_discarded_at_socket_read",

Review comment:
   The bytesDiscarded statistic is incremented when the server returns bytes that the client wasn't expecting to receive.
   
   As of today, only two APIs return a response body: List and Read. In the List case, the input stream is handed to the ObjectMapper for JSON conversion. That leaves just the Read API, where the amount of data to be read should match the space available in the buffer.
   
   Ideally, no driver-server interaction should hit this path. I couldn't find the history behind the code that drains the socket either, but a few forums mention side effects of the client disconnecting while the server is still transmitting: a TCP reset is triggered, which signals a connection error and in turn triggers error handling and the network-layer buffers being reset.
   
   In the read flow, the AbfsHttpOperation layer has no access to the AbfsInputStream instance and hence cannot reach the stream statistics it holds. While read is logically the only API that can hit this case, the code lives in the general HTTP response handling path, so I kept the new statistic outside of StreamStatistics.
   
   I looked at StoreStatisticNames, and it didn't seem right to add a new statistic there, hence adding this alongside the other network statistics such as BYTES_SEND and BYTES_RECEIVED defined in the AbfsStatistic enum.
   
   Please let me know if this looks ok.
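The accounting described above (count only the unexpected extra bytes, independent of any stream-level statistics) can be sketched as a simple counter. The names below (`DiscardSketch`, `recordDiscard`) are hypothetical and are not the actual AbfsStatistic wiring.

```java
import java.util.concurrent.atomic.AtomicLong;

public class DiscardSketch {
    // stand-in for the bytes_discarded_at_socket_read counter
    static final AtomicLong bytesDiscardedAtSocketRead = new AtomicLong();

    // record how many bytes beyond the expected length were drained
    // from the socket; returns the increment for convenience
    static long recordDiscard(long expectedBytes, long actuallyRead) {
        long extra = Math.max(0, actuallyRead - expectedBytes);
        bytesDiscardedAtSocketRead.addAndGet(extra);
        return extra;
    }

    public static void main(String[] args) {
        // e.g. 9 bytes drained when the caller only expected 4
        System.out.println(recordDiscard(4, 9)); // 5
    }
}
```

In the real connector this would be published through the filesystem statistics alongside `BYTES_RECEIVED`, but that wiring is out of scope for this sketch.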






Issue Time Tracking
---

Worklog Id: (was: 650004)
Time Spent: 0.5h  (was: 20m)

> ABFS: Refactor HTTP request handling code
> -
>
> Key: HADOOP-17890
> URL: https://issues.apache.org/jira/browse/HADOOP-17890
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Aims at Http request handling code refactoring.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=647113=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647113
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 06/Sep/21 19:23
Start Date: 06/Sep/21 19:23
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#discussion_r703046253



##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int len
   startTime = System.nanoTime();
 }
 
-if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-  processStorageErrorResponse();
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
   if (this.isTraceEnabled) {
 this.recvResponseTimeMs += elapsedTimeMs(startTime);
   }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-} else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
+  this.bytesReceived = totalBytesRead;
+}
+  }
 
+  public long parseResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+  processStorageErrorResponse();
+  return this.connection.getHeaderFieldLong(

Review comment:
   nit: no need for `this.`

##
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##
@@ -369,58 +378,75 @@ public void processResponse(final byte[] buffer, final int offset, final int len
   startTime = System.nanoTime();
 }
 
-if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
-  processStorageErrorResponse();
+long totalBytesRead = 0;
+
+try {
+  totalBytesRead = parseResponse(buffer, offset, length);
+} finally {
   if (this.isTraceEnabled) {
 this.recvResponseTimeMs += elapsedTimeMs(startTime);
   }
-  this.bytesReceived = this.connection.getHeaderFieldLong(HttpHeaderConfigurations.CONTENT_LENGTH, 0);
-} else {
-  // consume the input stream to release resources
-  int totalBytesRead = 0;
+  this.bytesReceived = totalBytesRead;
+}
+  }
 
+  public long parseResponse(final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+if (statusCode >= HttpURLConnection.HTTP_BAD_REQUEST) {
+  processStorageErrorResponse();
+  return this.connection.getHeaderFieldLong(
+  HttpHeaderConfigurations.CONTENT_LENGTH, 0);
+} else {
   try (InputStream stream = this.connection.getInputStream()) {
 if (isNullInputStream(stream)) {
-  return;
+  return 0;
 }
-boolean endOfStream = false;
 
-// this is a list operation and need to retrieve the data
-// need a better solution
-if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method) && buffer == null) {
+if (AbfsHttpConstants.HTTP_METHOD_GET.equals(this.method)
+&& buffer == null) {
   parseListFilesResponse(stream);
 } else {
-  if (buffer != null) {
-while (totalBytesRead < length) {
-  int bytesRead = stream.read(buffer, offset + totalBytesRead, length - totalBytesRead);
-  if (bytesRead == -1) {
-endOfStream = true;
-break;
-  }
-  totalBytesRead += bytesRead;
-}
-  }
-  if (!endOfStream && stream.read() != -1) {
-// read and discard
-int bytesRead = 0;
-byte[] b = new byte[CLEAN_UP_BUFFER_SIZE];
-while ((bytesRead = stream.read(b)) >= 0) {
-  totalBytesRead += bytesRead;
-}
-  }
+  return readDataFromStream(stream, buffer, offset, length);
 }
-  } catch (IOException ex) {
-LOG.warn("IO/Network error: {} {}: {}",
-method, getMaskedUrl(), ex.getMessage());
-LOG.debug("IO Error: ", ex);
-throw ex;
-  } finally {
-if (this.isTraceEnabled) {
-  this.recvResponseTimeMs += elapsedTimeMs(startTime);
+  }
+}
+
+return 0;
+  }
+
+  public long readDataFromStream(final InputStream stream,
+  final byte[] buffer,
+  final int offset,
+  final int length) throws IOException {
+// consume the input stream to release resources
+int totalBytesRead = 0;
+boolean endOfStream = false;
+
+if (buffer != null) {

Review comment:
   does this 

[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=646350=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646350
 ]

ASF GitHub Bot logged work on HADOOP-17890:
---

Author: ASF GitHub Bot
Created on: 03/Sep/21 15:13
Start Date: 03/Sep/21 15:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381#issuecomment-912612360


   :broken_heart: **-1 overall**
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   || _ trunk Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |  32m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 10s |  |  branch has no errors when building and testing our client artifacts.  |
   || _ Patch Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 25s |  |  patch has no errors when building and testing our client artifacts.  |
   || _ Other Tests _ ||
   | +1 :green_heart: |  unit  |   2m  0s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  77m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3381 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ee6ce4f77d9f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aff57f5b308c41e7a3e3b878b573e4ec222998a6 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/testReport/ |
   | Max. process+thread count | 548 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/console |
   | versions | git=2.25.1