[ 
https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646318&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646318
 ]

ASF GitHub Bot logged work on HADOOP-17864:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Sep/21 14:07
            Start Date: 03/Sep/21 14:07
    Worklog Time Spent: 10m 
      Work Description: snvijaya commented on a change in pull request #3335:
URL: https://github.com/apache/hadoop/pull/3335#discussion_r701920531



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java
##########
@@ -0,0 +1,367 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLSocketFactory;
+
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonToken;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+
+public class AbfsHttpConnection extends AbfsHttpOperation {
+  private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class);
+  private HttpURLConnection connection;
+  private ListResultSchema listResultSchema = null;
+
+  public AbfsHttpConnection(final URL url,
+      final String method,
+      List<AbfsHttpHeader> requestHeaders) throws IOException {
+    super(url, method);
+    init(method, requestHeaders);
+  }
+
+  /**
+   * Initializes a new HTTP request and opens the connection.
+   *
+   * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE).
+   * @param requestHeaders The HTTP request headers.
+   *
+   * @throws IOException if an error occurs.
+   */
+  public void init(final String method, List<AbfsHttpHeader> requestHeaders)
+      throws IOException {
+    this.connection = openConnection();
+    if (this.connection instanceof HttpsURLConnection) {
+      HttpsURLConnection secureConn = (HttpsURLConnection) this.connection;
+      SSLSocketFactory sslSocketFactory = DelegatingSSLSocketFactory.getDefaultFactory();
+      if (sslSocketFactory != null) {
+        secureConn.setSSLSocketFactory(sslSocketFactory);
+      }
+    }
+
+    this.connection.setConnectTimeout(getConnectTimeout());
+    this.connection.setReadTimeout(getReadTimeout());
+
+    this.connection.setRequestMethod(method);
+
+    for (AbfsHttpHeader header : requestHeaders) {
+      this.connection.setRequestProperty(header.getName(), header.getValue());
+    }
+  }
+
+  public HttpURLConnection getConnection() {
+    return connection;
+  }
+
+  public ListResultSchema getListResultSchema() {
+    return listResultSchema;
+  }
+
+  public String getResponseHeader(String httpHeader) {
+    return connection.getHeaderField(httpHeader);
+  }
+
+  public void setHeader(String header, String value) {
+    this.getConnection().setRequestProperty(header, value);
+  }
+
+  public Map<String, List<String>> getRequestHeaders() {
+    return getConnection().getRequestProperties();
+  }
+
+  public String getRequestHeader(String header) {
+    return getConnection().getRequestProperty(header);
+  }
+
+  public String getClientRequestId() {
+    return this.connection
+        .getRequestProperty(HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID);
+  }
+  /**
+   * Sends the HTTP request.  Note that HttpUrlConnection requires that an
+   * empty buffer be sent in order to set the "Content-Length: 0" header, which
+   * is required by our endpoint.
+   *
+   * @param buffer the request entity body.
+   * @param offset an offset into the buffer where the data begins.
+   * @param length the length of the data in the buffer.
+   *
+   * @throws IOException if an error occurs.
+   */
+  public void sendRequest(byte[] buffer, int offset, int length) throws IOException {
+    this.connection.setDoOutput(true);
+    this.connection.setFixedLengthStreamingMode(length);
+    if (buffer == null) {
+      // An empty buffer is sent to set the "Content-Length: 0" header, which
+      // is required by our endpoint.
+      buffer = new byte[]{};
+      offset = 0;
+      length = 0;
+    }
+
+    // send the request body
+
+    long startTime = 0;
+    if (isTraceEnabled()) {
+      startTime = System.nanoTime();
+    }
+    try (OutputStream outputStream = this.connection.getOutputStream()) {
+      // update bytes sent before they are sent so we may observe
+      // attempted sends as well as successful sends via the
+      // accompanying statusCode
+      setBytesSent(length);
+      outputStream.write(buffer, offset, length);
+    } finally {
+      if (isTraceEnabled()) {
+        setSendRequestTimeMs(elapsedTimeMs(startTime));
+      }
+    }
+  }
+
+  /**
+   * Gets and processes the HTTP response.
+   *
+   * @param buffer a buffer to hold the response entity body
+   * @param offset an offset in the buffer where the data will begin.
+   * @param length the number of bytes to be written to the buffer.
+   *
+   * @throws IOException if an error occurs.
+   */
+  public void processResponse(byte[] buffer, final int offset,

Review comment:
       A separate PR has been created to address these comments and refactor the
HTTP request handling: [HADOOP-17890 PR](https://github.com/apache/hadoop/pull/3381).
The JSON parsing code is in the parseListFilesResponse method, which takes an
input stream as a parameter.
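
For context, a minimal sketch of what such a parseListFilesResponse method could look like; the method name comes from the comment above, but the body (ObjectMapper binding onto ListResultSchema, assumed to live inside AbfsHttpConnection where the Jackson imports from the diff are available) is an illustrative assumption, not the actual PR code:

```java
// Sketch only: parse the ListFileStatus JSON body from an InputStream
// rather than reading it straight off the HttpURLConnection.
private ListResultSchema parseListFilesResponse(final InputStream stream)
    throws IOException {
  if (stream == null) {
    return null;
  }
  // ObjectMapper binds the JSON listing body onto the ListResultSchema POJO.
  final ObjectMapper objectMapper = new ObjectMapper();
  return objectMapper.readValue(stream, ListResultSchema.class);
}
```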




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.



Issue Time Tracking
-------------------

    Worklog Id:     (was: 646318)
    Time Spent: 1h 20m  (was: 1h 10m)

> ABFS: Fork AbfsHttpOperation to add alternate connection
> --------------------------------------------------------
>
>                 Key: HADOOP-17864
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17864
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.4.0
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This Jira facilitates upcoming work on adding an alternate connection:
> [HADOOP-17853] ABFS: Enable optional store connectivity over azure specific 
> protocol for data egress - ASF JIRA (apache.org)
> The scope of the change is to make AbfsHttpOperation an abstract class and 
> create a child class, AbfsHttpConnection. Future connection types will be 
> added as children of AbfsHttpOperation.
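
For illustration, a minimal sketch of the class split the description implies. The constructor shape mirrors the diff above, but the specific abstract method set is an assumption, not the actual PR code:

```java
import java.io.IOException;
import java.net.URL;

// Sketch only: AbfsHttpOperation as the abstract base this Jira carves out,
// with AbfsHttpConnection (the HttpURLConnection-backed child in the diff
// above) and future connection types extending it.
public abstract class AbfsHttpOperation {
  private final URL url;
  private final String method;

  protected AbfsHttpOperation(final URL url, final String method) {
    this.url = url;
    this.method = method;
  }

  public URL getUrl() {
    return url;
  }

  public String getMethod() {
    return method;
  }

  // Each connection type (HttpURLConnection today, the azure-specific egress
  // protocol from HADOOP-17853 later) supplies its own request/response I/O.
  public abstract void sendRequest(byte[] buffer, int offset, int length)
      throws IOException;

  public abstract void processResponse(byte[] buffer, int offset, int length)
      throws IOException;
}
```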



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
