[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812306#comment-17812306 ]
ASF GitHub Bot commented on HADOOP-18883:
-----------------------------------------
steveloughran merged PR #6511:
URL: https://github.com/apache/hadoop/pull/6511
> Expect-100 JDK bug resolution: prevent multiple server calls
> ------------------------------------------------------------
>
> Key: HADOOP-18883
> URL: https://issues.apache.org/jira/browse/HADOOP-18883
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Reporter: Pranav Saxena
> Assignee: Pranav Saxena
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0
>
>
> This relates to JDK bug [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect: 100-continue" handshake, a 'java.net.ProtocolException' is thrown
> from the 'expect100Continue()' method.
> After that exception is thrown, calling any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()) will internally call
> getOutputStream(), which invokes writeRequests() and makes an actual server
> call (see the sketches below).
> In AbfsHttpOperation, sendRequest() is followed by a processResponse() call
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code proceed. So
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be invoked
> after getOutputStream() has failed, and each of these invocations leads to
> another server call.