[
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819910#comment-17819910
]
ASF GitHub Bot commented on HADOOP-18910:
-----------------------------------------
anujmodi2021 commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1960744484
> Seeing these failures in branch-3.4 after backporting this and #5881.
> These failures are happening even without these changes. @anujmodi2021 Can you
> figure out what other commits are missing in branch-3.4, or whether these are
> genuine failures? ITestExponentialRetryPolicy was recently renamed in #5881
> from a unit test (UT) to an integration test (IT), but it is still failing on
> the older branch. Do I need to add some extra keys in auth-keys.xml?
>
> ```
> [ERROR] Failures:
> [ERROR] ITestAzureBlobFileSystemLease.testTwoCreate:142
> Expected to find 'There is currently a lease on the resource and no lease ID was specified in the request' but got unexpected exception:
> org.apache.hadoop.fs.PathIOException: `abfs://abfs-testcontainer-5d2e6422-c3f1-4670-a9a7-4bb79a367...@mthakurdata.dfs.core.windows.net/fork-0001/test/testTwoCreate71defab45746/testfile': Input/output error: Parallel access to the create path detected. Failing request to honor single writer semantics
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1538)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.create(AzureBlobFileSystem.java:347)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1231)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1208)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1089)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1076)
>     at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease.lambda$testTwoCreate$1(ITestAzureBlobFileSystemLease.java:144)
>     at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498)
>     at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
>     at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:453)
>     at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease.testTwoCreate(ITestAzureBlobFileSystemLease.java:142)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:750)
> Caused by: Parallel access to the create path detected. Failing request to honor single writer semantics
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.conditionalCreateOverwriteFile(AzureBlobFileSystemStore.java:711)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.createFile(AzureBlobFileSystemStore.java:622)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.create(AzureBlobFileSystem.java:341)
>     ... 21 more
> ```
>
> ```
> [ERROR] ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse:98->unsetAndAssert:109 [getIsNamespaceEnabled should return the value configured for fs.azure.test.namespace.enabled] expected:<[fals]e> but was:<[tru]e>
> [ERROR] ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue:88->unsetAndAssert:109 [getIsNamespaceEnabled should return the value configured for fs.azure.test.namespace.enabled] expected:<[fals]e> but was:<[tru]e>
> [ERROR] ITestGetNameSpaceEnabled.testNonXNSAccount:77->Assert.assertFalse:65->Assert.assertTrue:42->Assert.fail:89 Expecting getIsNamespaceEnabled() return false
> [ERROR] Errors:
> [ERROR] ITestExponentialRetryPolicy.testThrottlingIntercept:106 » KeyProvider Failure ...
> [INFO]
> [ERROR] Tests run: 27, Failures: 4, Errors: 1, Skipped: 3
> [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 91.812 s <<< FAILURE! - in org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy
> [ERROR] testThrottlingIntercept(org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy) Time elapsed: 0.93 s <<< ERROR!
> Failure to initialize configuration for dummy.dfs.core.windows.net key ="null": Invalid configuration value detected for fs.azure.account.key
>     at org.apache.hadoop.fs.azurebfs.services.SimpleKeyProvider.getStorageAccountKey(SimpleKeyProvider.java:53)
>     at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:646)
>     at org.apache.hadoop.fs.azurebfs.services.ITestAbfsClient.createTestClientFromCurrentContext(ITestAbfsClient.java:339)
>     at org.apache.hadoop.fs.azurebfs.services.ITestExponentialRetryPolicy.testThrottlingIntercept(ITestExponentialRetryPolicy.java:106)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:750)
> ```
Thanks for pointing this out. I am not able to identify any commits missing
from branch-3.4, but these are not expected failures.
Let me check from my end what the issue is. I will take this up and create a
PR for branch-3.4 with the fix.
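For context on the KeyProvider failure above: the ABFS integration tests read account credentials from auth-keys.xml, and a missing or empty account key entry produces exactly the "Invalid configuration value detected for fs.azure.account.key" error shown. A minimal sketch of the relevant entries follows; the account name and key are placeholders, and whether `fs.azure.test.namespace.enabled` should be true or false depends on whether the test account actually has hierarchical namespace enabled (the ITestGetNameSpaceEnabled failures suggest a mismatch there):

```xml
<configuration>
  <!-- Storage account the ABFS integration tests run against (placeholder). -->
  <property>
    <name>fs.azure.abfs.account.name</name>
    <value>youraccount.dfs.core.windows.net</value>
  </property>
  <!-- Shared key for that account; leaving this unset yields the
       "Invalid configuration value detected for fs.azure.account.key" error. -->
  <property>
    <name>fs.azure.account.key.youraccount.dfs.core.windows.net</name>
    <value>YOUR_ACCOUNT_KEY</value>
  </property>
  <!-- Must match whether the account has hierarchical namespace (HNS) enabled. -->
  <property>
    <name>fs.azure.test.namespace.enabled</name>
    <value>true</value>
  </property>
</configuration>
```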
> ABFS: Adding Support for MD5 Hash based integrity verification of the request
> content during transport
> -------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Reporter: Anuj Modi
> Assignee: Anuj Modi
> Priority: Major
> Labels: pull-request-available
>
> Azure Storage supports Content-MD5 request headers in both the Read and Append
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change adds the client-side support for them. In a Read request, the
> client sends the appropriate header, in response to which the server returns
> the MD5 hash of the data it sends back. The client then compares this with the
> MD5 hash computed from the data received.
> In an Append request, the client computes the MD5 hash of the data being sent
> to the server and specifies it in the appropriate header. On finding that
> header, the server compares it with the MD5 hash it computes on the data
> received.
> This whole checksum validation support is guarded behind a config, which is
> disabled by default because HTTPS already preserves the integrity of data in
> transport. It is introduced as an additional data-integrity check, and it has
> a performance impact as well.
> Users can decide whether to enable it by setting the following config to
> *"true"* or *"false"* respectively. *Config:
> "fs.azure.enable.checksum.validation"*
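The read/append flow described above can be sketched client-side as follows. This is an illustrative standalone example, not the actual ABFS driver code: it shows how an MD5 digest of the payload is computed and Base64-encoded (the form the Content-MD5 HTTP header carries), and how the same computation on the received bytes is compared against the header value.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class ContentMd5Example {

    // Compute the Base64-encoded MD5 digest of a payload, as carried
    // in a Content-MD5 header.
    public static String contentMd5(byte[] payload) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(payload);
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

        // Append direction: client computes the digest of the bytes it
        // sends and places it in the request header for the server to verify.
        String headerSent = contentMd5(data);

        // Read direction: server returns the digest of the bytes it sent;
        // the client recomputes it over the bytes actually received and
        // compares. A mismatch signals corruption in transport.
        String recomputed = contentMd5(data);
        System.out.println(headerSent.equals(recomputed)
                ? "checksum ok" : "checksum mismatch");
    }
}
```

In the real driver this check only runs when `fs.azure.enable.checksum.validation` is set to true, reflecting the performance trade-off noted above.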
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]