[
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=531993&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531993
]
ASF GitHub Bot logged work on HADOOP-16080:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 06/Jan/21 17:20
Start Date: 06/Jan/21 17:20
Worklog Time Spent: 10m
Work Description: sunchao commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-755439216
> Some of the tests are parameterized to do test runs with/without DynamoDB.
> They shouldn't be run if the `-Ddynamo` option wasn't set, but what has
> inevitably happened is that regressions into the test runs have crept in and
> we've not noticed.
I didn't specify the `-Ddynamo` option. The command I used is:
```
mvn -Dparallel-tests -DtestsThreadCount=8 clean verify
```
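For comparison, the runs that do exercise DynamoDB would be enabled explicitly; if I read the hadoop-aws testing doc correctly, that invocation would be something like (assuming the documented `-Ds3guard -Ddynamo` profiles):
```
mvn -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo clean verify
```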
I'm testing against my own S3A endpoint "s3a://sunchao/", which is in
us-west-1, and I just followed the doc to set up `auth-keys.xml`. I didn't
modify `core-site.xml`.
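A minimal `auth-keys.xml` of the kind the doc describes looks roughly like this (property names from the hadoop-aws testing doc; values are placeholders, not my actual file):
```
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://sunchao/</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>ACCESS-KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>SECRET-KEY</value>
  </property>
</configuration>
```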
> BTW, does this mean your initial PR went in without running the ITests?
Unfortunately no... sorry, I was not aware of the test steps here (first
time contributing to hadoop-aws). I'll try to remedy that in this PR. The test
failures I got:
```
[ERROR] Tests run: 24, Failures: 1, Errors: 16, Skipped: 0, Time elapsed:
20.537 s <<< FAILURE! - in
org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost
[ERROR]
testDeleteSingleFileInDir[raw-delete-markers](org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost)
Time elapsed: 2.036 s <<< FAILURE!
java.lang.AssertionError: operation returning after fs.delete(simpleFile)
action_executor_acquired starting=0 current=0 diff=0, action_http_get_request
starting=0 current=0 diff=0, action_http_head_request starting=4
current=5 diff=1, committer_bytes_committed starting=0 current=0 diff=0,
committer_bytes_uploaded starting=0 current=0 diff=0, committer_commit_job
starting=0 current=0 diff=0, committer_commits.failures starting=0 current=0
diff=0, committer_commits_aborted starting=0 current=0 diff=0,
committer_commits_completed starting=0 current=0 diff=0,
committer_commits_created starting=0 current=0 diff=0,
committer_commits_reverted starting=0 current=0 diff=0,
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed
starting=0 current=0 diff=0, committer_magic_files_created starting=0
current=0 diff=0, committer_materialize_file starting=0 current=0 diff=0,
committer_stage_file_upload starting=0 current=0 diff=0,
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0
diff=0, directories_created starting=2 current=3 diff=1,
directories_deleted starting=0 current=0 diff=0, fake_directories_created
starting=0 current=0 diff=0, fake_directories_deleted starting=6 current=8
diff=2, files_copied starting=0 current=0 diff=0, files_copied_bytes
starting=0 current=0 diff=0, files_created starting=1 current=1 diff=0,
files_delete_rejected starting=0 current=0 diff=0, files_deleted
starting=0 current=1 diff=1, ignored_errors starting=0 current=0 diff=0,
multipart_instantiated starting=0 current=0 diff=0,
multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0,
multipart_upload_aborted starting=0 current=0 diff=0,
multipart_upload_completed starting=0 current=0 diff=0,
multipart_upload_part_put starting=0 current=0 diff=0,
multipart_upload_part_put_bytes starting=0 current=0 diff=0,
multipart_upload_started starting=0 current=0 diff=0,
object_bulk_delete_request starting=3 current=4 diff=1,
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3,
object_delete_request starting=0 current=1 diff=1, object_list_request
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5
diff=1, object_multipart_aborted starting=0 current=0 diff=0,
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1,
object_put_request_completed starting=3 current=4 diff=1,
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0,
op_create_non_recursive starting=0 current=0 diff=0, op_delete
starting=0 current=1 diff=1, op_exists starting=0 current=0 diff=0,
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2
diff=0, op_glob_status starting=0 current=0 diff=0, op_is_directory starting=0
current=0 diff=0, op_is_file starting=0 current=0 diff=0, op_list_files
starting=0 current=0 diff=0, op_list_located_status starting=0 current=0
diff=0, op_list_status starting=0 current=0 diff=0, op_mkdirs starting=2
current=2 diff=0, op_open starting=0 current=0 diff=0, op_rename
starting=0 current=0 diff=0,
s3guard_metadatastore_authoritative_directories_updated starting=0 current=0
diff=0, s3guard_metadatastore_initialization starting=0 current=0 diff=0,
s3guard_metadatastore_put_path_request starting=0 current=0 diff=0,
s3guard_metadatastore_record_deletes starting=0 current=0 diff=0,
s3guard_metadatastore_record_reads starting=0 current=0
diff=0, s3guard_metadatastore_record_writes starting=0 current=0 diff=0,
s3guard_metadatastore_retry starting=0 current=0 diff=0,
s3guard_metadatastore_throttled starting=0 current=0 diff=0, store_io_request
starting=0 current=0 diff=0, store_io_retry starting=0 current=0 diff=0,
store_io_throttled starting=0 current=0 diff=0, stream_aborted starting=0
current=0 diff=0, stream_read_bytes starting=0 current=0 diff=0,
stream_read_bytes_backwards_on_seek starting=0 current=0 diff=0,
stream_read_bytes_discarded_in_abort starting=0 current=0
diff=0, stream_read_bytes_discarded_in_close starting=0 current=0 diff=0,
stream_read_close_operations starting=0 current=0 diff=0,
stream_read_closed starting=0 current=0 diff=0, stream_read_exceptions
starting=0 current=0 diff=0, stream_read_fully_operations starting=0 current=0
diff=0, stream_read_opened starting=0 current=0 diff=0,
stream_read_operations starting=0 current=0 diff=0,
stream_read_operations_incomplete starting=0 current=0 diff=0,
stream_read_seek_backward_operations starting=0 current=0 diff=0,
stream_read_seek_bytes_discarded starting=0 current=0 diff=0,
stream_read_seek_bytes_skipped starting=0 current=0 diff=0,
stream_read_seek_forward_operations starting=0 current=0 diff=0,
stream_read_seek_operations starting=0 current=0 diff=0,
stream_read_seek_policy_changed starting=0 current=0 diff=0,
stream_read_total_bytes starting=0 current=0 diff=0,
stream_read_version_mismatches starting=0 current=0 diff=0,
stream_write_block_uploads starting=0 current=0 diff=0,
stream_write_block_uploads_aborted starting=0 current=0 diff=0,
stream_write_block_uploads_committed starting=0 current=0 diff=0,
stream_write_bytes starting=0 current=0 diff=0, stream_write_exceptions
starting=0 current=0 diff=0,
stream_write_exceptions_completing_upload starting=0 current=0 diff=0,
stream_write_queue_duration starting=0 current=0 diff=0,
stream_write_total_data starting=0 current=0 diff=0,
stream_write_total_time starting=0 current=0 diff=0: object_delete_objects
expected:<2> but was:<3>
```
And it seems most of the failures are due to errors like the following:
```
Caused by:
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested
resource not found: Table: sunchao not found (Service: AmazonDynamoDBv2; Status
Code: 400; Error Code: ResourceNotFoundException; Request ID: XXX; Proxy:
null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1828)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1412)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1374)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:5413)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:5380)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:2098)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:2063)
at com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStoreTableManager.initTable(DynamoDBMetadataStoreTableManager.java:171)
... 23 more
```
I'm not sure whether I missed some steps in my test setup.
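One thing I notice: the table name in that error matches the bucket name ("sunchao"), which suggests S3Guard's DynamoDB store got enabled somewhere in my effective configuration, since `fs.s3a.s3guard.ddb.table` falls back to the bucket name when unset. The relevant properties (names from the S3Guard docs; values illustrative) would be:
```
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore</value>
</property>
<property>
  <!-- defaults to the bucket name when unset -->
  <name>fs.s3a.s3guard.ddb.table</name>
  <value>my-s3guard-table</value>
</property>
```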
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 531993)
Time Spent: 6h 40m (was: 6.5h)
> hadoop-aws does not work with hadoop-client-api
> -----------------------------------------------
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.0, 3.1.1, 3.4.0
> Reporter: Keith Turner
> Assignee: Chao Sun
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1
>
> Time Spent: 6h 40m
> Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
> * hadoop-client-api-3.1.1.jar
> * hadoop-client-runtime-3.1.1.jar
> * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
> which does not exist in hadoop-client-api-3.1.1.jar. What does exist is
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that
> relocated references to Guava.
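> A relocation of that sort can be sketched with the maven-shade-plugin (coordinates and shaded prefix are illustrative; the actual workaround jar is not attached to this issue):
> {noformat}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-shade-plugin</artifactId>
>   <configuration>
>     <relocations>
>       <relocation>
>         <pattern>com.google.common</pattern>
>         <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
>       </relocation>
>     </relocations>
>   </configuration>
> </plugin>
> {noformat}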
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]