[
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=519935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-519935
]
ASF GitHub Bot logged work on HADOOP-16080:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 04/Dec/20 00:11
Start Date: 04/Dec/20 00:11
Worklog Time Spent: 10m
Work Description: hadoop-yetus removed a comment on pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510#issuecomment-738463415
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 17s | Docker mode activated. |
| -1 :x: | patch | 0m 9s | https://github.com/apache/hadoop/pull/2510 does not apply to branch-3.2. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2510 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/4/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
This message was automatically generated.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 519935)
Time Spent: 1h 40m (was: 1.5h)
> hadoop-aws does not work with hadoop-client-api
> -----------------------------------------------
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.0, 3.1.1
> Reporter: Keith Turner
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
> * hadoop-client-api-3.1.1.jar
> * hadoop-client-runtime-3.1.1.jar
> * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} references a constructor
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService, ...)}}
> taking the unshaded Guava type, which does not exist in hadoop-client-api-3.1.1.jar. What does exist is
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService, ...)}},
> because hadoop-client-api relocates Guava under {{org.apache.hadoop.shaded}}.
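> One way to see the mismatch (a hypothetical diagnostic, assuming the jar is in the current directory) is to dump the constructors actually packaged in hadoop-client-api with {{javap}}; the relocated Guava type appears in the parameter list:
> {noformat}
> javap -cp hadoop-client-api-3.1.1.jar org.apache.hadoop.util.SemaphoredDelegatingExecutor
> {noformat}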
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that
> relocated references to Guava.
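> For reference, that kind of relocation can be expressed with the maven-shade-plugin. This is only a sketch, not the exact configuration used; it assumes the {{org.apache.hadoop.shaded}} prefix described above:
> {code:xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-shade-plugin</artifactId>
>   <executions>
>     <execution>
>       <phase>package</phase>
>       <goals>
>         <goal>shade</goal>
>       </goals>
>       <configuration>
>         <relocations>
>           <!-- Rewrite hadoop-aws's unshaded Guava references to the
>                prefix used inside hadoop-client-api -->
>           <relocation>
>             <pattern>com.google</pattern>
>             <shadedPattern>org.apache.hadoop.shaded.com.google</shadedPattern>
>           </relocation>
>         </relocations>
>       </configuration>
>     </execution>
>   </executions>
> </plugin>
> {code}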
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]