[jira] [Created] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-28 Thread Siyao Meng (JIRA)
Siyao Meng created HADOOP-16082:
---

 Summary: FsShell ls: Add option -i to print inode id
 Key: HADOOP-16082
 URL: https://issues.apache.org/jira/browse/HADOOP-16082
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Reporter: Siyao Meng
Assignee: Siyao Meng


When debugging an FSImage corruption issue, I often need to know a file's or 
directory's inode id. At the moment, the only way to do that is to use the OIV 
tool to dump the FSImage and look up the filename, which is very inefficient.

Here I propose adding an option "-i" to FsShell that prints files' or 
directories' inode ids.

h2. Implementation

h3. For hdfs:// (HDFS)
fileId exists in HdfsLocatedFileStatus, which is already returned to 
hdfs-client. We just need to print it in Ls#processPath().

h3. For file://
h4. Linux
Use java.nio: on POSIX platforms, the "unix" file attribute view exposes the 
inode number.

h4. Windows
Windows has the concept of "File ID" which is similar to inode id. It is unique 
in NTFS and ReFS.
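Both local-FS cases can be reached from java.nio. A minimal sketch (the class and method names here are illustrative, not the proposed patch): {{Files.getAttribute(path, "unix:ino")}} returns the inode number on POSIX, and {{BasicFileAttributes.fileKey()}} is a portable fallback that wraps (dev, ino) on Unix and the File ID on Windows, though only as an opaque object.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class InodeDemo {
    // POSIX only: the "unix" attribute view exposes the raw inode number.
    public static long inodeOf(Path path) throws IOException {
        return (Long) Files.getAttribute(path, "unix:ino");
    }

    // Cross-platform fallback: fileKey() wraps (dev, ino) on Unix and the
    // NTFS/ReFS "File ID" on Windows, but only as an opaque Object.
    public static Object fileKeyOf(Path path) throws IOException {
        return Files.readAttributes(path, BasicFileAttributes.class).fileKey();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("inode-demo", null);
        System.out.println("ino=" + inodeOf(tmp) + " fileKey=" + fileKeyOf(tmp));
        Files.delete(tmp);
    }
}
```

Note that "unix:ino" throws UnsupportedOperationException on Windows, so the Windows path would have to go through fileKey() or native code.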

h3. For other FS
The fileId entry will be "0" in FileStatus if it is not set. We could either 
ignore it or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-01-28 Thread Sunil G
Hi Folks,

On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.

The artifacts are available here:
http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/

The RC tag in git is release-3.1.2-RC1:
https://github.com/apache/hadoop/commits/release-3.1.2-RC1

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1215

This vote will run 5 days from now.

3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.

We have done testing with a pseudo-distributed cluster and a distributed shell job.

My +1 to start.

Best,
Wangda Tan and Sunil Govindan

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
ORDER BY priority DESC


[jira] [Created] (HADOOP-16081) DistCp: Update "Update and Overwrite" doc

2019-01-28 Thread Siyao Meng (JIRA)
Siyao Meng created HADOOP-16081:
---

 Summary: DistCp: Update "Update and Overwrite" doc
 Key: HADOOP-16081
 URL: https://issues.apache.org/jira/browse/HADOOP-16081
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation, tools/distcp
Affects Versions: 3.1.1
Reporter: Siyao Meng
Assignee: Siyao Meng


https://hadoop.apache.org/docs/r3.1.1/hadoop-distcp/DistCp.html#Update_and_Overwrite

The current doc says that -update or -overwrite won't copy the directory 
hierarchies, i.e., the file structure will be "flattened out" on the 
destination. But this has been improved already. (Need to find the jira id that 
made this change.) The directory structure WILL be copied over when the -update 
or -overwrite option is in use.

Now the only caveat for the -update or -overwrite option is that, when 
specifying multiple sources, there shouldn't be files or directories with the 
same relative path.
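The caveat can be illustrated (and checked up front) by relativizing each source tree and looking for relative paths that occur under more than one root. This is a hypothetical pre-flight check in plain java.nio, not DistCp code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Stream;

public class DistCpPreflight {
    // Returns relative paths that appear under more than one source root.
    // With -update/-overwrite these would collide at the destination.
    public static Set<String> findCollisions(List<Path> sources) throws IOException {
        Map<String, Path> seen = new HashMap<>();
        Set<String> collisions = new HashSet<>();
        for (Path root : sources) {
            try (Stream<Path> walk = Files.walk(root)) {
                for (Path p : (Iterable<Path>) walk::iterator) {
                    if (p.equals(root)) continue;
                    String rel = root.relativize(p).toString();
                    Path prev = seen.putIfAbsent(rel, root);
                    if (prev != null && !prev.equals(root)) {
                        collisions.add(rel);
                    }
                }
            }
        }
        return collisions;
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempDirectory("srcA");
        Path b = Files.createTempDirectory("srcB");
        Files.createFile(a.resolve("part-00000"));
        Files.createFile(b.resolve("part-00000"));
        System.out.println(findCollisions(Arrays.asList(a, b)));
    }
}
```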






[jira] [Created] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2019-01-28 Thread Keith Turner (JIRA)
Keith Turner created HADOOP-16080:
-

 Summary: hadoop-aws does not work with hadoop-client-api
 Key: HADOOP-16080
 URL: https://issues.apache.org/jira/browse/HADOOP-16080
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.1.1
Reporter: Keith Turner


I attempted to use Accumulo and S3a with the following jars on the classpath.

 * hadoop-client-api-3.1.1.jar
 * hadoop-client-runtime-3.1.1.jar
 * hadoop-aws-3.1.1.jar

This failed with the following exception.

{noformat}
Exception in thread "init" java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
	at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
	at org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
	at org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
	at org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
	at org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
	at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
	at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
	at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
	at java.lang.Thread.run(Thread.java:748)
{noformat}

The problem is that {{S3AFileSystem.create()}} looks for the constructor 
{{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}, 
which does not exist in hadoop-client-api-3.1.1.jar. What does exist is 
{{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.

To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
relocated references to Guava.
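The workaround amounts to shading hadoop-aws the same way hadoop-client-api is shaded. A hypothetical maven-shade-plugin fragment, assuming the relocation prefix that hadoop-client-api uses (as seen in the shaded class name above):

```xml
<!-- Inside maven-shade-plugin's <configuration>: rewrite Guava references
     so they match the prefix used by hadoop-client-api/-runtime. -->
<relocations>
  <relocation>
    <pattern>com.google.common</pattern>
    <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
  </relocation>
</relocations>
```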







Re: [DISCUSS] Moving branch-2 to java 8

2019-01-28 Thread Vinod Kumar Vavilapalli
The community made a decision a long time ago that we'd like to keep 
compatibility & so tie branch-2 to Java 7, but do Java 8+ only work on 3.x.

I always assumed that most (all?) downstream users build branch-2 on JDK 7 
only. Can anyone confirm? If so, there may be an easier way to address these 
test issues.

+Vinod

> On Jan 28, 2019, at 11:24 AM, Jonathan Hung  wrote:
> 
> Hi folks,
> 
> Forking a discussion based on HADOOP-15711. To summarize, there are issues
> with branch-2 tests running on java 7 (openjdk) which don't exist on java
> 8. From our testing, the build can pass with openjdk 8.
> 
> For branch-3, the work to move the build to use java 8 was done in
> HADOOP-14816 as part of the Dockerfile OS version change. HADOOP-16053 was
> filed to backport this OS version change to branch-2 (but without the java
> 7 -> java 8 change). So my proposal is to also make the java 7 -> java 8
> version change in branch-2.
> 
> As mentioned in HADOOP-15711, the main issue is around source and binary
> compatibility. I don't currently have a great answer, but one initial
> thought is to build source/binary against java 7 to ensure compatibility
> and run the rest of the build as java 8.
> 
> Thoughts?
> 
> Jonathan Hung





[DISCUSS] Moving branch-2 to java 8

2019-01-28 Thread Jonathan Hung
Hi folks,

Forking a discussion based on HADOOP-15711. To summarize, there are issues
with branch-2 tests running on java 7 (openjdk) which don't exist on java
8. From our testing, the build can pass with openjdk 8.

For branch-3, the work to move the build to use java 8 was done in
HADOOP-14816 as part of the Dockerfile OS version change. HADOOP-16053 was
filed to backport this OS version change to branch-2 (but without the java
7 -> java 8 change). So my proposal is to also make the java 7 -> java 8
version change in branch-2.

As mentioned in HADOOP-15711, the main issue is around source and binary
compatibility. I don't currently have a great answer, but one initial
thought is to build source/binary against java 7 to ensure compatibility
and run the rest of the build as java 8.
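One way to express "compile for 7, build on 8" is javac's -source/-target pair. A hypothetical maven-compiler-plugin fragment, with the usual caveat spelled out in the comment:

```xml
<!-- Run the build on JDK 8 but keep branch-2 sources and bytecode at the
     Java 7 level. Caveat: without a JDK 7 bootclasspath (or the newer
     javac --release flag), -source/-target alone does not catch uses of
     library APIs added after Java 7. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.7</source>
    <target>1.7</target>
  </configuration>
</plugin>
```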

Thoughts?

Jonathan Hung


[jira] [Created] (HADOOP-16079) Token.toString faulting if any token listed can't load.

2019-01-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16079:
---

 Summary: Token.toString faulting if any token listed can't load.
 Key: HADOOP-16079
 URL: https://issues.apache.org/jira/browse/HADOOP-16079
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.1.2, 3.2.1
Reporter: Steve Loughran
Assignee: Steve Loughran


The patch in HADOOP-15808 turns out not to be enough; Token.toString() fails if 
any token in the service list isn't known.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/

[Jan 27, 2019 4:59:28 PM] (stevel) HADOOP-16075. Upgrade checkstyle version to 
8.16.
[Jan 27, 2019 7:18:30 PM] (arp) HDDS-989. Check Hdds Volumes for errors. 
Contributed by Arpit Agarwal.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.hdfs.TestClientMetrics
   hadoop.hdfs.web.TestWebHdfsTimeouts

   cc:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-compile-javac-root.txt [336K]

   checkstyle:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-patch-hadolint.txt [8.0K]

   pathlen:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/pathlen.txt [12K]

   pylint:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-patch-pylint.txt [88K]

   shellcheck:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/whitespace-eol.txt [9.3M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/whitespace-tabs.txt [1.1M]

   findbugs:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-hdds_client.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_common.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [20K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [8.0K]

   javadoc:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/diff-javadoc-javadoc-root.txt [752K]

   unit:
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [328K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [84K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-hdds_client.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1030/artifact/out/patch-unit-hadoop-hdds_container-service.txt [4.0K]