Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1384/

No changes

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-16811) Use JUnit TemporaryFolder Rule in TestFileUtils

2020-01-17 Thread David Mollitor (Jira)
David Mollitor created HADOOP-16811:
---

 Summary: Use JUnit TemporaryFolder Rule in TestFileUtils
 Key: HADOOP-16811
 URL: https://issues.apache.org/jira/browse/HADOOP-16811
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Reporter: David Mollitor
Assignee: David Mollitor
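For context, the improvement proposed here replaces hand-rolled temp-directory setup/teardown with JUnit 4's TemporaryFolder rule. A minimal sketch of the pattern (the class and test names below are illustrative, not taken from TestFileUtils itself):

```java
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TestFileUtilsExample {
  // JUnit creates a fresh directory before each test method and deletes it
  // recursively afterwards, replacing manual mkdir/delete cleanup logic.
  @Rule
  public TemporaryFolder tmp = new TemporaryFolder();

  @Test
  public void testCreateFileInTempDir() throws Exception {
    File f = tmp.newFile("data.txt");
    assertTrue(f.exists());
    assertTrue(f.getParentFile().isDirectory());
  }
}
```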






--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs

2020-01-17 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-16810:
--

 Summary: Increase entropy to improve cryptographic randomness on 
precommit Linux VMs
 Key: HADOOP-16810
 URL: https://issues.apache.org/jira/browse/HADOOP-16810
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


I was investigating a JUnit test (MAPREDUCE-7079: 
TestMRIntermediateDataEncryption is failing in precommit builds) that was 
consistently hanging on Linux VMs and failing Mapreduce pre-builds.
I found that the test slows down or hangs indefinitely whenever Java reads 
from the blocking random device ({{/dev/random}}).
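To see which call is actually at risk: on Linux the default SecureRandom produces output without blocking, but seed material comes from the configured {{securerandom.source}}, so {{generateSeed()}} is what stalls when {{/dev/random}} runs out of entropy. A minimal sketch (class name is illustrative):

```java
import java.security.SecureRandom;

public class RandomSourceDemo {
  public static void main(String[] args) {
    // The default SecureRandom (NativePRNG on Linux) reads seed material from
    // the configured securerandom.source; with /dev/random and low entropy,
    // generateSeed() is the call that can block indefinitely, while
    // nextBytes() stays on the non-blocking output path.
    SecureRandom sr = new SecureRandom();
    byte[] buf = new byte[16];
    sr.nextBytes(buf);                 // non-blocking
    System.out.println("algorithm = " + sr.getAlgorithm());
    // sr.generateSeed(16);            // this is the call that can hang
  }
}
```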

I explored two different ways to get that test case to work properly on my 
local Linux VM running RHEL 7:
# Install "haveged" and "rng-tools" on the virtual machine, then start the 
rngd service ({{sudo service rngd start}}). This fixes the problem for every 
component on the image, including Java, native code, and anything else that 
consumes entropy.
# Change the Java security configuration to read from urandom:
{code:bash}
sudo vim $JAVA_HOME/jre/lib/security/java.security
## Change the line "securerandom.source=file:/dev/random" to read:
securerandom.source=file:/dev/./urandom
## Alternatively, set the equivalent flag per-process:
## -Djava.security.egd=file:/dev/./urandom
{code}

The first solution is preferable because it fixes the problem for everything 
that uses SSL/TLS or otherwise depends on encryption.

Since the precommit build runs in Docker, it would be best to mount 
{{/dev/urandom}} from the host as {{/dev/random}} inside the container:

{code:bash}
docker run -v /dev/urandom:/dev/random
{code}

For Yetus, we need to add the mount to the {{DOCKER_EXTRAARGS}} as follows:

{code:bash}
DOCKER_EXTRAARGS+=("-v" "/dev/urandom:/dev/random")
{code}

 ...






Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/

[Jan 16, 2020 3:58:57 AM] (aajisaka) MAPREDUCE-7247. Modify 
HistoryServerRest.html content,change The job




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out//testptch/patchprocess/maven-patch-checkstyle-root.txt
  []

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/patch-mvnsite-root.txt
  [0]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/569/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   

[jira] [Resolved] (HADOOP-16807) Enable Filesystem caching to optionally include URI Path

2020-01-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16807.
-
Resolution: Duplicate

Closing as a duplicate of the WONTFIX. I don't believe this is the right 
solution to the underlying problem, which is essentially "multiple AWS 
credential chains in the same bucket". We don't need to expose this to the 
rest of the FS world, especially as it would lack the dynamism I'd like.

> Enable Filesystem caching to optionally include URI Path
> 
>
> Key: HADOOP-16807
> URL: https://issues.apache.org/jira/browse/HADOOP-16807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: David Dudley
>Priority: Major
>
> Implementing AWSCredentialsProviders that dynamically retrieve STS tokens 
> based on the URI being accessed fail if Filesystem caching is enabled and the 
> job accesses more than one URI Path within the same bucket.
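For context on why this fails: Hadoop's FileSystem cache keys instances on scheme and authority only, so every path within one bucket resolves to the same cached instance. A rough illustration of that keying (this is a simplified sketch, not the actual Hadoop code; the real FileSystem.Cache.Key also includes the UGI and a uniqueness counter):

```java
import java.net.URI;
import java.util.Objects;

// Simplified sketch of how FileSystem.Cache keys instances: the URI path is
// deliberately excluded, so s3a://bucket/pathA and s3a://bucket/pathB share
// one cache entry (and therefore one credential chain).
final class CacheKeySketch {
  final String scheme;
  final String authority;

  CacheKeySketch(URI uri) {
    this.scheme = uri.getScheme() == null ? "" : uri.getScheme().toLowerCase();
    this.authority =
        uri.getAuthority() == null ? "" : uri.getAuthority().toLowerCase();
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof CacheKeySketch
        && scheme.equals(((CacheKeySketch) o).scheme)
        && authority.equals(((CacheKeySketch) o).authority);
  }

  @Override
  public int hashCode() {
    return Objects.hash(scheme, authority);
  }

  public static void main(String[] args) {
    CacheKeySketch a = new CacheKeySketch(URI.create("s3a://bucket/pathA"));
    CacheKeySketch b = new CacheKeySketch(URI.create("s3a://bucket/pathB"));
    System.out.println(a.equals(b)); // prints "true": same scheme + authority
  }
}
```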






[jira] [Created] (HADOOP-16809) Stripping Submarine site from Hadoop site

2020-01-17 Thread Wanqiang Ji (Jira)
Wanqiang Ji created HADOOP-16809:


 Summary: Stripping Submarine site from Hadoop site
 Key: HADOOP-16809
 URL: https://issues.apache.org/jira/browse/HADOOP-16809
 Project: Hadoop Common
  Issue Type: Task
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


Now that Submarine is moving out of Hadoop and has its own repo, it's time to 
strip the Submarine site from the Hadoop site.


