Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1404/

[Feb 6, 2020 11:25:06 AM] (snemeth) YARN-10101. Support listing of aggregated 
logs for containers belonging
[Feb 6, 2020 2:13:25 PM] (github) HADOOP-16832. S3Guard testing doc: Add 
required parameters for S3Guard
[Feb 6, 2020 6:41:06 PM] (tmarq) HADOOP-16845: Disable
[Feb 6, 2020 6:48:00 PM] (tmarq) HADOOP-16825: 
ITestAzureBlobFileSystemCheckAccess failing. Contributed
[Feb 7, 2020 9:21:24 AM] (github) HADOOP-16596. [pb-upgrade] Use shaded 
protobuf classes from
[Feb 7, 2020 10:30:06 AM] (aajisaka) HADOOP-16834. Replace 
com.sun.istack.Nullable with
[Feb 7, 2020 10:32:10 AM] (github) Bump checkstyle from 8.26 to 8.29 (#1828)
[Feb 7, 2020 7:47:59 PM] (ayushsaxena) HDFS-15136. LOG flooding in secure mode 
when Cookies are not set in

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: Why capacity scheduler not support DRF?

2020-02-07 Thread epa...@apache.org
Hi,

I didn't see anyone respond to your question. If you already got a response,
please ignore this one.

The Capacity Scheduler does support DRF. You can specify which resource
calculator to use by setting the yarn.scheduler.capacity.resource-calculator
property:

  &lt;property&gt;
    &lt;name&gt;yarn.scheduler.capacity.resource-calculator&lt;/name&gt;
    &lt;value&gt;org.apache.hadoop.yarn.util.resource.DominantResourceCalculator&lt;/value&gt;
  &lt;/property&gt;

The DefaultResourceCalculator is used by default, so if you want DRF you
need to set the resource-calculator property as shown above.
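For intuition, here is a minimal sketch (in Python, purely illustrative, not YARN's actual Java code) of the dominant-resource comparison that DominantResourceCalculator performs; the resource names and cluster totals are made-up examples:

```python
# Sketch of DRF's core idea: rank allocations by their *dominant* share,
# i.e. the largest fraction any single resource takes of the cluster total.
# This mirrors what DominantResourceCalculator does conceptually; it is
# not the Hadoop implementation.

def dominant_share(request, cluster):
    """Return the max fraction of any resource the request consumes."""
    return max(request[r] / cluster[r] for r in cluster)

cluster = {"memory_mb": 100_000, "vcores": 100}

# App A is memory-heavy, App B is CPU-heavy.
app_a = {"memory_mb": 20_000, "vcores": 5}   # dominant resource: memory
app_b = {"memory_mb": 5_000, "vcores": 30}   # dominant resource: vcores

# A memory-only calculator would rank A's allocation as larger (20% vs 5%);
# DRF ranks B's as larger because its vcore share (30%) dominates.
print(dominant_share(app_a, cluster))  # 0.2
print(dominant_share(app_b, cluster))  # 0.3
```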

Hope that helps,
-Eric

 On Friday, January 31, 2020, 5:04:57 AM CST, 周康  
wrote: 

Why does the Capacity Scheduler not support DRF, given that the
FairScheduler does?

-- 
Best regards,
周康




[jira] [Resolved] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2020-02-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16410.
-
Resolution: Fixed

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --
>
> Key: HADOOP-16410
> URL: https://issues.apache.org/jira/browse/HADOOP-16410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Jose Luis Pedrosa
>Priority: Minor
> Fix For: 3.2.2
>
>
>  The openjdk8 Docker image is based on Alpine 3.9, which means the shipped 
> version of libssl is 1.1.1b-r1:
>   
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
> {noformat}
> The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by 
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version 
> OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking 
> for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, building a Docker image with an updated version of 
> wildfly-openssl (1.0.7.Final) solves the issue.
>  
>  
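As a quick way to check whether an image carries the affected jar, a small Python sketch that scans a lib directory for the bundled wildfly-openssl version; the directory path is an assumption, so adjust it for your Hadoop/Spark layout:

```python
import os
import re

# Parse "wildfly-openssl-<version>.Final.jar" filenames and flag versions
# below 1.0.7, which JBEAP-16425 reports as broken against OpenSSL 1.1.1b.
JAR_RE = re.compile(r"wildfly-openssl-(\d+)\.(\d+)\.(\d+)\.Final\.jar")

def wildfly_version(jar_name):
    """Return the (major, minor, patch) tuple, or None for other jars."""
    m = JAR_RE.fullmatch(jar_name)
    return tuple(int(g) for g in m.groups()) if m else None

def is_affected(jar_name):
    v = wildfly_version(jar_name)
    return v is not None and v < (1, 0, 7)

# Example path only; the real location depends on the image layout.
lib_dir = "/opt/hadoop/share/hadoop/tools/lib"
if os.path.isdir(lib_dir):
    for jar in os.listdir(lib_dir):
        if is_affected(jar):
            print(f"affected jar found: {jar}")
```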



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Resolved] (HADOOP-16596) [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency

2020-02-07 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-16596.

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
 Release Note: All protobuf classes will be used from the 
hadoop-shaded-protobuf_3_7 artifact, with package prefix 
'org.apache.hadoop.thirdparty.protobuf' instead of 'com.google.protobuf'.
   Resolution: Fixed

Merged to trunk. Thanks, everyone, for the reviews.

> [pb-upgrade] Use shaded protobuf classes from hadoop-thirdparty dependency
> --
>
> Key: HADOOP-16596
> URL: https://issues.apache.org/jira/browse/HADOOP-16596
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Fix For: 3.3.0
>
>
> Use the shaded protobuf classes from "hadoop-thirdparty" in hadoop codebase.
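The relocation described in the release note can be illustrated with a small sketch (Python, purely illustrative; the real relocation happens at build time via jar shading, not at runtime):

```python
# hadoop-thirdparty shades protobuf classes under a new package prefix.
# This sketch shows the name mapping that shading applies to every
# relocated class.
OLD_PREFIX = "com.google.protobuf"
NEW_PREFIX = "org.apache.hadoop.thirdparty.protobuf"

def relocate(class_name):
    """Rewrite a protobuf class name to its shaded location."""
    if class_name.startswith(OLD_PREFIX + "."):
        return NEW_PREFIX + class_name[len(OLD_PREFIX):]
    return class_name

print(relocate("com.google.protobuf.Message"))
# -> org.apache.hadoop.thirdparty.protobuf.Message
```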



