[DISCUSS] A unified and open Hadoop community sync up schedule?

2019-06-07 Thread Wangda Tan
Hi Hadoop-devs,

Previously we had a regular YARN community sync up (1 hr, biweekly, but not
open to the public). Recently, because of changes in our schedules, fewer
folks have shown up at the sync up over the last several months.

I saw the K8s community does a pretty good job of running their SIG meetings:
there are regular meetings for different topics, with notes, agendas, etc. For
example:
https://docs.google.com/document/d/13mwye7nvrmV11q9_Eg77z-1w3X7Q1GTbslpml4J7F3A/edit


For the Hadoop community, there are fewer such regular meetings open to the
public, except for the Ozone project and offline meetups or Birds-of-a-Feather
sessions at Hadoop/DataWorks Summit. Recently a few folks joined DataWorks
Summit in Washington DC and Barcelona, and lots (50+) of folks joined the
Ozone/Hadoop/YARN BoFs and asked (good) questions about roadmaps. I think it
is important to open such conversations to the public and let more
folks/companies join.

I discussed this with a small group of community members and wrote a short
proposal about the form, time and topics of the community sync ups; thanks to
everybody who has contributed to the proposal! Please feel free to add your
thoughts to the proposal Google doc.

Especially for the following parts:
- If you are interested in running any of the community sync-ups, please put
your name in the table inside the proposal. We need more volunteers to help
run the sync-ups in different timezones.
- Please add suggestions on the time, frequency and themes, and feel free to
share your thoughts on whether we should do sync ups for other topics not
covered by the proposal.

Link to the Proposal Google doc


Thanks,
Wangda Tan


[jira] [Reopened] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reopened HADOOP-16095:


Found an issue with distcp backward compatibility, opened HADOOP-16356 to track 
required changes.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN services may need to call into WebHDFS or the YARN 
> REST API on behalf of the user using web protocols. It would be good to support 
> an impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in the RPC layer.
> The calling service's credential is verified as a proxy user coming from a 
> trusted host by checking the Hadoop proxy user ACL on the server side. If the 
> proxy user ACL allows the proxy user to become the doAs user, the HttpRequest 
> object will report REMOTE_USER as the doAs user. This feature enables web 
> application logic to be written with minimal changes, calling the Hadoop API 
> within a UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (including when impersonation is not allowed)
> h2. Proxy User ACL requirement
> Proxy user Kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and the HTTP request 
> origin are both validated against the *hadoop.proxyuser.yarn.hosts* ACL. The 
> doAs user's group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This ensures 
> the caller is coming from an authorized host and belongs to an authorized group.
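
For illustration, here is a minimal server-side sketch of the doAs flow
described above, built on the existing UserGroupInformation and ProxyUsers
APIs; the helper method and its parameters are hypothetical, not part of the
patch:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class DoAsSketch {
  // remoteUser: the Kerberos-authenticated service principal, e.g. "yarn"
  // doAsUser:   the user named in ?doAs= or the x-hadoop-doas header
  // remoteAddr: origin address of the HTTP request
  static <T> T runAs(Configuration conf, String remoteUser, String doAsUser,
      String remoteAddr, PrivilegedExceptionAction<T> action) throws Exception {
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
    UserGroupInformation realUser = UserGroupInformation.createRemoteUser(remoteUser);
    UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(doAsUser, realUser);
    // Throws an AuthorizationException unless hadoop.proxyuser.<user>.hosts
    // and .groups/.users permit this impersonation from remoteAddr.
    ProxyUsers.authorize(proxyUgi, remoteAddr);
    // All Hadoop API calls inside 'action' now execute as the doAs user.
    return proxyUgi.doAs(action);
  }
}
{code}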






[jira] [Created] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-07 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16356:
--

 Summary: Distcp with webhdfs is not working with 
ProxyUserAuthenticationFilter or AuthenticationFilter
 Key: HADOOP-16356
 URL: https://issues.apache.org/jira/browse/HADOOP-16356
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Eric Yang


When distcp is running with webhdfs://, no delegation token is issued to the 
MapReduce task, and the task cannot authenticate because it does not have a 
Kerberos TGT.

This stack trace is thrown when the MapReduce task contacts WebHDFS:

{code}
Error: org.apache.hadoop.security.AccessControlException: Authentication 
required
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
{code}

There are two proposals (a sketch of option 1 follows below):

1. Add an API to issue a delegation token that can be passed along to WebHDFS, 
maintaining backward compatibility.
2. Have the MapReduce task log in to Kerberos and then perform the WebHDFS fetch.
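
A minimal sketch of option 1, assuming the tokens are collected on the job
client while it still holds a TGT; the helper method and the "yarn" renewer
are illustrative, not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class TokenSketch {
  static void addWebHdfsTokens(Job job, String webhdfsUri) throws Exception {
    Configuration conf = job.getConfiguration();
    FileSystem fs = new Path(webhdfsUri).getFileSystem(conf);
    // Fetch delegation tokens while the client still holds a Kerberos TGT,
    // and ship them in the job credentials so map tasks can authenticate
    // to WebHDFS without a TGT of their own.
    fs.addDelegationTokens("yarn" /* illustrative renewer */, job.getCredentials());
  }
}
{code}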








Re: MapReduce TeraSort fails on S3

2019-06-07 Thread Steve Loughran
(Prabhu and I will work on this online; if HADOOP-16058 is in, then it is
probably just a test setup problem.)

On Fri, Jun 7, 2019 at 3:18 PM Prabhu Joseph 
wrote:

> Hi,
>
>  MapReduce TeraSort Job fails on S3 with Output PathExistsException.
> Is this a known issue?
>
> Thanks,
> Prabhu Joseph
>
>
> [hrt_qa@hostname root]$ yarn jar
>
> /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples-3.1.1.7.0.0.0-115.jar
> terasort s3a:/bucket/INPUT s3a://bucket/OUTPUT
>
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of
> YARN_OPTS.
>
> 19/06/07 14:13:11 INFO terasort.TeraSort: starting
>
> 19/06/07 14:13:12 WARN impl.MetricsConfig: Cannot locate configuration:
> tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
>
> 19/06/07 14:13:12 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot
> period at 10 second(s).
>
> 19/06/07 14:13:12 INFO impl.MetricsSystemImpl: s3a-file-system metrics
> system started
>
> 19/06/07 14:13:14 INFO input.FileInputFormat: Total input files to process
> : 2
>
> Spent 396ms computing base-splits.
>
> Spent 3ms computing TeraScheduler splits.
>
> Computing input splits took 400ms
>
> Sampling 2 splits of 2
>
> Making 80 from 1 sampled records
>
> Computing parititions took 685ms
>
> Spent 1088ms computing partitions.
>
> 19/06/07 14:13:15 INFO client.RMProxy: Connecting to ResourceManager at
> hostname:8032
>
> 19/06/07 14:13:17 INFO mapreduce.JobResourceUploader: Disabling Erasure
> Coding for path: /user/hrt_qa/.staging/job_1559891760159_0011
>
> 19/06/07 14:13:17 INFO mapreduce.JobSubmitter: number of splits:2
>
> 19/06/07 14:13:17 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1559891760159_0011
>
> 19/06/07 14:13:17 INFO mapreduce.JobSubmitter: Executing with tokens: []
>
> 19/06/07 14:13:18 INFO conf.Configuration: resource-types.xml not found
>
> 19/06/07 14:13:18 INFO resource.ResourceUtils: Unable to find
> 'resource-types.xml'.
>
> 19/06/07 14:13:18 INFO impl.YarnClientImpl: Submitted application
> application_1559891760159_0011
>
> 19/06/07 14:13:18 INFO mapreduce.Job: The url to track the job:
> http://hostname:8088/proxy/application_1559891760159_0011/
>
> 19/06/07 14:13:18 INFO mapreduce.Job: Running job: job_1559891760159_0011
>
> 19/06/07 14:13:33 INFO mapreduce.Job: Job job_1559891760159_0011 running in
> uber mode : false
>
> 19/06/07 14:13:33 INFO mapreduce.Job:  map 0% reduce 0%
>
> 19/06/07 14:13:34 INFO mapreduce.Job: Job job_1559891760159_0011 failed
> with state FAILED due to: Job setup failed :
> org.apache.hadoop.fs.PathExistsException: `s3a://bucket/OUTPUT': Setting
> job as Task committer attempt_1559891760159_0011_m_00_0: Destination
> path exists and committer conflict resolution mode is "fail"
>
> at
>
> org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.failDestinationExists(StagingCommitter.java:878)
>
> at
>
> org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter.setupJob(DirectoryStagingCommitter.java:71)
>
> at
>
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:255)
>
> at
>
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:235)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> 19/06/07 14:13:34 INFO mapreduce.Job: Counters: 2
>
> Job Counters
>
> Total time spent by all maps in occupied slots (ms)=0
>
> Total time spent by all reduces in occupied slots (ms)=0
>
> 19/06/07 14:13:34 INFO terasort.TeraSort: done
>
> 19/06/07 14:13:34 INFO impl.MetricsSystemImpl: Stopping s3a-file-system
> metrics system...
>
> 19/06/07 14:13:34 INFO impl.MetricsSystemImpl: s3a-file-system metrics
> system stopped.
>
> 19/06/07 14:13:34 INFO impl.MetricsSystemImpl: s3a-file-system metrics
> system shutdown complete.
>
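
For context: the directory staging committer aborts job setup when the
destination exists and the conflict resolution mode is "fail", as the trace
above shows. A minimal sketch of a workaround, assuming the old output really
may be discarded (the helper method is hypothetical):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConflictModeSketch {
  static void prepareOutput(Configuration conf, String out) throws Exception {
    // Option 1: tell the staging committer to replace an existing destination.
    conf.set("fs.s3a.committer.staging.conflict-mode", "replace");
    // Option 2: simply remove the old output before the job runs.
    Path output = new Path(out);
    FileSystem fs = output.getFileSystem(conf);
    if (fs.exists(output)) {
      fs.delete(output, true /* recursive */);
    }
  }
}
{code}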


[jira] [Created] (HADOOP-16355) ZookeeperMetadataStore: Use Zookeeper as S3Guard backend store

2019-06-07 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-16355:
--

 Summary: ZookeeperMetadataStore: Use Zookeeper as S3Guard backend 
store
 Key: HADOOP-16355
 URL: https://issues.apache.org/jira/browse/HADOOP-16355
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Mingliang Liu


When S3Guard was proposed, there were a couple of valid reasons to choose 
DynamoDB as its default backend store: 0) seamless integration as part of the 
AWS ecosystem, e.g. the client library; 1) it's a managed web service with zero 
operational cost, highly available and effectively infinitely scalable; 2) it's 
performant, with single-digit millisecond latency; 3) it's proven by Netflix's 
S3mper (no longer actively maintained) and by EMRFS (closed source). As the 
store is pluggable, it's possible to implement {{MetadataStore}} with another 
backend without changing semantics, beyond the existing null and in-memory 
local ones.

Here we propose {{ZookeeperMetadataStore}}, which uses Zookeeper as the S3Guard 
backend store. Its main motivation is to provide a new MetadataStore option 
which:
 # can be easily integrated, as Zookeeper is heavily used in the Hadoop community
 # offers affordable performance, as both the client and the Zookeeper ensemble 
are usually "local" to a Hadoop cluster (ZK/HBase/Hive etc.)
 # removes the DynamoDB dependency
 # removes DynamoDB dependency

Obviously, not all use cases will prefer this to the default DynamoDB store. 
For example, ZK might not scale well if there are dozens of S3 buckets, each 
with millions of objects.

Our use case targets HBase storing HFiles on S3 instead of HDFS. A complete 
solution for HBase on S3 combines HBOSS (see HBASE-22149), which recovers the 
atomicity of metadata operations like rename, with S3Guard, which provides 
consistent enumeration of and access to object store bucket metadata. We would 
like to use Zookeeper as the backend store for both.
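
Since {{MetadataStore}} is pluggable, adopting the proposed store would be a
configuration change. A hypothetical sketch; the ZookeeperMetadataStore class
and the quorum key below do not exist yet:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3GuardZkSketch {
  static FileSystem openGuardedBucket(String bucketUri) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical: the class proposed in this JIRA, not yet implemented.
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.ZookeeperMetadataStore");
    // Hypothetical config key for the ZK ensemble the store would talk to.
    conf.set("fs.s3a.s3guard.zookeeper.quorum", "zk1:2181,zk2:2181,zk3:2181");
    return new Path(bucketUri).getFileSystem(conf);
  }
}
{code}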






[jira] [Resolved] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout

2019-06-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15888.
-
Resolution: Duplicate

> ITestDynamoDBMetadataStore can leak (large) DDB tables in test 
> failures/timeout
> ---
>
> Key: HADOOP-15888
> URL: https://issues.apache.org/jira/browse/HADOOP-15888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: Screen Shot 2018-10-30 at 17.32.43.png
>
>
> This is me doing some backporting of patches from branch-3.2, so it may be an 
> intermediate condition, but:
> # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore
> # so I set it up to work with the right config opts (table and region)
> # but the tests were timing out
> # looking at the DDB tables in the AWS console showed a number of DDB tables 
> ("testProvisionTable", etc.) created, each with 500 read / 100 write capacity 
> (i.e. ~$50/month)
> I haven't replicated this in trunk/branch-3.2 itself, but it's clearly 
> dangerous. At the very least, we should use a capacity of 1 R/W in all 
> creations, so the cost of a test failure is negligible, and then we should 
> document the risk and best practice.
> Also: use "s3guard" as the table prefix to make its origin clear.






[jira] [Resolved] (HADOOP-15563) S3Guard to support creating on-demand DDB tables

2019-06-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15563.
-
   Resolution: Fixed
Fix Version/s: 3.3.0
 Release Note: S3Guard now defaults to creating DynamoDB tables as 
"On-Demand", rather than with a prepaid IO capacity. This reduces costs when 
idle to only the storage of the metadata entries, while delivering 
significantly faster performance during query planning and other bursts of IO. 
Consult the S3Guard documentation for further details.

Committed to trunk after a +1 from Sean on the PR. Thanks!

> S3Guard to support creating on-demand DDB tables
> 
>
> Key: HADOOP-15563
> URL: https://issues.apache.org/jira/browse/HADOOP-15563
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> To keep costs down on DDB, autoscaling is a key feature: you set the max 
> values and, when idle, you don't get billed, *at the cost of delayed scale-up 
> time and the risk of not getting the max value when AWS is busy*.
> It can be done from the AWS web UI, but not in the s3guard init and 
> set-capacity calls.
> It can be done [through the 
> API|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.HowTo.SDK.html]
> Usual issues then: wiring up, CLI params, testing. It'll be hard to test.
> Fully support on-demand DDB tables in S3Guard (an SDK-level sketch follows 
> below):
> * create (0, 0) will create an on-demand table.
> * set-capacity (0, 0) will switch an existing table to on-demand.
> * once a table is on-demand, any set-capacity command other than (0, 0) will 
> then fail.
> * when loading a table, note whether it is on-demand or not.
> * if on-demand, prune() no longer bothers to throttle requests by sleeping.
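
For reference, "on-demand" maps onto DynamoDB's PAY_PER_REQUEST billing mode.
A minimal SDK-level sketch, assuming AWS SDK 1.11.452 or later; the table name
is illustrative and the key schema follows S3Guard's parent/child layout:

{code:java}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

public class OnDemandTableSketch {
  public static void main(String[] args) {
    AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
    // PAY_PER_REQUEST is what "on-demand" means at the SDK level: no
    // provisioned read/write capacity, billing is per request instead.
    ddb.createTable(new CreateTableRequest()
        .withTableName("s3guard-example") // hypothetical table name
        .withBillingMode(BillingMode.PAY_PER_REQUEST)
        .withAttributeDefinitions(
            new AttributeDefinition("parent", ScalarAttributeType.S),
            new AttributeDefinition("child", ScalarAttributeType.S))
        .withKeySchema(
            new KeySchemaElement("parent", KeyType.HASH),
            new KeySchemaElement("child", KeyType.RANGE)));
  }
}
{code}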






[jira] [Created] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-07 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HADOOP-16354:
--

 Summary: Enable AuthFilter as default for WebHdfs
 Key: HADOOP-16354
 URL: https://issues.apache.org/jira/browse/HADOOP-16354
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


HADOOP-16314 provides a generic option to configure 
ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
services. If it is not configured, AuthenticationFilter is used for the NameNode 
UI and WebHDFS. This issue enables AuthFilter as the default for WebHDFS so that 
it remains backward compatible.
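
For illustration, a minimal sketch of enabling the new filter through the
standard hadoop.http.filter.initializers property; the fully-qualified class
name below should be verified against the HADOOP-16314 patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class FilterInitSketch {
  static Configuration withProxyUserFilter() {
    Configuration conf = new Configuration();
    // hadoop.http.filter.initializers is the standard hook for HTTP filters;
    // the class below is from HADOOP-16314 (verify the package in your build).
    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer");
    return conf;
  }
}
{code}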






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-06-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1160/

[Jun 6, 2019 3:13:39 AM] (xyao) HDDS-1612. Add 'scmcli printTopology' shell 
command to print datanode
[Jun 6, 2019 9:08:18 AM] (stevel) HADOOP-16117. Update AWS SDK to 1.11.563.
[Jun 6, 2019 9:21:55 AM] (sunilg) YARN-9573. DistributedShell cannot specify 
LogAggregationContext.
[Jun 6, 2019 10:24:13 AM] (elek) HDDS-1458. Create a maven profile to run fault 
injection tests.
[Jun 6, 2019 11:52:49 AM] (stevel) HADOOP-16344. Make DurationInfo public 
unstable.
[Jun 6, 2019 1:23:37 PM] (31469764+bshashikant) HDDS-1621. writeData in 
ChunkUtils should not use
[Jun 6, 2019 1:59:01 PM] (wwei) YARN-9590. Correct incompatible, incomplete and 
redundant activities.
[Jun 6, 2019 2:00:00 PM] (elek) HDDS-1645. Change the version of Pico CLI to 
the latest 3.x release -
[Jun 6, 2019 4:49:31 PM] (stevel) Revert "HADOOP-16344. Make DurationInfo 
public unstable."
[Jun 6, 2019 5:13:36 PM] (hanishakoneru) HDDS-1605. Implement AuditLogging for 
OM HA Bucket write requests.
[Jun 6, 2019 5:20:28 PM] (inigoiri) HDFS-14527. Stop all DataNodes may result 
in NN terminate. Contributed
[Jun 6, 2019 6:06:48 PM] (nanda) HDDS-1201. Reporting Corruptions in Containers 
to SCM (#912)
[Jun 6, 2019 6:13:29 PM] (nanda) HDDS-1647 : Recon config tag does not show up 
on Ozone UI. (#914)
[Jun 6, 2019 6:17:59 PM] (nanda) HDDS-1652. HddsDispatcher should not shutdown 
volumeSet. Contributed by
[Jun 6, 2019 6:20:04 PM] (nanda) HDDS-1650. Fix Ozone tests leaking volume 
checker thread. Contributed by
[Jun 6, 2019 6:59:53 PM] (inigoiri) HDFS-14486. The exception classes in some 
throw statements do not
[Jun 6, 2019 7:14:47 PM] (xyao) HDDS-1490. Support configurable container 
placement policy through 'o…
[Jun 6, 2019 8:41:58 PM] (eyang) YARN-9581.  Fixed yarn logs cli to access RM2. 
Contributed
[Jun 7, 2019 1:27:41 AM] (aajisaka) MAPREDUCE-6794. Remove unused properties 
from TTConfig.java




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.yarn.service.TestServiceAM 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.ozone.container.common.impl.TestHddsDispatcher 
   hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis 
   hadoop.ozone.client.rpc.TestOzoneAtRestEncryption 
   hadoop.ozone.client.rpc.TestFailureHandlingByClient 
   hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException 
   hadoop.ozone.client.rpc.TestOzoneRpcClient 
   hadoop.ozone.client.rpc.TestSecureOzoneRpcClient 
   hadoop.hdds.scm.pipeline.TestRatisPipelineProvider 
   hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1160/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1160/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1160/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1160/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   

[jira] [Created] (HADOOP-16353) Backport ABFS changes from trunk to branch-3.2

2019-06-07 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16353:
---

 Summary: Backport ABFS changes from trunk to branch-3.2
 Key: HADOOP-16353
 URL: https://issues.apache.org/jira/browse/HADOOP-16353
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


We've been backporting ABFS patches from trunk to branch-3.2, but not all of 
them have made it in, mostly through errors of omission.

Backport all of those which make sense; at a glance, that is pretty much all of 
them.

Avoid incompatible JAR changes and the like, that is: commons-lang, mockito, and 
so on.






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-06-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [280K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/345/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   

[jira] [Created] (HADOOP-16352) Fix hugo warnings in hadoop-site

2019-06-07 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16352:
--

 Summary: Fix hugo warnings in hadoop-site
 Key: HADOOP-16352
 URL: https://issues.apache.org/jira/browse/HADOOP-16352
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
 Environment: Hugo v0.55.6
Reporter: Akira Ajisaka


https://github.com/apache/hadoop-site
{noformat}
$ hugo
Building sites … WARN 2019/06/07 18:53:18 Page's .BaseFileName is deprecated 
and will be removed in a future release. Use .File.BaseFileName.
WARN 2019/06/07 18:53:18 Page's .URL is deprecated and will be removed in a 
future release. Use .Permalink or .RelPermalink. If what you want is the front 
matter URL value, use .Params.url.
{noformat}


