[jira] [Resolved] (HADOOP-15472) Fix NPE in DefaultUpgradeComponentsFinder

2018-05-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad resolved HADOOP-15472.
---
Resolution: Invalid

> Fix NPE in DefaultUpgradeComponentsFinder 
> --
>
> Key: HADOOP-15472
> URL: https://issues.apache.org/jira/browse/HADOOP-15472
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> In current upgrades for Yarn native services, we do not support 
> addition/deletion of components during upgrade. On trying to upgrade with the 
> same number of components in the target spec as the current service spec, but 
> with one of the components having a new target spec and name, we see the 
> following NPE in the service AM logs:
> {noformat}
> 2018-05-15 00:10:41,489 [IPC Server handler 0 on 37488] ERROR 
> service.ClientAMService - Error while trying to upgrade service {} 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.lambda$findTargetComponentSpecs$0(UpgradeComponentsFinder.java:103)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.findTargetComponentSpecs(UpgradeComponentsFinder.java:100)
>   at 
> org.apache.hadoop.yarn.service.ServiceManager.processUpgradeRequest(ServiceManager.java:259)
>   at 
> org.apache.hadoop.yarn.service.ClientAMService.upgrade(ClientAMService.java:163)
>   at 
> org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPBServiceImpl.upgradeService(ClientAMProtocolPBServiceImpl.java:81)
>   at 
> org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2.callBlockingMethod(ClientAMProtocol.java:5972)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15472) Fix NPE in DefaultUpgradeComponentsFinder

2018-05-15 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created HADOOP-15472:
-

 Summary: Fix NPE in DefaultUpgradeComponentsFinder 
 Key: HADOOP-15472
 URL: https://issues.apache.org/jira/browse/HADOOP-15472
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


In current upgrades for Yarn native services, we do not support 
addition/deletion of components during upgrade. On trying to upgrade with the 
same number of components in the target spec as the current service spec, but 
with one of the components having a new target spec and name, we see the 
following NPE in the service AM logs:

{noformat}
2018-05-15 00:10:41,489 [IPC Server handler 0 on 37488] ERROR 
service.ClientAMService - Error while trying to upgrade service {} 
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.lambda$findTargetComponentSpecs$0(UpgradeComponentsFinder.java:103)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.findTargetComponentSpecs(UpgradeComponentsFinder.java:100)
at 
org.apache.hadoop.yarn.service.ServiceManager.processUpgradeRequest(ServiceManager.java:259)
at 
org.apache.hadoop.yarn.service.ClientAMService.upgrade(ClientAMService.java:163)
at 
org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPBServiceImpl.upgradeService(ClientAMProtocolPBServiceImpl.java:81)
at 
org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2.callBlockingMethod(ClientAMProtocol.java:5972)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{noformat}
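
For context, a minimal sketch of the failure mode the stack trace points at, 
assuming the lambda at UpgradeComponentsFinder.java:103 resolves each target 
component against the current spec by name. All class and method names below 
are hypothetical stand-ins, not the actual Hadoop source: a renamed component 
makes the by-name lookup return null, and the unchecked dereference that 
follows reproduces the NPE.

{noformat}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for the service spec types; illustration only.
class Component {
  final String name;
  final String artifactVersion;

  Component(String name, String artifactVersion) {
    this.name = name;
    this.artifactVersion = artifactVersion;
  }
}

class UpgradeFinderSketch {
  static void findTargetComponents(List<Component> current, List<Component> target) {
    Map<String, Component> currentByName = new HashMap<>();
    for (Component c : current) {
      currentByName.put(c.name, c);
    }
    target.forEach(t -> {
      // get() returns null when a component was renamed in the target spec...
      Component existing = currentByName.get(t.name);
      // ...and dereferencing it without a null check reproduces the NPE above.
      if (!existing.artifactVersion.equals(t.artifactVersion)) {
        System.out.println("component " + t.name + " needs upgrade");
      }
    });
  }
}
{noformat}

A null check before the dereference, failing with an explicit "addition/deletion 
of components is not supported" error, would match the stated constraint.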



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Branch Proposal: HADOOP 15407: ABFS

2018-05-15 Thread Sean Busbey
apologies, copying back in common-dev@ with my question about the code.

On Tue, May 15, 2018 at 2:36 PM, Sean Busbey  wrote:

> >  Internal constraints prevented this feature from being developed in
> Apache, so we want to ensure that all the code is discussed, maintainable,
> and documented by the community before it merges.
>
> Has this code gone through ASF IP Clearance already?
>
> On Tue, May 15, 2018 at 10:34 AM, Steve Loughran 
> wrote:
>
>> Hi
>>
>> Chris Douglas and I have a proposal for a short-lived feature branch
>> for the Azure ABFS connector to go into the hadoop-azure package. This will
>> connect to the new Azure storage service, which will ultimately replace the
>> one used by WASB. It's a big patch and, like all storage connectors, will
>> inevitably take time to stabilize (i.e. nobody ever gets seek() right, even
>> when we think we have).
>>
>> Thomas & Esfandiar will do the coding: they've already done the
>> paperwork. Chris, myself & anyone else interested can be involved in the
>> review and testing.
>>
>> Comments?
>>
>> -
>>
>> The initial HADOOP-15407 patch contains a new filesystem client for the
>> forthcoming Azure ABFS, which is intended to replace Azure WASB as the
>> Azure storage layer. The patch is large, as it contains the replacement
>> client, tests, and generated code.
>>
>> We propose a feature branch, so the module can be broken into salient,
>> reviewable chunks. Internal constraints prevented this feature from being
>> developed in Apache, so we want to ensure that all the code is discussed,
>> maintainable, and documented by the community before it merges.
>>
>> To effect this, we also propose adding two developers as branch
>> committers: Thomas Marquardt (tm...@microsoft.com) and Esfandiar Manii
>> (esma...@microsoft.com).
>>
>> Beyond normal feature branch activity and merge criteria for FS modules,
>> we want to add another merge criterion for ABFS. Some of the client APIs
>> are not GA. It seems reasonable to require that this client works with
>> public endpoints before it merges to trunk.
>>
>> To test the Blob FS driver, Blob FS team (including Esfandiar Manii and
>> Thomas Marquardt) in Azure Storage will need the MSDN subscription ID(s)
>> for all reviewers who want to run the tests. The ABFS team will then
>> whitelist the subscription ID(s) for the Blob FS Preview. At that time,
>> future storage accounts created will have the Blob FS endpoint,
>> .dfs.core.windows.net, which
>> the Blob FS driver relies on.
>>
>> This is a temporary state during the (current) Private Preview and the
>> early phases of Public Preview. In a few months, the whitelisting will not
>> be required and anyone will be able to create a storage account with access
>> to the Blob FS endpoint.
>>
>> Thomas and Esfandiar have been active in the Hadoop project working on
>> the WASB connector (see https://issues.apache.org/jira
>> /browse/HADOOP-14552). They understand the processes and requirements of
>> the software. Working on the branch directly will let them bring this
>> significant feature into the hadoop-azure module without disrupting
>> existing users.
>>
>
>
>
> --
> busbey
>



-- 
busbey


Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Chris Douglas
Yeah, consistently failing nightly builds are just burning resources.
Until someone starts working to fix it, we shouldn't keep submitting
the job. Too bad; I thought build times and resource usage were
becoming more manageable on branch-2.

If anyone has cycles to work on this, the job is here [1]. -C

[1]: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/

On Tue, May 15, 2018 at 10:27 AM, Allen Wittenauer
 wrote:
>
>> On May 15, 2018, at 10:16 AM, Chris Douglas  wrote:
>>
>> They've been failing for a long time. It can't install bats, and
>> that's fatal? -C
>
>
> The bats error is new and causes the build to fail enough that it 
> produces the email output.  For the past few months, it hasn’t been producing 
> email output at all because the builds have been timing out.  (The last 
> ‘good’ report was Feb 26.)  Since no one [*] is paying attention to them 
> enough to notice, I figured it was better to free up the cycles for the rest 
> of the ASF.
>
> * - I noticed a while back, but for various reasons I’ve mostly moved to only 
> working on Hadoop things where I’m getting paid.

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Allen Wittenauer

> On May 15, 2018, at 10:16 AM, Chris Douglas  wrote:
> 
> They've been failing for a long time. It can't install bats, and
> that's fatal? -C


The bats error is new and causes the build to fail enough that it 
produces the email output.  For the past few months, it hasn’t been producing 
email output at all because the builds have been timing out.  (The last ‘good’ 
report was Feb 26.)  Since no one [*] is paying attention to them enough to 
notice, I figured it was better to free up the cycles for the rest of the ASF. 

* - I noticed a while back, but for various reasons I’ve mostly moved to only 
working on Hadoop things where I’m getting paid.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[ANNOUNCE] Apache Hadoop 2.9.1 Release

2018-05-15 Thread Chen, Sammi
Hello everyone,

I am glad to announce that Apache Hadoop 2.9.1 has been released.

Apache Hadoop 2.9.1 is the next release in the Apache Hadoop 2.9 line. It 
includes 208 bug fixes, improvements and enhancements since the previous Apache 
Hadoop 2.9.0 release.

 - For major changes included in the Hadoop 2.9 line, please refer to the 
Hadoop 2.9.1 main page [1].
 - For more details about fixes and improvements in the 2.9.1 release, please 
refer to CHANGES [2] and RELEASENOTES [3].
 - For downloads, please go to the download page [4].

Thank you all for contributing to Apache Hadoop 2.9.1.


Lastly, thanks to Yongjun Zhang, Junping Du, Andrew Wang and Chris Douglas for 
your help and support.


Bests,
Sammi Chen

[1] http://hadoop.apache.org/docs/r2.9.1/index.html
[2] http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/CHANGES.2.9.1.html
[3] http://hadoop.apache.org/docs/r2.9.1/hadoop-project-dist/hadoop-common/release/2.9.1/RELEASENOTES.2.9.1.html
[4] http://hadoop.apache.org/releases.html#Download

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Chris Douglas
They've been failing for a long time. It can't install bats, and
that's fatal? -C

On Tue, May 15, 2018 at 9:43 AM, Allen Wittenauer
 wrote:
>
>
> FYI:
>
> I’m going to disable the branch-2 nightly jobs.
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-15 Thread Allen Wittenauer


FYI:

I’m going to disable the branch-2 nightly jobs.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Branch Proposal: HADOOP 15407: ABFS

2018-05-15 Thread Thomas Marquardt
A feature branch seems reasonable to me too.  Note that the WASB connector will 
continue to exist, and live side-by-side with the new Azure Blob Filesystem 
(ABFS) connector.  We will encourage users to move to the new ABFS connector, 
and all of our new feature and performance improvements will target the ABFS 
connector.  ABFS will perform better at no additional cost, so I expect current 
users to migrate in time.  The two connectors are compatible for mainline 
scenarios, but there are some uncommon features in WASB that we chose not to 
carry over in the initial implementation.


So we hope ABFS will replace the usage of WASB, but the WASB connector itself 
will continue to exist.  Maybe we can remove WASB in the future some day, if 
nobody is using it.


I can confirm that nobody ever gets seek() right. :)


Thanks,

Thomas


From: larry mccay 
Sent: Tuesday, May 15, 2018 8:44 AM
To: Steve Loughran
Cc: Hadoop Common
Subject: Re: [DISCUSS] Branch Proposal: HADOOP 15407: ABFS

This seems like a reasonable and effective use of a feature branch and
branch committers to me.


On Tue, May 15, 2018 at 11:34 AM, Steve Loughran 
wrote:

> Hi
>
> Chris Douglas and I have a proposal for a short-lived feature branch
> for the Azure ABFS connector to go into the hadoop-azure package. This will
> connect to the new Azure storage service, which will ultimately replace the
> one used by WASB. It's a big patch and, like all storage connectors, will
> inevitably take time to stabilize (i.e. nobody ever gets seek() right, even
> when we think we have).
>
> Thomas & Esfandiar will do the coding: they've already done the paperwork.
> Chris, myself & anyone else interested can be involved in the review and
> testing.
>
> Comments?
>
> -
>
> The initial HADOOP-15407 patch contains a new filesystem client for the
> forthcoming Azure ABFS, which is intended to replace Azure WASB as the
> Azure storage layer. The patch is large, as it contains the replacement
> client, tests, and generated code.
>
> We propose a feature branch, so the module can be broken into salient,
> reviewable chunks. Internal constraints prevented this feature from being
> developed in Apache, so we want to ensure that all the code is discussed,
> maintainable, and documented by the community before it merges.
>
> To effect this, we also propose adding two developers as branch
> committers: Thomas Marquardt (tm...@microsoft.com) and Esfandiar Manii
> (esma...@microsoft.com).
>
> Beyond normal feature branch activity and merge criteria for FS modules,
> we want to add another merge criterion for ABFS. Some of the client APIs
> are not GA. It seems reasonable to require that this client works with
> public endpoints before it merges to trunk.
>
> To test the Blob FS driver, Blob FS team (including Esfandiar Manii and
> Thomas Marquardt) in Azure Storage will need the MSDN subscription ID(s)
> for all reviewers who want to run the tests. The ABFS team will then
> whitelist the subscription ID(s) for the Blob FS Preview. At that time,
> future storage accounts created will have the Blob FS endpoint,
> .dfs.core.windows.net, which
> the Blob FS driver relies on.
>
> This is a temporary state during the (current) Private Preview and the
> early phases of Public Preview. In a few months, the whitelisting will not
> be required and anyone will be able to create a storage account with access
> to the Blob FS endpoint.
>
> Thomas and Esfandiar have been active in the Hadoop project working on the
> WASB connector (see https://issues.apache.org/jira/browse/HADOOP-14552).
> They understand the processes and requirements of the software. Working on
> the branch directly will let them bring this significant feature into the
> hadoop-azure module without disrupting existing users.
>


[jira] [Created] (HADOOP-15470) S3A staging committers to not log FNFEs on job abort listings

2018-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15470:
---

 Summary: S3A staging committers to not log FNFEs on job abort 
listings
 Key: HADOOP-15470
 URL: https://issues.apache.org/jira/browse/HADOOP-15470
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


When aborting a job, the staging committers list staged files in the cluster FS 
to abort...all exceptions are caught & downgraded to log events.

We shouldn't even log FNFEs except at debug level, as all it means is "the job 
is aborting before things got that far". Printing the full stack simply creates 
confusion about what the problem is.
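
A minimal sketch of the proposed downgrade, with hypothetical class and method 
names rather than the committer's actual API:

{noformat}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only; names are hypothetical, not the S3A committer's API.
class AbortListingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(AbortListingSketch.class);

  interface Listing {
    void run() throws IOException;
  }

  void listDuringAbort(Listing listing) {
    try {
      listing.run();
    } catch (FileNotFoundException e) {
      // The staging dir never got created: the job is aborting before
      // things got that far, so log quietly at debug.
      LOG.debug("Staging directory not found during abort", e);
    } catch (IOException e) {
      // Other failures stay visible, but the full stack only at debug.
      LOG.warn("Failed to list staged files during abort: {}", e.toString());
      LOG.debug("Full stack trace", e);
    }
  }
}
{noformat}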



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Branch Proposal: HADOOP 15407: ABFS

2018-05-15 Thread larry mccay
This seems like a reasonable and effective use of a feature branch and
branch committers to me.


On Tue, May 15, 2018 at 11:34 AM, Steve Loughran 
wrote:

> Hi
>
> Chris Douglas and I have a proposal for a short-lived feature branch
> for the Azure ABFS connector to go into the hadoop-azure package. This will
> connect to the new Azure storage service, which will ultimately replace the
> one used by WASB. It's a big patch and, like all storage connectors, will
> inevitably take time to stabilize (i.e. nobody ever gets seek() right, even
> when we think we have).
>
> Thomas & Esfandiar will do the coding: they've already done the paperwork.
> Chris, myself & anyone else interested can be involved in the review and
> testing.
>
> Comments?
>
> -
>
> The initial HADOOP-15407 patch contains a new filesystem client for the
> forthcoming Azure ABFS, which is intended to replace Azure WASB as the
> Azure storage layer. The patch is large, as it contains the replacement
> client, tests, and generated code.
>
> We propose a feature branch, so the module can be broken into salient,
> reviewable chunks. Internal constraints prevented this feature from being
> developed in Apache, so we want to ensure that all the code is discussed,
> maintainable, and documented by the community before it merges.
>
> To effect this, we also propose adding two developers as branch
> committers: Thomas Marquardt (tm...@microsoft.com) and Esfandiar Manii
> (esma...@microsoft.com).
>
> Beyond normal feature branch activity and merge criteria for FS modules,
> we want to add another merge criterion for ABFS. Some of the client APIs
> are not GA. It seems reasonable to require that this client works with
> public endpoints before it merges to trunk.
>
> To test the Blob FS driver, Blob FS team (including Esfandiar Manii and
> Thomas Marquardt) in Azure Storage will need the MSDN subscription ID(s)
> for all reviewers who want to run the tests. The ABFS team will then
> whitelist the subscription ID(s) for the Blob FS Preview. At that time,
> future storage accounts created will have the Blob FS endpoint,
> .dfs.core.windows.net, which
> the Blob FS driver relies on.
>
> This is a temporary state during the (current) Private Preview and the
> early phases of Public Preview. In a few months, the whitelisting will not
> be required and anyone will be able to create a storage account with access
> to the Blob FS endpoint.
>
> Thomas and Esfandiar have been active in the Hadoop project working on the
> WASB connector (see https://issues.apache.org/jira/browse/HADOOP-14552).
> They understand the processes and requirements of the software. Working on
> the branch directly will let them bring this significant feature into the
> hadoop-azure module without disrupting existing users.
>


[DISCUSS] Branch Proposal: HADOOP 15407: ABFS

2018-05-15 Thread Steve Loughran
Hi

Chris Douglas and I have a proposal for a short-lived feature branch for 
the Azure ABFS connector to go into the hadoop-azure package. This will connect 
to the new Azure storage service, which will ultimately replace the one used by 
WASB. It's a big patch and, like all storage connectors, will inevitably take 
time to stabilize (i.e. nobody ever gets seek() right, even when we think we 
have).

Thomas & Esfandiar will do the coding: they've already done the paperwork. 
Chris, myself & anyone else interested can be involved in the review and 
testing.

Comments?

-

The initial HADOOP-15407 patch contains a new filesystem client for the 
forthcoming Azure ABFS, which is intended to replace Azure WASB as the Azure 
storage layer. The patch is large, as it contains the replacement client, 
tests, and generated code.

We propose a feature branch, so the module can be broken into salient, 
reviewable chunks. Internal constraints prevented this feature from being 
developed in Apache, so we want to ensure that all the code is discussed, 
maintainable, and documented by the community before it merges.

To effect this, we also propose adding two developers as branch committers: 
Thomas Marquardt (tm...@microsoft.com) and Esfandiar Manii (esma...@microsoft.com).

Beyond normal feature branch activity and merge criteria for FS modules, we 
want to add another merge criterion for ABFS. Some of the client APIs are not 
GA. It seems reasonable to require that this client works with public endpoints 
before it merges to trunk.

To test the Blob FS driver, Blob FS team (including Esfandiar Manii and Thomas 
Marquardt) in Azure Storage will need the MSDN subscription ID(s) for all 
reviewers who want to run the tests. The ABFS team will then whitelist the 
subscription ID(s) for the Blob FS Preview. At that time, future storage 
accounts created will have the Blob FS endpoint, 
.dfs.core.windows.net, which the Blob 
FS driver relies on.

This is a temporary state during the (current) Private Preview and the early 
phases of Public Preview. In a few months, the whitelisting will not be 
required and anyone will be able to create a storage account with access to the 
Blob FS endpoint.

Thomas and Esfandiar have been active in the Hadoop project working on the WASB 
connector (see https://issues.apache.org/jira/browse/HADOOP-14552). They 
understand the processes and requirements of the software. Working on the 
branch directly will let them bring this significant feature into the 
hadoop-azure module without disrupting existing users.
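
For reviewers who get whitelisted, a minimal smoke test against the new 
endpoint might look like the sketch below. The abfs:// scheme and the 
fs.azure.account.key.* configuration key follow existing hadoop-azure 
conventions and are assumptions here; account, container, and key values are 
placeholders.

{noformat}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical smoke test against a whitelisted Blob FS endpoint.
public class AbfsSmokeTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Shared-key auth for the storage account (placeholder values).
    conf.set("fs.azure.account.key.myaccount.dfs.core.windows.net",
        "<access key>");
    FileSystem fs = FileSystem.get(
        URI.create("abfs://mycontainer@myaccount.dfs.core.windows.net/"), conf);
    // List the container root to confirm the endpoint is reachable.
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
{noformat}

Until the whitelisting requirement is lifted, this would only work for 
subscription IDs the ABFS team has enabled.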


[jira] [Resolved] (HADOOP-15468) The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.

2018-05-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-15468.
-
Resolution: Invalid

Please send your questions to common-dev@hadoop.apache.org.

> The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.
> 
>
> Key: HADOOP-15468
> URL: https://issues.apache.org/jira/browse/HADOOP-15468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.2
>Reporter: Wenming He
>Priority: Minor
> Fix For: 3.0.2
>
>
> When checking whether the overlay variable contains a deprecated key, why does 
> it check the value in the overlay directly instead of checking whether the 
> same key exists in the overlay? What is the logic here?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/782/

[May 14, 2018 6:24:01 AM] (littlezhou) Add 2.9.1 release notes and changes 
documents
[May 14, 2018 6:38:40 AM] (sammichen) Revert "Add 2.9.1 release notes and 
changes documents"
[May 14, 2018 7:14:02 AM] (sammi.chen) Add 2.9.1 release notes and changes 
documents
[May 14, 2018 3:29:31 PM] (sunilg) YARN-8271. [UI2] Improve labeling of certain 
tables. Contributed by
[May 14, 2018 4:05:23 PM] (naganarasimha_gr) YARN-8288. Fix wrong number of 
table columns in Resource Model doc.
[May 14, 2018 4:10:03 PM] (xyao) HDDS-29. Fix 
TestStorageContainerManager#testRpcPermission. Contributed
[May 14, 2018 4:28:39 PM] (xiao) HDFS-13539. DFSStripedInputStream NPE when 
reportCheckSumFailure.
[May 14, 2018 4:55:03 PM] (msingh) HDDS-19. Update ozone to latest ratis 
snapshot build
[May 14, 2018 5:12:08 PM] (hanishakoneru) HDFS-13544. Improve logging for 
JournalNode in federated cluster.
[May 14, 2018 6:08:42 PM] (haibochen) YARN-8130 Race condition when container 
events are published for KILLED




-1 overall


The following subsystems voted -1:
asflicense findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18039] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18601] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 35184] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 36053] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CreateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13089] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DatanodeBlockID$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1126] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 30491] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15748] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16224] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23421] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$KeyValue$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1767] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16726] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListKeyRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23958] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 21216] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutSmallFileResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 33434] 
   Useless control flow in 
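
For context, the method these findings point at is protobuf-generated builder 
boilerplate; the sketch below (illustrative, not copied from 
ContainerProtos.java) shows why FindBugs reads it as useless control flow:

{noformat}
// Sketch of the protobuf-generated builder boilerplate behind these findings.
final class GeneratedBuilderSketch {
  // Stand-in for com.google.protobuf.GeneratedMessageV3.alwaysUseFieldBuilders.
  static final boolean alwaysUseFieldBuilders = false;

  private void maybeForceBuilderInitialization() {
    // For messages with no nested field builders, the generated guard has an
    // empty body, so the branch has no effect: FindBugs' "useless control flow".
    if (alwaysUseFieldBuilders) {
    }
  }
}
{noformat}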

[jira] [Created] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15469:
---

 Summary: S3A directory committer commit job fails if _temporary 
directory created under dest
 Key: HADOOP-15469
 URL: https://issues.apache.org/jira/browse/HADOOP-15469
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
 Environment: spark test runs
Reporter: Steve Loughran
Assignee: Steve Loughran


The directory staging committer fails in commitJob() if any temporary files/dirs 
have been created under the destination. Spark work can create such a dir for 
placement of absolute files.

This is because commitJob() checks whether the dest dir exists, rather than 
whether it contains non-hidden files. As the comment says, "its kind of 
superfluous". More specifically, it means jobs which would commit successfully 
with the classic committer & overwrite=false will fail.

Proposed fix: remove the check
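
To illustrate the distinction, a sketch with a hypothetical helper (not the 
committer's code): a test for "destination contains non-hidden files" would 
tolerate a pre-created _temporary directory, where a bare existence check does 
not.

{noformat}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not the committer's code: true only if dest holds
// visible output, so a pre-created _temporary dir alone would not fail the job.
class DestCheckSketch {
  static boolean hasVisibleChildren(FileSystem fs, Path dest) throws IOException {
    try {
      for (FileStatus status : fs.listStatus(dest)) {
        String name = status.getPath().getName();
        if (!name.startsWith("_") && !name.startsWith(".")) {
          return true;  // real output already present
        }
      }
    } catch (FileNotFoundException e) {
      return false;  // dest absent: nothing to conflict with
    }
    return false;  // dest exists but contains only hidden entries
  }
}
{noformat}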



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-15250:
-

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15250-branch-3.1.patch, HADOOP-15250.00.patch, 
> HADOOP-15250.01.patch, HADOOP-15250.02.patch, HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox, HiveServer2, and 
> the HTTP YARN RM/ATS and MR History Server; and a cluster network on the 
> second network interface, which uses jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if the 
> traffic originates from nodes on the cluster network we return the internal 
> DNS record for the nodes. This all works fine with all the multi-homing 
> features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could occur, 
> but we noticed a problem almost immediately: when YARN attempted to talk 
> to the remote cluster, it bound outgoing traffic to the cluster network 
> interface, which IS NOT routable. After researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, in Client.java it looks as if it takes the hostname and 
> attempts to bind to whatever the hostname resolves to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which allows the OS routing table and 
> DNS to send the traffic out the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code bases. I have this 
> test fix below in my Hadoop test cluster.
> Client.java:
>       
> {noformat}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);  // highlighted in the report
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);  // highlighted in the report
>     }
>   }
> }
> {noformat}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY = 
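
The java.net.Socket behavior the report relies on can be demonstrated 
standalone (host and port below are placeholders):

{noformat}
import java.net.InetSocketAddress;
import java.net.Socket;

// Standalone demo of the bind(null) behavior cited above: a null bind
// address assigns an ephemeral port and lets the OS routing table pick
// the outgoing interface, instead of pinning the socket to whatever the
// local hostname resolves to.
public class BindNullDemo {
  public static void main(String[] args) throws Exception {
    try (Socket socket = new Socket()) {
      socket.bind(null);  // OS-chosen local address and port
      socket.connect(new InetSocketAddress("example.org", 80), 5000);
      System.out.println("local address chosen by the OS: "
          + socket.getLocalSocketAddress());
    }
  }
}
{noformat}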

Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-15 Thread 俊平堵
Thanks all who helped to verify and vote!


I give my binding +1 to conclude the vote for 2.8.4 RC0:

- Built from source and verified signatures

- Deployed a small distributed cluster and run some simple job, like: pi,
sleep, terasort, etc.

- Verified UI of daemons, like: NameNode, DataNode, ResourceManager,
NodeManager, etc


Now, our RC0 for 2.8.4 got:


7 binding +1s, from:

 Sammi Chen, Mingliang Liu, Rohith Sharma K S, Sunil G, Wangda Tan,
Brahma Reddy Battula, Junping Du


4 non-binding +1s, from:

Ajay Kumar, Gabor Bota, Takanobu Asanuma, Zsolt Venczel


and no -1s.


So I am glad to announce that the vote for 2.8.4 RC0 passes.


Thanks to everyone listed above who tried the release candidate and voted, and 
to all who helped with the 2.8.4 release effort in all kinds of ways.

I'll push the release bits and send out an announcement for 2.8.4 soon.


Thanks,


Junping


2018-05-09 1:41 GMT+08:00 俊平堵 :

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.4. This is our next maintenance release, following up 2.8.3. It includes 77
> important fixes and improvements.
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
>
> The RC tag in git is: release-2.8.4-RC0
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1118
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 5/14/2018 PST time.
>
> Thanks,
>
> Junping
>