Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-14 Thread Anu Engineer
+1, (Binding)

Deployed a pseudo-distributed cluster.
Tried out HDFS commands and verified everything works.
--Anu


On 1/14/19, 11:26 AM, "Virajith Jalaparti"  wrote:

Thanks Sunil and others who have worked on making this release happen!

+1 (non-binding)

- Built from source
- Deployed a pseudo-distributed one node cluster
- Ran basic wordcount, sort, pi jobs
- Basic HDFS/WebHDFS commands
- Ran all the ABFS driver tests against an ADLS Gen 2 account in EAST US
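
For anyone repeating the smoke test above, a minimal sketch against a pseudo-distributed 3.2.0 deployment might look like this (paths assume the stock release layout; adjust HADOOP_HOME to your install):

```shell
# Sketch: basic HDFS + MapReduce smoke test on a pseudo-distributed cluster.
# Assumes HADOOP_HOME points at the extracted hadoop-3.2.0 release and the
# HDFS/YARN daemons are already running.
EXAMPLES_JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar

# Basic HDFS commands
hdfs dfs -mkdir -p /user/$USER/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/$USER/input
hdfs dfs -ls /user/$USER/input

# wordcount and pi example jobs
yarn jar "$EXAMPLES_JAR" wordcount input wc-out
yarn jar "$EXAMPLES_JAR" pi 2 10

# WebHDFS sanity check (9870 is the default NameNode HTTP port in Hadoop 3.x)
curl -s "http://localhost:9870/webhdfs/v1/user/$USER/input?op=LISTSTATUS"
```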

Non-blockers (AFAICT): The following tests in ABFS (HADOOP-15407) fail:
- The ACL tests ({{ITestAzureBlobFilesystemAcl}}); however, I believe these
have been fixed in trunk.
- {{ITestAzureBlobFileSystemE2EScale#testWriteHeavyBytesToFileAcrossThreads}}
fails with an OutOfMemoryError. I see the same failure on trunk as well.


On Mon, Jan 14, 2019 at 6:21 AM Elek, Marton  wrote:

> Thanks Sunil for managing this release.
>
> +1 (non-binding)
>
> 1. built from the source (with clean local maven repo)
> 2. verified signatures + checksum
> 3. deployed 3 node cluster to Google Kubernetes Engine with generated
> k8s resources [1]
> 4. Executed basic HDFS commands
> 5. Executed basic yarn example jobs
>
> Marton
>
> [1]: FTR: resources:
> https://github.com/flokkr/k8s/tree/master/examples/hadoop , generator:
> https://github.com/elek/flekszible
>
>
> On 1/8/19 12:42 PM, Sunil G wrote:
> > Hi folks,
> >
> >
> > Thanks to all of you who helped in this release [1] and for helping to
> vote
> > for RC0. I have created the second release candidate (RC1) for Apache Hadoop
> > 3.2.0.
> >
> >
> > Artifacts for this RC are available here:
> >
> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
> >
> >
> > RC tag in git is release-3.2.0-RC1.
> >
> >
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1178/
> >
> >
> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
> PST.
> >
> >
> >
> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
> > additions
> >
> > are the highlights of this release.
> >
> > 1. Node Attributes Support in YARN
> >
> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
> >
> > 3. Support service upgrade via YARN Service API and CLI
> >
> > 4. HDFS Storage Policy Satisfier
> >
> > 5. Support Windows Azure Storage - Blob file system in Hadoop
> >
> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3A
> >
> > 7. Improvements in Router-based HDFS federation
> >
> >
> >
> > Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
> >
> > I have done some testing with my pseudo cluster. My +1 to start.
> >
> >
> >
> > Regards,
> >
> > Sunil
> >
> >
> >
> > [1]
> >
> >
> 
https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
> >
> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> > ORDER BY fixVersion ASC
> >
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Wangda Tan
Hi Brian,
Thanks for responding. Could you share how to push keys to the Apache PGP pool?

Best,
Wangda

On Mon, Jan 14, 2019 at 10:44 AM Brian Fox  wrote:

> Did you push your key up to the pgp pool? That's what Nexus is validating
> against. It might take time to propagate if you just pushed it.
>
> On Mon, Jan 14, 2019 at 9:59 AM Elek, Marton  wrote:
>
>> This seems to be an INFRA issue to me:
>>
>> 1. I downloaded a sample jar file [1] + the signature from the
>> repository and verified it locally; it was OK.
>>
>> 2. I tested it with another Apache project (Ratis) and my key. I got
>> the same problem, even though it worked last year during the 0.3.0
>> release. (I used exactly the same command.)
>>
>> I opened an infra ticket to check the logs of the Nexus as it was
>> suggested in the error message:
>>
>> https://issues.apache.org/jira/browse/INFRA-17649
>>
>> Marton
>>
>>
>> [1]:
>>
>> https://repository.apache.org/service/local/repositories/orgapachehadoop-1183/content/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-javadoc.jar
>>
>>
>> On 1/13/19 6:27 AM, Wangda Tan wrote:
>> > Uploaded sample file and signature.
>> >
>> >
>> >
>> > On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan wrote:
>> >
>> > Actually, among the hundreds of failed messages, the "No public key"
>> > issues still occurred several times:
>> >
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
>> > was not able to be located on http://gpg-keyserver.de/. Upload
>> > your public key and try the operation again.
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
>> > was not able to be located on
>> > http://pool.sks-keyservers.net:11371. Upload your public key and
>> > try the operation again.
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
>> > was not able to be located on http://pgp.mit.edu:11371. Upload
>> > your public key and try the operation again.
>> >
>> > Once the close operation returns, I will upload sample files which
>> > may help troubleshoot the issue.
>> >
>> > Thanks,
>> >
>> > On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan wrote:
>> >
>> > Thanks David for the quick response!
>> >
>> > I just retried, now the "No public key" issue is gone. However,
>> > the issue:
>> >
>> > failureMessage  Failed to validate the pgp signature of
>> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
>> > check the logs.
>> > failureMessage  Failed to validate the pgp signature of
>> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
>> > check the logs.
>> > failureMessage  Failed to validate the pgp signature of
>> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
>> > check the logs.
>> >
>> >
>> > This still occurs, repeated hundreds of times. Do you know how to
>> > access the logs mentioned in the message above?
>> >
>> > Best,
>> > Wangda
>> >
>> > On Sat, Jan 12, 2019 at 8:37 PM David Nalley wrote:
>> >
>> > On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan <wheele...@gmail.com> wrote:
>> > >
>> > > Hi Devs,
>> > >
>> > > I'm currently rolling the Hadoop 3.1.2 release candidate;
>> > > however, I saw an issue when I tried to close the repo in Nexus.
>> > >
>> > > Logs of https://repository.apache.org/#stagingRepositories
>> > (orgapachehadoop-1183) shows hundreds of lines of the
>> > following error:
>> > >
>> > > failureMessage  No public key: Key with id:
>> > (b3fa653d57300d45) was not able to be located on
>> > http://gpg-keyserver.de/. Upload your public key and try the
>> > operation again.
>> > > failureMessage  No public key: Key with id:
>> > (b3fa653d57300d45) was not able to be located on
>> > http://pool.sks-keyservers.net:11371. Upload your public key
>> > and try the operation again.
>> > > failureMessage  No public key: Key with id:
>> > (b3fa653d57300d45) was not able to be located on
>> > http://pgp.mit.edu:11371. Upload your public key and try the
>> > operation again.
>> > > ...
>> > > failureMessage  Failed to validate the pgp signature of
>> > '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
>> > check 

[jira] [Reopened] (HADOOP-15941) [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible

2019-01-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-15941:


> [JDK 11] Compilation failure: package com.sun.jndi.ldap is not visible
> --
>
> Key: HADOOP-15941
> URL: https://issues.apache.org/jira/browse/HADOOP-15941
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Uma Maheswara Rao G
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15941.1.patch, HADOOP-15941.2.patch
>
>
> With JDK 11: Compilation failed because package com.sun.jndi.ldap is not 
> visible.
>  
> {noformat}
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project hadoop-common: Compilation failure
> /C:/Users/umgangum/Work/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java:[545,23]
>  package com.sun.jndi.ldap is not visible
>  (package com.sun.jndi.ldap is declared in module java.naming, which does not 
> export it){noformat}
>  
>  
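
For context, errors of this shape under JDK 11 come from module encapsulation: `com.sun.jndi.ldap` lives in the `java.naming` module and is not exported. A common generic workaround (not necessarily the approach taken in the attached patches) is to export the internal package explicitly at compile time:

```shell
# Sketch: open the JDK-internal package to unnamed modules when compiling
# with JDK 11. The source file name is taken from the error message above;
# the flag is a generic workaround, not the patch attached to this issue.
javac --add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED \
      LdapGroupsMapping.java
```

In a Maven build, the same `--add-exports` pair would go into the maven-compiler-plugin's `compilerArgs`.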



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Mentorship opportunity for aspiring Hadoop developer

2019-01-14 Thread Aaron Fabbri
Hi,

I'd like to offer to mentor a developer who is interested in getting into
Apache Hadoop development.

If you or someone you know is interested, please email me (unicast). This
would probably be a three-month thing where we meet every week or two.

I'm also curious whether the Apache and/or Hadoop communities have any
programs or resources on mentoring. I'm interested in growing the dev
community, so please forward any good links or experiences.

Cheers,
Aaron


Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-14 Thread Virajith Jalaparti
Thanks Sunil and others who have worked on making this release happen!

+1 (non-binding)

- Built from source
- Deployed a pseudo-distributed one node cluster
- Ran basic wordcount, sort, pi jobs
- Basic HDFS/WebHDFS commands
- Ran all the ABFS driver tests against an ADLS Gen 2 account in EAST US

Non-blockers (AFAICT): The following tests in ABFS (HADOOP-15407) fail:
- The ACL tests ({{ITestAzureBlobFilesystemAcl}}); however, I believe these
have been fixed in trunk.
- {{ITestAzureBlobFileSystemE2EScale#testWriteHeavyBytesToFileAcrossThreads}}
fails with an OutOfMemoryError. I see the same failure on trunk
as well.


On Mon, Jan 14, 2019 at 6:21 AM Elek, Marton  wrote:

> Thanks Sunil for managing this release.
>
> +1 (non-binding)
>
> 1. built from the source (with clean local maven repo)
> 2. verified signatures + checksum
> 3. deployed 3 node cluster to Google Kubernetes Engine with generated
> k8s resources [1]
> 4. Executed basic HDFS commands
> 5. Executed basic yarn example jobs
>
> Marton
>
> [1]: FTR: resources:
> https://github.com/flokkr/k8s/tree/master/examples/hadoop , generator:
> https://github.com/elek/flekszible
>
>
> On 1/8/19 12:42 PM, Sunil G wrote:
> > Hi folks,
> >
> >
> > Thanks to all of you who helped in this release [1] and for helping to
> vote
> > for RC0. I have created the second release candidate (RC1) for Apache Hadoop
> > 3.2.0.
> >
> >
> > Artifacts for this RC are available here:
> >
> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
> >
> >
> > RC tag in git is release-3.2.0-RC1.
> >
> >
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1178/
> >
> >
> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
> PST.
> >
> >
> >
> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
> > additions
> >
> > are the highlights of this release.
> >
> > 1. Node Attributes Support in YARN
> >
> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
> >
> > 3. Support service upgrade via YARN Service API and CLI
> >
> > 4. HDFS Storage Policy Satisfier
> >
> > 5. Support Windows Azure Storage - Blob file system in Hadoop
> >
> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3A
> >
> > 7. Improvements in Router-based HDFS federation
> >
> >
> >
> > Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
> >
> > I have done some testing with my pseudo cluster. My +1 to start.
> >
> >
> >
> > Regards,
> >
> > Sunil
> >
> >
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
> >
> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> > ORDER BY fixVersion ASC
> >
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Brian Fox
Did you push your key up to the pgp pool? That's what Nexus is validating
against. It might take time to propagate if you just pushed it.
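
For reference, pushing a public key to the pool that Nexus validates against is a one-liner per keyserver; the key id below is the one from the failureMessage lines quoted in this thread:

```shell
# Sketch: publish the public key to the keyservers named in the Nexus errors.
# b3fa653d57300d45 is the key id from the failureMessage lines in this thread.
gpg --keyserver hkp://pool.sks-keyservers.net --send-keys b3fa653d57300d45
gpg --keyserver hkp://pgp.mit.edu --send-keys b3fa653d57300d45

# Confirm the key has propagated before retrying the close operation
gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys b3fa653d57300d45
```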

On Mon, Jan 14, 2019 at 9:59 AM Elek, Marton  wrote:

> This seems to be an INFRA issue to me:
>
> 1. I downloaded a sample jar file [1] + the signature from the
> repository and verified it locally; it was OK.
>
> 2. I tested it with another Apache project (Ratis) and my key. I got
> the same problem, even though it worked last year during the 0.3.0
> release. (I used exactly the same command.)
>
> I opened an infra ticket to check the logs of the Nexus as it was
> suggested in the error message:
>
> https://issues.apache.org/jira/browse/INFRA-17649
>
> Marton
>
>
> [1]:
>
> https://repository.apache.org/service/local/repositories/orgapachehadoop-1183/content/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-javadoc.jar
>
>
> On 1/13/19 6:27 AM, Wangda Tan wrote:
> > Uploaded sample file and signature.
> >
> >
> >
> > On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan wrote:
> >
> > Actually, among the hundreds of failed messages, the "No public key"
> > issues still occurred several times:
> >
> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
> > was not able to be located on http://gpg-keyserver.de/. Upload
> > your public key and try the operation again.
> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
> > was not able to be located on
> > http://pool.sks-keyservers.net:11371. Upload your public key and
> > try the operation again.
> > failureMessage  No public key: Key with id: (b3fa653d57300d45)
> > was not able to be located on http://pgp.mit.edu:11371. Upload
> > your public key and try the operation again.
> >
> > Once the close operation returns, I will upload sample files which
> > may help troubleshoot the issue.
> >
> > Thanks,
> >
> > On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan wrote:
> >
> > Thanks David for the quick response!
> >
> > I just retried, now the "No public key" issue is gone. However,
> > the issue:
> >
> > failureMessage  Failed to validate the pgp signature of
> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
> > check the logs.
> > failureMessage  Failed to validate the pgp signature of
> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
> > check the logs.
> > failureMessage  Failed to validate the pgp signature of
> > '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
> > check the logs.
> >
> >
> > This still occurs, repeated hundreds of times. Do you know how to
> > access the logs mentioned in the message above?
> >
> > Best,
> > Wangda
> >
> > On Sat, Jan 12, 2019 at 8:37 PM David Nalley wrote:
> >
> > On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan <wheele...@gmail.com> wrote:
> > >
> > > Hi Devs,
> > >
> > > I'm currently rolling the Hadoop 3.1.2 release candidate;
> > > however, I saw an issue when I tried to close the repo in Nexus.
> > >
> > > Logs of https://repository.apache.org/#stagingRepositories
> > (orgapachehadoop-1183) shows hundreds of lines of the
> > following error:
> > >
> > > failureMessage  No public key: Key with id:
> > (b3fa653d57300d45) was not able to be located on
> > http://gpg-keyserver.de/. Upload your public key and try the
> > operation again.
> > > failureMessage  No public key: Key with id:
> > (b3fa653d57300d45) was not able to be located on
> > http://pool.sks-keyservers.net:11371. Upload your public key
> > and try the operation again.
> > > failureMessage  No public key: Key with id:
> > (b3fa653d57300d45) was not able to be located on
> > http://pgp.mit.edu:11371. Upload your public key and try the
> > operation again.
> > > ...
> > > failureMessage  Failed to validate the pgp signature of
> > > '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
> > > check the logs.
> > > failureMessage  Failed to validate the pgp signature of
> > > '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
> > > check the logs.
> > > failureMessage  Failed to validate the pgp signature 

[jira] [Resolved] (HADOOP-16047) Avoid expensive rename when DistCp is writing to S3

2019-01-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16047.
-
Resolution: Duplicate

Closing as a duplicate; please reattach your proposal there.

As noted in that one, it's not just performance: if the rename takes so long 
that the workers don't get their heartbeat in, it's a disaster.

> Avoid expensive rename when DistCp is writing to S3
> ---
>
> Key: HADOOP-16047
> URL: https://issues.apache.org/jira/browse/HADOOP-16047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, tools/distcp
>Reporter: Andrew Olson
>Priority: Major
>
> When writing to an S3-based target, the temp file and rename logic in 
> RetriableFileCopyCommand adds some unnecessary cost to the job, as the rename 
> operation does a server-side copy + delete in S3 [1]. The renames are 
> parallelized across all of the DistCp map tasks, so the severity is mitigated 
> to some extent. However, a configuration property to conditionally allow 
> distributed copies to avoid that expense and write directly to the target 
> path would improve performance considerably.
> [1] 
> https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md#object-stores-vs-filesystems



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16047) Avoid expensive rename when DistCp is writing to S3

2019-01-14 Thread Andrew Olson (JIRA)
Andrew Olson created HADOOP-16047:
-

 Summary: Avoid expensive rename when DistCp is writing to S3
 Key: HADOOP-16047
 URL: https://issues.apache.org/jira/browse/HADOOP-16047
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3, tools/distcp
Reporter: Andrew Olson


When writing to an S3-based target, the temp file and rename logic in 
RetriableFileCopyCommand adds some unnecessary cost to the job, as the rename 
operation does a server-side copy + delete in S3 [1]. The renames are 
parallelized across all of the DistCp map tasks, so the severity is mitigated 
to some extent. However, a configuration property to conditionally allow 
distributed copies to avoid that expense and write directly to the target path 
would improve performance considerably.

[1] 
https://github.com/apache/hadoop/blob/release-3.2.0-RC1/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md#object-stores-vs-filesystems
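
To make the cost concrete: today each mapper writes to a temp file and renames it into place, and on an object store that rename is itself a full copy. A sketch of the job in question (the direct-write switch is the *proposed* improvement, so the flag below is hypothetical):

```shell
# Sketch: DistCp into an S3A target. Today each mapper writes to a temp file
# and renames it into place; on s3a:// that rename is a server-side COPY +
# DELETE of the whole object, roughly doubling the write cost per file.
hadoop distcp hdfs://namenode:8020/warehouse/table s3a://my-bucket/warehouse/table

# The proposal: a switch to skip the temp file and write directly to the
# final path. The option name below is hypothetical -- it illustrates the
# idea, not a shipped flag.
hadoop distcp -direct hdfs://namenode:8020/warehouse/table s3a://my-bucket/warehouse/table
```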



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Elek, Marton
This seems to be an INFRA issue to me:

1. I downloaded a sample jar file [1] + the signature from the
repository and verified it locally; it was OK.

2. I tested it with another Apache project (Ratis) and my key. I got
the same problem, even though it worked last year during the 0.3.0
release. (I used exactly the same command.)

I opened an infra ticket to check the logs of the Nexus as it was
suggested in the error message:

https://issues.apache.org/jira/browse/INFRA-17649

Marton


[1]:
https://repository.apache.org/service/local/repositories/orgapachehadoop-1183/content/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-javadoc.jar


On 1/13/19 6:27 AM, Wangda Tan wrote:
> Uploaded sample file and signature.  
> 
> 
> 
> On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan wrote:
> 
> Actually, among the hundreds of failed messages, the "No public key"
> issues still occurred several times:
> 
> failureMessage  No public key: Key with id: (b3fa653d57300d45)
> was not able to be located on http://gpg-keyserver.de/. Upload
> your public key and try the operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45)
> was not able to be located on
> http://pool.sks-keyservers.net:11371. Upload your public key and
> try the operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45)
> was not able to be located on http://pgp.mit.edu:11371. Upload
> your public key and try the operation again.
> 
> Once the close operation returns, I will upload sample files which
> may help troubleshoot the issue. 
> 
> Thanks,
> 
> On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan wrote:
> 
> Thanks David for the quick response! 
> 
> I just retried, now the "No public key" issue is gone. However, 
> the issue: 
> 
> failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
> check the logs.
> failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
> check the logs.
> failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
> check the logs.
> 
> 
> This still occurs, repeated hundreds of times. Do you know how to
> access the logs mentioned in the message above?
> 
> Best,
> Wangda
> 
> On Sat, Jan 12, 2019 at 8:37 PM David Nalley wrote:
> 
> On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan <wheele...@gmail.com> wrote:
> >
> > Hi Devs,
> >
> > I'm currently rolling the Hadoop 3.1.2 release candidate;
> > however, I saw an issue when I tried to close the repo in Nexus.
> >
> > Logs of https://repository.apache.org/#stagingRepositories
> (orgapachehadoop-1183) shows hundreds of lines of the
> following error:
> >
> > failureMessage  No public key: Key with id:
> (b3fa653d57300d45) was not able to be located on
> http://gpg-keyserver.de/. Upload your public key and try the
> operation again.
> > failureMessage  No public key: Key with id:
> (b3fa653d57300d45) was not able to be located on
> http://pool.sks-keyservers.net:11371. Upload your public key
> and try the operation again.
> > failureMessage  No public key: Key with id:
> (b3fa653d57300d45) was not able to be located on
> http://pgp.mit.edu:11371. Upload your public key and try the
> operation again.
> > ...
> > failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
> check the logs.
> > failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
> check the logs.
> > failureMessage  Failed to validate the pgp signature of
> 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
> check the logs.
> >
> >
> > This is the same key I used before (and finished two
> releases), the same environment I used before.
> >
> > I 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.server.resourcemanager.TestCapacitySchedulerMetrics
   hadoop.yarn.service.TestServiceAM

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1016/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]
   

Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-14 Thread Elek, Marton
Thanks Sunil for managing this release.

+1 (non-binding)

1. built from the source (with clean local maven repo)
2. verified signatures + checksum
3. deployed 3 node cluster to Google Kubernetes Engine with generated
k8s resources [1]
4. Executed basic HDFS commands
5. Executed basic yarn example jobs

Marton

[1]: FTR: resources:
https://github.com/flokkr/k8s/tree/master/examples/hadoop , generator:
https://github.com/elek/flekszible
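
Steps 1 and 2 above can be reproduced roughly as follows (file names assume the source tarball from the RC directory; comparing the published .sha512 digest is left to eyeballing or your checksum tool of choice):

```shell
# Sketch: verify the RC artifacts before voting.
# Assumes the tarball, the .asc signature and the .sha512 checksum were
# downloaded from the RC directory, and the release manager's public key
# is in the local keyring (e.g. imported from the project's KEYS file).
gpg --verify hadoop-3.2.0-src.tar.gz.asc hadoop-3.2.0-src.tar.gz
sha512sum hadoop-3.2.0-src.tar.gz      # compare with the published .sha512

# Step 1: build from source with a clean local maven repo
mvn clean install -DskipTests -Dmaven.repo.local=$(mktemp -d)
```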


On 1/8/19 12:42 PM, Sunil G wrote:
> Hi folks,
> 
> 
> Thanks to all of you who helped in this release [1] and for helping to vote
> for RC0. I have created the second release candidate (RC1) for Apache Hadoop
> 3.2.0.
> 
> 
> Artifacts for this RC are available here:
> 
> http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
> 
> 
> RC tag in git is release-3.2.0-RC1.
> 
> 
> 
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
> 
> 
> This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm PST.
> 
> 
> 
> 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
> additions
> 
> are the highlights of this release.
> 
> 1. Node Attributes Support in YARN
> 
> 2. Hadoop Submarine project for running Deep Learning workloads on YARN
> 
> 3. Support service upgrade via YARN Service API and CLI
> 
> 4. HDFS Storage Policy Satisfier
> 
> 5. Support Windows Azure Storage - Blob file system in Hadoop
> 
> 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3A
> 
> 7. Improvements in Router-based HDFS federation
> 
> 
> 
> Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
> 
> I have done some testing with my pseudo cluster. My +1 to start.
> 
> 
> 
> Regards,
> 
> Sunil
> 
> 
> 
> [1]
> 
> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
> 
> [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
> AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
> ORDER BY fixVersion ASC
> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org