Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Uploaded sample file and signature.



On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan  wrote:

> Actually, among the hundreds of failed messages, the "No public key"
> issues still occurred several times:
>
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://gpg-keyserver.de/. Upload your public key
> and try the operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://pool.sks-keyservers.net:11371. Upload your
> public key and try the operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://pgp.mit.edu:11371. Upload your public key
> and try the operation again.
>
> Once the close operation returns, I will upload sample files that may
> help troubleshoot the issue.
>
> Thanks,
>
> On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan  wrote:
>
>> Thanks David for the quick response!
>>
>> I just retried, and the "No public key" issue is now gone. However, this
>> issue:
>>
>> failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
>> check the logs.
>> failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
>> check the logs.
>> failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
>> check the logs.
>>
>>
>> still exists and is repeated hundreds of times. Do you know how to access
>> the logs mentioned in the errors above?
>>
>> Best,
>> Wangda
>>
>> On Sat, Jan 12, 2019 at 8:37 PM David Nalley  wrote:
>>
>>> On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan  wrote:
>>> >
>>> > Hi Devs,
>>> >
>>> > I'm currently rolling the Hadoop 3.1.2 release candidate; however, I saw
>>> an issue when I tried to close the repo in Nexus.
>>> >
>>> > Logs of https://repository.apache.org/#stagingRepositories
>>> (orgapachehadoop-1183) shows hundreds of lines of the following error:
>>> >
>>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>>> able to be located on http://gpg-keyserver.de/. Upload your public key
>>> and try the operation again.
>>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>>> able to be located on http://pool.sks-keyservers.net:11371. Upload your
>>> public key and try the operation again.
>>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>>> able to be located on http://pgp.mit.edu:11371. Upload your public key
>>> and try the operation again.
>>> > ...
>>> > failureMessage  Failed to validate the pgp signature of
>>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
>>> check the logs.
>>> > failureMessage  Failed to validate the pgp signature of
>>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
>>> check the logs.
>>> > failureMessage  Failed to validate the pgp signature of
>>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
>>> check the logs.
>>> >
>>> >
>>> > This is the same key I used before (it has finished two releases) and the
>>> same environment I used before.
>>> >
>>> > I have tried more than 10 times in the last two days with no luck. Closing
>>> the repo takes almost one hour (the regular time is less than 1 minute)
>>> and always fails at the end.
>>> >
>>> > I used the following commands to validate that the key exists on the key servers:
>>> >
>>> > gpg --keyserver pgp.mit.edu --recv-keys 57300D45
>>> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
>>> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
>>> > gpg: key B3FA653D57300D45: "Wangda tan " not
>>> changed
>>> > gpg: Total number processed: 1
>>> > gpg:  unchanged: 1
>>> >
>>> > gpg --keyserver pool.sks-keyservers.net --recv-keys B3FA653D57300D45
>>> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
>>> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
>>> > gpg: key B3FA653D57300D45: "Wangda tan " not
>>> changed
>>> > gpg: Total number processed: 1
>>> > gpg:  unchanged: 1
>>> >
>>>
>>> Both of these report that your key was not found.
>>> I took the key from the KEYS file and uploaded it to both of those
>>> servers.
>>>
>>> You might try the release again and see if this resolves the issue.
>>>
>>


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Actually, among the hundreds of failed messages, the "No public key" issues
still occurred several times:

failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://gpg-keyserver.de/. Upload your public key and try
the operation again.
failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://pool.sks-keyservers.net:11371. Upload your public
key and try the operation again.
failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://pgp.mit.edu:11371. Upload your public key and try
the operation again.

Once the close operation returns, I will upload sample files that may
help troubleshoot the issue.

Thanks,

On Sat, Jan 12, 2019 at 9:04 PM Wangda Tan  wrote:

> Thanks David for the quick response!
>
> I just retried, and the "No public key" issue is now gone. However, this
> issue:
>
> failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
> check the logs.
> failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
> check the logs.
> failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
> check the logs.
>
>
> still exists and is repeated hundreds of times. Do you know how to access the
> logs mentioned in the errors above?
>
> Best,
> Wangda
>
> On Sat, Jan 12, 2019 at 8:37 PM David Nalley  wrote:
>
>> On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan  wrote:
>> >
>> > Hi Devs,
>> >
>> > I'm currently rolling the Hadoop 3.1.2 release candidate; however, I saw an
>> issue when I tried to close the repo in Nexus.
>> >
>> > Logs of https://repository.apache.org/#stagingRepositories
>> (orgapachehadoop-1183) shows hundreds of lines of the following error:
>> >
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>> able to be located on http://gpg-keyserver.de/. Upload your public key
>> and try the operation again.
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>> able to be located on http://pool.sks-keyservers.net:11371. Upload your
>> public key and try the operation again.
>> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
>> able to be located on http://pgp.mit.edu:11371. Upload your public key
>> and try the operation again.
>> > ...
>> > failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
>> check the logs.
>> > failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
>> check the logs.
>> > failureMessage  Failed to validate the pgp signature of
>> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
>> check the logs.
>> >
>> >
>> > This is the same key I used before (it has finished two releases) and the
>> same environment I used before.
>> >
>> > I have tried more than 10 times in the last two days with no luck. Closing
>> the repo takes almost one hour (the regular time is less than 1 minute)
>> and always fails at the end.
>> >
>> > I used the following commands to validate that the key exists on the key servers:
>> >
>> > gpg --keyserver pgp.mit.edu --recv-keys 57300D45
>> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
>> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
>> > gpg: key B3FA653D57300D45: "Wangda tan " not changed
>> > gpg: Total number processed: 1
>> > gpg:  unchanged: 1
>> >
>> > gpg --keyserver pool.sks-keyservers.net --recv-keys B3FA653D57300D45
>> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
>> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
>> > gpg: key B3FA653D57300D45: "Wangda tan " not changed
>> > gpg: Total number processed: 1
>> > gpg:  unchanged: 1
>> >
>>
>> Both of these report that your key was not found.
>> I took the key from the KEYS file and uploaded it to both of those
>> servers.
>>
>> You might try the release again and see if this resolves the issue.
>>
>


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Thanks David for the quick response!

I just retried, and the "No public key" issue is now gone. However, this issue:

failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar',
check the logs.
failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-test-sources.jar',
check the logs.
failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom',
check the logs.


still exists and is repeated hundreds of times. Do you know how to access the
logs mentioned in the errors above?
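
In the meantime, one thing that can be checked locally is whether a staged
artifact verifies against its .asc at all, e.g. something like this (URL
assembled from the staging repository id and artifact path above):

  curl -O https://repository.apache.org/content/repositories/orgapachehadoop-1183/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom
  curl -O https://repository.apache.org/content/repositories/orgapachehadoop-1183/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2.pom.asc
  gpg --verify hadoop-mapreduce-client-jobclient-3.1.2.pom.asc hadoop-mapreduce-client-jobclient-3.1.2.pom

If the signature verifies and reports key B3FA653D57300D45, the failure is more
likely on the keyserver/Nexus side than in the signing itself.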

Best,
Wangda

On Sat, Jan 12, 2019 at 8:37 PM David Nalley  wrote:

> On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan  wrote:
> >
> > Hi Devs,
> >
> > I'm currently rolling the Hadoop 3.1.2 release candidate; however, I saw an
> issue when I tried to close the repo in Nexus.
> >
> > Logs of https://repository.apache.org/#stagingRepositories
> (orgapachehadoop-1183) shows hundreds of lines of the following error:
> >
> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://gpg-keyserver.de/. Upload your public key
> and try the operation again.
> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://pool.sks-keyservers.net:11371. Upload your
> public key and try the operation again.
> > failureMessage  No public key: Key with id: (b3fa653d57300d45) was not
> able to be located on http://pgp.mit.edu:11371. Upload your public key
> and try the operation again.
> > ...
> > failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
> check the logs.
> > failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
> check the logs.
> > failureMessage  Failed to validate the pgp signature of
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
> check the logs.
> >
> >
> > This is the same key I used before (it has finished two releases) and the same
> environment I used before.
> >
> > I have tried more than 10 times in the last two days with no luck. Closing
> the repo takes almost one hour (the regular time is less than 1 minute)
> and always fails at the end.
> >
> > I used the following commands to validate that the key exists on the key servers:
> >
> > gpg --keyserver pgp.mit.edu --recv-keys 57300D45
> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
> > gpg: key B3FA653D57300D45: "Wangda tan " not changed
> > gpg: Total number processed: 1
> > gpg:  unchanged: 1
> >
> > gpg --keyserver pool.sks-keyservers.net --recv-keys B3FA653D57300D45
> > gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
> > gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
> > gpg: key B3FA653D57300D45: "Wangda tan " not changed
> > gpg: Total number processed: 1
> > gpg:  unchanged: 1
> >
>
> Both of these report that your key was not found.
> I took the key from the KEYS file and uploaded it to both of those servers.
>
> You might try the release again and see if this resolves the issue.
>


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread David Nalley
On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan  wrote:
>
> Hi Devs,
>
> I'm currently rolling the Hadoop 3.1.2 release candidate; however, I saw an issue
> when I tried to close the repo in Nexus.
>
> Logs of https://repository.apache.org/#stagingRepositories 
> (orgapachehadoop-1183) shows hundreds of lines of the following error:
>
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able 
> to be located on http://gpg-keyserver.de/. Upload your public key and try the 
> operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able 
> to be located on http://pool.sks-keyservers.net:11371. Upload your public key 
> and try the operation again.
> failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able 
> to be located on http://pgp.mit.edu:11371. Upload your public key and try the 
> operation again.
> ...
> failureMessage  Failed to validate the pgp signature of 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
>  check the logs.
> failureMessage  Failed to validate the pgp signature of 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
>  check the logs.
> failureMessage  Failed to validate the pgp signature of 
> '/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
>  check the logs.
>
>
> This is the same key I used before (it has finished two releases) and the same
> environment I used before.
>
> I have tried more than 10 times in the last two days with no luck. Closing
> the repo takes almost one hour (the regular time is less than 1 minute) and always
> fails at the end.
>
> I used the following commands to validate that the key exists on the key servers:
>
> gpg --keyserver pgp.mit.edu --recv-keys 57300D45
> gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
> gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
> gpg: key B3FA653D57300D45: "Wangda tan " not changed
> gpg: Total number processed: 1
> gpg:  unchanged: 1
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys B3FA653D57300D45
> gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
> gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
> gpg: key B3FA653D57300D45: "Wangda tan " not changed
> gpg: Total number processed: 1
> gpg:  unchanged: 1
>

Both of these report that your key was not found.
I took the key from the KEYS file and uploaded it to both of those servers.

You might try the release again and see if this resolves the issue.
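
For reference, pushing the key to those servers from the command line is just
something like the following (key id taken from the errors above):

  gpg --keyserver hkp://pool.sks-keyservers.net --send-keys B3FA653D57300D45
  gpg --keyserver hkp://pgp.mit.edu --send-keys B3FA653D57300D45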




[Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Hi Devs,

I'm currently rolling the Hadoop 3.1.2 release candidate; however, I saw an
issue when I tried to close the repo in Nexus.

Logs of https://repository.apache.org/#stagingRepositories
(orgapachehadoop-1183) shows hundreds of lines of the following error:

failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://gpg-keyserver.de/. Upload your public key and try
the operation again.
failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://pool.sks-keyservers.net:11371. Upload your public
key and try the operation again.
failureMessage  No public key: Key with id: (b3fa653d57300d45) was not able
to be located on http://pgp.mit.edu:11371. Upload your public key and try
the operation again.
...
failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-tests.jar',
check the logs.
failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-test-sources.jar',
check the logs.
failureMessage  Failed to validate the pgp signature of
'/org/apache/hadoop/hadoop-yarn-registry/3.1.2/hadoop-yarn-registry-3.1.2-sources.jar',
check the logs.


This is the same key I used before (it has finished two releases) and the same
environment I used before.

I have tried more than 10 times in the last two days with no luck. Closing
the repo *takes almost one hour* (the regular time is less than 1 minute) and
always fails at the end.

I used the following commands to validate that the key exists on the key servers:

gpg --keyserver pgp.mit.edu --recv-keys 57300D45
gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
gpg: key B3FA653D57300D45: "Wangda tan " not changed
gpg: Total number processed: 1
gpg:  unchanged: 1

gpg --keyserver pool.sks-keyservers.net --recv-keys B3FA653D57300D45
gpg: WARNING: unsafe permissions on homedir '/Users/wtan/.gnupg'
gpg: key B3FA653D57300D45: 1 signature not checked due to a missing key
gpg: key B3FA653D57300D45: "Wangda tan " not changed
gpg: Total number processed: 1
gpg:  unchanged: 1
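
As an extra check, the key id referenced by one of the generated signature
files can be inspected directly, for example (the path is only an assumption
about where the .asc files end up after the release build):

  gpg --list-packets hadoop-yarn-registry-3.1.2-sources.jar.asc | grep -i keyid

This should print keyid B3FA653D57300D45; a different id here would explain the
"No public key" failures.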

Did I miss anything? I also checked the Nexus errors page
(https://repository.apache.org/service/local/feeds/errorWarning); it is filled with logs like:

2019-01-13 02:04:34 WARN  [etcherImpl-task] -
com.sonatype.central.secure.nexus.plugin.internal.AuthtokenFetcherImpl -
Failed to fetch authtoken: org.apache.http.conn.ConnectTimeoutException:
Connect to secure.central.sonatype.com:443 [
secure.central.sonatype.com/207.223.241.90] failed: connect timed out.


Not sure if it is related to the error I saw. I am also adding the Apache Infra
user email list to rule out Nexus issues.
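
That connect timeout is presumably happening on the Nexus host itself, but a
quick reachability check of the endpoint from any machine is simply:

  curl -sv --connect-timeout 10 https://secure.central.sonatype.com/ -o /dev/null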

Hope to get your help as soon as possible.

Thanks,
Wangda


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/

[Jan 11, 2019 2:24:05 AM] (aajisaka) HADOOP-16016. 
TestSSLFactory#testServerWeakCiphers fails on Java
[Jan 11, 2019 9:06:55 AM] (surendralilhore) HDFS-14198. Upload and Create 
button doesn't get enabled after getting
[Jan 11, 2019 11:13:41 AM] (stevel) HADOOP-15975. ABFS: remove timeout check 
for DELETE and RENAME.
[Jan 11, 2019 6:06:05 PM] (hanishakoneru) HDDS-947. Implement OzoneManager 
State Machine.
[Jan 11, 2019 6:54:49 PM] (gifuma) HADOOP-16029. Consecutive 
StringBuilder.append can be reused.
[Jan 11, 2019 8:51:07 PM] (cliang) HADOOP-16013. DecayRpcScheduler decay thread 
should run as a daemon.
[Jan 11, 2019 10:01:23 PM] (cliang) HADOOP-15481. Emit FairCallQueue stats as 
metrics. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancer 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1014/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [84K]
   

Re: [VOTE] Release Apache Hadoop 3.2.0 - RC1

2019-01-12 Thread Sunil G
Thanks Weiwei and Zoltan,

For the content issue in the documentation, all the latest features are listed
in the left panel. It seems that whenever new features went in, they were not
added to the summary. I will wait for the voting to surface any issues and make
a call at the end.

@zoltan, thanks for bringing this issue to my attention. I think if I deploy
with the new command that Marton mentioned, we will not have this problem. And
for Nexus, we are not doing any checksum validation. @Steve Loughran @Wangda Tan
@Vinod Kumar Vavilapalli, could you please advise whether I can re-run the Nexus
step again and create a new link without creating another RC?

Thanks
Sunil
On Fri, 11 Jan 2019 at 8:10 AM, Gabor Bota  wrote:

>   Thanks for the work Sunil!
>
>   +1 (non-binding)
>
>   checked out git tag release-3.2.0-RC1.
>   hadoop-aws integration (mvn verify) test run was successful on eu-west-1
> (a known issue is there, it's fixed in trunk)
>   built from source on Mac OS X 10.14.2, java version 8.0.181-oracle
>   deployed on a 3 node cluster
>   verified pi job, teragen, terasort and teravalidate
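>
> A rough sketch of those steps, for anyone who wants to repeat them (exact
> paths and arguments are my assumptions, and the hadoop-aws integration tests
> need S3 credentials configured in auth-keys.xml):
>
>   git checkout release-3.2.0-RC1
>   cd hadoop-tools/hadoop-aws && mvn verify
>   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar pi 10 100
>   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar teragen 10485760 tera-in
>   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar terasort tera-in tera-out
>   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar teravalidate tera-out tera-report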
>
>   Regards,
>   Gabor Bota
>
> On Fri, Jan 11, 2019 at 1:11 PM Zoltan Haindrich  wrote:
>
>> Hello,
>>
>> I would like to note that it seems like the 3.2.0-RC1 release is missing some
>> source attachments (as have all releases lately). David Phillips just commented
>> on that JIRA yesterday, and
>> I've just noticed that a release vote is already going on, so I think
>> now is the best time to talk about this - because
>> https://issues.apache.org/jira/browse/HADOOP-15205
>> has now been open for almost a year.
>>
>> This might be just a documentation-related issue; but if so, the
>> HowToRelease doc should be updated.
>> Steve Loughran was able to publish the artifacts in question for 2.7.7 -
>> but releases before and after that are missing these source attachments.
>>
>> People working on downstream projects (or at least I) may find it harder
>> to work with Hadoop packages because of the missing source attachments.
>>
>> An example artifact that is missing the sources:
>>
>> https://repository.apache.org/content/repositories/orgapachehadoop-1178/org/apache/hadoop/hadoop-mapreduce-client-core/3.2.0/
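>>
>> A quick way to confirm the gap is to probe the expected -sources.jar URL and
>> look for a 404, e.g. (artifact name assumed from standard Maven naming):
>>
>>   curl -I https://repository.apache.org/content/repositories/orgapachehadoop-1178/org/apache/hadoop/hadoop-mapreduce-client-core/3.2.0/hadoop-mapreduce-client-core-3.2.0-sources.jar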
>>
>> cheers,
>> Zoltan
>>
>> On 1/8/19 12:42 PM, Sunil G wrote:
>> > Hi folks,
>> >
>> >
>> > Thanks to all of you who helped in this release [1] and for helping to
>> vote
>> > for RC0. I have created second release candidate (RC1) for Apache Hadoop
>> > 3.2.0.
>> >
>> >
>> > Artifacts for this RC are available here:
>> >
>> > http://home.apache.org/~sunilg/hadoop-3.2.0-RC1/
>> >
>> >
>> > RC tag in git is release-3.2.0-RC1.
>> >
>> >
>> >
>> > The maven artifacts are available via repository.apache.org at
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1178/
>> >
>> >
>> > This vote will run 7 days (5 weekdays), ending on 14th Jan at 11:59 pm
>> PST.
>> >
>> >
>> >
>> > 3.2.0 contains 1092 [2] fixed JIRA issues since 3.1.0. Below feature
>> > additions
>> >
>> > are the highlights of this release.
>> >
>> > 1. Node Attributes Support in YARN
>> >
>> > 2. Hadoop Submarine project for running Deep Learning workloads on YARN
>> >
>> > 3. Support service upgrade via YARN Service API and CLI
>> >
>> > 4. HDFS Storage Policy Satisfier
>> >
>> > 5. Support Windows Azure Storage - Blob file system in Hadoop
>> >
>> > 6. Phase 3 improvements for S3Guard and Phase 5 improvements for S3A
>> >
>> > 7. Improvements in Router-based HDFS federation
>> >
>> >
>> >
>> > Thanks to Wangda, Vinod, Marton for helping me in preparing the release.
>> >
>> > I have done a few tests with my pseudo cluster. My +1 to start.
>> >
>> >
>> >
>> > Regards,
>> >
>> > Sunil
>> >
>> >
>> >
>> > [1]
>> >
>> >
>> https://lists.apache.org/thread.html/68c1745dcb65602aecce6f7e6b7f0af3d974b1bf0048e7823e58b06f@%3Cyarn-dev.hadoop.apache.org%3E
>> >
>> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.2.0)
>> > AND fixVersion not in (3.1.0, 3.0.0, 3.0.0-beta1) AND status = Resolved
>> > ORDER BY fixVersion ASC
>> >
>>
>> -
>> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>>
>>