Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-07-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/9/

[Jul 28, 2020 8:45:14 PM] (Jonathan Hung) YARN-10343. Legacy RM UI should 
include labeled metrics for allocated, total, and reserved resources. 
Contributed by Eric Payne


[Error replacing 'FILE' - Workspace is not accessible]

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-17163) ABFS: Add debug log for rename failures

2020-07-28 Thread Bilahari T H (Jira)
Bilahari T H created HADOOP-17163:
-

 Summary: ABFS: Add debug log for rename failures
 Key: HADOOP-17163
 URL: https://issues.apache.org/jira/browse/HADOOP-17163
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bilahari T H






--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Reopened] (HADOOP-14124) S3AFileSystem silently deletes "fake" directories when writing a file.

2020-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14124:
-

> S3AFileSystem silently deletes "fake" directories when writing a file.
> --
>
> Key: HADOOP-14124
> URL: https://issues.apache.org/jira/browse/HADOOP-14124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.6.0
>Reporter: Joel Baranick
>Priority: Minor
>  Labels: filesystem, s3
>
> I realize that you guys probably have a good reason for {{S3AFileSystem}} to 
> clean up "fake" folders when a file is written to S3.  That said, the fact 
> that it silently does this feels like a separation of concerns issue.  It 
> also leads to weird behavior issues where calls to 
> {{AmazonS3Client.getObjectMetadata}} for folders work before calling 
> {{S3AFileSystem.create}} but not after.  Also, there seems to be no mention 
> in the javadoc that the {{deleteUnnecessaryFakeDirectories}} method is 
> automatically invoked. Lastly, it seems like the goal of {{FileSystem}} 
> should be to ensure that code built on top of it is portable to different 
> implementations.  This behavior is an example of a case where this can break 
> down.






[jira] [Resolved] (HADOOP-17150) ABFS: Test failure: Disable ITestAzureBlobFileSystemDelegationSAS tests

2020-07-28 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H resolved HADOOP-17150.
---
Resolution: Works for Me

Wrong observation

> ABFS: Test failure: Disable ITestAzureBlobFileSystemDelegationSAS tests
> ---
>
> Key: HADOOP-17150
> URL: https://issues.apache.org/jira/browse/HADOOP-17150
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> ITestAzureBlobFileSystemDelegationSAS has tests for the SAS feature, which is 
> still in preview. The tests should not run until the API version reflects the 
> one in preview, since they will fail when run against production clusters.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/

[Jul 27, 2020 6:53:21 AM] (pjoseph) YARN-10366. Fix Yarn rmadmin help message 
shows two labels for one node for --replaceLabelsOnNode.
[Jul 27, 2020 4:55:11 PM] (noreply) HDFS-15465. Support WebHDFS accesses to the 
data stored in secure Datanode through insecure Namenode. (#2135)



   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1016K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
  [8.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-uploader.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-examples.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/217/artifact/out/patch-unit-hadoop-tools_hadoop-aliyun.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-07-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/8/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 504] 
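The keySet warning above refers to a well-known Java map-iteration pattern; a minimal sketch of the inefficiency findbugs is pointing at (hypothetical demo class, not the actual StateMachineFactory code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
    // Pattern findbugs flags: iterating keySet() and then calling get()
    // for each key performs a redundant second map lookup per entry.
    static int sumViaKeySet(Map<String, Integer> counts) {
        int sum = 0;
        for (String key : counts.keySet()) {
            sum += counts.get(key);   // extra hash lookup every iteration
        }
        return sum;
    }

    // Recommended form: entrySet() hands back key and value together,
    // so no second lookup is needed.
    static int sumViaEntrySet(Map<String, Integer> counts) {
        int sum = 0;
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            sum += entry.getValue();  // value is already in hand
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("allocated", 3);
        counts.put("reserved", 2);
        // Both forms compute the same result; only the lookup cost differs.
        System.out.println(sumViaKeySet(counts) + " == " + sumViaEntrySet(counts));
    }
}
```

The fix findbugs suggests is purely mechanical: switch the loop variable from the key to the entry.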

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
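For readers unfamiliar with the "unboxed and then immediately reboxed" warning, a minimal illustration of the boxing pattern behind it (hypothetical code, not the actual ColumnRWHelper method):

```java
public class ReboxDemo {
    // Pattern findbugs flags: arithmetic on a boxed Long unboxes it, and
    // assigning the primitive result back to a Long boxes it again at once.
    static Long flaggedIncrement(Long boxed) {
        Long result = boxed + 1;   // unbox, add, immediately rebox
        return result;
    }

    // Cleaner: keep the arithmetic in primitives and box at most once,
    // when the value leaves the method.
    static long preferredIncrement(Long boxed) {
        long result = boxed + 1;   // unbox once, stay primitive
        return result;
    }

    public static void main(String[] args) {
        System.out.println(flaggedIncrement(41L));   // 42
        System.out.println(preferredIncrement(41L)); // 42
    }
}
```

The two forms behave identically; the warning is about the needless intermediate boxing conversion in hot paths.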

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   module:hadoop-tools 
   Useless object stored in variable keysToUpdateAsFolder of method 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(Path, FsPermission, 
boolean) At NativeAzureFileSystem.java:keysToUpdateAsFolder of method 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(Path, FsPermission, 
boolean) At NativeAzureFileSystem.java:[line 3013] 
   Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.access(Path, FsAction) 
At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.access(Path,
 FsAction) At AzureBlobFileSystemStore.java:[line 900] 
   org.apache.hadoop.mapred.gridmix.InputStriper$1.compare(Map$Entry, 
Map$Entry) incorrectly handles double value At InputStriper.java:value At 
InputStriper.java:[line 136] 
   
org.apache.hadoop.mapred.gridmix.emulators.resourceusage.TotalHeapUsageEmulatorPlugin$DefaultHeapUsageEmulator.heapSpace
 is a mutable collection which should be package protected At 
TotalHeapUsageEmulatorPlugin.java:which should be package protected At 
TotalHeapUsageEmulatorPlugin.java:[line 132] 
   Return value of org.codehaus.jackson.map.ObjectMapper.getJsonFactory() 
ignored, but method has no side effect At JsonObjectMapperWriter.java:but 
method has no side effect At JsonObjectMapperWriter.java:[line 59] 
   Return value of new 
org.apache.hadoop.tools.rumen.datatypes.DefaultDataType(String) ignored, but 
method has no side effect At MapReduceJobPropertiesParser.java:ignored, but 
method has no side effect At MapReduceJobPropertiesParser.java:[line 212] 
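The "incorrectly handles double value" warning against the InputStriper comparator usually flags a compare method built on subtraction or ==; a sketch of the failure mode and the standard fix (hypothetical demo, not the actual InputStriper code):

```java
public class DoubleCompareDemo {
    // Buggy comparator shape findbugs flags: the subtract-and-cast trick
    // that works for ints silently truncates fractional differences.
    static int brokenCompare(double a, double b) {
        return (int) (a - b);   // (int)(0.5 - 0.2) truncates to 0: "equal"!
    }

    // Correct form: Double.compare handles ordering, NaN and -0.0 properly.
    static int fixedCompare(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(brokenCompare(0.5, 0.2)); // 0 -- wrongly equal
        System.out.println(fixedCompare(0.5, 0.2));  // positive -- a > b
    }
}
```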

findbugs :

   module:root 
   Possible null pointer dereference in 
org.apache.hadoop.examples.pi.Parser.parse(File, Map) due to return value of 
called method Dereferenced at 
Parser.java:org.apache.h

[jira] [Resolved] (HADOOP-17160) ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always

2020-07-28 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17160.
-
Resolution: Duplicate

HADOOP-17158 got there first. It's interesting that you always see it; for 
Mehakmeet it is intermittent.

Either way it needs fixing, and you will be able to verify that the fix is 
permanent.

> ITestAbfsInputStreamStatistics#testReadAheadCounters timing out always
> --
>
> Key: HADOOP-17160
> URL: https://issues.apache.org/jira/browse/HADOOP-17160
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Bilahari T H
>Priority: Major
>
> The test ITestAbfsInputStreamStatistics#testReadAheadCounters is timing out 
> always.






Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-28 Thread Steve Loughran
 I should add, that's a binding +1

On Mon, 27 Jul 2020 at 19:55, Steve Loughran  wrote:

>  +1
>
> did a cloudstore clean build and test
> did as well as I could with a spark build.
>
> For anyone having maven problems building hadoop on a mac: homebrew now
> forces its version of maven to use a homebrew-specific openjdk 11
> (one /usr/libexec/java_home doesn't locate); bits of the hadoop build don't
> work if maven is running on java 11. Removing the homebrew maven fixes
> that, but now the parts of the spark maven build which call out to the SBT
> build tool are running out of memory. This is one of those times when I
> think "I need a linux box."
>
> Anyway: maven builds are happy, spark compiles with the branch once you
> change its guava version. Well, that's progress
>
> -Steve
>


Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-28 Thread Mukund Madhav Thakur
+1

- Built from source using mvn package -Pdist -DskipTests
-Dmaven.javadoc.skip=true -DskipShade on macOS with JDK 1.8.
- Ran some hadoop fs commands to s3 store from the packaged distribution.
- Ran s3a integration tests.

On Tue, Jul 28, 2020 at 12:56 AM Dinesh Chitlangia 
wrote:

> +1
>
> - Built from source
> - Verified checksum and signatures
> - Deployed 3 node cluster
> - Able to submit and complete an example mapreduce job
>
> Thanks Gabor for organizing the release.
>
> Regards,
> Dinesh
>
> On Tue, Jul 21, 2020 at 8:52 AM Gabor Bota  wrote:
>
>> Hi folks,
>>
>> I have put together a release candidate (RC4) for Hadoop 3.1.4.
>>
>> *
>> The RC includes in addition to the previous ones:
>> * fix for HDFS-15313. Ensure inodes in active filesystem are not
>> deleted during snapshot delete
>> * fix for YARN-10347. Fix double locking in
>> CapacityScheduler#reinitialize in branch-3.1
>> (https://issues.apache.org/jira/browse/YARN-10347)
>> * the revert of HDFS-14941, as it caused
>> HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
>> (https://issues.apache.org/jira/browse/HDFS-15421)
>> * HDFS-15323, as requested.
>> (https://issues.apache.org/jira/browse/HDFS-15323)
>> *
>>
>> The RC is available at:
>> http://people.apache.org/~gabota/hadoop-3.1.4-RC4/
>> The RC tag in git is here:
>> https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC4
>> The maven artifacts are staged at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1275/
>>
>> You can find my public key at:
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
>>
>> Please try the release and vote. The vote will run for 8 weekdays,
>> until July 31, 2020, 23:00 CET.
>>
>>
>> Thanks,
>> Gabor
>>
>> -
>> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>>
>>


Re: [VOTE] Release Apache Hadoop 3.1.4 (RC4)

2020-07-28 Thread Adam Antal
+1 (binding).

**TEST STEPS**
* verified checksum and signature of CHANGES.md, RELEASENOTES.md
* verified checksum and signature of the rat output, site (doc), source and
the compressed binaries
* browsed documentation, checked some relevant changes since 3.1.3
* built successfully on Ubuntu 16.04 (Xenial) using latest OpenJDK 8
(native libraries and PB installed via the steps described in BUILDING.txt)
* ran various YARN unit tests

**Details**
* Java version:
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~16.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)

On Thu, Jul 23, 2020 at 6:24 PM Szilard Nemeth 
wrote:

> +1 (binding).
>
> **TEST STEPS**
> 1. Build from sources (see Maven / Java and OS details below)
> 2. Distribute Hadoop to all nodes
> 3. Start HDFS services + YARN services on nodes
> 4. Run Mapreduce pi job (QuasiMontecarlo)
> 5. Verified that the application was successful through the YARN RM Web UI
> 6. Verified version of Hadoop release from YARN RM Web UI
>
> **OS version**
> $ cat /etc/os-release
> NAME="CentOS Linux"
> VERSION="7 (Core)"
> ID="centos"
> ID_LIKE="rhel fedora"
> VERSION_ID="7"
> PRETTY_NAME="CentOS Linux 7 (Core)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:centos:centos:7"
> HOME_URL="https://www.centos.org/"
> BUG_REPORT_URL="https://bugs.centos.org/"
>
> CENTOS_MANTISBT_PROJECT="CentOS-7"
> CENTOS_MANTISBT_PROJECT_VERSION="7"
> REDHAT_SUPPORT_PRODUCT="centos"
> REDHAT_SUPPORT_PRODUCT_VERSION="7"
>
> **Maven version**
> $ mvn -v
> Apache Maven 3.0.5 (Red Hat 3.0.5-17)
> Maven home: /usr/share/maven
>
> **Java version**
> Java version: 1.8.0_191, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre
> Default locale: en_US, platform encoding: ANSI_X3.4-1968
> OS name: "linux", version: "3.10.0-1062.el7.x86_64", arch: "amd64", family:
> "unix"
>
> **Maven command to build from sources**
> mvn clean package -Pdist -DskipTests -Dmaven.javadoc.skip=true
>
>
> **OTHER NOTES**
> 1. Had to manually install maven in order to manually compile Hadoop based
> on these steps:
> https://gist.github.com/miroslavtamas/cdca97f2eafdd6c28b844434eaa3b631
>
> 2. Had to manually install protoc + other required libraries with the
> following commands (in this particular order):
> sudo yum install -y protobuf-devel
> sudo yum install -y gcc gcc-c++ make
> sudo yum install -y openssl-devel
> sudo yum install -y libgsasl
>
>
> Thanks,
> Szilard
>
> On Thu, Jul 23, 2020 at 4:05 PM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
> > +1 (binding).
> >
> > * verified the checksum and signature of the source tarball.
> > * built from source tarball with native profile on CentOS 7 and OpenJDK
> 8.
> > * built documentation and skimmed the contents.
> > * ran example jobs on a 3-node docker cluster with NN-HA and RM-HA
> > enabled.
> > * launched pseudo-distributed cluster with Kerberos and SSL enabled, ran
> > basic EZ operation, ran example MR jobs.
> > * followed the reproduction steps reported in HDFS-15313 to see if the
> > fix works.
> >
> > Thanks,
> > Masatake Iwasaki
> >
> > On 2020/07/21 21:50, Gabor Bota wrote:
> > > Hi folks,
> > >
> > > I have put together a release candidate (RC4) for Hadoop 3.1.4.
> > >
> > > *
> > > The RC includes in addition to the previous ones:
> > > * fix for HDFS-15313. Ensure inodes in active filesystem are not
> > > deleted during snapshot delete
> > > * fix for YARN-10347. Fix double locking in
> > > CapacityScheduler#reinitialize in branch-3.1
> > > (https://issues.apache.org/jira/browse/YARN-10347)
> > > * the revert of HDFS-14941, as it caused
> > > HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
> > > (https://issues.apache.org/jira/browse/HDFS-15421)
> > > * HDFS-15323, as requested.
> > > (https://issues.apache.org/jira/browse/HDFS-15323)
> > > *
> > >
> > > The RC is available at:
> > http://people.apache.org/~gabota/hadoop-3.1.4-RC4/
> > > The RC tag in git is here:
> > > https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC4
> > > The maven artifacts are staged at
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1275/
> > >
> > > You can find my public key at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
> > >
> > > Please try the release and vote. The vote will run for 8 weekdays,
> > > until July 31, 2020, 23:00 CET.
> > >
> > >
> > > Thanks,
> > > Gabor
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> >

Apache Hadoop qbt Report: trunk+JDK8 on Linux/aarch64

2020-07-28 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-linux-ARM-trunk/9/

[Jul 27, 2020 4:55:11 PM] (noreply) HDFS-15465. Support WebHDFS accesses to the 
data stored in secure Datanode through insecure Namenode. (#2135)




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus: https://yetus.apache.org


[jira] [Created] (HADOOP-17162) Ozone /conf endpoint trigger kerberos replay error when SPNEGO is enabled

2020-07-28 Thread Xiaoyu Yao (Jira)
Xiaoyu Yao created HADOOP-17162:
---

 Summary: Ozone /conf endpoint trigger kerberos replay error when 
SPNEGO is enabled 
 Key: HADOOP-17162
 URL: https://issues.apache.org/jira/browse/HADOOP-17162
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nilotpal Nandi
Assignee: Xiaoyu Yao


{code}
curl -k --negotiate -X GET -u : "https://quasar-jsajkc-8.quasar-jsajkc.root.hwx.site:9877/conf"



Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism 
level: Request is a replay (34))

HTTP ERROR 403 GSSException: Failure unspecified at GSS-API level 
(Mechanism level: Request is a replay (34))

URI:/conf
STATUS:403
MESSAGE:GSSException: Failure unspecified at GSS-API level 
(Mechanism level: Request is a replay (34))
SERVLET:conf




{code}


