[jira] [Created] (HADOOP-17107) hadoop-azure parallel tests not working on recent JDKs

2020-07-01 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17107:
---

 Summary: hadoop-azure parallel tests not working on recent JDKs
 Key: HADOOP-17107
 URL: https://issues.apache.org/jira/browse/HADOOP-17107
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/azure
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Recent JDKs are failing to run the wasb or abfs parallel test runs: the build 
is unable to instantiate the JavaScript engine.

Maybe the engine has been cut from the JVM, or the Ant script task can't bind 
to it.

The fix is the same as in HADOOP-14696: use our own plugin to set up the test 
dirs.






[jira] [Resolved] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2020-07-01 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-17084.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Verified on ARM. Merged PR.
Thanx [~RenhaiZhao] for the fix and [~RuiChen] for the report!!!

> Update Dockerfile_aarch64 to use Bionic
> ---
>
> Key: HADOOP-17084
> URL: https://issues.apache.org/jira/browse/HADOOP-17084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: RuiChen
>Assignee: zhaorenhai
>Priority: Major
> Fix For: 3.4.0
>
>
> The Dockerfile for x86 has been updated to use Ubuntu Bionic, JDK 11 and other 
> changes; we should update the Dockerfile for aarch64 to follow these changes 
> and keep the same behavior.






[jira] [Created] (HADOOP-17111) Replace Guava Optional with Java8+ Optional

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17111:
--

 Summary: Replace Guava Optional with Java8+ Optional
 Key: HADOOP-17111
 URL: https://issues.apache.org/jira/browse/HADOOP-17111
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein



{code:java}
Targets
Occurrences of 'com.google.common.base.Optional' in project with mask 
'*.java'
Found Occurrences  (3 usages found)
org.apache.hadoop.yarn.server.nodemanager  (2 usages found)
DefaultContainerExecutor.java  (1 usage found)
71 import com.google.common.base.Optional;
LinuxContainerExecutor.java  (1 usage found)
22 import com.google.common.base.Optional;
org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
TestZKRMStateStorePerf.java  (1 usage found)
21 import com.google.common.base.Optional;

{code}
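
The mapping itself is mechanical; a minimal sketch (the helper class and 
environment-variable lookup are illustrative, not taken from the files above):

{code:java}
import java.util.Optional;

final class OptionalMigration {
  // Guava -> Java 8:
  //   Optional.absent()        -> Optional.empty()
  //   Optional.of(v)           -> Optional.of(v)        (both reject null)
  //   Optional.fromNullable(v) -> Optional.ofNullable(v)
  //   optional.or(defaultVal)  -> optional.orElse(defaultVal)
  static String userOrUnknown() {
    Optional<String> user = Optional.ofNullable(System.getenv("USER"));
    return user.orElse("unknown");
  }
}
{code}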







[jira] [Created] (HADOOP-17110) Replace Guava Preconditions to avoid Guava dependency

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17110:
--

 Summary: Replace Guava Preconditions to avoid Guava dependency
 Key: HADOOP-17110
 URL: https://issues.apache.org/jira/browse/HADOOP-17110
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


By far one of the most painful replacements in hadoop. There are two options:
# Use Apache Commons.
# Use a Java wrapper with no dependency on a third party.

{code:java}
Targets
Occurrences of 'com.google.common.base.Preconditions' in project with mask 
'*.java'
Found Occurrences  (577 usages found)
org.apache.hadoop.conf  (2 usages found)
Configuration.java  (1 usage found)
108 import com.google.common.base.Preconditions;
ReconfigurableBase.java  (1 usage found)
22 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto  (7 usages found)
AesCtrCryptoCodec.java  (1 usage found)
23 import com.google.common.base.Preconditions;
CryptoInputStream.java  (1 usage found)
33 import com.google.common.base.Preconditions;
CryptoOutputStream.java  (1 usage found)
32 import com.google.common.base.Preconditions;
CryptoStreamUtils.java  (1 usage found)
32 import com.google.common.base.Preconditions;
JceAesCtrCryptoCodec.java  (1 usage found)
32 import com.google.common.base.Preconditions;
OpensslAesCtrCryptoCodec.java  (1 usage found)
32 import com.google.common.base.Preconditions;
OpensslCipher.java  (1 usage found)
32 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key  (2 usages found)
JavaKeyStoreProvider.java  (1 usage found)
21 import com.google.common.base.Preconditions;
KeyProviderCryptoExtension.java  (1 usage found)
32 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key.kms  (3 usages found)
KMSClientProvider.java  (1 usage found)
83 import com.google.common.base.Preconditions;
LoadBalancingKMSClientProvider.java  (1 usage found)
54 import com.google.common.base.Preconditions;
ValueQueue.java  (1 usage found)
36 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key.kms.server  (5 usages found)
KeyAuthorizationKeyProvider.java  (1 usage found)
35 import com.google.common.base.Preconditions;
KMS.java  (1 usage found)
20 import com.google.common.base.Preconditions;
KMSAudit.java  (1 usage found)
24 import com.google.common.base.Preconditions;
KMSWebApp.java  (1 usage found)
29 import com.google.common.base.Preconditions;
MiniKMS.java  (1 usage found)
29 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.random  (1 usage found)
OpensslSecureRandom.java  (1 usage found)
25 import com.google.common.base.Preconditions;
org.apache.hadoop.fs  (19 usages found)
ByteBufferUtil.java  (1 usage found)
29 import com.google.common.base.Preconditions;
ChecksumFileSystem.java  (1 usage found)
32 import com.google.common.base.Preconditions;
FileContext.java  (1 usage found)
68 import com.google.common.base.Preconditions;
FileEncryptionInfo.java  (2 usages found)
27 import static com.google.common.base.Preconditions.checkArgument;
28 import static com.google.common.base.Preconditions.checkNotNull;
FileSystem.java  (2 usages found)
86 import com.google.common.base.Preconditions;
91 import static com.google.common.base.Preconditions.checkArgument;
FileSystemStorageStatistics.java  (1 usage found)
23 import com.google.common.base.Preconditions;
FSDataOutputStreamBuilder.java  (1 usage found)
31 import static com.google.common.base.Preconditions.checkNotNull;
FSInputStream.java  (1 usage found)
24 import com.google.common.base.Preconditions;
FsUrlConnection.java  (1 usage found)
27 import com.google.common.base.Preconditions;
GlobalStorageStatistics.java  (1 usage found)
26 import com.google.common.base.Preconditions;
Globber.java  (1 usage found)
35 import static com.google.common.base.Preconditions.checkNotNull;
MultipartUploader.java  (1 usage found)
31 import static com.google.common.base.Preconditions.checkArgument;
PartialListing.java  (1 usage found)
20 import com.google.common.base.Preconditions;
TestEnhancedByteBufferAccess.java  (1 usage found)
74 import com.google.common.base.Preconditions;
TestLocalFileSystem.java  (1 usage found)
...
{code}
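
A minimal sketch of option 2, a dependency-free wrapper; the class location is 
hypothetical, not a committed API:

{code:java}
package org.apache.hadoop.util;  // hypothetical location

/** Drop-in subset of Guava's Preconditions backed only by the JDK. */
public final class Preconditions {
  private Preconditions() {}

  public static <T> T checkNotNull(T ref, Object msg) {
    if (ref == null) {
      throw new NullPointerException(String.valueOf(msg));
    }
    return ref;
  }

  public static void checkArgument(boolean expr, Object msg) {
    if (!expr) {
      throw new IllegalArgumentException(String.valueOf(msg));
    }
  }

  public static void checkState(boolean expr, Object msg) {
    if (!expr) {
      throw new IllegalStateException(String.valueOf(msg));
    }
  }
}
{code}

With such a wrapper, only the import line changes at each of the 577 call 
sites.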

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/734/

No changes




-1 overall


The following subsystems voted -1:
asflicense compile findbugs golang hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 

findbugs :

   module:hadoop-common-project 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 
   Useless object stored in variable seqOs of method 
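
The recurring "Possible null pointer dereference ... due to return value of 
called method" items above typically flag java.io.File#listFiles(), which 
returns null rather than an empty array when the path is not a directory or an 
I/O error occurs. A hedged sketch of the usual guard (illustrative code, not 
the actual MiniKdc/FileUtil patch):

{code:java}
import java.io.File;

final class SafeDelete {
  static void deleteRecursively(File dir) {
    File[] children = dir.listFiles();
    if (children == null) {
      return;  // null, not empty, when dir is not a directory or I/O fails
    }
    for (File child : children) {
      if (child.isDirectory()) {
        deleteRecursively(child);
      } else {
        child.delete();
      }
    }
    dir.delete();
  }
}
{code}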

Re: [VOTE] Release Apache Hadoop 3.1.4 (RC2)

2020-07-01 Thread Mukund Madhav Thakur
Compiled the distribution using mvn package -Pdist -DskipTests
-Dmaven.javadoc.skip=true -DskipShade and ran some hadoop fs commands. All
good there.

Then I ran the hadoop-aws tests and saw following failures:

[*ERROR*] *Failures: *

[*ERROR*] *
ITestS3AMiscOperations.testEmptyFileChecksums:147->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
checksums expected: but
was:*

[*ERROR*] *
ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted:199->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
checksums expected: but
was:*


These were the same failures that I saw in RC0 as well. I think these are
known failures.


Apart from that, all of my AssumedRole tests are failing with an AccessDenied
exception like:

[*ERROR*]
testPartialDeleteSingleDelete(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole)
Time elapsed: 3.359 s  <<< ERROR!

org.apache.hadoop.fs.s3a.AWSServiceIOException: initTable on mthakur-data:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User:
arn:aws:sts::152813717728:assumed-role/mthakur-assumed-role/valid is not
authorized to perform: dynamodb:DescribeTable on resource:
arn:aws:dynamodb:ap-south-1:152813717728:table/mthakur-data (Service:
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
Request ID: UJLKVGJ9I1S9TQF3AEPHVGENVJVV4KQNSO5AEMVJF66Q9ASUAAJG): User:
arn:aws:sts::152813717728:assumed-role/mthakur-assumed-role/valid is not
authorized to perform: dynamodb:DescribeTable on resource:
arn:aws:dynamodb:ap-south-1:152813717728:table/mthakur-data (Service:
AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException;
Request ID: UJLKVGJ9I1S9TQF3AEPHVGENVJVV4KQNSO5AEMVJF66Q9ASUAAJG)

at
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.executePartialDelete(ITestAssumeRole.java:759)

at
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testPartialDeleteSingleDelete(ITestAssumeRole.java:735)


I checked my policy and could verify that dynamodb:DescribeTable access is
present there.


So just to cross-check, I ran the AssumedRole tests with the same configs
against apache/trunk and they succeeded. Not sure if this is a false alarm, but
I think it would be better if someone else ran these AssumedRole tests as well
to verify.


Thanks

Mukund

On Fri, Jun 26, 2020 at 7:21 PM Gabor Bota  wrote:

> Hi folks,
>
> I have put together a release candidate (RC2) for Hadoop 3.1.4.
>
> The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC2/
> The RC tag in git is here:
> https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC2
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1269/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
>
> Please try the release and vote. The vote will run for 5 weekdays,
> until July 6. 2020. 23:00 CET.
>
> The release includes the revert of HDFS-14941, as it caused
> HDFS-15421, "IBR leak causes standby NN to be stuck in safe mode".
> (https://issues.apache.org/jira/browse/HDFS-15421)
> The release includes HDFS-15323, as requested.
> (https://issues.apache.org/jira/browse/HDFS-15323)
>
> Thanks,
> Gabor
>


[jira] [Created] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17109:
--

 Summary: Replace Guava base64Url and base64 with Java8+ base64
 Key: HADOOP-17109
 URL: https://issues.apache.org/jira/browse/HADOOP-17109
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


One important thing to note here, as pointed out by [~jeagles] in [his comment 
on the parent 
task|https://issues.apache.org/jira/browse/HADOOP-17098?focusedCommentId=17147935&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17147935]:

{quote}One note to be careful about is that base64 translation is not a 
standard, so the two implementations could produce different results. This 
might matter in the case of serialization, persistence, or client server 
different versions.{quote}


*Base64Url:*

{code:java}
Targets
Occurrences of 'base64Url' in project with mask '*.java'
Found Occurrences  (6 usages found)
org.apache.hadoop.mapreduce  (3 usages found)
CryptoUtils.java  (3 usages found)
wrapIfNecessary(Configuration, FSDataOutputStream, boolean)  (1 
usage found)
138 + Base64.encodeBase64URLSafeString(iv) + "]");
wrapIfNecessary(Configuration, InputStream, long)  (1 usage found)
183 + Base64.encodeBase64URLSafeString(iv) + "]");
wrapIfNecessary(Configuration, FSDataInputStream)  (1 usage found)
218 + Base64.encodeBase64URLSafeString(iv) + "]");
org.apache.hadoop.util  (2 usages found)
KMSUtil.java  (2 usages found)
toJSON(KeyVersion)  (1 usage found)
104 Base64.encodeBase64URLSafeString(
toJSON(EncryptedKeyVersion)  (1 usage found)
117 
.encodeBase64URLSafeString(encryptedKeyVersion.getEncryptedKeyIv()));
org.apache.hadoop.yarn.server.resourcemanager.webapp  (1 usage found)
TestRMWebServicesAppsModification.java  (1 usage found)
testAppSubmit(String, String)  (1 usage found)
837 .put("test", 
Base64.encodeBase64URLSafeString("value12".getBytes("UTF8")));

{code}
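
A minimal sketch of the Java 8 replacement for the URL-safe case above, 
assuming an illustrative helper class: commons-codec's 
encodeBase64URLSafeString emits URL-safe base64 without padding, which 
java.util.Base64 can be made to match explicitly:

{code:java}
import java.util.Base64;

final class Base64Migration {
  static String encodeUrlSafe(byte[] iv) {
    // commons-codec: Base64.encodeBase64URLSafeString(iv)
    // URL-safe alphabet, no trailing padding -- match it explicitly:
    return Base64.getUrlEncoder().withoutPadding().encodeToString(iv);
  }

  static byte[] decodeUrlSafe(String encoded) {
    // the JDK URL decoder accepts both padded and unpadded input
    return Base64.getUrlDecoder().decode(encoded);
  }
}
{code}

Per the quoted caution, output should be compared byte-for-byte against the 
old implementation before switching anything that is persisted or crosses 
client/server versions.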

*Base64:*

{code:java}
Targets
Occurrences of 'base64;' in project with mask '*.java'
Found Occurrences  (51 usages found)
org.apache.hadoop.crypto.key.kms  (1 usage found)
KMSClientProvider.java  (1 usage found)
20 import org.apache.commons.codec.binary.Base64;
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
KMS.java  (1 usage found)
22 import org.apache.commons.codec.binary.Base64;
org.apache.hadoop.fs  (2 usages found)
XAttrCodec.java  (2 usages found)
23 import org.apache.commons.codec.binary.Base64;
56 BASE64;
org.apache.hadoop.fs.azure  (3 usages found)
AzureBlobStorageTestAccount.java  (1 usage found)
23 import com.microsoft.azure.storage.core.Base64;
BlockBlobAppendStream.java  (1 usage found)
50 import org.apache.commons.codec.binary.Base64;
ITestBlobDataValidation.java  (1 usage found)
50 import com.microsoft.azure.storage.core.Base64;
org.apache.hadoop.fs.azurebfs  (2 usages found)
AzureBlobFileSystemStore.java  (1 usage found)
99 import org.apache.hadoop.fs.azurebfs.utils.Base64;
TestAbfsConfigurationFieldsValidation.java  (1 usage found)
34 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.diagnostics  (2 usages found)
Base64StringConfigurationBasicValidator.java  (1 usage found)
26 import org.apache.hadoop.fs.azurebfs.utils.Base64;
TestConfigurationValidators.java  (1 usage found)
25 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.extensions  (2 usages found)
MockDelegationSASTokenProvider.java  (1 usage found)
37 import org.apache.hadoop.fs.azurebfs.utils.Base64;
MockSASTokenProvider.java  (1 usage found)
27 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.services  (1 usage found)
SharedKeyCredentials.java  (1 usage found)
47 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.cosn  (1 usage found)
CosNativeFileSystemStore.java  (1 usage found)
61 import com.qcloud.cos.utils.Base64;
org.apache.hadoop.fs.s3a  (1 usage found)
EncryptionTestUtils.java  (1 usage found)
26 import org.apache.commons.net.util.Base64;
org.apache.hadoop.hdfs.protocol.datatransfer.sasl  (3 usages found)
DataTransferSaslUtil.java  (1 usage found)
39 import org.apache.commons.codec.binary.Base64;
SaslDataTransferClient.java  (1 usage found)
47 import org.apache.commons.codec.binary.Base64;
SaslDataTransferServer.java  (1 usage found)
44 ...
{code}

[jira] [Created] (HADOOP-17108) Create Classes to wrap Guava code replacement

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17108:
--

 Summary: Create Classes to wrap Guava code replacement
 Key: HADOOP-17108
 URL: https://issues.apache.org/jira/browse/HADOOP-17108
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


Some Guava API usages in hadoop have no one-line replacement in Java8+. We 
need to create classes that wrap those common functionalities instead of 
reinventing the wheel everywhere.
For example, we should add a new package {{org.apache.hadoop.util.collections}}.
Then we create classes like {{MultiMap}}, implemented either from scratch or 
on top of the Apache Commons Collections 4.4 API.
The pro of using a wrapper is that it avoids spreading more dependencies 
through the POMs if we vote to use a third-party jar.
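
A minimal sketch of such a wrapper, assuming the hypothetical package and 
class name above (JDK-only variant):

{code:java}
package org.apache.hadoop.util.collections;  // hypothetical package

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal stand-in for Guava's Multimap: one key mapping to many values. */
public class MultiMap<K, V> {
  private final Map<K, List<V>> map = new HashMap<>();

  public void put(K key, V value) {
    // computeIfAbsent keeps the create-list-on-first-use logic in one place
    map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
  }

  public List<V> get(K key) {
    return map.getOrDefault(key, Collections.emptyList());
  }

  public boolean remove(K key, V value) {
    List<V> values = map.get(key);
    return values != null && values.remove(value);
  }
}
{code}

If the vote later goes to Apache Commons Collections, only this class changes; 
call sites do not.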






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/190/

[Jun 30, 2020 1:52:25 AM] (noreply) MAPREDUCE-7280. MiniMRYarnCluster has 
hard-coded timeout waiting to start history server, with no way to disable. 
(#2065)
[Jun 30, 2020 7:52:57 AM] (noreply) YARN-10331. Upgrade node.js to 10.21.0. 
(#2106)
[Jun 30, 2020 9:44:51 AM] (noreply) HADOOP-16798. S3A Committer thread pool 
shutdown problems. (#1963)
[Jun 30, 2020 2:10:17 PM] (Wei-Chiu Chuang) HDFS-15160. ReplicaMap, Disk 
Balancer, Directory Scanner and various FsDatasetImpl methods should use 
datanode readlock. Contributed by Stephen O'Donnell.
[Jun 30, 2020 6:39:16 PM] (Eric Yang) YARN-9809. Added node manager health 
status to resource manager registration call.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

 

[jira] [Reopened] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-07-01 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reopened HADOOP-17102:


Let's see if we can add a checkstyle rule so that no one imports any further 
Guava base classes.

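One possible shape for the rule, assuming checkstyle's stock IllegalImport 
module (the class list below is illustrative; the real rule would follow the 
replacement table quoted below):

{code:xml}
<module name="TreeWalker">
  <module name="IllegalImport">
    <!-- match imports by regex rather than whole packages -->
    <property name="regexp" value="true"/>
    <property name="illegalClasses"
        value="^com\.google\.common\.base\.(Optional|Function|Predicate|Supplier)"/>
  </module>
</module>
{code}
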
> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> We should have precommit rules to prevent further usage of Guava classes whose 
> equivalents are available in Java8+.
> A list replacing Guava APIs with Java 8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                 java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/191/

[Jul 1, 2020 4:38:07 AM] (Xiaoqiao He) HDFS-15416. Improve 
DataStorage#addStorageLocations() for empty locations. Contributed by jianghua 
zhu.
[Jul 1, 2020 6:06:27 AM] (Yiqun Lin) HDFS-15410. Add separated config file 
hdfs-fedbalance-default.xml for fedbalance tool. Contributed by Jinglun.
[Jul 1, 2020 6:18:18 AM] (Yiqun Lin) HDFS-15374. Add documentation for 
fedbalance tool. Contributed by Jinglun.
[Jul 1, 2020 7:28:35 AM] (noreply) HADOOP-17032. Fix getContentSummary in 
ViewFileSystem to handle multiple children mountpoints pointing to different 
filesystems (#2060). Contributed by Abhishek Das.
[Jul 1, 2020 7:52:25 AM] (noreply) HADOOP-17090. Increase precommit job timeout 
from 5 hours to 20 hours. (#2111). Contributed by Akira Ajisaka.
[Jul 1, 2020 9:57:11 AM] (noreply) HADOOP-17084 Update Dockerfile_aarch64 to 
use Bionic (#2103). Contributed by zhaorenhai.
[Jul 1, 2020 11:41:30 AM] (Szilard Nemeth) YARN-10325. Document 
max-parallel-apps for Capacity Scheduler. Contributed by Peter Bacsko
[Jul 1, 2020 12:10:55 PM] (Szilard Nemeth) YARN-10330. Add missing test 
scenarios to TestUserGroupMappingPlacementRule and 
TestAppNameMappingPlacementRule. Contributed by Peter Bacsko


[Error replacing 'FILE' - Workspace is not accessible]


[jira] [Resolved] (HADOOP-17032) Handle an internal dir in viewfs having multiple children mount points pointing to different filesystems

2020-07-01 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-17032.
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged PR. Thanx [~abhishekd] for the contribution

> Handle an internal dir in viewfs having multiple children mount points 
> pointing to different filesystems
> 
>
> Key: HADOOP-17032
> URL: https://issues.apache.org/jira/browse/HADOOP-17032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Reporter: Abhishek Das
>Assignee: Abhishek Das
>Priority: Major
> Fix For: 3.4.0
>
>
> In case the viewfs mount table is configured in a way where multiple child 
> mount points point to different file systems, getContentSummary or 
> getStatus don't return the expected result.
> {code:java}
> mount link /a/b/ → hdfs://nn1/a/b
>  mount link /a/d/ → file:///nn2/c/d{code}
> b has two files and d has 1 file. So getContentSummary on / should return 3 
> files.
> It also fails for the following scenario:
> {code:java}
> mount link  /internalDir -> /internalDir/linternalDir2
> mount link  /internalDir -> /internalDir/linkToDir2 -> hdfs://nn1/dir2{code}
> Exception:
> {code:java}
> java.io.IOException: Internal implementation error: expected file name to be 
> /java.io.IOException: Internal implementation error: expected file name to be 
> /
>  at 
> org.apache.hadoop.fs.viewfs.InternalDirOfViewFs.checkPathIsSlash(InternalDirOfViewFs.java:88)
>  at 
> org.apache.hadoop.fs.viewfs.InternalDirOfViewFs.getFileStatus(InternalDirOfViewFs.java:154)
>  at org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1684) 
> at org.apache.hadoop.fs.FileSystem.getContentSummary(FileSystem.java:1695) at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.getContentSummary(ViewFileSystem.java:918)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystemBaseTest.testGetContentSummary(ViewFileSystemBaseTest.java:1106){code}

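For reference, a mount table like the one described can be built 
programmatically; a minimal sketch (the mount table name "cluster" is 
illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class ViewFsMountExample {
  static Configuration mountTable() {
    Configuration conf = new Configuration();
    // fs.viewfs.mounttable.<name>.link.<mountPath> = <target URI>
    conf.set("fs.viewfs.mounttable.cluster.link./a/b", "hdfs://nn1/a/b");
    conf.set("fs.viewfs.mounttable.cluster.link./a/d", "file:///nn2/c/d");
    return conf;
  }
}
{code}

With the fix, getContentSummary on viewfs://cluster/ aggregates across both 
child filesystems instead of throwing the exception above.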

