[jira] [Updated] (HADOOP-17788) Replace IOUtils#closeQuietly usages

2021-07-07 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17788:
--
Target Version/s: 3.4.0

> Replace IOUtils#closeQuietly usages
> ---
>
> Key: HADOOP-17788
> URL: https://issues.apache.org/jira/browse/HADOOP-17788
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> IOUtils#closeQuietly has been deprecated since the 2.6 release of commons-io, 
> without any replacement. Since we already have a good replacement available 
> in Hadoop's own IOUtils, we should use it.
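
A minimal sketch of the swap, assuming Hadoop's own 
org.apache.hadoop.io.IOUtils#cleanupWithLogger as the replacement (the issue 
text does not name the exact target method):

{code:java}
import java.io.Closeable;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CloseQuietlyMigration {
  private static final Logger LOG =
      LoggerFactory.getLogger(CloseQuietlyMigration.class);

  static void close(Closeable stream) {
    // Before: org.apache.commons.io.IOUtils.closeQuietly(stream);
    // After: Hadoop's own helper closes the stream and logs (rather than
    // swallows) any IOException raised while closing.
    IOUtils.cleanupWithLogger(LOG, stream);
  }
}
{code}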






[jira] [Created] (HADOOP-17788) Replace IOUtils#closeQuietly usages

2021-07-02 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17788:
-

 Summary: Replace IOUtils#closeQuietly usages
 Key: HADOOP-17788
 URL: https://issues.apache.org/jira/browse/HADOOP-17788
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


IOUtils#closeQuietly has been deprecated since the 2.6 release of commons-io, 
without any replacement. Since we already have a good replacement available in 
Hadoop's own IOUtils, we should use it.






[jira] [Resolved] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-06-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HADOOP-17114.
---
Resolution: Duplicate

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These can simply be replaced by the Java API.
> From an analysis of the Hadoop code, the best way to replace Guava is to 
> take the following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import com.google.common.collect.Lists;
> ITestDynamoDBMetadataStore.java  (1 usage found)
> 49 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.shell  (1 usage found)
> AclCommands.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.viewfs

[jira] [Assigned] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-06-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17114:
-

Assignee: (was: Viraj Jasani)

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These can simply be replaced by the Java API.
> From an analysis of the Hadoop code, the best way to replace Guava is to 
> take the following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import com.google.common.collect.Lists;
> ITestDynamoDBMetadataStore.java  (1 usage found)
> 49 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.shell  (1 usage found)
> AclCommands.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache

[jira] [Commented] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-06-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365612#comment-17365612
 ] 

Viraj Jasani commented on HADOOP-17114:
---

With HADOOP-17152 and its sub-tasks resolved, marking this as a duplicate.

Thanks

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These can simply be replaced by the Java API.
> From an analysis of the Hadoop code, the best way to replace Guava is to 
> take the following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import com.google.common.collect.Lists;
> ITestDynamoDBMetadataStore.java  (1 usage found)
> 49 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.shell  (1

[jira] [Commented] (HADOOP-17668) Use profile hbase-2.0 by default and update hbase version

2021-06-14 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362970#comment-17362970
 ] 

Viraj Jasani commented on HADOOP-17668:
---

Ah, we have a problem here. HBase 2 so far ships only artifacts compiled with 
the default profile (Hadoop 2), not ones compiled with the Hadoop 3 profile. 
Hence, in order to bump to HBase 2 for the YARN timeline service, HBase 2 must 
first be built manually with the Hadoop 3 profile, and only then should we 
build Hadoop. If we don't build HBase manually with the Hadoop 3 profile, we 
will hit this known issue:
{code:java}
java.lang.IncompatibleClassChangeError: Found interface 
org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:536)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$400(FanOutOneBlockAsyncDFSOutputHelper.java:112)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:616)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:611)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:624)
 at 
org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:53)
 at 
org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:180)
 at 
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166)
 at 
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
 at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:662)
 at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:130)
 at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:848)
 at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:551)
 at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:492)
 at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:161)
 at 
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:63)
 at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:296) at 
org.apache.hadoop.hbase.master.region.MasterRegion.createWAL(MasterRegion.java:187)
 at 
org.apache.hadoop.hbase.master.region.MasterRegion.bootstrap(MasterRegion.java:207)
 at 
org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:307)
 at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
 at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:834)
 at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2091)
 at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:507)
{code}
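
For reference, a sketch of the build sequence described above. The profile 
switches are assumptions based on the usual HBase and Hadoop build 
conventions, not commands quoted from this issue:

{code}
# 1. In the HBase source tree: build and install HBase 2 artifacts
#    compiled against Hadoop 3 (assumed profile switch).
mvn clean install -DskipTests -Dhadoop.profile=3.0

# 2. Only then, in the Hadoop source tree: build Hadoop with the
#    hbase-2.0 profile for the YARN timeline service (assumed switch).
mvn clean install -DskipTests -Dhbase.profile=2.0
{code}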

> Use profile hbase-2.0 by default and update hbase version
> -
>
> Key: HADOOP-17668
> URL: https://issues.apache.org/jira/browse/HADOOP-17668
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> We currently use the hbase-1 profile by default (for those who aren't aware, 
> the YARN timeline service uses HBase as the underlying storage). There isn't 
> much development activity in HBase 1.x, and 2.x is production ready. I think 
> it's time to switch to HBase 2 by default.
>  
> The HBase 2 version currently used is 2.0.2. We should use a more recent 
> version (e.g. 2.2/2.3/2.4), and update the HBase 1 version as well.






[jira] [Comment Edited] (HADOOP-17744) Path.suffix() raises an exception when the path is in the root dir

2021-06-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17356679#comment-17356679
 ] 

Viraj Jasani edited comment on HADOOP-17744 at 6/9/21, 4:09 PM:


Outputs on trunk as of today:

1.
{code:java}
Path path = new Path("file:///something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
file:/somethingelse{code}
2.
{code:java}
Path path = new Path("/something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
/somethingelse{code}


was (Author: vjasani):
On trunk, with a scheme, suffix is producing an NPE:
{code:java}
new Path("file://something").suffix("else")
{code}
output:
{code:java}
java.lang.NullPointerException
 at org.apache.hadoop.fs.Path.<init>(Path.java:152)
 at org.apache.hadoop.fs.Path.<init>(Path.java:130)
 at org.apache.hadoop.fs.Path.suffix(Path.java:468)
{code}
However, this seems to be working fine:
{code:java}
Path path = new Path("/something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
/somethingelse{code}
 

> Path.suffix() raises an exception when the path is in the root dir
> --
>
> Key: HADOOP-17744
> URL: https://issues.apache.org/jira/browse/HADOOP-17744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Root cause of SPARK-34298.
> If you have a Path (/something) and call suffix on it.
> {code}
>  new Path("/something").suffix("else")
> {code}
> you see an error because the path doesn't have a parent
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: Can not create 
> a Path from an empty string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168)
> at org.apache.hadoop.fs.Path.suffix(Path.java:446)
> {code}






[jira] [Created] (HADOOP-17753) Keep restrict-imports-enforcer-rule for Guava Lists in hadoop-main pom

2021-06-09 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17753:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Lists in 
hadoop-main pom
 Key: HADOOP-17753
 URL: https://issues.apache.org/jira/browse/HADOOP-17753
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Comment Edited] (HADOOP-17744) Path.suffix() raises an exception when the path is in the root dir

2021-06-03 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17356679#comment-17356679
 ] 

Viraj Jasani edited comment on HADOOP-17744 at 6/3/21, 7:34 PM:


On trunk, with a scheme, suffix is producing an NPE:
{code:java}
new Path("file://something").suffix("else")
{code}
output:
{code:java}
java.lang.NullPointerException
 at org.apache.hadoop.fs.Path.<init>(Path.java:152)
 at org.apache.hadoop.fs.Path.<init>(Path.java:130)
 at org.apache.hadoop.fs.Path.suffix(Path.java:468)
{code}
However, this seems to be working fine:
{code:java}
Path path = new Path("/something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
/somethingelse{code}
 


was (Author: vjasani):
On trunk, with a scheme, suffix is producing an NPE:
{code:java}
new Path("file://something").suffix("else")
{code}

output:
{code:java}
java.lang.NullPointerException
 at org.apache.hadoop.fs.Path.<init>(Path.java:152)
 at org.apache.hadoop.fs.Path.<init>(Path.java:130)
 at org.apache.hadoop.fs.Path.suffix(Path.java:468)
{code}
However, this seems to be working fine:
{code:java}
Path path = new Path("/something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
/somethingelse{code}
 

> Path.suffix() raises an exception when the path is in the root dir
> --
>
> Key: HADOOP-17744
> URL: https://issues.apache.org/jira/browse/HADOOP-17744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Root cause of SPARK-34298.
> If you have a Path (/something) and call suffix on it.
> {code}
>  new Path("/something").suffix("else")
> {code}
> you see an error because the path doesn't have a parent
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: Can not create 
> a Path from an empty string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168)
> at org.apache.hadoop.fs.Path.suffix(Path.java:446)
> {code}






[jira] [Commented] (HADOOP-17744) Path.suffix() raises an exception when the path is in the root dir

2021-06-03 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17356679#comment-17356679
 ] 

Viraj Jasani commented on HADOOP-17744:
---

On trunk, with a scheme, suffix is producing an NPE:
{code:java}
new Path("file://something").suffix("else")
{code}

output:
{code:java}
java.lang.NullPointerException
 at org.apache.hadoop.fs.Path.<init>(Path.java:152)
 at org.apache.hadoop.fs.Path.<init>(Path.java:130)
 at org.apache.hadoop.fs.Path.suffix(Path.java:468)
{code}
However, this seems to be working fine:
{code:java}
Path path = new Path("/something").suffix("else");
System.out.println(path);
{code}
output:
{code:java}
/somethingelse{code}
 

> Path.suffix() raises an exception when the path is in the root dir
> --
>
> Key: HADOOP-17744
> URL: https://issues.apache.org/jira/browse/HADOOP-17744
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Root cause of SPARK-34298.
> If you have a Path (/something) and call suffix on it.
> {code}
>  new Path("/something").suffix("else")
> {code}
> you see an error because the path doesn't have a parent
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: Can not create 
> a Path from an empty string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168)
> at org.apache.hadoop.fs.Path.suffix(Path.java:446)
> {code}






[jira] [Created] (HADOOP-17743) Replace Guava Lists usage by Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects

2021-06-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17743:
-

 Summary: Replace Guava Lists usage by Hadoop's own Lists in 
hadoop-common, hadoop-tools and cloud-storage projects
 Key: HADOOP-17743
 URL: https://issues.apache.org/jira/browse/HADOOP-17743
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Updated] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-06-03 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17152:
--
Target Version/s: 3.4.0  (was: 3.4.0, 3.1.5, 3.2.3, 3.3.2)

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList. Replacing the method calls throughout the code can be invasive 
> because Guava offers some APIs that do not exist in java.util. This Jira is 
> the task of implementing those missing APIs in Hadoop Common, as a step 
> toward getting rid of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
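
A minimal sketch of such a wrapper, backed only by java.util and following the 
package and signatures listed above (illustrative; the class that finally 
lands in trunk may differ in shape):

{code:java}
package org.apache.hadoop.util.unguava;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;

/**
 * Guava-free Lists factories. The remaining methods from the list above
 * (newLinkedList(Iterable), asList) follow the same pattern.
 */
public final class Lists {

  private Lists() {
    // static utility class, no instances
  }

  public static <E> ArrayList<E> newArrayList() {
    return new ArrayList<>();
  }

  @SafeVarargs
  public static <E> ArrayList<E> newArrayList(E... elements) {
    ArrayList<E> list = new ArrayList<>(elements.length);
    Collections.addAll(list, elements);
    return list;
  }

  public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
    // Copy in one shot when the size is known, otherwise drain the iterator.
    return elements instanceof Collection
        ? new ArrayList<>((Collection<? extends E>) elements)
        : newArrayList(elements.iterator());
  }

  public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
    ArrayList<E> list = new ArrayList<>();
    while (elements.hasNext()) {
      list.add(elements.next());
    }
    return list;
  }

  public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize) {
    return new ArrayList<>(initialArraySize);
  }

  public static <E> LinkedList<E> newLinkedList() {
    return new LinkedList<>();
  }
}
{code}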






[jira] [Updated] (HADOOP-17649) Update wildfly openssl to 2.1.3.Final

2021-05-29 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17649:
--
Labels:   (was: pull-request-available)

> Update wildfly openssl to 2.1.3.Final
> -
>
> Key: HADOOP-17649
> URL: https://issues.apache.org/jira/browse/HADOOP-17649
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-25644
> A memory leak flaw was found in WildFly OpenSSL in versions prior to 
> 1.1.3.Final, where it removes an HTTP session. It may allow the attacker to 
> cause OOM leading to a denial of service. The highest threat from this 
> vulnerability is to system availability.






[jira] [Assigned] (HADOOP-17649) Update wildfly openssl to 2.1.3.Final

2021-05-29 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17649:
-

Assignee: (was: Viraj Jasani)

> Update wildfly openssl to 2.1.3.Final
> -
>
> Key: HADOOP-17649
> URL: https://issues.apache.org/jira/browse/HADOOP-17649
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-25644
> A memory leak flaw was found in WildFly OpenSSL in versions prior to 
> 1.1.3.Final, where it removes an HTTP session. It may allow the attacker to 
> cause OOM leading to a denial of service. The highest threat from this 
> vulnerability is to system availability.






[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-27 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17352448#comment-17352448
 ] 

Viraj Jasani commented on HADOOP-17152:
---

Thank you [~ahussein].

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList. Replacing the method calls throughout the code can be invasive 
> because Guava offers some APIs that do not exist in java.util. This Jira is 
> the task of implementing those missing APIs in Hadoop Common, as a step 
> toward getting rid of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)






[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-26 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17351687#comment-17351687
 ] 

Viraj Jasani commented on HADOOP-17152:
---

[~ahussein] I can take up HADOOP-17114 and divide it per module, since I am 
taking this one. Is that fine with you?

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList. Replacing the method calls throughout the code can be invasive 
> because Guava offers some APIs that do not exist in java.util. This Jira is 
> the task of implementing those missing APIs in Hadoop Common, as a step 
> toward getting rid of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)






[jira] [Updated] (HADOOP-17732) Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom

2021-05-25 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17732:
--
Target Version/s: 3.4.0

> Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom
> -
>
> Key: HADOOP-17732
> URL: https://issues.apache.org/jira/browse/HADOOP-17732
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Now that all sub-tasks to remove the dependency on Guava Sets are completed, 
> we should move the restrict-imports-enforcer-rule for the Guava Sets import 
> into the hadoop-main pom and remove it from the individual project poms.
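
For context, the rule in question is the de.skuzzle.enforcer 
restrict-imports-enforcer-rule; a sketch of what the hadoop-main pom entry 
could look like (element values are illustrative, not copied from the actual 
pom):

{code:xml}
<restrictImports
    implementation="de.skuzzle.enforcer.restrictimports.rule.RestrictImports">
  <!-- Illustrative sketch: ban the Guava Sets imports tree-wide. -->
  <includeTestCode>true</includeTestCode>
  <reason>Use org.apache.hadoop.util.Sets instead of Guava Sets</reason>
  <bannedImports>
    <bannedImport>org.apache.hadoop.thirdparty.com.google.common.collect.Sets</bannedImport>
    <bannedImport>com.google.common.collect.Sets</bannedImport>
  </bannedImports>
</restrictImports>
{code}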






[jira] [Created] (HADOOP-17732) Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom

2021-05-25 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17732:
-

 Summary: Keep restrict-imports-enforcer-rule for Guava Sets in 
hadoop-main pom
 Key: HADOOP-17732
 URL: https://issues.apache.org/jira/browse/HADOOP-17732
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Now that all sub-tasks to remove the dependency on Guava Sets are completed, 
we should move the restrict-imports-enforcer-rule for the Guava Sets import 
into the hadoop-main pom and remove it from the individual project poms.






[jira] [Commented] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-25 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350966#comment-17350966
 ] 

Viraj Jasani commented on HADOOP-17115:
---

Thank you [~ahussein] and [~busbey] for all your reviews. It was a great help.

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithSt

[jira] [Commented] (HADOOP-17700) ExitUtil#halt info log should log HaltException

2021-05-25 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350965#comment-17350965
 ] 

Viraj Jasani commented on HADOOP-17700:
---

Yeah, I think the 3.3.1 preparation has been going on for many days anyway, 
and now the RC is prepared. It's fine [~tasanuma], thanks a lot for your help 
in merging it.

> ExitUtil#halt info log should log HaltException
> ---
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.1.5, 3.2.3, 3.3.2
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> ExitUtil#halt with a non-zero exit status code emits an info log with an 
> incorrect number of placeholders. We should log the HaltException along with 
> the message.
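
A sketch of the general SLF4J pattern at issue, with made-up log text and a 
hypothetical logHalt() helper (not the exact trunk code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HaltLogging {
  private static final Logger LOG = LoggerFactory.getLogger(HaltLogging.class);

  static void logHalt(int status, String message, Exception haltException) {
    // Broken pattern: two placeholders but only one argument, and the
    // exception is dropped entirely.
    LOG.info("Halting with status {}: {}", status);
    // Fixed pattern: every placeholder has a matching argument, and the
    // trailing exception is logged with its stack trace.
    LOG.info("Halting with status {}: {}", status, message, haltException);
  }
}
{code}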






[jira] [Comment Edited] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350021#comment-17350021
 ] 

Viraj Jasani edited comment on HADOOP-17725 at 5/23/21, 12:55 PM:
--

[~ste...@apache.org] Would you like to take a look at 
https://github.com/apache/hadoop/pull/3041?

Thanks


was (Author: vjasani):
[~ste...@apache.org] Would you like to take a look?

Thanks

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[jira] [Commented] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-23 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17350021#comment-17350021
 ] 

Viraj Jasani commented on HADOOP-17725:
---

[~ste...@apache.org] Would you like to take a look?

Thanks

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[jira] [Assigned] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17725:
-

Assignee: Viraj Jasani

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[jira] [Created] (HADOOP-17726) Replace Sets#newHashSet() and newTreeSet() with constructors directly

2021-05-21 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17726:
-

 Summary: Replace Sets#newHashSet() and newTreeSet() with 
constructors directly
 Key: HADOOP-17726
 URL: https://issues.apache.org/jira/browse/HADOOP-17726
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani


As per the guidance in Guava's Sets#newHashSet() and Sets#newTreeSet() 
javadocs, we should get rid of them and use new HashSet<>() and new 
TreeSet<>() directly.

Once HADOOP-17115, HADOOP-17721, HADOOP-17722 and HADOOP-17720 are fixed, 
please feel free to take this up.
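
A minimal sketch of the intended replacement:

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetFactories {
  static void example() {
    // Before: factory methods inherited from Guava's style.
    // Set<String> hash = Sets.newHashSet();
    // Set<String> tree = Sets.newTreeSet();

    // After: the diamond operator makes the factories redundant.
    Set<String> hash = new HashSet<>();
    Set<String> tree = new TreeSet<>();
    hash.add("k1");
    tree.add("k1");
  }
}
{code}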






[jira] [Updated] (HADOOP-17700) ExitUtil#halt info log should log HaltException

2021-05-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17700:
--
Summary: ExitUtil#halt info log should log HaltException  (was: 
ExitUtil#halt info log with incorrect placeholders)

> ExitUtil#halt info log should log HaltException
> ---
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> ExitUtil#halt with a non-zero exit status code emits an info log with an 
> incorrect number of placeholders. We should log the HaltException along with 
> the message.






[jira] [Updated] (HADOOP-17721) Replace Guava Sets usage by Hadoop's own Sets in hadoop-yarn-project

2021-05-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17721:
--
Summary: Replace Guava Sets usage by Hadoop's own Sets in 
hadoop-yarn-project  (was: Replace Guava Sets usage by Hadoop's own Sets in 
Yarn)

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-yarn-project
> 
>
> Key: HADOOP-17721
> URL: https://issues.apache.org/jira/browse/HADOOP-17721
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>







[jira] [Updated] (HADOOP-17722) Replace Guava Sets usage by Hadoop's own Sets in hadoop-mapreduce-project

2021-05-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17722:
--
Summary: Replace Guava Sets usage by Hadoop's own Sets in 
hadoop-mapreduce-project  (was: Replace Guava Sets usage by Hadoop's own Sets 
in MapReduce)

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-mapreduce-project
> -
>
> Key: HADOOP-17722
> URL: https://issues.apache.org/jira/browse/HADOOP-17722
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>







[jira] [Updated] (HADOOP-17720) Replace Guava Sets usage by Hadoop's own Sets in hadoop-hdfs-project

2021-05-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17720:
--
Summary: Replace Guava Sets usage by Hadoop's own Sets in 
hadoop-hdfs-project  (was: Replace Guava Sets usage by Hadoop's own Sets in 
HDFS)

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-hdfs-project
> 
>
> Key: HADOOP-17720
> URL: https://issues.apache.org/jira/browse/HADOOP-17720
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>







[jira] [Created] (HADOOP-17721) Replace Guava Sets usage by Hadoop's own Sets in Yarn

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17721:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in Yarn
 Key: HADOOP-17721
 URL: https://issues.apache.org/jira/browse/HADOOP-17721
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17722) Replace Guava Sets usage by Hadoop's own Sets in MapReduce

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17722:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in MapReduce
 Key: HADOOP-17722
 URL: https://issues.apache.org/jira/browse/HADOOP-17722
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Created] (HADOOP-17720) Replace Guava Sets usage by Hadoop's own Sets in HDFS

2021-05-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17720:
-

 Summary: Replace Guava Sets usage by Hadoop's own Sets in HDFS
 Key: HADOOP-17720
 URL: https://issues.apache.org/jira/browse/HADOOP-17720
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani
Assignee: Viraj Jasani









[jira] [Updated] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17115:
--
Target Version/s: 3.4.0

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = Sets.newHashSet("host4", 
> "host5");
> org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
> JournalNodeSyncer.jav
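
For the common patterns in the list above, a minimal sketch of the plain-Java replacements the description proposes:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HashSetReplacements {
  public static void main(String[] args) {
    // Guava: Set<String> s = Sets.newHashSet();
    Set<String> empty = new HashSet<>();

    // Guava: Set<String> s = Sets.newHashSet("nn1", "nn2");
    Set<String> all = new HashSet<>(Arrays.asList("nn1", "nn2"));

    System.out.println(empty.isEmpty() + " " + all);
  }
}
{code}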

[jira] [Updated] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17115:
--
Summary: Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
hadoop-tools  (was: Replace Guava Sets usage by Hadoop's own Sets)

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded(

[jira] [Updated] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17700:
--
Status: Patch Available  (was: In Progress)

> ExitUtil#halt info log with incorrect placeholders
> --
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ExitUtil#halt with a non-zero exit status code produces an info log with an 
> incorrect number of placeholders. We should also log the HaltException itself.
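
To make the bug class concrete, here is a hypothetical SLF4J example, not the actual ExitUtil code: the first call has more placeholders than formatting arguments, so the output keeps a literal {}; the second matches the counts and passes the exception last so its stack trace is logged too.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderMismatch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PlaceholderMismatch.class);

  public static void main(String[] args) {
    Exception he = new IllegalStateException("halt"); // stand-in for HaltException
    int status = 1;

    // Buggy: two placeholders, one argument -> "... message {}" is printed
    // verbatim and no exception is attached.
    LOG.info("Halt with status {} message {}", status);

    // Fixed: placeholder count matches the arguments, and the throwable is
    // passed as the final parameter so it is logged with the message.
    LOG.info("Halt with status {} message {}", status, he.getMessage(), he);
  }
}
{code}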



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17700 started by Viraj Jasani.
-
> ExitUtil#halt info log with incorrect placeholders
> --
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> ExitUtil#halt with a non-zero exit status code produces an info log with an 
> incorrect number of placeholders. We should also log the HaltException itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-16 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17345685#comment-17345685
 ] 

Viraj Jasani commented on HADOOP-17115:
---

Thanks for your interest [~quanli]. Reducing the Guava dependency in Hadoop code 
is a big and valuable initiative started by [~ahussein] on the parent Jira 
HADOOP-17098. Please feel free to take a detailed look; similar discussion has 
already been covered there.
{quote}In general these changes hardly or no advantage.
{quote}
I disagree with this comment because we have had to bump Guava multiple times 
due to CVEs, and the problem keeps coming back. Every time a new CVE is 
introduced, we need not just a Hadoop release but a thirdparty release first, 
followed by a Hadoop release, and some users (even those who do not wish to 
upgrade on any patch release) are forced to keep upgrading because of the 
security vulnerabilities.

In any case, please feel free to look at the parent Jira HADOOP-17098; a lot of 
work has already been done by [~ahussein], and I am helping out with the 
remaining pieces in the hope that we can get rid of this cycle of redundant 
upgrades caused by CVEs in Guava.
{quote}saw a comment telling to copy code from guava, is no licence issue here?
{quote}
The patch introduces a new Sets class in the Hadoop code base, but it only 
copies the required method signatures from Guava's Sets; the internal logic is 
not copied, and the implementation is restricted to our use case.

Thanks

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddre

[jira] [Created] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17700:
-

 Summary: ExitUtil#halt info log with incorrect placeholders
 Key: HADOOP-17700
 URL: https://issues.apache.org/jira/browse/HADOOP-17700
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


ExitUtil#halt with a non-zero exit status code produces an info log with an 
incorrect number of placeholders. We should also log the HaltException itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17689) Avoid Potential NPE in org.apache.hadoop.fs

2021-05-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343301#comment-17343301
 ] 

Viraj Jasani commented on HADOOP-17689:
---

Thank you [~tasanuma] for your quick review. I was wondering if we can 
deprecate the old method, i.e.
{code:java}
/**
 * Returns the parent of a path or null if at root. Better alternative is
 * {@link #getOptionalParentPath()} to handle nullable value for root path.
 *
 * @return the parent of a path or null if at root
 */
public Path getParent() {
  return getParentUtil();
}

{code}
If you agree to deprecate this method, I can provide an addendum PR?
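
For reference, a minimal sketch of what an Optional-based accessor could look like next to the nullable one; the simplified body is an assumption for illustration, not the actual Path implementation:

{code:java}
import java.util.Optional;

public class PathSketch {
  private final String path;

  PathSketch(String path) {
    this.path = path;
  }

  /** Nullable accessor, simplified from the snippet above. */
  public PathSketch getParent() {
    int lastSlash = path.lastIndexOf('/');
    if (path.isEmpty() || (lastSlash == 0 && path.length() == 1)) {
      return null; // at root
    }
    return new PathSketch(lastSlash <= 0 ? "/" : path.substring(0, lastSlash));
  }

  /** Null-safe alternative: forces callers to handle the root case. */
  public Optional<PathSketch> getOptionalParentPath() {
    return Optional.ofNullable(getParent());
  }
}
{code}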

> Avoid Potential NPE in org.apache.hadoop.fs
> ---
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analyses found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17689) Avoid Potential NPE in org.apache.hadoop.fs

2021-05-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17689:
--
Target Version/s: 3.4.0

> Avoid Potential NPE in org.apache.hadoop.fs
> ---
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17689) Avoid Potential NPE in org.apache.hadoop.fs

2021-05-12 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17343194#comment-17343194
 ] 

Viraj Jasani commented on HADOOP-17689:
---

[~tasanuma] This is also a relevant and small fix. Would you like to take a look 
at [https://github.com/apache/hadoop/pull/3008]?

Thanks

> Avoid Potential NPE in org.apache.hadoop.fs
> ---
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17689) Avoid Potential NPE in org.apache.hadoop.fs

2021-05-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17689:
--
Summary: Avoid Potential NPE in org.apache.hadoop.fs  (was: Potential NPE 
in org.apache.hadoop.fs)

> Avoid Potential NPE in org.apache.hadoop.fs
> ---
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17686) Potential NPE in org.apache.hadoop.fs.obs

2021-05-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17342353#comment-17342353
 ] 

Viraj Jasani commented on HADOOP-17686:
---

[~aajisaka] Would you like to take a look? It's a small fix. Thanks

> Potential NPE in org.apache.hadoop.fs.obs
> -
>
> Key: HADOOP-17686
> URL: https://issues.apache.org/jira/browse/HADOOP-17686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
> public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null; <--- Null returned
> }
> {code}
>  
> {code:java}
> private static void getDirectories(final String key, final String sourceKey,
>   final Set<String> directories) {
> Path p = new Path(key);
> Path sourcePath = new Path(sourceKey);
> // directory must add first
> if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
>   directories.add(p.toString());
> }
> while (p.compareTo(sourcePath) > 0) {
>   p = p.getParent(); <--- NPE
>   if (p.isRoot() || p.compareTo(sourcePath) == 0) {
> break;
>   }
> {code}
> Given a root path, this will lead to an NPE in method getDirectories.
>  
> Full trace:
>  
> 1. Return null to caller
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432]
> 2. Function getParent executes and returns
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L875]
> 3. The return value of function getParent is passed as the this pointer to 
> function isRoot (the return value of function getParent can be null)
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L876]
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6
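
One way to guard the loop from the report; a sketch of an assumed fix, not necessarily the committed patch:

{code:java}
import java.util.Set;
import org.apache.hadoop.fs.Path;

public class GetDirectoriesSketch {
  // Null-guarded rewrite of getDirectories: stop walking up once getParent()
  // returns null (i.e. the root was reached) instead of dereferencing it.
  static void getDirectories(String key, String sourceKey, Set<String> directories) {
    Path p = new Path(key);
    Path sourcePath = new Path(sourceKey);
    // directory must add first
    if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
      directories.add(p.toString());
    }
    while (p.compareTo(sourcePath) > 0) {
      Path parent = p.getParent();
      if (parent == null) {
        break; // p was the root; nothing above it to record
      }
      p = parent;
      if (p.isRoot() || p.compareTo(sourcePath) == 0) {
        break;
      }
      directories.add(p.toString()); // assumed loop body, per the truncated original
    }
  }
}
{code}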



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17686) Potential NPE in org.apache.hadoop.fs.obs

2021-05-10 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341788#comment-17341788
 ] 

Viraj Jasani edited comment on HADOOP-17686 at 5/10/21, 9:17 AM:
-

[~smeng] [~tasanuma] Would you like to take a look at the PR? It's a small fix. 
If you are fine with the approach, I can provide a PR for HADOOP-17689 as well, 
which has a similar analysis.

Thanks


was (Author: vjasani):
[~smeng] [~tasanuma] Would you like to take a look at the PR? It's a small fix.

Thanks

> Potential NPE in org.apache.hadoop.fs.obs
> -
>
> Key: HADOOP-17686
> URL: https://issues.apache.org/jira/browse/HADOOP-17686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
> public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null; <--- Null returned
> }
> {code}
>  
> {code:java}
> private static void getDirectories(final String key, final String sourceKey,
>   final Set<String> directories) {
> Path p = new Path(key);
> Path sourcePath = new Path(sourceKey);
> // directory must add first
> if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
>   directories.add(p.toString());
> }
> while (p.compareTo(sourcePath) > 0) {
>   p = p.getParent(); <--- NPE
>   if (p.isRoot() || p.compareTo(sourcePath) == 0) {
> break;
>   }
> {code}
> Given a root path, this will lead to an NPE in method getDirectories.
>  
> Full trace:
>  
> 1. Return null to caller
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432]
> 2. Function getParent executes and returns
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L875]
> 3. The return value of function getParent is passed as the this pointer to 
> function isRoot (the return value of function getParent can be null)
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L876]
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17686) Potential NPE in org.apache.hadoop.fs.obs

2021-05-10 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341788#comment-17341788
 ] 

Viraj Jasani commented on HADOOP-17686:
---

[~smeng] [~tasanuma] Would you like to take a look at the PR? It's a small fix.

Thanks

> Potential NPE in org.apache.hadoop.fs.obs
> -
>
> Key: HADOOP-17686
> URL: https://issues.apache.org/jira/browse/HADOOP-17686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
> public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null; <--- Null returned
> }
> {code}
>  
> {code:java}
> private static void getDirectories(final String key, final String sourceKey,
>   final Set<String> directories) {
> Path p = new Path(key);
> Path sourcePath = new Path(sourceKey);
> // directory must add first
> if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
>   directories.add(p.toString());
> }
> while (p.compareTo(sourcePath) > 0) {
>   p = p.getParent(); <--- NPE
>   if (p.isRoot() || p.compareTo(sourcePath) == 0) {
> break;
>   }
> {code}
> Given a root path, this will lead to an NPE in method getDirectories.
>  
> Full trace:
>  
> 1. Return null to caller
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432]
> 2. Function getParent executes and returns
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L875]
> 3. The return value of function getParent is passed as the this pointer to 
> function isRoot (the return value of function getParent can be null)
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L876]
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17689) Potential NPE in org.apache.hadoop.fs

2021-05-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341665#comment-17341665
 ] 

Viraj Jasani commented on HADOOP-17689:
---

This is similar to HADOOP-17686

> Potential NPE in org.apache.hadoop.fs
> -
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17689) Potential NPE in org.apache.hadoop.fs

2021-05-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17689:
-

Assignee: Viraj Jasani

> Potential NPE in org.apache.hadoop.fs
> -
>
> Key: HADOOP-17689
> URL: https://issues.apache.org/jira/browse/HADOOP-17689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/
>  Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
>   public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null;
> }
> {code}
>  
> {code:java}
>   public FSDataOutputStream createInternal (Path f,
>   EnumSet<CreateFlag> flag, FsPermission absolutePermission,
>   int bufferSize, short replication, long blockSize, Progressable progress,
>   ChecksumOpt checksumOpt, boolean createParent) throws IOException {
> checkPath(f);
> 
> // Default impl assumes that permissions do not matter
> // calling the regular create is good enough.
> // FSs that implement permissions should override this.
> if (!createParent) { // parent must exist.
>   // since this.create makes parent dirs automatically
>   // we must throw exception if parent does not exist.
>   final FileStatus stat = getFileStatus(f.getParent()); // NPE!
>   if (stat == null) {
> throw new FileNotFoundException("Missing parent:" + f);
>   }
> {code}
> Full Trace:
> 1. Return null to caller
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432
> 2. The return value of function getParent is used as the 1st parameter in 
> function getFileStatus (the return value of function getParent can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L93
> 3. f is used as the 1st parameter in function checkPath (f can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java#L127
> 4. path is passed as the this pointer to function toUri (path can be null)
> https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java#L369
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17686) Potential NPE in org.apache.hadoop.fs.obs

2021-05-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341662#comment-17341662
 ] 

Viraj Jasani commented on HADOOP-17686:
---

[~brahmareddy] [~brahma] Could you please take a look when you get time?

Thanks

> Potential NPE in org.apache.hadoop.fs.obs
> -
>
> Key: HADOOP-17686
> URL: https://issues.apache.org/jira/browse/HADOOP-17686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
> public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null; <--- Null returned
> }
> {code}
>  
> {code:java}
> private static void getDirectories(final String key, final String sourceKey,
>   final Set<String> directories) {
> Path p = new Path(key);
> Path sourcePath = new Path(sourceKey);
> // directory must add first
> if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
>   directories.add(p.toString());
> }
> while (p.compareTo(sourcePath) > 0) {
>   p = p.getParent(); <--- NPE
>   if (p.isRoot() || p.compareTo(sourcePath) == 0) {
> break;
>   }
> {code}
> Given a root path, this will lead to an NPE in method getDirectories.
>  
> Full trace:
>  
> 1. Return null to caller
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432]
> 2. Function getParent executes and returns
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L875]
> 3. The return value of function getParent is passed as the this pointer to 
> function isRoot (the return value of function getParent can be null)
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L876]
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-08 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341217#comment-17341217
 ] 

Viraj Jasani commented on HADOOP-17115:
---

[~ahussein] The QA result is available, if you would like to take a look at the PR.

Thanks

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = Sets.newHashSet("

[jira] [Assigned] (HADOOP-17686) Potential NPE in org.apache.hadoop.fs.obs

2021-05-08 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17686:
-

Assignee: Viraj Jasani

> Potential NPE in org.apache.hadoop.fs.obs
> -
>
> Key: HADOOP-17686
> URL: https://issues.apache.org/jira/browse/HADOOP-17686
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Error Reporter
>Assignee: Viraj Jasani
>Priority: Major
>
> Hello,
> Our code analysis found the following potential NPE:
>  
> {code:java}
> public Path getParent() {
> String path = uri.getPath();
> int lastSlash = path.lastIndexOf('/');
> int start = startPositionWithoutWindowsDrive(path);
> if ((path.length() == start) ||   // empty path
> (lastSlash == start && path.length() == start+1)) { // at root
>   return null; <--- Null returned
> }
> {code}
>  
> {code:java}
> private static void getDirectories(final String key, final String sourceKey,
>   final Set<String> directories) {
> Path p = new Path(key);
> Path sourcePath = new Path(sourceKey);
> // directory must add first
> if (key.endsWith("/") && p.compareTo(sourcePath) > 0) {
>   directories.add(p.toString());
> }
> while (p.compareTo(sourcePath) > 0) {
>   p = p.getParent(); <--- NPE
>   if (p.isRoot() || p.compareTo(sourcePath) == 0) {
> break;
>   }
> {code}
> Given a root path, this will lead to an NPE in method getDirectories.
>  
> Full trace:
>  
> 1. Return null to caller
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java#L432]
> 2. Function getParent executes and returns
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L875]
> 3. The return value of function getParent is passed as the this pointer to 
> function isRoot (the return value of function getParent can be null)
> [https://github.com/apache/hadoop/blob/f40e3eb0590f85bb42d2471992bf5d524628fdd6/hadoop-cloud-storage-project/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java#L876]
> Commit: f40e3eb0590f85bb42d2471992bf5d524628fdd6



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17115:
--
Summary: Replace Guava Sets usage by Hadoop's own Sets  (was: Replace Guava 
initialization of Sets.newHashSet)

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = Sets.newHashSet("host4", 
> "host5");
> org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
> JournalNodeSyncer.java  (2 usages found)
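
For context on the renamed issue above: the replacement pattern is mechanical at
each call site. A minimal sketch, assuming only java.util (a Hadoop-side Sets
wrapper, as the new summary suggests, would keep the same call shape):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetsReplacementExample {
  public static void main(String[] args) {
    // Guava style being removed: Sets.newHashSet("k1", "k2", "k3")
    Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));
    System.out.println(keys.contains("k2")); // prints: true
  }
}
{code}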

[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-05 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17339864#comment-17339864
 ] 

Viraj Jasani commented on HADOOP-17152:
---

Oh yes, it seems we are using multiple implementations of newArrayList and not 
just newArrayList(). Thanks

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The guava Lists class provides some wrappers around java ArrayList and LinkedList.
> Replacing the method calls throughout the code can be invasive because guava 
> offers some APIs that do not exist in java util. This Jira is the task of 
> implementing those missing APIs in hadoop common, as a step toward getting rid 
> of guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists (see the sketch after this list):
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
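
A rough illustration of such a wrapper, backed purely by java.util (a sketch of
the signatures listed above, not the committed implementation; asList is omitted
for brevity):

{code:java}
package org.apache.hadoop.util.unguava;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.LinkedList;

public final class Lists {

  private Lists() {}

  public static <E> ArrayList<E> newArrayList() {
    return new ArrayList<>();
  }

  @SafeVarargs
  public static <E> ArrayList<E> newArrayList(E... elements) {
    ArrayList<E> list = new ArrayList<>(elements.length);
    for (E e : elements) {
      list.add(e);
    }
    return list;
  }

  public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
    ArrayList<E> list = new ArrayList<>();
    while (elements.hasNext()) {
      list.add(elements.next());
    }
    return list;
  }

  @SuppressWarnings("unchecked")
  public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
    // A Collection can be copied in one call; other Iterables go via iterator().
    if (elements instanceof Collection) {
      return new ArrayList<>((Collection<? extends E>) elements);
    }
    return newArrayList(elements.iterator());
  }

  public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize) {
    return new ArrayList<>(initialArraySize);
  }

  public static <E> LinkedList<E> newLinkedList() {
    return new LinkedList<>();
  }

  public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements) {
    LinkedList<E> list = new LinkedList<>();
    for (E e : elements) {
      list.add(e);
    }
    return list;
  }
}
{code}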



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-05 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17339832#comment-17339832
 ] 

Viraj Jasani commented on HADOOP-17152:
---

[~ahussein] Thanks. Btw, I was wondering if we really need a new wrapper. 
Perhaps we can directly replace them with "new ArrayList<>()"?
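
To make the trade-off being discussed concrete (plain java.util only): the no-arg
factory is a drop-in replacement, but the varargs and Iterator variants are not
one-liners, which is where a shared wrapper earns its keep:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListsComparison {
  public static void main(String[] args) {
    // Drop-in: Lists.newArrayList() -> new ArrayList<>()
    List<String> empty = new ArrayList<>();

    // Not drop-in: Lists.newArrayList("a", "b") needs an explicit copy,
    // since Arrays.asList alone returns a fixed-size view.
    List<String> fromVarargs = new ArrayList<>(Arrays.asList("a", "b"));
    fromVarargs.add("c"); // fine here; would throw on Arrays.asList directly
    System.out.println(empty.size() + " " + fromVarargs);
  }
}
{code}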

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The guava Lists class provides some wrappers around java ArrayList and LinkedList.
> Replacing the method calls throughout the code can be invasive because guava 
> offers some APIs that do not exist in java util. This Jira is the task of 
> implementing those missing APIs in hadoop common, as a step toward getting rid 
> of guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet

2021-05-05 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17339820#comment-17339820
 ] 

Viraj Jasani edited comment on HADOOP-17115 at 5/5/21, 6:22 PM:


[~ahussein] I think this makes sense. If you are fine with it, can I take this up?

Other than Sets.newHashSet, we also have lots of Maps.newHashMap. Since the 
number of occurrences is quite high, I believe we should have another subtask of 
the parent Jira; I can create it and assign it to myself if you don't mind. 
Moreover, I can also take up HADOOP-17152 unless you already have a patch 
available.

Thanks


was (Author: vjasani):
[~ahussein] I think this makes sense. If you are fine with it, can I take this up?

Other than Sets.newHashSet, we also have lots of Maps.newHashMap. Since the 
number of occurrences is quite high, I believe we should have another subtask of 
the parent Jira; I can create it and assign it to myself if you don't mind.

Thanks
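
The Maps.newHashMap case mentioned in the comment above follows the same 
mechanical pattern; a minimal java.util sketch for illustration:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class MapsReplacementExample {
  public static void main(String[] args) {
    // Guava style being removed: Maps.newHashMap()
    Map<String, Integer> counts = new HashMap<>();
    counts.put("k1", 1);
    System.out.println(counts); // prints: {k1=1}
  }
}
{code}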

> Replace Guava initialization of Sets.newHashSet
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)

[jira] [Assigned] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet

2021-05-05 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17115:
-

Assignee: Viraj Jasani

> Replace Guava initialization of Sets.newHashSet
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = Sets.newHashSet("host4", 
> "host5");
> org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
> JournalNodeSyncer.java  (2 usages found)
> getOtherJournalNodeAddrs()  (1 usage found)
> 

[jira] [Commented] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet

2021-05-05 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17339820#comment-17339820
 ] 

Viraj Jasani commented on HADOOP-17115:
---

[~ahussein] I think this makes sense. If you are fine with it, can I take this up?

Other than Sets.newHashSet, we also have lots of Maps.newHashMap. Since the 
number of occurrences is quite high, I believe we should have another subtask of 
the parent Jira; I can create it and assign it to myself if you don't mind.

Thanks

> Replace Guava initialization of Sets.newHashSet
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)

[jira] [Work started] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2021-05-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11616 started by Viraj Jasani.
-
> Remove workaround for Curator's ChildReaper requiring Guava 15+
> ---
>
> Key: HADOOP-11616
> URL: https://issues.apache.org/jira/browse/HADOOP-11616
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
> {{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
> should remove these classes and update any usages to point to Curator itself 
> once we update Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2021-05-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-11616:
--
Status: Patch Available  (was: In Progress)

> Remove workaround for Curator's ChildReaper requiring Guava 15+
> ---
>
> Key: HADOOP-11616
> URL: https://issues.apache.org/jira/browse/HADOOP-11616
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
> {{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
> should remove these classes and update any usages to point to Curator itself 
> once we update Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2021-05-03 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-11616:
-

Assignee: Viraj Jasani

> Remove workaround for Curator's ChildReaper requiring Guava 15+
> ---
>
> Key: HADOOP-11616
> URL: https://issues.apache.org/jira/browse/HADOOP-11616
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Viraj Jasani
>Priority: Major
>
> HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
> {{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
> should remove these classes and update any usages to point to Curator itself 
> once we update Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2021-05-03 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-11616:
--
Target Version/s: 3.3.1, 3.4.0  (was: 3.4.0)

> Remove workaround for Curator's ChildReaper requiring Guava 15+
> ---
>
> Key: HADOOP-11616
> URL: https://issues.apache.org/jira/browse/HADOOP-11616
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Viraj Jasani
>Priority: Major
>
> HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
> {{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
> should remove these classes and update any usages to point to Curator itself 
> once we update Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17676:
--
Target Version/s: 3.3.1, 3.4.0  (was: 3.4.0)

> Restrict imports from org.apache.curator.shaded
> ---
>
> Key: HADOOP-17676
> URL: https://issues.apache.org/jira/browse/HADOOP-17676
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, 
> as discussed on PR#2945. We can use an enforcer rule to restrict imports such 
> that, if they are ever used, the mvn build fails.
> Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]
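
As a sketch of how such a ban can be wired into the build (the rule name, version, 
and binding phase below are illustrative, not necessarily what the PR uses), the 
third-party restrict-imports rule for maven-enforcer-plugin supports banning a 
package pattern:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>de.skuzzle.enforcer</groupId>
      <artifactId>restrict-imports-enforcer-rule</artifactId>
      <version>2.0.0</version> <!-- illustrative version -->
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>banned-illegal-imports</id>
      <phase>process-sources</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <restrictImports implementation="de.skuzzle.enforcer.restrictimports.rule.RestrictImports">
            <reason>Use Apache Curator APIs directly, never its shaded copies</reason>
            <bannedImports>
              <bannedImport>org.apache.curator.shaded.**</bannedImport>
            </bannedImports>
          </restrictImports>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}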



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17676:
--
Fix Version/s: (was: 3.4.0)

> Restrict imports from org.apache.curator.shaded
> ---
>
> Key: HADOOP-17676
> URL: https://issues.apache.org/jira/browse/HADOOP-17676
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, 
> as discussed on PR#2945. We can use an enforcer rule to restrict imports such 
> that, if they are ever used, the mvn build fails.
> Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17676:
--
Target Version/s: 3.4.0

> Restrict imports from org.apache.curator.shaded
> ---
>
> Key: HADOOP-17676
> URL: https://issues.apache.org/jira/browse/HADOOP-17676
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, 
> as discussed on PR#2945. We can use an enforcer rule to restrict imports such 
> that, if they are ever used, the mvn build fails.
> Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17676:
--
Fix Version/s: 3.4.0
   Status: Patch Available  (was: In Progress)

> Restrict imports from org.apache.curator.shaded
> ---
>
> Key: HADOOP-17676
> URL: https://issues.apache.org/jira/browse/HADOOP-17676
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, 
> as discussed on PR#2945. We can use an enforcer rule to restrict imports such 
> that, if they are ever used, the mvn build fails.
> Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-05-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17676 started by Viraj Jasani.
-
> Restrict imports from org.apache.curator.shaded
> ---
>
> Key: HADOOP-17676
> URL: https://issues.apache.org/jira/browse/HADOOP-17676
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, 
> as discussed on PR#2945. We can use an enforcer rule to restrict imports such 
> that, if they are ever used, the mvn build fails.
> Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17676) Restrict imports from org.apache.curator.shaded

2021-04-29 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17676:
-

 Summary: Restrict imports from org.apache.curator.shaded
 Key: HADOOP-17676
 URL: https://issues.apache.org/jira/browse/HADOOP-17676
 Project: Hadoop Common
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Once HADOOP-17653 gets in, we should ban "org.apache.curator.shaded" imports, as 
discussed on PR#2945. We can use an enforcer rule to restrict imports such that, 
if they are ever used, the mvn build fails.

Thanks for the suggestion [~weichiu] [~aajisaka] [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17668) Use profile hbase-2.0 by default and update hbase version

2021-04-27 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1718#comment-1718
 ] 

Viraj Jasani commented on HADOOP-17668:
---

Yeah, makes sense.

> Use profile hbase-2.0 by default and update hbase version
> -
>
> Key: HADOOP-17668
> URL: https://issues.apache.org/jira/browse/HADOOP-17668
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> We currently use the hbase1 profile by default (for those who aren't aware, the 
> YARN timeline service uses HBase as the underlying storage). There isn't much 
> development activity in HBase 1.x, and 2.x is production ready. I think it's 
> time to switch to hbase 2 by default.
>  
> The HBase 2 version being used is 2.0.2. We should use a more recent version 
> (e.g. 2.2/2.3/2.4), and update the hbase 1 version as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17668) Use profile hbase-2.0 by default and update hbase version

2021-04-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17668:
-

Assignee: Viraj Jasani

> Use profile hbase-2.0 by default and update hbase version
> -
>
> Key: HADOOP-17668
> URL: https://issues.apache.org/jira/browse/HADOOP-17668
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> We currently use the hbase1 profile by default (for those who aren't aware, the 
> YARN timeline service uses HBase as the underlying storage). There isn't much 
> development activity in HBase 1.x, and 2.x is production ready. I think it's 
> time to switch to hbase 2 by default.
>  
> The HBase 2 version being used is 2.0.2. We should use a more recent version 
> (e.g. 2.2/2.3/2.4), and update the hbase 1 version as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17668) Use profile hbase-2.0 by default and update hbase version

2021-04-26 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332664#comment-17332664
 ] 

Viraj Jasani commented on HADOOP-17668:
---

Is this intended for 3.4.0 only, or are we planning to include it in the 3.3 
release line as well?

> Use profile hbase-2.0 by default and update hbase version
> -
>
> Key: HADOOP-17668
> URL: https://issues.apache.org/jira/browse/HADOOP-17668
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> We currently use the hbase1 profile by default (for those who aren't aware, the 
> YARN timeline service uses HBase as the underlying storage). There isn't much 
> development activity in HBase 1.x, and 2.x is production ready. I think it's 
> time to switch to hbase 2 by default.
>  
> The HBase 2 version being used is 2.0.2. We should use a more recent version 
> (e.g. 2.2/2.3/2.4), and update the hbase 1 version as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17409) Remove S3Guard - no longer needed

2021-04-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331423#comment-17331423
 ] 

Viraj Jasani commented on HADOOP-17409:
---

Thank you [~ste...@apache.org], this is really informative.

> Remove S3Guard - no longer needed
> -
>
> Key: HADOOP-17409
> URL: https://issues.apache.org/jira/browse/HADOOP-17409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> With Consistent S3, S3Guard is superfluous. 
> Stop developing it and wean people off it as soon as they can.
> Then we can worry about what to do in the code. It has gradually insinuated 
> its way through the layers, especially things like multi-object delete 
> handling (see HADOOP-17244). Things would be a lot simpler without it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-23 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17650:
--
Release Note: In order to resolve build issues with Maven 3.8.1, we have to 
bump SolrJ to the latest version, 8.8.2, as of now. Hence, we recommend 
upgrading the Solr cluster accordingly before upgrading the entire Hadoop 
cluster to 3.4.0.  (was: In order to resolve build issues with Maven 3.8.1, we 
have to bump Solr to the latest version, 8.8.2, as of now. Accordingly, we 
recommend upgrading the Solr server before upgrading the entire Hadoop cluster 
to 3.4.0.)

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)
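
For anyone who just needs the build to run before a proper fix lands: per the 
Maven 3.8.1 release notes linked above, the block comes from the new 
maven-default-http-blocker mirror in Maven's default settings. A hedged 
settings.xml sketch that re-allows only the offending repository (it reopens a 
plain-HTTP channel, so treat it strictly as a local workaround):

{code:xml}
<!-- ~/.m2/settings.xml -->
<settings>
  <mirrors>
    <mirror>
      <id>unblock-maven-restlet</id>
      <!-- matches the repository id from the error message -->
      <mirrorOf>maven-restlet</mirrorOf>
      <url>http://maven.restlet.org</url>
      <blocked>false</blocked>
    </mirror>
  </mirrors>
</settings>
{code}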



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17409) Remove S3Guard - no longer needed

2021-04-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17330029#comment-17330029
 ] 

Viraj Jasani edited comment on HADOOP-17409 at 4/23/21, 6:52 AM:
-

[~ste...@apache.org] Is your proposal to remove S3Guard applicable from 3.4.0 
onwards, or is it for the next major release (4.0.0) onwards?

Thanks


was (Author: vjasani):
[~ste...@apache.org] Is your intention to remove S3Guard applicable from 3.4.0 
onwards, or is it for the next major release (4.0.0) onwards?

> Remove S3Guard - no longer needed
> -
>
> Key: HADOOP-17409
> URL: https://issues.apache.org/jira/browse/HADOOP-17409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> With Consistent S3, S3Guard is superfluous. 
> Stop developing it and wean people off it as soon as they can.
> Then we can worry about what to do in the code. It has gradually insinuated 
> its way through the layers, especially things like multi-object delete 
> handling (see HADOOP-17244). Things would be a lot simpler without it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17409) Remove S3Guard - no longer needed

2021-04-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17330029#comment-17330029
 ] 

Viraj Jasani commented on HADOOP-17409:
---

[~ste...@apache.org] Is your intention to remove S3Guard applicable from 3.4.0 
onwards, or is it for the next major release (4.0.0) onwards?

> Remove S3Guard - no longer needed
> -
>
> Key: HADOOP-17409
> URL: https://issues.apache.org/jira/browse/HADOOP-17409
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> With Consistent S3, S3Guard is superfluous. 
> Stop developing it and wean people off it as soon as they can.
> Then we can worry about what to do in the code. It has gradually insinuated 
> its way through the layers, especially things like multi-object delete 
> handling (see HADOOP-17244). Things would be a lot simpler without it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17650:
--
Release Note: In order to resolve build issues with Maven 3.8.1, we have to 
bump Solr to the latest version, 8.8.2, as of now. Accordingly, we recommend 
upgrading the Solr server before upgrading the entire Hadoop cluster to 3.4.0.

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17639) Restrict the "-skipTrash" param for accidentally deletes data

2021-04-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327713#comment-17327713
 ] 

Viraj Jasani commented on HADOOP-17639:
---

{quote}We already have snapshots, protected directories and trash. Aside from 
banning deletes completely, there are only so many things we can do to prevent 
user error losing data.
{quote}
Thanks [~sodonnell], I see what you meant here. Agreed that, in general, there 
are multiple approaches available in HDFS to prevent complete data loss.

Thanks

> Restrict the "-skipTrash" param for accidentally deletes data
> -
>
> Key: HADOOP-17639
> URL: https://issues.apache.org/jira/browse/HADOOP-17639
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Priority: Major
>
> Suppose the user tries to delete data from the CLI with the "-skipTrash" 
> param, but by mistake deletes a couple of directories that they actually 
> want to retain; then there is no way to retrieve the deleted data.
> It would be good to have a confirmation message like "Skip the trash for the 
> hdfs:///dri1/file.txt files? (Y or N)", or we could completely disable the 
> "-skipTrash" param.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17639) Restrict the "-skipTrash" param for accidentally deletes data

2021-04-22 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327248#comment-17327248
 ] 

Viraj Jasani commented on HADOOP-17639:
---

Sounds good; yes, I think it makes sense to continue supporting the commands 
without breaking their arguments.

For this particular case, I now feel that *fs.trash.interval* on the server side 
should not be strictly required, and it should be entirely up to the client how 
to use the remove command. How about we introduce "*-useTrash*" as a new command 
line argument? That way, the rm command will always have a choice to use either 
"*-useTrash*" or "*-skipTrash*" without having to worry about whether the server 
has set a trash interval > 0. Any thoughts?
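
For reference, the server-side knob being discussed is fs.trash.interval in 
core-site.xml; any value above 0 enables the trash (the number below is just an 
example):

{code:xml}
<!-- core-site.xml: keep deleted files in trash for 24 hours (1440 minutes) -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
{code}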

> Restrict the "-skipTrash" param for accidentally deletes data
> -
>
> Key: HADOOP-17639
> URL: https://issues.apache.org/jira/browse/HADOOP-17639
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Priority: Major
>
> Suppose the user tries to delete data from the CLI with the "-skipTrash" 
> param, but by mistake deletes a couple of directories that they actually 
> want to retain; then there is no way to retrieve the deleted data.
> It would be good to have a confirmation message like "Skip the trash for the 
> hdfs:///dri1/file.txt files? (Y or N)", or we could completely disable the 
> "-skipTrash" param.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17650:
--
Target Version/s: 3.4.0

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-20 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17325913#comment-17325913
 ] 

Viraj Jasani commented on HADOOP-17650:
---

We use Solr in a limited set of modules. If we bump it up, we can get rid of 
this issue with Maven 3.8.1. The Solr upgrade requires minor code changes as 
well. I have created a PR and confirmed that the build works well with both 
Maven 3.6.3 and 3.8.1. Also, tests are passing after one small change.

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17650:
--
Labels:   (was: pull-request-available)

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-20 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17325780#comment-17325780
 ] 

Viraj Jasani edited comment on HADOOP-17650 at 4/20/21, 1:42 PM:
-

hadoop-yarn-applications-catalog-webapp uses the solr-core and solr-test-framework 
dependencies, both with test scope. -I think we should exclude org.restlet.jee 
as we don't seem to require it for tests.-

Edit: We do require org.restlet.jee as a test dependency; let me see if we can 
fetch it directly.


was (Author: vjasani):
hadoop-yarn-applications-catalog-webapp uses the solr-core and solr-test-framework 
dependencies, both with test scope. I think we should exclude org.restlet.jee as 
we don't seem to require it for tests. And with the exclusion, the build works 
fine with Maven 3.8.1.

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-20 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17325780#comment-17325780
 ] 

Viraj Jasani commented on HADOOP-17650:
---

hadoop-yarn-applications-catalog-webapp uses the solr-core and solr-test-framework 
dependencies, both with test scope. I think we should exclude org.restlet.jee as 
we don't seem to require it for tests. And with the exclusion, the build works 
fine with Maven 3.8.1.

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17650) Fails to build using Maven 3.8.1

2021-04-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17650:
-

Assignee: Viraj Jasani

> Fails to build using Maven 3.8.1
> 
>
> Key: HADOOP-17650
> URL: https://issues.apache.org/jira/browse/HADOOP-17650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> The latest Maven (3.8.1) errors out when building Hadoop (tried trunk)
> {noformat}
> [ERROR] Failed to execute goal on project 
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for 
> project 
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.4.0-SNAPSHOT: 
> Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 -> 
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for 
> org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact 
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker 
> (http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet 
> (http://maven.restlet.org, default, releases+snapshots), apache.snapshots 
> (http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> {noformat}
> According to 
> [https://maven.apache.org/docs/3.8.1/release-notes.html#how-to-fix-when-i-get-a-http-repository-blocked]
>  we need to update our Maven repo.
>  
> Maven 3.6.3 is good.
>  
> (For what it's worth, I used my company's mirror to bypass this error. Not sure 
> what is a good fix for Hadoop itself)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17649) Update wildfly openssl to 2.1.3.Final

2021-04-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17649:
--
Summary: Update wildfly openssl to 2.1.3.Final  (was: Update wildfly 
openssl to 1.1.3.Final)

> Update wildfly openssl to 2.1.3.Final
> -
>
> Key: HADOOP-17649
> URL: https://issues.apache.org/jira/browse/HADOOP-17649
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-25644
> A memory leak flaw was found in WildFly OpenSSL in versions prior to 
> 1.1.3.Final, where it removes an HTTP session. It may allow the attacker to 
> cause OOM leading to a denial of service. The highest threat from this 
> vulnerability is to system availability.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-19 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324792#comment-17324792
 ] 

Viraj Jasani commented on HADOOP-17648:
---

I see, it looks like this is not for Hadoop but for Curator:
{code:xml}
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <scope>compile</scope>
</dependency>
{code}

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17648:
-

Assignee: (was: Viraj Jasani)

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17648:
-

Assignee: Viraj Jasani

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17649) Update wildfly openssl to 1.1.3.Final

2021-04-18 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17649:
-

Assignee: Viraj Jasani

> Update wildfly openssl to 1.1.3.Final
> -
>
> Key: HADOOP-17649
> URL: https://issues.apache.org/jira/browse/HADOOP-17649
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-25644
> A memory leak flaw was found in WildFly OpenSSL in versions prior to 
> 1.1.3.Final, where it removes an HTTP session. It may allow the attacker to 
> cause OOM leading to a denial of service. The highest threat from this 
> vulnerability is to system availability.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324694#comment-17324694
 ] 

Viraj Jasani commented on HADOOP-17648:
---

Sounds good, let me repurpose this Jira and take it up then?

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324691#comment-17324691
 ] 

Viraj Jasani commented on HADOOP-17648:
---

Looks like only *hadoop-auth* and 
*hadoop-yarn-server-timeline-service-hbase-tests* use the direct guava 
dependency; the rest already consume guava through the shaded thirdparty 
artifact anyway.
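
For reference, a sketch of what the shaded path looks like for consumers 
(coordinates as published by hadoop-thirdparty; version management omitted, and 
this is illustrative rather than a quote from any particular pom):
{code:xml}
<!-- Depend on the relocated guava from hadoop-thirdparty instead of
     com.google.guava:guava, so upstream guava bumps do not leak into our API. -->
<dependency>
  <groupId>org.apache.hadoop.thirdparty</groupId>
  <artifactId>hadoop-shaded-guava</artifactId>
</dependency>
{code}
Code on this path imports the relocated package, e.g. 
org.apache.hadoop.thirdparty.com.google.common.collect.Lists.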

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324689#comment-17324689
 ] 

Viraj Jasani edited comment on HADOOP-17648 at 4/19/21, 5:00 AM:
-

We have two guava dependencies: the hadoop-thirdparty shaded guava and the 
direct guava dependency. Curious to know if the remaining direct guava 
dependency is planned to be replaced with the thirdparty guava.

Thanks


was (Author: vjasani):
We have 2 guava dependencies: hadoop-thirdparty-guava and direct guava 
dependency. Curious to know if direct guava dependency is planned to be 
replaced with thirdparty guava.

Thanks

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17648) Update guava to 30.1.1-jre

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324689#comment-17324689
 ] 

Viraj Jasani commented on HADOOP-17648:
---

We have 2 guava dependencies: hadoop-thirdparty-guava and direct guava 
dependency. Curious to know if direct guava dependency is planned to be 
replaced with thirdparty guava.

Thanks

> Update guava to 30.1.1-jre
> --
>
> Key: HADOOP-17648
> URL: https://issues.apache.org/jira/browse/HADOOP-17648
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The latest guava version is 30.1.1-jre. Let's bump the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17639) Restrict the "-skipTrash" param for accidentally deletes data

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324514#comment-17324514
 ] 

Viraj Jasani commented on HADOOP-17639:
---

{quote}I think changes to the CLI is going to be trouble as there's some 
guarantees of consistency over time, with command line, behaviour and of output.
{quote}
I agree on this [~ste...@apache.org]. However, if we think about this case, we 
already have *fs.trash.interval* to decide whether a delete operation should 
keep files in trash at least until the trash interval expires. Now we have 
another decision-making param, *-skipTrash*, at the command line. If skipTrash 
is really required, should we provide such an option in the UI as well (a new 
button alongside delete, named *skip-trash* maybe), provided we get trash 
support through the web UI with HDFS-15982? But perhaps skipTrash is redundant, 
overriding a decision already configured by fs.trash.interval? Maybe yes, maybe 
not. If yes, we can gradually nullify its effect internally and remove the 
support in the next major release (with big enough release notes for users). If 
not, we should have it in the UI as well. Thoughts?
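
For concreteness, the confirmation flow proposed in the description could look 
roughly like this (command, path, and output are illustrative, not implemented 
behaviour):
{noformat}
$ hdfs dfs -rm -skipTrash /dri1/file.txt
Skip the trash for hdfs:///dri1/file.txt? (Y or N): N
Delete aborted; the file is left in place.
{noformat}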

> Restrict the "-skipTrash" param for accidentally deletes data
> -
>
> Key: HADOOP-17639
> URL: https://issues.apache.org/jira/browse/HADOOP-17639
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Priority: Major
>
> Suppose a user tries to delete data from the CLI with the "-skipTrash" 
> param but by mistake deletes a couple of directories that they actually 
> want to retain; then there is no way to retrieve the deleted data.
> It would be good to have a confirmation message like: "Skip the trash for 
> the hdfs:///dri1/file.txt files? (Y or N)", or we can completely disable 
> the "-skipTrash" param.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17524) Remove EventCounter and Log counters from JVM Metrics

2021-04-18 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324447#comment-17324447
 ] 

Viraj Jasani commented on HADOOP-17524:
---

Thanks [~ayushtkn]. I believe the strong dependency on log4j1 (making the 
inevitable migration to slf4j difficult) was the main concern, and hence this 
Jira was marked "Incompatible change"; perhaps we can also make the release 
note mention this?

I agree with the Public-Stable API guidelines, but since we have a strong 
reason, do you think we should be good with one fat notice during the 3.4.0 
release (in addition to the release note)? Please let me know what you think 
the appropriate steps should be.

[~zhangduo] [~aajisaka] any opinions you would like to provide?

Thanks

> Remove EventCounter and Log counters from JVM Metrics
> -
>
> Key: HADOOP-17524
> URL: https://issues.apache.org/jira/browse/HADOOP-17524
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> EventCount is using Log4J 1.x API. We need to remove it to drop Log4J 1.x.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17612) Bump default Zookeeper version to 3.7.0

2021-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324226#comment-17324226
 ] 

Viraj Jasani commented on HADOOP-17612:
---

{quote}Do you mind to drop an email on dev@curator?
{quote}
Sounds good. Will do it, and in the meantime, perhaps we can repurpose 
CURATOR-588 to bump ZK to 3.6.3?

> Bump default Zookeeper version to 3.7.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17612) Bump default Zookeeper version to 3.7.0

2021-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324197#comment-17324197
 ] 

Viraj Jasani commented on HADOOP-17612:
---

Zookeeper 3.6.3 is out. Announcement 
[here|https://mail-archives.apache.org/mod_mbox/zookeeper-dev/202104.mbox/%3CCA%2BhtFBmbxPb8dRd6xnW3dG2nwRk006%3D6JcrUVV%3D8BVcz9u-Y9g%40mail.gmail.com%3E]

I could not find any Curator 5.1.1 planning on the curator-dev mailing list. 
[~eolivelli] are you aware of any planning?

Thanks

> Bump default Zookeeper version to 3.7.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can bump Zookeeper version to 3.7.0 for trunk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17324193#comment-17324193
 ] 

Viraj Jasani commented on HADOOP-17642:
---

[~aajisaka] one small change is pending. Could you please take a look?

Thanks

> Could not instantiate class org.apache.hadoop.log.metrics.EventCounter
> --
>
> Key: HADOOP-17642
> URL: https://issues.apache.org/jira/browse/HADOOP-17642
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After the removal of the EventCounter class, we are not able to bring up the HDFS cluster.
> {code:java}
> log4j:ERROR Could not instantiate class 
> [org.apache.hadoop.log.metrics.EventCounter].
> java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
>   at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
>   at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:229)
>   at org.apache.hadoop.hdfs.tools.GetConf.<clinit>(GetConf.java:131)
> log4j:ERROR Could not instantiate appender named "EventCounter".
> {code}
> We need to clean up log4j.properties to avoid instantiating appender 
> EventCounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17642:
--
Status: Patch Available  (was: In Progress)

> Could not instantiate class org.apache.hadoop.log.metrics.EventCounter
> --
>
> Key: HADOOP-17642
> URL: https://issues.apache.org/jira/browse/HADOOP-17642
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After the removal of the EventCounter class, we are not able to bring up the HDFS cluster.
> {code:java}
> log4j:ERROR Could not instantiate class 
> [org.apache.hadoop.log.metrics.EventCounter].
> java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
>   at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
>   at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:229)
>   at org.apache.hadoop.hdfs.tools.GetConf.<clinit>(GetConf.java:131)
> log4j:ERROR Could not instantiate appender named "EventCounter".
> {code}
> We need to clean up log4j.properties to avoid instantiating appender 
> EventCounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-16 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17642 started by Viraj Jasani.
-
> Could not instantiate class org.apache.hadoop.log.metrics.EventCounter
> --
>
> Key: HADOOP-17642
> URL: https://issues.apache.org/jira/browse/HADOOP-17642
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After the removal of the EventCounter class, we are not able to bring up the HDFS cluster.
> {code:java}
> log4j:ERROR Could not instantiate class 
> [org.apache.hadoop.log.metrics.EventCounter].
> java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>   at 
> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>   at 
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>   at 
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>   at 
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
>   at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
>   at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:229)
>   at org.apache.hadoop.hdfs.tools.GetConf.<clinit>(GetConf.java:131)
> log4j:ERROR Could not instantiate appender named "EventCounter".
> {code}
> We need to clean up log4j.properties to avoid instantiating appender 
> EventCounter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17642) Could not instantiate class org.apache.hadoop.log.metrics.EventCounter

2021-04-16 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-17642:
-

 Summary: Could not instantiate class 
org.apache.hadoop.log.metrics.EventCounter
 Key: HADOOP-17642
 URL: https://issues.apache.org/jira/browse/HADOOP-17642
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


After the removal of the EventCounter class, we are not able to bring up the HDFS cluster.
{code:java}
log4j:ERROR Could not instantiate class 
[org.apache.hadoop.log.metrics.EventCounter].
java.lang.ClassNotFoundException: org.apache.hadoop.log.metrics.EventCounter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at 
org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at 
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at 
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at 
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at 
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at 
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:388)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:229)
at org.apache.hadoop.hdfs.tools.GetConf.<clinit>(GetConf.java:131)
log4j:ERROR Could not instantiate appender named "EventCounter".
{code}
We need to clean up log4j.properties to avoid instantiating appender 
EventCounter.
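
For illustration, the cleanup amounts to removing the appender definition and 
dropping its name from the loggers that reference it; a sketch, assuming the 
shipped log4j.properties wires EventCounter in roughly like this (the exact 
property lines may differ):
{noformat}
# Remove the definition of the appender backed by the deleted class:
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

# ...and drop ", EventCounter" from any logger that lists it, e.g.:
log4j.rootLogger=${hadoop.root.logger}, EventCounter
{noformat}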



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17523) Replace LogCapturer with mock

2021-04-15 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17322351#comment-17322351
 ] 

Viraj Jasani commented on HADOOP-17523:
---

[~zhangduo] [~aajisaka] Mockito also seems to require using the log4j API 
directly. I guess this one can also wait for the completion of HADOOP-16206.

Any other thoughts?
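
To make that concrete, a minimal sketch of the Mockito approach (class and 
logger names are illustrative); note that mocking the appender still programs 
against the log4j 1.x Appender API, which is exactly the concern:
{code:java}
import org.apache.log4j.Appender;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;
import org.mockito.ArgumentCaptor;

import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class LogCaptureSketch {
  public static void main(String[] args) {
    // Attach a mocked appender so every event routed to this logger is recorded.
    Appender mockAppender = mock(Appender.class);
    Logger logger = Logger.getLogger("capture-demo");
    logger.addAppender(mockAppender);

    logger.info("hello");

    // Pull the captured events back out of the mock instead of scraping a stream.
    ArgumentCaptor<LoggingEvent> captor = ArgumentCaptor.forClass(LoggingEvent.class);
    verify(mockAppender, atLeastOnce()).doAppend(captor.capture());
    System.out.println(captor.getValue().getRenderedMessage());
  }
}
{code}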

> Replace LogCapturer with mock
> -
>
> Key: HADOOP-17523
> URL: https://issues.apache.org/jira/browse/HADOOP-17523
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> LogCapturer uses Log4J1 API, and it should be removed. Mockito can be used 
> instead for capturing logs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17525) Support log4j2 API in GenericTestUtils.setLogLevel

2021-04-15 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17322331#comment-17322331
 ] 

Viraj Jasani commented on HADOOP-17525:
---

You are right [~zhangduo], it would be better to pick this (and any similar 
sub-tasks) up after actually migrating to log4j2. I was thinking of starting 
off by creating a separate profile for log4j2, but if we do so, keeping the 
source on log4j1 and the tests on log4j2 might not work as anticipated (even 
activating the profile just for tests would not be convenient). It would merely 
prepare us in advance of the actual migration, so now that I think of it, this 
is better worked on later.
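
Once the migration lands, the log4j2 side of setLogLevel could be as small as 
the sketch below; Configurator is the log4j2-core API, while the wrapper name 
and shape here are illustrative, not the actual GenericTestUtils change:
{code:java}
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public final class SetLogLevelSketch {

  private SetLogLevelSketch() {
  }

  // Reconfigures the named logger's level at runtime through log4j2-core.
  public static void setLogLevel(String loggerName, Level level) {
    Configurator.setLevel(loggerName, level);
  }

  public static void main(String[] args) {
    setLogLevel("org.apache.hadoop", Level.DEBUG);
  }
}
{code}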

> Support log4j2 API in GenericTestUtils.setLogLevel
> --
>
> Key: HADOOP-17525
> URL: https://issues.apache.org/jira/browse/HADOOP-17525
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>
> GenericTestUtils.setLogLevel depends on Log4J 1.x API, should be updated to 
> use Log4J 2.x API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


