[jira] [Commented] (HADOOP-17225) Update jackson-mapper-asl-1.9.13 to atlassian version to mitigate: CVE-2019-10172

2021-09-06 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17410658#comment-17410658
 ] 

Ranith Sardar commented on HADOOP-17225:


Are we considering this patch to update jackson-mapper-asl-1.9.13 (CVE-2019-10172)?

> Update jackson-mapper-asl-1.9.13 to atlassian version to mitigate: 
> CVE-2019-10172
> -
>
> Key: HADOOP-17225
> URL: https://issues.apache.org/jira/browse/HADOOP-17225
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-17225-001.patch
>
>
> Currently Jersey depends on Jackson, and upgrading Jersey from 1.x to 2.x 
> looks complicated (see HADOOP-15984 and HADOOP-16485).
> Update jackson-mapper-asl-1.9.13 to atlassian version to mitigate: 
> CVE-2019-10172.
>  
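The description above implies pinning the dependency to Atlassian's patched fork. A minimal sketch of such a Maven override follows; the exact fork version string is an assumption (check the attached patch for the one actually used), and the fork is published in Atlassian's own Maven repository, not Maven Central:

```xml
<!-- Hypothetical override pinning the CVE-2019-10172-patched Atlassian fork
     of jackson-mapper-asl. The version below is an assumption. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>1.9.13-atlassian-4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```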



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-11-13 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973907#comment-16973907
 ] 

Ranith Sardar commented on HADOOP-16585:


Thanks [~surendrasingh]

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16585.001.patch, HADOOP-16585.002.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}
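The fix direction the issue suggests can be sketched as below. This is a hypothetical stand-in, not the attached patch: `fileName` mimics the `hostname + id` naming from the snippet above, and the added thread id is an assumed way to make names unique per writer thread.

```java
// Hypothetical sketch: make the generated file name unique per thread so
// concurrent writers handed the same generator id do not collide.
import java.util.HashSet;
import java.util.Set;

public class UniquePerThreadName {

    // Stand-in for LoadGenerator's hostname + id naming, extended with the
    // writing thread's id so two threads sharing id produce distinct names.
    static String fileName(String hostname, int id, long threadId) {
        return hostname + id + "_" + threadId;
    }

    public static void main(String[] args) {
        Set<String> names = new HashSet<>();
        // Two threads that were both handed id=0 on the same host:
        names.add(fileName("host1", 0, 11L));
        names.add(fileName("host1", 0, 12L));
        // Distinct names, so one thread's delete/create cannot race the other's.
        System.out.println(names.size());
    }
}
```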






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-11-13 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Attachment: HADOOP-16585.002.patch

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch, HADOOP-16585.002.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-11-12 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Attachment: (was: HADOOP-16585.002.patch)

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Commented] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-11-12 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16973066#comment-16973066
 ] 

Ranith Sardar commented on HADOOP-16585:


Thanks [~surendrasingh] for reviewing the issue.

I have updated the patch.

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch, HADOOP-16585.002.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-11-12 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Attachment: HADOOP-16585.002.patch

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch, HADOOP-16585.002.patch
>
>
> {code:java}
> // id would be same for multiple file, so it may occur file not found 
> exception
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Comment Edited] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933790#comment-16933790
 ] 

Ranith Sardar edited comment on HADOOP-16585 at 9/19/19 9:38 PM:
-

Hi [~jlowe], any suggestions regarding the patch?


was (Author: ranith):
[~jlowe], any suggestions regarding the patch?

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Commented] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933790#comment-16933790
 ] 

Ranith Sardar commented on HADOOP-16585:


[~jlowe], any suggestions regarding the patch?

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // id would be same for multiple file, so it may occur file not found 
> exception
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Commented] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-19 Thread Ranith Sardar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933165#comment-16933165
 ] 

Ranith Sardar commented on HADOOP-16585:


Attached the initial patch.

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Status: Patch Available  (was: Open)

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-19 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Attachment: HADOOP-16585.001.patch

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16585.001.patch
>
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Updated] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-18 Thread Ranith Sardar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16585:
---
Description: 
{code:java}
// The id can be the same for multiple files, which may cause a FileNotFoundException
private void write() throws IOException {
  String dirName = dirs.get(r.nextInt(dirs.size()));
  Path file = new Path(dirName, hostname+id);
  
  fc.delete(file, true);
  ..
}
{code}

>  [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating 
> file LoadGenerator#write 
> ---
>
> Key: HADOOP-16585
> URL: https://issues.apache.org/jira/browse/HADOOP-16585
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> {code:java}
> // The id can be the same for multiple files, which may cause a
> // FileNotFoundException
> private void write() throws IOException {
>   String dirName = dirs.get(r.nextInt(dirs.size()));
>   Path file = new Path(dirName, hostname+id);
>   
>   fc.delete(file, true);
>   ..
> }
> {code}






[jira] [Created] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-18 Thread Ranith Sardar (Jira)
Ranith Sardar created HADOOP-16585:
--

 Summary:  [Tool:NNloadGeneratorMR] Multiple threads are using same 
id for creating file LoadGenerator#write 
 Key: HADOOP-16585
 URL: https://issues.apache.org/jira/browse/HADOOP-16585
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar









[jira] [Commented] (HADOOP-15685) Build fails (hadoop pipes) on newer Linux envs (like Fedora 28)

2019-05-29 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850945#comment-16850945
 ] 

Ranith Sardar commented on HADOOP-15685:


With this patch, libtirpc must be installed; otherwise the build fails with the 
following error:

[exec] /usr/bin/ld: cannot find -ltirpc
[exec] collect2: error: ld returned 1 exit status
[exec] make[2]: *** [examples/pipes-sort] Error 1
[exec] make[1]: *** [CMakeFiles/pipes-sort.dir/all] Error 2
[exec] make: *** [all] Error 2
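For a Fedora-style system like the one in the issue title, the missing library can typically be installed as below; the package names are an assumption and may differ per distribution:

```
# Assumed package names (Fedora/dnf): provides the TIRPC headers and
# library that newer glibc no longer bundles, so -ltirpc resolves.
sudo dnf install -y libtirpc libtirpc-devel
```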

> Build fails (hadoop pipes) on newer Linux envs (like Fedora 28)
> ---
>
> Key: HADOOP-15685
> URL: https://issues.apache.org/jira/browse/HADOOP-15685
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, tools/pipes
>Affects Versions: 3.2.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: 15685-3.2.0.txt, 15685-example.txt
>
>
> The rpc/types.h and similar includes are no longer part of glibc.
> Instead tirpc needs to be used now on those systems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16228) Throwing OOM exception for ListStatus (v2) when using S3A

2019-04-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16809614#comment-16809614
 ] 

Ranith Sardar commented on HADOOP-16228:


Current trace:

Thread Stack
at java.lang.OutOfMemoryError.()V (OutOfMemoryError.java:48) at 
java.util.Arrays.copyOf([BI)[B (Arrays.java:3236) at 
java.lang.StringCoding.safeTrim([BILjava/nio/charset/Charset;Z)[B 
(StringCoding.java:79) at 
java.lang.StringCoding.encode(Ljava/nio/charset/Charset;[CII)[B 
(StringCoding.java:365) at 
java.lang.String.getBytes(Ljava/nio/charset/Charset;)[B (String.java:941) at 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(Lorg/xml/sax/helpers/DefaultHandler;Ljava/io/InputStream;)Ljava/io/InputStream;
 (XmlResponsesSaxParser.java:205) at 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(Ljava/io/InputStream;Z)Lcom/amazonaws/services/s3/model/transform/XmlResponsesSaxParser$ListObjectsV2Handler;
 (XmlResponsesSaxParser.java:334) at 
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Ljava/io/InputStream;)Lcom/amazonaws/services/s3/model/ListObjectsV2Result;
 (Unmarshallers.java:88) at 
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Ljava/lang/Object;)Ljava/lang/Object;
 (Unmarshallers.java:77) at 
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(Lcom/amazonaws/http/HttpResponse;)Lcom/amazonaws/AmazonWebServiceResponse;
 (S3XmlResponseHandler.java:62) at 
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(Lcom/amazonaws/http/HttpResponse;)Ljava/lang/Object;
 (S3XmlResponseHandler.java:31) at 
com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(Lcom/amazonaws/http/HttpResponse;)Ljava/lang/Object;
 (AwsResponseHandlerAdapter.java:70) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(Lcom/amazonaws/http/HttpResponse;)Ljava/lang/Object;
 (AmazonHttpClient.java:1554) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(Lcom/amazonaws/http/AmazonHttpClient$RequestExecutor$ExecOneRequestParams;)Lcom/amazonaws/Response;
 (AmazonHttpClient.java:1272) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper()Lcom/amazonaws/Response;
 (AmazonHttpClient.java:1056) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute()Lcom/amazonaws/Response;
 (AmazonHttpClient.java:743) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer()Lcom/amazonaws/Response;
 (AmazonHttpClient.java:717) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute()Lcom/amazonaws/Response;
 (AmazonHttpClient.java:699) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(Lcom/amazonaws/http/AmazonHttpClient$RequestExecutor;)Lcom/amazonaws/Response;
 (AmazonHttpClient.java:667) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(Lcom/amazonaws/http/HttpResponseHandler;)Lcom/amazonaws/Response;
 (AmazonHttpClient.java:649) at 
com.amazonaws.http.AmazonHttpClient.execute(Lcom/amazonaws/Request;Lcom/amazonaws/http/HttpResponseHandler;Lcom/amazonaws/http/HttpResponseHandler;Lcom/amazonaws/http/ExecutionContext;)Lcom/amazonaws/Response;
 (AmazonHttpClient.java:513) at 
com.amazonaws.services.s3.AmazonS3Client.invoke(Lcom/amazonaws/Request;Lcom/amazonaws/http/HttpResponseHandler;Ljava/lang/String;Ljava/lang/String;Z)Ljava/lang/Object;
 (AmazonS3Client.java:4325) at 
com.amazonaws.services.s3.AmazonS3Client.invoke(Lcom/amazonaws/Request;Lcom/amazonaws/http/HttpResponseHandler;Ljava/lang/String;Ljava/lang/String;)Ljava/lang/Object;
 (AmazonS3Client.java:4272) at 
com.amazonaws.services.s3.AmazonS3Client.invoke(Lcom/amazonaws/Request;Lcom/amazonaws/transform/Unmarshaller;Ljava/lang/String;Ljava/lang/String;)Ljava/lang/Object;
 (AmazonS3Client.java:4266) at 
com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(Lcom/amazonaws/services/s3/model/ListObjectsV2Request;)Lcom/amazonaws/services/s3/model/ListObjectsV2Result;
 (AmazonS3Client.java:876) at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$continueListObjects$6(Lorg/apache/hadoop/fs/s3a/S3ListResult;Lorg/apache/hadoop/fs/s3a/S3ListRequest;)Lorg/apache/hadoop/fs/s3a/S3ListResult;
 (S3AFileSystem.java:1303) at 
org.apache.hadoop.fs.s3a.S3AFileSystem$$Lambda$18.execute()Ljava/lang/Object; 
(Unknown Source) at 
org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Ljava/lang/String;ZLorg/apache/hadoop/fs/s3a/Invoker$Retried;Lorg/apache/hadoop/fs/s3a/Invoker$Operation;)Ljava/lang/Object;
 (Invoker.java:317) at 
org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Ljava/lang/String;ZLorg/apache/hadoop/fs/s3a/Invoker$Operation;)Ljava/lang/Object;
 (Invoker.java:280) at 

[jira] [Commented] (HADOOP-16228) Throwing OOM exception for ListStatus (v2) when using S3A

2019-04-03 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16808738#comment-16808738
 ] 

Ranith Sardar commented on HADOOP-16228:


I have mentioned the trace in the description.

> Throwing OOM exception for ListStatus (v2) when using S3A 
> --
>
> Key: HADOOP-16228
> URL: https://issues.apache.org/jira/browse/HADOOP-16228
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Ranith Sardar
>Priority: Major
>
> {code:java}
>  @InterfaceStability.Unstable
>   public static final String LIST_VERSION = "fs.s3a.list.version";
>   @InterfaceStability.Unstable
>   public static final int DEFAULT_LIST_VERSION = 2;
> {code}
> If the bucket contains more than 1k files, the v2 listing throws an OOM error.
> Exception:
> Caused by: com.amazonaws.SdkClientException: Failed to sanitize XML document 
> destined for handler class 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
> at 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:219)
> at 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:334)
> at 
> com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
> at 
> com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
> at 
> com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
> at 
> com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
> at 
> com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1554)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4266)
> at 
> com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:876)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$continueListObjects$6(S3AFileSystem.java:1303)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:280)
> at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.continueListObjects(S3AFileSystem.java:1292)
> at 
> org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.next(Listing.java:600)
> ... 15 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
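Since the constants quoted above show the listing API version is configurable, one hedged workaround sketch is to fall back to the v1 listing API. This is an assumption about mitigation, not a fix from the issue; `java.util.Properties` stands in for Hadoop's `Configuration` so the snippet is self-contained:

```java
// Hedged workaround sketch: force the v1 object-listing API via the
// fs.s3a.list.version property quoted in the issue (default is 2).
import java.util.Properties;

public class ListV1Fallback {
    // Property name taken from the LIST_VERSION constant in the issue.
    static final String LIST_VERSION = "fs.s3a.list.version";

    // With a real Hadoop setup this would be conf.set(...) on a
    // org.apache.hadoop.conf.Configuration before creating the FileSystem.
    static Properties withListV1(Properties conf) {
        conf.setProperty(LIST_VERSION, "1");
        return conf;
    }

    public static void main(String[] args) {
        Properties conf = withListV1(new Properties());
        System.out.println(conf.getProperty(LIST_VERSION));
    }
}
```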






[jira] [Updated] (HADOOP-16228) Throwing OOM exception for ListStatus (v2) when using S3A

2019-04-02 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16228:
---
Description: 
{code:java}
 @InterfaceStability.Unstable
  public static final String LIST_VERSION = "fs.s3a.list.version";
  @InterfaceStability.Unstable
  public static final int DEFAULT_LIST_VERSION = 2;
{code}
If the bucket contains more than 1k files, the v2 listing throws an OOM error.

Exception:
Caused by: com.amazonaws.SdkClientException: Failed to sanitize XML document 
destined for handler class 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
at 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:219)
at 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:334)
at 
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
at 
com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
at 
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at 
com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at 
com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1554)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1272)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4266)
at 
com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:876)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$continueListObjects$6(S3AFileSystem.java:1303)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:280)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.continueListObjects(S3AFileSystem.java:1292)
at org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.next(Listing.java:600)
... 15 more
Caused by: java.lang.OutOfMemoryError: Java heap space

  was:
{code:java}
 @InterfaceStability.Unstable
  public static final String LIST_VERSION = "fs.s3a.list.version";
  @InterfaceStability.Unstable
  public static final int DEFAULT_LIST_VERSION = 2;
{code}

If the bucket contains more than 1k files, the v2 listing throws an OOM error.


> Throwing OOM exception for ListStatus (v2) when using S3A 
> --
>
> Key: HADOOP-16228
> URL: https://issues.apache.org/jira/browse/HADOOP-16228
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Ranith Sardar
>Priority: Major
>
> {code:java}
>  @InterfaceStability.Unstable
>   public static final String LIST_VERSION = "fs.s3a.list.version";
>   @InterfaceStability.Unstable
>   public static final int DEFAULT_LIST_VERSION = 2;
> {code}
> If the bucket contains more than 1k files, the v2 listing throws an OOM error.
> Exception:
> Caused by: com.amazonaws.SdkClientException: Failed to sanitize XML document 
> destined for handler class 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListObjectsV2Handler
> at 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.sanitizeXmlDocument(XmlResponsesSaxParser.java:219)
> at 
> com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListObjectsV2Response(XmlResponsesSaxParser.java:334)
> at 
> com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:88)
> at 
> com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsV2Unmarshaller.unmarshall(Unmarshallers.java:77)
> at 
> com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
> at 
> 

[jira] [Updated] (HADOOP-16228) Throwing OOM exception for ListStatus when using s3A

2019-04-01 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16228:
---
Description: 
{code:java}
 @InterfaceStability.Unstable
  public static final String LIST_VERSION = "fs.s3a.list.version";
  @InterfaceStability.Unstable
  public static final int DEFAULT_LIST_VERSION = 2;
{code}

If the bucket contains more than 1k files, the v2 listing throws an OOM error.

  was:

{code:java}
  @InterfaceStability.Unstable
  public static final int DEFAULT_LIST_VERSION = 2;
{code}

If the bucket contains more than 1k files, the v2 listing throws an OOM error.


> Throwing OOM exception for ListStatus when using s3A 
> -
>
> Key: HADOOP-16228
> URL: https://issues.apache.org/jira/browse/HADOOP-16228
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Ranith Sardar
>Priority: Major
>
> {code:java}
>  @InterfaceStability.Unstable
>   public static final String LIST_VERSION = "fs.s3a.list.version";
>   @InterfaceStability.Unstable
>   public static final int DEFAULT_LIST_VERSION = 2;
> {code}
> If the bucket contains more than 1k files, it will throw an OOM error for the
> V2 listing version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
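
The HADOOP-16228 report above is about the V2 listing path running out of memory when a bucket holds more than 1k objects. As a rough illustration of the token-based pagination that ListObjectsV2 is designed for — processing one bounded page at a time instead of materializing the whole listing — here is a minimal sketch. `FakeS3Client` and `Page` are illustrative stand-ins, not the AWS SDK or S3A classes:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the paged listing pattern: each call returns at most 'pageSize'
// keys plus a continuation token, so the caller never holds the full listing.
public class PagedListing {

  static class Page {
    final List<String> keys;
    final Integer nextToken; // null when the listing is exhausted
    Page(List<String> keys, Integer nextToken) {
      this.keys = keys;
      this.nextToken = nextToken;
    }
  }

  // Stand-in for a real S3 client; serves keys in bounded pages.
  static class FakeS3Client {
    private final int totalKeys;
    FakeS3Client(int totalKeys) { this.totalKeys = totalKeys; }

    Page listObjectsV2(Integer token, int pageSize) {
      int start = (token == null) ? 0 : token;
      int end = Math.min(start + pageSize, totalKeys);
      List<String> keys = new ArrayList<>();
      for (int i = start; i < end; i++) {
        keys.add("key-" + i);
      }
      return new Page(keys, end < totalKeys ? end : null);
    }
  }

  /** Walks every page; memory use per step is bounded by pageSize. */
  public static int countAllKeys(FakeS3Client client, int pageSize) {
    int count = 0;
    Integer token = null;
    do {
      Page page = client.listObjectsV2(token, pageSize);
      count += page.keys.size();  // process the page, then drop it
      token = page.nextToken;
    } while (token != null);
    return count;
  }

  public static void main(String[] args) {
    // 2500 keys at 1000 per page -> three requests, never >1000 keys in memory
    System.out.println(countAllKeys(new FakeS3Client(2500), 1000)); // prints 2500
  }
}
```

If listing instead accumulates every page into one in-memory result before returning (as a naive ListStatus would), heap use grows with bucket size, which matches the OOM symptom reported above.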



[jira] [Updated] (HADOOP-16228) Throwing OOM exception for ListStatus when using s3A

2019-04-01 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16228:
---
Description: 

{code:java}
  @InterfaceStability.Unstable
  public static final int DEFAULT_LIST_VERSION = 2;
{code}

If the bucket contains more than 1k files, it will throw an OOM error for the
V2 listing version.

> Throwing OOM exception for ListStatus when using s3A 
> -
>
> Key: HADOOP-16228
> URL: https://issues.apache.org/jira/browse/HADOOP-16228
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Ranith Sardar
>Priority: Major
>
> {code:java}
>   @InterfaceStability.Unstable
>   public static final int DEFAULT_LIST_VERSION = 2;
> {code}
> If the bucket contains more than 1k files, it will throw an OOM error for the
> V2 listing version.






[jira] [Updated] (HADOOP-16228) Throwing OOM exception for ListStatus when using s3A

2019-04-01 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16228:
---
Issue Type: Bug  (was: Improvement)

> Throwing OOM exception for ListStatus when using s3A 
> -
>
> Key: HADOOP-16228
> URL: https://issues.apache.org/jira/browse/HADOOP-16228
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Ranith Sardar
>Priority: Major
>







[jira] [Created] (HADOOP-16228) Throwing OOM exception for ListStatus when using s3A

2019-04-01 Thread Ranith Sardar (JIRA)
Ranith Sardar created HADOOP-16228:
--

 Summary: Throwing OOM exception for ListStatus when using s3A 
 Key: HADOOP-16228
 URL: https://issues.apache.org/jira/browse/HADOOP-16228
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Ranith Sardar









[jira] [Comment Edited] (HADOOP-16145) Add Quota Preservation to DistCp

2019-03-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785620#comment-16785620
 ] 

Ranith Sardar edited comment on HADOOP-16145 at 3/6/19 1:19 PM:


[~ste...@apache.org] Yes, it will work only for HDFS; for other filesystems it
will throw an exception.
Have attached the patch.


was (Author: ranith):
[~ste...@apache.org] Yes, it will work for only HDFS, for other filesystem it 
will throw exception.
Have attached the patch.

> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks DistCp support for handling quotas with the preserve
> options.
> A new command-line argument will be added to support that.



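
The comment above notes that quota preservation works only for HDFS and that other filesystems will see an exception. A minimal sketch of that guard follows — `QuotaPreserver` and `FsLike` are illustrative names under that assumption, not the code in HADOOP-16145.000.patch:

```java
// Quota is an HDFS concept, so a DistCp-style "preserve quota" step must
// reject other filesystems. FsLike abstracts the tiny part of a filesystem
// this sketch needs.
public class QuotaPreserver {

  interface FsLike {
    String scheme();
    void setQuota(String path, long nsQuota, long ssQuota);
  }

  public static void preserveQuota(FsLike fs, String path,
                                   long nsQuota, long ssQuota) {
    if (!"hdfs".equals(fs.scheme())) {
      // Matches the behaviour described in the comment: non-HDFS -> exception
      throw new UnsupportedOperationException(
          "Quota preservation is only supported for HDFS, got: " + fs.scheme());
    }
    fs.setQuota(path, nsQuota, ssQuota);
  }

  public static void main(String[] args) {
    FsLike hdfs = new FsLike() {
      public String scheme() { return "hdfs"; }
      public void setQuota(String p, long n, long s) {
        System.out.println("quota set on " + p);
      }
    };
    preserveQuota(hdfs, "/dst", 1000, 1L << 30); // prints: quota set on /dst

    FsLike s3 = new FsLike() {
      public String scheme() { return "s3a"; }
      public void setQuota(String p, long n, long s) { }
    };
    try {
      preserveQuota(s3, "/dst", 1000, 1L << 30);
    } catch (UnsupportedOperationException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

The real patch would work against Hadoop's FileSystem/DistributedFileSystem types; the scheme check here just illustrates the HDFS-only constraint.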



[jira] [Comment Edited] (HADOOP-16145) Add Quota Preservation to DistCp

2019-03-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785620#comment-16785620
 ] 

Ranith Sardar edited comment on HADOOP-16145 at 3/6/19 1:19 PM:


[~ste...@apache.org] Yes, it will work only for HDFS; for other filesystems it
will throw an exception.
Have attached the patch.


was (Author: ranith):
[~ste...@apache.org] Yes, it will work for only HDFS, for other filesystem it 
will throw exception.


> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks DistCp support for handling quotas with the preserve
> options.
> A new command-line argument will be added to support that.






[jira] [Updated] (HADOOP-16145) Add Quota Preservation to DistCp

2019-03-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16145:
---
Status: Patch Available  (was: Open)

> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks DistCp support for handling quotas with the preserve
> options.
> A new command-line argument will be added to support that.






[jira] [Commented] (HADOOP-16145) Add Quota Preservation to DistCp

2019-03-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16785620#comment-16785620
 ] 

Ranith Sardar commented on HADOOP-16145:


[~ste...@apache.org] Yes, it will work only for HDFS; for other filesystems it
will throw an exception.


> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks DistCp support for handling quotas with the preserve
> options.
> A new command-line argument will be added to support that.






[jira] [Updated] (HADOOP-16145) Add Quota Preservation to DistCp

2019-03-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16145:
---
Attachment: HADOOP-16145.000.patch

> Add Quota Preservation to DistCp
> 
>
> Key: HADOOP-16145
> URL: https://issues.apache.org/jira/browse/HADOOP-16145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16145.000.patch
>
>
> This JIRA tracks DistCp support for handling quotas with the preserve
> options.
> A new command-line argument will be added to support that.






[jira] [Created] (HADOOP-16145) Add Quota Preservation to DistCp

2019-02-24 Thread Ranith Sardar (JIRA)
Ranith Sardar created HADOOP-16145:
--

 Summary: Add Quota Preservation to DistCp
 Key: HADOOP-16145
 URL: https://issues.apache.org/jira/browse/HADOOP-16145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Reporter: Ranith Sardar
Assignee: Ranith Sardar


This JIRA tracks DistCp support for handling quotas with the preserve options.
A new command-line argument will be added to support that.






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-10 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16764666#comment-16764666
 ] 

Ranith Sardar commented on HADOOP-16032:


Thanks [~ste...@apache.org] for committing :)

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch, 
> HADOOP-16032.002.patch, HADOOP-16032.003.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 



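
The HADOOP-16032 title and description above say the destination's ACL should be cleared before the source ACL is applied, because merging on top of an existing default ACL leaves stale entries. A minimal sketch of the two flavours — ACL entries modeled as plain strings, with `AclSync` as an illustrative name; the real code uses Hadoop's AclEntry and FileSystem#removeAcl/#setAcl:

```java
import java.util.ArrayList;
import java.util.List;

// Clear-then-apply vs merge-on-top for ACL preservation. With merge-on-top,
// a destination default ACL survives even when the source has none, which is
// the bug described in the issue.
public class AclSync {

  /** Buggy flavour: merge source entries on top of what dest already has. */
  public static List<String> mergeOnly(List<String> destAcl, List<String> srcAcl) {
    List<String> result = new ArrayList<>(destAcl);
    for (String e : srcAcl) {
      if (!result.contains(e)) {
        result.add(e);            // stale default entries survive
      }
    }
    return result;
  }

  /** Fixed flavour: drop dest ACL entirely, then apply the source ACL. */
  public static List<String> clearThenApply(List<String> destAcl, List<String> srcAcl) {
    return new ArrayList<>(srcAcl); // dest state no longer leaks through
  }

  public static void main(String[] args) {
    List<String> dest = new ArrayList<>();
    dest.add("default:user:bob:rwx");   // dest dir has a default ACL
    List<String> src = new ArrayList<>();
    src.add("user:alice:r-x");          // source dir has only an access ACL

    System.out.println(mergeOnly(dest, src));      // default entry survives
    System.out.println(clearThenApply(dest, src)); // only the source ACL remains
  }
}
```

Against a real FileSystem, clear-then-apply corresponds to calling removeAcl on the destination before setAcl with the source entries.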



[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-07 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16763097#comment-16763097
 ] 

Ranith Sardar commented on HADOOP-16032:


[~ste...@apache.org], thank you for reviewing the patch;
ranithsardar...@gmail.com is the email id I use.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch, 
> HADOOP-16032.002.patch, HADOOP-16032.003.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-06 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16762426#comment-16762426
 ] 

Ranith Sardar commented on HADOOP-16032:


Uploaded patch with said changes. Please review the patch.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch, 
> HADOOP-16032.002.patch, HADOOP-16032.003.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Attachment: HADOOP-16032.003.patch

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch, 
> HADOOP-16032.002.patch, HADOOP-16032.003.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-04 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Attachment: HADOOP-16032.002.patch

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch, 
> HADOOP-16032.002.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-04 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16759690#comment-16759690
 ] 

Ranith Sardar commented on HADOOP-16032:


[~surendrasingh], updated the patch. Please check it once.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-02-03 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Attachment: HADOOP-16032.001.patch

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch, HADOOP-16032.001.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-31 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757235#comment-16757235
 ] 

Ranith Sardar commented on HADOOP-16032:


[~ste...@apache.org], [~surendrasingh] could you please check it once.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-28 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754373#comment-16754373
 ] 

Ranith Sardar commented on HADOOP-16032:


Uploaded the patch with UT (showing the scenario). Please review the patch.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-28 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Status: Patch Available  (was: Open)

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-28 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Attachment: HADOOP-16032.000.patch

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HADOOP-16032.000.patch
>
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Commented] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16735799#comment-16735799
 ] 

Ranith Sardar commented on HADOOP-16032:


Hi [~ste...@apache.org], updated the affected version.

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) Distcp It should clear sub directory ACL before applying new ACL on it.

2019-01-07 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Affects Version/s: 3.1.1

> Distcp It should clear sub directory ACL before applying new ACL on it.
> ---
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) It should clear sub directory ACL before applying new ACL on it.

2019-01-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Description: Distcp preserve can't update the ACL info properly when source 
dir has access  ACL and dest dir has default ACL. It will only modify the basic 
ACL part.   (was: Distcp preserve can't update the ACL info properly when 
source dir has access  ACL and dest dir has default acls)

> It should clear sub directory ACL before applying new ACL on it.
> 
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default ACL. It will only modify the basic ACL part. 






[jira] [Updated] (HADOOP-16032) It should clear sub directory ACL before applying new ACL on it.

2019-01-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Description: Distcp preserve can't update the ACL info properly when source 
dir has access  ACL and dest dir has default acls  (was: Distcp preserve can 
not update the ACL info when source dir just has access acls and dest dir has 
default acls)

> It should clear sub directory ACL before applying new ACL on it.
> 
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can't update the ACL info properly when source dir has access 
>  ACL and dest dir has default acls






[jira] [Updated] (HADOOP-16032) It should clear sub directory ACL before applying new ACL on it.

2019-01-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-16032:
---
Description: Distcp preserve can not update the ACL info when source dir 
just has access acls and dest dir has default acls

> It should clear sub directory ACL before applying new ACL on it.
> 
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Distcp preserve can not update the ACL info when source dir just has access 
> acls and dest dir has default acls






[jira] [Assigned] (HADOOP-16032) It should clear sub directory ACL before applying new ACL on it.

2019-01-06 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HADOOP-16032:
--

Assignee: Ranith Sardar

> It should clear sub directory ACL before applying new ACL on it.
> 
>
> Key: HADOOP-16032
> URL: https://issues.apache.org/jira/browse/HADOOP-16032
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>







[jira] [Created] (HADOOP-16032) It should clear sub directory ACL before applying new ACL on it.

2019-01-06 Thread Ranith Sardar (JIRA)
Ranith Sardar created HADOOP-16032:
--

 Summary: It should clear sub directory ACL before applying new ACL 
on it.
 Key: HADOOP-16032
 URL: https://issues.apache.org/jira/browse/HADOOP-16032
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ranith Sardar









[jira] [Assigned] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HADOOP-15459:
--

Assignee: (was: Ranith Sardar)

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Priority: Critical
>
> Assume subset of kms-acls xml file.
> {noformat}
>   
> default.key.acl.DECRYPT_EEK
> 
> 
>   default ACL for DECRYPT_EEK operations for all key acls that are not
>   explicitly defined.
> 
>   
> 
>   
> key.acl.key1.DECRYPT_EEK
> user1
>   
>   
> default.key.acl.READ
> *
> 
>   default ACL for READ operations for all key acls that are not
>   explicitly defined.
> 
>   
> 
>   whitelist.key.acl.READ
>   hdfs
>   
> Whitelist ACL for READ operations for all keys.
>   
> 
> {noformat}
> For key {{key1}}, we restricted {{DECRYPT_EEK}} operation to only {{user1}}.
>  For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
>  But it doesn't allow anyone to access {{key1}} for any other READ operations.
>  As a result of this, if the admin restricted access for one opType then 
> (s)he has to define access for all other opTypes also, which is not desirable.



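
The HADOOP-15459 report above describes a resolution-order problem: once any key-specific ACL exists for a key, the default ACLs stop applying to that key for every operation type, not just the restricted one. A minimal sketch of those buggy semantics — `KmsAclCheck` is an illustrative name and the lookup is a simplification of the real KMSACLs code, though the property names mirror kms-acls.xml:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the KMS ACL lookup described in the report:
// whitelist first, then key-specific ACLs; defaults are consulted only
// when the key has NO key-specific ACL at all (the problematic part).
public class KmsAclCheck {

  /** true if 'user' may perform 'op' on 'key' under the reported semantics. */
  public static boolean isAllowed(Map<String, String> acls,
                                  String key, String op, String user) {
    String whitelist = acls.get("whitelist.key.acl." + op);
    if (whitelist != null && (whitelist.equals("*") || whitelist.contains(user))) {
      return true;
    }
    boolean keyHasAnyAcl =
        acls.keySet().stream().anyMatch(k -> k.startsWith("key.acl." + key + "."));
    if (keyHasAnyAcl) {
      // Reported behaviour: defaults are ignored for this key for ALL ops.
      String entry = acls.get("key.acl." + key + "." + op);
      return entry != null && (entry.equals("*") || entry.contains(user));
    }
    String def = acls.get("default.key.acl." + op);
    return def != null && (def.equals("*") || def.contains(user));
  }

  public static void main(String[] args) {
    Map<String, String> acls = new HashMap<>();
    acls.put("key.acl.key1.DECRYPT_EEK", "user1");
    acls.put("default.key.acl.READ", "*");
    acls.put("whitelist.key.acl.READ", "hdfs");

    // key2 falls back to the default READ ACL...
    System.out.println(isAllowed(acls, "key2", "READ", "someone")); // true
    // ...but key1 does not, even though only DECRYPT_EEK was restricted.
    System.out.println(isAllowed(acls, "key1", "READ", "someone")); // false
    System.out.println(isAllowed(acls, "key1", "READ", "hdfs"));    // true
  }
}
```

The undesirable consequence in the report falls out directly: restricting one opType forces the admin to spell out ACLs for every other opType on that key.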



[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477477#comment-16477477
 ] 

Ranith Sardar commented on HADOOP-15459:


[~shahrs87] No problem.

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>
> Assume subset of kms-acls xml file.
> {noformat}
>   
> default.key.acl.DECRYPT_EEK
> 
> 
>   default ACL for DECRYPT_EEK operations for all key acls that are not
>   explicitly defined.
> 
>   
> 
>   
> key.acl.key1.DECRYPT_EEK
> user1
>   
>   
> default.key.acl.READ
> *
> 
>   default ACL for READ operations for all key acls that are not
>   explicitly defined.
> 
>   
> 
>   whitelist.key.acl.READ
>   hdfs
>   
> Whitelist ACL for READ operations for all keys.
>   
> 
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to {{user1}} only.
> For the other {{READ}} operations (like getMetadata), I still want everyone to
> be able to access all keys by default via {{default.key.acl.READ}},
> but KMS doesn't allow anyone to access {{key1}} for any other READ operation.
> As a result, if the admin restricts access for one opType, then (s)he has to
> define access for all other opTypes as well, which is not desirable.
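A hypothetical sketch of the workaround this behavior forces (following the kms-acls.xml property conventions quoted above; this is an illustration of the reported behavior, not a committed fix): once any per-key ACL such as {{key.acl.key1.DECRYPT_EEK}} is defined, the defaults stop applying to that key, so every other opType in use has to be re-declared per key:

```xml
<!-- Hypothetical workaround sketch: because key.acl.key1.DECRYPT_EEK is
     defined, default.key.acl.READ no longer applies to key1, so READ
     (and every other opType needed) must be re-declared explicitly. -->
<property>
  <name>key.acl.key1.READ</name>
  <value>*</value>
</property>
```

Multiplied across keys and opTypes, this is exactly the administrative burden the issue calls undesirable.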



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-14 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HADOOP-15459:
--

Assignee: Ranith Sardar

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>






[jira] [Updated] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-15180:
---
Attachment: HADOOP-15180-branch-2-002.patch

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180-branch-2-002.patch, 
> HADOOP-15180_branch-2.diff
>
>
> Whenever the balancer starts, the startup script redirects its sysout to the
> .out log file. The balancer then writes its output to that file while, at the
> same time, the script tries to append the ulimit output to it.
> {noformat}
>  # capture the ulimit output
> if [ "true" = "$starting_secure_dn" ]; then
>   echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
>   # capture the ulimit info for the appropriate user
>   su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
> elif [ "true" = "$starting_privileged_nfs" ]; then
>   echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" >> $log
>   su --shell=/bin/bash $HADOOP_PRIVILEGED_NFS_USER -c 'ulimit -a' >> $log 2>&1
> else
>   echo "ulimit -a for user $USER" >> $log
>   ulimit -a >> $log 2>&1
> fi
> sleep 3;
> if ! ps -p $! > /dev/null ; then
>   exit 1
> fi
> {noformat}
> But the problem is that the first few lines of the ulimit output are
> overwritten by the balancer's log output.
> {noformat}
> vm1:/opt/install/hadoop/namenode/sbin # cat /opt/HA/AIH283/install/hadoop/namenode/logs/hadoop-root-balancer-vm1.out
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  Bytes Being Moved
> The cluster is balanced. Exiting...
> Jan 9, 2018 6:26:26 PM0  0 B 0 B0 B
> Jan 9, 2018 6:26:26 PM   Balancing took 3.446 seconds
> x memory size (kbytes, -m) 13428300
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 127350
> virtual memory  (kbytes, -v) 15992160
> file locks  (-x) unlimited
> {noformat}
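A minimal reproduction sketch of the race described above (an illustration, not the actual Hadoop script): the daemon's stdout is opened with `>` (truncate, file offset 0), the wrapper appends `ulimit -a` with `>>`, and the daemon's first write later lands at offset 0, clobbering the opening ulimit lines. That is why the .out file above starts with balancer output and only the tail of the ulimit report survives.

```shell
#!/bin/sh
# Sketch: a "daemon" holds an fd opened with '>' (offset 0) while the
# wrapper appends ulimit output; the daemon's delayed write then lands
# on top of the first ulimit lines.
log=/tmp/overwrite-demo.out
(
  exec > "$log"              # daemon-style redirection: truncate, offset 0
  sleep 2                    # the daemon works for a while before printing
  echo "balancer output line"
) &
sleep 1                      # let the background job open/truncate the log
ulimit -a >> "$log" 2>&1     # the wrapper script appends the ulimit info
wait
head -n 2 "$log"             # line 1 is the daemon's write over the ulimit text
```

Writing the ulimit info before launching the daemon, or opening the daemon's log in append mode, avoids the collision.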






[jira] [Comment Edited] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437398#comment-16437398
 ] 

Ranith Sardar edited comment on HADOOP-15180 at 4/13/18 2:57 PM:
-

Thanks [~brahmareddy] for your review and Thanks [~vinayrpet] for assigning me.
 I have updated the patch name. Please review it once.


was (Author: ranith):
Thanks [~brahmareddy] for your review and Thanks [~vinayrpet] for assigning me.
I have updated the patch name. Please review it once.

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180_branch-2.diff
>
>






[jira] [Commented] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437398#comment-16437398
 ] 

Ranith Sardar commented on HADOOP-15180:


Thanks [~brahmareddy] for your review and Thanks [~vinayrpet] for assigning me.
I have updated the patch name. Please review it once.

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180_branch-2.diff
>
>






[jira] [Updated] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-02-21 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-15180:
---
Status: Patch Available  (was: Open)

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180_branch-2.diff
>
>






[jira] [Updated] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-02-21 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-15180:
---
Attachment: HADOOP-15180_branch-2.diff

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180_branch-2.diff
>
>


