[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

Added the property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I should add this configuration; core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the parameter in TestLoadBalancingKMSClientProvider.
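
For reference, a sketch of what the documented property entry could look like 
(the 60-second default and the description wording are assumptions for 
illustration, not quotes from the patch):

{code:xml}
<property>
  <name>hadoop.security.kms.client.timeout</name>
  <value>60</value>
  <description>
    Timeout, in seconds, that the KMS client applies to both the connection
    and the read on requests to the KMS server. (Illustrative wording.)
  </description>
</property>
{code}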


> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.
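
A minimal sketch of how such a single timeout config is typically wired up, 
assuming it is read via Configuration and applied to an HttpURLConnection (the 
class and method names here are illustrative, not the actual KMSClientProvider 
code):

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: one timeout config driving both the connect timeout and
// the read timeout, matching the behavior described above.
public class KmsTimeoutExample {
  static HttpURLConnection openWithTimeout(Configuration conf, URL url)
      throws IOException {
    // The 60-second default is an assumption for illustration.
    int timeoutSec = conf.getInt("hadoop.security.kms.client.timeout", 60);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setConnectTimeout(timeoutSec * 1000); // seconds -> milliseconds
    conn.setReadTimeout(timeoutSec * 1000);
    return conn;
  }
}
{code}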



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-1.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206025#comment-16206025
 ] 

Gabor Bota edited comment on HADOOP-14880 at 10/16/17 2:57 PM:
---

Added the property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I should add this configuration; core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the config in TestLoadBalancingKMSClientProvider.



was (Author: gabor.bota):
Added the property to kms-default.xml instead of core-site.xml. Please suggest the 
source path or documentation where I should add this configuration; core-site.xml 
is empty by default.

There is no separate KMSClientProvider unit test file, so I've added the test 
for the parameter in TestLoadBalancingKMSClientProvider.


> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-2.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

It seems like there is already a test for this in 
org.apache.hadoop.crypto.key.kms.server.TestKMS (testKMSTimeout).

I've moved the config name and default value to CommonConfigurationKeysPublic 
and added the description to core-default.xml.

I hope the patch won't fail on some unrelated unit test.
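
For illustration, the moved constants could look roughly like this in 
CommonConfigurationKeysPublic (the constant names and default value are 
assumptions, not quotes from the patch):

{code:java}
// Hedged sketch of the constants described above; the names in the
// actual patch may differ.
public static final String KMS_CLIENT_TIMEOUT_SECONDS =
    "hadoop.security.kms.client.timeout";
public static final int KMS_CLIENT_TIMEOUT_DEFAULT = 60;
{code}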

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

The patch failed with a Docker error. Retrying later today.

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

Retrying; I hope Docker won't fail this time.

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14880:
---

Assignee: (was: Gabor Bota)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: newbie
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota commented on HADOOP-12162:
-

It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for those operations. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.
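
For context, a sketch of the pattern inside org.apache.hadoop.fs.FileSystem 
(not a verbatim quote; the other ACL methods follow the same shape):

{code:java}
// The base class rejects ACL operations outright, so only subclasses
// that override these methods actually support ACLs.
public void setAcl(Path path, List<AclEntry> aclSpec) throws IOException {
  throw new UnsupportedOperationException(getClass().getSimpleName()
      + " doesn't support setAcl");
}
{code}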

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota edited comment on HADOOP-12162 at 10/12/17 3:50 PM:
---

It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for all methods. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.


was (Author: gabor.bota):
It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for those. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota edited comment on HADOOP-12162 at 10/12/17 3:50 PM:
---

It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for those. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.


was (Author: gabor.bota):
It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for those operations. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota edited comment on HADOOP-12162 at 10/12/17 3:50 PM:
---

It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException.
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.


was (Author: gabor.bota):
It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException for all methods. 
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota edited comment on HADOOP-12162 at 10/12/17 3:51 PM:
---

It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException.
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

Is there some rule for documenting these as "not supported"?


was (Author: gabor.bota):
It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException.
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12162) Add ACL APIs to the FileSystem specification

2017-10-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202132#comment-16202132
 ] 

Gabor Bota edited comment on HADOOP-12162 at 10/12/17 3:51 PM:
---

It seems like in org.apache.hadoop.fs.FileSystem all the methods listed above 
throw UnsupportedOperationException.
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

Is there some rule for documenting these as "not supported"?


was (Author: gabor.bota):
It seems like in org.apache.hadoop.fs.FileSystem all the methods throw 
UnsupportedOperationException.
The base FileSystem implementation does not support these operations, which may 
be the reason they are omitted from the documentation.

Is there some rule for documenting these as "not supported"?

> Add ACL APIs to the FileSystem specification
> 
>
> Key: HADOOP-12162
> URL: https://issues.apache.org/jira/browse/HADOOP-12162
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Arpit Agarwal
>  Labels: newbie
>
> The following ACL APIs should be added to the [FileSystem 
> specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
> # modifyAclEntries
> # removeAclEntries
> # removeDefaultAcl
> # removeAcl
> # setAcl
> # getAclStatus 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-16 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14880:
---

Assignee: Gabor Bota

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14880:
---

Assignee: Gabor Bota

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

Fixed checkstyle issues.

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-3.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-3.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: (was: HADOOP-14880-3.patch)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-2.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: (was: HADOOP-14880-2.patch)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-4.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

Submitting the corrected patch; the config description was changed to Wei-Chiu 
Chuang's version. 
Thank you for helping!

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Open  (was: Patch Available)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Status: Patch Available  (was: Open)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: HADOOP-14880-4.patch

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14880:

Attachment: (was: HADOOP-14880-4.patch)

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14880) [KMS] Document missing KMS client side configs

2017-10-18 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16209938#comment-16209938
 ] 

Gabor Bota commented on HADOOP-14880:
-

Is there a fix for it, or should I submit the patch again?
How should I proceed?

> [KMS] Document missing KMS client side configs
> ---
>
> Key: HADOOP-14880
> URL: https://issues.apache.org/jira/browse/HADOOP-14880
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14880-1.patch, HADOOP-14880-2.patch, 
> HADOOP-14880-3.patch, HADOOP-14880-4.patch
>
>
> Similar to HADOOP-14783, I did a sweep of KMS client code and found an 
> undocumented KMS client config. It should be added into core-site.xml.
> hadoop.security.kms.client.timeout
> From the code it appears this config affects both client side connection 
> timeout and read timeout.
> In fact it doesn't look like this config is tested, so it would be really 
> nice to add a test for it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15441) After HADOOP-14445, encryption zone operations print unnecessary INFO logs

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Attachment: HADOOP-15441.002.patch

> After HADOOP-14445, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15441) After HADOOP-14445, encryption zone operations print unnecessary INFO logs

2018-05-04 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16464263#comment-16464263
 ] 

Gabor Bota commented on HADOOP-15441:
-

Thanks for the review, [~shahrs87]. I've fixed the patch, and I will use this 
pattern for debug logging.
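
Presumably "this pattern" refers to parameterized (SLF4J-style) logging; a 
minimal sketch under that assumption, where kmsUrl and dtService are 
placeholder names, not the actual fields:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugLogExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugLogExample.class);

  void logCreated(String kmsUrl, String dtService) {
    // With {} placeholders the message is only assembled when DEBUG is
    // actually enabled, so the hot path stays cheap.
    LOG.debug("KMSClientProvider for KMS url: {} delegation token service:"
        + " {} created.", kmsUrl, dtService);
  }
}
{code}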

> After HADOOP-14445, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-13649:

Attachment: HADOOP-13649.002.patch

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing. It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency: being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL) expiration for 
> LocalMetadataStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465636#comment-16465636
 ] 

Gabor Bota commented on HADOOP-13649:
-

Thanks for the review.
 # I've created HADOOP-15423 to merge the two caches into one.
 # .expireAfterWrite() vs .expireAfterAccess()
 ** I think expireAfterAccess could be better in this situation, as long as 
there is no modification of the underlying bucket from another client (no one 
else modifying the S3 bucket, e.g. deleting files, while the cache is in use); 
that way we can say that the cache is up to date (see the sketch after this 
list).
This store is only used for testing right now, so I'd say expireAfterAccess is 
the right choice.
 # Locking
 ** com.google.common.cache.LocalCache has locking for writes (e.g. put, 
replace, remove) but not for simple reads (getIfPresent).
 ** LocalMetadataStore has a lock for reads too: synchronized (this) in get().
 ** Since the merge of the two caches will happen in HADOOP-15423, I think 
that's a topic to discuss further on that issue.
 # Performance testing
 ** I've done some performance testing to compare cache vs. hash performance.
 ** I hope I used sane parameters during the tests.
 ** Based on this, there will be some performance decrease with this 
implementation, but nothing significant with the basic test settings (in my 
tests I modified the settings a little). Move() performance should improve 
when merging the caches; it will be interesting to compare what happens after 
that change.
 ** Test results are in the following gist: 
[https://gist.github.com/bgaborg/2220fd53e553ec971c8edd1adf2493cd] 
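
A minimal sketch of the TTL cache construction under discussion, using Guava's 
CacheBuilder (the key/value types and the TTL value are illustrative 
assumptions, not code from the patch):

{code:java}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.fs.Path;

public class TtlCacheExample {
  static Cache<Path, Object> buildCache(long ttlMillis) {
    return CacheBuilder.newBuilder()
        // Entries expire a fixed interval after their last access,
        // matching the expireAfterAccess choice discussed above.
        .expireAfterAccess(ttlMillis, TimeUnit.MILLISECONDS)
        .build();
  }
}
{code}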

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing. It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency: being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL) expiration for 
> LocalMetadataStore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465636#comment-16465636
 ] 

Gabor Bota edited comment on HADOOP-13649 at 5/7/18 8:43 AM:
-

Thanks for the review.
 # I've created HADOOP-15423 to merge the two caches into one.
 # .expireAfterWrite() vs .expireAfterAccess()
 ** I think that access could be better in this situation, as long as there's no
modification in the underlying bucket from another client - so no one else is 
modifying the s3 bucket like deleting files while the cache is in use - that 
way we can
say that the cache is up to date.
 ** This store is only used for testing right now, so I can say that's right to 
choose expireAfterAccess.
 # Locking
 ** The com.google.common.cache.LocalCache has locking for write (e.g put, 
replace, remove) but not for simple read (getIfPresent).
 ** LocalMetadataStore has a lock for read too: synchronized (this) in get().
 ** As the merge of the two caches will happen in HADOOP-15423, I think that's 
a topic to discuss further on that issue.
 # Performance testing
 ** I've done some performance testing to compare the cache vs hash performance.
 ** I hope that I used sane parameters during the tests.
 ** Based on this, there will be some performance decrease with this 
implementation, but nothing significant with the basic test settings - in my 
tests I've modified the settings a little bit. Move() performance should 
improve when merging the caches - it will be interesting to compare what's 
happening after that change.
 ** Test results are in the following gist: 
[https://gist.github.com/bgaborg/2220fd53e553ec971c8edd1adf2493cd] 


was (Author: gabor.bota):
Thanks for the review.
 # I've created HADOOP-15423 to merge the two caches into one.
 # .expireAfterWrite() vs .expireAfterAccess()
 ** I think that expireAfterAccess could be better in this situation, as long 
as there's no modification in the underlying bucket from another client - so 
no one else is modifying the s3
bucket, like deleting files, while the cache is in use - that way we can
say that the cache is up to date.
This store is only used for testing right now, so I can say it's right to 
choose expireAfterAccess.
 # Locking
 ** The com.google.common.cache.LocalCache has locking for write (e.g. put, 
replace, remove) but not for simple read (getIfPresent); see the sketch after 
this comment.
 ** LocalMetadataStore has a lock for read too: synchronized (this) in get().
 ** As the merge of the two caches will happen in HADOOP-15423, I think that's 
a topic to discuss further on that issue.
 # Performance testing
 ** I've done some performance testing to compare the cache vs hash performance.
 ** I hope that I used sane parameters during the tests.
 ** Based on this, there will be some performance decrease with this 
implementation, but nothing significant with the basic test settings - in my 
tests I've modified the settings a little bit. Move() performance should 
improve when merging the caches - it will be interesting to compare what's 
happening after that change.
 ** Test results are in the following gist: 
[https://gist.github.com/bgaborg/2220fd53e553ec971c8edd1adf2493cd] 
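
(Editorial aside on the locking point above, as a minimal sketch: Guava's 
getIfPresent() is designed to read without taking a monitor in the common 
case, while a synchronized get(), as in LocalMetadataStore, serializes all 
readers. Class and field names here are illustrative only.)
{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.HashMap;
import java.util.Map;

class LockingSketch {
  private final Cache<String, String> cache =
      CacheBuilder.newBuilder().build();
  private final Map<String, String> map = new HashMap<>();

  // Guava read path: a plain lookup takes no lock in the common case.
  String cacheGet(String key) {
    return cache.getIfPresent(key);
  }

  // LocalMetadataStore-style read path: every reader contends on 'this'.
  synchronized String lockedGet(String key) {
    return map.get(key);
  }
}
{code}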

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465636#comment-16465636
 ] 

Gabor Bota edited comment on HADOOP-13649 at 5/7/18 8:53 AM:
-

Thanks for the review.
 # I've created HADOOP-15423 to merge the two caches into one.
 # .expireAfterWrite() vs .expireAfterAccess()
 ** I think that expireAfterAccess could be better in this situation, as long 
as there's no modification in the underlying bucket from another client - so 
no one else is modifying the s3 bucket, like deleting files, while the cache 
is in use - that way we can say that the cache is up to date.
 ** This store is only used for testing right now, so I can say it's right to 
choose expireAfterAccess.
 # Locking
 ** The com.google.common.cache.LocalCache has locking for write (e.g. put, 
replace, remove) but not for simple read (getIfPresent).
 ** LocalMetadataStore has a lock for read too: synchronized (this) in get().
 ** As the merge of the two caches will happen in HADOOP-15423, I think that's 
a topic to discuss further on that issue.
 # Performance testing
 ** I've done some performance testing to compare the cache vs hash performance.
 ** I hope that I used sane parameters during the tests.
 ** Based on this, there will be some performance decrease with this 
implementation, but nothing significant with the basic test settings - in my 
tests I've modified (increased) the settings a little. Move() performance 
should improve when merging the caches - it will be interesting to compare 
what's happening after that change.
 ** Test results are in the following gist: 
[https://gist.github.com/bgaborg/2220fd53e553ec971c8edd1adf2493cd] 


was (Author: gabor.bota):
Thanks for the review.
 # I've created HADOOP-15423 to merge the two caches into one.
 # .expireAfterWrite() vs .expireAfterAccess()
 ** I think that expireAfterAccess could be better in this situation, as long 
as there's no modification in the underlying bucket from another client - so 
no one else is modifying the s3 bucket, like deleting files, while the cache 
is in use - that way we can say that the cache is up to date.
 ** This store is only used for testing right now, so I can say it's right to 
choose expireAfterAccess.
 # Locking
 ** The com.google.common.cache.LocalCache has locking for write (e.g. put, 
replace, remove) but not for simple read (getIfPresent).
 ** LocalMetadataStore has a lock for read too: synchronized (this) in get().
 ** As the merge of the two caches will happen in HADOOP-15423, I think that's 
a topic to discuss further on that issue.
 # Performance testing
 ** I've done some performance testing to compare the cache vs hash performance.
 ** I hope that I used sane parameters during the tests.
 ** Based on this, there will be some performance decrease with this 
implementation, but nothing significant with the basic test settings - in my 
tests I've modified the settings a little bit. Move() performance should 
improve when merging the caches - it will be interesting to compare what's 
happening after that change.
 ** Test results are in the following gist: 
[https://gist.github.com/bgaborg/2220fd53e553ec971c8edd1adf2493cd] 

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465802#comment-16465802
 ] 

Gabor Bota commented on HADOOP-13649:
-

Mvn test and verify were successful on eu-west-1 with 
fs.s3a.s3guard.test.enabled _-Ds3guard._

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465802#comment-16465802
 ] 

Gabor Bota edited comment on HADOOP-13649 at 5/7/18 11:23 AM:
--

Mvn test and verify were successful on eu-west-1 with 
fs.s3a.s3guard.test.enabled (_-Ds3guard)._


was (Author: gabor.bota):
Mvn test and verify were successful on eu-west-1 with 
fs.s3a.s3guard.test.enabled _-Ds3guard._

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15420:

Status: Patch Available  (was: Open)

# I've fixed the issue in LocalMetadataStore#expired.
# Moved testDiffCommand to AbstractS3GuardToolTestBase from 
ITestS3GuardToolLocal.
# Tested on eu-west-1 (test and verify). 
ITestS3GuardToolDynamoDB#testDestroyNoBucket and 
ITestS3GuardToolLocal#testDestroyNoBucket both failed on verify, but that's a 
known issue: HADOOP-14927.

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.
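
(Editorial sketch of the "go around the MetadataStore" pattern mentioned in 
point 2; the setMetadataStore/getMetadataStore calls follow the quoted comment 
and are assumed rather than verified, so treat this as illustrative:)
{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore;

class RawCreateSketch {
  /** Create a file in S3 only, bypassing the MetadataStore. */
  static void createRawFile(S3AFileSystem fs, Path path) throws Exception {
    MetadataStore realMs = fs.getMetadataStore();
    fs.setMetadataStore(new NullMetadataStore()); // writes now skip S3Guard
    try {
      ContractTestUtils.touch(fs, path);          // lands only in the bucket
    } finally {
      fs.setMetadataStore(realMs);                // restore for later tests
    }
  }
}
{code}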



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15420:

Attachment: HADOOP-15420.001.patch

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-07 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466387#comment-16466387
 ] 

Gabor Bota commented on HADOOP-15420:
-

Fixed checkstyle issues

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch, HADOOP-15420.002.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-07 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15420:

Attachment: HADOOP-15420.002.patch

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch, HADOOP-15420.002.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15416) s3guard diff assert failure if source path not found

2018-05-07 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15416:
---

Assignee: Gabor Bota

> s3guard diff assert failure if source path not found
> 
>
> Key: HADOOP-15416
> URL: https://issues.apache.org/jira/browse/HADOOP-15416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: s3a with fault injection turned on
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> Got an illegal argument exception trying to do an s3guard diff in a test run. 
> Underlying cause: a directory in the supplied s3a path didn't exist.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2018-05-08 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466969#comment-16466969
 ] 

Gabor Bota commented on HADOOP-13649:
-

Changes totally make sense, +1 (nonbinding).

Thanks for the correction; the following mistake in my patch is astonishing: 
{code:java}
expireAfterAccess(expiryAfterWrite, TimeUnit.SECONDS)
{code}
I'm still not quite there with the naming.

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-13649.001.patch, HADOOP-13649.002.patch, 
> HADOOP-13649.003.patch
>
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL)  expiration for 
> LocalMetadataStore



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-09 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Summary: After HADOOP-14987, encryption zone operations print unnecessary 
INFO logs  (was: After HADOOP-14445, encryption zone operations print 
unnecessary INFO logs)

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.
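
(Editorial sketch of the kind of demotion proposed above - moving the message 
from INFO to DEBUG; the class and method names are illustrative, not the 
actual patch:)
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KmsClientLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(KmsClientLogSketch.class);

  void logCreated(String kmsUrl, String tokenService) {
    // Was: LOG.info(...) - printed on every encryption zone operation.
    // Demoted so routine client construction no longer clutters CLI output.
    LOG.debug("KMSClientProvider for KMS url: {} delegation token service: {}"
        + " created.", kmsUrl, tokenService);
  }
}
{code}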



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-09 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469312#comment-16469312
 ] 

Gabor Bota commented on HADOOP-15441:
-

Sure [~shahrs87], I've updated both.

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-09 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Description: 
It looks like after HADOOP-14987, any encryption zone operation prints extra 
INFO log messages as follows:
{code:java}
$ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
kms://ht...@hadoop3-1.example.com:16000/kms created.
{code}

It might make sense to make it a DEBUG message instead.

  was:
It looks like after HADOOP-14445, any encryption zone operation prints extra 
INFO log messages as follows:
{code:java}
$ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
kms://ht...@hadoop3-1.example.com:16000/kms created.
{code}

It might make sense to make it a DEBUG message instead.


> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-04-27 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15420:
---

Assignee: Gabor Bota

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15423) Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in LocalMetadataStore

2018-04-27 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15423:

Description: 
Right now the s3guard.LocalMetadataStore uses two HashMaps in the implementation 
- one for files and one for directory listings.
{code:java}
  /** Contains directories and files. */
  private LruHashMap<Path, PathMetadata> fileHash;

  /** Contains directory listings. */
  private LruHashMap<Path, DirListingMetadata> dirHash;
{code}

It would be nice to have only one hash instead of these two for storing the 
values. An idea for the implementation would be to have a class with nullable 
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<DirListingMetadata, PathMetadata> metaEntry;
{code}

And only one hash/cache for these elements.

  was:
Right now the s3guard.LocalMetadataStore uses two HashMaps in the implementation, 
one for files and one for directory listings.
{code:java}
  /** Contains directories and files. */
  private LruHashMap<Path, PathMetadata> fileHash;

  /** Contains directory listings. */
  private LruHashMap<Path, DirListingMetadata> dirHash;
{code}

It would be nice to have only one hash instead of these two for storing the 
values. An idea for the implementation would be to have a class with nullable 
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<DirListingMetadata, PathMetadata> metaEntry;
{code}

And only one hash/cache for these elements.


> Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in 
> LocalMetadataStore
> -
>
> Key: HADOOP-15423
> URL: https://issues.apache.org/jira/browse/HADOOP-15423
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
> implementation - one for files and one for directory listings.
> {code:java}
>   /** Contains directories and files. */
>   private LruHashMap<Path, PathMetadata> fileHash;
>   /** Contains directory listings. */
>   private LruHashMap<Path, DirListingMetadata> dirHash;
> {code}
> It would be nice to have only one hash instead of these two for storing the 
> values. An idea for the implementation would be to have a class with nullable 
> fields:
> {code:java}
>   static class LocalMetaEntry {
> @Nullable
> public PathMetadata pathMetadata;
> @Nullable
> public DirListingMetadata dirListingMetadata;
>   }
> {code}
> or a Pair (tuple):
> {code:java}
> Pair<DirListingMetadata, PathMetadata> metaEntry;
> {code}
> And only one hash/cache for these elements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15423) Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in LocalMetadataStore

2018-04-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15423:
---

 Summary: Use single hash Path -> tuple(DirListingMetadata, 
PathMetadata) in LocalMetadataStore
 Key: HADOOP-15423
 URL: https://issues.apache.org/jira/browse/HADOOP-15423
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota
Assignee: Gabor Bota


Right now the s3guard.LocalMetadataStore uses two HashMaps in the implementation, 
one for files and one for directory listings.
{code:java}
  /** Contains directories and files. */
  private LruHashMap<Path, PathMetadata> fileHash;

  /** Contains directory listings. */
  private LruHashMap<Path, DirListingMetadata> dirHash;
{code}

It would be nice to have only one hash instead of these two for storing the 
values. An idea for the implementation would be to have a class with nullable 
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<DirListingMetadata, PathMetadata> metaEntry;
{code}

And only one hash/cache for these elements.
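
(Editorial sketch of the proposed single-map layout; the entry type mirrors 
the LocalMetaEntry idea above, and the put methods are illustrative, not part 
of any actual patch:)
{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

class SingleHashSketch {
  static class LocalMetaEntry {
    PathMetadata pathMetadata;             // null when only a listing is cached
    DirListingMetadata dirListingMetadata; // null when only file meta is cached
  }

  // One map replaces the separate fileHash and dirHash.
  private final Map<Path, LocalMetaEntry> entries = new HashMap<>();

  void put(PathMetadata meta) {
    entries.computeIfAbsent(meta.getFileStatus().getPath(),
        p -> new LocalMetaEntry()).pathMetadata = meta;
  }

  void put(DirListingMetadata meta) {
    entries.computeIfAbsent(meta.getPath(),
        p -> new LocalMetaEntry()).dirListingMetadata = meta;
  }
}
{code}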



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13756) LocalMetadataStore#put(DirListingMetadata) should also put file metadata into fileHash.

2018-04-27 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456485#comment-16456485
 ] 

Gabor Bota commented on HADOOP-13756:
-

Thanks [~fabbri]. I've issed https://issues.apache.org/jira/browse/HADOOP-15423 
for the Path -> tuple(DirListingMetadata, PathMetadata) change.

> LocalMetadataStore#put(DirListingMetadata) should also put file metadata into 
> fileHash.
> ---
>
> Key: HADOOP-13756
> URL: https://issues.apache.org/jira/browse/HADOOP-13756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-13756.001.patch
>
>
> {{LocalMetadataStore#put(DirListingMetadata)}} only puts the metadata into 
> {{dirHash}}, thus all {{FileStatus}}es are missing from 
> {{LocalMetadataStore#fileHash()}}, which makes it confusing to use.
> So currently, to correctly put a file status into the store (and also 
> set the {{authoritative}} flag), you need to run:
> {code}
> List<PathMetadata> metas = new ArrayList<>();
> boolean authoritative = true;
> for (S3AFileStatus status : files) {
>   PathMetadata meta = new PathMetadata(status);
>   metas.add(meta);  // collect the entries for the directory listing
>   store.put(meta);
> }
> DirListingMetadata dirMeta = new DirListingMetadata(parent, metas, authoritative);
> store.put(dirMeta);
> {code}
> Since solely calling {{store.put(dirMeta)}} is not correct, and calling 
> {{store.put(dirMeta)}} after putting every sub-file {{FileStatus}} is 
> repetitive, can we just use a {{put(PathMetadata)}} and a 
> {{get/setAuthoritative()}} in the MetadataStore interface instead?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13756) LocalMetadataStore#put(DirListingMetadata) should also put file metadata into fileHash.

2018-04-27 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456485#comment-16456485
 ] 

Gabor Bota edited comment on HADOOP-13756 at 4/27/18 2:01 PM:
--

Thanks [~fabbri]. I've created 
https://issues.apache.org/jira/browse/HADOOP-15423 for the Path -> 
tuple(DirListingMetadata, PathMetadata) change.


was (Author: gabor.bota):
Thanks [~fabbri]. I've issed https://issues.apache.org/jira/browse/HADOOP-15423 
for the Path -> tuple(DirListingMetadata, PathMetadata) change.

> LocalMetadataStore#put(DirListingMetadata) should also put file metadata into 
> fileHash.
> ---
>
> Key: HADOOP-13756
> URL: https://issues.apache.org/jira/browse/HADOOP-13756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
>Reporter: Lei (Eddy) Xu
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-13756.001.patch
>
>
> {{LocalMetadataStore#put(DirListingMetadata)}} only puts the metadata into 
> {{dirHash}}, thus all {{FileStatus}}es are missing from 
> {{LocalMetadataStore#fileHash()}}, which makes it confusing to use.
> So currently, to correctly put a file status into the store (and also 
> set the {{authoritative}} flag), you need to run:
> {code}
> List<PathMetadata> metas = new ArrayList<>();
> boolean authoritative = true;
> for (S3AFileStatus status : files) {
>   PathMetadata meta = new PathMetadata(status);
>   metas.add(meta);  // collect the entries for the directory listing
>   store.put(meta);
> }
> DirListingMetadata dirMeta = new DirListingMetadata(parent, metas, authoritative);
> store.put(dirMeta);
> {code}
> Since solely calling {{store.put(dirMeta)}} is not correct, and calling 
> {{store.put(dirMeta)}} after putting every sub-file {{FileStatus}} is 
> repetitive, can we just use a {{put(PathMetadata)}} and a 
> {{get/setAuthoritative()}} in the MetadataStore interface instead?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-12 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473017#comment-16473017
 ] 

Gabor Bota commented on HADOOP-15441:
-

Sure, I've uploaded v003.

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch, 
> HADOOP-15441.003.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-12 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Attachment: HADOOP-15441.003.patch

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch, 
> HADOOP-15441.003.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-08 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467244#comment-16467244
 ] 

Gabor Bota edited comment on HADOOP-15420 at 5/8/18 11:20 AM:
--

*using {{standardize(Path)}}*

Do you mean using it like:
{code:java}
  private boolean expired(FileStatus status, long expiry, String keyPrefix) {
Path path = standardize(status.getPath());
String bucket = path.toUri().getHost();
String translatedPath = "";
if(bucket != null && !bucket.isEmpty()){
  translatedPath =
  PathMetadataDynamoDBTranslation.pathToParentKey(path);
} else {
  translatedPath = path.toString();
}
return status.getModificationTime() < expiry && !status.isDirectory()
  && translatedPath.startsWith(keyPrefix);
  }
{code}
I need to check for the bucket. If I remove the check on {{getHost}} (i.e. for 
an existing bucket), the following tests will fail:
 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneUnsetsAuthoritative
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneFiles
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneDirs

in these cases {{status.getPath()}} is file:/unpruned-root-dir, which cannot 
be handled by {{PathMetadataDynamoDBTranslation.pathToParentKey()}}; the test 
will fail with the following:
{noformat}
java.lang.IllegalArgumentException: Path missing bucket
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.pathToParentKey(PathMetadataDynamoDBTranslation.java:255)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.expired(LocalMetadataStore.java:358)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:308)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:730)
(...)
{noformat}
*Testing with local dynamo*
 Here is my output for {{mvn clean test -Ds3guard -Ddynamo}}: 
[https://pastebin.com/raw/qGuwuS4F]
 Short summary: Tests run: 398, Failures: 0, Errors: 0, Skipped: 2


was (Author: gabor.bota):
*using {{standardize(Path)}}*

Do you mean using it like:
{code:java}
  private boolean expired(FileStatus status, long expiry, String keyPrefix) {
Path path = standardize(status.getPath());
String bucket = path.toUri().getHost();
String translatedPath = "";
if(bucket != null && !bucket.isEmpty()){
  translatedPath =
  PathMetadataDynamoDBTranslation.pathToParentKey(path);
} else {
  translatedPath = path.toString();
}
return status.getModificationTime() < expiry && !status.isDirectory()
  && translatedPath.startsWith(keyPrefix);
  }
{code}
I need to check for the bucket. If I remove the check on {{getHost}} (i.e. for 
an existing bucket), the following tests will fail:
 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneUnsetsAuthoritative
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneFiles
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneDirs

in these cases {{status.getPath()}} is file:/unpruned-root-dir, which cannot 
be handled by {{PathMetadataDynamoDBTranslation.pathToParentKey()}}; the test 
will fail with the following:
{noformat}
java.lang.IllegalArgumentException: Path missing bucket
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.pathToParentKey(PathMetadataDynamoDBTranslation.java:255)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.expired(LocalMetadataStore.java:358)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:308)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:730)
(...)
{noformat}
*Testing with local dynamo*
 Here is my output for {{mvn clean test -Ds3guard -Ddynamo}}: 
[https://pastebin.com/qGuwuS4F]
 Short summary: Tests run: 398, Failures: 0, Errors: 0, Skipped: 2

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch, HADOOP-15420.002.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] 

[jira] [Comment Edited] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-08 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467244#comment-16467244
 ] 

Gabor Bota edited comment on HADOOP-15420 at 5/8/18 11:16 AM:
--

*using {{standardize(Path)}}*

Do you mean using it like:
{code:java}
  private boolean expired(FileStatus status, long expiry, String keyPrefix) {
Path path = standardize(status.getPath());
String bucket = path.toUri().getHost();
String translatedPath = "";
if(bucket != null && !bucket.isEmpty()){
  translatedPath =
  PathMetadataDynamoDBTranslation.pathToParentKey(path);
} else {
  translatedPath = path.toString();
}
return status.getModificationTime() < expiry && !status.isDirectory()
  && translatedPath.startsWith(keyPrefix);
  }
{code}
I need to check for the bucket. If I remove the check on {{getHost}} (i.e. for 
an existing bucket), the following tests will fail:
 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneUnsetsAuthoritative
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneFiles
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneDirs

in these cases {{status.getPath()}} is file:/unpruned-root-dir, which cannot 
be handled by {{PathMetadataDynamoDBTranslation.pathToParentKey()}}; the test 
will fail with the following:
{noformat}
java.lang.IllegalArgumentException: Path missing bucket
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.pathToParentKey(PathMetadataDynamoDBTranslation.java:255)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.expired(LocalMetadataStore.java:358)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:308)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:730)
(...)
{noformat}
*Testing with local dynamo*
 Here is my output for {{mvn clean test -Ds3guard -Ddynamo}}: 
[https://pastebin.com/qGuwuS4F]
 Short summary: Tests run: 398, Failures: 0, Errors: 0, Skipped: 2


was (Author: gabor.bota):
*using {{standardize(Path)}}

Do you mean using it like:
{code:java}
  private boolean expired(FileStatus status, long expiry, String keyPrefix) {
Path path = standardize(status.getPath());
String bucket = path.toUri().getHost();
String translatedPath = "";
if(bucket != null && !bucket.isEmpty()){
  translatedPath =
  PathMetadataDynamoDBTranslation.pathToParentKey(path);
} else {
  translatedPath = path.toString();
}
return status.getModificationTime() < expiry && !status.isDirectory()
  && translatedPath.startsWith(keyPrefix);
  }
{code}
I need to check for the bucket. If I remove the check on {{getHost}} (i.e. for 
an existing bucket), the following tests will fail:
 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneUnsetsAuthoritative
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneFiles
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneDirs

in these cases {{status.getPath()}} is file:/unpruned-root-dir, which cannot 
be handled by {{PathMetadataDynamoDBTranslation.pathToParentKey()}}; the test 
will fail with the following:
{noformat}
java.lang.IllegalArgumentException: Path missing bucket
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.pathToParentKey(PathMetadataDynamoDBTranslation.java:255)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.expired(LocalMetadataStore.java:358)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:308)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:730)
(...)
{noformat}
*Testing with local dynamo*
 Here is my output for {{mvn clean test -Ds3guard -Ddynamo}}: 
[https://pastebin.com/qGuwuS4F]
 Short summary: Tests run: 398, Failures: 0, Errors: 0, Skipped: 2

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch, HADOOP-15420.002.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] 

[jira] [Commented] (HADOOP-15420) s3guard ITestS3GuardToolLocal failures in diff tests

2018-05-08 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467244#comment-16467244
 ] 

Gabor Bota commented on HADOOP-15420:
-

*using {{standardize(Path)}}

Do you mean using it like:
{code:java}
  private boolean expired(FileStatus status, long expiry, String keyPrefix) {
Path path = standardize(status.getPath());
String bucket = path.toUri().getHost();
String translatedPath = "";
if(bucket != null && !bucket.isEmpty()){
  translatedPath =
  PathMetadataDynamoDBTranslation.pathToParentKey(path);
} else {
  translatedPath = path.toString();
}
return status.getModificationTime() < expiry && !status.isDirectory()
  && translatedPath.startsWith(keyPrefix);
  }
{code}
I need to check for the bucket. If I remove the check on {{getHost}} (i.e. for 
an existing bucket), the following tests will fail:
 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneUnsetsAuthoritative
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneFiles
 org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase#testPruneDirs

in these cases {{status.getPath()}} is file:/unpruned-root-dir, which cannot 
be handled by {{PathMetadataDynamoDBTranslation.pathToParentKey()}}; the test 
will fail with the following:
{noformat}
java.lang.IllegalArgumentException: Path missing bucket
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.pathToParentKey(PathMetadataDynamoDBTranslation.java:255)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.expired(LocalMetadataStore.java:358)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(LocalMetadataStore.java:308)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:730)
(...)
{noformat}
*Testing with local dynamo*
 Here is my output for {{mvn clean test -Ds3guard -Ddynamo}}: 
[https://pastebin.com/qGuwuS4F]
 Short summary: Tests run: 398, Failures: 0, Errors: 0, Skipped: 2
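
(Editorial illustration of why the {{getHost}} guard above is needed: an s3a 
URI carries the bucket as its host component, while the {{file:}} paths used 
by the prune tests have no host at all, so {{pathToParentKey()}} would reject 
them. A minimal demonstration:)
{code:java}
import org.apache.hadoop.fs.Path;

public class BucketHostSketch {
  public static void main(String[] args) {
    Path s3aPath = new Path("s3a://bucket-new/test/testPruneCommandCLI/stale");
    Path localPath = new Path("file:/unpruned-root-dir");
    System.out.println(s3aPath.toUri().getHost());   // bucket-new
    System.out.println(localPath.toUri().getHost()); // null -> skip translation
  }
}
{code}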

> s3guard ITestS3GuardToolLocal failures in diff tests
> 
>
> Key: HADOOP-15420
> URL: https://issues.apache.org/jira/browse/HADOOP-15420
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15420.001.patch, HADOOP-15420.002.patch
>
>
> Noticed this when testing the patch for HADOOP-13756.
>  
> {code:java}
> [ERROR] Failures:
> [ERROR]   
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>  Pruned children count 
> [PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/stale;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798258286; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}, 
> PathMetadata{fileStatus=S3AFileStatus{path=s3a://bucket-new/test/testPruneCommandCLI/fresh;
>  isDirectory=false; length=100; replication=1; blocksize=512; 
> modification_time=1524798262583; access_time=0; owner=hdfs; group=hdfs; 
> permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
> isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
> isDeleted=false}] expected:<1> but was:<2>{code}
>  
> Looking through the code, I'm noticing a couple of issues.
>  
> 1. {{testDiffCommand()}} is in {{ITestS3GuardToolLocal}}, but it should 
> really be running for all MetadataStore implementations.  Seems like it 
> should live in {{AbstractS3GuardToolTestBase}}.
> 2. {{AbstractS3GuardToolTestBase#createFile()}} seems wrong. When 
> {{onMetadataStore}} is false, it does a {{ContractTestUtils.touch(file)}}, 
> but the fs is initialized with a MetadataStore present, so it seems like the 
> fs will still put the file in the MetadataStore?
> There are other tests which explicitly go around the MetadataStore by using 
> {{fs.setMetadataStore(nullMS)}}, e.g. ITestS3AInconsistency. We should do 
> something similar in {{AbstractS3GuardToolTestBase#createFile()}}, minding 
> any issues with parallel test runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Work started] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-08 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14918 started by Gabor Bota.
---
> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-08 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16467471#comment-16467471
 ] 

Gabor Bota commented on HADOOP-14918:
-

Created a branch with the latest patch rebased to trunk: 
https://github.com/bgaborg/hadoop/tree/HADOOP-14918

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK We work with...eventually 
> there'll be differences in API.
> * as the local dynamo DB is unshaded. it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against., And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-05-04 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463678#comment-16463678
 ] 

Gabor Bota commented on HADOOP-14927:
-

Thanks for the patch [~fabbri].

After applying the patch, testDestroyNoBucket() is still failing for me when I 
set fs.s3a.s3guard.ddb.region in auth-keys.xml, with:
{noformat}
java.lang.IllegalArgumentException: No DynamoDB table name configured

at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:324)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:266)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:549)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}
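For reference, this is how the region is set in my auth-keys.xml (the region 
value here is only an example):
{code:xml}
<!-- test resource auth-keys.xml; any valid DynamoDB region reproduces the
     failure above when no table name is configured -->
<property>
  <name>fs.s3a.s3guard.ddb.region</name>
  <value>eu-west-1</value>
</property>
{code}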

If I don't set fs.s3a.s3guard.ddb.region in auth-keys.xml, the test will fail 
with:
{noformat}
java.io.FileNotFoundException: Bucket this-bucket-does-not-exist-000 
does not exist

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:374)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:308)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3377)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:530)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:306)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:182)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:542)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

[jira] [Updated] (HADOOP-15441) After HADOOP-14445, encryption zone operations print unnecessary INFO logs

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Status: Patch Available  (was: Open)

> After HADOOP-14445, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.
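A minimal sketch of that suggestion, assuming KMSClientProvider logs through 
SLF4J ({{LOG}}, {{url}} and {{dtService}} are placeholder names, not 
necessarily the actual fields):
{code:java}
// Sketch: downgrade the creation message from INFO to DEBUG; parameterized
// logging keeps the call cheap when DEBUG is disabled.
LOG.debug("KMSClientProvider for KMS url: {} delegation token service: {} created.",
    url, dtService);
{code}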



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-05-04 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463678#comment-16463678
 ] 

Gabor Bota edited comment on HADOOP-14927 at 5/4/18 10:36 AM:
--

Thanks for the patch [~fabbri].

After applying the patch, testDestroyNoBucket() is still failing for me when I 
set fs.s3a.s3guard.ddb.region in auth-keys.xml, with:
{noformat}
java.lang.IllegalArgumentException: No DynamoDB table name configured

at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:324)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:266)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:549)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

If I don't set fs.s3a.s3guard.ddb.region in auth-keys.xml, the test will fail 
with:
{noformat}
java.io.FileNotFoundException: Bucket this-bucket-does-not-exist-000 
does not exist

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:374)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:308)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3377)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:530)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:306)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:182)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:542)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

[jira] [Updated] (HADOOP-15441) After HADOOP-14445, encryption zone operations print unnecessary INFO logs

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15441:

Attachment: HADOOP-15441.001.patch

> After HADOOP-14445, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-15441.001.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15441) After HADOOP-14445, encryption zone operations print unnecessary INFO logs

2018-05-04 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15441:
---

Assignee: Gabor Bota

> After HADOOP-14445, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch
>
>
> It looks like after HADOOP-14445, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-05-04 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463678#comment-16463678
 ] 

Gabor Bota edited comment on HADOOP-14927 at 5/4/18 11:14 AM:
--

Thanks for the patch [~fabbri].

*After applying* the patch, testDestroyNoBucket() is still failing for me when 
I set fs.s3a.s3guard.ddb.region in auth-keys.xml, with:
{noformat}
java.lang.IllegalArgumentException: No DynamoDB table name configured

at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:324)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:266)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:549)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

If I don't set fs.s3a.s3guard.ddb.region in auth-keys.xml, the test will fail 
with:
{noformat}
java.io.FileNotFoundException: Bucket this-bucket-does-not-exist-000 
does not exist

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:374)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:308)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3377)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:530)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:306)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:182)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:542)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:350)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1489)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:95)
at 
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDestroyNoBucket(AbstractS3GuardToolTestBase.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}


[jira] [Commented] (HADOOP-15416) s3guard diff assert failure if source path not found

2018-05-20 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481873#comment-16481873
 ] 

Gabor Bota commented on HADOOP-15416:
-

[~ste...@apache.org] which tests did you run exactly? I'd like to reproduce the 
issue and create some tests for it that fail before I start fixing it.

> s3guard diff assert failure if source path not found
> 
>
> Key: HADOOP-15416
> URL: https://issues.apache.org/jira/browse/HADOOP-15416
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: s3a with fault injection turned on
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> Got an illegal argument exception trying to do an s3guard diff in a test run. 
> Underlying cause: the directory in the supplied s3a path didn't exist



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-20 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Status: Patch Available  (was: Open)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), the 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.
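For illustration, a sketch of how {{Verifier.readFlavorAndVerifier}} could 
tolerate AUTH_SYS (an assumed shape based on the stack trace above, not 
necessarily what HADOOP-15307.001.patch actually does):
{code:java}
// Sketch only: accept AUTH_SYS in the reply verifier instead of throwing.
// Reusing VerifierNone for AUTH_SYS is an assumption about the fix.
public static Verifier readFlavorAndVerifier(XDR xdr) {
  AuthFlavor flavor = AuthFlavor.fromValue(xdr.readInt());
  final Verifier verifier;
  if (flavor == AuthFlavor.AUTH_NONE || flavor == AuthFlavor.AUTH_SYS) {
    verifier = new VerifierNone();
  } else if (flavor == AuthFlavor.RPCSEC_GSS) {
    verifier = new VerifierGSS();
  } else {
    throw new UnsupportedOperationException("Unsupported verifier flavor"
        + flavor);
  }
  verifier.read(xdr);
  return verifier;
}
{code}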



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15309) default maven in path under start-build-env.sh is the wrong one

2018-05-20 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16481868#comment-16481868
 ] 

Gabor Bota commented on HADOOP-15309:
-

I was not able to find {{/usr/bin/mvn}} in {{start-build-env.sh}} on {{trunk}}. 
Could you clarify the description of the issue please?

> default maven in path under start-build-env.sh is the wrong one
> ---
>
> Key: HADOOP-15309
> URL: https://issues.apache.org/jira/browse/HADOOP-15309
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Gabor Bota
>Priority: Trivial
>
> PATH points to /usr/bin/mvn, should be /opt/maven/bin/mvn



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15480) AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo

2018-05-20 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15480:

Attachment: HADOOP-15480.001.patch

> AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo
> ---
>
> Key: HADOOP-15480
> URL: https://issues.apache.org/jira/browse/HADOOP-15480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15480.001.patch
>
>
> When running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB, the 
> testDiffCommand test fails with the following:
> {noformat}
> testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
> Time elapsed: 8.059 s  <<< FAILURE!
> java.lang.AssertionError: 
> Mismatched metadata store outputs: MS D   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
>  expected:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4]> 
> but was:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:382)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> 

[jira] [Updated] (HADOOP-15480) AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo

2018-05-20 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15480:

Status: Patch Available  (was: Open)

Thanks for the hint [~fabbri]. 

I've added a helper method to 
org.apache.hadoop.fs.s3a.S3ATestUtils#setMetadataStore because I wanted to 
avoid changing the visibility of 
org.apache.hadoop.fs.s3a.S3AFileSystem#setMetadataStore.
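Roughly, the helper looks like this (just a sketch - the exact signature in the 
patch may differ):
{code:java}
// In org.apache.hadoop.fs.s3a.S3ATestUtils, which lives in the same package as
// S3AFileSystem, so the setter's restricted visibility can stay unchanged.
public static void setMetadataStore(S3AFileSystem fs, MetadataStore ms) {
  fs.setMetadataStore(ms);
}
{code}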

> AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo
> ---
>
> Key: HADOOP-15480
> URL: https://issues.apache.org/jira/browse/HADOOP-15480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15480.001.patch
>
>
> When running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB, the 
> testDiffCommand test fails with the following:
> {noformat}
> testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
> Time elapsed: 8.059 s  <<< FAILURE!
> java.lang.AssertionError: 
> Mismatched metadata store outputs: MS D   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
>  expected:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4]> 
> but was:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:382)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> 

[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-17 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478933#comment-16478933
 ] 

Gabor Bota commented on HADOOP-15473:
-

Hi [~ajisakaa],
Thanks for the reviews and for helping me solve this issue.
I've made the changes you asked for in my latest patch.

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HDFS-13494.001.patch, 
> HDFS-13494.002.patch, HDFS-13494.003.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in HBase tests right now (using HDFS) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15473:

Attachment: HADOOP-15473.004.patch

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HDFS-13494.001.patch, 
> HDFS-13494.002.patch, HDFS-13494.003.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in HBase tests right now (using HDFS) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-17 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478997#comment-16478997
 ] 

Gabor Bota commented on HADOOP-14946:
-

Sure, I'll start working on this shortly.

> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14918:

Status: Patch Available  (was: In Progress)

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0, 2.9.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14918:

Attachment: HADOOP-14918-004.patch

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-17 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14946 started by Gabor Bota.
---
> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-17 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479461#comment-16479461
 ] 

Gabor Bota commented on HADOOP-14918:
-

Some notes on the patch:
The test runs without failure on eu-west-1 (Ireland), but it runs slowly - 
that's why the increased timeout is needed as a system setting in the pom file. 
ITestDynamoDBMetadataStore extends MetadataStoreTestBase extends 
HadoopTestBase, and HadoopTestBase has this 100s TEST_DEFAULT_TIMEOUT_VALUE; 
I need to double that to be sure it won't time out. Maybe it would be a good 
idea to move this to the scale test group?

Right now the test cannot be run in parallel, so I've added it to the 
sequential-integration-tests group.
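For reference, the override amounts to something like this in the failsafe 
plugin configuration (the {{test.default.timeout}} property name is taken from 
HadoopTestBase; the exact pom location is an assumption):
{code:xml}
<!-- sketch: double HadoopTestBase's 100s default timeout for this group -->
<systemPropertyVariables>
  <test.default.timeout>200000</test.default.timeout>
</systemPropertyVariables>
{code}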

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch
>
>
> I'm going to propose cutting out the localdynamo test option for s3guard
> * the local DDB JAR is unmaintained/lags the SDK we work with... eventually 
> there'll be differences in the API.
> * as the local dynamo DB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS
> * it complicates test runs. Now we need to test for both localdynamo *and* 
> real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need to have a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15423) Merge fileCache and dirCache into one single cache in LocalMetadataStore

2018-05-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15423:

Description: 
Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
implementation - one for the file and one for the dir hash.
{code:java}
  /** Contains directories and files. */
  private Cache<Path, PathMetadata> fileCache;

  /** Contains directory listings. */
  private Cache<Path, DirListingMetadata> dirCache;
{code}

It would be nice to have only one hash instead of these two for storing the 
values. An idea for the implementation would be to have a class with nullable 
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<PathMetadata, DirListingMetadata> metaEntry;
{code}

And only one hash/cache for these elements.

  was:
Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
implementation - one for the file and one for the dir hash.
{code:java}
  /** Contains directories and files. */
  private LruHashMap<Path, PathMetadata> fileHash;

  /** Contains directory listings. */
  private LruHashMap<Path, DirListingMetadata> dirHash;
{code}

It would be nice to have only one hash instead of these two for storing the 
values. An idea for the implementation would be to have a class with nullable 
fields:

{code:java}
  static class LocalMetaEntry {
@Nullable
public PathMetadata pathMetadata;
@Nullable
public DirListingMetadata dirListingMetadata;
  }
{code}

or a Pair (tuple):

{code:java}
Pair<PathMetadata, DirListingMetadata> metaEntry;
{code}

And only one hash/cache for these elements.


> Merge fileCache and dirCache into one single cache in LocalMetadataStore
> 
>
> Key: HADOOP-15423
> URL: https://issues.apache.org/jira/browse/HADOOP-15423
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
> implementation - one for the file and one for the dir hash.
> {code:java}
>   /** Contains directories and files. */
>   private Cache<Path, PathMetadata> fileCache;
>   /** Contains directory listings. */
>   private Cache<Path, DirListingMetadata> dirCache;
> {code}
> It would be nice to have only one hash instead of these two for storing the 
> values. An idea for the implementation would be to have a class with nullable 
> fields:
> {code:java}
>   static class LocalMetaEntry {
> @Nullable
> public PathMetadata pathMetadata;
> @Nullable
> public DirListingMetadata dirListingMetadata;
>   }
> {code}
> or a Pair (tuple):
> {code:java}
> Pair<PathMetadata, DirListingMetadata> metaEntry;
> {code}
> And only one hash/cache for these elements.
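A sketch of what the merged structure could look like (the field and lookup 
shape are illustrative, not a final design):
{code:java}
// One cache keyed by Path; each entry may carry file metadata, a directory
// listing, or both. LocalMetaEntry is the nullable-field class proposed above.
private Cache<Path, LocalMetaEntry> localCache;

// A single getIfPresent() then serves both of the old fileCache/dirCache reads:
LocalMetaEntry entry = localCache.getIfPresent(path);
PathMetadata pm = entry == null ? null : entry.pathMetadata;
DirListingMetadata listing = entry == null ? null : entry.dirListingMetadata;
{code}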



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15423) Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in LocalMetadataStore

2018-05-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15423 started by Gabor Bota.
---
> Use single hash Path -> tuple(DirListingMetadata, PathMetadata) in 
> LocalMetadataStore
> -
>
> Key: HADOOP-15423
> URL: https://issues.apache.org/jira/browse/HADOOP-15423
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
> implementation - one for the file and one for the dir hash.
> {code:java}
>   /** Contains directories and files. */
>   private LruHashMap<Path, PathMetadata> fileHash;
>   /** Contains directory listings. */
>   private LruHashMap<Path, DirListingMetadata> dirHash;
> {code}
> It would be nice to have only one hash instead of these two for storing the 
> values. An idea for the implementation would be to have a class with nullable 
> fields:
> {code:java}
>   static class LocalMetaEntry {
> @Nullable
> public PathMetadata pathMetadata;
> @Nullable
> public DirListingMetadata dirListingMetadata;
>   }
> {code}
> or a Pair (tuple):
> {code:java}
> Pair<PathMetadata, DirListingMetadata> metaEntry;
> {code}
> And only one hash/cache for these elements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15423) Merge fileCache and dirCache into one single cache in LocalMetadataStore

2018-05-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15423:

Summary: Merge fileCache and dirCache into one single cache in 
LocalMetadataStore  (was: Use single hash Path -> tuple(DirListingMetadata, 
PathMetadata) in LocalMetadataStore)

> Merge fileCache and dirCache into one single cache in LocalMetadataStore
> 
>
> Key: HADOOP-15423
> URL: https://issues.apache.org/jira/browse/HADOOP-15423
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Right now the s3guard.LocalMetadataStore uses two HashMaps in the 
> implementation - one for the file and one for the dir hash.
> {code:java}
>   /** Contains directories and files. */
>   private LruHashMap<Path, PathMetadata> fileHash;
>   /** Contains directory listings. */
>   private LruHashMap<Path, DirListingMetadata> dirHash;
> {code}
> It would be nice to have only one hash instead of these two for storing the 
> values. An idea for the implementation would be to have a class with nullable 
> fields:
> {code:java}
>   static class LocalMetaEntry {
> @Nullable
> public PathMetadata pathMetadata;
> @Nullable
> public DirListingMetadata dirListingMetadata;
>   }
> {code}
> or a Pair (tuple):
> {code:java}
> Pair<DirListingMetadata, PathMetadata> metaEntry;
> {code}
> And only one hash/cache for these elements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-23 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487116#comment-16487116
 ] 

Gabor Bota edited comment on HADOOP-15473 at 5/23/18 11:42 AM:
---

Good point [~xiaochen], I've added the check for the already set property and 
documented the behavior in the v5 patch.


was (Author: gabor.bota):
Good point [~xiaochen], I've added the check for the already set property and 
documented the behavior in the v4 patch.

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HDFS-13494.001.patch, HDFS-13494.002.patch, HDFS-13494.003.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.
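
To make the comment above concrete, a minimal sketch of the "check before set" behavior being discussed (hedged: configureSerialFilter and defaultFilter are hypothetical names, and the actual filter value is whatever the patch documents):

{code:java}
// Hypothetical sketch: install a serialization filter for JCEKS key entries
// only when the operator has not already configured one.
private static void configureSerialFilter(String defaultFilter) {
  String current = System.getProperty("jceks.key.serialFilter");
  if (current == null || current.isEmpty()) {
    System.setProperty("jceks.key.serialFilter", defaultFilter);
  }
}
{code}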



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-23 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487116#comment-16487116
 ] 

Gabor Bota commented on HADOOP-15473:
-

Good point [~xiaochen], I've added the check for the already set property and 
documented the behavior in the v4 patch.

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HDFS-13494.001.patch, HDFS-13494.002.patch, HDFS-13494.003.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15473:

Attachment: HADOOP-15473.005.patch

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HDFS-13494.001.patch, HDFS-13494.002.patch, HDFS-13494.003.patch, 
> org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: HADOOP-15307.002.patch

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.
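
A hedged sketch of what handling AUTH_SYS could look like in Verifier.readFlavorAndVerifier, assuming an AUTH_SYS reply verifier can be read like AUTH_NONE's opaque body (an illustration only; the committed patch may differ):

{code:java}
// Hypothetical sketch: accept AUTH_SYS instead of throwing, reading its
// verifier the same way as AUTH_NONE.
public static Verifier readFlavorAndVerifier(XDR xdr) {
  AuthFlavor flavor = AuthFlavor.fromValue(xdr.readInt());
  final Verifier verifier;
  if (flavor == AuthFlavor.AUTH_NONE || flavor == AuthFlavor.AUTH_SYS) {
    verifier = new VerifierNone();
  } else if (flavor == AuthFlavor.RPCSEC_GSS) {
    verifier = new VerifierGSS();
  } else {
    throw new UnsupportedOperationException(
        "Unsupported verifier flavor " + flavor);
  }
  verifier.read(xdr);
  return verifier;
}
{code}

This keeps the registration path alive long enough to surface the real problem (the denied portmapper request) instead of dying on the verifier flavor.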



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487251#comment-16487251
 ] 

Gabor Bota commented on HADOOP-15307:
-

Thanks for the review [~knanasi], I've created my v2 patch based on your 
comments.

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Status: Patch Available  (was: Open)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: HADOOP-15307.002.patch

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Status: Open  (was: Patch Available)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: (was: HADOOP-15307.002.patch)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), 
> the gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-24 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15473:

Attachment: HADOOP-15473.006.patch

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HADOOP-15473.006.patch, HDFS-13494.001.patch, HDFS-13494.002.patch, 
> HDFS-13494.003.patch, org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-24 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488716#comment-16488716
 ] 

Gabor Bota commented on HADOOP-15473:
-

Thanks [~ajisakaa], I've corrected it.

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HADOOP-15473.006.patch, HDFS-13494.001.patch, HDFS-13494.002.patch, 
> HDFS-13494.003.patch, org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15480) AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo

2018-05-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484407#comment-16484407
 ] 

Gabor Bota edited comment on HADOOP-15480 at 5/22/18 8:00 PM:
--

Tested against eu-west-1; the only failure was 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal) 
(known from HADOOP-14927) - no other errors or failures.


was (Author: gabor.bota):
Tested with eu-west-1 with [ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal) 
(know from HADOOP-14927), otherwise no errors or failures.

> AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo
> ---
>
> Key: HADOOP-15480
> URL: https://issues.apache.org/jira/browse/HADOOP-15480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15480.001.patch
>
>
> When running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB, the 
> testDiffCommand test fails with the following:
> {noformat}
> testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
> Time elapsed: 8.059 s  <<< FAILURE!
> java.lang.AssertionError: 
> Mismatched metadata store outputs: MS D   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
>  expected:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4]> 
> but was:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:382)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 

[jira] [Updated] (HADOOP-14918) remove the Local Dynamo DB test option

2018-05-25 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14918:

Status: In Progress  (was: Patch Available)

> remove the Local Dynamo DB test option
> --
>
> Key: HADOOP-14918
> URL: https://issues.apache.org/jira/browse/HADOOP-14918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0, 2.9.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14918-001.patch, HADOOP-14918-002.patch, 
> HADOOP-14918-003.patch, HADOOP-14918-004.patch
>
>
> I'm going to propose cutting out the local-dynamo test option for s3guard:
> * the local DDB JAR is unmaintained and lags the SDK we work with... 
> eventually there'll be differences in the API.
> * as the local DynamoDB is unshaded, it complicates classpath setup for the 
> build. Remove it and there's no need to worry about versions of anything 
> other than the shaded AWS SDK.
> * it complicates test runs: now we need to test against both local dynamo 
> *and* real dynamo
> * but we can't ignore real dynamo, because that's the one which matters
> While the local option promises to reduce test costs, really, it's just 
> adding complexity. If you are testing with s3guard, you need a real 
> table to test against. And with the exception of those people testing s3a 
> against non-AWS, consistent endpoints, everyone should be testing with 
> S3Guard.
> -Straightforward to remove.-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15480) AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo

2018-05-25 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490591#comment-16490591
 ] 

Gabor Bota commented on HADOOP-15480:
-

In my latest patch, I use (S3AFileSystem) FileSystem.newInstance(fsUri, conf) 
to create a new instance in testDiffCommand, and I set 
S3GUARD_METASTORE_NULL as the S3_METADATA_STORE_IMPL in that configuration.

I had to add a new fs parameter to the mkdirs and createFile methods, and I've 
also created overloads with the original parameter lists that call 
getFileSystem(), so they use the default initialized S3AFileSystem as the 
original methods did.
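
A minimal sketch of that setup (hedged: fsUri and getFileSystem() come from the test harness, and the constant names are assumed to be the ones in org.apache.hadoop.fs.s3a.Constants):

{code:java}
// Hypothetical sketch: build a second, "raw" S3AFileSystem whose metadata
// store is the null implementation, so files created through it are not
// recorded in the MetadataStore that testDiffCommand inspects.
Configuration conf = new Configuration(getFileSystem().getConf());
conf.set(Constants.S3_METADATA_STORE_IMPL, Constants.S3GUARD_METASTORE_NULL);
S3AFileSystem rawFs = (S3AFileSystem) FileSystem.newInstance(fsUri, conf);
{code}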

> AbstractS3GuardToolTestBase.testDiffCommand fails when using dynamo
> ---
>
> Key: HADOOP-15480
> URL: https://issues.apache.org/jira/browse/HADOOP-15480
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15480.001.patch, HADOOP-15480.002.patch
>
>
> When running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB, the 
> testDiffCommand test fails with the following:
> {noformat}
> testDiffCommand(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)  
> Time elapsed: 8.059 s  <<< FAILURE!
> java.lang.AssertionError: 
> Mismatched metadata store outputs: MS D   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2
> MSF   100 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3
> S3F   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
> MSF   0   
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4
>  expected:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4]> 
> but was:<[
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-0, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-1, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-2, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-3, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/ms_only/file-4, 
> s3a://cloudera-dev-gabor-ireland/test/test-diff/s3_only/file-4]>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testDiffCommand(AbstractS3GuardToolTestBase.java:382)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 
