[jira] [Comment Edited] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757647#comment-16757647
 ] 

Anu Engineer edited comment on HDFS-14234 at 1/31/19 7:22 PM:
--

Hi [~clayb],

Thanks for the patch. It looks pretty good, and I think this is a good approach; 
in particular, I like the extension point that lets more filters be added easily. 
I have code-reviewed this as if it were a real patch, so I have some minor comments.

*DatanodeHttpServer.java:384:*
 # nit: Leftover developer comment ("XXX Clay how do we want to handle this for 
generic users?"). Should this be removed, or resolved for generic users?

*HostRestrictingAuthorizationFilter.java*
 # Remove unused imports.
 # Line 152: Unused variable overrideConfigs.
 # Line 160: Clean up some of the arguments passed to the constructor.
 # Line 165: Should we make this a WARN instead of DEBUG? If the user wrote a 
rule wrong, we would at least see a warning; in the current patch it is ignored 
silently.
 # I don't fully understand the use case, but given
 _// Map is {"user": [subnet, path]}_
 I am going to assume that "user" is a user in Kerberos/a directory service and 
can also be a group.
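For illustration, the assumed _// Map is {"user": [subnet, path]}_ shape could be 
parsed roughly as below. This is only a sketch under the assumption of a 
comma-separated "user,subnet,path" line format; the class and method names are 
hypothetical and this is not the patch's actual config syntax. It also shows the 
suggested WARN-instead-of-silent handling of malformed rules:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical parser for rules of the form "user,subnet,path".
public class HostRestrictionRules {
    // One user (or group) may carry several [subnet, path] rules.
    private final Map<String, List<String[]>> rules = new HashMap<>();

    public void addRule(String line) {
        String[] parts = line.split(",", 3);
        if (parts.length != 3) {
            // Per the review: surface malformed rules at WARN rather
            // than dropping them silently at DEBUG level.
            System.err.println("WARN: ignoring malformed rule: " + line);
            return;
        }
        rules.computeIfAbsent(parts[0].trim(), k -> new ArrayList<>())
             .add(new String[] { parts[1].trim(), parts[2].trim() });
    }

    public List<String[]> rulesFor(String user) {
        return rules.getOrDefault(user, Collections.emptyList());
    }
}
```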

*HostRestrictingAuthorizationFilterHandler.java*
 # Remove unused imports.
 # Nit: Remove the XXX on line 194.
 # Nit: Line 103: Javadoc is wrong.

*TestHostRestrictingAuthorizationFilter.java*
 # Remove unused imports.

*TestHostRestrictingAuthorizationFilterHandler.java*
 # Remove unused imports.
 # This file looks incomplete; care to fix it or remove it from this patch?

P.S. I have also added you to the contributors group, so you can assign JIRAs to 
yourself.



> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable HDFS to function as a dropbox for data: data goes in but cannot 
> be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could prevent the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> [{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File].
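The policy described above boils down to a per-operation allow/deny decision: a 
restricted network zone may upload and verify data but not read it back. A 
minimal sketch of that check, assuming hypothetical class and method names 
rather than the patch's actual API:

```java
import java.util.Set;

// Hypothetical per-operation policy: restricted zones may write and
// verify data (CREATE, GETFILECHECKSUM) but may not read it back (OPEN).
public class WebHdfsOpPolicy {
    // Operations that remain allowed for restricted callers.
    private static final Set<String> ALWAYS_ALLOWED =
        Set.of("CREATE", "GETFILECHECKSUM");

    public static boolean isAllowed(String op, boolean callerRestricted) {
        if (!callerRestricted) {
            return true; // unrestricted zones keep full WebHDFS access
        }
        // Reads such as OPEN are denied for restricted zones.
        return ALWAYS_ALLOWED.contains(op.toUpperCase());
    }
}
```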



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org






[jira] [Comment Edited] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-29 Thread Clay B. (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755508#comment-16755508
 ] 

Clay B. edited comment on HDFS-14234 at 1/30/19 12:26 AM:
--

This is a work in progress that still needs more tests; it is mainly to ensure 
that I am on a reasonable path that others also think is sane.


