[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553814#comment-16553814 ]

genericqa commented on HADOOP-12953:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 52s | Maven dependency ordering for branch |
| +1 | mvninstall | 27m 1s | trunk passed |
| +1 | compile | 29m 17s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 10m 53s | trunk passed |
| +1 | shadedclient | 22m 39s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 56s | trunk passed |
| +1 | javadoc | 2m 16s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| -1 | mvninstall | 1m 3s | hadoop-hdfs in the patch failed. |
| -1 | mvninstall | 7m 54s | hadoop-hdfs-native-client in the patch failed. |
| +1 | compile | 35m 14s | the patch passed |
| +1 | cc | 35m 14s | the patch passed |
| +1 | javac | 35m 14s | the patch passed |
| +1 | checkstyle | 0m 35s | the patch passed |
| +1 | mvnsite | 13m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 52s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 4m 36s | the patch passed |
| +1 | javadoc | 2m 25s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 17s | hadoop-common in the patch passed. |
| -1 | unit | 115m 47s | hadoop-hdfs in the patch failed. |
| -1 | unit | 19m 19s | hadoop-hdfs-native-client in the patch failed. |
| +1 | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 315m 22s | |

|| Reason || Tests ||
| Failed CTEST tests |
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553562#comment-16553562 ]

genericqa commented on HADOOP-12953:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 58s | trunk passed |
| +1 | compile | 28m 53s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 10m 5s | trunk passed |
| +1 | shadedclient | 21m 49s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 30s | trunk passed |
| +1 | javadoc | 2m 17s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 19s | Maven dependency ordering for patch |
| +1 | mvninstall | 9m 15s | the patch passed |
| +1 | compile | 27m 3s | the patch passed |
| +1 | cc | 27m 3s | the patch passed |
| +1 | javac | 27m 3s | the patch passed |
| +1 | checkstyle | 0m 24s | the patch passed |
| +1 | mvnsite | 9m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 3s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 4m 16s | the patch passed |
| +1 | javadoc | 2m 18s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 23s | hadoop-common in the patch passed. |
| +1 | unit | 75m 24s | hadoop-hdfs in the patch passed. |
| -1 | unit | 18m 39s | hadoop-hdfs-native-client in the patch failed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 252m 40s | |

|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
| | test_libhdfs_threaded_hdfspp_test_shim_static |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445076#comment-16445076 ]

genericqa commented on HADOOP-12953:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 25s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 59s | Maven dependency ordering for branch |
| +1 | mvninstall | 31m 2s | trunk passed |
| +1 | compile | 32m 44s | trunk passed |
| +1 | checkstyle | 3m 12s | trunk passed |
| +1 | mvnsite | 9m 42s | trunk passed |
| +1 | shadedclient | 23m 51s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 27s | trunk passed |
| +1 | javadoc | 2m 12s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 9m 1s | the patch passed |
| +1 | compile | 27m 41s | the patch passed |
| +1 | cc | 27m 41s | the patch passed |
| +1 | javac | 27m 41s | the patch passed |
| +1 | checkstyle | 3m 15s | the patch passed |
| +1 | mvnsite | 10m 5s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 15s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 40s | the patch passed |
| +1 | javadoc | 2m 13s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 7m 42s | hadoop-common in the patch failed. |
| -1 | unit | 142m 10s | hadoop-hdfs in the patch failed. |
| +1 | unit | 20m 14s | hadoop-hdfs-native-client in the patch passed. |
| +1 | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 335m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16444672#comment-16444672 ]

Bharat Viswanadham commented on HADOOP-12953:

Thank you [~arpitagarwal] for the review.
{quote}We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, libhdfs_wrapper_defines.h etc.
{quote}
Added in hdfs.h. This patch only takes care of the change in the libhdfs C client; further changes to libhdfs++ can be handled in a new JIRA.
{quote}Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar to hdfsConnectAsUser.
{quote}
Since the old methods are deprecated, I have not added a similar method for proxyUser.
{quote}Nitpick: single statement if/else blocks should still have curly braces. e.g. here:
{quote}
{code}
if (bld->createProxyUser)
    methodToCall = "newInstanceAsProxyUser";
else
    methodToCall = "newInstance";
{code}
Addressed this.

> New API for libhdfs to get FileSystem object as a proxy user
>
> Key: HADOOP-12953
> URL: https://issues.apache.org/jira/browse/HADOOP-12953
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.7.2
> Reporter: Uday Kale
> Assignee: Uday Kale
> Priority: Major
> Attachments: HADOOP-12953.001.patch, HADOOP-12953.002.patch, HADOOP-12953.003.patch, HADOOP-12953.004.patch
>
> Secure impersonation in HDFS needs users to create proxy users and work with those. In libhdfs, the hdfsBuilder accepts a userName but calls FileSystem.get() or FileSystem.newInstance() with the user name to connect as. But both these interfaces use getBestUGI() to get the UGI for the given user. This does not work for services whose end-users do not access HDFS directly, but go via the service: the end-user first authenticates with LDAP, then the service owner impersonates the end-user to provide the underlying data.
> For such services that authenticate end-users via LDAP, the end users are not authenticated by Kerberos, so their authentication details won't be in the Kerberos ticket cache. HADOOP_PROXY_USER is not a thread-safe way to get this either.
> Hence the need for a new libhdfs API to get the FileSystem object as a proxy user, following the 'secure impersonation' recommendations. This approach is secure since HDFS authenticates the service owner and then validates the service owner's right to impersonate the given user, as allowed by the hadoop.proxyusers.* parameters of the HDFS config.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432734#comment-16432734 ]

Arpit Agarwal commented on HADOOP-12953:

Thanks for taking up this change [~bharatviswa]. We probably need to add hdfsBuilderSetCreateProxyUser to hdfs.h, hdfs_shim, libhdfs_wrapper_defines.h etc. Also it may be helpful to define a new method hdfsConnectAsProxyUser, similar to hdfsConnectAsUser.

Nitpick: single statement if/else blocks should still have curly braces, e.g. here:
{code}
if (bld->createProxyUser)
    methodToCall = "newInstanceAsProxyUser";
else
    methodToCall = "newInstance";
{code}
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429193#comment-16429193 ]

genericqa commented on HADOOP-12953:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 18m 34s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 2m 2s | Maven dependency ordering for branch |
| +1 | mvninstall | 29m 34s | trunk passed |
| +1 | compile | 34m 7s | trunk passed |
| +1 | checkstyle | 3m 36s | trunk passed |
| +1 | mvnsite | 11m 51s | trunk passed |
| +1 | shadedclient | 26m 46s | branch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 3m 50s | trunk passed |
| +1 | javadoc | 2m 34s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 10m 52s | the patch passed |
| +1 | compile | 32m 7s | the patch passed |
| +1 | cc | 32m 7s | the patch passed |
| +1 | javac | 32m 7s | the patch passed |
| +1 | checkstyle | 3m 29s | the patch passed |
| +1 | mvnsite | 11m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 2s | patch has no errors when building and testing our client artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
| +1 | findbugs | 4m 6s | the patch passed |
| +1 | javadoc | 2m 42s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 49s | hadoop-common in the patch passed. |
| -1 | unit | 104m 27s | hadoop-hdfs in the patch failed. |
| +1 | unit | 24m 14s | hadoop-hdfs-native-client in the patch passed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 334m 23s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428847#comment-16428847 ]

Bharat Viswanadham commented on HADOOP-12953:

Attached the rebased patch, and also added test cases for the newly added APIs in FileSystem.java. I am not very familiar with the native code, so I have not added the new APIs on the native side; I left that as the original author had it. If needed, we can work on it in a new JIRA.
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425734#comment-16425734 ]

Bharat Viswanadham commented on HADOOP-12953:

[~udayk] Are you still working on this?
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423157#comment-16423157 ]

genericqa commented on HADOOP-12953:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 5s | HADOOP-12953 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12953 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12796835/HADOOP-12953.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14424/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423154#comment-16423154 ] Bharat Viswanadham commented on HADOOP-12953: - [~udayk] Thanks for the patch. The patch needs to be rebased onto the latest trunk. Patch LGTM otherwise. A few more changes could be made, such as adding a new method hdfsConnectAsProxyUserNewInstance, similar to hdfsConnectAsUserNewInstance.
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15830410#comment-15830410 ] Andres Perez commented on HADOOP-12953: --- Retesting this patch
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674793#comment-15674793 ] Hadoop QA commented on HADOOP-12953: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 51s{color} | {color:orange} root: The patch generated 9 new + 127 unchanged - 0 fixed = 136 total (was 127) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 1s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 10s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 77m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-12953 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12796835/HADOOP-12953.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | |
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674612#comment-15674612 ] Andres Perez commented on HADOOP-12953: --- This patch provides a good solution, given that it doesn't modify the signatures of existing methods and only adds functionality. This is still relevant in 3.0.0-alpha.
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15224313#comment-15224313 ] Hadoop QA commented on HADOOP-12953: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 21s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s {color} | {color:red} root: patch generated 9 new + 131 unchanged - 0 fixed = 140 total (was 131) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patch modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 3s {color} | {color:red} hadoop-common-project_hadoop-common-jdk1.8.0_77 with JDK v1.8.0_77 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 36s
[jira] [Commented] (HADOOP-12953) New API for libhdfs to get FileSystem object as a proxy user
[ https://issues.apache.org/jira/browse/HADOOP-12953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15207216#comment-15207216 ] Hadoop QA commented on HADOOP-12953: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 13s {color} | {color:red} HADOOP-12953 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794826/HADOOP-12953.001.patch | | JIRA Issue | HADOOP-12953 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/8895/console | | Powered by | Apache Yetus 0.2.0 http://yetus.apache.org | This message was automatically generated.