[ https://issues.apache.org/jira/browse/HDFS-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Srikanth Sundarrajan updated HDFS-1012:
---------------------------------------

    Attachment: HDFS-1012.patch

{quote}
* HDFS_PATH_PATTERN would not work if the port number is omitted.
* I suggest storing the namenode string in init(..). Then, conf could become a local variable in init(..).
* The current tests do not seem to cover all of the cases.
{quote}

Nicholas, thanks for your review comments. A revised patch has been uploaded with the following changes:

1. HDFS_PATH_PATTERN changed from ^hdfs://([\\w\\-]+(\\.)?)+:\\d+ to (^hdfs://([\\w\\-]+(\\.)?)+:\\d+|^hdfs://([\\w\\-]+(\\.)?)+) so that paths with the port number omitted are also accepted (an illustrative sketch of the pattern check is included at the end of this message).
2. conf.get("fs.default.name") is now read once in init(..) and the namenode URL is stored for later use; conf becomes a local variable there, since the value does not change while the webapp context is running.
3. Three additional test cases have been added:
* allowing a request with a valid unqualified documentLocation (path)
* allowing a request with a valid qualified documentLocation
* rejecting a request whose documentLocation is not a valid path for the cluster in question

------ Output from test-patch & test-contrib ------

     [exec] +1 overall.
     [exec]
     [exec]     +1 @author. The patch does not contain any @author tags.
     [exec]
     [exec]     +1 tests included. The patch appears to include 5 new or modified tests.
     [exec]
     [exec]     +1 javadoc. The javadoc tool did not generate any warning messages.
     [exec]
     [exec]     +1 javac. The applied patch does not increase the total number of javac compiler warnings.
     [exec]
     [exec]     +1 findbugs. The patch does not introduce any new Findbugs warnings.
     [exec]
     [exec]     +1 release audit. The applied patch does not increase the total number of release audit warnings.

   [cactus] Running org.apache.hadoop.hdfsproxy.TestAuthorizationFilter
   [cactus] Tomcat 5.x started on port [30300]
   [cactus] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.972 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestLdapIpDirFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.789 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyFilter
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.867 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyForwardServlet
   [cactus] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.508 sec
   [cactus] Running org.apache.hadoop.hdfsproxy.TestProxyUtil
   [cactus] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.84 sec
   [cactus] Tomcat 5.x is stopping...
   [cactus] Tomcat 5.x is stopped

test:

BUILD SUCCESSFUL
Total time: 4 minutes 8 seconds

> documentLocation attribute in LdapEntry for HDFSProxy isn't specific to a cluster
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-1012
>                 URL: https://issues.apache.org/jira/browse/HDFS-1012
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: contrib/hdfsproxy
>    Affects Versions: 0.20.1, 0.20.2, 0.21.0, 0.22.0
>            Reporter: Srikanth Sundarrajan
>            Assignee: Srikanth Sundarrajan
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1012-bp-y20.patch, HDFS-1012-bp-y20s.patch, HDFS-1012.patch, HDFS-1012.patch
>
>
> The list of document locations accessible through HDFSProxy isn't specific to a cluster. LDAP entries can include the name of the cluster to which a path belongs, giving better control over which clusters/paths a user can access through HDFSProxy.
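
For readers following along, the sketch below is a minimal, hypothetical Java illustration of changes 1 and 2 above. The class, method, and host names are made up for illustration and are not the code in the attached patch; only the pattern string and the fs.default.name lookup come from the comment itself, and the real filter's matching logic may differ in detail.

{code:java}
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;

/**
 * Illustrative sketch only (not the code in HDFS-1012.patch). It shows the
 * two ideas from the comment above: a path pattern that also matches hdfs://
 * URIs with the port omitted, and reading fs.default.name once at init time
 * instead of keeping the conf around as a field.
 */
public class ClusterPathCheckSketch {

  // Matches hdfs://host.name:port/... as well as hdfs://host.name/...
  private static final Pattern HDFS_PATH_PATTERN = Pattern.compile(
      "(^hdfs://([\\w\\-]+(\\.)?)+:\\d+|^hdfs://([\\w\\-]+(\\.)?)+)");

  // Cached once; fs.default.name does not change while the webapp context runs.
  private String namenode;

  /** Analogous to reading the namenode once in Filter.init(..). */
  public void init(Configuration conf) {
    this.namenode = conf.get("fs.default.name");
  }

  /**
   * A qualified documentLocation is accepted only if it points at this
   * cluster's namenode; unqualified paths (no hdfs:// prefix) pass through.
   */
  public boolean allows(String documentLocation) {
    if (!HDFS_PATH_PATTERN.matcher(documentLocation).find()) {
      return true; // e.g. /user/alice/data - unqualified path
    }
    return namenode != null && documentLocation.startsWith(namenode);
  }
}
{code}

With the revised pattern, a documentLocation such as hdfs://nn.example.com/user/alice (hypothetical host) matches even though the port is omitted, while hdfs://nn.example.com:8020/user/alice continues to match as before.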