[
https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849922#action_12849922
]
Tsz Wo (Nicholas), SZE commented on HDFS-481:
---------------------------------------------
@Srikanth
- Is your patch still fixing the bugs stated in the description?
- Could you revert the white space changes like the following? Otherwise, it
is hard to review your patch.
{code}
-<property>
- <name>fs.default.name</name>
- <!-- cluster variant -->
- <value>hdfs://localhost:54321</value>
- <description>The name of the default file system. Either the
- literal string "local" or a host:port for NDFS.</description>
- <final>true</final>
- </property>
+ <property>
+ <name>fs.default.name</name>
+ <!-- cluster variant -->
+ <value>hdfs://localhost:54321</value>
+ <description>The name of the default file system. Either the
+ literal string "local" or a host:port for NDFS.</description>
+ <final>true</final>
+ </property>
{code}
> Bug Fixes
> ---------
>
> Key: HDFS-481
> URL: https://issues.apache.org/jira/browse/HDFS-481
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: contrib/hdfsproxy
> Affects Versions: 0.21.0
> Reporter: zhiyong zhang
> Assignee: Srikanth Sundarrajan
> Attachments: HDFS-481-bp-y20.patch, HDFS-481-bp-y20s.patch,
> HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch
>
>
> 1. hadoop-version is not recognized if ant is run from src/contrib/ or
> from src/contrib/hdfsproxy.
> When ant is run from $HADOOP_HDFS_HOME, hadoop-version is passed to
> contrib's build through subant, but when ant is run from src/contrib or
> src/contrib/hdfsproxy, hadoop-version is not recognized.
> 2. The ssl.client.do.not.authenticate.server setting can only be set
> through HDFS's configuration files; this setting needs to move to
> ssl-client.xml.
> 3. Solve some race conditions in LdapIpDirFilter.java (userId, groupName,
> and paths need to be moved into doFilter() instead of being class members).
> 4. Addressed the following StackOverflowError.
> ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]]
> Servlet.service() for servlet proxyForward threw exception
> java.lang.StackOverflowError
> at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
> This happens when the target war (/target.war) does not exist: the
> forwarding war forwards to its parent context path /, which is the
> forwarding war itself. This causes an infinite loop. Added "HDFS Proxy
> Forward".equals(dstContext.getServletContextName()) to the if logic to
> break the loop.
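The loop-break condition in item 4 can be sketched as follows. This is a minimal illustration, not the actual hdfsproxy code: the `Context` interface is a hypothetical stand-in for the small part of javax.servlet.ServletContext used here, and `shouldForward` is a hypothetical helper name.

```java
// Hypothetical stand-in for the piece of javax.servlet.ServletContext
// consulted by the guard; the real fix lives in hdfsproxy's forwarding
// servlet.
interface Context {
    String getServletContextName();
}

public class ForwardGuard {
    // Forward only when the destination context is not the forwarding war
    // itself. "HDFS Proxy Forward" is the forwarder's own context name, so
    // forwarding to it would re-enter this servlet and recurse until a
    // StackOverflowError.
    static boolean shouldForward(Context dstContext) {
        return dstContext != null
                && !"HDFS Proxy Forward".equals(dstContext.getServletContextName());
    }

    public static void main(String[] args) {
        Context self = () -> "HDFS Proxy Forward";
        Context target = () -> "some target war";
        System.out.println(shouldForward(self));   // false: break the loop
        System.out.println(shouldForward(target)); // true: safe to forward
    }
}
```

With this guard in the if logic, a request for a missing target war falls through to an error response instead of re-entering the forwarder.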