[ https://issues.apache.org/jira/browse/HDFS-1619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12993737#comment-12993737 ]

Roman Shaposhnik commented on HDFS-1619:
----------------------------------------

Brian, to answer your immediate question -- I believe Solaris 8 would fail (not 
sure about Solaris 9). As for your note -- I'm aware of it. One needs a 
manually assembled Java toolchain on CentOS/RHEL, which is, in my opinion, 
tolerable compared to a manually assembled native build toolchain (e.g. 
providing upstream autoconf & automake). But this is, of course, a matter of 
taste -- YMMV.

> Does libhdfs really need to depend on AC_TYPE_INT16_T, AC_TYPE_INT32_T, 
> AC_TYPE_INT64_T and AC_TYPE_UINT16_T ?
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1619
>                 URL: https://issues.apache.org/jira/browse/HDFS-1619
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Roman Shaposhnik
>            Assignee: Konstantin Shvachko
>
> Currently configure.ac uses AC_TYPE_INT16_T, AC_TYPE_INT32_T, AC_TYPE_INT64_T 
> and AC_TYPE_UINT16_T and thus requires autoconf 2.61 or higher. 
> This prevents using it on platforms such as CentOS/RHEL 5.4 and 5.5. Given 
> that those are pretty popular, and given that it is really difficult to find 
> a platform these days that doesn't natively define the intXX_t types, I'm 
> curious whether we can simply remove those macros, or perhaps fail ONLY if 
> we happen to be on such a platform. 
> Here's a link to GNU autoconf docs for your reference:
>     http://www.gnu.org/software/hello/manual/autoconf/Particular-Types.html

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira