[ https://issues.apache.org/jira/browse/HDFS-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032644#comment-13032644 ]

Luke Lu commented on HDFS-1917:
-------------------------------

Though I understand the goal is to separate the hdfs-only dependencies for 
easier dedup, it seems to me that if you keep the common profile as is and add 
an hdfs profile for commons-daemon, the patch would be smaller and less 
confusing. (In the current patch, the common profile contains hdfs-only 
dependencies, and the compile profile is actually from common.)
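For illustration, the separation described above could be sketched in ivy.xml roughly as follows. This is a hypothetical fragment, not taken from the patch: the conf names, extends relationship, and the commons-daemon revision are all assumptions.

```xml
<!-- Hypothetical ivy.xml sketch: leave the existing common conf
     untouched and add an hdfs-only conf for commons-daemon. -->
<configurations>
  <conf name="common" description="dependencies shared with hadoop-common"/>
  <conf name="hdfs"   description="hdfs-only daemon dependencies"
        extends="common"/>
</configurations>
<dependencies>
  <!-- revision is illustrative -->
  <dependency org="commons-daemon" name="commons-daemon" rev="1.0.1"
              conf="hdfs->default"/>
</dependencies>
```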

> Clean up duplication of dependent jar files
> -------------------------------------------
>
>                 Key: HDFS-1917
>                 URL: https://issues.apache.org/jira/browse/HDFS-1917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 0.23.0
>         Environment: Java 6, RHEL 5.5
>            Reporter: Eric Yang
>            Assignee: Eric Yang
>         Attachments: HDFS-1917.patch
>
>
> For trunk, the build and deployment tree look like this:
> hadoop-common-0.2x.y
> hadoop-hdfs-0.2x.y
> hadoop-mapred-0.2x.y
> Technically, hdfs's third-party dependent jar files should be fetched from 
> hadoop-common.  However, they are currently fetched from hadoop-hdfs/lib only.  
> It would be nice to eliminate the need to duplicate jar files at build time.
> There are two options to manage this dependency list: continue to enhance the 
> ant build structure to fetch and filter jar file dependencies using ivy, or 
> take this opportunity to convert the build structure to maven, and use maven 
> to manage the provided jar files.
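As a sketch of the maven option mentioned in the description: jars that already ship in hadoop-common could be declared with provided scope in the hadoop-hdfs pom, so they are on the compile classpath but are not packaged into hdfs/lib again. The artifact and version below are illustrative assumptions, not from the patch.

```xml
<!-- Hypothetical pom.xml fragment in hadoop-hdfs -->
<dependency>
  <groupId>commons-logging</groupId>
  <artifactId>commons-logging</artifactId>
  <version>1.1.1</version>  <!-- version is illustrative -->
  <!-- provided: available at compile time, supplied at runtime
       by hadoop-common, not bundled by hdfs packaging -->
  <scope>provided</scope>
</dependency>
```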

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
