[ https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13199067#comment-13199067 ]

Hadoop QA commented on HADOOP-8009:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12513012/HADOOP-8009-existing-releases.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified 
tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/558//console

This message is automatically generated.
                
> Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-8009
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8009
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 0.22.0, 0.23.0, 0.24.0, 0.23.1, 1.0.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>            Priority: Critical
>             Fix For: 0.23.1
>
>         Attachments: HADOOP-8009-existing-releases.patch, HADOOP-8009.patch
>
>
> Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
> system that interacts with Hadoop is quite challenging for the following 
> reasons:
> * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
> 0.23 there was a single artifact, hadoop-core; starting with Hadoop 0.23 there 
> are several (common, hdfs, mapred*, yarn*)
> * *There are no 'client' artifacts:* Current artifacts include all JARs 
> needed to run the services, thus bringing into clients several JARs that are 
> not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
> * *Testing on the client side is also quite challenging, as more artifacts 
> have to be included than the dependencies define:* for example, the 
> history-server artifact has to be explicitly included. If using Hadoop 1 
> artifacts, jersey-server has to be explicitly included.
> * *3rd party dependencies change in Hadoop from version to version:* This 
> makes things complicated for projects that have to deal with multiple 
> versions of Hadoop, as their exclusion lists become a huge mix & match of 
> artifacts from different Hadoop versions, and it may break things when a 
> particular version of Hadoop requires a dependency that another version of 
> Hadoop does not require.
> Because of this it would be quite convenient to have the following 
> 'aggregator' artifacts:
> * *org.apache.hadoop:hadoop-client* : it includes all required JARs to use 
> Hadoop client APIs (excluding all JARs that are not needed for it)
> * *org.apache.hadoop:hadoop-test* : it includes all required JARs to run 
> Hadoop Mini Clusters
> These aggregator artifacts would be created for current branches under 
> development (trunk, 0.22, 0.23, 1.0) and for released versions that are still 
> in use.
> For branches under development, these artifacts would be generated as part of 
> the build.
> For released versions we would have a special branch used only as a vehicle 
> for publishing the corresponding 'aggregator' artifacts.
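
A rough sketch of how a downstream project might consume the proposed 
aggregator artifacts in its Maven POM. This is purely illustrative: the 
version number is a placeholder, and the artifact names (hadoop-client, 
hadoop-minicluster per the issue summary) assume the naming proposed above.

```xml
<!-- Illustrative sketch only: depend on the proposed aggregator artifacts
     instead of enumerating individual Hadoop JARs and exclusions.
     Version number is a placeholder, not a confirmed release. -->
<dependencies>
  <!-- Client APIs for job submission/monitoring, without server-side JARs -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>0.23.1</version>
  </dependency>
  <!-- Mini clusters for client-side testing, test scope only -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minicluster</artifactId>
    <version>0.23.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

With aggregators like these, switching Hadoop versions would mean changing 
one version number rather than reworking a per-version exclusion list.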

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
