[ https://issues.apache.org/jira/browse/HDFS-641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759036#action_12759036 ]

Owen O'Malley commented on HDFS-641:
------------------------------------

That had been the plan, but in practice it didn't work well. To build, you 
needed to compile common and update the common jars in hdfs and mapred. Then 
you compiled hdfs and pushed the jar to mapreduce. Then you compiled mapreduce 
and pushed the jar to hdfs. Then you compiled the hdfs tests and pushed them to 
mapreduce. Then you compiled the mapreduce tests and pushed them to hdfs. Then 
you ran the hdfs tests. Then you ran the mapreduce tests. 

By comparison, if we break the cycle, we can compile common, test common, 
compile hdfs, test hdfs, compile mapreduce, and test mapreduce. Yes, we need to 
do more work to test hdfs without mapreduce. But this is a good change.
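The contrast can be sketched as a toy dependency-graph check (a minimal illustration using Python's standard-library graphlib, not Hadoop's actual build tooling; the module names are just labels for the projects discussed above):

```python
# Toy sketch: model the inter-project dependencies described above and
# show that the old graph is cyclic (no one-pass build order exists),
# while the proposed graph yields a clean linear build order.
from graphlib import TopologicalSorter, CycleError

# Old setup: hdfs and mapreduce each need the other's jars.
cyclic = {
    "common": set(),
    "hdfs": {"common", "mapreduce"},
    "mapreduce": {"common", "hdfs"},
}

# Proposed setup: dependencies flow one way only.
acyclic = {
    "common": set(),
    "hdfs": {"common"},
    "mapreduce": {"common", "hdfs"},
}

def build_order(deps):
    """Return a valid build order, or None if the graph has a cycle."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError:
        return None

print(build_order(cyclic))   # None: the cycle blocks a single-pass build
print(build_order(acyclic))  # ['common', 'hdfs', 'mapreduce']
```

With the cycle broken, each project can be compiled and tested once, in order, which is exactly the compile-common, test-common, compile-hdfs, test-hdfs, compile-mapreduce, test-mapreduce sequence.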

> Move all of the benchmarks and tests that depend on mapreduce to mapreduce
> --------------------------------------------------------------------------
>
>                 Key: HDFS-641
>                 URL: https://issues.apache.org/jira/browse/HDFS-641
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.20.2
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>            Priority: Blocker
>             Fix For: 0.21.0
>
>
> Currently, we have a bad cycle where, to build hdfs, you need to test 
> mapreduce and iterate once. This is broken.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.