[
https://issues.apache.org/jira/browse/HDFS-641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759433#action_12759433
]
Vinod K V commented on HDFS-641:
--------------------------------
bq. The problem is that we will have breakage that won't be detected until HDFS
is updated.
A similar problem happens with your proposal too, no? Let's say someone changes
some functionality in common/hdfs and does not update the corresponding
test case in mapreduce (e.g.,
org.apache.hadoop.security.authorize.TestServiceLevelAuthorization.java). This
will NOT be detected until mapreduce is built. If we talk of Hudson, this
won't happen until a mapreduce build is triggered by Hudson, which will only be
when some other mapreduce patch is committed.
What do others in the community think of this problem in general? Is it OK to
move these hdfs/common-related tests/benchmarks into mapreduce? If not, what
are the alternative suggestions?
> Move all of the benchmarks and tests that depend on mapreduce to mapreduce
> --------------------------------------------------------------------------
>
> Key: HDFS-641
> URL: https://issues.apache.org/jira/browse/HDFS-641
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 0.20.2
> Reporter: Owen O'Malley
> Assignee: Owen O'Malley
> Priority: Blocker
> Fix For: 0.21.0
>
>
> Currently, we have a bad cycle where, to build hdfs, you need to test mapreduce
> and iterate once. This is broken.