[
https://issues.apache.org/jira/browse/MAPREDUCE-1819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919092#action_12919092
]
Ramkumar Vadali commented on MAPREDUCE-1819:
--------------------------------------------
I re-ran ant test-patch, and it succeeded. I also ran ant test-patch under
src/contrib/raid, and that succeeded too. Since the patch only touches code
under src/contrib/raid, I did not run ant test from the top level. I think that
should be OK.
[exec]
[exec] +1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 16 new or modified tests.
[exec]
[exec] +1 javadoc. The javadoc tool did not generate any warning messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.
[exec]
[exec] +1 system tests framework. The patch passed system tests framework compile.
[exec]
[exec] ======================================================================
[exec] ======================================================================
[exec] Finished build.
[exec] ======================================================================
[exec] ======================================================================
[exec]
BUILD SUCCESSFUL
Total time: 18 minutes 17 seconds
test-junit:
[junit] WARNING: multiple versions of ant detected in path for junit
[junit] jar:file:/home/rvadali/local/external/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit] and jar:file:/home/rvadali/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Running org.apache.hadoop.hdfs.TestRaidDfs
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 37.74 sec
[junit] Running org.apache.hadoop.raid.TestDirectoryTraversal
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 8.185 sec
[junit] Running org.apache.hadoop.raid.TestRaidHar
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 68.763 sec
[junit] Running org.apache.hadoop.raid.TestRaidNode
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 464.922 sec
[junit] Running org.apache.hadoop.raid.TestRaidPurge
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 44.036 sec
test:
BUILD SUCCESSFUL
Total time: 10 minutes 38 seconds
> RaidNode should be smarter in submitting Raid jobs
> --------------------------------------------------
>
> Key: MAPREDUCE-1819
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1819
> Project: Hadoop Map/Reduce
> Issue Type: Task
> Components: contrib/raid
> Affects Versions: 0.20.1
> Reporter: Ramkumar Vadali
> Assignee: Ramkumar Vadali
> Attachments: MAPREDUCE-1819.4.patch, MAPREDUCE-1819.patch, MAPREDUCE-1819.patch.2, MAPREDUCE-1819.patch.3
>
>
> The RaidNode currently computes parity files as follows:
> 1. Using RaidNode.selectFiles() to figure out what files to raid for a policy
> 2. Using #1 repeatedly for each configured policy to accumulate a list of files
> 3. Submitting a mapreduce job with the list of files from #2 using DistRaid.doDistRaid()
> This task addresses the fact that #2 and #3 happen sequentially. The proposal is to submit a separate mapreduce job for the list of files for each policy, and to use another thread to track the progress of the submitted jobs. This will help reduce the time taken for files to be raided.
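
For readers skimming the proposal, a rough sketch of the per-policy submission with a separate monitoring thread follows. It is illustrative only: the interfaces and names used here (RaidJob, JobSubmitter, submitJobForPolicy, isComplete) are placeholders and do not reflect the actual RaidNode/DistRaid API in the patch.

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class PerPolicyRaidSketch {
  /** Placeholder for one submitted raid job; isComplete() would poll the JobTracker. */
  interface RaidJob {
    boolean isComplete();
  }

  /** Placeholder: select files for one policy and submit one mapreduce job for them. */
  interface JobSubmitter {
    RaidJob submitJobForPolicy(String policy);
  }

  private final List<RaidJob> runningJobs = new ArrayList<RaidJob>();

  /** Submit one job per policy instead of accumulating files across all policies. */
  void submitAll(List<String> policies, JobSubmitter submitter) {
    for (String policy : policies) {
      synchronized (runningJobs) {
        runningJobs.add(submitter.submitJobForPolicy(policy));
      }
    }
  }

  /** Separate monitor thread: periodically drop jobs that have finished. */
  Thread startMonitor() {
    Thread monitor = new Thread(new Runnable() {
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          synchronized (runningJobs) {
            for (Iterator<RaidJob> it = runningJobs.iterator(); it.hasNext();) {
              if (it.next().isComplete()) {
                it.remove();
              }
            }
          }
          try {
            Thread.sleep(10000); // poll interval; arbitrary for this sketch
          } catch (InterruptedException e) {
            return;
          }
        }
      }
    });
    monitor.setDaemon(true);
    monitor.start();
    return monitor;
  }
}
{code}

The point of the sketch is the change in flow: each policy's file list becomes its own job as soon as it is computed, and a background thread tracks the submitted jobs, so raiding for one policy does not wait on file selection for all the others.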
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.