[
https://issues.apache.org/jira/browse/MAPREDUCE-931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dick King updated MAPREDUCE-931:
--------------------------------
Attachment: patch-931.patch
There is no separate test case for this patch, which is a code cleanup.
ZombieJob was introduced in MAPREDUCE-751's patch, but with an ad hoc
interpolation engine. This patch refactors it to use the new interpolation
engine. The change can also improve simulation performance: previously,
ZombieJob created a whole new local interpolation table every time it needed
an interpolation, even when that table was identical to one already built for
another interpolation in the same job. We now create only one interpolator
per job.
TestZombieJob and TestPiecewiseLinearInterpolation are test cases for this
technology.
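A minimal sketch of the per-job caching idea described above (the class and
method names here are hypothetical illustrations, not the actual rumen API):

```java
// Illustrative only: shows building the interpolation table once per job
// and reusing it, instead of rebuilding it on every interpolation call.
public class InterpolatorCache {
    private Object interpolator;  // stands in for the interpolation engine
    private int buildCount = 0;

    // Stand-in for the (expensive) construction of an interpolation table.
    private Object makeInterpolator() {
        buildCount++;
        return new Object();
    }

    // Lazily build the interpolator the first time, then reuse it for
    // every task in the same job.
    public Object getInterpolator() {
        if (interpolator == null) {
            interpolator = makeInterpolator();
        }
        return interpolator;
    }

    public int getBuildCount() {
        return buildCount;
    }

    public static void main(String[] args) {
        InterpolatorCache job = new InterpolatorCache();
        for (int task = 0; task < 1000; task++) {
            job.getInterpolator();  // every task reuses the same instance
        }
        // The table was constructed once, not once per interpolation.
        if (job.getBuildCount() != 1) {
            throw new AssertionError("expected a single build");
        }
        System.out.println("builds: " + job.getBuildCount());
    }
}
```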
> rumen should use its own interpolation classes to create runtimes for
> simulated tasks
> -------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-931
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-931
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Reporter: Dick King
> Assignee: Dick King
> Priority: Minor
> Attachments: patch-931.patch
>
>
> Currently, when a simulator or benchmark runs simulated hadoop jobs using
> rumen data, and rumen's runtime system is used to generate execution times
> for the tasks in those jobs, rumen uses ad hoc code, even though it has a
> perfectly good interpolation framework for generating random variables that
> fit discrete CDFs.
> We should use the interpolation framework instead.
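As a sketch of the technique the description refers to -- drawing a random
runtime by piecewise-linear inverse-CDF interpolation -- the following is
illustrative only; the class and method names are hypothetical, not the
actual rumen classes:

```java
import java.util.Random;

// Hypothetical sketch: sample a task runtime from a discrete CDF using
// piecewise-linear interpolation (inverse transform sampling).
public class CdfSampler {
    private final double[] cdf;     // cumulative probabilities, ascending, ending at 1.0
    private final double[] values;  // runtimes (ms) at each CDF point, ascending
    private final Random rng;

    public CdfSampler(double[] cdf, double[] values, long seed) {
        this.cdf = cdf;
        this.values = values;
        this.rng = new Random(seed);
    }

    /** Draw one runtime by inverting the CDF with linear interpolation. */
    public double sample() {
        double u = rng.nextDouble();
        // Find the first CDF point at or above u.
        int i = 0;
        while (i < cdf.length && cdf[i] < u) {
            i++;
        }
        // Below the first point or past the last: clamp to the endpoints.
        if (i == 0) return values[0];
        if (i == cdf.length) return values[values.length - 1];
        // Linearly interpolate between the bracketing points.
        double frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1]);
        return values[i - 1] + frac * (values[i] - values[i - 1]);
    }

    public static void main(String[] args) {
        double[] cdf = {0.1, 0.5, 0.9, 1.0};
        double[] runtimes = {1000, 2000, 4000, 10000};
        CdfSampler sampler = new CdfSampler(cdf, runtimes, 42L);
        for (int n = 0; n < 10000; n++) {
            double r = sampler.sample();
            if (r < 1000 || r > 10000) {
                throw new AssertionError("sample out of range: " + r);
            }
        }
        System.out.println("all samples within [1000, 10000] ms");
    }
}
```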
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.