[ 
https://issues.apache.org/jira/browse/CRUNCH-294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13824417#comment-13824417
 ] 

Gabriel Reid commented on CRUNCH-294:
-------------------------------------

I definitely like the idea, but I'm thinking the cost calculation might need to 
be a bit more complex than it currently is if we want to optimize for minimal 
IO. From what I can see, the cost is calculated independently for each NodePath 
within an Edge, but the costs of all splits within an Edge should really be 
considered as a whole. This becomes an issue when there are multiple NodePaths 
for a single Edge.
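
To make that concrete, here's a toy sketch of the difference (the class and the 
cost model are made up for illustration, not the actual planner types), using 
write sizes that are consistent with the totals in the figures below: 8 for S1, 
3 for S2, and 9 for S3. The key point is that a split node shared by several 
NodePaths only has to be written once, so its cost should be counted once 
across the whole Edge:

{code:java}
import java.util.*;

/**
 * Toy model, not the real planner classes: each NodePath in an Edge has
 * a set of candidate split nodes with write sizes, and several paths can
 * share a candidate (S1 feeds both paths here). A shared node only has
 * to be written once, so its cost should be counted once per Edge.
 */
public class SplitCostSketch {

  /** Per-path choice: pick the cheapest split for each path in isolation,
      which is roughly what happens when costs are computed per NodePath. */
  static long perPathCost(List<Map<String, Long>> paths) {
    long total = 0;
    for (Map<String, Long> candidates : paths) {
      total += Collections.min(candidates.values());
    }
    return total;
  }

  /** Whole-edge choice: brute-force all joint assignments, counting each
      chosen node's write size only once even if several paths use it. */
  static long wholeEdgeCost(List<Map<String, Long>> paths) {
    return best(paths, 0, new HashMap<>());
  }

  private static long best(List<Map<String, Long>> paths, int i,
                           Map<String, Long> chosen) {
    if (i == paths.size()) {
      return chosen.values().stream().mapToLong(Long::longValue).sum();
    }
    long min = Long.MAX_VALUE;
    for (Map.Entry<String, Long> c : paths.get(i).entrySet()) {
      boolean added = !chosen.containsKey(c.getKey());
      if (added) {
        chosen.put(c.getKey(), c.getValue());
      }
      min = Math.min(min, best(paths, i + 1, chosen));
      if (added) {
        chosen.remove(c.getKey());
      }
    }
    return min;
  }

  public static void main(String[] args) {
    // Two NodePaths through one Edge; both can split at the shared node
    // S1 (size 8), or at their own node S2 (size 3) / S3 (size 9).
    List<Map<String, Long>> edge = List.of(
        Map.of("S1", 8L, "S2", 3L),
        Map.of("S1", 8L, "S3", 9L));

    System.out.println("per-path:   " + perPathCost(edge));   // 3 + 8 = 11
    System.out.println("whole-edge: " + wholeEdgeCost(edge)); // shared S1 = 8
  }
}
{code}

Run as-is, the per-path choice reproduces the lopsided plan below (S1 + S2 = 
11), while the joint choice over the whole Edge finds the minimum of just S1 = 
8.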

To illustrate, I made a mini test case that replicates the original issue from 
the mailing list.

This is the job plan produced by the current planner, without the patch. By 
writing S1 to disk, the total write size is 8.
!jobplan-default-old.png|width=800!

This is the job plan produced after the patch, with the same (default) scale 
factors. By writing both S2 and S3 to disk, the total write size is 12, so this 
actually has a larger write footprint than the version before the patch.
!jobplan-default-new.png|width=800!

With the patch, if S2 and S3 both have large scale factors then S1 is written 
to disk, which is indeed what we want if we're optimizing for minimal disk 
writes:
!jobplan-large_s2_s3.png|width=800!

But if S3 has a large scale factor and S2 has a small one, we serialize S1 and 
S2, for a total write size of 11.
!jobplan-lopsided.png|width=800!
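
For concreteness: all three totals are consistent with the write sizes used in 
the sketch above (S1 = 8, S2 = 3, S3 = 9, inferred from the totals rather than 
read off the figures) -- just S1 gives 8, S2 + S3 gives 3 + 9 = 12, and S1 + S2 
gives 8 + 3 = 11. So the pre-patch plan is actually the minimal-IO one here.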

I suspect there must be some clever method of doing this optimization without 
considering all possible combinations of splits over all NodePaths in an Edge, 
but it's not clear to me right now what that method would be.
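
One way to frame it, just as a hedged observation: each candidate split node 
covers the set of NodePaths that pass through it, every NodePath in the Edge 
needs at least one of its candidates chosen, and we want to minimize the total 
write size of the chosen nodes. That looks a lot like a weighted set cover over 
the paths, which would make an exact answer expensive in general -- but the 
number of NodePaths per Edge is presumably small enough in practice that even 
enumerating the combinations (k^n for n paths with k candidates each) stays 
cheap.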

Also, sorry if the job plan images are all whacked out in terms of size -- it 
seems the thumbnail functionality isn't working, so I put custom sizes on all 
of the images.

> Cost-based job planning
> -----------------------
>
>                 Key: CRUNCH-294
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-294
>             Project: Crunch
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Josh Wills
>            Assignee: Josh Wills
>         Attachments: CRUNCH-294.patch, jobplan-default-new.png, 
> jobplan-default-old.png, jobplan-large_s2_s3.png, jobplan-lopsided.png
>
>
> A bug report on the user list drove me to revisit some of the core planning 
> logic, particularly around how we decide where to split up DoFns between two 
> dependent MapReduce jobs.
> I found an old TODO about using the scale factor from a DoFn to decide where 
> to split up the nodes between dependent GBKs, so I implemented a new version 
> of the split algorithm. It takes advantage of the support we've propagated 
> for multiple outputs on both the map and reduce sides of a job to do 
> finer-grained splits, using information from the scaleFactor calculations to 
> make smarter split decisions.
> One high-level change that goes along with this: I changed the default 
> scaleFactor() value in DoFn to 0.99f, so that we slightly prefer writes that 
> occur later in a pipeline flow by default.
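
For anyone reading along who hasn't looked at scaleFactor() before: it's just 
an overridable per-DoFn estimate of output size relative to input size, which 
is what the new split logic feeds on. A minimal sketch of how a user-level fn 
would advertise an expensive output (the ExplodeFn class and the 10x figure are 
made up for illustration):

{code:java}
import org.apache.crunch.DoFn;
import org.apache.crunch.Emitter;

// Hypothetical fn that fans one record out into many, so it declares a
// large scaleFactor() to tell the planner that its output is roughly
// 10x the size of its input -- i.e. expensive to write to disk.
public class ExplodeFn extends DoFn<String, String> {

  @Override
  public void process(String input, Emitter<String> emitter) {
    for (String token : input.split("\\s+")) {
      emitter.emit(token);
    }
  }

  @Override
  public float scaleFactor() {
    return 10.0f; // estimated output size / input size
  }
}
{code}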



--
This message was sent by Atlassian JIRA
(v6.1#6144)
