[ 
https://issues.apache.org/jira/browse/FLINK-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13244:
-----------------------------------
      Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
    Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue, or revive the 
public discussion.


> Delayed Scheduler in Flink
> --------------------------
>
>                 Key: FLINK-13244
>                 URL: https://issues.apache.org/jira/browse/FLINK-13244
>             Project: Flink
>          Issue Type: Improvement
>          Components: flink-contrib
>            Reporter: Mridul Verma
>            Priority: Not a Priority
>              Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, with the Flink scheduler and local splits:
>  * Suppose LocalSplitter is used to create splits per hostname.
>  * When a host requests a local split, there is a chance it will receive one, 
> but if no local split is present, the host may be handed a remote split 
> instead. Data locality is then lost: another node just about to request its 
> own local split may find it already assigned, so both hosts end up with 
> remote splits and the overall throughput of the system may decrease.
>  * The proposal is to use delay scheduling. It has been shown to be quite 
> effective in these cases and could increase overall throughput, provided the 
> latency difference between executing a local split and a remote split is 
> significant.
>  * [https://cs.stanford.edu/~matei/papers/2010/eurosys_delay_scheduling.pdf]
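For illustration, the delay-scheduling idea from the paper above can be sketched as a split assigner that tells a host with no local split left to retry a bounded number of times before it is handed a non-local split. This is a minimal, hypothetical sketch (the class and method names are invented for this example and are not Flink's actual InputSplitAssigner API):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Queue;

/** Hypothetical delay-scheduling split assigner; not Flink's real API. */
class DelaySplitScheduler {
    // Splits keyed by the host that holds them locally.
    private final Map<String, Queue<String>> localSplits = new HashMap<>();
    // How many consecutive "wait and retry" answers each host has received.
    private final Map<String, Integer> missCount = new HashMap<>();
    // Maximum retries before a host is given a non-local split.
    private final int maxMisses;

    DelaySplitScheduler(int maxMisses) {
        this.maxMisses = maxMisses;
    }

    void addSplit(String host, String split) {
        localSplits.computeIfAbsent(host, h -> new ArrayDeque<>()).add(split);
    }

    /** Returns a split for the host, or empty if it should retry later. */
    Optional<String> requestSplit(String host) {
        Queue<String> local = localSplits.get(host);
        if (local != null && !local.isEmpty()) {
            missCount.remove(host);
            return Optional.of(local.poll()); // local assignment, locality kept
        }
        int misses = missCount.merge(host, 1, Integer::sum);
        if (misses <= maxMisses) {
            // Delay: leave the remaining splits for the hosts that own them.
            return Optional.empty();
        }
        // Delay budget exhausted: fall back to any remaining (remote) split.
        missCount.remove(host);
        for (Queue<String> q : localSplits.values()) {
            if (!q.isEmpty()) {
                return Optional.of(q.poll());
            }
        }
        return Optional.empty();
    }
}
```

The key trade-off is the delay budget (`maxMisses` here, a wait time in the paper): large enough that owners usually claim their local splits first, small enough that idle hosts are not stalled for long.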



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
