Github user scwf commented on the issue:

    https://github.com/apache/spark/pull/16633
  
    @viirya @rxin I support @wzhfy's idea from the mailing list (http://apache-spark-developers-list.1001551.n3.nabble.com/Limit-Query-Performance-Suggestion-td20570.html): it solves the single-partition issue in the global limit without breaking the job chain.
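
    For reference, a minimal illustration of the single-partition bottleneck (assuming a local Spark 2.x session; the exact `explain` output varies by version):

    ```scala
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("global-limit-demo")
      .getOrCreate()

    // A limit over many partitions: the local limit runs on every partition,
    // and the global limit then pulls everything into a single partition.
    val df = spark.range(0, 1000000L, 1, numPartitions = 200).limit(10)
    df.explain()
    // Depending on the version, the physical plan shows CollectLimit or
    // GlobalLimit fed by an Exchange to a single partition.
    ```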
    
    For the local limit, all partitions are still computed. I think we can consider resolving the local limit issue with changes in the core scheduler in the future, by providing a mechanism to stop computing the remaining tasks in a stage once some condition is satisfied for that stage, as in the sketch below.
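
    To make the scheduler idea concrete, here is a toy sketch of such a mechanism (purely hypothetical: Spark's DAGScheduler has no such hook today, and `Task`, `runStage`, and `shouldStop` are made-up names for illustration). It launches a stage's tasks in waves and skips the remaining waves once a stage-level condition, e.g. "enough rows collected for the limit", is satisfied:

    ```scala
    // Hypothetical sketch, not Spark's actual scheduler API.
    final case class Task(id: Int)

    // shouldStop inspects the per-task row counts gathered so far.
    def runStage(tasks: Seq[Task], waveSize: Int, shouldStop: Seq[Int] => Boolean)
                (runTask: Task => Int): Seq[Int] = {
      var results = Vector.empty[Int]
      val waves = tasks.grouped(waveSize)
      // Launch tasks wave by wave; stop scheduling further waves as soon
      // as the stage-level condition is satisfied.
      while (waves.hasNext && !shouldStop(results)) {
        results ++= waves.next().map(runTask)
      }
      results
    }

    // Example: stop computing local-limit tasks once 100 rows are gathered,
    // so only 20 of the 200 tasks actually run (each task returns 5 rows here).
    val rowCounts = runStage((1 to 200).map(Task), waveSize = 10,
      shouldStop = counts => counts.sum >= 100)(runTask = _ => 5)
    ```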
    
    What do you think?

