Github user squito commented on the pull request:
https://github.com/apache/spark/pull/12951#issuecomment-220615364
Ah, I completely overlooked that `Pool.maxShare` references `runningTasks`; it
makes more sense now.
The added tests verify that `maxShare` is propagated correctly and is limited
by the parent's max share. But there are no tests showing that scheduling is
actually limited by it at all. We'd want tests covering that as well.
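To make the two behaviors under discussion concrete, here is a minimal, purely illustrative sketch (plain Python, not Spark's actual `Pool` code; all names here are hypothetical): a child pool's `maxShare` is clamped by its parent's, and the scheduler refuses to launch a task once `runningTasks` reaches that cap — the second behavior is the one the comment says is not yet covered by tests.

```python
# Hypothetical sketch of the semantics under review -- NOT Spark's real
# scheduler API. Pool, max_share, and can_launch_task are invented names.

class Pool:
    def __init__(self, name, max_share=None, parent=None):
        self.name = name
        self.parent = parent
        self.running_tasks = 0
        # Clamp the requested maxShare by the parent's (the propagation
        # behavior the added tests reportedly verify).
        if parent is not None and parent.max_share is not None:
            if max_share is None:
                max_share = parent.max_share
            else:
                max_share = min(max_share, parent.max_share)
        self.max_share = max_share

    def can_launch_task(self):
        # Scheduling is limited by maxShare: once runningTasks hits the
        # cap, no further task may be launched in this pool.
        return self.max_share is None or self.running_tasks < self.max_share

    def launch_task(self):
        assert self.can_launch_task(), "pool is at its maxShare cap"
        self.running_tasks += 1


root = Pool("root", max_share=2)
child = Pool("child", max_share=5, parent=root)  # clamped down to 2
```

A test for the missing case would then drive the pool to its cap and assert that `can_launch_task()` flips to false, rather than only checking the propagated value.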
The code needs more comments explaining what is going on. I'd even add
comments on the tests explaining what is being tested (e.g., the line where
the maxShare is limited by the parent pool).
Calling it "maxShare" is pretty confusing -- with this implementation it
should probably be called "maxRunningTasks" or something. It also seems pretty
hard to configure, though; I wonder whether users really want maxShare. We
should be sure that whatever we add is what we want long-term, so we're not
stuck with complexity from a legacy setting.
Honestly, I'm still uncertain about adding this feature and need to think
about it more -- I'm just giving my comments on the code here. A very clean,
well-tested PR can help make your case, but OTOH it can also turn into wasted
effort ...