Github user feynmanliang commented on the pull request:
https://github.com/apache/spark/pull/7412#issuecomment-123885048
Made some suggestions; see how perf changes after applying them. Unfortunately,
scanning the dataset to ensure suffixes are bounded will introduce a
performance hit. I still think it's worth it, though, since it's certainly
better than just failing.
It may be worthwhile to test that these changes prevent executor failure
due to overload. One way to do that would be to use a large enough dataset and
set `spark.akka.maxFrameSize` small enough such that the first method fails but
the second passes.
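A minimal sketch of such a regression test setup, assuming a local-mode SparkContext. The config key is the one named above; the 1 MB threshold, master string, and app name are illustrative assumptions, not values from this PR:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical harness: shrink the frame size so oversized task results
// fail fast instead of silently overloading executors.
val conf = new SparkConf()
  .setMaster("local[4]")
  .setAppName("maxFrameSize-regression-test")
  .set("spark.akka.maxFrameSize", "1") // MB; small enough to trip on large results

val sc = new SparkContext(conf)
// Run both code paths here against a sufficiently large dataset:
// the unbounded-suffix method should fail with a frame-size error,
// while the bounded-suffix method should complete.
sc.stop()
```

This is a configuration sketch rather than a complete test; the actual dataset size needed to exceed a 1 MB frame would have to be determined empirically.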