Github user chenghao-intel commented on the pull request:

    https://github.com/apache/spark/pull/8805#issuecomment-141304224
  
    And the default data page is 4MB (it only grows when a single record is
larger than that), so a large number of records will cause lots of spills,
which in turn hurts performance when the external sorter merges the spilled
runs with a PriorityQueue (sketched below). A large number of records is a
very common case in SQL, since people may set a relatively small number of
partitions; as you know, that is the motivation for using the sort-merge
(join/aggregation) operators instead of the hash-based ones.
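    To illustrate the merge cost, here is a minimal sketch, not Spark's
actual UnsafeExternalSorter code: each spill is modeled as a sorted
`Iterator[Long]` (a hypothetical simplification), and a PriorityQueue keyed
on the head record of each run performs the k-way merge.

    ```scala
    import scala.collection.mutable.PriorityQueue

    // Hypothetical, simplified types: each spill is a sorted Iterator[Long];
    // Spark's real merge runs over serialized records.
    def mergeSpills(spills: Seq[Iterator[Long]]): Iterator[Long] =
      new Iterator[Long] {
        // Scala's PriorityQueue is a max-heap, so reverse the ordering on
        // the head record of each run to get a min-heap.
        private val heap = PriorityQueue.empty[BufferedIterator[Long]](
          Ordering.by((run: BufferedIterator[Long]) => run.head).reverse)

        spills.map(_.buffered).filter(_.hasNext).foreach(heap.enqueue(_))

        override def hasNext: Boolean = heap.nonEmpty

        override def next(): Long = {
          val run = heap.dequeue()           // O(log k), k = number of spills
          val record = run.next()
          if (run.hasNext) heap.enqueue(run) // re-key the run by its new head
          record
        }
      }
    ```

    Every one of the N merged records pays a log k heap operation, so with
4MB pages k grows quickly and the merge gets noticeably slower; fewer,
larger pages keep k small.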
    
    Ideally, we would use a larger chunk of memory as the data page, to
reduce the number of spills. But this requires a better strategy for the
ShuffleMemory management.
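    For reference, a minimal sketch of raising the page size by hand,
assuming the `spark.buffer.pageSize` setting read by the Tungsten memory
code in this era (the exact name and default may differ between versions):

    ```scala
    import org.apache.spark.SparkConf

    // Assumption: spark.buffer.pageSize sets the data page size used by the
    // unsafe operators. Larger pages mean fewer pages and fewer spills, but
    // coarser-grained memory accounting per task.
    val conf = new SparkConf().set("spark.buffer.pageSize", "64m")
    ```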
    
    Sorry, again, this is not about this PR, but hopefully we can find a
better mechanism for memory allocation.

