[
https://issues.apache.org/jira/browse/SPARK-23964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444043#comment-16444043
]
Thomas Graves commented on SPARK-23964:
---------------------------------------
So far in my testing I haven't seen any performance regressions. The
accounting to acquire more memory takes almost no time. Obviously, if you have
a small heap and it can't acquire more memory, it will spill, but that is what
you want so you don't OOM.
> why does Spillable wait for 32 elements?
> ----------------------------------------
>
> Key: SPARK-23964
> URL: https://issues.apache.org/jira/browse/SPARK-23964
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.1
> Reporter: Thomas Graves
> Priority: Major
>
> The Spillable class has a check in maybeSpill that gates when it tries to
> acquire more memory and decides whether it should spill:
> if (elementsRead % 32 == 0 && currentMemory >= myMemoryThreshold) {
> before it looks to see if it should spill.
> [https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/collection/Spillable.scala#L83]
> I'm wondering why it has the elementsRead % 32 check in it. If I have a
> small number of elements that are huge, this can easily cause an OOM before
> we actually spill.
> I saw a few conversations on this and one related JIRA:
> https://issues.apache.org/jira/browse/SPARK-4456, but I've never seen an
> answer to this.
> Does anyone have history on this?
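The accounting the issue describes can be sketched as follows. This is a
minimal, self-contained simplification, not Spark's actual implementation:
the field names (elementsRead, currentMemory, myMemoryThreshold, maybeSpill)
follow the linked Spillable.scala, but acquireMemory and its fixed pool are
hypothetical stand-ins for Spark's TaskMemoryManager.

```scala
// Sketch of the Spillable.maybeSpill accounting under discussion.
// The memory pool below is an assumed simplification for illustration.
object SpillSketch {
  var elementsRead: Long = 0
  var myMemoryThreshold: Long = 5L * 1024 * 1024 // initial threshold, 5 MB
  var poolRemaining: Long = 16L * 1024 * 1024    // hypothetical free memory

  // Stand-in for asking the memory manager for more execution memory;
  // grants at most what remains in the pool.
  def acquireMemory(amount: Long): Long = {
    val granted = math.min(amount, poolRemaining)
    poolRemaining -= granted
    granted
  }

  def maybeSpill(currentMemory: Long): Boolean = {
    var shouldSpill = false
    // The check in question: only every 32nd element triggers an attempt
    // to grow the threshold, so up to 31 large records can accumulate
    // on the heap between checks without any spill decision being made.
    if (elementsRead % 32 == 0 && currentMemory >= myMemoryThreshold) {
      val amountToRequest = 2 * currentMemory - myMemoryThreshold
      val granted = acquireMemory(amountToRequest)
      myMemoryThreshold += granted
      // Spill only if the threshold still could not be raised enough.
      shouldSpill = currentMemory >= myMemoryThreshold
    }
    shouldSpill
  }
}
```

This illustrates the concern in the issue: if each element is huge, the JVM
can run out of heap on the elements read between two multiple-of-32 checks,
before maybeSpill ever gets a chance to request memory or spill.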
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]