Github user zecevicp commented on a diff in the pull request:
https://github.com/apache/spark/pull/21109#discussion_r193763364
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/InMemoryUnsafeRowQueue.scala
---
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import java.util.ConcurrentModificationException
+
+import scala.collection.mutable
+import scala.collection.mutable.ArrayBuffer
+
+import org.apache.spark.{SparkEnv, TaskContext}
+import org.apache.spark.memory.TaskMemoryManager
+import org.apache.spark.serializer.SerializerManager
+import org.apache.spark.sql.catalyst.expressions.UnsafeRow
+import org.apache.spark.sql.execution.ExternalAppendOnlyUnsafeRowArray.DefaultInitialSizeOfInMemoryBuffer
+import org.apache.spark.storage.BlockManager
+
+/**
+ * An append-only array for [[UnsafeRow]]s that strictly keeps content in an in-memory array
+ * until [[numRowsInMemoryBufferThreshold]] is reached, after which it switches to a mode that
+ * flushes to disk once [[numRowsSpillThreshold]] is met (or earlier, if memory consumption
+ * becomes excessive). Setting these thresholds involves the following trade-offs:
+ *
+ * - If [[numRowsInMemoryBufferThreshold]] is too high, the in-memory array may occupy more
+ *   memory than is available, resulting in OOM.
+ * - If [[numRowsSpillThreshold]] is too low, data will be spilled frequently, leading to
+ *   excessive disk writes. This may cause a performance regression compared to the normal
+ *   case of using an [[ArrayBuffer]] or [[Array]].
+ */
+private[sql] class InMemoryUnsafeRowQueue(
+ taskMemoryManager: TaskMemoryManager,
+ blockManager: BlockManager,
+ serializerManager: SerializerManager,
+ taskContext: TaskContext,
+ initialSize: Int,
+ pageSizeBytes: Long,
+ numRowsInMemoryBufferThreshold: Int,
+ numRowsSpillThreshold: Int)
+ extends ExternalAppendOnlyUnsafeRowArray(taskMemoryManager,
+ blockManager,
+ serializerManager,
+ taskContext,
+ initialSize,
+ pageSizeBytes,
+ numRowsInMemoryBufferThreshold,
+ numRowsSpillThreshold) {
+
+ def this(numRowsInMemoryBufferThreshold: Int, numRowsSpillThreshold: Int) {
+ this(
+ TaskContext.get().taskMemoryManager(),
+ SparkEnv.get.blockManager,
+ SparkEnv.get.serializerManager,
+ TaskContext.get(),
+ 1024,
+ SparkEnv.get.memoryManager.pageSizeBytes,
+ numRowsInMemoryBufferThreshold,
+ numRowsSpillThreshold)
+ }
+
+ private val initialSizeOfInMemoryBuffer =
+ Math.min(DefaultInitialSizeOfInMemoryBuffer, numRowsInMemoryBufferThreshold)
+
+ private val inMemoryQueue = if (initialSizeOfInMemoryBuffer > 0) {
+ new mutable.Queue[UnsafeRow]()
+ } else {
+ null
+ }
+
+// private var spillableArray: UnsafeExternalSorter = _
--- End diff --
No, it's not. Thank you.
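
For reference, here is a minimal usage sketch of the class under review (not part of the patch). It assumes the code runs inside a Spark task, e.g. within mapPartitions, since the convenience constructor reads TaskContext.get() and SparkEnv.get, and that add() and generateIterator() are inherited unchanged from ExternalAppendOnlyUnsafeRowArray; the threshold values and the bufferRows helper are illustrative only.

    import org.apache.spark.sql.catalyst.expressions.UnsafeRow
    import org.apache.spark.sql.execution.InMemoryUnsafeRowQueue

    // Hypothetical helper, not part of this diff: buffers rows purely in
    // memory up to numRowsInMemoryBufferThreshold, then falls back to the
    // spilling mode governed by numRowsSpillThreshold, as documented above.
    def bufferRows(rows: Iterator[UnsafeRow]): Iterator[UnsafeRow] = {
      val queue = new InMemoryUnsafeRowQueue(
        4096,        // numRowsInMemoryBufferThreshold (illustrative value)
        1024 * 1024) // numRowsSpillThreshold (illustrative value)
      rows.foreach(queue.add)  // add() is inherited from the superclass
      queue.generateIterator() // replays rows, from memory or from disk
    }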
---