Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22241
thanks @maropu
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22241#discussion_r212897158
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/OpenHashMapSuite.scala ---
@@ -194,4 +194,42 @@ class OpenHashMapSuite extends
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/22241
[SPARK-25249][TEST] Add a unit test for OpenHashMap
## What changes were proposed in this pull request?
This PR adds a unit test for OpenHashMap; this can help developers to
distinguish
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22163#discussion_r212168161
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -206,14 +211,21 @@ private void writeSortedFile(boolean
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22163#discussion_r212166322
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -206,14 +211,21 @@ private void writeSortedFile(boolean
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22163#discussion_r212165385
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -206,14 +211,21 @@ private void writeSortedFile(boolean
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22163#discussion_r212162206
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -206,14 +211,21 @@ private void writeSortedFile(boolean
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22163
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22163#discussion_r212154100
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -206,14 +211,21 @@ private void writeSortedFile(boolean
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22163
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22163
The current buffer is `writeBuffer`. I mean copying `writeBuffer` to
`diskWriteBuffer` or another buffer
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22065
This is an end-to-end performance improvement, although our data is very small.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/22163
[SPARK-25166][CORE]Reduce the number of write operations for shuffle write.
## What changes were proposed in this pull request?
Currently, only one record is written to a buffer each time
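The batching idea behind this PR can be sketched with a hypothetical class (this is not the actual `ShuffleExternalSorter` change): accumulate records into a larger buffer and issue one underlying write per full buffer rather than one per record.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: many small records, few write() calls.
// Assumes every record fits within the buffer.
class BatchingRecordWriter {
    private final OutputStream out;
    private final byte[] buffer;
    private int pos = 0;

    BatchingRecordWriter(OutputStream out, int bufferSize) {
        this.out = out;
        this.buffer = new byte[bufferSize];
    }

    void writeRecord(byte[] record) throws IOException {
        if (pos + record.length > buffer.length) {
            flush(); // one underlying write per full buffer, not per record
        }
        System.arraycopy(record, 0, buffer, pos, record.length);
        pos += record.length;
    }

    void flush() throws IOException {
        if (pos > 0) {
            out.write(buffer, 0, pos);
            pos = 0;
        }
    }
}
```

The trade-off is the extra copy into the staging buffer versus far fewer calls into the underlying stream.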
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22068
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19832
cc @cloud-fan @jiangxb1987
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22068
I will update, thanks @kiszk @HyukjinKwon
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/22065
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/22068
[MINOR][DOC] Add missing compression codec.
## What changes were proposed in this pull request?
Parquet file provides six codecs: "snappy", "gzip", "lzo&
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/22065
[SPARK-23992][CORE] ShuffleDependency does not need to be deserialized
every time
In the same stage, `ShuffleDependency` does not need to be deserialized
each time.
I hav
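The "deserialize once per stage" idea can be sketched under assumed names (this is not Spark's actual implementation): cache the expensive deserialization result per key and reuse it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical per-stage cache: deserialization runs once per stageId.
class DependencyCache<V> {
    private final Map<Integer, V> cache = new ConcurrentHashMap<>();
    private int deserializations = 0; // for illustration only

    V getOrDeserialize(int stageId, Function<Integer, V> deserialize) {
        return cache.computeIfAbsent(stageId, id -> {
            deserializations++;
            return deserialize.apply(id);
        });
    }

    int deserializationCount() { return deserializations; }
}
```

Repeated lookups for the same stage id then pay the deserialization cost exactly once.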
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/21079
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21957
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21957
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21957
If so, this will have a big impact. I see a lot of failed unit tests.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/21957
[SPARK-24994][SQL] When the data type of the field is converted to other
types, it can also support pushdown to parquet
## What changes were proposed in this pull request?
For this statement
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21825
Thanks @srowen
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19832#discussion_r205997728
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -671,10 +671,23 @@ private[deploy] class Master(
// If the
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21825
I think it's necessary to let the end users know:
1. this feature is already stable
2. people can disable it if their network is stable; doing this is good for
perfor
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/21825
[SPARK-18188][DOC][FOLLOW-UP]Add `spark.broadcast.checksum` to configuration
## What changes were proposed in this pull request?
This pr add `spark.broadcast.checksum` to configuration
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21781
please take out #21079
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/21079
I'm sorry for replying to you so late!
I have tested it in my production environment; it has a bit of a performance
improvement. @jiangxb1987 @Hyukji
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21079#discussion_r181942230
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/ShuffleMapTask.scala ---
@@ -113,3 +118,24 @@ private[spark] class ShuffleMapTask
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/21079
[SPARK-23992][CORE] ShuffleDependency does not need to be deserialized
every time
## What changes were proposed in this pull request?
In the same stage, 'ShuffleDependency
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/20862
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20862#discussion_r178688855
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -301,7 +301,10 @@ private class ReadableChannelFileRegion(source
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20862#discussion_r177993186
--- Diff: core/src/test/scala/org/apache/spark/storage/DiskStoreSuite.scala
---
@@ -197,7 +197,7 @@ class DiskStoreSuite extends SparkFunSuite
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20862#discussion_r177992858
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -301,7 +301,10 @@ private class ReadableChannelFileRegion(source
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20862
cc @cloud-fan
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20862
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20862
[SPARK-23744][CORE]Fix memory leak in ReadableChannelFileRegion
## What changes were proposed in this pull request?
In the class `ReadableChannelFileRegion`, the `buffer` is direct memory
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/20802
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20690
@jiangxb1987 @jerryshao Could you help review it? Thanks
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20802#discussion_r174332968
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -975,6 +975,8 @@ private[spark] object Utils extends Logging {
def
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20802
[SPARK-23651][core]Add a check for host name
I encountered an error like this:
`org.apache.spark.SparkException: Invalid Spark URL:
spark://HeartbeatReceiver@ci_164:42849 at
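The error above is consistent with how `java.net.URI` handles host names: an underscore is not a legal hostname character, so `getHost()` returns null for a host like `ci_164` (shown here as background, not as a diagnosis of Spark's exact code path).

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HostCheck {
    public static void main(String[] args) throws URISyntaxException {
        // An underscore host parses, but not as a host: getHost() is null.
        URI bad = new URI("spark://HeartbeatReceiver@ci_164:42849");
        URI good = new URI("spark://HeartbeatReceiver@ci-164:42849");
        System.out.println(bad.getHost());  // null
        System.out.println(good.getHost()); // ci-164
    }
}
```

A host-name validity check at startup turns this late "Invalid Spark URL" failure into an early, explicit error.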
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/20801
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20801
[SPARK-23651]Add a check for host name
## What changes were proposed in this pull request?
I encountered an error like this:
`org.apache.spark.SparkException: Invalid Spark URL:
spark
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20676
Thanks all, I will close this PR.
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/20676
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20576#discussion_r171452995
--- Diff:
core/src/test/scala/org/apache/spark/shuffle/sort/SortShuffleManagerSuite.scala
---
@@ -85,6 +85,14 @@ class SortShuffleManagerSuite extends
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20690
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20576#discussion_r171437565
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/sort/SortShuffleManager.scala ---
@@ -188,9 +188,8 @@ private[spark] object SortShuffleManager
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20576
@cloud-fan @JoshRosen Could you help review it? Thanks
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20690
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20690
[SPARK-23532][Standalone]Improve data locality when launching new executors
for dynamic allocation
## What changes were proposed in this pull request?
Currently Spark on Yarn supports better
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20676#discussion_r171116974
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -246,18 +246,18 @@ private[spark] class MemoryStore
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20676
In `StaticMemoryManager`, the storage memory and unroll memory are managed
separately; but unroll memory is also storage memory, so we do not really need
to release unroll memory, just need to
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20676#discussion_r171115245
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -246,18 +246,18 @@ private[spark] class MemoryStore
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20676
retest this please.
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20676
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20676
[SPARK-23516][CORE] It is unnecessary to transfer unroll memory to storage
memory
## What changes were proposed in this pull request?
In fact, unroll memory is also storage memory, so I think
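A minimal accounting sketch of the argument, with hypothetical names (not Spark's `MemoryStore`): if unroll and storage bytes live in one pool, "transferring" from one to the other only relabels bytes and leaves total usage unchanged, so the release-then-acquire pair does no real work.

```java
// Hypothetical unified pool: unroll and storage are two labels on one total.
class MemoryPool {
    private long unrollUsed = 0;
    private long storageUsed = 0;

    void acquireUnroll(long n) { unrollUsed += n; }

    // The transfer the PR argues is unnecessary:
    // total usage is identical before and after.
    void transferUnrollToStorage(long n) {
        unrollUsed -= n;
        storageUsed += n;
    }

    long totalUsed() { return unrollUsed + storageUsed; }
}
```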
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20596
Yeah, the buffer passed in is always on-heap, thanks all.
I will close this PR
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/20596
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20596
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20596
[SPARK-23404][CORE]When the underlying buffers are direct, we should copy
them to the heap memory
## What changes were proposed in this pull request?
If the memory mode is `ON_HEAP`, when the
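A minimal sketch of the copy the title describes, assuming nothing about Spark internals (the thread above later concludes the buffers passed in are already on-heap, and the PR was closed):

```java
import java.nio.ByteBuffer;

public class CopyToHeap {
    // Copy a (possibly direct) buffer's readable bytes into a heap buffer.
    static ByteBuffer toHeap(ByteBuffer src) {
        if (!src.isDirect()) {
            return src; // already heap-backed, nothing to do
        }
        ByteBuffer heap = ByteBuffer.allocate(src.remaining());
        heap.put(src.duplicate()); // duplicate() leaves src's position intact
        heap.flip();               // make the copied bytes readable
        return heap;
    }
}
```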
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20581
@srowen I will check all of our code.
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/20581
@srowen I am sorry, I didn't notice this place; I found it yesterday.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20581
[SPARK-23358][CORE][FOLLOW-UP]When reduceId is greater than 2^28,
reduceId*8 will overflow
## What changes were proposed in this pull request?
In the `getBlockData`, `blockId.reduceId` is the
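The overflow is plain 32-bit arithmetic: `reduceId * 8` is computed as an `int` before any widening, so for `reduceId > 2^28` the product exceeds `Integer.MAX_VALUE`. Writing the constant as a `long` (`8L`) forces the multiplication into 64 bits:

```java
public class OffsetOverflow {
    public static void main(String[] args) {
        int reduceId = 1 << 29;        // greater than 2^28
        long wrong = reduceId * 8;     // int multiply overflows, then widens
        long right = reduceId * 8L;    // widens to long before multiplying
        System.out.println(wrong);     // 0 (2^32 wraps to zero in 32 bits)
        System.out.println(right);     // 4294967296
    }
}
```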
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20576
[CORE] When the shuffle dependency specifies aggregation and
`dependency.mapSideCombine=false`, it seems that serialized sorting can be used
## What changes were proposed in this pull request?
The
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/20544
[SPARK-23358][CORE]When the number of partitions is greater than 2^28, it
will result in an error result
## What changes were proposed in this pull request?
In the `checkIndexAndDataFile`,the
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19077
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166836868
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -46,9 +46,12 @@ private boolean shouldPool(long size
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166836613
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -46,9 +46,12 @@ private boolean shouldPool(long size
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166832418
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -46,9 +47,10 @@ private boolean shouldPool(long size
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166817332
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/array/ByteArrayMethods.java
---
@@ -40,6 +40,15 @@ public static int
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166815823
--- Diff:
common/unsafe/src/test/java/org/apache/spark/unsafe/PlatformUtilSuite.java ---
@@ -134,4 +135,24 @@ public void memoryDebugFillEnabledInTest
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r166815274
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/MemoryBlock.java ---
@@ -20,6 +20,7 @@
import javax.annotation.Nullable
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19832#discussion_r166266580
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -671,10 +671,23 @@ private[deploy] class Master(
// If the
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19077
I have updated this PR, just a little bit of improvement; please help
review it again, thanks @jiangxb1987 @cloud-fan
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19690
yea, so we make the default value for this configuration `true`.
I set spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19832
[SPARK-22628][CORE] In some situations, the assignment of executors on
workers is not what we expected when `spark.deploy.spreadOut=true`.
## What changes were proposed in this pull request
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r150442437
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/MemoryBlock.java ---
@@ -48,6 +49,15 @@ public long size
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19690
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19690
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19690
retest this please
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19690
[SPARK-22467] Add a switch to control whether `stdout_stream` and
`stderr_stream` are written to disk
## What changes were proposed in this pull request?
We should add a switch to control
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19572
retest this please
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19572
cc @sameeragarwal @ericl
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19572
[SPARK-22349] In on-heap mode, when allocating memory from the pool, we should
fill the memory with `MEMORY_DEBUG_FILL_CLEAN_VALUE`
## What changes were proposed in this pull request?
In on-heap mode
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r144216267
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/MemoryBlock.java ---
@@ -48,6 +49,15 @@ public long size
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r143439369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -116,9 +116,10 @@ private [sql] object
Github user 10110346 closed the pull request at:
https://github.com/apache/spark/pull/19303
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19303
Yes, it is no problem; this is just an optimization.
You are right, it is standalone mode
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19303
[SPARK-22085][CORE] When the application has no core left, do not request
more executors from the cluster manager in ExecutorAllocationManager
## What changes were proposed in this pull request
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19155
@dongjoon-hyun thanks, I have created a JIRA issue.
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19155
[MINOR][TEST] Tables created in unit tests should be dropped after use
## What changes were proposed in this pull request?
Tables should be dropped after use in unit tests.
## How was
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19077
retest this please
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r137442821
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/MemoryBlock.java ---
@@ -48,6 +48,13 @@ public long size
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r137442763
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -47,23 +48,29 @@ private boolean shouldPool(long
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19077
@jerryshao @JoshRosen yes, these would not generally be arbitrarily sized
allocations. Basically, we allocate memory in multiples of 4 or 8 bytes; even
so, I think this change is also beneficial
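The "multiples of 4 or 8 bytes" point suggests the familiar word-alignment rounding. A hedged sketch with a hypothetical helper (not the actual `HeapMemoryAllocator` code): rounding requested sizes up to the next multiple of 8 collapses nearby sizes onto a few distinct pool keys, which makes buffer reuse much more likely.

```java
public class WordAlign {
    // Round a requested size up to the next multiple of 8, so pooled
    // buffers can be keyed by a small number of distinct sizes.
    static long roundUpTo8(long size) {
        return (size + 7L) & ~7L;
    }

    public static void main(String[] args) {
        System.out.println(roundUpTo8(13)); // 16
        System.out.println(roundUpTo8(16)); // 16
    }
}
```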
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19077#discussion_r136489896
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
---
@@ -47,23 +47,29 @@ private boolean shouldPool(long