Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19222
I think that using one memory block type in each iteration is more representative, while still leaving the possibility of megamorphism. This is because in the typical usages in Spark, a data structure is actually dominated by one of the memory types.
For example, `UTF8String` uses only `ByteArrayMemoryBlock`, even though both
`OnHeapMemoryBlock` and `ByteArrayMemoryBlock` are loaded.
In the future, I think that we will use only one of the three `MemoryBlock` types for
`UnsafeRow`, depending on the setting in `SparkConf`. We will not use
`OffHeapMemoryBlock` for some `UnsafeRow`s and `OnHeapMemoryBlock` for the
rest of them.
I think that the current concern is whether there is performance degradation at
potentially megamorphic call sites when all three `MemoryBlock` subclasses are loaded.
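To illustrate the concern, here is a minimal, hypothetical Java sketch (these class names are simplified stand-ins, not Spark's actual classes): once all three subclasses of a base type flow through the same call site, the JIT can no longer inline the virtual call for a single receiver type, and the site becomes megamorphic.

```java
// Hypothetical stand-ins mirroring a base MemoryBlock with three subclasses.
abstract class Block {
    abstract long getLong(int offset);
}

class OnHeapBlock extends Block {
    private final long[] data;
    OnHeapBlock(long[] data) { this.data = data; }
    long getLong(int offset) { return data[offset]; }
}

class ByteBlock extends Block {
    private final byte[] data;
    ByteBlock(byte[] data) { this.data = data; }
    long getLong(int offset) { return data[offset]; }
}

class OffHeapBlock extends Block {
    // Simplified: a real off-heap block would read via Unsafe.
    long getLong(int offset) { return offset; }
}

public class MegamorphicDemo {
    // Call site of interest: if only one Block subclass ever reaches this
    // loop, the JIT can devirtualize and inline getLong (monomorphic).
    // Once three distinct receiver types are observed here, the site is
    // megamorphic and falls back to a virtual dispatch.
    static long sum(Block b, int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += b.getLong(i);
        }
        return s;
    }

    public static void main(String[] args) {
        Block on = new OnHeapBlock(new long[]{1, 2, 3});
        Block bytes = new ByteBlock(new byte[]{1, 2, 3});
        Block off = new OffHeapBlock();
        // Exercising all three types makes the sum() call site megamorphic;
        // a benchmark that uses one type per iteration keeps it monomorphic.
        System.out.println(sum(on, 3) + sum(bytes, 3) + sum(off, 3));
    }
}
```

This is why a benchmark that feeds a single memory block type per iteration better matches the dominant-type usage pattern described above, while a mixed-type benchmark measures the megamorphic worst case.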
---