Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16478
@metasim Thanks for the comment. This patch takes the approach of using Scala
data types in the UDT and letting Spark SQL's encoder convert user data to the
internal format.
I'm not sure which part of the serialization/deserialization cost you'd
like to avoid. Following the link to `InternalRowTile`, it seems it wraps an
`InternalRow` and accesses some fields in the row. You still need to
deserialize to `InternalRow` before accessing those fields.
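For background, the general shape of a Spark UDT and where the conversion to the internal format happens can be sketched roughly as below. This is a minimal, hypothetical example (the `Tile` and `TileUDT` names are illustrative, not from this patch) using Spark's `UserDefinedType` API; in a UDT, `serialize`/`deserialize` are where the cost of crossing between user objects and `InternalRow` is paid.

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
import org.apache.spark.sql.catalyst.util.ArrayData
import org.apache.spark.sql.types._

// Hypothetical user-facing Scala type, for illustration only.
case class Tile(cols: Int, rows: Int, cells: Array[Double])

class TileUDT extends UserDefinedType[Tile] {
  // Storage schema: the internal format the user data is converted into.
  override def sqlType: StructType = StructType(Seq(
    StructField("cols", IntegerType, nullable = false),
    StructField("rows", IntegerType, nullable = false),
    StructField("cells", ArrayType(DoubleType, containsNull = false), nullable = false)))

  // Scala object -> internal row: the serialization cost is paid here.
  override def serialize(tile: Tile): InternalRow =
    new GenericInternalRow(Array[Any](
      tile.cols, tile.rows, ArrayData.toArrayData(tile.cells)))

  // Internal row -> Scala object: the deserialization cost is paid here.
  override def deserialize(datum: Any): Tile = datum match {
    case row: InternalRow =>
      Tile(row.getInt(0), row.getInt(1), row.getArray(2).toDoubleArray())
  }

  override def userClass: Class[Tile] = classOf[Tile]
}
```

Wrapping an `InternalRow` (as `InternalRowTile` appears to do) avoids the `deserialize` step above only for field access, but the data still has to exist as an `InternalRow` first.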