Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2762#issuecomment-60192374
Another high-level comment on performance, which we can leave to a
separate PR.
We can turn all these `wrap` and `unwrap` functions into function
factories, which generate properly composed wrapper and unwrapper functions
according to the type information provided by the input object inspectors. This
way, we avoid paying per-value dispatching and pattern-matching costs.
A similar trick is played in
[`TableReader.scala`](https://github.com/apache/spark/blob/c5882c663e054adcd3ecd9f11e91a1929dbc14a3/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala#L277-L296)
to optimize performance.
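To illustrate the idea, here is a minimal, self-contained sketch (not the actual Spark code; the `DataType` hierarchy and names below are simplified stand-ins for the real object-inspector types). The per-value version re-runs the type `match` for every value, while the factory version runs the `match` once per column and returns a specialized, pre-composed function:

```scala
// Simplified stand-in for the type information an object inspector provides.
sealed trait DataType
case object IntType extends DataType
case object StringType extends DataType
case class ArrayType(elem: DataType) extends DataType

object Wrappers {
  // Per-value dispatch: the pattern match is paid on every single call.
  def wrapSlow(dataType: DataType, value: Any): Any = dataType match {
    case IntType         => value.asInstanceOf[Int]
    case StringType      => value.toString
    case ArrayType(elem) => value.asInstanceOf[Seq[Any]].map(wrapSlow(elem, _))
  }

  // Function factory: the pattern match runs once, yielding a composed
  // wrapper function that is then applied to many values with no
  // further dispatching.
  def wrapperFor(dataType: DataType): Any => Any = dataType match {
    case IntType    => (v: Any) => v.asInstanceOf[Int]
    case StringType => (v: Any) => v.toString
    case ArrayType(elem) =>
      // The element wrapper is composed once here, then reused for
      // every element of every array value.
      val elemWrapper = wrapperFor(elem)
      (v: Any) => v.asInstanceOf[Seq[Any]].map(elemWrapper)
  }
}

// Usage: build the wrapper once per column, apply it once per row value.
val wrapIntArray = Wrappers.wrapperFor(ArrayType(IntType))
val wrapped = wrapIntArray(Seq(1, 2, 3))
```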