Github user davies commented on the pull request:

    https://github.com/apache/spark/pull/5279#issuecomment-90269142
  
    @vlyubin This optimization looks good to me; we did something similar in the Python API.
    
    Another kind of optimization we could add: only call the converter for data types that actually need it, and bypass the others. For example, if your schema contains no UDTs, you do not need a converter at all. The same applies to some nested types (StructType, MapType, ArrayType) whose children need no conversion.
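    To make the idea concrete, here is a minimal sketch in plain Python (not Spark's actual converter API; the schema encoding and function names are made up for illustration): walk the schema once, and only build a converter when something in it needs one — otherwise return the identity function.

    ```python
    # Hypothetical schema encoding: ('udt',), ('struct', [fields]),
    # ('array', elem), ('map', key, value), or a primitive name like ('int',).
    def needs_conversion(dtype):
        """Return True if any type reachable from dtype needs converting."""
        kind = dtype[0]
        if kind == 'udt':        # UDTs always need converting
            return True
        if kind == 'struct':     # a struct needs it only if some field does
            return any(needs_conversion(f) for f in dtype[1])
        if kind == 'array':      # an array needs it only if its element does
            return needs_conversion(dtype[1])
        if kind == 'map':        # a map needs it only if key or value does
            return needs_conversion(dtype[1]) or needs_conversion(dtype[2])
        return False             # primitives pass through unchanged

    def make_converter(dtype):
        # Bypass: identity converter when nothing in the schema needs one.
        if not needs_conversion(dtype):
            return lambda value: value
        ...  # build the real recursive converter only in this branch
    ```

    The check runs once per schema, so rows with only primitive columns skip the per-value conversion path entirely.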
    
    For UDTs, we may also need another converter for the result of `udt.serialize()`.

