Github user chenghao-intel commented on the pull request:

    https://github.com/apache/spark/pull/2284#issuecomment-54924719
  
    Thank you guys for the explanation and voting; boxing/unboxing is quite an 
annoying problem for performance. But from a normal developer's point of view, 
the `Row` API is the key interface for interacting with Spark SQL, and complete 
data type support in the getters / setters (for the 11 primitive data types we 
currently have) may make more sense for people.
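To illustrate the boxing concern, here is a minimal, hypothetical sketch (not the actual Spark `Row` API): a generic `get` must return `Object`, so a primitive `int` gets autoboxed into an `Integer`, while a type-specialized `getInt` keeps the value primitive.

```java
// Hypothetical minimal sketch of a Row-like interface; names are illustrative.
public class RowSketch {
    interface Row {
        Object get(int i);   // generic getter: an int comes back boxed as Integer
        int getInt(int i);   // specialized getter: stays a primitive, no allocation
    }

    static Row rowOf(int... values) {
        return new Row() {
            public Object get(int i) { return values[i]; } // autoboxing happens here
            public int getInt(int i) { return values[i]; } // no boxing
        };
    }

    public static void main(String[] args) {
        Row row = rowOf(42);
        Object boxed = row.get(0);     // allocates an Integer wrapper
        int primitive = row.getInt(0); // no wrapper allocated
        System.out.println(boxed.getClass().getSimpleName());
        System.out.println(primitive);
    }
}
```

With per-type getters, hot loops over columns of primitives can avoid the wrapper allocations entirely.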
    
    And if we use a generic type here, people may be confused about which 
Scala/Java object type to use when the data type is `Timestamp` as specified 
via the `schema`, and they may even pass a `java.security.Timestamp` object for 
the `Timestamp` data type.
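A small hypothetical sketch of that failure mode: with an untyped, generic row, storing the wrong class for a `Timestamp` column compiles fine, and the mistake only surfaces as a `ClassCastException` at read time.

```java
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the actual Spark Row: a generic row accepts any
// Object, so a wrong class for a Timestamp column is not caught at write time.
public class GenericRowSketch {
    public static void main(String[] args) {
        List<Object> row = new ArrayList<>();   // generic row: any Object compiles
        row.add(new java.util.Date());          // wrong class for a Timestamp column
        try {
            Timestamp ts = (Timestamp) row.get(0); // fails only here, at runtime
        } catch (ClassCastException e) {
            System.out.println("wrong object type stored for Timestamp column");
        }
    }
}
```

A typed setter (e.g. one that only accepts the expected class per column) would move this error to compile time or to the point of insertion.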
    
    Sorry if I have missed some of the original discussion of the row API 
design.

