Github user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/1737#issuecomment-53632075
  
    Hi @joesu, thanks for reporting and working on this issue.  Instead of 
creating a new datatype, what do you think about just reading fixed-length 
byte arrays in as the already existing BinaryType?  This would give us 
compatibility without the overhead of adding a new datatype.
    
    While I think it might be a reasonable optimization to add a fixed-length 
byte type at some point in the future, doing so is a fairly major undertaking.  
Basically every place in the code where we match on datatypes would need to be 
updated.  Therefore, before doing this I'd want to see a use case where the 
optimization pays off and a design doc on how we would implement it.
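
    [Editor's note] The suggestion above boils down to surfacing a fixed-length 
byte value as a plain `Array[Byte]`, which is the JVM representation behind 
Spark SQL's `BinaryType`. A minimal sketch of that idea, outside of Spark 
itself (the `decodeFixed` helper and its length check are hypothetical 
illustration, not Spark API):

    ```scala
    object FixedLenAsBinary {
      // A fixed-length field of declared size n arrives as exactly n raw
      // bytes; reading it as BinaryType just means passing those bytes
      // through as Array[Byte], with a length check standing in for the
      // guarantee a dedicated fixed-length datatype would encode.
      def decodeFixed(raw: Array[Byte], declaredLen: Int): Array[Byte] = {
        require(raw.length == declaredLen,
          s"expected $declaredLen bytes, got ${raw.length}")
        raw
      }

      def main(args: Array[String]): Unit = {
        // e.g. a 16-byte field such as a UUID stored as fixed-length bytes
        val uuidBytes = Array.fill[Byte](16)(0x2a)
        val decoded = decodeFixed(uuidBytes, 16)
        println(decoded.length)
      }
    }
    ```

    No conversion or copy is needed, which is why this route avoids touching 
every datatype match in the codebase.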

