Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1737#issuecomment-60827613
The problem is that data types are a public API, so once we add one we are
stuck with it forever. Also, each new data type adds significant overhead, so
I'd like to be pretty cautious about adding them when they are just special
cases of existing types.
We are already exploring the pattern of a single data type with multiple
settings elsewhere: there is a patch in the works that adds support for both
fixed- and arbitrary-precision decimal arithmetic using a single type. If it
is possible to do the same here, I think that would be good.
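
[For illustration, a minimal sketch of the single-type-with-settings pattern being described, using the DecimalType API roughly as it later shipped in Spark SQL (modern `org.apache.spark.sql.types` package names; the exact constructors at the time of this comment may differ):]

```scala
import org.apache.spark.sql.types.{DecimalType, StructField, StructType}

// One public DataType covers both variants through its settings,
// rather than introducing a separate public type per variant.
val fixed = DecimalType(10, 2) // fixed precision 10, scale 2

val schema = StructType(Seq(
  StructField("price", fixed, nullable = false)
))
```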
If the concern is primarily reading data from existing systems, what about
a smaller initial patch that allows Spark SQL to read fixed-length binary data
but just uses the existing BinaryType? We wouldn't be able to write out
fixed-length data, but this does seem like a good first step.
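
[A sketch of what that smaller first step might look like at the schema level; the column name `uuid_bytes` is hypothetical:]

```scala
import org.apache.spark.sql.types.{BinaryType, StructField, StructType}

// Hypothetical: a column known to contain 16-byte values is surfaced
// as the existing variable-length BinaryType. The fixed-length
// constraint is dropped on read and cannot be preserved on write.
val schema = StructType(Seq(
  StructField("uuid_bytes", BinaryType, nullable = false)
))
```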