I currently want to try using Druid to store detailed data, and have a compute
engine such as Presto or Spark pull data directly from Druid for SQL queries.
With Druid's columnar storage and inverted indexes, I expect it to achieve
good query performance while also supporting real-time data writes.
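
For what it's worth, a minimal sketch of the Spark side of that could look like
the following, reading through Druid's SQL endpoint over the Avatica JDBC
driver (the broker host/port, the "wikipedia" datasource, and the column names
are just placeholder assumptions, and the Avatica client jar would need to be
on the Spark classpath):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("druid-jdbc-read").getOrCreate()

# Druid brokers expose SQL over Avatica JDBC at /druid/v2/sql/avatica/
# (host, port, datasource, and columns below are placeholders)
druid_url = "jdbc:avatica:remote:url=http://druid-broker:8082/druid/v2/sql/avatica/"

df = (spark.read.format("jdbc")
      .option("url", druid_url)
      .option("driver", "org.apache.calcite.avatica.remote.Driver")
      .option("query",
              "SELECT __time, page, added FROM wikipedia "
              "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY")
      .load())

# further processing happens in Spark once the rows are pulled out of Druid
df.groupBy("page").sum("added").show()

Trino (and newer Presto releases) also ship a Druid connector that would cover
the Presto side of this in a similar way.
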
I don't see why it shouldn't support other data types, but like Charles
said, it should ideally be use-case driven, since adding new data types
permanently bumps up the complexity of the core code. I don't see a
reason to add every single SQL data type just because it exists, but it
does make sense to add the types people actually have use cases for.
For my team, we start from the other direction: what are people DOING with
the data? For example, if they are doing counts and sums with basic
predicates, then in what ways does the existing feature set not meet those
needs?
If they are doing other things, what is the end result they are trying to
achieve?