clintropolis commented on issue #12546:
URL: https://github.com/apache/druid/issues/12546#issuecomment-1196153297

   Thanks for the additional details :+1:
   
   > sqlType: One of Druid's scalar data types: `VARCHAR`, `FLOAT`, `DOUBLE` or `BIGINT`. Case-insensitive.
   
   How do you plan to support Druid's complex typed columns, such as the recently added `COMPLEX<json>` columns? Complex types are currently case sensitive, since they are registered internally in a map exactly as they are defined (potentially via extensions), so it would take some work (and would potentially be backwards incompatible) to make them case insensitive.
   
   The reason I'm asking is that I'm still a bit worried about how we are going to cleanly map this to Druid's type system. Is it going to be a strict mapping, exactly one SQL type to one Druid type, or will it be permissive (e.g. `INTEGER`, `BIGINT`, etc. all map to the most appropriate Druid type, `LONG` in this case)? I also wonder if we should allow using a `RowSignature` or something similar here, defined in Druid's native type system, so that these schemas can model every schema that can be created today (and match the way internal segment-metadata schemas are currently built). Since the native types also serialize into simple strings, this could be an alternative to defining types via SQL types.
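   To make the "permissive" option concrete, here is a minimal sketch of what a lenient SQL-to-native mapping could look like. All names here (`SqlTypeMapping`, `toNativeType`, the type-name strings) are hypothetical and not Druid's actual API; the point is just that scalar SQL names can be matched case-insensitively while complex type names are passed through verbatim, since those are registered case-sensitively:

   ```java
   import java.util.Locale;
   import java.util.Map;

   // Hypothetical sketch of a permissive SQL-type-to-native-type mapping.
   // Scalar SQL type names are matched case-insensitively; complex type
   // names (e.g. COMPLEX<json>) are preserved exactly as written, because
   // complex types are registered case-sensitively.
   public class SqlTypeMapping {
     private static final Map<String, String> SCALARS = Map.of(
         "VARCHAR", "STRING",
         "CHAR", "STRING",
         "FLOAT", "FLOAT",
         "REAL", "DOUBLE",
         "DOUBLE", "DOUBLE",
         "SMALLINT", "LONG",
         "INTEGER", "LONG",
         "BIGINT", "LONG"
     );

     public static String toNativeType(String sqlType) {
       String upper = sqlType.toUpperCase(Locale.ENGLISH);
       if (upper.startsWith("COMPLEX<")) {
         // Pass complex type names through unchanged (case-sensitive).
         return sqlType;
       }
       String nativeType = SCALARS.get(upper);
       if (nativeType == null) {
         throw new IllegalArgumentException("Unsupported SQL type: " + sqlType);
       }
       return nativeType;
     }

     public static void main(String[] args) {
       System.out.println(toNativeType("bigint"));        // LONG
       System.out.println(toNativeType("Integer"));       // LONG
       System.out.println(toNativeType("COMPLEX<json>")); // COMPLEX<json>
     }
   }
   ```

   A strict mapping would instead reject `INTEGER` and `SMALLINT` entirely, accepting only the one canonical SQL name per native type.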


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

