Hi Lukáš,

Yes, you're correct: our TIME type in Phoenix is non-standard (see PHOENIX-868). If possible, I'd stick to using the DATE type. The underlying data is stored at millisecond granularity (even if your SQL client only displays the date part of it); most SQL clients have a way of displaying the full granularity. Another workaround might be to apply our mod operator to a long value and cast it to a TIME to normalize values as you input the data (though I haven't tried this):
UPSERT INTO T(time_col) VALUES(CAST (? % 86400000 AS TIME));

It'd be good to fix this, but we'd need to do it in a major release and have a good story for existing schemas that use TIME. One approach might be to "invent" a new SQL type for our wonky TIME type and migrate existing schemas to use this new type. Comments on that JIRA are welcome (as are patches). As we get further along in our Phoenix/Calcite integration, that branch might be a good opportunity to do this.

Thanks,
James

On Thu, Jul 2, 2015 at 10:15 AM, Nick Dimiduk <[email protected]> wrote:
> Hi Lukas,
>
> I cannot answer all of your questions, but let me try what I can.
>
> Even stranger is how the query server handles the types. It only accepts
>> the formatted string or a to_time/to_date function on input, but always outputs
>> a number, which is the number of milliseconds since 00:00:00.000 for TIME
>> and days since 1970-01-01 for DATE. So while you are forced to enter the
>> full date/time on input, and it's also stored as such, you only get a part
>> of it on output. Is the asymmetry intended? I'd have expected to have the
>> same type on both input and output.
>>
>
> This sounds like a "quirk" :) Your earlier comments about Phoenix's
> non-standard handling of these types are probably conflicting with
> Calcite's implementation, which follows the stated standard. Or maybe it's just
> another bug. Either way, I think it's not intended.
>
> The remote JDBC driver also crashes when fetching TIME/TIMESTAMP values
>> (CALCITE-779), but that's definitely a bug. :)
>>
>
> *nod*
>
> Another possible problem is that when parsing date/time values, the code
>> raises java.lang.IllegalArgumentException instead of a "data error"
>> java.sql.SQLException, so it has to be handled as an "internal error" on the
>> client, not a proper SQL exception.
>>
>
> Please file a Phoenix bug for this.
> I guess the main question is: what is the expected behavior for
>> TIME/DATE/TIMESTAMP at all levels, internally in HBase, in the native JDBC
>> driver, and over the Avatica RPC?
>>
>
> The expectation of Avatica RPC is that it is a transparent pass-through
> layer. Using the query server should not impact the semantics of any query
> or its results.
>
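To make the arithmetic in this thread concrete, here's a small Python sketch (function names are mine, purely for illustration): the `% 86400000` step from the UPSERT workaround above drops the date part of an epoch-millisecond value, keeping only the time of day, and the decode helpers interpret the numbers the query server returns (milliseconds since 00:00:00.000 for TIME, days since 1970-01-01 for DATE).

```python
from datetime import datetime, timedelta

MILLIS_PER_DAY = 86_400_000  # 24 * 60 * 60 * 1000
EPOCH = datetime(1970, 1, 1)

def normalize_time_millis(epoch_millis):
    """Same arithmetic as CAST (? % 86400000 AS TIME):
    strip the date component, keep milliseconds since midnight."""
    return epoch_millis % MILLIS_PER_DAY

def decode_time(millis_since_midnight):
    """Interpret a query-server TIME result (ms since 00:00:00.000)."""
    return (EPOCH + timedelta(milliseconds=millis_since_midnight)).time()

def decode_date(days_since_epoch):
    """Interpret a query-server DATE result (days since 1970-01-01)."""
    return (EPOCH + timedelta(days=days_since_epoch)).date()

# Example: an epoch-ms value for 10:15:30.500 on some day; the mod
# yields the same result regardless of which day the value falls on.
tod = 10 * 3_600_000 + 15 * 60_000 + 30_500  # 36_930_500 ms past midnight
print(decode_time(normalize_time_millis(tod + 5 * MILLIS_PER_DAY)))
print(decode_date(0))
```

Note this treats the long value as UTC milliseconds; if the stored values carry a time-zone offset, the mod will shift the time of day accordingly.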
