Github user vdiravka commented on a diff in the pull request:
https://github.com/apache/drill/pull/600#discussion_r83710133
--- Diff: exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/writer/TestParquetWriter.java ---
@@ -754,15 +764,45 @@ public void testImpalaParquetVarBinary_DictChange() throws Exception {
     compareParquetReadersColumnar("field_impala_ts", "cp.`parquet/int96_dict_change.parquet`");
   }
+  @Test
+  public void testImpalaParquetBinaryTimeStamp_DictChange() throws Exception {
+    try {
+      test("alter session set %s = true", ExecConstants.PARQUET_READER_INT96_AS_TIMESTAMP);
+      compareParquetReadersColumnar("field_impala_ts", "cp.`parquet/int96_dict_change.parquet`");
--- End diff ---
1. Is it better to compare the result against baseline columns and values taken from the
file, or is it acceptable to compare against a `sqlBaselineQuery` run with the new
`PARQUET_READER_INT96_AS_TIMESTAMP` option disabled? (A sketch of both approaches is below.)
2. While investigating this test I found that the primitive data type of the column in
the file `int96_dict_change.parquet` is BINARY, not INT96.
This confuses me a little. Do we need to convert this BINARY to TIMESTAMP as well?
The CONVERT_FROM function with the IMPALA_TIMESTAMP argument works properly for this
field (see the query sketch below).
I will investigate further whether Impala and Hive can store timestamps in Parquet
BINARY columns.