asheeshgarg commented on issue #1787:
URL: https://github.com/apache/hudi/issues/1787#issuecomment-656772241
@vinothchandar sorry for the delay, I will try to pull the logs and attach them.
Another quick question: I have created an external table using Presto for the
data I have written to S3, using
CREATE TABLE test.hoodie_test2 (
    "_hoodie_commit_time" varchar,
    "_hoodie_commit_seqno" varchar,
    "_hoodie_record_key" varchar,
    "_hoodie_partition_path" varchar,
    "_hoodie_file_name" varchar,
    "column" varchar,
    "data_type" varchar,
    "is_data_type_inferred" varchar,
    "completeness" double,
    "approximate_num_distinct_values" bigint,
    "histogram" array(row(count bigint, ratio double, value varchar)),
    "mean" double,
    "maximum" double,
    "minimum" double,
    "sum" double,
    "std_dev" double,
    "approx_percentiles" array(double)
)
WITH (
    format = 'parquet',
    external_location = 's3a://tempwrite/hudi/'
)
It worked fine and I am able to query it with Presto. I haven't added the
<presto_install>/plugin/hive-hadoop2/hudi-presto-bundle.jar, yet it still
works; I think it is reading the parquet files directly? Is this the right way
to do it, or does it need to be done differently?
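To illustrate, queries of this shape run fine against the table without the bundle jar (the partition value in the filter below is a made-up example, not one of my actual partitions):

```sql
-- Hypothetical sample query against the external table defined above.
-- The _hoodie_* metadata columns are read straight out of the parquet files,
-- which is why I suspect Presto is bypassing Hudi entirely here.
SELECT "_hoodie_commit_time", "column", completeness, mean
FROM test.hoodie_test2
WHERE "_hoodie_partition_path" = 'example/partition'  -- placeholder value
LIMIT 10;
```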
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]