didip opened a new issue, #12746:
URL: https://github.com/apache/druid/issues/12746

   ### Description
   
   The way `index_parallel` downloads `*.parquet` files is a bit too low level. 
It has no idea about Iceberg's versioning scheme: an Iceberg table directory keeps 
files from multiple snapshots, so if you just point `index_parallel` at the table's 
S3 folder, you will ingest duplicated data.
   
   Now, what if you instead use the SQL input source (`SqlInputSource`) to run 
`SELECT * FROM iceberg_table WHERE date = '2022-07-01'`? Then you will get only 
the latest version of the Iceberg data, and the ingestion will be correct.
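   A rough sketch of what such an `index_parallel` ioConfig could look like. Druid's 
existing `sql` input source officially supports only MySQL and PostgreSQL databases, 
so the `trino` database type, host, and credentials below are hypothetical, shown 
only to illustrate the idea of querying an Iceberg table through a JDBC engine:
   
   ```json
   {
     "type": "index_parallel",
     "ioConfig": {
       "type": "index_parallel",
       "inputSource": {
         "type": "sql",
         "database": {
           "type": "trino",
           "connectorConfig": {
             "connectURI": "jdbc:trino://trino-host:8443/iceberg/analytics",
             "user": "druid",
             "password": "<password>"
           }
         },
         "sqls": [
           "SELECT * FROM iceberg_table WHERE date = '2022-07-01'"
         ]
       }
     }
   }
   ```
   
   Because the JDBC engine resolves the table through Iceberg's metadata, only the 
current snapshot's rows would be returned, sidestepping the duplicate-file problem.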
   
   ### Motivation
   
   Now let's expand this "new feature" to also support Hive and Presto JDBC 
drivers. Druid's `index_parallel` would become **so much more flexible**, to the 
point where a Spark connector plugin is no longer necessary. The entire Druid 
ecosystem will be so much more awesome.
   (Provided that the ingestion is fast enough and parallelizable enough.)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
