tommss commented on issue #6038:
URL: https://github.com/apache/hudi/issues/6038#issuecomment-1174100066

   What exactly do you mean by file layout and timeline? Could you elaborate?
   I am reading just one table from a SQL database, and it has basic column types 
(around 15 columns). I chose the simplest table possible to begin with. For now 
I have created a single default partition, and all 7 million rows go into that 
partition.
   It has become necessary to use HoodieJavaWriteClient because SparkSession, 
SparkContext and SQLContext are not available inside the worker nodes of a 
Databricks cluster, and I believe that without a SparkContext it is not possible 
to create a DataFrameReader.
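
   For reference, the Spark-free write path I am describing looks roughly like the
sketch below. It assumes Hudi's hudi-java-client module is on the classpath; the
tablePath, tableName, avroSchema and records parameters are placeholders for my
actual setup, not values from this issue.

```java
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hudi.client.HoodieJavaWriteClient;
import org.apache.hudi.client.common.HoodieJavaEngineContext;
import org.apache.hudi.common.model.HoodieAvroPayload;
import org.apache.hudi.common.model.HoodieRecord;
import org.apache.hudi.config.HoodieWriteConfig;

public class JavaClientSketch {
  // A minimal sketch: write records into a Hudi table using the Java client,
  // built only from a plain Hadoop Configuration -- no SparkContext needed,
  // which is why it can run inside a Databricks worker.
  public static void write(String tablePath, String tableName, String avroSchema,
                           List<HoodieRecord<HoodieAvroPayload>> records) {
    Configuration hadoopConf = new Configuration();
    HoodieWriteConfig cfg = HoodieWriteConfig.newBuilder()
        .withPath(tablePath)        // base path of the Hudi table
        .withSchema(avroSchema)     // Avro schema string for the records
        .forTable(tableName)
        .build();
    HoodieJavaWriteClient<HoodieAvroPayload> client =
        new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(hadoopConf), cfg);
    try {
      // Each write is bracketed by a commit on the Hudi timeline.
      String instantTime = client.startCommit();
      client.insert(records, instantTime);
    } finally {
      client.close();
    }
  }
}
```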


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
