gtwuser opened a new issue, #8820:
URL: https://github.com/apache/hudi/issues/8820

   
   **Describe the problem you faced**
   
   I need advice from the Hudi gurus on the best configuration for 
high-performance read/write operations. In the given scenario I have 354 files, 
each averaging `25MB` in size, about 6GB in total. It is all JSON data. We want 
to ingest it into Hudi as quickly as possible, with the Hudi metadata table 
enabled.  
   
   Please note we don't have a fixed field for partitioning, so if possible 
could you suggest configs for un-partitioned data.
   
   So the ask is: what are the precise Hudi configs we can use to get fast 
writes and reads? 
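   For reference, here is a rough sketch of the kind of write configs I mean 
for an un-partitioned bulk ingest; the table name and the record key field 
`uuid` are placeholders, not our actual schema:
   
   ```properties
   # Hypothetical starting point for un-partitioned bulk ingest (placeholders marked)
   hoodie.table.name=my_json_table
   hoodie.datasource.write.table.type=COPY_ON_WRITE
   hoodie.datasource.write.operation=bulk_insert
   # "uuid" is a placeholder for whatever unique field the records carry
   hoodie.datasource.write.recordkey.field=uuid
   # Empty partition path + NonpartitionedKeyGenerator => un-partitioned table
   hoodie.datasource.write.partitionpath.field=
   hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.NonpartitionedKeyGenerator
   hoodie.metadata.enable=true
   ```
   
   Is something along these lines the right direction, or are there other 
configs we should tune for this data size?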
   
   ^^ @nsivabalan @vinothchandar @umehrot2 or anybody who may have some 
insights.  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
