wangxujin1221 opened a new issue #3858:
URL: https://github.com/apache/iceberg/issues/3858


   Hi team,
   
I'm new to Iceberg, and I have a question about querying a big table. 
   
We have a Hive table with a total of 3.6 million records and 120 fields per 
record, and we want to transfer all the records in this table to other 
systems, such as PostgreSQL, Kafka, etc. 
   
   Currently we do it like this:
   ```java
   Dataset<Row> dataset = connection.client.read().format("iceberg").load("default.table");
   // it gets stuck here for a very long time
   dataset.foreachPartition(par -> {
       par.forEachRemaining(row -> {
           // process each row
       });
   });
   ```
   but it can get stuck for a long time in the foreach step.
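   
   For comparison, here is a minimal sketch of the kind of bulk transfer we are trying to do, using Spark's built-in JDBC sink instead of a manual `foreachPartition` loop. The connection URL, table names, and credentials below are placeholders, not our real setup:
   
   ```java
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.sql.SparkSession;
   
   public class IcebergToJdbc {
       public static void main(String[] args) {
           SparkSession spark = SparkSession.builder()
                   .appName("iceberg-export")
                   .getOrCreate();
   
           // Same Iceberg read as above
           Dataset<Row> dataset = spark.read().format("iceberg").load("default.table");
   
           // Let Spark's JDBC data source handle batching and per-partition
           // writes, rather than iterating rows ourselves.
           dataset.write()
                  .format("jdbc")
                  .option("url", "jdbc:postgresql://host:5432/db") // placeholder
                  .option("dbtable", "public.target_table")        // placeholder
                  .option("user", "user")                          // placeholder
                  .option("password", "password")                  // placeholder
                  .option("batchsize", 10000)                      // rows per JDBC batch
                  .mode("append")
                  .save();
       }
   }
   ```
   
   Would this kind of approach be expected to behave better for a table this size, or is the slowness more likely in the Iceberg scan planning itself?
   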
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


