ElapsedSoul commented on issue #4837:
URL: https://github.com/apache/iceberg/issues/4837#issuecomment-1135294638

   Thanks, @ajantha-bhat.
   Here is the demo code:
   ```java
   import java.util.HashMap;
   import java.util.Map;

   import org.apache.iceberg.Table;
   import org.apache.iceberg.catalog.TableIdentifier;
   import org.apache.iceberg.hive.HiveCatalog;
   import org.apache.iceberg.spark.actions.SparkActions;
   import org.apache.spark.sql.SparkSession;

   public class IcebergMergeFile {
       public static void main(String[] args) throws Exception {

           SparkSession spark = SparkSession
               .builder()
               .appName("IcebergMergeFile")
               .getOrCreate();

           // Connect to the Hive Metastore that backs the Iceberg catalog.
           HiveCatalog catalog = new HiveCatalog();
           catalog.setConf(spark.sparkContext().hadoopConfiguration());

           Map<String, String> properties = new HashMap<>();
           properties.put("uri", "thrift://127.0.0.1:9083");
           catalog.initialize("hive", properties);

           // Load the existing table to be compacted.
           TableIdentifier name = TableIdentifier.of("ice_database", "ice_t1");
           Table table = catalog.loadTable(name);

           // Compact small data files into larger files of roughly the target size.
           SparkActions
               .get()
               .rewriteDataFiles(table)
               .option("target-file-size-bytes", Long.toString(500 * 1024 * 1024)) // 500 MB
               .execute();

           spark.stop();
       }
   }
   ```
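
   For reference, the same compaction can also be triggered through Iceberg's `rewrite_data_files` Spark procedure instead of the Java action API. This is only a minimal sketch: it assumes an Iceberg runtime recent enough to ship that procedure, a Spark session started with the Iceberg SQL extensions, and a Spark catalog registered under the name `hive` that points at the same metastore as the example above.
   ```java
   // Sketch: equivalent compaction via Spark SQL, assuming the Iceberg SQL
   // extensions are enabled and a Spark catalog named "hive" is configured.
   spark.sql(
       "CALL hive.system.rewrite_data_files("
           + "table => 'ice_database.ice_t1', "
           + "options => map('target-file-size-bytes', '" + (500L * 1024 * 1024) + "'))");
   ```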

