ashu1806 opened a new issue, #9415:
URL: https://github.com/apache/seatunnel/issues/9415

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   The Iceberg connector in SeaTunnel does not pass the sink table's `iceberg.table.auto-create-props` and `iceberg.table.write-props` properties through to the underlying Iceberg API when creating tables. This prevents customization of the created Iceberg tables, including critical features such as compression settings, file format selection, and table layout optimizations.
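
   For reference, table properties can be handed to Iceberg directly at table-creation time through its public `Catalog`/`TableBuilder` API. The sketch below is illustrative only (it uses a local `HadoopCatalog` and a trimmed-down schema rather than the connector's actual code path), but it shows the hand-off that the write-props never reach:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.iceberg.PartitionSpec;
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.Table;
   import org.apache.iceberg.catalog.TableIdentifier;
   import org.apache.iceberg.hadoop.HadoopCatalog;
   import org.apache.iceberg.types.Types;

   public class CreateTableWithProps {
     public static void main(String[] args) {
       // Illustrative stand-in for the Hive catalog used in the report.
       HadoopCatalog catalog = new HadoopCatalog(new Configuration(), "/tmp/iceberg-warehouse");

       // Trimmed-down version of the FakeSource schema below.
       Schema schema = new Schema(
           Types.NestedField.required(1, "order_id", Types.LongType.get()),
           Types.NestedField.optional(2, "category", Types.StringType.get()));

       // The write-props from the sink config, as plain Iceberg table properties.
       Map<String, String> props = new HashMap<>();
       props.put("write.format.default", "parquet");
       props.put("write.target-file-size-bytes", "536870912");
       props.put("write.parquet.compression-codec", "snappy");

       // Iceberg's TableBuilder accepts the properties at creation time;
       // this is the delegation step the connector currently skips.
       Table table = catalog
           .buildTable(TableIdentifier.of("darts9", "iceberg_seatunnel_test9"), schema)
           .withPartitionSpec(PartitionSpec.builderFor(schema).identity("category").build())
           .withProperties(props)
           .create();

       System.out.println(table.properties());
     }
   }
   ```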
   
   ### SeaTunnel Version
   
   2.3.11
   
   ### SeaTunnel Config
   
   ```conf
   env {
     job.mode = BATCH
     job.name = "SeaTunnel-HDFS-To-Iceberg-Sink"
     spark.executor.instances = 1
     spark.executor.cores = 1
     spark.executor.memory = "1g"
     spark.master = "local"
     spark.driver.extraJavaOptions="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
   }
   
   source {
     FakeSource {
       plugin_output = "fake"
       row.num = 5
       schema = {
         fields {
           order_id = "bigint"
           cust_name = "string"
           customer_id = "bigint"
           order_amount = "decimal(10,2)"
           order_ts = "timestamp"
           category = "string"
         }
       }
     }
   }
   
   sink {
     Iceberg {
       plugin_input = "fake"
       catalog_name = "my_catalog"
       namespace = "darts9"
       table = "iceberg_seatunnel_test9"
   
       # Catalog Properties
       iceberg.catalog.config={
         type="hive"
         uri="thrift://localhost:9083"
         warehouse="s3a://test/"
         io-impl= "org.apache.iceberg.hadoop.HadoopFileIO" 
#"org.apache.iceberg.aws.s3.S3FileIO" #"org.apache.iceberg.hadoop.HadoopFileIO" 
#"org.apache.iceberg.aws.s3.S3FileIO"
         #prefix for table default properties -
         #table-default. -
   
         #prefix for table override properties -
         #table-override.
       }
   
       hadoop.config={
         "fs.s3a.path.style.access" = "true"
         "fs.s3a.impl" = "org.apache.hadoop.fs.s3a.S3AFileSystem"
         "fs.s3a.aws.credentials.provider" = 
"org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
         "fs.s3a.endpoint.region" = "us-east-1"
         "fs.s3a.access.key" = "minioadmin"
         "fs.s3a.secret.key" = "minioadmin"
         "fs.s3a.endpoint" = "http://localhost:9050";
         "fs.s3a.attempts.maximum" = "2"
       }
       iceberg.table.write-props = {
         write.format.default = "parquet"
         write.target-file-size-bytes = 536870912
         write.parquet.compression-codec = "snappy"
         write.data.path = "s3a://test/warehouse/custom/data"
         write.metadata.path = "s3a://test/warehouse/custom/metadata"
   
       }
       iceberg.table.partition-keys = "category"
       iceberg.table.schema-evolution-enabled = true
       case_sensitive = false
       iceberg.table.auto-create-props = {
         auto-create-table.enabled = true
       }
     }
   }
   ```
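
   To make the symptom concrete, the created table can be loaded back and its properties inspected. A minimal sketch, assuming the same Hive metastore endpoint as in the config above (catalog wiring is illustrative):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   import org.apache.iceberg.Table;
   import org.apache.iceberg.catalog.TableIdentifier;
   import org.apache.iceberg.hive.HiveCatalog;

   public class CheckTableProps {
     public static void main(String[] args) {
       // Same catalog coordinates as in the sink config above.
       Map<String, String> catalogProps = new HashMap<>();
       catalogProps.put("uri", "thrift://localhost:9083");
       catalogProps.put("warehouse", "s3a://test/");

       HiveCatalog catalog = new HiveCatalog();
       catalog.initialize("my_catalog", catalogProps);

       Table table = catalog.loadTable(
           TableIdentifier.of("darts9", "iceberg_seatunnel_test9"));

       // With the behavior described above, the write-props from the sink
       // config are missing here, e.g. this prints null instead of "snappy".
       System.out.println(table.properties().get("write.parquet.compression-codec"));
     }
   }
   ```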
   
   ### Running Command
   
   ```shell
   ./start-seatunnel-spark-3-connector-v2.sh --config /examples/spark-file-to-iceberg-sink.conf
   ```
   
   ### Error Exception
   
   ```log
   No error is thrown; the configured Iceberg table properties are silently ignored.
   ```
   
   ### Zeta or Flink or Spark Version
   
   Spark - 3.3.0
   
   ### Java or Scala Version
   
   Java - 1.8
   Scala - 2.12.15
   
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   

