kukayiyi opened a new issue, #7396:
URL: https://github.com/apache/iceberg/issues/7396

   ### Query engine
   
   Spark
   
   ### Question
   
   I encountered the following error when reading and writing with Iceberg + Spark + MinIO:
   ```
   Exception in thread "main" java.lang.IllegalArgumentException: Cannot initialize FileIO, missing no-arg constructor: org.apache.iceberg.aws.s3.S3FileIO
       at org.apache.iceberg.CatalogUtil.loadFileIO(CatalogUtil.java:312)
       at org.apache.iceberg.hadoop.HadoopCatalog.initialize(HadoopCatalog.java:118)
       at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:239)
       at org.apache.iceberg.CatalogUtil.buildIcebergCatalog(CatalogUtil.java:284)
       at org.apache.iceberg.spark.SparkCatalog.buildIcebergCatalog(SparkCatalog.java:135)
       at org.apache.iceberg.spark.SparkCatalog.initialize(SparkCatalog.java:537)
       at org.apache.iceberg.spark.SparkSessionCatalog.buildSparkCatalog(SparkSessionCatalog.java:77)
       at org.apache.iceberg.spark.SparkSessionCatalog.initialize(SparkSessionCatalog.java:307)
       at org.apache.spark.sql.connector.catalog.Catalogs$.load(Catalogs.scala:60)
       at org.apache.spark.sql.connector.catalog.CatalogManager.$anonfun$catalog$1(CatalogManager.scala:53)
       at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:86)
       at org.apache.spark.sql.connector.catalog.CatalogManager.catalog(CatalogManager.scala:53)
       at org.apache.spark.sql.connector.catalog.CatalogManager.currentCatalog(CatalogManager.scala:122)
       at org.apache.spark.sql.connector.catalog.LookupCatalog.currentCatalog(LookupCatalog.scala:34)
       at org.apache.spark.sql.connector.catalog.LookupCatalog.currentCatalog$(LookupCatalog.scala:34)
       at org.apache.spark.sql.catalyst.analysis.Analyzer.currentCatalog(Analyzer.scala:188)
       at org.apache.spark.sql.connector.catalog.LookupCatalog$CatalogAndIdentifier$.unapply(LookupCatalog.scala:125)
       at org.apache.spark.sql.connector.catalog.LookupCatalog$NonSessionCatalogAndIdentifier$.unapply(LookupCatalog.scala:72)
       at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:565)
       at com.yisa.iceberg.MinioIcebergExample.main(MinioIcebergExample.java:77)
   ```
   My config is:
   ```
   ("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
   ("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkSessionCatalog")
   ("spark.sql.catalog.demo.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
   ("spark.sql.catalog.demo.warehouse", "s3a://iceberg")
   ("spark.sql.catalog.demo.s3.endpoint", "http://127.0.0.1:9000")
   ("spark.sql.defaultCatalog", "demo")
   ("spark.sql.catalogImplementation", "in-memory")
   ("spark.sql.catalog.demo.type", "hadoop")
   ("spark.executor.heartbeatInterval", "300000")
   ("spark.network.timeout", "400000")
   ("spark.hadoop.fs.s3a.access.key", "minioadmin")
   ("spark.hadoop.fs.s3a.secret.key", "minioadmin")
   ("spark.hadoop.fs.s3a.endpoint", "127.0.0.1:9000")
   ("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
   ("spark.hadoop.fs.s3a.path.style.access", "true")
   ("spark.hadoop.fs.s3a.attempts.maximum", "1")
   ("spark.hadoop.fs.s3a.connection.establish.timeout", "5000")
   ("spark.hadoop.fs.s3a.connection.timeout", "10000")
   ```
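   For reference, these options map onto the SparkSession builder roughly as in the sketch below. This is a minimal, assumed reconstruction, not my exact code: the class name, app name, and local master are placeholders, while the option values mirror the config listed above (with the endpoint corrected to `http://127.0.0.1:9000`).
   ```java
   import org.apache.spark.sql.SparkSession;

   public class MinioIcebergConfigSketch {
     public static void main(String[] args) {
       // Hypothetical app name / local master; the option values mirror the
       // config listed in this issue.
       SparkSession spark = SparkSession.builder()
           .appName("minio-iceberg-example")
           .master("local[*]")
           .config("spark.sql.extensions",
               "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
           .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkSessionCatalog")
           .config("spark.sql.catalog.demo.type", "hadoop")
           .config("spark.sql.catalog.demo.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
           .config("spark.sql.catalog.demo.warehouse", "s3a://iceberg")
           .config("spark.sql.catalog.demo.s3.endpoint", "http://127.0.0.1:9000")
           .config("spark.sql.defaultCatalog", "demo")
           .config("spark.hadoop.fs.s3a.access.key", "minioadmin")
           .config("spark.hadoop.fs.s3a.secret.key", "minioadmin")
           .config("spark.hadoop.fs.s3a.endpoint", "127.0.0.1:9000")
           .config("spark.hadoop.fs.s3a.path.style.access", "true")
           .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
           .getOrCreate();

       spark.stop();
     }
   }
   ```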
   This config is based on: https://blog.min.io/manage-iceberg-tables-with-spark/
   I used the following dependencies:
   ```
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-core_2.12</artifactId>
       <version>3.3.2</version>
   </dependency>
   <dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-sql_2.12</artifactId>
       <version>3.3.2</version>
   </dependency>
   <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-aws</artifactId>
       <version>3.3.2</version>
   </dependency>
   <dependency>
       <groupId>com.amazonaws</groupId>
       <artifactId>aws-java-sdk-bundle</artifactId>
       <version>1.12.452</version>
   </dependency>
   <dependency>
       <groupId>org.apache.iceberg</groupId>
       <artifactId>iceberg-spark-runtime-3.3_2.12</artifactId>
       <version>1.2.1</version>
   </dependency>
   ```
   All I did was read a CSV file from MinIO and convert it to an Iceberg table for storage, just like in the blog post linked above (see the sketch below). I also tried writing a regular Dataset as an Iceberg table and got the same error. By the way, I have tried it from Java, Python, and spark-sql, and all of them report the same error.

