LuPan2015 opened a new issue #4475:
URL: https://github.com/apache/hudi/issues/4475


   
   **Describe the problem you faced**
   
Creating a Hudi external table backed by S3 fails; the table is not created and no data is written to S3.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Configure core-site.xml under spark/conf
   2. Start spark-sql client
   3. Create Hudi external table to store data to S3
   
   **Expected behavior**
   
   The Hudi table is created successfully and the data is stored in S3
   
   **Environment Description**
   
   * Hudi version : 0.10.0 
   
   * Spark version : 3.1.2
   
   * Hive version : 2.3.7
   
   * Hadoop version : 3.2 
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   spark/conf/core-site.xml
   ```
   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>xxxxxx</value>
    <description>AWS access key ID.
      Omit for IAM role-based or provider-based authentication.</description>
  </property>

  <property>
    <name>fs.s3a.secret.key</name>
    <value>xxxxxx</value>
    <description>AWS secret key.
      Omit for IAM role-based or provider-based authentication.</description>
  </property>

  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,com.amazonaws.auth.EnvironmentVariableCredentialsProvider,com.amazonaws.auth.InstanceProfileCredentialsProvider</value>
  </property>

  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.cn-northwest-1.amazonaws.com.cn</value>
    <description>AWS S3 endpoint to connect to. An up-to-date list is
      provided in the AWS Documentation: regions and endpoints. Without this
      property, the standard region (s3.amazonaws.com) is assumed.</description>
  </property>

  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    <description>The implementation class of the S3A Filesystem</description>
  </property>

  <property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3A</value>
    <description>The implementation class of the S3A AbstractFileSystem.</description>
  </property>
</configuration>
   ```
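As a side note, the same S3A settings can also be passed directly on the spark-sql command line using Spark's `spark.hadoop.` prefix, which is handy for ruling out problems with core-site.xml pickup. A sketch (keys and endpoint are the placeholders from the config above):

```shell
# Sketch: pass the S3A settings via --conf instead of core-site.xml.
# The access/secret keys are placeholders, as in the report.
spark/bin/spark-sql \
  --conf 'spark.hadoop.fs.s3a.access.key=xxxxxx' \
  --conf 'spark.hadoop.fs.s3a.secret.key=xxxxxx' \
  --conf 'spark.hadoop.fs.s3a.endpoint=s3.cn-northwest-1.amazonaws.com.cn' \
  --conf 'spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem'
```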
   
   Spark SQL:
   ```
   
spark/bin/spark-sql --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0,org.apache.spark:spark-avro_2.12:3.1.2,org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.12.131 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
   
   create table default.hudi_mor_s31 (
     id bigint,
     name string,
     dt string
   ) using hudi
   tblproperties (
     type = 'mor',
     primaryKey = 'id'
    )
   partitioned by (dt)
location 's3://iceberg-bucket/hudi-warehouse/'
   ```
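Note that the `--packages` list above pairs `hadoop-aws:3.2.0` with `aws-java-sdk-bundle:1.12.131`. `hadoop-aws:3.2.0` declares `aws-java-sdk-bundle:1.11.375` in its POM, and mixing SDK versions like this is a common cause of the `NoSuchMethodError` in the stacktrace. A sketch of the same invocation with the matching SDK version (verify against the POM of the exact hadoop-aws release you ship):

```shell
# Same invocation, but with the aws-java-sdk-bundle version that
# hadoop-aws:3.2.0 was compiled against (1.11.375 per its POM).
spark/bin/spark-sql --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0,org.apache.spark:spark-avro_2.12:3.1.2,org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.11.375 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
```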
   
   **Stacktrace**
   
   ```
java.io.IOException: regular upload failed: java.lang.NoSuchMethodError: com.amazonaws.util.IOUtils.release(Ljava/io/Closeable;Lcom/amazonaws/thirdparty/apache/logging/Log;)V
        at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:303)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:453)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
        at org.apache.hudi.common.table.HoodieTableConfig.create(HoodieTableConfig.java:319)
        at org.apache.hudi.common.table.HoodieTableMetaClient.initTableAndGetMetaClient(HoodieTableMetaClient.java:380)
        at org.apache.hudi.common.table.HoodieTableMetaClient$PropertyBuilder.initTable(HoodieTableMetaClient.java:887)
        at org.apache.spark.sql.catalyst.catalog.HoodieCatalogTable.initHoodieTable(HoodieCatalogTable.scala:167)
        at org.apache.spark.sql.hudi.command.CreateHoodieTableCommand.run(CreateHoodieTableCommand.scala:67)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
        at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:67)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:381)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:500)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:494)
        at scala.collection.Iterator.foreach(Iterator.scala:941)
        at scala.collection.Iterator.foreach$(Iterator.scala:941)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:494)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:284)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
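The `NoSuchMethodError` on `com.amazonaws.util.IOUtils.release` is the classic symptom of an `aws-java-sdk-bundle` on the classpath that does not match the version `hadoop-aws` was compiled against. The mismatch in the `--packages` list can be spotted mechanically; a small sketch (the pairing table below is illustrative and only covers hadoop-aws 3.2.0, whose POM declares aws-java-sdk-bundle 1.11.375 — confirm other entries against the POM of your Hadoop release):

```python
# Sketch: flag mismatched hadoop-aws / aws-java-sdk-bundle pairs in a
# Spark --packages list. The pairing table is illustrative; confirm each
# entry against the hadoop-aws POM for the release you actually ship.
COMPILED_AGAINST = {
    "3.2.0": "1.11.375",  # from hadoop-aws:3.2.0's POM
}

def check_packages(packages: str) -> list[str]:
    """Return warnings for hadoop-aws / aws-java-sdk-bundle mismatches."""
    # Map artifactId -> version from "group:artifact:version" coordinates.
    coords = {}
    for coord in packages.split(","):
        group, artifact, version = coord.strip().split(":")
        coords[artifact] = version

    warnings = []
    hadoop_aws = coords.get("hadoop-aws")
    sdk = coords.get("aws-java-sdk-bundle")
    if hadoop_aws and sdk:
        expected = COMPILED_AGAINST.get(hadoop_aws)
        if expected and expected != sdk:
            warnings.append(
                f"hadoop-aws {hadoop_aws} expects aws-java-sdk-bundle "
                f"{expected}, found {sdk}"
            )
    return warnings

# The --packages list from this report:
packages = ("org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0,"
            "org.apache.spark:spark-avro_2.12:3.1.2,"
            "org.apache.hadoop:hadoop-aws:3.2.0,"
            "com.amazonaws:aws-java-sdk-bundle:1.12.131")
print(check_packages(packages))
```

Running this against the report's package list flags the 1.12.131 bundle as incompatible with hadoop-aws 3.2.0; aligning the two versions should make the `NoSuchMethodError` go away.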
   
   