codejoyan commented on issue #2592:
URL: https://github.com/apache/hudi/issues/2592#issuecomment-792916097
Unfortunately, even after rebuilding the bundle with `-Pspark-shade-unbundle-avro`, I am running into the same issue. Am I missing anything?
**Build command:**
`mvn -DskipTests -Pspark-shade-unbundle-avro clean package`
**Spark command:**
```
spark-shell \
> --jars hudi-spark-bundle_2.11-0.7.0.jar,/u/users/j0s0j7j/.ivy2/jars/org.apache.spark_spark-avro_2.11-2.4.4.jar,/u/users/joyan/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
> --conf spark.driver.extraClassPath=/u/users/joyan/.ivy2/jars/org.apache.avro_avro-1.8.2.jar:/u/users/joyan/.ivy2/jars/org.apache.spark_spark-avro_2.11-2.4.4.jar \
> --conf spark.executor.extraClassPath=/u/users/joyan/.ivy2/jars/org.apache.avro_avro-1.8.2.jar:/u/users/joyan/.ivy2/jars/org.apache.spark_spark-avro_2.11-2.4.4.jar \
> --conf "spark.sql.hive.convertMetastoreParquet=false" \
> --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```
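In case it helps to narrow this down, here is a small diagnostic sketch (not part of my original commands) that could be run inside the same spark-shell session to confirm the Spark version and which jars the classes involved in the `NoSuchMethodError` actually resolve from:

```scala
// Diagnostic sketch only: print the running Spark version and the jar
// locations of the classes involved in the NoSuchMethodError.
println(spark.version)

// spark-avro's SchemaConverters object - expected to come from spark-avro_2.11-2.4.4.jar
println(Class.forName("org.apache.spark.sql.avro.SchemaConverters$")
  .getProtectionDomain.getCodeSource.getLocation)

// Spark's Decimal companion object - the class whose minBytesForPrecision() is reported missing
println(org.apache.spark.sql.types.Decimal.getClass
  .getProtectionDomain.getCodeSource.getLocation)

// Avro itself - expected to come from avro-1.8.2.jar
println(Class.forName("org.apache.avro.Schema")
  .getProtectionDomain.getCodeSource.getLocation)
```

If the `Decimal` location points at a Spark distribution older than the spark-avro jar on the classpath, that would explain the missing method.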
**Error message:**
```
scala> transformedDF.write.format("org.apache.hudi").
| options(getQuickstartWriteConfigs).
| option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col_9").
| option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "col_2,col_1,col_3").
| option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
| option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.ComplexKeyGenerator").
| option("hoodie.upsert.shuffle.parallelism","2").
| option("hoodie.insert.shuffle.parallelism","2").
| option(HoodieWriteConfig.TABLE_NAME, "targetTableHudi").
| mode(SaveMode.Append).
| save(targetPath)
java.lang.NoSuchMethodError: org.apache.spark.sql.types.Decimal$.minBytesForPrecision()[I
  at org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:156)
  at org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:176)
  at org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:174)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
  at org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:174)
  at org.apache.hudi.AvroConversionUtils$.convertStructTypeToAvroSchema(AvroConversionUtils.scala:52)
  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:139)
  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:134)
  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
  ... 59 elided
```
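For completeness, the write call above assumes the usual Hudi quickstart setup in the shell; roughly the following (reproduced here only for context, with an illustrative table path):

```scala
// Imports assumed by the write call above (standard Hudi 0.7.0 quickstart style)
import org.apache.hudi.QuickstartUtils._        // provides getQuickstartWriteConfigs
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.spark.sql.SaveMode

// transformedDF is my input DataFrame; targetPath is the Hudi table base path
// (value below is illustrative only, not my actual path)
val targetPath = "/tmp/hudi/targetTableHudi"
```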