codejoyan edited a comment on issue #2592:
URL: https://github.com/apache/hudi/issues/2592#issuecomment-791988738


   Hi @umehrot2 
   
   I tried using `databricks-avro` together with `avro 1.7.7` and with `avro 1.8.2`, but I still get `NoSuchMethod` exceptions.
   I have yet to try building Hudi myself; I will check on that.
   
   **databricks-avro and avro 1.8.2**
   ```
   spark-shell \
   > --packages org.apache.hudi:hudi-spark-bundle_2.11:0.7.0,com.databricks:spark-avro_2.11:4.0.0,org.apache.avro:avro:1.8.2 \
   > --conf spark.driver.extraClassPath=/u/users/j0s0j7j/.ivy2/jars/com.databricks_spark-avro_2.11-4.0.0.jar,/u/users/j0s0j7j/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
   > --conf spark.executor.extraClassPath=/u/users/j0s0j7j/.ivy2/jars/com.databricks_spark-avro_2.11-4.0.0.jar,/u/users/j0s0j7j/.ivy2/jars/org.apache.avro_avro-1.8.2.jar \
   > --conf "spark.sql.hive.convertMetastoreParquet=false" \
   > --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
   
   java.lang.NoSuchMethodError: org.apache.avro.Schema.createUnion([Lorg/apache/avro/Schema;)Lorg/apache/avro/Schema;
     at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:185)
     at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:176)
     at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:174)
     at scala.collection.Iterator$class.foreach(Iterator.scala:893)
     at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
     at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
     at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:174)
     at org.apache.hudi.AvroConversionUtils$.convertStructTypeToAvroSchema(AvroConversionUtils.scala:52)
     at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:139)
     at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:134)
     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
     ... 59 elided
   ```
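A `NoSuchMethodError` on `Schema.createUnion` usually means an older Avro jar (Spark 2.x itself bundles Avro 1.7.x) is shadowing `avro-1.8.2` at runtime. Note also that `extraClassPath` is an ordinary classpath string, so on Linux its entries are separated by `:` rather than `,` — with commas the whole value is likely treated as a single nonexistent path, and the pinned jars may never reach the classpath at all. A quick way to see which jar actually wins is to ask the classloader where it found the class. The sketch below is a generic diagnostic, not Hudi-specific; on the Spark driver you would query `org.apache.avro.Schema`, while `java.util.ArrayList` stands in here so it runs anywhere:

```java
import java.security.CodeSource;

// Diagnostic sketch: report the jar (or directory) a class was loaded from,
// to confirm which Avro version actually wins on the driver's classpath.
public class WhichJar {
    static String jarOf(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            // JDK core classes carry no code source; everything else reports
            // the jar or directory it was defined from.
            return src == null ? "<bootstrap/platform classloader>"
                               : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "<not on classpath>";
        }
    }

    public static void main(String[] args) {
        // On the Spark driver: System.out.println(jarOf("org.apache.avro.Schema"));
        System.out.println(jarOf("java.util.ArrayList"));
    }
}
```

If the printed location is a Spark-bundled jar rather than `org.apache.avro_avro-1.8.2.jar`, the pinned version is being shadowed.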
   
   **databricks-avro and avro 1.7.7**
   
   ```
   spark-shell \
   > --packages org.apache.hudi:hudi-spark-bundle_2.11:0.7.0,com.databricks:spark-avro_2.11:4.0.0,org.apache.avro:avro:1.7.7 \
   > --conf spark.driver.extraClassPath=/u/users/j0s0j7j/.ivy2/jars/com.databricks_spark-avro_2.11-4.0.0.jar,/u/users/j0s0j7j/.ivy2/jars/org.apache.avro_avro-1.7.7.jar \
   > --conf spark.executor.extraClassPath=/u/users/j0s0j7j/.ivy2/jars/com.databricks_spark-avro_2.11-4.0.0.jar,/u/users/j0s0j7j/.ivy2/jars/org.apache.avro_avro-1.7.7.jar \
   > --conf "spark.sql.hive.convertMetastoreParquet=false" \
   > --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
   
   java.lang.NoClassDefFoundError: org/apache/avro/LogicalType
     at org.apache.hudi.AvroConversionUtils$.getAvroRecordNameAndNamespace(AvroConversionUtils.scala:71)
     at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:135)
     at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:134)
     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
     ... 59 elided
   Caused by: java.lang.ClassNotFoundException: org.apache.avro.LogicalType
     at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
     ... 81 more
   ```
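The second failure is expected with Avro 1.7.7: `org.apache.avro.LogicalType` was only introduced in Avro 1.8.0, so code compiled against 1.8.x cannot run against 1.7.7 regardless of classpath ordering. A small presence probe confirms this kind of gap; the sketch below is generic (in the `spark-shell` you would probe `org.apache.avro.LogicalType`, and `java.util.Optional` stands in here):

```java
// Sketch: probe whether a class is visible to the current classloader.
// Avro 1.7.7 predates org.apache.avro.LogicalType (added in Avro 1.8.0),
// which is exactly what the NoClassDefFoundError above reports.
public class HasClass {
    static boolean hasClass(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On the Spark driver: hasClass("org.apache.avro.LogicalType")
        System.out.println(hasClass("java.util.Optional")); // prints "true"
    }
}
```

A `false` result for `org.apache.avro.LogicalType` would confirm that only the pre-1.8 Avro is visible, so the fix is to get a 1.8.x jar to win, not to pin 1.7.7.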


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

