[
https://issues.apache.org/jira/browse/HUDI-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17334773#comment-17334773
]
sivabalan narayanan edited comment on HUDI-1063 at 4/28/21, 2:50 PM:
---------------------------------------------------------------------
[~WaterKnight]: I could not reproduce the issue w/ the latest master; things are
working fine.
Command I used to launch spark-shell:
```
/usr/lib/spark/bin/spark-shell --packages
org.apache.spark:spark-avro_2.12:3.0.0 --conf
'spark.serializer=org.apache.spark.serializer.KryoSerializer' --jars
/home/n_siva_b/hudi-spark3-bundle_2.12-0.9.0-SNAPSHOT.jar
```
[Link|https://gist.github.com/nsivabalan/03736cda20c10781957b83a89e2f6650] to
gist for steps I tried out.
Not sure if Hadoop 3+ was ever tried w/ 0.5.3. Hudi has had a few more releases
since 0.5.0, the latest being 0.8.0, which is tested against Hadoop 3. If you
want to try out Hudi 0.5.3, I would recommend pairing it with Hadoop 2.7.
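For reference, a launch command for the 0.5.3 + Spark 2.4 combination suggested above might look like the sketch below. This is only a sketch: the artifact coordinates are the ones from the original report, and the `spark-shell` path is assumed to be on your `PATH` (adjust for your installation).

```shell
# Hedged sketch: Hudi 0.5.3 with Spark 2.4.x / Scala 2.11.
# Coordinates taken verbatim from the reporter's spark-submit flags.
spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.3,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```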
> Save in Google Cloud Storage not working
> ----------------------------------------
>
> Key: HUDI-1063
> URL: https://issues.apache.org/jira/browse/HUDI-1063
> Project: Apache Hudi
> Issue Type: Bug
> Components: Spark Integration
> Affects Versions: 0.9.0
> Reporter: David Lacalle Castillo
> Priority: Critical
> Labels: sev:critical, user-support-issues
> Fix For: 0.9.0
>
>
> I added to spark submit the following properties:
> {{--packages
> org.apache.hudi:hudi-spark-bundle_2.11:0.5.3,org.apache.spark:spark-avro_2.11:2.4.4
> \ --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'}}
> Spark version 2.4.5 and Hadoop version 3.2.1
>
> I am trying to save a DataFrame to Google Cloud Storage as follows:
> ```
> tableName = "forecasts"
> basePath = "gs://hudi-datalake/" + tableName
> hudi_options = {
>     'hoodie.table.name': tableName,
>     'hoodie.datasource.write.recordkey.field': 'uuid',
>     'hoodie.datasource.write.partitionpath.field': 'partitionpath',
>     'hoodie.datasource.write.table.name': tableName,
>     'hoodie.datasource.write.operation': 'insert',
>     'hoodie.datasource.write.precombine.field': 'ts',
>     'hoodie.upsert.shuffle.parallelism': 2,
>     'hoodie.insert.shuffle.parallelism': 2
> }
> results = results.selectExpr(
>     "ds as date",
>     "store",
>     "item",
>     "y as sales",
>     "yhat as sales_predicted",
>     "yhat_upper as sales_predicted_upper",
>     "yhat_lower as sales_predicted_lower",
>     "training_date")
> results.write.format("hudi"). \
>     options(**hudi_options). \
>     mode("overwrite"). \
>     save(basePath)
> ```
> I am getting the following error:
> ```
> Py4JJavaError: An error occurred while calling o312.save. :
> java.lang.NoSuchMethodError: org.eclipse.jetty.server.session.SessionHandler.setHttpOnly(Z)V
>   at io.javalin.core.util.JettyServerUtil.defaultSessionHandler(JettyServerUtil.kt:50)
>   at io.javalin.Javalin.<init>(Javalin.java:94)
>   at io.javalin.Javalin.create(Javalin.java:107)
>   at org.apache.hudi.timeline.service.TimelineService.startService(TimelineService.java:102)
>   at org.apache.hudi.client.embedded.EmbeddedTimelineService.startServer(EmbeddedTimelineService.java:74)
>   at org.apache.hudi.client.AbstractHoodieClient.startEmbeddedServerView(AbstractHoodieClient.java:102)
>   at org.apache.hudi.client.AbstractHoodieClient.<init>(AbstractHoodieClient.java:69)
>   at org.apache.hudi.client.AbstractHoodieWriteClient.<init>(AbstractHoodieWriteClient.java:83)
>   at org.apache.hudi.client.HoodieWriteClient.<init>(HoodieWriteClient.java:137)
>   at org.apache.hudi.client.HoodieWriteClient.<init>(HoodieWriteClient.java:124)
>   at org.apache.hudi.client.HoodieWriteClient.<init>(HoodieWriteClient.java:120)
>   at org.apache.hudi.DataSourceUtils.createHoodieClient(DataSourceUtils.java:195)
>   at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:135)
>   at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:108)
>   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
>   at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>   at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>   at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
>   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
>   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
>   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>   at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>   at py4j.Gateway.invoke(Gateway.java:282)
>   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>   at py4j.commands.CallCommand.execute(CallCommand.java:79)
>   at py4j.GatewayConnection.run(GatewayConnection.java:238)
>   at java.lang.Thread.run(Thread.java:748)
> (<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError('An error occurred while calling o312.save.\n', JavaObject id=o313), <traceback object at 0x7f33e56e6048>)
> ```
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)