[ https://issues.apache.org/jira/browse/LIVY-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gyorgy Gal reassigned LIVY-1010:
--------------------------------

    Assignee: Gyorgy Gal

> Add support for Spark 3.5.4
> ---------------------------
>
>                 Key: LIVY-1010
>                 URL: https://issues.apache.org/jira/browse/LIVY-1010
>             Project: Livy
>          Issue Type: Improvement
>    Affects Versions: 0.8.0
>            Reporter: Mnr Bsf
>            Assignee: Gyorgy Gal
>            Priority: Major
>             Fix For: 0.8.0
>
>
> It would be good to keep the Apache Livy project up to date with the latest
> Spark version (3.5.4).
> I tried Apache Livy 0.8 with Spark 3.5.4 (on both Java 11 and Java 17), and it
> did not work as expected, unlike Apache Livy 0.8 with Spark 3.0.0.
> With Java 17, I was not able to start an Apache Livy session at all:
> {code}
> Exception in thread "main" java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished
> {code}
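> For reference, a minimal sketch of how such a session can be started through
> Livy's documented REST API (the host, port, and payload below are illustrative
> assumptions, not taken from the failing setup):
> {code:python}
> # Minimal sketch: create an interactive PySpark session on a local Livy server
> # and wait for it to leave the "starting" state. On Java 17 this is where the
> # SaslException above surfaces and the session ends up "dead".
> import json
> import time
>
> import requests
>
> LIVY = "http://localhost:8998"  # assumed local server
> HEADERS = {"Content-Type": "application/json"}
>
> # POST /sessions creates an interactive session; kind=pyspark selects Python.
> resp = requests.post(LIVY + "/sessions",
>                      data=json.dumps({"kind": "pyspark"}),
>                      headers=HEADERS)
> session_url = LIVY + resp.headers["Location"]
>
> while requests.get(session_url, headers=HEADERS).json()["state"] == "starting":
>     time.sleep(1)
> print(requests.get(session_url, headers=HEADERS).json()["state"])  # "idle" or "dead"
> {code}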
> With Java 11, I was able to start an Apache Livy session, but running any
> sample code threw this exception:
> {code}
> 'JavaPackage' object is not callable
> Traceback (most recent call last):
>   File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 1443, in createDataFrame
>     return self._create_dataframe(
>   File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 1485, in _create_dataframe
>     rdd, struct = self._createFromLocal(map(prepare, data), schema)
>   File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 1093, in _createFromLocal
>     struct = self._inferSchemaFromList(data, names=schema)
>   File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 954, in _inferSchemaFromList
>     prefer_timestamp_ntz = is_timestamp_ntz_preferred()
>   File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 153, in is_timestamp_ntz_preferred
>     return jvm is not None and jvm.PythonSQLUtils.isTimestampNTZPreferred()
> TypeError: 'JavaPackage' object is not callable
> {code}
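> A quick way to see what is going on: py4j resolves any JVM name that has not
> been registered with java_import to a generic JavaPackage (which is not
> callable) instead of a JavaClass. A minimal check, run inside the broken
> session, using the same private spark._sc._jvm handle as the traceback above:
> {code:python}
> # JavaPackage => the class was never imported on the py4j gateway;
> # JavaClass   => the gateway knows the class and calls will work.
> jvm = spark._sc._jvm
> print(type(jvm.PythonSQLUtils))
> # <class 'py4j.java_gateway.JavaPackage'> in the broken session,
> # <class 'py4j.java_gateway.JavaClass'> after the workaround below.
> {code}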
> The only way I found to make Spark 3.5.4 work with Java 11 is to issue some
> java_import calls when a session starts:
> {code:python}
> # Manually register the JVM classes that PySpark's launcher would normally
> # import on the py4j gateway at startup:
> from py4j.java_gateway import java_import
> java_import(spark._sc._jvm, "org.apache.spark.SparkConf")
> java_import(spark._sc._jvm, "org.apache.spark.api.java.*")
> java_import(spark._sc._jvm, "org.apache.spark.api.python.*")
> java_import(spark._sc._jvm, "org.apache.spark.ml.python.*")
> java_import(spark._sc._jvm, "org.apache.spark.mllib.api.python.*")
> java_import(spark._sc._jvm, "org.apache.spark.resource.*")
> java_import(spark._sc._jvm, "org.apache.spark.sql.*")
> java_import(spark._sc._jvm, "org.apache.spark.sql.api.python.*")
> java_import(spark._sc._jvm, "org.apache.spark.sql.hive.*")
> {code}
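> These are the same imports that PySpark's own launcher registers in
> python/pyspark/java_gateway.py, which suggests the Livy-created gateway skips
> that step. One way to apply the workaround without retyping it in every
> session is to submit it as the session's first statement over the REST API; a
> sketch (the session id and host/port are illustrative assumptions):
> {code:python}
> # Push the java_import workaround into a freshly started Livy session as its
> # first statement via POST /sessions/{id}/statements.
> import json
>
> import requests
>
> WORKAROUND = """
> from py4j.java_gateway import java_import
> for pkg in ["org.apache.spark.SparkConf",
>             "org.apache.spark.api.java.*",
>             "org.apache.spark.api.python.*",
>             "org.apache.spark.ml.python.*",
>             "org.apache.spark.mllib.api.python.*",
>             "org.apache.spark.resource.*",
>             "org.apache.spark.sql.*",
>             "org.apache.spark.sql.api.python.*",
>             "org.apache.spark.sql.hive.*"]:
>     java_import(spark._sc._jvm, pkg)
> """
>
> requests.post("http://localhost:8998/sessions/0/statements",  # assumed id/host
>               data=json.dumps({"code": WORKAROUND}),
>               headers={"Content-Type": "application/json"})
> {code}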



