[ 
https://issues.apache.org/jira/browse/LIVY-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyorgy Gal updated LIVY-321:
----------------------------
    Fix Version/s: 0.10.0
                       (was: 0.9.0)

This issue has been moved to the 0.10.0 release as part of a bulk update. If 
you feel this is moved out inappropriately, feel free to provide justification 
and reset the Fix Version to 0.9.0.

> LivyClient throws a NullPointerException while returning the spark-sql result
> -----------------------------------------------------------------------------
>
>                 Key: LIVY-321
>                 URL: https://issues.apache.org/jira/browse/LIVY-321
>             Project: Livy
>          Issue Type: Bug
>          Components: API, RSC
>    Affects Versions: 0.3, 0.4.0
>         Environment: spark version : 2.0.1
> livy: 0.4.0.SNAPSHOT
> hbase:1.2.1
> hadoop:2.7.3
>            Reporter: zzzhy
>            Priority: Major
>             Fix For: 0.10.0
>
>         Attachments: client-log.png, livy-server log.png, spark web ui.png
>
>
> According to both the livy-server log and the Spark web UI, the spark-sql job 
> completes successfully, but a problem shows up in the later step, when the 
> result is returned to the client. The following is my spark-sql job 
> implementation:
> {code:scala}
> import java.util.{List => JList, Map => JMap}
> 
> import org.apache.livy.{Job, JobContext}
> import org.apache.spark.sql.{DataFrame, Row, SparkSession}
> import org.apache.spark.sql.types.StructType
> 
> // HbConfig, readHbase, schemasToStructType and COLUMN_FAMILY are
> // project-specific HBase helpers (not shown here).
> class SQLQueryJob(tableName: String, sql: String,
>                   schemas: JMap[String, String]) extends Job[JList[Row]] {
> 
>   override def call(jc: JobContext): JList[Row] = {
>     val session: SparkSession = jc.sparkSession()
>     implicit val hConfig = HbConfig()
>     query(session, tableName, sql, schemasToStructType(schemas)).collectAsList()
>   }
> 
>   private[this] def query(sparkSession: SparkSession,
>                           tableName: String,
>                           sql: String,
>                           schemas: StructType)
>                          (implicit config: HbConfig): DataFrame = {
>     // Read the HBase table as an RDD of Rows, register it as a temp view,
>     // and run the SQL against it.
>     val rdd = sparkSession.readHbase(tableName, COLUMN_FAMILY, schemas)
>     val df = sparkSession.createDataFrame(rdd, schemas)
>     df.createOrReplaceTempView(tableName)
>     sparkSession.sql(sql)
>   }
> }
> {code}
> and the following is my client call:
> {code:java}
> List<Row> rows = livyClient.submit(job).get();
> {code}
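For context: a common workaround pattern when a Livy Job's return value fails to come back to the client (an assumption on my part; this ticket does not confirm the root cause) is to avoid returning Spark {{Row}} objects across the RSC wire and instead convert each row to plain Java collections inside the Job, since simple collection types serialize reliably. A minimal, Spark-free sketch of that conversion step (the {{Object[]}} stand-in for a row's values is hypothetical; in a real Job the values would come from {{Row}} accessors):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RowConversionSketch {

    // Convert row values (here represented as Object[] arrays) into plain
    // ArrayLists, which standard serializers handle cleanly.
    static List<List<Object>> toPlainLists(List<Object[]> rows) {
        List<List<Object>> out = new ArrayList<>();
        for (Object[] row : rows) {
            out.add(new ArrayList<>(Arrays.asList(row)));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> fakeRows = Arrays.asList(
            new Object[]{"alice", 30},
            new Object[]{"bob", 25});
        System.out.println(toPlainLists(fakeRows)); // [[alice, 30], [bob, 25]]
    }
}
```

The Job's signature would then change to something like {{Job[JList[JList[AnyRef]]]}}, and the client reads plain lists instead of {{Row}} objects.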



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
