GitHub user chenghao-intel opened a pull request:

    https://github.com/apache/spark/pull/6396

    [SPARK-7853] [SQL] Fix class loader issue in Spark SQL

    ```
    bin/spark-sql --jars ./sql/hive/src/test/resources/hive-hcatalog-core-0.13.1.jar
    CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';
    ```
    
    This throws an exception like:
    ```
    15/05/26 00:16:33 ERROR SparkSQLDriver: Failed in [CREATE TABLE t1(a string, b string) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe']
    org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Cannot validate serde: org.apache.hive.hcatalog.data.JsonSerDe
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:333)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
    at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)
    at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)
    at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:457)
    at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:922)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:922)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:147)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:727)
    at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLC
    ```
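    
    The trace goes through `ClientWrapper.withHiveState`, which suggests the jar added via `--jars` is visible to Spark's class loader but not to the class loader the embedded Hive client uses when it validates the serde. As a rough illustration of the general pattern (not the actual diff in this pull request), the usual remedy is to run calls into Hive with a loader that can see the added jars set as the thread's context class loader; `HiveStateClassLoader`, `sharedLoader`, and `body` below are hypothetical names:
    
    ```scala
    object HiveStateClassLoader {
      // Run `body` with `sharedLoader` as the thread's context class loader,
      // restoring the previous loader afterwards. Illustrative sketch only;
      // the names here are placeholders, not code from this PR.
      def withClassLoader[A](sharedLoader: ClassLoader)(body: => A): A = {
        val original = Thread.currentThread().getContextClassLoader
        Thread.currentThread().setContextClassLoader(sharedLoader)
        try body // e.g. the Hive DDL call that validates the serde
        finally Thread.currentThread().setContextClassLoader(original)
      }
    }
    ```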

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/chenghao-intel/spark classloader

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/6396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #6396
    
----
commit 547cb09252afa8673dfe2a046407228bc5e7f982
Author: Cheng Hao <[email protected]>
Date:   2015-05-25T15:56:15Z

    change the routing of the classloader

commit 7bc8502f97b7ab9c36e88dedafd03ce8ebc7ad56
Author: Cheng Hao <[email protected]>
Date:   2015-05-25T16:05:19Z

    update the classloader for TableReader

----
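
    The second commit mentions updating the class loader used by `TableReader`, which presumably has to instantiate the serde class itself. A hedged sketch of that kind of lookup, resolving the class through the thread's context class loader with a fallback (`SerDeLoading`, `loadSerDeClass`, and `serdeName` are illustrative names, not the PR's actual code):
    
    ```scala
    object SerDeLoading {
      // Resolve a serde class by name, preferring the thread's context class
      // loader (which may include jars added at runtime) and falling back to
      // this class's own loader.
      def loadSerDeClass(serdeName: String): Class[_] = {
        val loader = Option(Thread.currentThread().getContextClassLoader)
          .getOrElse(getClass.getClassLoader)
        Class.forName(serdeName, true, loader)
      }
    }
    ```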

