What we're trying to achieve is a fast way of testing the validity of our
SQL queries within unit tests, without going through the time-consuming
task of setting up a Hive test context.
If there is any way to speed this step up, any help would be appreciated.
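
Concretely, the check we would like to keep cheap looks roughly like this
(a sketch; SQLChecker and isValid are illustrative names, and since
HiveQl.parseSql is internal the object has to live in the
org.apache.spark.sql.hive package, as in Ayoub's snippet below):

package org.apache.spark.sql.hive

import scala.util.Try

// Parse-only check (this worked on 1.4.1): no Hive metastore setup
// needed, which is what keeps the unit tests fast.
object SQLChecker {
  def isValid(query: String): Boolean = Try(HiveQl.parseSql(query)).isSuccess
}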

Thanks,
Sebastian

*Sebastian Nadorp*
Software Developer

nugg.ad AG - Predictive Behavioral Targeting
Rotherstraße 16 - 10245 Berlin

sebastian.nad...@nugg.ad

www.nugg.ad * http://blog.nugg.ad/ * www.twitter.com/nuggad *
www.facebook.com/nuggad

*Registergericht/District court*: Charlottenburg HRB 102226 B
*Vorsitzender des Aufsichtsrates/Chairman of the supervisory board: *Dr.
Detlev Ruland
*Vorstand/Executive board:* Martin Hubert

*nugg.ad <http://nugg.ad/> is a company of Deutsche Post DHL.*


On Tue, Oct 20, 2015 at 9:20 PM, Xiao Li <gatorsm...@gmail.com> wrote:

> Just curious: why are you using the parseSql API?
>
> It works well if you use the external APIs. For example, in your case:
>
> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
> hiveContext.sql(
>   "CREATE EXTERNAL TABLE IF NOT EXISTS `t`(`id` STRING, `foo` INT) " +
>   "PARTITIONED BY (year INT, month INT, day INT) " +
>   "STORED AS PARQUET LOCATION 'temp'")
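>
> If the motivation is test speed, a rough sketch (assuming ScalaTest; the
> suite and names here are illustrative) is to create the SparkContext and
> HiveContext once per suite, so the expensive Hive setup is paid only once:
>
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.sql.hive.HiveContext
> import org.scalatest.{BeforeAndAfterAll, FunSuite}
>
> class QueryValiditySuite extends FunSuite with BeforeAndAfterAll {
>   private var sc: SparkContext = _
>   private var hiveContext: HiveContext = _
>
>   override def beforeAll(): Unit = {
>     // One local context for the whole suite instead of one per test.
>     sc = new SparkContext(
>       new SparkConf().setMaster("local[2]").setAppName("query-validity"))
>     hiveContext = new HiveContext(sc)
>   }
>
>   test("DDL is accepted") {
>     hiveContext.sql(
>       "CREATE EXTERNAL TABLE IF NOT EXISTS `t`(`id` STRING, `foo` INT) " +
>       "PARTITIONED BY (year INT, month INT, day INT) " +
>       "STORED AS PARQUET LOCATION 'temp'")
>   }
>
>   override def afterAll(): Unit = sc.stop()
> }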
>
> Good luck,
>
> Xiao Li
>
>
> 2015-10-20 10:23 GMT-07:00 Michael Armbrust <mich...@databricks.com>:
>
>> That's not really intended to be a public API, as there is some internal
>> setup that needs to be done for Hive to work.  Have you created a
>> HiveContext in the same thread?  Is there more to that stacktrace?
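>>
>> If you really need to call the parser directly, a rough sketch (from
>> inside the org.apache.spark.sql.hive package, as in your snippet;
>> whether this is sufficient depends on the 1.5.1 internals) is to force
>> the lazy Hive setup on the same thread first:
>>
>> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>> hiveContext.sql("SHOW TABLES") // forces the lazy Hive session setup
>> HiveQl.parseSql(sql)           // then parse on the same thread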
>>
>> On Tue, Oct 20, 2015 at 2:25 AM, Ayoub <benali.ayoub.i...@gmail.com>
>> wrote:
>>
>>> Hello,
>>>
>>> when upgrading to Spark 1.5.1 from 1.4.1, the following code crashed at
>>> runtime. It is mainly used to parse HiveQL queries and check that they
>>> are valid.
>>>
>>> package org.apache.spark.sql.hive
>>>
>>> val sql =
>>>   "CREATE EXTERNAL TABLE IF NOT EXISTS `t`(`id` STRING, `foo` INT) " +
>>>   "PARTITIONED BY (year INT, month INT, day INT) " +
>>>   "STORED AS PARQUET LOCATION 'temp'"
>>>
>>> HiveQl.parseSql(sql)
>>>
>>> org.apache.spark.sql.AnalysisException: null;
>>>         at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
>>>         at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>>>         at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>>>         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>>         at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>>         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>>         at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>         at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>>         at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>>         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>         at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>>         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>>         at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>>         at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>>         at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>>>         at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:277)
>>>         at org.apache.spark.sql.hive.SQLChecker$$anonfun$1.apply(SQLChecker.scala:9)
>>>         at org.apache.spark.sql.hive.SQLChecker$$anonfun$1.apply(SQLChecker.scala:9)
>>>
>>> Should that be done differently in Spark 1.5.1?
>>>
>>> Thanks,
>>> Ayoub
>>>
>>>
>>
>
