Re: [spark1.5.1] HiveQl.parse throws org.apache.spark.sql.AnalysisException: null

2015-10-22 Thread Xiao Li
Hi, Sebastian,

To use private APIs, you have to be very familiar with the code path;
otherwise, it is very easy to hit an exception or a bug.

My suggestion is to use IntelliJ to step through the
hiveContext.sql call until you reach the parseSql API. Then you will
know which other APIs have to be called before this one.

Note, lazy evaluation is a little bit annoying when you traverse the code
base.
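As a plain-Scala aside (no Spark involved, and purely illustrative), the annoyance is that a lazy val's body runs at first access, not at the definition site, so a debugger appears to jump into it from unrelated code:

```scala
// Minimal illustration of why lazily evaluated code is awkward to
// step through: the lazy val's body executes on first access, not
// where the value is defined.
object LazyDemo {
  val log = scala.collection.mutable.Buffer[String]()

  lazy val plan: String = {      // body deferred until first use
    log += "plan built"
    "parsed plan"
  }

  def main(args: Array[String]): Unit = {
    log += "before access"
    println(plan)                // "plan built" is logged here, not above
    println(log.mkString(" -> "))
  }
}
```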

Good luck,

Xiao Li



Re: [spark1.5.1] HiveQl.parse throws org.apache.spark.sql.AnalysisException: null

2015-10-21 Thread Sebastian Nadorp
What we're trying to achieve is a fast way of testing the validity of our
SQL queries in unit tests, without going through the time-consuming task
of setting up a Hive test context.
If there is any way to speed this step up, any help would be appreciated.
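One test-side pattern that fits this goal (a sketch only: `stubParse` is a hypothetical stand-in for whatever parser entry point you settle on, such as routing through hiveContext.sql) is to wrap the parse call and turn thrown exceptions into a test-friendly result:

```scala
import scala.util.{Try, Success, Failure}

object SqlCheck {
  // Wrap any parser entry point and convert a thrown exception
  // (e.g. AnalysisException in a real Spark test) into an Either.
  def validate(sql: String)(parse: String => Unit): Either[String, String] =
    Try(parse(sql)) match {
      case Success(_) => Right(sql)
      case Failure(e) => Left(s"invalid query: ${e.getMessage}")
    }

  // Hypothetical stub standing in for the real parser; it only
  // illustrates the shape (accept some statements, reject others).
  def stubParse(sql: String): Unit = {
    val s = sql.trim.toUpperCase
    if (!s.startsWith("CREATE") && !s.startsWith("SELECT"))
      throw new IllegalArgumentException(s"cannot parse: $sql")
  }
}
```

In a real suite the expensive context would be created once and shared across all tests, so only the first test pays the setup cost.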

Thanks,
Sebastian

*Sebastian Nadorp*
Software Developer

nugg.ad AG - Predictive Behavioral Targeting
Rotherstraße 16 - 10245 Berlin

sebastian.nad...@nugg.ad

www.nugg.ad * http://blog.nugg.ad/ * www.twitter.com/nuggad *
www.facebook.com/nuggad

*Registergericht/District court*: Charlottenburg HRB 102226 B
*Vorsitzender des Aufsichtsrates/Chairman of the supervisory board: *Dr.
Detlev Ruland
*Vorstand/Executive board:* Martin Hubert

*nugg.ad  is a company of Deutsche Post DHL.*




Re: [spark1.5.1] HiveQl.parse throws org.apache.spark.sql.AnalysisException: null

2015-10-20 Thread Xiao Li
Just curious: why are you using the parseSql API?

It works well if you use the external APIs instead. For example, in your case:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS `t`(`id` STRING, `foo` INT) " +
  "PARTITIONED BY (year INT, month INT, day INT) STORED AS PARQUET Location 'temp'")

Good luck,

Xiao Li




Re: [spark1.5.1] HiveQl.parse throws org.apache.spark.sql.AnalysisException: null

2015-10-20 Thread Michael Armbrust
That's not really intended to be a public API, as there is some internal
setup that needs to be done for Hive to work. Have you created a
HiveContext in the same thread? Is there more to that stack trace?



[spark1.5.1] HiveQl.parse throws org.apache.spark.sql.AnalysisException: null

2015-10-20 Thread Ayoub
Hello,

after upgrading from Spark 1.4.1 to 1.5.1, the following code crashes at
runtime. It is mainly used to parse HiveQL queries and check that they are
valid.

package org.apache.spark.sql.hive

val sql = "CREATE EXTERNAL TABLE IF NOT EXISTS `t`(`id` STRING, `foo` INT) " +
  "PARTITIONED BY (year INT, month INT, day INT) STORED AS PARQUET Location 'temp'"

HiveQl.parseSql(sql)

org.apache.spark.sql.AnalysisException: null;
    at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
    at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:277)
    at org.apache.spark.sql.hive.SQLChecker$$anonfun$1.apply(SQLChecker.scala:9)
    at org.apache.spark.sql.hive.SQLChecker$$anonfun$1.apply(SQLChecker.scala:9)

Should this be done differently in Spark 1.5.1?

Thanks,
Ayoub





--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-5-1-HiveQl-parse-throws-org-apache-spark-sql-AnalysisException-null-tp25138.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org