Hey Yin,

Thanks for the answer. I thought that could be the problem, but I can’t create a
HiveContext because I can’t import org.apache.spark.sql.hive.HiveContext. It
does not see this package.

I read that I should build Spark with -Phive, but I’m running Spark 1.4.1 on
Amazon EMR, and in spark-shell I can import the hive package while I can’t do
the same with spark-submit. Do you have any idea why? If it’s related to
building with -Phive, how can I import it in spark-shell?
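
In case it helps, this is the minimal thing that fails to compile when I build
the jar for spark-submit (just a sketch; the app name is made up):

import org.apache.spark.{SparkConf, SparkContext}
// compiles fine in spark-shell, fails in my spark-submit build:
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("landing"))
val hiveContext = new HiveContext(sc)

I’m guessing I also need the spark-hive artifact on the build path, something
like libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.4.1" % "provided"
in build.sbt, but I’m not sure whether that’s enough on EMR.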

> On 19 Aug 2015, at 18:59, Yin Huai <yh...@databricks.com> wrote:
> 
> Can you try using HiveContext instead of SQLContext? Your query is trying to 
> create a table and persist the table's metadata in the metastore, which is 
> only supported by HiveContext.
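> 
> Something like this (a rough sketch, reusing your existing SparkContext; 
> HiveContext is a drop-in replacement for SQLContext):
> 
> import org.apache.spark.sql.hive.HiveContext
> 
> val sqlContext = new HiveContext(sc)  // understands Hive DDL such as CREATE TABLE
> sqlContext.sql("create table if not exists landing (...) ...")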
> 
> On Wed, Aug 19, 2015 at 8:44 AM, Yusuf Can Gürkan <yu...@useinsider.com> wrote:
> Hello,
> 
> I’m trying to create a table with the sqlContext.sql method, as below:
> 
> import org.apache.spark.SparkContext
> import org.apache.spark.sql.SQLContext
> 
> val sc = new SparkContext()
> val sqlContext = new SQLContext(sc)
> 
> import sqlContext.implicits._
> 
> sqlContext.sql(s"""
> create table if not exists landing (
>   date string,
>   referrer string
> )
> partitioned by (partnerid string, dt string)
> row format delimited fields terminated by '\t' lines terminated by '\n'
> STORED AS TEXTFILE LOCATION 's3n://...'
> """)
> 
> 
> It gives an error on spark-submit:
> 
> Exception in thread "main" java.lang.RuntimeException: [2.1] failure: 
> ``with'' expected but identifier create found
> 
> create external table if not exists landing (
> 
> ^
>       at scala.sys.package$.error(package.scala:27)
>       at 
> org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:36)
>       at 
> org.apache.spark.sql.catalyst.DefaultParserDialect.parse(ParserDialect.scala:67)
> 
> 
> 
> What could be the reason?
> 
