Re: Anyone has some simple example with spark-sql with spark 1.3

2015-03-31 Thread Vincent He
It works, thanks for your great help.

On Mon, Mar 30, 2015 at 10:07 PM, Denny Lee denny.g@gmail.com wrote:

 Hi Vincent,

 This may be a case of a missing semi-colon after your CREATE
 TEMPORARY TABLE statement.  I ran your original statement (without the
 semi-colon) and got the same error as you did.  As soon as I added it in, I
 was good to go again:

 CREATE TEMPORARY TABLE jsonTable
 USING org.apache.spark.sql.json
 OPTIONS (
    path "/samples/people.json"
 );
 -- above needed a semi-colon so the temporary table could be created first
 SELECT * FROM jsonTable;

 HTH!
 Denny


 On Sun, Mar 29, 2015 at 6:59 AM Vincent He vincent.he.andr...@gmail.com
 wrote:

 No luck, it does not work. Does anyone know whether there is some special
 setting for the spark-sql CLI so we do not need to write code to use Spark
 SQL? Does anyone have a simple example of this? I appreciate any help;
 thanks in advance.

 On Sat, Mar 28, 2015 at 9:05 AM, Ted Yu yuzhih...@gmail.com wrote:

 See
 https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html

 I haven't tried the SQL statements in above blog myself.

 Cheers

 On Sat, Mar 28, 2015 at 5:39 AM, Vincent He 
 vincent.he.andr...@gmail.com wrote:

 Thanks for your information. I have read it; I can run the samples with
 Scala or Python, but for the spark-sql shell I cannot get an example running
 successfully. Can you give me an example I can run with ./bin/spark-sql
 without writing any code? Thanks.

 On Sat, Mar 28, 2015 at 7:35 AM, Ted Yu yuzhih...@gmail.com wrote:

 Please take a look at
 https://spark.apache.org/docs/latest/sql-programming-guide.html

 Cheers



  On Mar 28, 2015, at 5:08 AM, Vincent He 
 vincent.he.andr...@gmail.com wrote:
 
 
  I am learning Spark SQL and trying the spark-sql examples. I ran the
  following code, but I got the exception ERROR CliDriver:
  org.apache.spark.sql.AnalysisException: cannot recognize input near
  'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17. I have two
  questions:
   1. Do we have a list of the statements supported in spark-sql?
   2. Does the spark-sql shell support HiveQL? If yes, how do I set it?
 
  The example I tried:
  CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  The exception I got,
   CREATE TEMPORARY TABLE jsonTable
   USING org.apache.spark.sql.json
   OPTIONS (
     path "examples/src/main/resources/people.json"
   )
   SELECT * FROM jsonTable
   ;
  15/03/28 17:38:34 INFO ParseDriver: Parsing command: CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  NoViableAltException(241@[654:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole );])
      at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
      at org.antlr.runtime.DFA.predict(DFA.java:144)
      at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
      at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
      at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
      at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227)
      at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:241)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222

Re: Does Spark HiveContext supported with JavaSparkContext?

2015-03-30 Thread Vincent He
Thanks. That is what I have tried. JavaSparkContext does not extend
SparkContext, so it cannot be used here.

Does anyone else know whether we can use HiveContext with a JavaSparkContext?
From the API documents, it seems this is not supported. Thanks.

On Sun, Mar 29, 2015 at 9:24 AM, Cheng Lian lian.cs@gmail.com wrote:

  I mean that JavaSparkContext has a field named sc, whose type is
 SparkContext. You may pass this sc to HiveContext.
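
 For reference, here is a minimal Java sketch of that approach, assuming
 Spark 1.3, where JavaSparkContext exposes its underlying SparkContext
 through the sc() method (the class name, app name, and query below are
 placeholders for illustration, not anything prescribed by Spark):

   import org.apache.spark.SparkConf;
   import org.apache.spark.api.java.JavaSparkContext;
   import org.apache.spark.sql.hive.HiveContext;

   public class HiveContextFromJava {
     public static void main(String[] args) {
       SparkConf conf = new SparkConf().setAppName("HiveContextFromJava");
       JavaSparkContext jsc = new JavaSparkContext(conf);
       // HiveContext's constructor takes a SparkContext, not a
       // JavaSparkContext, so unwrap the underlying context first.
       HiveContext hiveContext = new HiveContext(jsc.sc());
       // Any HiveQL statement works here; SHOW TABLES is just a smoke test.
       hiveContext.sql("SHOW TABLES").show();
       jsc.stop();
     }
   }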


 On 3/29/15 9:59 PM, Vincent He wrote:

 Thanks.
 It does not work, and it cannot pass compilation, as the HiveContext
 constructor does not accept JavaSparkContext, and JavaSparkContext is not a
 subclass of SparkContext.
 Does anyone else have any idea? I suspect this is not supported now.

 On Sun, Mar 29, 2015 at 8:54 AM, Cheng Lian lian.cs@gmail.com wrote:

 You may simply pass in JavaSparkContext.sc


 On 3/29/15 9:25 PM, Vincent He wrote:

 All,

 I am trying Spark SQL with Java, and I find HiveContext does not accept
 JavaSparkContext; is this true? Or is there any special build of Spark I
 need to do (I build with Hive and the Thrift server)? Can we use HiveContext
 in Java? Thanks in advance.







Re: Anyone has some simple example with spark-sql with spark 1.3

2015-03-29 Thread Vincent He
No luck, it does not work. Does anyone know whether there is some special
setting for the spark-sql CLI so we do not need to write code to use Spark
SQL? Does anyone have a simple example of this? I appreciate any help;
thanks in advance.

On Sat, Mar 28, 2015 at 9:05 AM, Ted Yu yuzhih...@gmail.com wrote:

 See
 https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html

 I haven't tried the SQL statements in above blog myself.

 Cheers

 On Sat, Mar 28, 2015 at 5:39 AM, Vincent He vincent.he.andr...@gmail.com
 wrote:

 Thanks for your information. I have read it; I can run the samples with
 Scala or Python, but for the spark-sql shell I cannot get an example running
 successfully. Can you give me an example I can run with ./bin/spark-sql
 without writing any code? Thanks.

 On Sat, Mar 28, 2015 at 7:35 AM, Ted Yu yuzhih...@gmail.com wrote:

 Please take a look at
 https://spark.apache.org/docs/latest/sql-programming-guide.html

 Cheers



  On Mar 28, 2015, at 5:08 AM, Vincent He vincent.he.andr...@gmail.com
 wrote:
 
 
  I am learning Spark SQL and trying the spark-sql examples. I ran the
  following code, but I got the exception ERROR CliDriver:
  org.apache.spark.sql.AnalysisException: cannot recognize input near
  'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17. I have two
  questions:
   1. Do we have a list of the statements supported in spark-sql?
   2. Does the spark-sql shell support HiveQL? If yes, how do I set it?
 
  The example I tried:
  CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  The exception I got,
   CREATE TEMPORARY TABLE jsonTable
   USING org.apache.spark.sql.json
   OPTIONS (
     path "examples/src/main/resources/people.json"
   )
   SELECT * FROM jsonTable
   ;
  15/03/28 17:38:34 INFO ParseDriver: Parsing command: CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  NoViableAltException(241@[654:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole );])
      at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
      at org.antlr.runtime.DFA.predict(DFA.java:144)
      at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
      at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
      at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
      at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227)
      at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:241)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
      at scala.util.parsing.combinator.Parsers$$anon$2

Re: Does Spark HiveContext supported with JavaSparkContext?

2015-03-29 Thread Vincent He
Thanks.
It does not work, and it cannot pass compilation, as the HiveContext
constructor does not accept JavaSparkContext, and JavaSparkContext is not a
subclass of SparkContext.
Does anyone else have any idea? I suspect this is not supported now.

On Sun, Mar 29, 2015 at 8:54 AM, Cheng Lian lian.cs@gmail.com wrote:

 You may simply pass in JavaSparkContext.sc


 On 3/29/15 9:25 PM, Vincent He wrote:

 All,

 I am trying Spark SQL with Java, and I find HiveContext does not accept
 JavaSparkContext; is this true? Or is there any special build of Spark I
 need to do (I build with Hive and the Thrift server)? Can we use HiveContext
 in Java? Thanks in advance.





Does Spark HiveContext supported with JavaSparkContext?

2015-03-29 Thread Vincent He
All,

I am trying Spark SQL with Java, and I find HiveContext does not accept
JavaSparkContext; is this true? Or is there any special build of Spark I need
to do (I build with Hive and the Thrift server)? Can we use HiveContext in
Java? Thanks in advance.


Anyone has some simple example with spark-sql with spark 1.3

2015-03-28 Thread Vincent He
I am learning Spark SQL and trying the spark-sql examples. I ran the
following code, but I got the exception ERROR CliDriver:
org.apache.spark.sql.AnalysisException: cannot recognize input near
'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17. I have two
questions:
1. Do we have a list of the statements supported in spark-sql?
2. Does the spark-sql shell support HiveQL? If yes, how do I set it?

The example I tried:
CREATE TEMPORARY TABLE jsonTable
USING org.apache.spark.sql.json
OPTIONS (
  path "examples/src/main/resources/people.json"
)
SELECT * FROM jsonTable
The exception I got,
  CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  ;
15/03/28 17:38:34 INFO ParseDriver: Parsing command: CREATE TEMPORARY TABLE jsonTable
USING org.apache.spark.sql.json
OPTIONS (
  path "examples/src/main/resources/people.json"
)
SELECT * FROM jsonTable
NoViableAltException(241@[654:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole );])
        at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
        at org.antlr.runtime.DFA.predict(DFA.java:144)
        at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
        at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
        at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
        at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
        at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
        at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227)
        at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:241)
        at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
        at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
        at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
        at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
        at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
        at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
        at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
        at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
        at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
        at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
        at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
        at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38)
        at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
        at org.apache.spark.sql.hive.HiveQl$$anonfun$3.apply(HiveQl.scala:138)
        at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:96)
        at org.apache.spark.sql.SparkSQLParser$$anonfun$org$apache$spark$sql$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:95)
        at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
        at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
        at

Re: Anyone has some simple example with spark-sql with spark 1.3

2015-03-28 Thread Vincent He
Thanks for your information. I have read it; I can run the samples with
Scala or Python, but for the spark-sql shell I cannot get an example running
successfully. Can you give me an example I can run with ./bin/spark-sql
without writing any code? Thanks.

On Sat, Mar 28, 2015 at 7:35 AM, Ted Yu yuzhih...@gmail.com wrote:

 Please take a look at
 https://spark.apache.org/docs/latest/sql-programming-guide.html

 Cheers



  On Mar 28, 2015, at 5:08 AM, Vincent He vincent.he.andr...@gmail.com
 wrote:
 
 
  I am learning Spark SQL and trying the spark-sql examples. I ran the
  following code, but I got the exception ERROR CliDriver:
  org.apache.spark.sql.AnalysisException: cannot recognize input near
  'CREATE' 'TEMPORARY' 'TABLE' in ddl statement; line 1 pos 17. I have two
  questions:
   1. Do we have a list of the statements supported in spark-sql?
   2. Does the spark-sql shell support HiveQL? If yes, how do I set it?
 
  The example I tried:
  CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  The exception I got,
   CREATE TEMPORARY TABLE jsonTable
   USING org.apache.spark.sql.json
   OPTIONS (
     path "examples/src/main/resources/people.json"
   )
   SELECT * FROM jsonTable
   ;
  15/03/28 17:38:34 INFO ParseDriver: Parsing command: CREATE TEMPORARY TABLE jsonTable
  USING org.apache.spark.sql.json
  OPTIONS (
    path "examples/src/main/resources/people.json"
  )
  SELECT * FROM jsonTable
  NoViableAltException(241@[654:1: ddlStatement : ( createDatabaseStatement | switchDatabaseStatement | dropDatabaseStatement | createTableStatement | dropTableStatement | truncateTableStatement | alterStatement | descStatement | showStatement | metastoreCheck | createViewStatement | dropViewStatement | createFunctionStatement | createMacroStatement | createIndexStatement | dropIndexStatement | dropFunctionStatement | dropMacroStatement | analyzeStatement | lockStatement | unlockStatement | lockDatabase | unlockDatabase | createRoleStatement | dropRoleStatement | grantPrivileges | revokePrivileges | showGrants | showRoleGrants | showRolePrincipals | showRoles | grantRole | revokeRole | setRole | showCurrentRole );])
      at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
      at org.antlr.runtime.DFA.predict(DFA.java:144)
      at org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2090)
      at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1398)
      at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1036)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
      at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
      at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227)
      at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:241)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
      at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
      at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
      at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
      at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
      at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
      at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
      at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
      at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
      at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.apply(AbstractSparkSQLParser.scala:38