Re: TreeNodeException: Unresolved attributes

2015-03-04 Thread Anusha Shamanur
I tried. I still get the same error.

15/03/04 09:01:50 INFO parse.ParseDriver: Parsing command: select * from
TableName where value like '%Restaurant%'

15/03/04 09:01:50 INFO parse.ParseDriver: Parse Completed.

15/03/04 09:01:50 INFO metastore.HiveMetaStore: 0: get_table : db=default
tbl=TableName

15/03/04 09:01:50 INFO HiveMetaStore.audit: ugi=as7339
ip=unknown-ip-addr cmd=get_table
: db=default tbl=TableName
results: org.apache.spark.sql.SchemaRDD =

SchemaRDD[86] at RDD at SchemaRDD.scala:108
== Query Plan ==

== Physical Plan ==

org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes: *, tree:

'Project [*]

'Filter ('value LIKE Restaurant)
  MetastoreRelation default, TableName, None



On Wed, Mar 4, 2015 at 5:39 AM, Arush Kharbanda ar...@sigmoidanalytics.com
wrote:

 Why don't you formulate the query string first (by appending strings) before
 you pass it in? Also, the hql function is deprecated; you should use sql instead.


 http://spark.apache.org/docs/1.1.0/api/scala/index.html#org.apache.spark.sql.hive.HiveContext
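A minimal sketch of that suggestion — build the query string up front, then pass it to the non-deprecated sql method. The table and column names are the ones from this thread, used illustratively; the commented-out line assumes a live HiveContext:

```scala
// Build the query string first, then hand the finished string to sql()
// (hql is deprecated). "TableName" and "value" are placeholders from this thread.
val pattern = "Restaurant"
val query = "SELECT * FROM TableName WHERE value LIKE '%" + pattern + "%'"
// With a running HiveContext this would be:
// val results = hiveCtx.sql(query)
println(query)
```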

 On Wed, Mar 4, 2015 at 6:15 AM, Anusha Shamanur anushas...@gmail.com
 wrote:

 Hi,


 I am trying to run a simple select query on a table.


 val restaurants = hiveCtx.hql("select * from TableName where column like '%SomeString%'")

 This gives an error as below:

 org.apache.spark.sql.catalyst.errors.package$TreeNodeException:
 Unresolved attributes: *, tree:

 How do I solve this?


 --
 Regards,
 Anusha




 --


 *Arush Kharbanda* || Technical Teamlead

 ar...@sigmoidanalytics.com || www.sigmoidanalytics.com




-- 
Regards,
Anusha


TreeNodeException: Unresolved attributes

2015-03-03 Thread Anusha Shamanur
Hi,


I am trying to run a simple select query on a table.


val restaurants = hiveCtx.hql("select * from TableName where column like '%SomeString%'")

This gives an error as below:

org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes: *, tree:

How do I solve this?


-- 
Regards,
Anusha


Re: Spark SQL Thrift Server start exception : java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory

2015-03-03 Thread Anusha Shamanur
I downloaded different versions of the jars and it worked.

Thanks!

On Tue, Mar 3, 2015 at 4:45 PM, Cheng, Hao hao.ch...@intel.com wrote:

  Which version / distribution are you using? Please reference the blog post
 that Felix C shared, if you're running on CDH.


 http://eradiating.wordpress.com/2015/02/22/getting-hivecontext-to-work-in-cdh/



 Or you may need to download the datanucleus*.jar files and add them with the
 "--jars" option when starting the spark shell.
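A sketch of that --jars invocation. The jar file names match those listed later in this thread; the ./lib download location is an assumption:

```shell
# Sketch: start spark-shell with the DataNucleus jars on the classpath.
# Jar names are the ones from this thread; the ./lib path is illustrative.
JARS="./lib/datanucleus-api-jdo-3.2.6.jar,./lib/datanucleus-core-3.2.10.jar,./lib/datanucleus-rdbms-3.2.9.jar"
# Printed rather than executed, since a Spark install is needed to run it:
echo spark-shell --jars "$JARS"
```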



 *From:* Anusha Shamanur [mailto:anushas...@gmail.com]
 *Sent:* Wednesday, March 4, 2015 5:07 AM
 *To:* Cheng, Hao
 *Subject:* Re: Spark SQL Thrift Server start exception :
 java.lang.ClassNotFoundException:
 org.datanucleus.api.jdo.JDOPersistenceManagerFactory



 Hi,



 I am getting the same error. There is no lib folder in my $SPARK_HOME. But
 I included these jars while calling spark-shell.



 Now, I get this:

 Caused by: org.datanucleus.exceptions.ClassNotResolvedException: Class
 org.datanucleus.store.rdbms.RDBMSStoreManager was not found in the
 CLASSPATH. Please check your specification and your CLASSPATH.

at
 org.datanucleus.ClassLoaderResolverImpl.classForName(ClassLoaderResolverImpl.java:218)



 How do I solve this?



 On Mon, Mar 2, 2015 at 11:04 PM, Cheng, Hao hao.ch...@intel.com wrote:

 Copy those jars into the $SPARK_HOME/lib/

 datanucleus-api-jdo-3.2.6.jar
 datanucleus-core-3.2.10.jar
 datanucleus-rdbms-3.2.9.jar

 see
 https://github.com/apache/spark/blob/master/bin/compute-classpath.sh#L120
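A sketch of that copy step; the $SPARK_HOME default and the source directory are assumptions:

```shell
# Sketch: copy the three DataNucleus jars into $SPARK_HOME/lib so that
# compute-classpath.sh picks them up. Source paths are illustrative, so the
# commands are printed rather than executed.
SPARK_HOME="${SPARK_HOME:-/opt/spark}"
for jar in datanucleus-api-jdo-3.2.6.jar \
           datanucleus-core-3.2.10.jar \
           datanucleus-rdbms-3.2.9.jar; do
  echo cp "./$jar" "$SPARK_HOME/lib/"
done
```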



 -Original Message-
 From: fanooos [mailto:dev.fano...@gmail.com]
 Sent: Tuesday, March 3, 2015 2:50 PM
 To: user@spark.apache.org
 Subject: Spark SQL Thrift Server start exception :
 java.lang.ClassNotFoundException:
 org.datanucleus.api.jdo.JDOPersistenceManagerFactory

 I have installed a hadoop cluster (version: 2.6.0), apache spark (version:
 1.2.1, pre-built for hadoop 2.4 and later), and hive (version 1.0.0).

 When I try to start the spark sql thrift server I am getting the following
 exception.

 Exception in thread "main" java.lang.RuntimeException:
 java.lang.RuntimeException: Unable to instantiate
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient
 at
 org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
 at

 org.apache.spark.sql.hive.HiveContext$$anonfun$4.apply(HiveContext.scala:235)
 at

 org.apache.spark.sql.hive.HiveContext$$anonfun$4.apply(HiveContext.scala:231)
 at scala.Option.orElse(Option.scala:257)
 at
 org.apache.spark.sql.hive.HiveContext.x$3$lzycompute(HiveContext.scala:231)
 at org.apache.spark.sql.hive.HiveContext.x$3(HiveContext.scala:229)
 at

 org.apache.spark.sql.hive.HiveContext.hiveconf$lzycompute(HiveContext.scala:229)
 at
 org.apache.spark.sql.hive.HiveContext.hiveconf(HiveContext.scala:229)
 at
 org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:292)
 at
 org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:276)
 at
 org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:248)
 at
 org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:91)
 at
 org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:90)
 at

 scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at
 scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
 at org.apache.spark.sql.SQLContext.&lt;init&gt;(SQLContext.scala:90)
 at
 org.apache.spark.sql.hive.HiveContext.&lt;init&gt;(HiveContext.scala:72)
 at

 org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:51)
 at

 org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:56)
 at

 org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at

 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at

 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
 org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Caused by: java.lang.RuntimeException: Unable to instantiate
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient
 at

 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
 at

 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.&lt;init&gt;(RetryingMetaStoreClient.java:62)
 at

 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
 at

 org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453

Re: Failed to parse Hive query

2015-02-28 Thread Anusha Shamanur
Hi,
I reconfigured everything. Still facing the same issue.
Can someone please help?

On Friday, February 27, 2015, Anusha Shamanur anushas...@gmail.com wrote:

 I do.
 What tags should I change in this?
 I changed the value of hive.exec.scratchdir to /tmp/hive.
 What else?

 On Fri, Feb 27, 2015 at 2:14 PM, Michael Armbrust mich...@databricks.com
 wrote:

 Do you have a hive-site.xml file or a core-site.xml file?  Perhaps
 something is misconfigured there?

 On Fri, Feb 27, 2015 at 7:17 AM, Anusha Shamanur anushas...@gmail.com
 wrote:

 Hi,

 I am trying to do this in spark-shell:

 val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)
 val listTables = hiveCtx.hql("show tables")

 The second line fails to execute with this message:

 warning: there were 1 deprecation warning(s); re-run with -deprecation
 for details org.apache.spark.sql.hive.HiveQl$ParseException: Failed to
 parse: show tables at
 org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:239) at
 org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
 at
 org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
 at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) at
 scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) at
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)

 ... at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
 Caused by: java.lang.NullPointerException: Conf non-local session path
 expected to be non-null at
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
 at
 org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:586)
 at org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:129) at
 org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:116) at
 org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227) at
 org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:240) ... 87 more


 Any help would be appreciated.



 --
 Sent from Gmail mobile





 --
 Regards,
 Anusha



-- 
Sent from Gmail mobile


Failed to parse Hive query

2015-02-27 Thread Anusha Shamanur
Hi,

I am trying to do this in spark-shell:

val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)
val listTables = hiveCtx.hql("show tables")

The second line fails to execute with this message:

warning: there were 1 deprecation warning(s); re-run with -deprecation for
details org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse:
show tables at
org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:239) at
org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
at
org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) at
scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)

... at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused
by: java.lang.NullPointerException: Conf non-local session path expected to
be non-null at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at
org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:586)
at org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:129) at
org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:116) at
org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227) at
org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:240) ... 87 more


Any help would be appreciated.



-- 
Sent from Gmail mobile


Running hive query from spark

2015-02-27 Thread Anusha Shamanur
Hi,

I am trying to do this in spark-shell:

val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)
val listTables = hiveCtx.hql("show tables")

The second line fails to execute with this message:

warning: there were 1 deprecation warning(s); re-run with -deprecation for
details org.apache.spark.sql.hive.HiveQl$ParseException: Failed to parse:
show tables at
org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:239) at
org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
at
org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) at
scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)

... at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused
by: java.lang.NullPointerException: Conf non-local session path expected to
be non-null at
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at
org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:586)
at org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:129) at
org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:116) at
org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227) at
org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:240) ... 87 more


Any help would be appreciated. I have been stuck on this for the past two days.


Re: Failed to parse Hive query

2015-02-27 Thread Anusha Shamanur
I do.
What tags should I change in this?
I changed the value of hive.exec.scratchdir to /tmp/hive.
What else?
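For reference, a hive-site.xml fragment with the property mentioned above. The value shown is the one from this thread; all other properties are omitted, and the file's location (typically Hive's conf directory, or $SPARK_HOME/conf) depends on the setup:

```xml
<!-- Illustrative fragment only: the hive.exec.scratchdir setting
     mentioned above. Other properties are omitted. -->
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
  </property>
</configuration>
```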

On Fri, Feb 27, 2015 at 2:14 PM, Michael Armbrust mich...@databricks.com
wrote:

 Do you have a hive-site.xml file or a core-site.xml file?  Perhaps
 something is misconfigured there?

 On Fri, Feb 27, 2015 at 7:17 AM, Anusha Shamanur anushas...@gmail.com
 wrote:

 Hi,

 I am trying to do this in spark-shell:

 val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)
 val listTables = hiveCtx.hql("show tables")

 The second line fails to execute with this message:

 warning: there were 1 deprecation warning(s); re-run with -deprecation
 for details org.apache.spark.sql.hive.HiveQl$ParseException: Failed to
 parse: show tables at
 org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:239) at
 org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:50)
 at
 org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:49)
 at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136) at
 scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135) at
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)

 ... at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused
 by: java.lang.NullPointerException: Conf non-local session path expected to
 be non-null at
 com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
 at
 org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:586)
 at org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:129) at
 org.apache.hadoop.hive.ql.Context.&lt;init&gt;(Context.java:116) at
 org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:227) at
 org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:240) ... 87 more


 Any help would be appreciated.



 --
 Sent from Gmail mobile





-- 
Regards,
Anusha