Hi Jeetendra,
Please try the following in the spark-shell; it is like executing a SQL command.
sqlContext.sql("use <database name>")
Regards,
Ishwardeep
From: Jeetendra Gangele
Sent: Tuesday, August 25, 2015 12:57 PM
To: Ishwardeep Singh
Cc: user
Subject: R
Hi Jeetendra,
I faced this issue. I did not specify the database where this table exists.
Please set the database by using the "use <database name>" command before
executing the query.
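In spark-shell this looks like the following sketch; "mydb" and "salarytest" are placeholder names, not taken from this thread:

```scala
// Switch the current database, then query a table in it (placeholder names).
sqlContext.sql("use mydb")
sqlContext.sql("select * from salarytest").show()
```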
Regards,
Ishwardeep
From: Jeetendra Gangele
Sent: Monday, August 24, 2015 5:47 PM
To: user
Thanks Steve and Michael for your response.
Is there a tentative release date for Spark 1.5?
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Tuesday, August 4, 2015 11:53 PM
To: Steve Loughran
Cc: Ishwardeep Singh ; user@spark.apache.org
Subject: Re: Spark SQL support for Hive 0.14
Hi,
Does Spark SQL support Hive 0.14? The documentation refers to Hive 0.13. Is
there a way to compile Spark with Hive 0.14?
Currently we are using Spark 1.3.1.
Thanks
Which database is your table in: default or result? By default, Spark will
look for the table in the "default" database.
If the table exists in the "result" database, prefix the table name with the
database name, like "select * from result.salarytest", or set the
database by executing "use <database name>".
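For example, qualifying the table with its database avoids the need for "use" altogether; a sketch using the table name from the thread:

```scala
// Fully qualified table name: <database>.<table>
sqlContext.sql("select * from result.salarytest").show()
```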
-
Hi Michael & Ayan,
Thank you for your response to my problem.
Michael do we have a tentative release date for Spark version 1.4?
Regards,
Ishwardeep
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Wednesday, May 13, 2015 10:54 PM
To: ayan guha
Cc: Ishwardeep Singh; user
Sub
Hi,
I am using spark-shell and the steps using which I can reproduce the issue
are as follows:
scala> val dateDimDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:teradata://192.168.145.58/DBS_PORT=1025,DATABASE=BENCHQADS,LOB_SUPPORT=OFF,USER=BENCHQADS,PASSWORD=abc",
  "dbtable" -> "date_dim"))
scala>
Hi ,
I am using Spark SQL 1.3.1.
I have created a DataFrame using the jdbc data source and am calling the
saveAsTable() method, but I get the following 2 exceptions:
java.lang.RuntimeException: Unsupported datatype DecimalType()
at scala.sys.package$.error(package.scala:27)
at
org.apache.spar
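One possible workaround (an assumption on my part, not confirmed in this thread): Spark 1.3.x cannot write unbounded DecimalType columns as they arrive from JDBC sources, so casting those columns to a supported type before calling saveAsTable() may help. The column name "price" here is hypothetical:

```scala
import org.apache.spark.sql.types.DoubleType

// Cast the offending DecimalType column to DoubleType before saving
// (note: this may lose precision; "price" is a hypothetical column name).
val fixedDF = dateDimDF.withColumn("price", dateDimDF("price").cast(DoubleType))
fixedDF.saveAsTable("date_dim_copy")
```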
Finally got it working.
I was trying to access Hive using a JDBC driver, the same way I was
accessing Teradata.
It took me some time to figure out that the default sqlContext created by
Spark supports Hive and uses the hive-site.xml in the Spark conf folder to
access Hive.
I had to use my data
,
Ishwardeep
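A sketch of the setup described above, with placeholder connection details: Hive tables go through the default Hive-aware sqlContext, while Teradata goes through the jdbc data source rather than a Hive-style JDBC driver:

```scala
// Hive: the default sqlContext reads hive-site.xml from Spark's conf folder.
// "some_hive_table" is a placeholder name.
val hiveDF = sqlContext.sql("select * from some_hive_table")

// Teradata: use the jdbc data source (connection details are placeholders).
val tdDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:teradata://<host>/DATABASE=<db>,USER=<user>,PASSWORD=<pwd>",
  "dbtable" -> "date_dim"))
```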
From: ankitjindal [via Apache Spark User List]
[mailto:ml-node+s1001560n22766...@n3.nabble.com]
Sent: Tuesday, May 5, 2015 5:00 PM
To: Ishwardeep Singh
Subject: RE: Unable to join table across data sources using sparkSQL
Just check the schema of both the tables using frame.printSchema
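Assuming hiveDF and tdDF are the two DataFrames being joined (names are mine, not from the thread), the check would be:

```scala
// Print both schemas to spot type mismatches in the join key columns.
hiveDF.printSchema()
tdDF.printSchema()
```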
Sent: Tuesday, May 5, 2015 1:26 PM
To: Ishwardeep Singh
Subject: Re: Unable to join table across data sources using sparkSQL
Hi
I was doing the same, but with a file in Hadoop as a temp table and one
table in SQL Server, and I succeeded.
Hi ,
I am trying to use Spark SQL to join tables in different data sources, Hive
and Teradata. I can access the tables individually, but when I run the join
query I get a query exception.
The same query runs if all the tables exist in teradata.
Any help would be appreciated.
I am running the foll
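One way a cross-source join can be expressed (a sketch with hypothetical table and column names, assuming hiveDF and tdDF are the Hive and Teradata DataFrames) is to register both DataFrames as temp tables in the same sqlContext and join them in one SQL statement:

```scala
// Register both sides as temporary tables in the same sqlContext.
hiveDF.registerTempTable("emp_hive")
tdDF.registerTempTable("dates_td")

// Join across the two sources (the "date_key" column is hypothetical).
val joined = sqlContext.sql(
  "select * from emp_hive e join dates_td d on e.date_key = d.date_key")
joined.show()
```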
Hi Judy,
Thank you for your response.
When I try to compile using Maven, "mvn -Dhadoop.version=1.2.1 -DskipTests
clean package", I get an error: "Error: Could not find or load main class".
I have Maven 3.0.4.
And when I run the command "sbt package" I get the same exception as earlier.
I have done t
Hi,
I am trying to compile Spark 1.1.0 on Windows 8.1, but I get the following
exception.
[info] Compiling 3 Scala sources to
D:\myworkplace\software\spark-1.1.0\project\target\scala-2.10\sbt0.13\classes...
[error] D:\myworkplace\software\spark-1.1.0\project\SparkBuild.scala:26:
object sbt is not