What is the query?
On Fri, May 3, 2019 at 5:28 PM KhajaAsmath Mohammed
wrote:
> Hi
>
> I have followed the link
> https://community.teradata.com/t5/Connectivity/Teradata-JDBC-Driver-returns-the-wrong-schema-column-nullability/m-p/77824
> to connect Teradata from Spark.
>
> I was able to print
Select 10 sample rows for columns id, ctime from each table (MySQL and Spark)
and post the output, please.
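For example, something along these lines on the Spark side would do (the
DataFrame name df is just a placeholder for whatever you loaded over JDBC;
on the MySQL side an ordinary SELECT ... LIMIT 10 is the equivalent):

  // assuming `df` is the DataFrame you read from the JDBC source
  df.select("id", "ctime").show(10)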
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> From: Madabhattula Rajesh Kumar <mrajaf...@gmail.com>
> To: Richard Hillegas/San Francisco/IBM@IBMUS
> Cc: "u...@spark.incubator.apache.org"
> <u...@spark.incubator.apache.org>, "user@spark.apache.org"
> <user@spark.apache.org>
> Date: 11/0
From: Richard Hillegas/San Francisco/IBM@IBMUS
> To: Madabhattula Rajesh Kumar <mrajaf...@gmail.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>,
> "u...@spark.incubator.apache.org" <u...@spark.incubator.apache.org>
> Date: 11/05/2015 09:17 AM
>
Hi Rajesh,
I think that you may be referring to
https://issues.apache.org/jira/browse/SPARK-10909. A pull request on that
issue was submitted more than a month ago but it has not been committed. I
think that the committers are busy working on issues which were targeted
for 1.6 and I doubt that
Can someone help me here, please?
On Sat, Jun 20, 2015 at 9:54 AM Sathish Kumaran Vairavelu
vsathishkuma...@gmail.com wrote:
Hi,
In the Spark SQL JDBC data source there is an option to specify the upper/lower
bound and the number of partitions. How does Spark handle data distribution if
we do not give these options?
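For reference, a minimal sketch of those options with the 1.4-era
DataFrameReader (the URL, table, and column names are made up; only
partitionColumn/lowerBound/upperBound/numPartitions are the point):

  // Spark issues numPartitions JDBC queries, each covering a stride of the
  // partition column between lowerBound and upperBound; rows outside that
  // range still end up in the first/last partition.
  val df = sqlContext.read.format("jdbc").options(Map(
    "url"             -> "jdbc:mysql://dbhost:3306/mydb",  // hypothetical
    "dbtable"         -> "orders",                          // hypothetical
    "partitionColumn" -> "id",                              // must be numeric
    "lowerBound"      -> "1",
    "upperBound"      -> "1000000",
    "numPartitions"   -> "10"
  )).load()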
Sounds like SPARK-5456 (https://issues.apache.org/jira/browse/SPARK-5456),
which is fixed in Spark 1.4.
On Sun, Jun 14, 2015 at 11:57 AM, Sathish Kumaran Vairavelu
vsathishkuma...@gmail.com wrote:
Hello Everyone,
I pulled 2 different tables from the JDBC source and then joined them
using the
Thank you, it works in Spark 1.4.
On Sun, Jun 14, 2015 at 3:51 PM Michael Armbrust mich...@databricks.com
wrote:
Sounds like SPARK-5456 (https://issues.apache.org/jira/browse/SPARK-5456),
which is fixed in Spark 1.4.
On Sun, Jun 14, 2015 at 11:57 AM, Sathish Kumaran Vairavelu
SQL experts on the forum can confirm on this though.
From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Tuesday, December 9, 2014 6:42 AM
To: Anas Mosaad
Cc: Judy Nash; user@spark.apache.org
Subject: Re: Spark-SQL JDBC driver
According to the stacktrace, you were still using SQLContext rather than
HiveContext. To interact with Hive, HiveContext *must* be used.
Please refer to this page:
http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
Thanks Judy, this is exactly what I'm looking for. However, and please forgive
me if it's a dumb question: it seems to me that Thrift is the same as the
hive2 JDBC driver; does this mean that starting Thrift will start Hive as
well on the server?
On Mon, Dec 8, 2014 at 9:11 PM, Judy Nash
Essentially, the Spark SQL JDBC Thrift server is just a Spark port of
HiveServer2. You don't need to run Hive, but you do need a working
Metastore.
On 12/9/14 3:59 PM, Anas Mosaad wrote:
Thanks Judy, this is exactly what I'm looking for. However, and please
forgive me if it's a dumb question:
Thanks Cheng,
I thought spark-sql uses the exact same metastore, right? However, it
didn't work as expected. Here's what I did.
In spark-shell, I loaded a CSV file and registered the table, say
countries.
Started the thrift server.
Connected using beeline. When I run show tables or !tables, the countries
table doesn't show up.
How did you register the table under spark-shell? Two things to notice:
1. To interact with Hive, HiveContext instead of SQLContext must be used.
2. `registerTempTable` doesn't persist the table into the Hive metastore,
and the table is lost after quitting spark-shell. Instead, you must use
saveAsTable (see the sketch below).
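A minimal spark-shell sketch of the difference, assuming a 1.2-era build with
Hive support (the JSON input file here is just a stand-in for however you
actually load the data):

  // HiveContext talks to the Hive metastore; SQLContext does not.
  val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
  val countries = hiveContext.jsonFile("countries.json")  // hypothetical input
  countries.registerTempTable("countries_tmp")  // session-only, lost when the shell exits
  countries.saveAsTable("countries")            // persisted in the metastore, visible to beeline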
Back to the first question: will this mandate that Hive is up and running?
When I try it, I get the following exception. The documentation says that
this method works only on a SchemaRDD. I thought that was the reason
countries.saveAsTable did not work, so I created a tmp table that contains the results
According to the stacktrace, you were still using SQLContext rather than
HiveContext. To interact with Hive, HiveContext *must* be used.
Please refer to this page
http://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
On 12/9/14 6:26 PM, Anas Mosaad wrote:
Back to the
You can use the Thrift server for this purpose, then test it with beeline.
See the doc:
https://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
From: Anas Mosaad [mailto:anas.mos...@incorta.com]
Sent: Monday, December 8, 2014 11:01 AM
To: user@spark.apache.org
Even when I comment out those 3 lines, I still get the same error. Did
someone solve this?
When you re-ran sbt, did you clear out the packages first and ensure that
the datanucleus jars were generated within lib_managed? I remember
having to do that when I was testing out different configs.
On Thu, Sep 11, 2014 at 10:50 AM, alexandria1101
alexandria.shea...@gmail.com wrote:
Oh, thanks for reporting this. This should be a bug: since SPARK_HIVE was
deprecated, we shouldn't rely on it any more.
On Wed, Aug 13, 2014 at 1:23 PM, ZHENG, Xu-dong dong...@gmail.com wrote:
Just found this is because the lines below in make-distribution.sh don't work:
if [ $SPARK_HIVE ==
Yin helped me with that, and I appreciate the on-list follow-up. A few
questions: Why is this the case? I guess, does building it with the
thrift server add much more time/size to the final build? It seems that
unless it is documented well, people will miss that and this situation will
occur, so why would we
Hive pulls in a ton of dependencies that we were afraid would break
existing Spark applications. For this reason, all Hive submodules are
optional.
On Tue, Aug 12, 2014 at 7:43 AM, John Omernik j...@omernik.com wrote:
Yin helped me with that, and I appreciate the onlist followup. A few
Hi Cheng,
I also met some issues when I tried to start the ThriftServer based on a build
from the master branch (I could successfully run it from the branch-1.0-jdbc
branch). Below is my build command:
./make-distribution.sh --skip-java-test -Phadoop-2.4 -Phive -Pyarn
-Dyarn.version=2.4.0
Just found this is because the lines below in make-distribution.sh don't work:
if [ $SPARK_HIVE == true ]; then
cp $FWDIR/lib_managed/jars/datanucleus*.jar $DISTDIR/lib/
fi
There is no definition of $SPARK_HIVE in make-distribution.sh. I should set
it explicitly.
On Wed, Aug 13, 2014 at 1:10
Hi John, the JDBC Thrift server resides in its own build profile and needs
to be enabled explicitly with ./sbt/sbt -Phive-thriftserver assembly.
On Tue, Aug 5, 2014 at 4:54 AM, John Omernik j...@omernik.com wrote:
I am using spark-1.1.0-SNAPSHOT right now and trying to get familiar with
the
For the time being, we decided to take a different route. We created a REST
API layer in our app and allowed SQL query passing via the REST API. Internally
we pass that query to the Spark SQL layer on the RDD and return the
results. With this, Spark SQL is supported for our RDDs via this REST API.
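A rough sketch of that idea (the object and method names here are
hypothetical; the HTTP layer itself is omitted, only the Spark SQL call is
shown):

  import org.apache.spark.sql.SQLContext

  object SqlOverRest {
    // The REST handler would pass the SQL text it received into something
    // like this, which runs it against tables the application has already
    // registered on its SQLContext and serializes the rows for the response.
    def runQuery(sqlContext: SQLContext, query: String): Array[String] =
      sqlContext.sql(query).collect().map(_.mkString(","))
  }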
Very cool. Glad you found a solution that works.
On Wed, Jul 30, 2014 at 1:04 PM, Venkat Subramanian vsubr...@gmail.com
wrote:
For the time being, we decided to take a different route. We created a Rest
API layer in our app and allowed SQL query passing via the Rest. Internally
we pass that
1) If I have a standalone Spark application that has already built an RDD,
how can SharkServer2, or for that matter Shark, access 'that' RDD and run
queries on it? All the examples I have seen for Shark, the RDD (tables) are
created within Shark's Spark context and processed.
This is not possible out of the box.
[Venkat] Are you saying: pull the SharkServer2 code into my standalone
Spark application (as a part of the standalone application process), pass
the Spark context of the standalone app to the SharkServer2 SparkContext at
startup, and voilà, we get SQL/JDBC interfaces for the RDDs of the
On Wed, May 28, 2014 at 11:39 PM, Venkat Subramanian vsubr...@gmail.com wrote:
We are planning to use the latest Spark SQL on RDDs. If a third-party
application wants to connect to Spark via JDBC, does Spark SQL have
support?
(We want to avoid going through the Shark/Hive JDBC layer as we need good
Thanks Michael.
OK, will try SharkServer2.
But I have some basic questions on a related area:
1) If I have a standalone Spark application that has already built an RDD,
how can SharkServer2, or for that matter Shark, access 'that' RDD and run
queries on it? All the examples I have seen for Shark,