eated Cassandra table.
Spark SQL does not provide any feature for safe parameter binding, so
I thought about using the JDBC thrift server and the JDBC interface.
Inserting data into an external table from Hive is performed by
running CREATE EXTERNAL TABLE ... STORED BY...
However, when trying to execute this statement through the thrift
server, I always get the following error
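For reference, the statement in question would look roughly like the sketch below. The storage handler class and table properties are illustrative assumptions, not verified against any particular Hive–Cassandra connector release:

```sql
-- Hypothetical mapping of a Hive external table onto an existing Cassandra table.
-- Handler class and property names below are assumptions; check your connector's docs.
CREATE EXTERNAL TABLE users (id string, name string)
STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
TBLPROPERTIES ("cassandra.ks.name" = "mykeyspace");
```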
Sorry, we’re running 1.5.1.
From: Sathish Kumaran Vairavelu [mailto:vsathishkuma...@gmail.com]
Sent: October-08-15 12:39 PM
To: Younes Naguib; user@spark.apache.org
Subject: Re: JDBC thrift server
Which version of Spark are you using? You might encounter SPARK-6882
<https://issues.apache.org/jira/browse/SPARK-6882> if Kerberos is enabled.
-Sathish
On Thu, Oct 8, 2015 at 10:46 AM Younes Naguib <
younes.nag...@tritondigital.com> wrote:
Hi,
We've been using the JDBC thrift server for a couple of weeks now and running
queries on it like a regular RDBMS.
We're about to deploy it in a shared production cluster.
Any advice or warnings on such a setup? Yarn or Mesos?
How about dynamic resource allocation in an already running cluster?
Could you please advise.
Thanks in advance.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-generate-assembly-jar-which-includes-jdbc-thrift-server-tp19887p19963.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
0\python"): CreateProcess error=2, The system cannot
find the file specified -> [Help 1]
Thanks in advance.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-generate-assembly-jar-which-includes-jdbc-thrift-server-tp19887p19945.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Yes, I'm building it from Spark 1.1.0
Thanks in advance.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-generate-assembly-jar-which-includes-jdbc-thrift-server-tp19887p19937.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks for your response.
I'm using the following command.
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean
package
Regards.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-generate-assembly-jar-which-includes-jdbc-thrift-server-tp19887p19933.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
What’s the command line you used to build Spark? Notice that you need to
add |-Phive-thriftserver| to build the JDBC Thrift server. This profile
was once removed in v1.1.0, but added back in v1.2.0 because of a
dependency issue introduced by Scala 2.11 support.
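Combined with the mvn command quoted earlier in the thread, the full build invocation would look like this (same profiles and Hadoop version as given above, with only the missing profile added):

```shell
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 \
    -Phive -Phive-thriftserver \
    -DskipTests clean package
```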
On 11/27/14 12:53 AM
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-generate-assembly-jar-which-includes-jdbc-thrift-server-tp19887.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org
Which version are you using? Also |.saveAsTable()| saves the table to the
Hive metastore, so you need to make sure your Spark application points
to the same Hive metastore instance as the JDBC Thrift server. For
example, put |hive-site.xml| under |$SPARK_HOME/conf|, and run
|spark-shell| and
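A minimal |hive-site.xml| for this setup might look like the sketch below. The metastore host is a placeholder; |hive.metastore.uris| is the standard Hive property for pointing clients at a shared metastore service, and 9083 is the usual metastore port:

```xml
<configuration>
  <!-- Placeholder URI: replace with the metastore service the Thrift server uses -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```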
I am writing a Spark job to persist data using HiveContext so that it can
be accessed via the JDBC Thrift server. Although my code doesn't throw an
error, I am unable to see my persisted data when I query from the Thrift
server.
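A minimal version of such a job might look like the following sketch (Spark 1.x-era API; the application name, input path, and table name are all hypothetical):

```scala
// Sketch only: Spark 1.x API, hypothetical names throughout.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object PersistToHive {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("persist-example"))
    // HiveContext picks up hive-site.xml from the classpath; it must point at
    // the same metastore the JDBC Thrift server uses, or the table won't be visible.
    val hiveContext = new HiveContext(sc)
    val df = hiveContext.read.json("hdfs:///tmp/events.json") // hypothetical input
    df.write.saveAsTable("events")                            // registers in the metastore
    sc.stop()
  }
}
```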
I tried three different ways to get this to work:
1)