Bummer. Out of curiosity, if you were to use classpath.first, or perhaps
copy the jar to the slaves, would that actually do the trick? The latter
isn't very efficient, but I'm curious whether it would work.
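As a sketch of the two approaches being discussed (jar path, master URL, and slave host names are all hypothetical placeholders):

```shell
# 1. Put the driver jar on both the driver and executor classpaths explicitly:
spark-shell \
  --master spark://master:7077 \
  --driver-class-path /opt/jars/sqljdbc41.jar \
  --conf spark.executor.extraClassPath=/opt/jars/sqljdbc41.jar

# 2. Or copy the jar to the same path on every slave first, so the
#    executor classpath entry above resolves locally on each node:
for host in slave1 slave2; do
  scp /opt/jars/sqljdbc41.jar "$host:/opt/jars/"
done
```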
On Thu, Apr 16, 2015 at 7:14 AM ARose wrote:
> I take it back. My solution only works when you set the master to
> "local". I get the same error when I try to run it on the cluster.
I take it back. My solution only works when you set the master to "local". I
get the same error when I try to run it on the cluster.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Microsoft-SQL-jdbc-support-from-spark-sql-tp22399p22525.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Looks like a good option. BTW, Slick 3.0 is around the corner:
http://slick.typesafe.com/news/2015/04/02/slick-3.0.0-RC3-released.html
Thanks
I am running the queries from spark-sql. I don't think it can communicate
with the Thrift server. Can you tell me how I should run the queries to make
it work?
I was running the Spark shell and spark-sql with the --jars option containing
the paths when I got my error. I am not sure what the correct way to add jars
is. I tried placing the jar inside the directory you mentioned but still get
the error. I will give the code you posted a try. Thanks.
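For reference, the invocation being described would look roughly like this (the jar path is a hypothetical placeholder). In Spark 1.x, --jars ships the jar to executors, but --driver-class-path is typically also needed so the driver process itself can load the JDBC driver class:

```shell
spark-sql \
  --jars /opt/jars/sqljdbc41.jar \
  --driver-class-path /opt/jars/sqljdbc41.jar
```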
I have found that it works if you place sqljdbc41.jar directly in the
following folder:
YOUR_SPARK_HOME/core/target/jars/
That way Spark will have the SQL Server JDBC driver when it computes its
classpath.
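In shell form (the jar's source path is hypothetical; YOUR_SPARK_HOME is the Spark checkout as in the message above):

```shell
# Copy the SQL Server JDBC driver where Spark's classpath computation picks it up.
cp /opt/jars/sqljdbc41.jar "$YOUR_SPARK_HOME/core/target/jars/"
```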
Your first DDL should be correct (as long as the JDBC URL is correct).
The string after USING should be the data source name
("org.apache.spark.sql.jdbc" or simply "jdbc").
The SQLException here indicates that Spark SQL couldn't find the SQL Server
JDBC driver on the classpath.
As Denny said, SQL Server is not yet supported through the JDBC data source.
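For reference, a DDL of the shape being described might look like this (server, database, credentials, and table name are all hypothetical placeholders):

```sql
-- Register a SQL Server table through the generic JDBC data source.
-- sqljdbc41.jar must already be on Spark's classpath.
CREATE TEMPORARY TABLE people
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:sqlserver://dbhost:1433;DatabaseName=mydb;user=spark;password=secret",
  dbtable "dbo.people"
);
```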
That's correct - at this time MS SQL Server is not supported through the
JDBC data source. In my environment, we've been using Hadoop streaming to
extract the data from multiple SQL Servers, pushing the data into HDFS,
creating the Hive tables and/or converting them into Parquet, and then
querying them from Spark SQL.
I am having the same issue with my Java application.
String url = "jdbc:sqlserver://" + host + ":1433;DatabaseName=" +
    database + ";integratedSecurity=true";
String driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
SparkConf conf = new SparkConf().setAppName(appName);
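A guess at how that snippet continues, using the Spark 1.3-era Java API (the app name, host, database, and table name are hypothetical placeholders, not from the original message):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SqlServerQuery {
    public static void main(String[] args) throws Exception {
        String host = "dbhost";      // hypothetical
        String database = "mydb";    // hypothetical
        String url = "jdbc:sqlserver://" + host + ":1433;DatabaseName="
                + database + ";integratedSecurity=true";

        // Register the driver explicitly; the jar still has to be on both
        // the driver and executor classpaths (e.g. via --jars).
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

        SparkConf conf = new SparkConf().setAppName("sqlserver-query");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Spark 1.3: SQLContext.jdbc(url, table) returns a DataFrame.
        DataFrame df = sqlContext.jdbc(url, "dbo.mytable");
        df.show();
        sc.stop();
    }
}
```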
Thanks for the information. Hopefully this will happen in the near future.
For now my best bet would be to export the data and import it into Spark SQL.
On 7 April 2015 at 11:28, Denny Lee wrote:
> At this time, the JDBC data source is not extensible, so it cannot support
> SQL Server. There was some thought - credit to Cheng Lian for this - about
> making the JDBC data source extensible for third-party support, possibly
> via Slick.
At this time, the JDBC data source is not extensible, so it cannot support
SQL Server. There was some thought - credit to Cheng Lian for this - about
making the JDBC data source extensible for third-party support, possibly via
Slick.
On Mon, Apr 6, 2015 at 10:41 PM bipin wrote:
> Hi, I am try