The best option would certainly be to recompile the Spark connector for MS SQL Server against the Spark 3.0.1/Scala 2.12 dependencies and fix the compiler errors as you go. The code is open source on GitHub (https://github.com/microsoft/sql-spark-connector). Since the connector uses the DataFrame API rather than RDDs, I wouldn't expect many API changes that cause compiler errors.
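As a rough sketch, the dependency bump might look like the following, assuming the repo builds with sbt; the version numbers and artifact names here are illustrative assumptions, not taken from the repo:

```scala
// build.sbt — illustrative dependency bump; versions/artifacts are assumptions
scalaVersion := "2.12.12"

libraryDependencies ++= Seq(
  // Compile against Spark 3.0.1 instead of 2.4.x; "provided" because
  // the cluster (e.g. Databricks) supplies Spark at runtime
  "org.apache.spark" %% "spark-sql" % "3.0.1" % "provided",
  // The underlying MS SQL JDBC driver may also need a version bump
  "com.microsoft.sqlserver" % "mssql-jdbc" % "8.4.1.jre8"
)
```

Note the `%%` operator appends the Scala binary version (`_2.12`) to the artifact name, which is what makes the cross-compile to Scala 2.12 take effect.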

You may also want to look through the forks; it looks like some people have already tried to port the connector to Spark 3.0.1, for example https://github.com/datarootsio/sql-spark-connector-spark-3
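For reference, since the connector is DataFrame-based, user code like the read path below shouldn't need changes after a rebuild. This is only a sketch: the server, database, table, and credentials are placeholders, and it assumes the connector's documented data source name:

```scala
// Sketch of reading through the connector's DataFrame API.
// Placeholders in <angle brackets> must be filled in; not runnable as-is.
val df = spark.read
  .format("com.microsoft.sqlserver.jdbc.spark") // connector's data source name
  .option("url", "jdbc:sqlserver://<server>:1433;databaseName=<db>")
  .option("dbtable", "dbo.MyTable")
  .option("user", "<user>")
  .option("password", "<password>")
  .load()
```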

-- ND

On 10/26/20 6:18 AM, alejandra.le...@vattenfall.com wrote:

Hi,

In a project where I work with Databricks, we use this connector to read / write data to Azure SQL Database. Currently with Spark 2.4.5 and Scala 2.11.

But those setups are getting old. What happens if we upgrade Spark to 3.0.1 or higher and Scala to 2.12?

According to the versions it supports, this connector does not work with Spark 3. What should we do? Drop the connector, or is there another way to make it work?

I'd appreciate any information that helps.

Med vänlig hälsning / Best regards

*Alejandra Lemmo*
Data Engineer
Customer Analytic

Address: Evenemangsgatan 13, 169 56 Solna

D +46735249832
M +46735249832


alejandra.le...@vattenfall.com
www.vattenfall.se

