Hi,

We are using Spark SQL (1.3.1) to load data from Microsoft SQL Server using
JDBC (as described in
https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases).
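
For reference, the load we do looks roughly like the sketch below (the URL,
credentials, driver class and table name are made-up placeholders, not our
real details):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    // Spark 1.3.1-style JDBC load, following the linked programming guide.
    val sc = new SparkContext("local[*]", "jdbc-load-example")
    val sqlContext = new SQLContext(sc)

    // "dbo.Orders" and the connection string are hypothetical placeholders.
    val ordersDF = sqlContext.load("jdbc", Map(
      "url"     -> "jdbc:sqlserver://dbserver:1433;databaseName=legacydb;user=etl;password=secret",
      "driver"  -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "dbtable" -> "dbo.Orders"))

    ordersDF.printSchema()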

It works fine except when a column name contains a space (we can't modify
the schemas to remove the spaces, as it is a legacy database).
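
To make the failure concrete: with a hypothetical legacy column named
"Order Date", the pushed-down SELECT that Spark 1.3.1 builds uses the column
name unquoted, which SQL Server rejects (continuing the sketch above):

    // Spark builds roughly:  SELECT Order Date,Customer Id FROM dbo.Orders
    // SQL Server needs:      SELECT [Order Date],[Customer Id] FROM dbo.Orders
    ordersDF.select("Order Date", "Customer Id").show()   // fails at execution time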

Sqoop handles this scenario by enclosing column names in '[ ]', which is the
method Microsoft SQL Server recommends (see
https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/manager/SQLServerManager.java,
line 319).
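
A rough Scala equivalent of that bracket-quoting would be the helper below
(the "]]" doubling is the general T-SQL rule for a ']' inside an identifier,
not necessarily what Sqoop itself does):

    // Bracket-quote an identifier the way SQL Server expects.
    def escapeSqlServerIdentifier(name: String): String =
      "[" + name.replace("]", "]]") + "]"

    escapeSqlServerIdentifier("Order Date")   // => "[Order Date]"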

Is there a way to handle this in Spark SQL?
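
(One possible workaround, untested and using made-up names, might be to push
a bracketed projection down as a subquery via the dbtable option so Spark
only ever sees space-free aliases, but proper identifier quoting in the JDBC
source would be much cleaner:)

    // Untested sketch: alias the bracketed columns in a subquery passed as dbtable.
    val aliasedDF = sqlContext.load("jdbc", Map(
      "url"     -> "jdbc:sqlserver://dbserver:1433;databaseName=legacydb;user=etl;password=secret",
      "driver"  -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
      "dbtable" -> "(SELECT [Order Date] AS OrderDate, [Customer Id] AS CustomerId FROM dbo.Orders) AS t"))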

Thanks,
sachin
