sadikovi commented on PR #42896:
URL: https://github.com/apache/spark/pull/42896#issuecomment-1718347021
Thank you @dongjoon-hyun!
I forgot to mention in the PR that I did test the dialect manually against a few
tables and queries. For example, this query works:
```
scala> val df = spark.read.format("jdbc")
  .option("url", "jdbc:databricks://<host>.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/23982398239232323/0912-023325-neasdf78;AuthMech=3;UID=token;PWD=<token>")
  .option("dbtable", "ivan_test")
  // for some reason, the driver was not loading automatically on my machine
  .option("driver", "com.databricks.client.jdbc.Driver")
  .load()
df: org.apache.spark.sql.DataFrame = [a: int, b: string ... 3 more fields]
scala> df.show
+---+---+----+---+---+
| a| b| c| d| e|
+---+---+----+---+---+
| 1| 2|true|3.4|5.6|
+---+---+----+---+---+
```
whereas without the dialect the same read fails with type conversion errors.
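For context, a custom dialect hooks into Spark's JDBC data source roughly as follows. This is a minimal sketch using the public `JdbcDialect` developer API; the object name and type mappings here are illustrative only, not the ones implemented in this PR:

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}
import org.apache.spark.sql.types._

// Hypothetical example dialect: claims URLs with the Databricks JDBC
// scheme and overrides how vendor-reported JDBC types map to Catalyst
// types, so that reads do not fail with conversion errors.
object ExampleDatabricksDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:databricks")

  override def getCatalystType(
      sqlType: Int,
      typeName: String,
      size: Int,
      md: MetadataBuilder): Option[DataType] =
    sqlType match {
      case Types.BOOLEAN => Some(BooleanType) // illustrative mapping
      case _             => None // defer to Spark's default mapping
    }
}

// Registering makes the dialect visible to all subsequent JDBC reads.
JdbcDialects.registerDialect(ExampleDatabricksDialect)
```

Once a dialect whose `canHandle` matches the URL is registered (or built in, as this PR does), Spark picks it up automatically for `spark.read.format("jdbc")` against that URL.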
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]