Github user seancxmao commented on a diff in the pull request:
https://github.com/apache/spark/pull/22461#discussion_r219539062
--- Diff: docs/sql-programming-guide.md ---
@@ -1287,8 +1287,18 @@ bin/spark-shell --driver-class-path postgresql-9.4.1207.jar --jars postgresql-9.
Tables from the remote database can be loaded as a DataFrame or Spark SQL temporary view using
the Data Sources API. Users can specify the JDBC connection properties in the data source options.
<code>user</code> and <code>password</code> are normally provided as connection properties for
-logging into the data sources. In addition to the connection properties, Spark also supports
-the following case-insensitive options:
+logging into the data sources. Vendor-specific connection properties can also be passed to the
+underlying JDBC driver in the same way. For example:
+
+{% highlight scala %}
+// oracle.jdbc.mapDateToTimestamp defaults to true. If this flag is not disabled, a column of Oracle
+// DATE type will be resolved as Catalyst TimestampType, which is probably not the desired behavior.
+spark.read.format("jdbc")
+  .option("url", oracleJdbcUrl)
+  .option("oracle.jdbc.mapDateToTimestamp", "false")
+  .load()
+{% endhighlight %}
+
--- End diff --
I have moved this description to the `Troubleshooting` section. I also tried to
brush up the description. Writing good documentation is sometimes more difficult
than writing code. I really need your help :)
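
For context, a complete read using this option might look like the sketch below. This is only an illustration, not the exact snippet from the PR: `spark` is assumed to be an existing `SparkSession`, and the URL and table name are placeholders. It cannot run without a Spark runtime and the Oracle JDBC driver on the classpath.

{% highlight scala %}
// Hypothetical sketch: read an Oracle table over JDBC while keeping Oracle
// DATE columns as Catalyst DateType instead of TimestampType.
// The url and dbtable values are placeholders, not values from the PR.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//host:1521/service") // placeholder URL
  .option("dbtable", "my_schema.my_table")                // placeholder table
  .option("oracle.jdbc.mapDateToTimestamp", "false")      // passed through to the Oracle driver
  .load()
{% endhighlight %}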
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]