yaooqinn commented on code in PR #46173:
URL: https://github.com/apache/spark/pull/46173#discussion_r1575666204


##########
docs/sql-data-sources-jdbc.md:
##########
@@ -1441,3 +1441,192 @@ The Spark Catalyst data types below are not supported with suitable Oracle types
 - NullType
 - ObjectType
 - VariantType
+
+### Mapping Spark SQL Data Types from Microsoft SQL Server
+
+The following table describes the data type conversions from Microsoft SQL Server data types to Spark SQL Data Types,
+when reading data from a Microsoft SQL Server table using the built-in JDBC data source with mssql-jdbc
+as the activated JDBC driver.
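+
+A minimal sketch of such a read (the JDBC URL, table name, and credentials below are placeholders):
+
+```scala
+// Read a SQL Server table through the built-in JDBC data source using the
+// mssql-jdbc driver; the resulting column types follow the mapping table below.
+val df = spark.read
+  .format("jdbc")
+  .option("url", "jdbc:sqlserver://localhost:1433;databaseName=testdb")
+  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
+  .option("dbtable", "dbo.example_table")
+  .option("user", "username")
+  .option("password", "password")
+  .load()
+df.printSchema()
+```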
+
+
+<table>
+  <thead>
+    <tr>
+      <th><b>SQL Server Data Type</b></th>
+      <th><b>Spark SQL Data Type</b></th>
+      <th><b>Remarks</b></th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>bit</td>
+      <td>BooleanType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>tinyint</td>
+      <td>ShortType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>smallint</td>
+      <td>ShortType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>int</td>
+      <td>IntegerType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>bigint</td>
+      <td>LongType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>float(p), real</td>
+      <td>FloatType</td>
+      <td>1 &le; p &le; 24</td>
+    </tr>
+    <tr>
+      <td>float[(p)]</td>
+      <td>DoubleType</td>
+      <td>25 &le; p &le; 53</td>
+    </tr>
+    <tr>
+      <td>double precision</td>
+      <td>DoubleType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>smallmoney</td>
+      <td>DecimalType(10, 4)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>money</td>
+      <td>DecimalType(19, 4)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>decimal[(p[, s])], numeric[(p[, s])]</td>
+      <td>DecimalType(p, s)</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>date</td>
+      <td>DateType</td>
+      <td></td>
+    </tr>
+    <tr>
+      <td>datetime</td>
+      <td>TimestampType</td>
+      <td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td>
+    </tr>
+    <tr>
+      <td>datetime</td>
+      <td>TimestampNTZType</td>
+      <td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td>
+    </tr>
+    <tr>
+      <td>datetime2 [ (fractional seconds precision) ]</td>
+      <td>TimestampType</td>
+      <td>(Default) preferTimestampNTZ=false or spark.sql.timestampType=TIMESTAMP_LTZ</td>
+    </tr>
+    <tr>
+      <td>datetime2 [ (fractional seconds precision) ]</td>
+      <td>TimestampNTZType</td>
+      <td>preferTimestampNTZ=true or spark.sql.timestampType=TIMESTAMP_NTZ</td>
+    </tr>
+    <tr>
+      <td>datetimeoffset [ (fractional seconds precision) ]</td>
+      <td>StringType</td>

Review Comment:
   
https://github.com/apache/spark/blob/9d715ba491710969340d9e8a49a21d11f51ef7d3/sql/core/src/main/scala/org/apache/spark/sql/jdbc/MsSqlServerDialect.scala#L112-L114
   
   This comment does not appear to be accurate, since we use mssql-jdbc.
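   
   As a quick sanity check (just a sketch, not from this PR; the JDBC URL, credentials, and the cast query are placeholders), the type Spark infers for a `datetimeoffset` column with mssql-jdbc can be inspected directly:
   
   ```scala
   // Sketch: read a single datetimeoffset value through the built-in JDBC
   // source with the mssql-jdbc driver and print the inferred Spark type.
   val df = spark.read
     .format("jdbc")
     .option("url", "jdbc:sqlserver://localhost:1433;databaseName=testdb")
     .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
     .option("query", "SELECT CAST('2024-01-01 00:00:00 +00:00' AS datetimeoffset) AS dto")
     .option("user", "username")
     .option("password", "password")
     .load()
   df.printSchema()
   ```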




