srowen commented on a change in pull request #28433:
URL: https://github.com/apache/spark/pull/28433#discussion_r418974993



##########
File path: docs/sql-ref-syntax-qry-select-tvf.md
##########
@@ -50,31 +50,12 @@ function_name ( expression [ , ... ] ) [ table_alias ]
 
 ### Supported Table-valued Functions
 
-<table class="table">
-  <thead>
-    <tr><th style="width:25%">Function</th><th>Argument Type(s)</th><th>Description</th></tr>
-  </thead>
-    <tr>
-      <td><b> range </b>( <i>end</i> )</td>
-      <td> Long </td>
-      <td>Creates a table with a single <code>LongType</code> column named <code>id</code>, containing rows in a range from 0 to <code>end</code> (exclusive) with step value 1.</td>
-    </tr>
-    <tr>
-      <td><b> range </b>( <i> start, end</i> )</td>
-      <td> Long, Long </td>
-      <td width="60%">Creates a table with a single <code>LongType</code> column named <code>id</code>, containing rows in a range from <code>start</code> to <code>end</code> (exclusive) with step value 1.</td>
-    </tr>
-    <tr>
-      <td><b> range </b>( <i> start, end, step</i> )</td>
-      <td> Long, Long, Long </td>
-      <td width="60%">Creates a table with a single <code>LongType</code> column named <code>id</code>, containing rows in a range from <code>start</code> to <code>end</code> (exclusive) with <code>step</code> value.</td>
-     </tr>
-    <tr>
-      <td><b> range </b>( <i> start, end, step, numPartitions</i> )</td>
-      <td> Long, Long, Long, Int </td>
-      <td width="60%">Creates a table with a single <code>LongType</code> column named <code>id</code>, containing rows in a range from <code>start</code> to <code>end</code> (exclusive) with <code>step</code> value, with partition number <code>numPartitions</code> specified. </td>
-    </tr>
-</table>
+|Function|Argument Type(s)|Description|
+|--------|----------------|-----------|
+|**range** ( end )|Long|Creates a table with a single *LongType* column named *id*, containing<br> rows in a range from 0 to *end* (exclusive) with step value 1.|

Review comment:
       Nit, but can we preserve the italics on _end_ for example?
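
(For readers of the thread, a quick illustrative sketch of how the four documented `range` forms are invoked; the column name `id`, the exclusive upper bound, and the example output values are inferred from the descriptions in the table above, not verified here.)

```sql
-- Illustrative only; output values assume the semantics documented above.
SELECT * FROM range(5);            -- id = 0, 1, 2, 3, 4   (end only, step 1)
SELECT * FROM range(2, 6);         -- id = 2, 3, 4, 5      (start, end)
SELECT * FROM range(0, 10, 3);     -- id = 0, 3, 6, 9      (start, end, step)
SELECT * FROM range(0, 10, 2, 4);  -- id = 0, 2, ..., 8 spread over 4 partitions
```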

##########
File path: docs/sql-ref-ansi-compliance.md
##########
@@ -27,35 +27,10 @@ The casting behaviours are defined as store assignment rules in the standard.
 
 When `spark.sql.storeAssignmentPolicy` is set to `ANSI`, Spark SQL complies with the ANSI store assignment rules. This is a separate configuration because its default value is `ANSI`, while the configuration `spark.sql.ansi.enabled` is disabled by default.
 
-<table class="table">
-<tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
-<tr>
-  <td><code>spark.sql.ansi.enabled</code></td>
-  <td>false</td>
-  <td>
-    (Experimental) When true, Spark tries to conform to the ANSI SQL specification:
-    1. Spark will throw a runtime exception if an overflow occurs in any operation on integral/decimal field.
-    2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in the SQL parser.
-  </td>
-  <td>3.0.0</td>
-</tr>
-<tr>
-  <td><code>spark.sql.storeAssignmentPolicy</code></td>
-  <td>ANSI</td>
-  <td>
-    (Experimental) When inserting a value into a column with different data type, Spark will perform type coercion.
-    Currently, we support 3 policies for the type coercion rules: ANSI, legacy and strict. With ANSI policy,
-    Spark performs the type coercion as per ANSI SQL. In practice, the behavior is mostly the same as PostgreSQL.
-    It disallows certain unreasonable type conversions such as converting string to int or double to boolean.
-    With legacy policy, Spark allows the type coercion as long as it is a valid Cast, which is very loose.
-    e.g. converting string to int or double to boolean is allowed.
-    It is also the only behavior in Spark 2.x and it is compatible with Hive.
-    With strict policy, Spark doesn't allow any possible precision loss or data truncation in type coercion,
-    e.g. converting double to int or decimal to double is not allowed.
-  </td>
-  <td>3.0.0</td>
-</tr>
-</table>
+|Property Name|Default|Meaning|Since Version|
+|-------------|-------|-------|-------------|
+|`spark.sql.ansi.enabled`|false|(Experimental) When true, Spark tries to conform to the ANSI SQL specification: <br> 1. Spark will throw a runtime exception if an overflow occurs in any operation on integral/decimal field. <br> 2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in the SQL parser.|3.0.0|

Review comment:
       Also not a big deal, but if you can search/replace `<br>` with `<br/>`, that's slightly more correct.
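
(Side note for readers: a rough sketch of how the two properties in that table are set from SQL; the overflow example is assumed from the description in the table above rather than verified here.)

```sql
-- Illustrative only, based on the descriptions in the table above.
SET spark.sql.ansi.enabled=true;
SET spark.sql.storeAssignmentPolicy=ANSI;

-- With ANSI mode enabled, the doc says an overflow in an operation on an
-- integral field raises a runtime exception instead of wrapping around:
SELECT 2147483647 + 1;
```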

##########
File path: docs/sql-ref-ansi-compliance.md
##########
@@ -27,35 +27,10 @@ The casting behaviours are defined as store assignment rules in the standard.
 
 When `spark.sql.storeAssignmentPolicy` is set to `ANSI`, Spark SQL complies with the ANSI store assignment rules. This is a separate configuration because its default value is `ANSI`, while the configuration `spark.sql.ansi.enabled` is disabled by default.
 
-<table class="table">
-<tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr>
-<tr>
-  <td><code>spark.sql.ansi.enabled</code></td>
-  <td>false</td>
-  <td>
-    (Experimental) When true, Spark tries to conform to the ANSI SQL specification:
-    1. Spark will throw a runtime exception if an overflow occurs in any operation on integral/decimal field.
-    2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in the SQL parser.
-  </td>
-  <td>3.0.0</td>
-</tr>
-<tr>
-  <td><code>spark.sql.storeAssignmentPolicy</code></td>
-  <td>ANSI</td>
-  <td>
-    (Experimental) When inserting a value into a column with different data type, Spark will perform type coercion.
-    Currently, we support 3 policies for the type coercion rules: ANSI, legacy and strict. With ANSI policy,
-    Spark performs the type coercion as per ANSI SQL. In practice, the behavior is mostly the same as PostgreSQL.
-    It disallows certain unreasonable type conversions such as converting string to int or double to boolean.
-    With legacy policy, Spark allows the type coercion as long as it is a valid Cast, which is very loose.
-    e.g. converting string to int or double to boolean is allowed.
-    It is also the only behavior in Spark 2.x and it is compatible with Hive.
-    With strict policy, Spark doesn't allow any possible precision loss or data truncation in type coercion,
-    e.g. converting double to int or decimal to double is not allowed.
-  </td>
-  <td>3.0.0</td>
-</tr>
-</table>
+|Property Name|Default|Meaning|Since Version|
+|-------------|-------|-------|-------------|
+|`spark.sql.ansi.enabled`|false|(Experimental) When true, Spark tries to conform to the ANSI SQL specification: <br> 1. Spark will throw a runtime exception if an overflow occurs in any operation on integral/decimal field. <br> 2. Spark will forbid using the reserved keywords of ANSI SQL as identifiers in the SQL parser.|3.0.0|

Review comment:
       For long table cells, is there any standard way to wrap the markdown? The markdown source is harder to read now, but not terrible.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
