Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81587688
--- Diff: docs/sql-programming-guide.md ---
@@ -1048,28 +1049,42 @@ the Data Sources API. The following options are supported:
       <code>partitionColumn</code> must be a numeric column from the table in question. Notice
       that <code>lowerBound</code> and <code>upperBound</code> are just used to decide the
       partition stride, not for filtering the rows in table. So all rows in the table will be
-      partitioned and returned.
+      partitioned and returned. This option is only for reading.
     </td>
   </tr>
   <tr>
     <td><code>fetchsize</code></td>
     <td>
-      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (eg. Oracle with 10 rows).
+      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (eg. Oracle with 10 rows). This option is only for reading.
     </td>
   </tr>
-
+
+  <tr>
+    <td><code>batchsize</code></td>
+    <td>
+      The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option is only for writing.
--- End diff ---
Here, you also need to document the default size. I checked the code; it is `1000`.
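
For context, here is a minimal sketch (not taken from the patch) of how these read and write options might be passed through the JDBC data source. The URL, driver, table, and column names are hypothetical; the `batchsize` of `1000` is the default noted above.

```scala
import java.util.Properties

import org.apache.spark.sql.SparkSession

object JdbcOptionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("JdbcOptionsSketch")
      .getOrCreate()

    // Hypothetical connection URL; the driver must be on the classpath.
    val jdbcUrl = "jdbc:postgresql://localhost:5432/testdb"

    // Read path: partitionColumn/lowerBound/upperBound only set the partition
    // stride (they do not filter rows); fetchsize controls how many rows are
    // fetched per round trip.
    val df = spark.read
      .format("jdbc")
      .option("url", jdbcUrl)
      .option("dbtable", "events")       // hypothetical source table
      .option("partitionColumn", "id")   // must be a numeric column
      .option("lowerBound", "1")
      .option("upperBound", "1000000")
      .option("numPartitions", "10")
      .option("fetchsize", "1000")
      .load()

    // Write path: batchsize controls how many rows are inserted per round
    // trip; per the comment above it defaults to 1000 when not set.
    val writeProps = new Properties()
    writeProps.setProperty("batchsize", "5000")
    df.write
      .mode("append")
      .jdbc(jdbcUrl, "events_copy", writeProps) // hypothetical target table

    spark.stop()
  }
}
```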