Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81802372
--- Diff: docs/sql-programming-guide.md ---
@@ -1048,28 +1049,42 @@ the Data Sources API. The following options are supported:
       <code>partitionColumn</code> must be a numeric column from the table in question. Notice
       that <code>lowerBound</code> and <code>upperBound</code> are just used to decide the
       partition stride, not for filtering the rows in the table. So all rows in the table will be
-      partitioned and returned.
+      partitioned and returned. This option applies only to reading.
     </td>
   </tr>
   <tr>
     <td><code>fetchsize</code></td>
     <td>
-      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to a low fetch size (e.g. Oracle with 10 rows).
+      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to a low fetch size (e.g. Oracle with 10 rows). This option applies only to reading.
     </td>
   </tr>
-
+
+  <tr>
+    <td><code>batchsize</code></td>
+    <td>
+      The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option applies only to writing.
--- End diff --
Could you also add the default batch size here?
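
For reference (not part of the diff), here is a minimal sketch of how these read/write options are passed through the DataFrameReader/DataFrameWriter JDBC source. The URL, table names, credentials, and numeric values are placeholders for illustration, not the defaults being asked about:

```scala
// A minimal sketch, assuming an existing SparkSession named `spark`.
// The JDBC URL, table names, credentials, and numeric values are placeholders.
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/mydb")
  .option("dbtable", "schema.source_table")
  .option("user", "username")
  .option("password", "password")
  // Partitioned read: lowerBound/upperBound only set the partition stride;
  // rows outside the bounds are still returned.
  .option("partitionColumn", "id")
  .option("lowerBound", "1")
  .option("upperBound", "1000000")
  .option("numPartitions", "10")
  // Read-only option: rows fetched per round trip.
  .option("fetchsize", "1000")
  .load()

jdbcDF.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/mydb")
  .option("dbtable", "schema.target_table")
  .option("user", "username")
  .option("password", "password")
  // Write-only option: rows inserted per round trip (value here is illustrative).
  .option("batchsize", "1000")
  .save()
```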