Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15292#discussion_r81585325
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1048,28 +1049,42 @@ the Data Sources API. The following options are supported:
           <code>partitionColumn</code> must be a numeric column from the table in question. Notice
           that <code>lowerBound</code> and <code>upperBound</code> are just used to decide the
           partition stride, not for filtering the rows in table. So all rows in the table will be
    -      partitioned and returned.
    +      partitioned and returned. This option is only for reading.
         </td>
       </tr>
     
       <tr>
         <td><code>fetchsize</code></td>
         <td>
    -      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (eg. Oracle with 10 rows).
    +      The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers which default to low fetch size (eg. Oracle with 10 rows). This option is only for reading.
         </td>
       </tr>
    -  
    +
    +  <tr>
    +     <td><code>batchsize</code></td>
    +     <td>
    +       The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option is only for writing.
    +     </td>
    +  </tr>
    +
    +  <tr>
    +     <td><code>isolationLevel</code></td>
    +     <td>
    +       The transaction isolation level, which applies to the current connection. Please refer to the documentation in <code>java.sql.Connection</code>. This option is only for writing.
    +     </td>
    +   </tr>
    +
       <tr>
         <td><code>truncate</code></td>
         <td>
    -     This is a JDBC writer related option. When <code>SaveMode.Overwrite</code> is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g. indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to <code>false</code>.
    +     This is a JDBC writer related option. When <code>SaveMode.Overwrite</code> is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and prevents the table metadata (e.g. indices) from being removed. However, it will not work in some cases, such as when the new data has a different schema. It defaults to <code>false</code>. This option is only for writing.
        </td>
       </tr>
       
       <tr>
         <td><code>createTableOptions</code></td>
         <td>
    -     This is a JDBC writer related option. If specified, this option allows setting of database-specific table and partition options when creating a table. For example: <code>CREATE TABLE t (name string) ENGINE=InnoDB.</code>
    +     This is a JDBC writer related option. If specified, this option allows setting of database-specific table and partition options when creating a table (e.g. <code>CREATE TABLE t (name string) ENGINE=InnoDB.</code>). This option is only for writing.
    --- End diff --
    
    Normally, we put a comma after `e.g.`
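    
    For anyone skimming the thread, here is a minimal Scala sketch of how the read- and write-side options documented in this hunk are passed through the DataFrame API. The JDBC URL, table names, and credentials are placeholders, and a suitable JDBC driver is assumed to be on the classpath:
    
    ```scala
    import org.apache.spark.sql.{SaveMode, SparkSession}
    
    object JdbcOptionsSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("jdbc-options-sketch").getOrCreate()
    
        // Read side: fetchsize tunes rows fetched per round trip; partitionColumn,
        // lowerBound, upperBound and numPartitions only set the partition stride.
        val df = spark.read
          .format("jdbc")
          .option("url", "jdbc:postgresql://host:5432/db")   // placeholder URL
          .option("dbtable", "schema.tablename")             // placeholder table
          .option("user", "username")
          .option("password", "password")
          .option("partitionColumn", "id")                   // must be a numeric column
          .option("lowerBound", "1")
          .option("upperBound", "100000")
          .option("numPartitions", "10")
          .option("fetchsize", "1000")
          .load()
    
        // Write side: batchsize tunes rows inserted per round trip; isolationLevel
        // applies to the write connection; truncate keeps the existing table (and
        // its metadata) under SaveMode.Overwrite instead of dropping it.
        df.write
          .format("jdbc")
          .option("url", "jdbc:postgresql://host:5432/db")   // placeholder URL
          .option("dbtable", "schema.tablename_copy")        // placeholder table
          .option("user", "username")
          .option("password", "password")
          .option("batchsize", "1000")
          .option("isolationLevel", "READ_COMMITTED")
          .option("truncate", "true")
          .mode(SaveMode.Overwrite)
          .save()
    
        spark.stop()
      }
    }
    ```
    
    Note that, as the doc text above says, the partitioning options only control the stride of the generated queries; every row of the table is still returned.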

