GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15975
Fix Concurrent Table Fetching Using DataFrameReader JDBC APIs
### What changes were proposed in this pull request?
The following two `DataFrameReader` JDBC APIs ignore the user-specified degree of parallelism: `numPartitions` in the first overload, and the length of the `predicates` array in the second.
```Scala
def jdbc(
    url: String,
    table: String,
    columnName: String,
    lowerBound: Long,
    upperBound: Long,
    numPartitions: Int,
    connectionProperties: Properties): DataFrame
```
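A minimal usage sketch of this first overload, assuming an active `SparkSession` named `spark`; the H2 URL, table, and credentials are hypothetical placeholders:
```Scala
import java.util.Properties

val props = new Properties()
props.setProperty("user", "testUser")      // hypothetical credentials
props.setProperty("password", "testPass")

// Split the scan on THEID into 3 ranges, one JDBC fetch per range.
val df = spark.read.jdbc(
  "jdbc:h2:mem:testdb",  // hypothetical JDBC URL
  "TEST.PEOPLE",
  "THEID",               // partition column
  0L,                    // lowerBound
  4L,                    // upperBound
  3,                     // numPartitions -- the value this PR stops ignoring
  props)

// Before the fix the table was fetched over a single connection;
// after the fix this should report 3 partitions.
println(df.rdd.getNumPartitions)
```
The second overload partitions the scan by explicit predicates instead: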
```Scala
def jdbc(
    url: String,
    table: String,
    predicates: Array[String],
    connectionProperties: Properties): DataFrame
```
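A sketch of the predicates overload, reusing the hypothetical connection settings above; each predicate defines one partition, so the number of concurrent fetches should equal `predicates.length`:
```Scala
// One partition per predicate, so two concurrent fetches are expected.
val predicates = Array("THEID < 2", "THEID >= 2")
val df2 = spark.read.jdbc("jdbc:h2:mem:testdb", "TEST.PEOPLE", predicates, props)
println(df2.rdd.getNumPartitions)  // 2 after the fix
```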
This PR fixes both issues. To make the behavior verifiable, the plan output of the `EXPLAIN` command is improved by adding `numPartitions` to the `JDBCRelation` node.
Before the fix,
```
== Physical Plan ==
*Scan JDBCRelation(TEST.PEOPLE) [NAME#1896,THEID#1897] ReadSchema: struct<NAME:string,THEID:int>
```
After the fix,
```
== Physical Plan ==
*Scan JDBCRelation(TEST.PEOPLE) [numPartitions=3] [NAME#1896,THEID#1897] ReadSchema: struct<NAME:string,THEID:int>
```
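For example, the plan above can be printed from the DataFrame itself (using the hypothetical `df` from the earlier sketch):
```Scala
// Prints the physical plan; after the fix the JDBCRelation node
// reports the effective parallelism, e.g. [numPartitions=3].
df.explain()
```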
### How was this patch tested?
Added verification logic to all the test cases for JDBC concurrent fetching.
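A minimal sketch of the kind of check this adds (the exact assertion in the test suite may differ; `df` is the hypothetical DataFrame from above):
```Scala
// Verify the requested parallelism is reflected in the physical plan.
val plan = df.queryExecution.executedPlan.toString
assert(plan.contains("numPartitions=3"))
```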
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gatorsmile/spark jdbc
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/15975.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #15975
----
commit bcc86c0395ddc24cb629f46af9f985bdff0387a6
Author: gatorsmile <[email protected]>
Date: 2016-11-22T05:49:42Z
fix.
----