Github user micaelcapitao commented on the pull request:
https://github.com/apache/spark/pull/5488#issuecomment-93301868
Hi.
The doc for the SQLContext.jdbc() method says:
> @param columnName the name of a column of integral type that will be used for partitioning.
> @param lowerBound the minimum value of `columnName` to retrieve
> @param upperBound the maximum value of `columnName` to retrieve
> @param numPartitions the number of partitions. the range `minValue`-`maxValue` will be split evenly into this many partitions
This doc should be updated too.
Also, how can one add predicates to limit the scope of the data fetched from the database when using the SQLContext API?
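To make the quoted doc concrete: a minimal sketch (not Spark's actual implementation; the function name is hypothetical) of how a JDBC source might turn `lowerBound`, `upperBound`, and `numPartitions` into one WHERE clause per partition on `columnName`, splitting the range evenly:

```python
def column_partition_clauses(column, lower_bound, upper_bound, num_partitions):
    """Split lower_bound..upper_bound on `column` into num_partitions
    WHERE clauses, one per partition. The first and last partitions are
    left open-ended so rows outside the stated bounds are still read."""
    stride = (upper_bound - lower_bound) // num_partitions
    clauses = []
    current = lower_bound
    for i in range(num_partitions):
        lower = f"{column} >= {current}" if i > 0 else None
        current += stride
        upper = f"{column} < {current}" if i < num_partitions - 1 else None
        if lower and upper:
            clauses.append(f"{lower} AND {upper}")
        else:
            clauses.append(lower or upper)
    return clauses

print(column_partition_clauses("id", 0, 100, 4))
# → ['id < 25', 'id >= 25 AND id < 50', 'id >= 50 AND id < 75', 'id >= 75']
```

Note that under this splitting scheme the bounds only shape the partition ranges; they do not filter rows, which is why a separate predicates mechanism is being asked about here.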