The Spark docs section for "JDBC to Other Databases"
(https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases)
describes the partitioning as "... Notice that lowerBound and upperBound
are just used to decide the partition stride, not for filtering the rows
in table."
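To make that sentence concrete, here is an illustrative pure-Python sketch (not Spark's actual implementation) of how lowerBound, upperBound, and numPartitions are turned into per-partition WHERE clauses. The column name, bounds, and helper name are made up for the example; the key point is that the first and last predicates are open-ended, so rows outside [lowerBound, upperBound) are still read, just all into the boundary partitions.

```python
# Sketch (assumption: simplified version of Spark's JDBC partitioning)
# of how lowerBound/upperBound define the stride between partitions.
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    stride = (upper - lower) // num_partitions  # partition stride
    preds = []
    bound = lower
    for i in range(num_partitions):
        lo = None if i == 0 else bound          # first partition: no lower cut-off
        bound += stride
        hi = None if i == num_partitions - 1 else bound  # last: no upper cut-off
        if lo is None:
            preds.append(f"{column} < {hi} OR {column} IS NULL")
        elif hi is None:
            preds.append(f"{column} >= {lo}")
        else:
            preds.append(f"{column} >= {lo} AND {column} < {hi}")
    return preds

for p in jdbc_partition_predicates("id", 0, 100, 4):
    print(p)
# id < 25 OR id IS NULL
# id >= 25 AND id < 50
# id >= 50 AND id < 75
# id >= 75
```

Note that no predicate filters rows below 0 or at/above 100, which is exactly what the quoted doc sentence means: the bounds set the stride, they do not restrict which rows are returned.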
I'm not a Python expert, so I'm wondering if anybody has a working
example of a custom partitioner for the "partitionFunc" argument
(default "portable_hash") of rdd.partitionBy()?