Github user koeninger commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14502#discussion_r73681837
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
    @@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
         val conn = getConnection()
     val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
     
    -    // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
    -    // rather than pulling entire resultset into memory.
    -    // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
    -    if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
    +    val url = conn.getMetaData.getURL
    +    if (url.matches("jdbc:mysql:.*")) {
    +      // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
    +      // rather than pulling entire resultset into memory.
    +      // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
    +
           stmt.setFetchSize(Integer.MIN_VALUE)
           logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
    +    } else if (url.matches("jdbc:postgresql:*")) {
    --- End diff --
    
    As I recall, the issue there is that otherwise the mysql driver will
    attempt to materialize the entire result set in memory at once, regardless
    of how big it is.
    On Aug 5, 2016 2:07 AM, "Sean Owen" <[email protected]> wrote:
    
    > In core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala
    > <https://github.com/apache/spark/pull/14502#discussion_r73650348>:
    >
    > >        stmt.setFetchSize(Integer.MIN_VALUE)
    > >        logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
    > > +    } else if (url.matches("jdbc:postgresql:*")) {
    >
    > The regex is wrong, as it matches 0 or more : at the end. Actually, why
    > don't we just use startsWith in both cases?
    >
    > Seems reasonable even if 10 is arbitrary. Is that low? But then again,
    > mysql above is asked to retrieve row by row, and I'm not actually sure
    > that's a good idea. I wonder if we should dispense with this and just set
    > it to something moderate like 1000 for all drivers? CC @koeninger
    > <https://github.com/koeninger>
    >
    > Can you update the MySQL link above while you're here to
    > https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-implementation-notes.html
    > as the existing one doesn't work.
    >
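
    The startsWith suggestion above could be sketched roughly as follows. This is a
    hypothetical helper, not the actual JdbcRDD code; the name `DriverFetchSize` and
    the 1000 default for non-MySQL drivers are only the moderate value floated in the
    review, not something the PR settles on.

    ```java
    // Hypothetical sketch of URL-prefix-based fetch-size selection (not Spark's code).
    public class DriverFetchSize {
        /** Pick a fetch-size hint from the JDBC URL prefix, using startsWith
         *  instead of a regex as suggested in the review. */
        public static int forUrl(String url) {
            if (url.startsWith("jdbc:mysql:")) {
                // Connector/J convention: Integer.MIN_VALUE requests row-by-row
                // streaming rather than materializing the whole result set.
                return Integer.MIN_VALUE;
            }
            // Moderate default for other drivers (assumed value, per the review's
            // "something moderate like 1000" suggestion).
            return 1000;
        }
    }
    ```

    startsWith avoids the regex pitfall noted above, where `"jdbc:postgresql:*"`
    matches zero or more trailing colons rather than any suffix.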


