[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14502



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-07 Thread princejwesley
Github user princejwesley commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73797732
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,14 +79,19 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.startsWith("jdbc:mysql:")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
--- End diff --

@srowen Updated.



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73781961
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,14 +79,19 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.startsWith("jdbc:mysql:")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
--- End diff --

(This line is too long, fails style checks)



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread princejwesley
Github user princejwesley commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73781306
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,17 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.startsWith("jdbc:mysql:")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
-  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming")
+} else {
+  stmt.setFetchSize(100)
+  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force streaming")
--- End diff --

@srowen Addressed



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73737921
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,17 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.startsWith("jdbc:mysql:")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
-  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming")
+} else {
+  stmt.setFetchSize(100)
+  logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force streaming")
--- End diff --

Nit: you could use string interpolation in both statements. Also, it's not really streamed one record at a time in this case, so maybe just log the statement fetch size.
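
A minimal sketch of that nit, assuming logInfo comes from Spark's Logging trait as in JdbcRDD (string interpolation instead of "+" concatenation):

    logInfo(s"statement fetch size set to: ${stmt.getFetchSize} to force MySQL streaming")
    // and for the generic branch, just report the size:
    logInfo(s"statement fetch size set to: ${stmt.getFetchSize}")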



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread princejwesley
Github user princejwesley commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73719607
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

@srowen Updated!



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73714157
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

OK got it. We'll leave that, but perhaps `setFetchSize(100)` for everything 
else. @princejwesley 



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread koeninger
Github user koeninger commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73713830
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

What I'm saying is that, at least at the time, the mysql driver ignored the
actual number set there, unless it was min value.  It only used it to
toggle between "stream results" and "fetch all results", with nothing in
between.
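
A short standalone sketch of the toggle being described; the URL and query are placeholders, not code from the PR:

    import java.sql.{DriverManager, ResultSet}

    val conn = DriverManager.getConnection("jdbc:mysql://localhost/test") // placeholder
    val stmt = conn.prepareStatement(
      "SELECT * FROM big_table", ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

    // Connector/J (at the time): only Integer.MIN_VALUE switches the driver to
    // row-by-row streaming; any other value leaves it in "fetch all results"
    // mode, materializing the whole result set on the client.
    stmt.setFetchSize(Integer.MIN_VALUE) // stream results
    // stmt.setFetchSize(100)            // would still fetch everything into memory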


[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73708585
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

Yeah, that's why the fetch size shouldn't be effectively infinite, but I think this mode means fetching one row at a time, which is the other extreme. What if this were, say, fetching 100 records at once? If that strikes you as OK, maybe that's better for efficiency.



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread koeninger
Github user koeninger commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73681837
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

As I recall, the issue there is that otherwise the mysql driver will
attempt to materialize the entire result set in memory at once, regardless
of how big it is.



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread princejwesley
Github user princejwesley commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73651002
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

I'll update the PR tonight (IST).




[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-05 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14502#discussion_r73650348
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -79,12 +79,18 @@ class JdbcRDD[T: ClassTag](
 val conn = getConnection()
 val stmt = conn.prepareStatement(sql, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

-// setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
-// rather than pulling entire resultset into memory.
-// see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
-if (conn.getMetaData.getURL.matches("jdbc:mysql:.*")) {
+val url = conn.getMetaData.getURL
+if (url.matches("jdbc:mysql:.*")) {
+  // setFetchSize(Integer.MIN_VALUE) is a mysql driver specific way to force streaming results,
+  // rather than pulling entire resultset into memory.
+  // see http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
+
   stmt.setFetchSize(Integer.MIN_VALUE)
   logInfo("statement fetch size set to: " + stmt.getFetchSize + " to force MySQL streaming ")
+} else if (url.matches("jdbc:postgresql:*")) {
--- End diff --

The regex is wrong, as it matches 0 or more `:` at the end. Actually, why don't we just use startsWith in both cases?

Seems reasonable even if 10 is arbitrary. Is that low? But then again MySQL above is asked to retrieve row by row, and I'm not actually sure that's a good idea. I wonder if we should dispense with this and just set it to something moderate like 1000 for all drivers? CC @koeninger

Can you update the MySQL link above while you're here to https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-implementation-notes.html as the existing one doesn't work.
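
To make the regex point concrete, a REPL-style sketch (the URLs are just examples):

    // String.matches must match the ENTIRE string, and "*" applies only to the
    // preceding ":", so the pattern matches "jdbc:postgresql" plus zero or more
    // trailing colons -- never a full connection URL:
    "jdbc:postgresql://host/db".matches("jdbc:postgresql:*")   // false
    "jdbc:postgresql".matches("jdbc:postgresql:*")             // true

    // startsWith expresses the actual intent:
    "jdbc:postgresql://host/db".startsWith("jdbc:postgresql:") // true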



[GitHub] spark pull request #14502: [SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

2016-08-04 Thread princejwesley
GitHub user princejwesley opened a pull request:

https://github.com/apache/spark/pull/14502

[SPARK-16909][Spark Core] - Streaming for postgreSQL JDBC driver

As per the postgreSQL JDBC driver [implementation](https://github.com/pgjdbc/pgjdbc/blob/ab2a6d89081fc2c1fdb2a8600f413db33669022c/pgjdbc/src/main/java/org/postgresql/PGProperty.java#L99), the default record fetch size is 0 (which means it caches all records).

This fix enforces a default record fetch size of 10 to enable streaming of data.
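
A minimal sketch of what a non-zero fetch size buys with pgjdbc; the URL and query are placeholders, and note the driver only uses a cursor (i.e. actually batches rows) when autocommit is disabled:

    import java.sql.{DriverManager, ResultSet}

    val conn = DriverManager.getConnection("jdbc:postgresql://localhost/test") // placeholder
    conn.setAutoCommit(false) // pgjdbc honors fetchSize only with autocommit off

    val stmt = conn.prepareStatement(
      "SELECT * FROM big_table", ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
    stmt.setFetchSize(10) // default 0 = pull the entire result set at once

    val rs = stmt.executeQuery() // rows now arrive from the server 10 at a time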

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/princejwesley/spark spark-postgres

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14502.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14502


commit c45f8212add3ac7ce04edd0ea1b3903ff9782c6d
Author: Prince J Wesley 
Date:   2016-08-05T03:26:28Z

SPARK-16909 - Streaming for postgreSQL JDBC driver



