Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r17279050
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -205,6 +208,54 @@ class SQLContext(@transient val sparkContext: SparkContext)
}
/**
+ * Loads from JDBC, returning the ResultSet as a [[SchemaRDD]].
+ * It uses the metadata of the PreparedStatement's ResultSet to determine the schema.
+ *
+ * @group userf
+ */
+ def jdbcResultSet(
--- End diff --
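For context, the technique described in the quoted scaladoc — deriving the schema from the PreparedStatement's ResultSetMetaData before the query runs — looks roughly like this in plain JDBC. This is a minimal sketch with no Spark dependency; the H2 URL and the query are placeholders, and it assumes a driver that exposes metadata prior to execution (as H2 does):

```scala
import java.sql.{DriverManager, ResultSetMetaData}

object SchemaFromJdbc {
  def main(args: Array[String]): Unit = {
    // Hypothetical in-memory H2 URL; any JDBC source works the same way.
    val conn = DriverManager.getConnection("jdbc:h2:mem:test")
    try {
      // getMetaData on a PreparedStatement is available before execution,
      // so the schema can be determined without running the query.
      val stmt = conn.prepareStatement("SELECT 1 AS id, 'a' AS name")
      val md: ResultSetMetaData = stmt.getMetaData
      val columns = (1 to md.getColumnCount).map { i =>
        (md.getColumnName(i),
         md.getColumnTypeName(i),
         md.isNullable(i) != ResultSetMetaData.columnNoNulls)
      }
      // A real implementation would map these (name, type, nullable)
      // triples onto Spark SQL data types to build the SchemaRDD's schema.
      columns.foreach(println)
    } finally {
      conn.close()
    }
  }
}
```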
For the 1.2 release we are going to be focusing on adding more external
data sources. As part of this we are trying to change the way we add them,
to avoid SQLContext growing too large. What do you think about adding an
object, `org.apache.spark.sql.jdbc.JDBC`, that has these methods instead of
adding them here? A rough sketch of what that might look like is below.
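
This is only a sketch of one possible shape for the suggested object: the JDBC entry points take the SQLContext as a parameter rather than living on it. The parameter list is hypothetical, since the full signature is truncated in the diff above, and the body is intentionally left unimplemented; the point is only where the method lives.

```scala
package org.apache.spark.sql.jdbc

import org.apache.spark.sql.{SchemaRDD, SQLContext}

object JDBC {
  // Mirrors the method under review, but as a standalone entry point
  // instead of a new method on SQLContext itself.
  def jdbcResultSet(
      sqlContext: SQLContext,
      url: String,
      sql: String): SchemaRDD = {
    ??? // build the SchemaRDD from the ResultSet metadata, as in the PR
  }
}
```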