Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1612#discussion_r17504535
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -205,6 +208,54 @@ class SQLContext(@transient val sparkContext: SparkContext)
       }
     
       /**
    +   * Loads from JDBC, returning the ResultSet as a [[SchemaRDD]].
    +   * It uses the MetaData of the PreparedStatement's ResultSet to determine the schema.
    +   *
    +   * @group userf
    +   */
    +  def jdbcResultSet(
    --- End diff --
    
    What we did for Spark Streaming is that each of the external connectors has a module called `XUtils` with static utility functions for creating things:
    
    ```scala
    object KafkaUtils {
      def createKafkaStream(streamingContext: StreamingContext)
    }
    ```
    
    For this reason it might be good to call this `JDBCUtils`, in a similar fashion.
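    
    As a rough sketch of what that pattern looks like in practice (all names here — `DemoContext`, `JdbcUtils`, the `jdbcResultSet` signature — are illustrative stand-ins, not the actual Spark API):
    
    ```scala
    // Illustrative sketch of the XUtils pattern: a standalone object exposing
    // static factory methods that take the context as their first argument,
    // instead of adding the method to the context class itself.
    case class DemoContext(name: String)
    
    object JdbcUtils {
      // Hypothetical factory, mirroring the KafkaUtils.createKafkaStream shape.
      def jdbcResultSet(ctx: DemoContext, url: String): String =
        s"loading $url via ${ctx.name}"
    }
    ```
    
    A caller would then write `JdbcUtils.jdbcResultSet(sqlContext, url)` rather than `sqlContext.jdbcResultSet(url)`, which keeps the core context class free of connector-specific methods.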

