Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8879#discussion_r40388266
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
    @@ -531,6 +531,32 @@ class SQLContext(@transient val sparkContext: 
SparkContext)
       }
     
       /**
    +   * Applies a schema to a List of Java Beans.
    +   *
    +   * WARNING: Since there is no guaranteed ordering for fields in a Java 
Bean,
    +   *          SELECT * queries will return the columns in an undefined 
order.
    +   * @group dataframes
    +   * @since 1.6.0
    +   */
    +  def createDataFrame(data: java.util.List[_], beanClass: Class[_]): 
DataFrame = {
    +    val schema = getSchema(beanClass)
    --- End diff --
    
    Perhaps I could take the code that works on Iterators of Rows and move it 
into a shared function both paths could use (and have it take the bean info 
directly, since in local mode we don't need to construct the class by name, it 
already exists). I could also just parallelize the list and call the RDD 
implementation, but I figured it would be better to use the LocalRelation, as 
we did with the List of Rows.
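
    To illustrate the shared helper idea: below is a minimal, hypothetical 
sketch (not the actual Spark code) of a "beans to rows" function that uses the 
JavaBeans `Introspector` to discover getters once and then applies them to 
every bean in an iterator. The `BeanRows` class, `beansToRows` method, and 
`Person` bean are all invented for illustration. Since the Introspector does 
not guarantee property order matches declaration order, this also shows why 
the doc comment warns about SELECT * column ordering.

    ```java
    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class BeanRows {
        // Hypothetical sketch of a shared "iterator of beans -> rows" helper:
        // discover the bean's getters once, then invoke them per element.
        static List<List<Object>> beansToRows(Iterator<?> data, Class<?> beanClass)
                throws Exception {
            BeanInfo info = Introspector.getBeanInfo(beanClass);
            List<Method> getters = new ArrayList<>();
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                if (!pd.getName().equals("class")) {  // skip Object.getClass
                    getters.add(pd.getReadMethod());
                }
            }
            List<List<Object>> rows = new ArrayList<>();
            while (data.hasNext()) {
                Object bean = data.next();
                List<Object> row = new ArrayList<>();
                for (Method getter : getters) {
                    row.add(getter.invoke(bean));
                }
                rows.add(row);
            }
            return rows;
        }

        // A toy bean for the demonstration (invented for this sketch).
        public static class Person {
            private String name;
            public String getName() { return name; }
            public void setName(String n) { this.name = n; }
        }

        public static void main(String[] args) throws Exception {
            Person p = new Person();
            p.setName("alice");
            List<List<Object>> rows =
                beansToRows(List.of(p).iterator(), Person.class);
            System.out.println(rows);  // prints [[alice]]
        }
    }
    ```

    Because the getter list is computed once per call rather than per element, 
the same helper could serve both the local-List path and an RDD-partition 
path, which is the reuse the comment is suggesting.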

