Github user HarshSharma8 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16997#discussion_r102011677
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -297,6 +297,9 @@ reflection and become the names of the columns. Case classes can also be nested
     types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
     registered as a table. Tables can be used in subsequent SQL statements.
     
    +Spark Encoders are used to convert a JVM object into Spark SQL's internal
representation. When we want to create a Dataset, Spark requires an encoder of the
form `Encoder[T]`, where `T` is the type to be encoded. Trying to create a Dataset
with a custom object type may fail with
<b>java.lang.UnsupportedOperationException: No Encoder found for Object-Name</b>.
    --- End diff --
    
    Hello srowen,
    I have updated the content to match the voice of the surrounding content; you
can have another look at it.
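    
    For reference, here is a minimal sketch of the behaviour the added paragraph
    describes. It assumes Spark 2.x with a local `SparkSession`; the `Device` and
    `Reading` types are hypothetical and are used only for illustration:
    
    ```scala
    import org.apache.spark.sql.{Encoders, SparkSession}
    
    // Hypothetical types, used only to illustrate encoder derivation.
    class Device(val id: String)                    // plain class: Spark SQL has no built-in encoder for it
    case class Reading(device: Device, value: Int)  // case class wrapping the unsupported type
    
    object EncoderExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("encoder-example").master("local[*]").getOrCreate()
        import spark.implicits._
    
        // Works: encoders for tuples and case classes of supported field types
        // are derived implicitly via spark.implicits._.
        val people = Seq(("Ada", 36), ("Grace", 45)).toDS()
        people.show()
    
        // Deriving an encoder for Reading fails at runtime on the `device` field with
        //   java.lang.UnsupportedOperationException: No Encoder found for Device
        // val broken = Seq(Reading(new Device("d1"), 7)).toDS()
    
        // One workaround is to supply an explicit binary (Kryo) encoder for the whole type.
        val readings = spark.createDataset(Seq(Reading(new Device("d1"), 7)))(Encoders.kryo[Reading])
        println(readings.count())
    
        spark.stop()
      }
    }
    ```
    
    Note that the Kryo encoder stores the whole object as a single binary column, so
    it avoids the exception at the cost of columnar access to the individual fields.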


