Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16997#discussion_r102008215
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -297,6 +297,9 @@ reflection and become the names of the columns. Case classes can also be nested
     types such as `Seq`s or `Array`s. This RDD can be implicitly converted to a DataFrame and then be
     registered as a table. Tables can be used in subsequent SQL statements.
     
    +Spark Encoders are used to convert a JVM object to Spark SQL representation. When we want to make a datase, Spark requires an encoder which takes the form Encoder[T] where T is the type we want to be encoded. When we try to create dataset with a custom type of object, then may result into <b>java.lang.UnsupportedOperationException: No Encoder found for Object-Name</b>.
    --- End diff ---
    
    It's minor, but there are enough problems with the text to call it out. 
Please match the voice of the other text and avoid 'we'. Typos: "datase", 
"spark sql" and "kryo" for example. Use back-ticks to consistently format code 
if you're going to. What is Object-Name? 


