Github user BenFradet commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10060#discussion_r47008015
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -428,6 +461,45 @@ df <- sql(sqlContext, "SELECT * FROM table")
     </div>
     
     
    +## Creating Datasets
    +
    +Datasets are similar to RDDs; however, instead of using Java Serialization or Kryo they use
    +a specialized [Encoder](api/scala/index.html#org.apache.spark.sql.Encoder) to serialize the objects
    +for processing or transmitting over the network. While both encoders and standard serialization are
    +responsible for turning an object into bytes, encoders are code generated dynamically and use a format
    +that allows Spark to perform many operations like filtering, sorting and hashing without deserializing
    +the bytes back into an object.
    +
    +<div class="codetabs">
    +<div data-lang="scala"  markdown="1">
    +
    +{% highlight scala %}
    +// Encoders for most common types are automatically provided by importing sqlContext.implicits._
    +val ds = Seq(1, 2, 3).toDS()
    +ds.map(_ + 1).collect() // Returns: Array(2, 3, 4)
    +
    +// Encoders are also created for case classes.
    +case class Person(name: String, age: Long)
    +val ds = Seq(Person("Andy", 32)).toDS()
    +
    +// DataFrames can be converted to a Dataset by providing a class.  Mapping will be done by name.
    --- End diff ---
    
    2 whitespaces here too
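
    For readers of this thread: the quoted diff is cut off just before the conversion that last comment describes. A minimal sketch of that by-name mapping, assuming the `Person` case class above, a `sqlContext` with its implicits imported, and a hypothetical JSON file path:

    ```scala
    // Read a DataFrame from JSON (the path is illustrative) and convert it to a
    // Dataset[Person] with as[T]; columns are matched to the case class fields by name.
    val path = "examples/src/main/resources/people.json"
    val people = sqlContext.read.json(path).as[Person]
    ```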

