Github user BenFradet commented on a diff in the pull request:
https://github.com/apache/spark/pull/10060#discussion_r47007845
--- Diff: docs/sql-programming-guide.md ---
@@ -428,6 +461,45 @@ df <- sql(sqlContext, "SELECT * FROM table")
</div>
+## Creating Datasets
+
+Datasets are similar to RDDs, however, instead of using Java Serialization or Kryo they use
+a specialized [Encoder](api/scala/index.html#org.apache.spark.sql.Encoder) to serialize the objects
+for processing or transmitting over the network. While both encoders and standard serialization are
+responsible for turning an object into bytes, encoders are code generated dynamically and use a format
+that allows Spark to perform many operations like filtering, sorting and hashing without deserializing
+the back into an object.
--- End diff ---
the **bytes** back into an object?
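
The encoder behavior described in the diff can be illustrated with a short sketch. This assumes a Spark 1.6-style `SQLContext` already bound to `sqlContext`; the `Person` case class and the sample rows are hypothetical, not taken from the guide:

```scala
// An Encoder for a case class is derived automatically via implicits,
// so no Java serialization or Kryo registration is required.
case class Person(name: String, age: Long)

import sqlContext.implicits._

// toDS() builds a Dataset[Person] backed by the generated encoder.
val ds = Seq(Person("Andy", 32), Person("Justin", 19)).toDS()

// Filtering can operate on the encoded binary format without first
// deserializing every row back into a Person object.
ds.filter(_.age > 20).show()
```

This is only a sketch of the API the diff documents, not text proposed for the guide itself.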