Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17938#discussion_r116371744
--- Diff: docs/sql-programming-guide.md ---
@@ -581,6 +581,113 @@ Starting from Spark 2.1, persistent datasource tables have per-partition metadata
Note that partition information is not gathered by default when creating external datasource tables (those with a `path` option).
To sync the partition information in the metastore, you can invoke `MSCK REPAIR TABLE`.
+### Bucketing, Sorting and Partitioning
+
+For file-based data sources it is also possible to bucket and sort or partition the output.
+Bucketing and sorting are applicable only to persistent tables:
+
+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+{% include_example write_sorting_and_bucketing scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+{% include_example write_sorting_and_bucketing java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% include_example write_sorting_and_bucketing python/sql/datasource.py %}
+</div>
+
+<div data-lang="sql" markdown="1">
+
+{% highlight sql %}
+
+CREATE TABLE users_bucketed_by_name(
+ name STRING,
+ favorite_color STRING,
+ favorite_numbers array<integer>
+) USING parquet
+CLUSTERED BY(name) INTO 42 BUCKETS;
+
+{% endhighlight %}
+
+</div>
+
+</div>
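
The Scala example referenced above is included from an external file rather than shown inline; a minimal sketch of what bucketed, sorted output looks like with the `DataFrameWriter` API (assuming an active `spark` session and a `peopleDF` DataFrame, both names illustrative) is:

```scala
// Bucket by the "name" column into 42 buckets and sort each bucket by "name".
// Bucketing requires saveAsTable: the bucket metadata lives in the metastore.
peopleDF.write
  .bucketBy(42, "name")
  .sortBy("name")
  .saveAsTable("people_bucketed")
```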
+
+while partitioning can be used with both `save` and `saveAsTable`:
+
+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+{% include_example write_partitioning scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+{% include_example write_partitioning java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% include_example write_partitioning python/sql/datasource.py %}
+</div>
+
+<div data-lang="sql" markdown="1">
+
+{% highlight sql %}
+
+CREATE TABLE users_by_favorite_color(
+ name STRING,
+ favorite_color STRING,
+ favorite_numbers array<integer>
+) USING csv PARTITIONED BY(favorite_color);
+
+{% endhighlight %}
+
+</div>
+
+</div>
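
As with the bucketing example, the partitioning snippet is included from an external file; a sketch of partitioned output written with path-based `save` (the DataFrame name and output path are hypothetical) might look like:

```scala
// partitionBy works with path-based save as well as with saveAsTable;
// one subdirectory is created per distinct favorite_color value.
usersDF.write
  .partitionBy("favorite_color")
  .format("parquet")
  .save("users_by_favorite_color.parquet")
```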
+
+It is possible to use both partitioning and bucketing for a single table:
+
+<div class="codetabs">
+
+<div data-lang="scala" markdown="1">
+{% include_example write_partition_and_bucket scala/org/apache/spark/examples/sql/SQLDataSourceExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+{% include_example write_partition_and_bucket java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% include_example write_partition_and_bucket python/sql/datasource.py %}
+</div>
+
+<div data-lang="sql" markdown="1">
+
+{% highlight sql %}
+
+CREATE TABLE users_bucketed_and_partitioned(
+ name STRING,
+ favorite_color STRING,
+ favorite_numbers array<integer>
+) USING parquet
+PARTITIONED BY (favorite_color)
+CLUSTERED BY(name) INTO 42 BUCKETS;
+
+{% endhighlight %}
+
+</div>
+
+</div>
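
The combined case chains both calls on the same writer; a sketch (again assuming a hypothetical `usersDF` DataFrame) could be:

```scala
// Partition by favorite_color, then bucket each partition by name.
// saveAsTable is still required because of the bucketing.
usersDF.write
  .partitionBy("favorite_color")
  .bucketBy(42, "name")
  .saveAsTable("users_partitioned_bucketed")
```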
+
+`partitionBy` creates a directory structure as described in the [Partition Discovery](#partition-discovery) section.
+Because of that it has limited applicability to columns with high cardinality. In contrast, `bucketBy` distributes
+data across a fixed number of buckets and can be used if a number of unique values is unbounded.
--- End diff --
`used if` -> `used when `