This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new a7147c8e047 [SPARK-45963][SQL][DOCS] Restore documentation for DSv2 API
a7147c8e047 is described below
commit a7147c8e04711a552009d513d900d29fcb258315
Author: Hyukjin Kwon <[email protected]>
AuthorDate: Thu Nov 16 22:50:43 2023 -0800
[SPARK-45963][SQL][DOCS] Restore documentation for DSv2 API
### What changes were proposed in this pull request?
This PR restores the DSv2 documentation.
https://github.com/apache/spark/pull/38392 mistakenly added
`org/apache/spark/sql/connect` as a private package to exclude from the
generated docs; because the exclusion is a plain substring match on the
canonical path, that prefix also matches `org/apache/spark/sql/connector`,
hiding the DSv2 API.
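To illustrate (a sketch, not part of the patch; the file paths below are made up): the Unidoc exclusion in `SparkBuild.scala` is a plain substring test, so the `connect` prefix also swallows `connector` until a trailing slash is added:
```
// Sketch only: why the old exclusion also hid the DSv2 (connector) sources.
object ConnectFilterSketch extends App {
  // Hypothetical canonical paths, for illustration.
  val connectorFile = "/spark/sql/catalyst/org/apache/spark/sql/connector/catalog/TableCatalog.java"
  val connectFile   = "/spark/connector/connect/org/apache/spark/sql/connect/SparkSession.scala"

  // Old filter: "connect" is a prefix of "connector", so both paths match.
  val oldNeedle = "org/apache/spark/sql/connect"
  assert(connectorFile.contains(oldNeedle)) // DSv2 file wrongly excluded from the docs
  assert(connectFile.contains(oldNeedle))

  // Fixed filter: the trailing slash matches only the connect package itself.
  val newNeedle = "org/apache/spark/sql/connect/"
  assert(!connectorFile.contains(newNeedle)) // DSv2 file documented again
  assert(connectFile.contains(newNeedle))
}
```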
### Why are the changes needed?
For end users to read DSv2 documentation.
### Does this PR introduce _any_ user-facing change?
Yes, it restores the DSv2 API documentation that used to be published, e.g.
https://spark.apache.org/docs/3.3.0/api/scala/org/apache/spark/sql/connector/catalog/index.html
### How was this patch tested?
Manually tested via:
```
SKIP_PYTHONDOC=1 SKIP_RDOC=1 SKIP_SQLDOC=1 bundle exec jekyll build
```
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #43855 from HyukjinKwon/connector-docs.
Authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
project/SparkBuild.scala | 2 +-
.../apache/spark/sql/connector/catalog/SupportsMetadataColumns.java | 4 ++--
.../org/apache/spark/sql/connector/expressions/expressions.scala | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index d76af6a06cf..b15bba0474c 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -1361,7 +1361,7 @@ object Unidoc {
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/io")))
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/util/kvstore")))
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/catalyst")))
- .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect")))
+ .map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/connect/")))
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/execution")))
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/internal")))
.map(_.filterNot(_.getCanonicalPath.contains("org/apache/spark/sql/hive")))
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java
index 894184dbcc8..e42424268b4 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/SupportsMetadataColumns.java
@@ -58,8 +58,8 @@ public interface SupportsMetadataColumns extends Table {
* Determines how this data source handles name conflicts between metadata and data columns.
* <p>
* If true, spark will automatically rename the metadata column to resolve the conflict. End users
- * can reliably select metadata columns (renamed or not) with {@link Dataset.metadataColumn}, and
- * internal code can use {@link MetadataAttributeWithLogicalName} to extract the logical name from
+ * can reliably select metadata columns (renamed or not) with {@code Dataset.metadataColumn}, and
+ * internal code can use {@code MetadataAttributeWithLogicalName} to extract the logical name from
* a metadata attribute.
* <p>
* If false, the data column will hide the metadata column. It is recommended that Table
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/connector/expressions/expressions.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/connector/expressions/expressions.scala
index 6fabb43a895..fc41d5a98e4 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/connector/expressions/expressions.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/connector/expressions/expressions.scala
@@ -156,7 +156,7 @@ private[sql] object BucketTransform {
}
/**
- * This class represents a transform for [[ClusterBySpec]]. This is used to bundle
+ * This class represents a transform for `ClusterBySpec`. This is used to bundle
* ClusterBySpec in CreateTable's partitioning transforms to pass it down to analyzer.
*/
final case class ClusterByTransform(
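Both documentation-markup fixes above appear to follow the same pattern, as far as the diff shows: a resolved reference (`{@link ...}` in Javadoc, `[[...]]` in Scaladoc) requires its target to be visible to the doc tool, so once the connector sources are documented again, references to types outside the generated docs would break the build; a code literal (`{@code ...}` or backticks) renders as monospace without resolution. A minimal Scaladoc sketch with a hypothetical object:
```
/** Sketch only (hypothetical object, not from the patch).
  *
  * A link target such as [[scala.collection.immutable.List]] must resolve when the
  * docs are generated, while a backtick-quoted name such as `ClusterBySpec` is
  * rendered as code and never resolved, so it cannot break the doc build. Javadoc's
  * {@code ...} plays the same role relative to {@link ...}.
  */
object DocLinkSketch
```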