This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 2250b35be6a2 [SPARK-49649][DOCS] Make `docs/index.md` up-to-date for 4.0.0
2250b35be6a2 is described below

commit 2250b35be6a24c777d6fa82b1c6a7a10a6854895
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Fri Sep 13 20:49:01 2024 -0700

    [SPARK-49649][DOCS] Make `docs/index.md` up-to-date for 4.0.0
    
    ### What changes were proposed in this pull request?
    
    This PR aims to update the Spark documentation landing page (`docs/index.md`) for the Apache Spark 4.0.0-preview2 release.
    
    ### Why are the changes needed?
    
    - [SPARK-45314 Drop Scala 2.12 and make Scala 2.13 by default](https://issues.apache.org/jira/browse/SPARK-45314)
    - #46228
    - #47842
    - [SPARK-45923 Spark Kubernetes Operator](https://issues.apache.org/jira/browse/SPARK-45923)
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, because this is a documentation-only change.
    
    ### How was this patch tested?
    
    Manual review.
    
    <img width="927" alt="Screenshot 2024-09-13 at 16 01 55" src="https://github.com/user-attachments/assets/bdbd0e61-d71a-41ca-aa1b-1b0805813a45">

    <img width="911" alt="Screenshot 2024-09-13 at 16 02 09" src="https://github.com/user-attachments/assets/e13a6bba-2149-48fa-983d-c5399defdc70">

    <img width="820" alt="Screenshot 2024-09-13 at 16 02 38" src="https://github.com/user-attachments/assets/721c7760-bc2e-444c-9209-174e3119c2b4">
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #48113 from dongjoon-hyun/SPARK-49649.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 docs/index.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index 7e57eddb6da8..fea62865e216 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -34,9 +34,8 @@ source, visit [Building Spark](building-spark.html).
 
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on any platform that runs a supported version of Java. This should include JVMs on x86_64 and ARM64. It's easy to run locally on one machine --- all you need is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
 
-Spark runs on Java 17/21, Scala 2.13, Python 3.8+, and R 3.5+.
-When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for.
-For example, when using Scala 2.13, use Spark compiled for 2.13, and compile code/applications for Scala 2.13 as well.
+Spark runs on Java 17/21, Scala 2.13, Python 3.9+, and R 3.5+ (Deprecated).
+When using the Scala API, it is necessary for applications to use the same version of Scala that Spark was compiled for. Since Spark 4.0.0, it's Scala 2.13.
 
 # Running the Examples and Shell
 
@@ -110,7 +109,7 @@ options for deployment:
 * [Spark Streaming](streaming-programming-guide.html): processing data streams using DStreams (old API)
 * [MLlib](ml-guide.html): applying machine learning algorithms
 * [GraphX](graphx-programming-guide.html): processing graphs
-* [SparkR](sparkr.html): processing data with Spark in R
+* [SparkR (Deprecated)](sparkr.html): processing data with Spark in R
 * [PySpark](api/python/getting_started/index.html): processing data with Spark in Python
 * [Spark SQL CLI](sql-distributed-sql-engine-spark-sql-cli.html): processing data with SQL on the command line
 
@@ -128,10 +127,13 @@ options for deployment:
 * [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster
 * [Submitting Applications](submitting-applications.html): packaging and deploying applications
 * Deployment modes:
-  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
   * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
   * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
-  * [Kubernetes](running-on-kubernetes.html): deploy Spark on top of Kubernetes
+  * [Kubernetes](running-on-kubernetes.html): deploy Spark apps on top of Kubernetes directly
+  * [Amazon EC2](https://github.com/amplab/spark-ec2): scripts that let you launch a cluster on EC2 in about 5 minutes
+* [Spark Kubernetes Operator](https://github.com/apache/spark-kubernetes-operator):
+  * [SparkApp](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/pyspark-pi.yaml): deploy Spark apps on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
+  * [SparkCluster](https://github.com/apache/spark-kubernetes-operator/blob/main/examples/cluster-with-template.yaml): deploy Spark clusters on top of Kubernetes via [operator patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
 
 **Other Documents:**
 


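The requirement bump recorded in the diff above (Java 17/21, Scala pinned to 2.13 since 4.0.0, Python floor raised from 3.8+ to 3.9+) can be expressed as a quick compatibility check. The constants below mirror the diff; the `environment_ok` helper and its shape are purely illustrative and not part of Spark.

```python
# Runtime minimums as stated in the docs/index.md diff (Spark 4.0.0-preview2).
# The check function is a hypothetical sketch; Spark ships no such helper.
SUPPORTED = {
    "java": {17, 21},       # Spark 4.0 runs on Java 17 or 21
    "python_min": (3, 9),   # raised from 3.8+ in this change
    "scala": "2.13",        # the only supported Scala line since Spark 4.0.0
}

def environment_ok(java_major: int, python_version: tuple, scala_binary: str) -> bool:
    """Return True if the given toolchain meets the documented Spark 4.0 minimums."""
    return (
        java_major in SUPPORTED["java"]
        and python_version >= SUPPORTED["python_min"]
        and scala_binary == SUPPORTED["scala"]
    )
```

Note that the Scala comparison is an exact binary-version match, reflecting the diff's rule that applications must use the same Scala version Spark was compiled for.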
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
