HyukjinKwon commented on code in PR #40092:
URL: https://github.com/apache/spark/pull/40092#discussion_r1112013627


##########
python/docs/source/getting_started/quickstart_connect.ipynb:
##########
@@ -0,0 +1,1118 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Quickstart: DataFrame with Spark Connect\n",
+    "\n",
"This is a short introduction and quickstart for the DataFrame with Spark Connect. A DataFrame with Spark Connect is conceptually identical to an existing [PySpark DataFrame](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html?highlight=dataframe#pyspark.sql.DataFrame), so most of the examples from 'Live Notebook: DataFrame' at [the quickstart page](https://spark.apache.org/docs/latest/api/python/getting_started/index.html) can be reused directly.\n",
+    "\n",
+    "However, it does not yet support some key features such as [RDD](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.RDD.html?highlight=rdd#pyspark.RDD) and [SparkSession.conf](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.SparkSession.conf.html#pyspark.sql.SparkSession.conf), so keep this in mind when using the DataFrame with Spark Connect.\n",
+    "\n",
+    "This notebook shows the basic usage of the DataFrame with Spark Connect, geared mainly toward those new to Spark Connect, along with notes on which features are not supported compared to the existing DataFrame.\n",
+    "\n",
+    "There is also other useful information on the Apache Spark documentation site; see the latest version of [Spark SQL and DataFrames](https://spark.apache.org/docs/latest/sql-programming-guide.html).\n",
+    "\n",
+    "PySpark applications start by initializing a `SparkSession`, which is the entry point of PySpark, as shown below. When running in the PySpark shell via the <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Spark Connect uses SparkSession from `pyspark.sql.connect.session` instead of `pyspark.sql.SparkSession`.\n",
+    "from pyspark.sql.connect.session import SparkSession\n",
+    "\n",
+    "spark = SparkSession.builder.getOrCreate()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## DataFrame Creation\n",
+    "\n",
+    "A PySpark DataFrame with Spark Connect can be created via `pyspark.sql.connect.session.SparkSession.createDataFrame`, typically by passing a list of lists, tuples, dictionaries or `pyspark.sql.Row`s, a [pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html), or an RDD consisting of such a list.\n",
+    "`pyspark.sql.connect.session.SparkSession.createDataFrame` takes the `schema` argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.\n",

Review Comment:
   Actually, everything under `pyspark.sql.connect.session.SparkSession` is supposed to be internal.
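   For reference, a minimal sketch using the public entry point instead of the internal module. This assumes Spark 3.4+, where `SparkSession.builder.remote` is available, and an illustrative `sc://localhost:15002` Spark Connect server URL:

   ```python
   # Sketch: creating a Spark Connect session through the public API,
   # rather than the internal pyspark.sql.connect.session module.
   # Assumes Spark 3.4+ and a Spark Connect server reachable at the
   # illustrative URL below.
   from pyspark.sql import SparkSession

   spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

   # DataFrame creation then works as in the notebook's examples:
   df = spark.createDataFrame([("Alice", 1), ("Bob", 2)], schema=["name", "id"])
   df.show()
   ```

   Keeping the notebook on the public API would also let the same code run unchanged against a classic session.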



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

