This is an automated email from the ASF dual-hosted git repository.

jiayu pushed a commit to branch structured-adapter
in repository https://gitbox.apache.org/repos/asf/sedona.git

commit fac7c9d337dbf5a311304894660e7f31c78fe2c4
Author: Jia Yu <ji...@apache.org>
AuthorDate: Wed Aug 27 22:58:24 2025 -0700

    Add
---
 docs/setup/databricks.md | 8 ++++++++
 docs/tutorial/sql.md     | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/setup/databricks.md b/docs/setup/databricks.md
index 2d44faccdc..af96a2bc64 100644
--- a/docs/setup/databricks.md
+++ b/docs/setup/databricks.md
@@ -129,6 +129,14 @@ Create a Databricks notebook and connect it to the cluster.  Verify that you can
 
 ![Python computation](../image/databricks/image1.png)
 
+If you want to use Sedona Python functions such as [DataFrame APIs](../api/sql/DataFrameAPI.md) or [StructuredAdapter](../tutorial/sql.md#spatialrdd-to-dataframe-with-spatial-partitioning), you need to initialize Sedona as follows:
+
+```python
+from sedona.spark import *
+
+sedona = SedonaContext.create(spark)
+```
+
 You can also use the SQL API as follows:
 
 ![SQL computation](../image/databricks/image8.png)
diff --git a/docs/tutorial/sql.md b/docs/tutorial/sql.md
index fb01f12f6d..5031af398d 100644
--- a/docs/tutorial/sql.md
+++ b/docs/tutorial/sql.md
@@ -111,7 +111,7 @@ You can add additional Spark runtime config to the config builder. For example,
 
 ## Initiate SedonaContext
 
-Add the following line after creating Sedona config. If you already have a SparkSession (usually named `spark`) created by AWS EMR/Databricks/Microsoft Fabric, please call `sedona = SedonaContext.create(spark)` instead. For ==Databricks==, the situation is more complicated, please refer to [Databricks setup guide](../setup/databricks.md), but generally you don't need to create SedonaContext.
+Add the following line after creating Sedona config. If you already have a SparkSession (usually named `spark`) created by AWS EMR/Databricks/Microsoft Fabric, please call `sedona = SedonaContext.create(spark)` instead.
 
 === "Scala"
 
