james-willis commented on code in PR #2077:
URL: https://github.com/apache/sedona/pull/2077#discussion_r2201915717


##########
python/tests/sql/test_dataframe_api.py:
##########
@@ -1783,3 +1783,29 @@ def test_lof(self):
         df.withColumn(
             "localOutlierFactor", ST_LocalOutlierFactor("geometry", 2, False)
         ).collect()
+
+    def test_expand_address_df_api(self):
+        df = (
+            self.spark.range(1)
+            .selectExpr(
+                "'781 Franklin Ave Crown Heights Brooklyn NY 11216 USA' as address"
+            )
+            .cache()
+        )  # cache to avoid Constant Folding Optimization
+        df = df.select(ExpandAddress("address").alias("normalized"))
+        # Actually running downloads the model and is very expensive, so we just check the plan
+        # Checking the plan should allow us to verify that the function is correctly registered
+        df.explain()

Review Comment:
   A couple of reasons I didn't do this:
   
   1. AFAIK there is no real opportunity to inspect the plan contents: the PySpark API doesn't expose anything for programmatically interacting with the query plan: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/dataframe.html
   2. I didn't think it would be particularly valuable. If `explain()` doesn't throw, that implies the function was found on the path, which IMO is the primary purpose of this test.
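
   One possible workaround, not used in the PR: since `df.explain()` prints the plan to stdout rather than returning it, the text can be captured with `contextlib.redirect_stdout` and asserted on. The sketch below uses a hypothetical `FakeDF` stand-in (with a hard-coded plan string) so it runs without a SparkSession; against a real DataFrame only `captured_plan` would be needed.

   ```python
   import io
   from contextlib import redirect_stdout

   def captured_plan(df):
       # df.explain() writes the plan to stdout; redirect it into a buffer
       # so a test can assert on the plan text (e.g. that the expression
       # name appears in it).
       buf = io.StringIO()
       with redirect_stdout(buf):
           df.explain()
       return buf.getvalue()

   # Stand-in for a real DataFrame, so this sketch runs without Spark.
   # The plan string here is illustrative, not actual Sedona output.
   class FakeDF:
       def explain(self):
           print("== Physical Plan ==")
           print("Project [expandaddress(address) AS normalized]")

   plan = captured_plan(FakeDF())
   assert "expandaddress" in plan
   ```

   This still only checks the rendered plan string, so it is a fairly shallow assertion, but it would catch the function being registered under the wrong name.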



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to