itholic commented on code in PR #41606:
URL: https://github.com/apache/spark/pull/41606#discussion_r1250546812


##########
python/pyspark/testing/utils.py:
##########
@@ -209,3 +220,45 @@ def check_error(
         self.assertEqual(
             expected, actual, f"Expected message parameters was '{expected}', 
got '{actual}'"
         )
+
+
+def assertSparkSchemaEquality(
+    s1: Optional[Union[AtomicType, StructType, str, List[str], Tuple[str, ...]]],
+    s2: Optional[Union[AtomicType, StructType, str, List[str], Tuple[str, ...]]],
+):
+    if s1 != s2:
+        msg = "Schemas are different"
+        raise AssertionError(msg)
+
+
+def assertSparkDFEquality(
+    left: PySparkDataFrame, right: PySparkDataFrame,
+):
+    def assert_rows_equality(rows1, rows2):
+        if rows1 != rows2:
+            msg = "Dataframes are different"
+            raise AssertionError(msg)

Review Comment:
   Yes, we should use PySpark-specific errors instead of Python built-in exceptions whenever the exception can be raised from user space.
   
   In this case, we can use `PySparkAssertionError` instead of `AssertionError`, together with a proper error class, e.g.:
   ```python
   ...
   from pyspark.errors import PySparkAssertionError
   ...
       def assert_rows_equality(rows1, rows2):
           if rows1 != rows2:
               raise PySparkAssertionError(
                   error_class="DIFFERENT_DATAFRAME",
               )
   ```
   
   A new error class can be added to https://github.com/apache/spark/blob/master/python/pyspark/errors/error_classes.py.
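   
   A minimal sketch of what that new entry might look like (the structure mirrors how `error_classes.py` keeps its classes in a JSON mapping; the message wording here is illustrative, not final):
   ```python
   import json

   # Hypothetical, trimmed-down version of the JSON mapping kept in
   # python/pyspark/errors/error_classes.py; only the proposed entry is shown
   # and the message text is a placeholder to be adjusted in this PR.
   ERROR_CLASSES_JSON = """
   {
     "DIFFERENT_DATAFRAME": {
       "message": [
         "DataFrames are not equal."
       ]
     }
   }
   """

   ERROR_CLASSES_MAP = json.loads(ERROR_CLASSES_JSON)
   ```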


