This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new 73a09ed7bd7  [SPARK-46206][PS] Use a narrower scope exception for SQL processor
73a09ed7bd7 is described below

commit 73a09ed7bd7372779e25d65498c4ab6b8496f0a8
Author: Haejoon Lee <haejoon....@databricks.com>
AuthorDate: Sat Dec 2 21:42:09 2023 -0800

    [SPARK-46206][PS] Use a narrower scope exception for SQL processor

    ### What changes were proposed in this pull request?

    This PR proposes to refine the exception handling in SQL processor functions by replacing the general `Exception` class with more specific exception types.

    ### Why are the changes needed?

    The current exception handling uses the broad `Exception` type, which can obscure the root cause of issues. By specifying more accurate exceptions, the code becomes clearer:

    - In `_get_local_scope()`, an `IndexError` is more appropriate as it explicitly handles the case where the index is out of range when accessing the call stack using `inspect.stack()`.
    - In `_get_ipython_scope()`, `AttributeError` and `ModuleNotFoundError` could occur if the IPython environment is not available or the expected attributes in the IPython shell object are missing.

    Using these specific exceptions enhances the maintainability and readability of the code, making it easier for developers to understand and handle errors more effectively.

    ### Does this PR introduce _any_ user-facing change?

    ### How was this patch tested?

    The existing test suite `pyspark.pandas.tests.test_sql::SQLTests` should pass.

    ### Was this patch authored or co-authored using generative AI tooling?

    No.

    Closes #44114 from itholic/refine_sql_error.
Authored-by: Haejoon Lee <haejoon....@databricks.com>
Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 python/pyspark/pandas/sql_processor.py | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/python/pyspark/pandas/sql_processor.py b/python/pyspark/pandas/sql_processor.py
index 1bd1cb9823c..b047417b763 100644
--- a/python/pyspark/pandas/sql_processor.py
+++ b/python/pyspark/pandas/sql_processor.py
@@ -206,9 +206,7 @@ def _get_local_scope() -> Dict[str, Any]:
     # Get 2 scopes above (_get_local_scope -> sql -> ...) to capture the vars there.
     try:
         return inspect.stack()[_CAPTURE_SCOPES][0].f_locals
-    except Exception:
-        # TODO (rxin, thunterdb): use a narrower scope exception.
-        # See https://github.com/databricks/koalas/pull/448
+    except IndexError:
         return {}
@@ -222,9 +220,7 @@ def _get_ipython_scope() -> Dict[str, Any]:
         shell = get_ipython()
         return shell.user_ns
-    except Exception:
-        # TODO (rxin, thunterdb): use a narrower scope exception.
-        # See https://github.com/databricks/koalas/pull/448
+    except (AttributeError, ModuleNotFoundError):
         return None

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org