This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new b8100b5b3fd [SPARK-41586][PYTHON] Introduce `pyspark.errors` and error classes for PySpark
b8100b5b3fd is described below
commit b8100b5b3fd82c0ee79c4f35a14a2bbfbe03ef43
Author: itholic <[email protected]>
AuthorDate: Mon Jan 16 10:22:43 2023 +0900
[SPARK-41586][PYTHON] Introduce `pyspark.errors` and error classes for PySpark
### What changes were proposed in this pull request?
This PR proposes to introduce `pyspark.errors` and error classes to unify & improve the errors generated by PySpark under a single path.
To summarize, this PR includes the changes below:
- Add `python/pyspark/errors/error_classes.py` to support error classes for PySpark.
- Add `ErrorClassesReader` to manage `error_classes.py`.
- Add `PySparkException` to handle errors generated by PySpark.
- Add `check_error` for error class testing.
This is an initial PR for introducing an error framework for PySpark, to facilitate error management and provide better/consistent error messages to users.
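For example, a minimal sketch of how these pieces are intended to fit together, based on the code added in this diff (the `COLUMN_IN_LIST` error class is the one defined below):
```python
from pyspark.errors import PySparkException

# Errors are raised with an error class and message parameters instead of a
# free-form string; the message template is resolved via ErrorClassesReader.
try:
    raise PySparkException(
        error_class="COLUMN_IN_LIST",
        message_parameters={"func_name": "lit"},
    )
except PySparkException as e:
    assert e.getErrorClass() == "COLUMN_IN_LIST"
    assert e.getMessageParameters() == {"func_name": "lit"}
    print(e)  # [COLUMN_IN_LIST] lit does not allow a column in a list.
```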
While active work is being done on the [SQL side to improve error messages](https://issues.apache.org/jira/browse/SPARK-37935), so far there has been no corresponding work to improve error messages in PySpark.
So, I'd expect this PR to also initiate the error message improvement effort on the PySpark side.
Eventually, error messages will be shown as below, for example:
- PySpark, `PySparkException` (thrown by Python driver):
```python
>>> from pyspark.sql.functions import lit
>>> lit([df.id, df.id])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../spark/python/pyspark/sql/utils.py", line 334, in wrapped
return f(*args, **kwargs)
File ".../spark/python/pyspark/sql/functions.py", line 176, in lit
raise PySparkException(
pyspark.errors.exceptions.PySparkException: [COLUMN_IN_LIST] lit does not allow a column in a list.
```
- PySpark, `AnalysisException` (thrown from the JVM side and captured on the PySpark side):
```
>>> df.unpivot("id", [], "var", "val").collect()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../spark/python/pyspark/sql/dataframe.py", line 3296, in unpivot
jdf = self._jdf.unpivotWithSeq(jids, jvals, variableColumnName, valueColumnName)
File ".../spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
File ".../spark/python/pyspark/sql/utils.py", line 209, in deco
raise converted from None
pyspark.sql.utils.AnalysisException: [UNPIVOT_REQUIRES_VALUE_COLUMNS] At least one value column needs to be specified for UNPIVOT, all columns specified as ids;
'Unpivot ArraySeq(id#2L), ArraySeq(), var, [val]
+- LogicalRDD [id#2L, int#3L, double#4, str#5], false
```
- Spark, `AnalysisException`:
```scala
scala> df.select($"id").unpivot(Array($"id"), Array.empty, variableColumnName = "var", valueColumnName = "val")
org.apache.spark.sql.AnalysisException: [UNPIVOT_REQUIRES_VALUE_COLUMNS] At least one value column needs to be specified for UNPIVOT, all columns specified as ids;
'Unpivot ArraySeq(id#0L), ArraySeq(), var, [val]
+- Project [id#0L]
+- Range (0, 10, step=1, splits=Some(16))
```
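On the testing side, here is a rough sketch of how the new `check_error` helper is meant to be used, adapted from the `test_functions.py` change in this diff (the test class and method names are illustrative only, and a running SparkContext from `ReusedPySparkTestCase` is assumed):
```python
import unittest

from pyspark.errors import PySparkException
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit
from pyspark.testing.utils import ReusedPySparkTestCase


class LitErrorTests(ReusedPySparkTestCase):
    def test_lit_rejects_column_in_list(self):
        spark = SparkSession.builder.getOrCreate()
        df = spark.range(10)

        with self.assertRaises(PySparkException) as pe:
            lit([df.id, df.id])

        # check_error compares the error class and message parameters
        # rather than matching on the rendered message text.
        self.check_error(
            exception=pe.exception,
            error_class="COLUMN_IN_LIST",
            message_parameters={"func_name": "lit"},
        )


if __name__ == "__main__":
    unittest.main()
```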
**Next steps** after this PR include:
- Migrate more errors into `PySparkException` across all modules (e.g., Spark Connect, pandas API on Spark...).
- Migrate more error tests into error class tests by using `check_error`.
- Define more error classes in `error_classes.py`.
- Add documentation.
### Why are the changes needed?
Centralizing error messages & introducing identified error classes provides the following benefits:
- Errors are searchable via the unique class names and properly classified.
- Reduce the cost of future maintenance for PySpark errors.
- Provide consistent & actionable error messages to users.
- Facilitate translating error messages into different languages.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Adding UTs & running the existing static analysis tools (`dev/lint-python`)
Closes #39387 from itholic/SPARK-41586.
Authored-by: itholic <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
---
dev/pyproject.toml | 2 +-
dev/sparktestsupport/modules.py | 10 ++
dev/tox.ini | 1 +
python/docs/source/reference/index.rst | 1 +
.../reference/{index.rst => pyspark.errors.rst} | 24 ++---
python/mypy.ini | 3 +
.../pyspark/errors/__init__.py | 24 ++---
.../pyspark/errors/error_classes.py | 28 +++--
python/pyspark/errors/exceptions.py | 76 ++++++++++++++
.../pyspark/errors/tests/__init__.py | 18 ----
python/pyspark/errors/tests/test_errors.py | 47 +++++++++
python/pyspark/errors/utils.py | 116 +++++++++++++++++++++
python/pyspark/sql/functions.py | 5 +-
python/pyspark/sql/tests/test_functions.py | 9 +-
python/pyspark/testing/utils.py | 29 ++++++
python/setup.py | 1 +
16 files changed, 325 insertions(+), 69 deletions(-)
diff --git a/dev/pyproject.toml b/dev/pyproject.toml
index 8d556ba2ca9..019b6a03398 100644
--- a/dev/pyproject.toml
+++ b/dev/pyproject.toml
@@ -31,4 +31,4 @@ required-version = "22.6.0"
line-length = 100
target-version = ['py37']
include = '\.pyi?$'
-extend-exclude = 'cloudpickle'
+extend-exclude = 'cloudpickle|error_classes.py'
diff --git a/dev/sparktestsupport/modules.py b/dev/sparktestsupport/modules.py
index f63e7206599..90bb3e1dc88 100644
--- a/dev/sparktestsupport/modules.py
+++ b/dev/sparktestsupport/modules.py
@@ -756,6 +756,16 @@ pyspark_pandas_slow = Module(
],
)
+pyspark_errors = Module(
+ name="pyspark-errors",
+ dependencies=[],
+ source_file_regexes=["python/pyspark/errors"],
+ python_test_goals=[
+ # unittests
+ "pyspark.errors.tests.test_errors",
+ ],
+)
+
sparkr = Module(
name="sparkr",
dependencies=[hive, mllib],
diff --git a/dev/tox.ini b/dev/tox.ini
index 15c93832c2c..50ef4b21ab6 100644
--- a/dev/tox.ini
+++ b/dev/tox.ini
@@ -30,6 +30,7 @@ per-file-ignores =
# Examples contain some unused variables.
examples/src/main/python/sql/datasource.py: F841,
# Exclude * imports in test files
+ python/pyspark/errors/tests/*.py: F403,
python/pyspark/ml/tests/*.py: F403,
python/pyspark/mllib/tests/*.py: F403,
python/pyspark/pandas/tests/*.py: F401 F403,
diff --git a/python/docs/source/reference/index.rst b/python/docs/source/reference/index.rst
index 2f316924405..a74b4a82e02 100644
--- a/python/docs/source/reference/index.rst
+++ b/python/docs/source/reference/index.rst
@@ -35,3 +35,4 @@ Pandas API on Spark follows the API specifications of latest pandas release.
pyspark.mllib
pyspark
pyspark.resource
+ pyspark.errors
diff --git a/python/docs/source/reference/index.rst b/python/docs/source/reference/pyspark.errors.rst
similarity index 67%
copy from python/docs/source/reference/index.rst
copy to python/docs/source/reference/pyspark.errors.rst
index 2f316924405..d18be18fe82 100644
--- a/python/docs/source/reference/index.rst
+++ b/python/docs/source/reference/pyspark.errors.rst
@@ -16,22 +16,14 @@
under the License.
-=============
-API Reference
-=============
+======
+Errors
+======
-This page lists an overview of all public PySpark modules, classes, functions and methods.
+.. currentmodule:: pyspark.errors
-Pandas API on Spark follows the API specifications of latest pandas release.
+.. autosummary::
+ :toctree: api/
-.. toctree::
- :maxdepth: 2
-
- pyspark.sql/index
- pyspark.pandas/index
- pyspark.ss/index
- pyspark.ml
- pyspark.streaming
- pyspark.mllib
- pyspark
- pyspark.resource
+ PySparkException.getErrorClass
+ PySparkException.getMessageParameters
diff --git a/python/mypy.ini b/python/mypy.ini
index dd1c1cd4875..5f662a4a237 100644
--- a/python/mypy.ini
+++ b/python/mypy.ini
@@ -106,6 +106,9 @@ ignore_errors = True
[mypy-pyspark.testing.*]
ignore_errors = True
+[mypy-pyspark.errors.tests.*]
+ignore_errors = True
+
; Allow non-strict optional for pyspark.pandas
[mypy-pyspark.pandas.*]
diff --git a/dev/pyproject.toml b/python/pyspark/errors/__init__.py
similarity index 64%
copy from dev/pyproject.toml
copy to python/pyspark/errors/__init__.py
index 8d556ba2ca9..84260aa0fda 100644
--- a/dev/pyproject.toml
+++ b/python/pyspark/errors/__init__.py
@@ -15,20 +15,12 @@
# limitations under the License.
#
-[tool.pytest.ini_options]
-# Pytest it used only to run mypy data tests
-python_files = "test_*.yml"
-testpaths = [
- "pyspark/tests/typing",
- "pyspark/sql/tests/typing",
- "pyspark/ml/typing",
-]
+"""
+PySpark exceptions.
+"""
+from pyspark.errors.exceptions import PySparkException
+
-[tool.black]
-# When changing the version, we have to update
-# GitHub workflow version and dev/reformat-python
-required-version = "22.6.0"
-line-length = 100
-target-version = ['py37']
-include = '\.pyi?$'
-extend-exclude = 'cloudpickle'
+__all__ = [
+ "PySparkException",
+]
diff --git a/dev/pyproject.toml b/python/pyspark/errors/error_classes.py
similarity index 64%
copy from dev/pyproject.toml
copy to python/pyspark/errors/error_classes.py
index 8d556ba2ca9..8e2a6ed74ca 100644
--- a/dev/pyproject.toml
+++ b/python/pyspark/errors/error_classes.py
@@ -14,21 +14,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
+import json
-[tool.pytest.ini_options]
-# Pytest it used only to run mypy data tests
-python_files = "test_*.yml"
-testpaths = [
- "pyspark/tests/typing",
- "pyspark/sql/tests/typing",
- "pyspark/ml/typing",
-]
-[tool.black]
-# When changing the version, we have to update
-# GitHub workflow version and dev/reformat-python
-required-version = "22.6.0"
-line-length = 100
-target-version = ['py37']
-include = '\.pyi?$'
-extend-exclude = 'cloudpickle'
+ERROR_CLASSES_JSON = """
+{
+ "COLUMN_IN_LIST": {
+ "message": [
+ "<func_name> does not allow a column in a list."
+ ]
+ }
+}
+"""
+
+ERROR_CLASSES_MAP = json.loads(ERROR_CLASSES_JSON)
diff --git a/python/pyspark/errors/exceptions.py b/python/pyspark/errors/exceptions.py
new file mode 100644
index 00000000000..bda51ecffb2
--- /dev/null
+++ b/python/pyspark/errors/exceptions.py
@@ -0,0 +1,76 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from typing import Dict, Optional, cast
+
+from pyspark.errors.utils import ErrorClassesReader
+
+
+class PySparkException(Exception):
+ """
+ Base Exception for handling errors generated from PySpark.
+ """
+
+ def __init__(
+ self,
+ message: Optional[str] = None,
+ error_class: Optional[str] = None,
+ message_parameters: Optional[Dict[str, str]] = None,
+ ):
+ # `message` vs `error_class` & `message_parameters` are mutually exclusive.
+ assert (message is not None and (error_class is None and message_parameters is None)) or (
+ message is None and (error_class is not None and message_parameters is not None)
+ )
+
+ self.error_reader = ErrorClassesReader()
+
+ if message is None:
+ self.message = self.error_reader.get_error_message(
+ cast(str, error_class), cast(Dict[str, str], message_parameters)
+ )
+ else:
+ self.message = message
+
+ self.error_class = error_class
+ self.message_parameters = message_parameters
+
+ def getErrorClass(self) -> Optional[str]:
+ """
+ Returns an error class as a string.
+
+ .. versionadded:: 3.4.0
+
+ See Also
+ --------
+ :meth:`PySparkException.getMessageParameters`
+ """
+ return self.error_class
+
+ def getMessageParameters(self) -> Optional[Dict[str, str]]:
+ """
+ Returns the message parameters as a dictionary.
+
+ .. versionadded:: 3.4.0
+
+ See Also
+ --------
+ :meth:`PySparkException.getErrorClass`
+ """
+ return self.message_parameters
+
+ def __str__(self) -> str:
+ return f"[{self.getErrorClass()}] {self.message}"
diff --git a/dev/pyproject.toml b/python/pyspark/errors/tests/__init__.py
similarity index 64%
copy from dev/pyproject.toml
copy to python/pyspark/errors/tests/__init__.py
index 8d556ba2ca9..cce3acad34a 100644
--- a/dev/pyproject.toml
+++ b/python/pyspark/errors/tests/__init__.py
@@ -14,21 +14,3 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-
-[tool.pytest.ini_options]
-# Pytest it used only to run mypy data tests
-python_files = "test_*.yml"
-testpaths = [
- "pyspark/tests/typing",
- "pyspark/sql/tests/typing",
- "pyspark/ml/typing",
-]
-
-[tool.black]
-# When changing the version, we have to update
-# GitHub workflow version and dev/reformat-python
-required-version = "22.6.0"
-line-length = 100
-target-version = ['py37']
-include = '\.pyi?$'
-extend-exclude = 'cloudpickle'
diff --git a/python/pyspark/errors/tests/test_errors.py b/python/pyspark/errors/tests/test_errors.py
new file mode 100644
index 00000000000..cd2a8a4a22c
--- /dev/null
+++ b/python/pyspark/errors/tests/test_errors.py
@@ -0,0 +1,47 @@
+# -*- encoding: utf-8 -*-
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import unittest
+
+from pyspark.errors.utils import ErrorClassesReader
+
+
+class ErrorsTest(unittest.TestCase):
+ def test_error_classes(self):
+ # Test that error classes are sorted alphabetically.
+ error_reader = ErrorClassesReader()
+ error_class_names = list(error_reader.error_info_map)
+ for i in range(len(error_class_names) - 1):
+ self.assertTrue(
+ error_class_names[i] < error_class_names[i + 1],
+ f"Error class [{error_class_names[i]}] should be placed "
+ f"after [{error_class_names[i + 1]}]",
+ )
+
+
+if __name__ == "__main__":
+ import unittest
+ from pyspark.errors.tests.test_errors import * # noqa: F401
+
+ try:
+ import xmlrunner
+
+ testRunner = xmlrunner.XMLTestRunner(output="target/test-reports", verbosity=2)
+ except ImportError:
+ testRunner = None
+ unittest.main(testRunner=testRunner, verbosity=2)
diff --git a/python/pyspark/errors/utils.py b/python/pyspark/errors/utils.py
new file mode 100644
index 00000000000..69a72f86b9f
--- /dev/null
+++ b/python/pyspark/errors/utils.py
@@ -0,0 +1,116 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import re
+from typing import Dict
+
+from pyspark.errors.error_classes import ERROR_CLASSES_MAP
+
+
+class ErrorClassesReader:
+ """
+ A reader to load error information from error_classes.py.
+ """
+
+ def __init__(self) -> None:
+ self.error_info_map = ERROR_CLASSES_MAP
+
+ def get_error_message(self, error_class: str, message_parameters: Dict[str, str]) -> str:
+ """
+ Returns the completed error message by applying message parameters to the message template.
+ """
+ message_template = self.get_message_template(error_class)
+ # Verify message parameters.
+ message_parameters_from_template = re.findall("<([a-zA-Z0-9_-]+)>", message_template)
+ assert set(message_parameters_from_template) == set(message_parameters), (
+ f"Undefined error message parameter for error class: {error_class}. "
+ f"Parameters: {message_parameters}"
+ )
+ table = str.maketrans("<>", "{}")
+
+ return message_template.translate(table).format(**message_parameters)
+
+ def get_message_template(self, error_class: str) -> str:
+ """
+ Returns the message template for the corresponding error class from error_classes.py.
+
+ For example,
+ when the given `error_class` is "EXAMPLE_ERROR_CLASS",
+ and the corresponding error class in error_classes.py looks like the below:
+
+ .. code-block:: python
+
+ "EXAMPLE_ERROR_CLASS" : {
+ "message" : [
+ "Problem <A> because of <B>."
+ ]
+ }
+
+ In this case, this function returns:
+ "Problem <A> because of <B>."
+
+ For a sub error class, when the given `error_class` is "EXAMPLE_ERROR_CLASS.SUB_ERROR_CLASS",
+ and the corresponding error class in error_classes.py looks like the below:
+
+ .. code-block:: python
+
+ "EXAMPLE_ERROR_CLASS" : {
+ "message" : [
+ "Problem <A> because of <B>."
+ ],
+ "subClass" : {
+ "SUB_ERROR_CLASS" : {
+ "message" : [
+ "Do <C> to fix the problem."
+ ]
+ }
+ }
+ }
+
+ In this case, this function returns:
+ "Problem <A> because <B>. Do <C> to fix the problem."
+ """
+ error_classes = error_class.split(".")
+ len_error_classes = len(error_classes)
+ assert len_error_classes in (1, 2)
+
+ # Generate message template for main error class.
+ main_error_class = error_classes[0]
+ if main_error_class in self.error_info_map:
+ main_error_class_info_map = self.error_info_map[main_error_class]
+ else:
+ raise ValueError(f"Cannot find main error class
'{main_error_class}'")
+
+ main_message_template = "\n".join(main_error_class_info_map["message"])
+
+ has_sub_class = len_error_classes == 2
+
+ if not has_sub_class:
+ message_template = main_message_template
+ else:
+ # Generate message template for sub error class if exists.
+ sub_error_class = error_classes[1]
+ main_error_class_subclass_info_map = main_error_class_info_map["subClass"]
+ if sub_error_class in main_error_class_subclass_info_map:
+ sub_error_class_info_map = main_error_class_subclass_info_map[sub_error_class]
+ else:
+ raise ValueError(f"Cannot find sub error class '{sub_error_class}'")
+
+ sub_message_template = "\n".join(sub_error_class_info_map["message"])
+ message_template = main_message_template + " " + sub_message_template
+
+ return message_template
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 73698afa4e3..4a2ec37ba1b 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -38,6 +38,7 @@ from typing import (
)
from pyspark import SparkContext
+from pyspark.errors import PySparkException
from pyspark.rdd import PythonEvalType
from pyspark.sql.column import Column, _to_java_column, _to_seq, _create_column_from_literal
from pyspark.sql.dataframe import DataFrame
@@ -172,7 +173,9 @@ def lit(col: Any) -> Column:
return col
elif isinstance(col, list):
if any(isinstance(c, Column) for c in col):
- raise ValueError("lit does not allow a column in a list")
+ raise PySparkException(
+ error_class="COLUMN_IN_LIST", message_parameters={"func_name":
"lit"}
+ )
return array(*[lit(item) for item in col])
else:
if has_numpy and isinstance(col, np.generic):
diff --git a/python/pyspark/sql/tests/test_functions.py b/python/pyspark/sql/tests/test_functions.py
index 38a4e3e6644..11cd60833f5 100644
--- a/python/pyspark/sql/tests/test_functions.py
+++ b/python/pyspark/sql/tests/test_functions.py
@@ -25,6 +25,7 @@ import math
import unittest
from py4j.protocol import Py4JJavaError
+from pyspark.errors import PySparkException
from pyspark.sql import Row, Window, types
from pyspark.sql.functions import (
udf,
@@ -1033,9 +1034,15 @@ class FunctionsTestsMixin:
self.assertEqual(actual, expected)
df = self.spark.range(10)
- with self.assertRaisesRegex(ValueError, "lit does not allow a column in a list"):
+ with self.assertRaises(PySparkException) as pe:
lit([df.id, df.id])
+ self.check_error(
+ exception=pe.exception,
+ error_class="COLUMN_IN_LIST",
+ message_parameters={"funcName": "lit"},
+ )
+
# Test added for SPARK-39832; change Python API to accept both col & str as input
def test_regexp_replace(self):
df = self.spark.createDataFrame(
diff --git a/python/pyspark/testing/utils.py b/python/pyspark/testing/utils.py
index bfa07dc9459..0a61b809528 100644
--- a/python/pyspark/testing/utils.py
+++ b/python/pyspark/testing/utils.py
@@ -21,8 +21,10 @@ import struct
import sys
import unittest
from time import time, sleep
+from typing import Dict, Optional
from pyspark import SparkContext, SparkConf
+from pyspark.errors import PySparkException
from pyspark.find_spark_home import _find_spark_home
@@ -138,6 +140,33 @@ class ReusedPySparkTestCase(unittest.TestCase):
def tearDownClass(cls):
cls.sc.stop()
+ def check_error(
+ self,
+ exception: PySparkException,
+ error_class: str,
+ message_parameters: Optional[Dict[str, str]] = None,
+ ):
+ # Test if given error is an instance of PySparkException.
+ self.assertIsInstance(
+ exception,
+ PySparkException,
+ f"checkError requires 'PySparkException', got
'{exception.__class__.__name__}'.",
+ )
+
+ # Test error class
+ expected = error_class
+ actual = exception.getErrorClass()
+ self.assertEqual(
+ expected, actual, f"Expected error class was '{expected}', got
'{actual}'."
+ )
+
+ # Test message parameters
+ expected = message_parameters
+ actual = exception.getMessageParameters()
+ self.assertEqual(
+ expected, actual, f"Expected message parameters was '{expected}',
got '{actual}'"
+ )
+
class ByteArrayOutput:
def __init__(self):
diff --git a/python/setup.py b/python/setup.py
index 08ffd0f0b1e..e475333b24e 100755
--- a/python/setup.py
+++ b/python/setup.py
@@ -256,6 +256,7 @@ try:
"pyspark.data",
"pyspark.licenses",
"pyspark.resource",
+ "pyspark.errors",
"pyspark.examples.src.main.python",
],
include_package_data=True,