This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 94bdbec  [SPARK-35644][PYTHON][DOCS] Merge contents and remove 
obsolete pages in Development section
94bdbec is described below

commit 94bdbec380b5c4fa63450f16e7a68c2f545a4009
Author: Hyukjin Kwon <[email protected]>
AuthorDate: Thu Jun 17 13:35:20 2021 +0900

    [SPARK-35644][PYTHON][DOCS] Merge contents and remove obsolete pages in 
Development section
    
    ### What changes were proposed in this pull request?
    
    This PR proposes to merge contents and remove obsolete pages in Development 
section, especially about pandas API on Spark.
    
    Some were removed, and some were merged to the existing PySpark guides. I 
will inline some comments in the PRs to make the review easier.
    
    ### Why are the changes needed?
    
    To guide developers on the code base of pandas API on Spark.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, it updates the user-facing documentation.
    
    ### How was this patch tested?
    
    Manually built the docs and checked.
    
    Closes #32926 from HyukjinKwon/SPARK-35644.
    
    Authored-by: Hyukjin Kwon <[email protected]>
    Signed-off-by: Hyukjin Kwon <[email protected]>
---
 python/docs/source/development/contributing.rst    | 123 +++++++++++--
 python/docs/source/development/index.rst           |   7 -
 python/docs/source/development/ps_contributing.rst | 192 ---------------------
 python/docs/source/development/ps_design.rst       |  85 ---------
 4 files changed, 111 insertions(+), 296 deletions(-)

diff --git a/python/docs/source/development/contributing.rst 
b/python/docs/source/development/contributing.rst
index 4f0f9ae..b3a6f1d 100644
--- a/python/docs/source/development/contributing.rst
+++ b/python/docs/source/development/contributing.rst
@@ -72,17 +72,94 @@ Preparing to Contribute Code Changes
 ------------------------------------
 
 Before starting to work on codes in PySpark, it is recommended to read `the 
general guidelines <https://spark.apache.org/contributing.html>`_.
-There are a couple of additional notes to keep in mind when contributing to 
codes in PySpark:
+There are a couple of additional notes to keep in mind when contributing to code in PySpark:
+
+* Be Pythonic
+    See `The Zen of Python <https://www.python.org/dev/peps/pep-0020/>`_.
+
+* Match APIs with Scala and Java sides
+    Apache Spark is a unified engine that provides a consistent API layer. In general, the APIs are kept consistent across the supported languages.
+
+* PySpark-specific APIs can be accepted
+    As long as they are Pythonic and do not conflict with existing APIs, it is fine to raise an API request, for example, decorator usage of UDFs.
+
+* Adjust the corresponding type hints if you extend or modify the public API
+    See `Contributing and Maintaining Type Hints`_ for details.
+
+If you are fixing the pandas API on Spark (``pyspark.pandas``) package, please consider the design principles below:
+
+* Return pandas-on-Spark data structure for big data, and pandas data structure for small data
+    Often developers face the question of whether a particular function should return a pandas-on-Spark DataFrame/Series, or a pandas DataFrame/Series. The principle is: if the returned object can be large, use a pandas-on-Spark DataFrame/Series. If the data is bound to be small, use a pandas DataFrame/Series. For example, ``DataFrame.dtypes`` returns a pandas Series, because the number of columns in a DataFrame is bounded and small, whereas ``DataFrame.head()`` or ``Series.unique()`` returns [...]
+
+* Provide discoverable APIs for common data science tasks
+    At the risk of overgeneralization, there are two API design approaches: 
the first focuses on providing APIs for common tasks; the second starts with 
abstractions, and enables users to accomplish their tasks by composing 
primitives. While the world is not black and white, pandas takes more of the 
former approach, while Spark has taken more of the latter.
+
+    One example is value count (count by some key column), one of the most common operations in data science. pandas ``DataFrame.value_counts`` returns the result in sorted order, which in 90% of the cases is what users prefer when exploring data, whereas Spark's counterpart does not sort, which is more desirable when building data pipelines, as users can accomplish the pandas behavior by adding an explicit ``orderBy``.
+
+    Similar to pandas, pandas API on Spark should also lean more towards the 
former, providing discoverable APIs for common data science tasks. In most 
cases, this principle is well taken care of by simply implementing pandas' 
APIs. However, there will be circumstances in which pandas' APIs don't address 
a specific need, e.g. plotting for big data.
+
+* Guardrails to prevent users from shooting themselves in the foot
+    Certain operations in pandas are prohibitively expensive as data scales, 
and we don't want to give users the illusion that they can rely on such 
operations in pandas API on Spark. That is to say, methods implemented in 
pandas API on Spark should be safe to perform by default on large datasets. As 
a result, the following capabilities are not implemented in pandas API on Spark:
+
+    * Capabilities that are fundamentally not parallelizable: e.g. 
imperatively looping over each element
+    * Capabilities that require materializing the entire working set in a 
single node's memory. This is why we do not implement 
`pandas.DataFrame.to_xarray 
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_xarray.html>`_.
 Another example is that the ``_repr_html_`` call caps the total number of records shown to a maximum of 1000, to prevent users from blowing up their driver node simply by typing the name of the DataFrame in a notebook.
+
+    A few exceptions, however, exist. One common pattern with "big data 
science" is that while the initial dataset is large, the working set becomes 
smaller as the analysis goes deeper. For example, data scientists often perform 
aggregation on datasets and want to then convert the aggregated dataset to some 
local data structure. To help data scientists, we offer the following:
+
+    * ``DataFrame.to_pandas``: returns a pandas DataFrame (pandas-on-Spark 
only)
+    * ``DataFrame.to_numpy``: returns a numpy array, works with both pandas 
and pandas API on Spark
+
+    Note that, as the names make clear, these functions return a local data structure that requires materializing data in a single node's memory. For these functions, we also explicitly document them with a warning note that the resulting data structure must be small. A short sketch of this aggregate-then-convert pattern follows this list.
+
+
+Environment Setup
+-----------------
+
+Prerequisite
+~~~~~~~~~~~~
+
+PySpark development requires building Spark, which in turn needs a proper JDK installation, among other prerequisites. See `Building Spark <https://spark.apache.org/docs/latest/building-spark.html>`_ for more details.
+
+Conda
+~~~~~
+
+If you are using Conda, the development environment can be set up as follows.
+
+.. code-block:: bash
+
+    # Python 3.6+ is required
+    conda create --name pyspark-dev-env python=3.9
+    conda activate pyspark-dev-env
+    pip install -r dev/requirements.txt
+
+Once it is set up, make sure you switch to ``pyspark-dev-env`` before starting development:
+
+.. code-block:: bash
+
+    conda activate pyspark-dev-env
+
+Now, you can start developing and `running the tests <testing.rst>`_.
+
+pip
+~~~
+
+With Python 3.6+, pip can be used as below to install and set up the 
development environment.
+
+.. code-block:: bash
+
+    pip install -r dev/requirements.txt
+
+Now, you can start developing and `running the tests <testing.rst>`_.
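+
+As a quick sanity check of the environment, a sketch like the following can be run, assuming the locally built PySpark is importable (for example, via the ``./bin/pyspark`` shell of your checkout):
+
+.. code-block:: python
+
+    # Minimal smoke test: create a local session and show a tiny DataFrame.
+    from pyspark.sql import SparkSession
+
+    spark = SparkSession.builder.master("local[2]").appName("dev-env-check").getOrCreate()
+    spark.range(5).show()  # prints ids 0 through 4
+    spark.stop()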
 
-* Be Pythonic.
-* APIs are matched with Scala and Java sides in general.
-* PySpark specific APIs can still be considered as long as they are Pythonic 
and do not conflict with other existent APIs, for example, decorator usage of 
UDFs.
-* If you extend or modify public API, please adjust corresponding type hints. 
See `Contributing and Maintaining Type Hints`_ for details.
 
 Contributing and Maintaining Type Hints
 ----------------------------------------
 
-PySpark type hints are provided using stub files, placed in the same directory 
as the annotated module, with exception to ``# type: ignore`` in modules which 
don't have their own stubs (tests, examples and non-public API).
+PySpark type hints are provided using stub files, placed in the same directory as the annotated module, with the following exceptions:
+
+* ``# type: ignore`` comments in modules that don't have their own stubs (tests, examples and non-public API).
+* pandas API on Spark (``pyspark.pandas`` package) where the type hints are 
inlined.
+
 As a rule of thumb, only public API is annotated.
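+
+For illustration only, a hypothetical stub file (``example.pyi``, placed next to an ``example.py`` module) annotating its public API might look like the following sketch:
+
+.. code-block:: python
+
+    # example.pyi -- hypothetical stub placed next to example.py (illustration only)
+    from typing import List, Optional
+
+    def load_names(path: str, limit: Optional[int] = ...) -> List[str]: ...
+
+    class NameLoader:
+        def __init__(self, path: str) -> None: ...
+        def load(self) -> List[str]: ...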
 
 Annotations should, when possible:
@@ -122,16 +199,38 @@ Annotations can be validated using ``dev/lint-python`` 
script or by invoking myp
     mypy --config python/mypy.ini python/pyspark
 
 
-
 Code and Docstring Guide
-----------------------------------
+------------------------
+
+Code Conventions
+~~~~~~~~~~~~~~~~
 
 Please follow the style of the existing codebase as is, which is virtually PEP 
8 with one exception: lines can be up
 to 100 characters in length, not 79.
-For the docstring style, PySpark follows `NumPy documentation style 
<https://numpydoc.readthedocs.io/en/latest/format.html>`_.
 
-Note that the method and variable names in PySpark are the similar case is 
``threading`` library in Python itself where
-the APIs were inspired by Java. PySpark also follows `camelCase` for exposed 
APIs that match with Scala and Java.
-There is an exception ``functions.py`` that uses `snake_case`. It was in order 
to make APIs SQL (and Python) friendly.
+Note that:
+
+* The method and variable names in PySpark are a similar case to the ``threading`` library in Python itself, whose APIs were inspired by Java. PySpark also follows `camelCase` for exposed APIs that match the Scala and Java sides.
+
+* In contrast, ``functions.py`` uses `snake_case` in order to make APIs SQL 
(and Python) friendly.
+
+* In addition, pandas-on-Spark (``pyspark.pandas``) also uses `snake_case` because this package does not need to keep its APIs consistent with other languages, as shown in the sketch below.
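+
+To illustrate the naming conventions above, a minimal sketch (assuming a local PySpark installation that includes the pandas API on Spark; the data is made up):
+
+.. code-block:: python
+
+    from pyspark.sql import SparkSession, functions as F
+    import pyspark.pandas as ps
+
+    spark = SparkSession.builder.getOrCreate()
+    df = spark.createDataFrame([("a", 1), ("b", 2)], ["name", "amount"])
+
+    # DataFrame API: camelCase, matching the Scala and Java sides.
+    renamed = df.withColumnRenamed("amount", "value").dropDuplicates()
+
+    # functions.py: snake_case, to keep the API SQL (and Python) friendly.
+    upper = renamed.select(F.upper(F.col("name")).alias("name"), "value")
+
+    # pandas API on Spark: snake_case, following pandas itself.
+    psdf = ps.DataFrame({"name": ["a", "b"], "value": [1, 2]})
+    print(psdf.sort_values("value").to_pandas())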
 
 PySpark leverages linters such as `pycodestyle 
<https://pycodestyle.pycqa.org/en/latest/>`_ and `flake8 
<https://flake8.pycqa.org/en/latest/>`_, which ``dev/lint-python`` runs. 
Therefore, make sure to run that script to double check.
+
+
+Docstring Conventions
+~~~~~~~~~~~~~~~~~~~~~
+
+PySpark follows `NumPy documentation style 
<https://numpydoc.readthedocs.io/en/latest/format.html>`_.
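+
+For illustration, a NumPy-style docstring for a hypothetical helper function might look like the following sketch:
+
+.. code-block:: python
+
+    def add_suffix(name, suffix="_col"):
+        """
+        Append a suffix to a column name.
+
+        Parameters
+        ----------
+        name : str
+            The original column name.
+        suffix : str, default "_col"
+            The suffix to append.
+
+        Returns
+        -------
+        str
+            The column name with the suffix appended.
+        """
+        return name + suffix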
+
+
+Doctest Conventions
+~~~~~~~~~~~~~~~~~~~
+
+In general, doctests should be grouped logically, with the groups separated by a newline.
+
+For instance, the first block is for the preparation statements, the second block is for using the function with a specific argument, and the third block is for another argument. As an example, please refer to `DataFrame.rsub <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rsub.html#pandas.DataFrame.rsub>`_ in pandas.
+
+These blocks should be consistently separated in PySpark doctests, and more doctests should be added when the existing doctest coverage or the number of examples shown is not enough.
diff --git a/python/docs/source/development/index.rst 
b/python/docs/source/development/index.rst
index 9c54d52..b947fe4 100644
--- a/python/docs/source/development/index.rst
+++ b/python/docs/source/development/index.rst
@@ -27,10 +27,3 @@ Development
    debugging
    setting_ide
 
-For pandas API on Spark:
-
-.. toctree::
-   :maxdepth: 2
-
-   ps_contributing
-   ps_design
diff --git a/python/docs/source/development/ps_contributing.rst 
b/python/docs/source/development/ps_contributing.rst
deleted file mode 100644
index c83057b..0000000
--- a/python/docs/source/development/ps_contributing.rst
+++ /dev/null
@@ -1,192 +0,0 @@
-==================
-Contributing Guide
-==================
-
-.. contents:: Table of contents:
-   :depth: 1
-   :local:
-
-Types of Contributions
-======================
-
-The largest amount of work consists simply of implementing the pandas API 
using Spark's built-in functions, which is usually straightforward. But there 
are many different forms of contributions in addition to writing code:
-
-1. Use the project and provide feedback, by creating new tickets or commenting 
on existing relevant tickets.
-
-2. Review existing pull requests.
-
-3. Improve the project's documentation.
-
-4. Write blog posts or tutorial articles evangelizing pandas API on Spark and 
help new users learn pandas API on Spark.
-
-5. Give a talk about pandas API on Spark at your local meetup or a conference.
-
-
-Step-by-step Guide For Code Contributions
-=========================================
-
-1. Read and understand the `Design Principles <design.rst>`_ for the project. 
Contributions should follow these principles.
-
-2. Signaling your work: If you are working on something, comment on the 
relevant ticket that you are doing so to avoid multiple people taking on the 
same work at the same time. It is also a good practice to signal that your work 
has stalled or you have moved on and want somebody else to take over.
-
-3. Understand what the functionality is in pandas or in Spark.
-
-4. Implement the functionality, with test cases providing close to 100% 
statement coverage. Document the functionality.
-
-5. Run existing and new test cases to make sure they still pass. Also run 
`dev/reformat` script to reformat Python files by using `Black 
<https://github.com/psf/black>`_, and run the linter `dev/lint-python`.
-
-6. Build the docs (`make html` in `docs` directory) and verify the docs 
related to your change look OK.
-
-7. Submit a pull request, and be responsive to code review feedback from other 
community members.
-
-That's it. Your contribution, once merged, will be available in the next 
release.
-
-
-Environment Setup
-=================
-
-Conda
------
-
-If you are using Conda, the pandas API on Spark installation and development 
environment are as follows.
-
-.. code-block:: bash
-
-    # Python 3.6+ is required
-    conda create --name koalas-dev-env python=3.6
-    conda activate koalas-dev-env
-    conda install -c conda-forge pyspark=2.4
-    pip install -r requirements-dev.txt
-    pip install -e .  # installs koalas from current checkout
-
-Once setup, make sure you switch to `koalas-dev-env` before development:
-
-.. code-block:: bash
-
-    conda activate koalas-dev-env
-
-pip
----
-
-With Python 3.6+, pip can be used as below to install and set up the 
development environment.
-
-.. code-block:: bash
-
-    pip install pyspark==2.4
-    pip install -r requirements-dev.txt
-    pip install -e .  # installs koalas from current checkout
-
-Running Tests
-=============
-
-There is a script `./dev/pytest` which is exactly same as `pytest` but with 
some default settings to run the tests easily.
-
-To run all the tests, similar to our CI pipeline:
-
-.. code-block:: bash
-
-    # Run all unittest and doctest
-    ./dev/pytest
-
-To run a specific test file:
-
-.. code-block:: bash
-
-    # Run unittest
-    ./dev/pytest -k test_dataframe.py
-
-    # Run doctest
-    ./dev/pytest -k series.py --doctest-modules databricks
-
-To run a specific doctest/unittest:
-
-.. code-block:: bash
-
-    # Run unittest
-    ./dev/pytest -k "DataFrameTest and test_Dataframe"
-
-    # Run doctest
-    ./dev/pytest -k DataFrame.corr --doctest-modules databricks
-
-Note that `-k` is used for simplicity although it takes an expression. You can 
use `--verbose` to check what to filter. See `pytest --help` for more details.
-
-
-Building Documentation
-======================
-
-To build documentation via Sphinx:
-
-.. code-block:: bash
-
-     cd docs && make clean html
-
-It generates HTMLs under `docs/build/html` directory. Open 
`docs/build/html/index.html` to check if documentation is built properly.
-
-
-Coding Conventions
-==================
-
-We follow `PEP 8 <https://www.python.org/dev/peps/pep-0008/>`_ with one 
exception: lines can be up to 100 characters in length, not 79.
-
-Doctest Conventions
-===================
-
-When writing doctests, usually the doctests in pandas are converted into 
pandas API on Spark to make sure the same codes work in pandas API on Spark.
-In general, doctests should be grouped logically by separating a newline.
-
-For instance, the first block is for the statements for preparation, the 
second block is for using the function with a specific argument,
-and third block is for another argument. As a example, please refer 
`DataFrame.rsub 
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rsub.html#pandas.DataFrame.rsub>`_
 in pandas.
-
-These blocks should be consistently separated in pandas-on-Spark doctests, and 
more doctests should be added if the coverage of the doctests or the number of 
examples to show is not enough even though they are different from pandas'.
-
-Release Guide
-=============
-
-Release Cadence
----------------
-
-Koalas 1.8.0 is the last minor release because Koalas will be officially 
included to PySpark.
-There will be only maintenance releases. Users are expected to directly use 
PySpark with Apache Spark 3.2+.
-
-Release Instructions
---------------------
-
-Only project maintainers can do the following to publish a release.
-
-1. Make sure version is set correctly in `pyspark.pandas/version.py`.
-
-2. Make sure the build is green.
-
-3. Create a new release on GitHub. Tag it as the same version as the setup.py. 
If the version is "0.1.0", tag the commit as "v0.1.0".
-
-4. Upload the package to PyPi:
-
-  .. code-block:: bash
-
-      rm -rf dist/koalas*
-      python setup.py sdist bdist_wheel
-      export package_version=$(python setup.py --version)
-      echo $package_version
-
-      python3 -m pip install --user --upgrade twine
-
-      # for test
-      python3 -m twine upload --repository-url https://test.pypi.org/legacy/ 
dist/koalas-$package_version-py3-none-any.whl 
dist/koalas-$package_version.tar.gz
-
-      # for release
-      python3 -m twine upload --repository-url https://upload.pypi.org/legacy/ 
dist/koalas-$package_version-py3-none-any.whl 
dist/koalas-$package_version.tar.gz
-
-5. Verify the uploaded package can be installed and executed. One unofficial 
tip is to run the doctests of pandas API on Spark within a Python interpreter 
after installing it.
-
-  .. code-block:: python
-
-      import os
-
-      from pytest import main
-      import databricks
-
-      test_path = os.path.abspath(os.path.dirname(databricks.__file__))
-      main(['-k', '-to_delta -read_delta', '--verbose', '--showlocals', 
'--doctest-modules', test_path])
-
-Note that this way might require additional settings, for instance, 
environment variables.
-
diff --git a/python/docs/source/development/ps_design.rst 
b/python/docs/source/development/ps_design.rst
deleted file mode 100644
index b131e60..0000000
--- a/python/docs/source/development/ps_design.rst
+++ /dev/null
@@ -1,85 +0,0 @@
-=================
-Design Principles
-=================
-
-.. currentmodule:: pyspark.pandas
-
-This section outlines design principles guiding the pandas API on Spark.
-
-Be Pythonic
------------
-
-Pandas API on Spark targets Python data scientists. We want to stick to the 
convention that users are already familiar with as much as possible. Here are 
some examples:
-
-- Function names and parameters use snake_case, rather than CamelCase. This is 
different from PySpark's design. For example, pandas API on Spark has 
`to_pandas()`, whereas PySpark has `toPandas()` for converting a DataFrame into 
a pandas DataFrame. In limited cases, to maintain compatibility with Spark, we 
also provide Spark's variant as an alias.
-
-- Pandas API on Spark respects to the largest extent the conventions of the 
Python numerical ecosystem, and allows the use of NumPy types, etc. that can be 
supported by Spark.
-
-- pandas-on-Spark docs' style and infrastructure simply follow rest of the 
PyData projects'.
-
-Unify small data (pandas) API and big data (Spark) API, but pandas first
-------------------------------------------------------------------------
-
-The pandas-on-Spark DataFrame is meant to provide the best of pandas and Spark 
under a single API, with easy and clear conversions between each API when 
necessary. When Spark and pandas have similar APIs with subtle differences, the 
principle is to honor the contract of the pandas API first.
-
-There are different classes of functions:
-
- 1. Functions that are found in both Spark and pandas under the same name 
(`count`, `dtypes`, `head`). The return value is the same as the return type in 
pandas (and not Spark's).
-    
- 2. Functions that are found in Spark but that have a clear equivalent in 
pandas, e.g. `alias` and `rename`. These functions will be implemented as the 
alias of the pandas function, but should be marked that they are aliases of the 
same functions. They are provided so that existing users of PySpark can get the 
benefits of pandas API on Spark without having to adapt their code.
- 
- 3. Functions that are only found in pandas. When these functions are 
appropriate for distributed datasets, they should become available in pandas 
API on Spark.
- 
- 4. Functions that are only found in Spark that are essential to controlling 
the distributed nature of the computations, e.g. `cache`. These functions 
should be available in pandas API on Spark.
-
-We are still debating whether data transformation functions only available in 
Spark should be added to pandas API on Spark, e.g. `select`. We would love to 
hear your feedback on that.
-
-Return pandas-on-Spark data structure for big data, and pandas data structure 
for small data
---------------------------------------------------------------------------------------------
-
-Often developers face the question whether a particular function should return 
a pandas-on-Spark DataFrame/Series, or a pandas DataFrame/Series. The principle 
is: if the returned object can be large, use a pandas-on-Spark 
DataFrame/Series. If the data is bound to be small, use a pandas 
DataFrame/Series. For example, `DataFrame.dtypes` return a pandas Series, 
because the number of columns in a DataFrame is bounded and small, whereas 
`DataFrame.head()` or `Series.unique()` returns a pandas [...]
-
-Provide discoverable APIs for common data science tasks
--------------------------------------------------------
-
-At the risk of overgeneralization, there are two API design approaches: the 
first focuses on providing APIs for common tasks; the second starts with 
abstractions, and enable users to accomplish their tasks by composing 
primitives. While the world is not black and white, pandas takes more of the 
former approach, while Spark has taken more of the later.
-
-One example is value count (count by some key column), one of the most common 
operations in data science. pandas `DataFrame.value_count` returns the result 
in sorted order, which in 90% of the cases is what users prefer when exploring 
data, whereas Spark's does not sort, which is more desirable when building data 
pipelines, as users can accomplish the pandas behavior by adding an explicit 
`orderBy`.
-
-Similar to pandas, pandas API on Spark should also lean more towards the 
former, providing discoverable APIs for common data science tasks. In most 
cases, this principle is well taken care of by simply implementing pandas' 
APIs. However, there will be circumstances in which pandas' APIs don't address 
a specific need, e.g. plotting for big data.
-
-Provide well documented APIs, with examples
--------------------------------------------
-
-All functions and parameters should be documented. Most functions should be 
documented with examples, because those are the easiest to understand than a 
blob of text explaining what the function does.
-
-A recommended way to add documentation is to start with the docstring of the 
corresponding function in PySpark or pandas, and adapt it for pandas API on 
Spark. If you are adding a new function, also add it to the API reference doc 
index page in `docs/source/reference` directory. The examples in docstring also 
improve our test coverage.
-
-Guardrails to prevent users from shooting themselves in the foot
-----------------------------------------------------------------
-
-Certain operations in pandas are prohibitively expensive as data scales, and 
we don't want to give users the illusion that they can rely on such operations 
in pandas API on Spark. That is to say, methods implemented in pandas API on 
Spark should be safe to perform by default on large datasets. As a result, the 
following capabilities are not implemented in pandas API on Spark:
-
-1. Capabilities that are fundamentally not parallelizable: e.g. imperatively 
looping over each element
-2. Capabilities that require materializing the entire working set in a single 
node's memory. This is why we do not implement `pandas.DataFrame.to_xarray 
<https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_xarray.html>`_.
 Another example is the `_repr_html_` call caps the total number of records 
shown to a maximum of 1000, to prevent users from blowing up their driver node 
simply by typing the name of the DataFrame in a notebook.
-
-A few exceptions, however, exist. One common pattern with "big data science" 
is that while the initial dataset is large, the working set becomes smaller as 
the analysis goes deeper. For example, data scientists often perform 
aggregation on datasets and want to then convert the aggregated dataset to some 
local data structure. To help data scientists, we offer the following:
-
-- :func:`DataFrame.to_pandas`: returns a pandas DataFrame, koalas only
-- :func:`DataFrame.to_numpy`: returns a numpy array, works with both pandas 
and pandas API on Spark
-
-Note that it is clear from the names that these functions return some local 
data structure that would require materializing data in a single node's memory. 
For these functions, we also explicitly document them with a warning note that 
the resulting data structure must be small.
-
-Be a lean API layer and move fast
----------------------------------
-
-Pandas API on Spark is designed as an API overlay layer on top of Spark. The 
project should be lightweight, and most functions should be implemented as 
wrappers
-around Spark or pandas - the pandas-on-Spark library is designed to be used 
only in the Spark's driver side in general.
-Pandas API on Spark does not accept heavyweight implementations, e.g. 
execution engine changes.
-
-This approach enables us to move fast. For the considerable future, we aim to 
be making monthly releases. If we find a critical bug, we will be making a new 
release as soon as the bug fix is available.
-
-High test coverage
-------------------
-
-Pandas API on Spark should be well tested. The project tracks its test 
coverage with over 90% across the entire codebase, and close to 100% for 
critical parts. Pull requests will not be accepted unless they have close to 
100% statement coverage from the codecov report.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
