gengliangwang commented on a change in pull request #34315:
URL: https://github.com/apache/spark/pull/34315#discussion_r730963433



##########
File path: python/docs/source/getting_started/install.rst
##########
@@ -83,46 +83,54 @@ Note that this installation way of PySpark with/without a specific Hadoop versio
 Using Conda
 -----------
 
-Conda is an open-source package management and environment management system which is a part of
-the `Anaconda <https://docs.continuum.io/anaconda/>`_ distribution. It is both cross-platform and
-language agnostic. In practice, Conda can replace both `pip <https://pip.pypa.io/en/latest/>`_ and
-`virtualenv <https://virtualenv.pypa.io/en/latest/>`_.
-
-Create new virtual environment from your terminal as shown below:
-
-.. code-block:: bash
-
-    conda create -n pyspark_env
-
-After the virtual environment is created, it should be visible under the list of Conda environments
-which can be seen using the following command:
+Conda is an open-source package management and environment management system (developed by
+`Anaconda` <https://www.anaconda.com/>`_), which is best installed through

Review comment:
   ```suggestion
   `Anaconda <https://www.anaconda.com/>`_), which is best installed through
   ```
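
For readers following along, below is a minimal sketch of the Conda workflow the surrounding docs describe: create an environment, activate it, and install PySpark into it. Using the conda-forge channel for the install step is an assumption on my part; the hunk above does not show that step.

```bash
# Create and activate a dedicated environment for PySpark
conda create -n pyspark_env
conda activate pyspark_env

# Install PySpark into the active environment; the conda-forge channel
# is an assumption here, the diff hunk above does not show this step
conda install -c conda-forge pyspark
```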




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


