HyukjinKwon commented on a change in pull request #32835:
URL: https://github.com/apache/spark/pull/32835#discussion_r648801975
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -44,11 +44,11 @@ The example below shows how data types are casted from PySpark DataFrame to Koal
dtype: object
-The example below shows how data types are casted from Koalas DataFrame to PySpark DataFrame.
+The example below shows how data types are casted from pandas APIs on Spark DataFrame to PySpark DataFrame.
.. code-block:: python
- # 1. Create a Koalas DataFrame
+ # 1. Create a pandas APIs on Spark DataFrame
Review comment:
pandas-on-Spark DataFrame
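For readers following along, a minimal runnable sketch of step 1 (not part of the patch; it assumes the post-rename `pyspark.pandas` module, aliased as `ks` to match the doc's examples):
```python
# Hedged sketch, not from the PR: create a pandas-on-Spark DataFrame.
# Assumes the post-rename module name pyspark.pandas; the doc's examples
# alias it as ks.
import datetime
import decimal

import pyspark.pandas as ks

kdf = ks.DataFrame({
    "int64": [1, 2, 3],
    "float64": [0.1, 0.2, 0.3],
    "object_string": ["a", "b", "c"],
    "object_decimal": [decimal.Decimal("1.1")] * 3,
    "object_date": [datetime.date(2021, 6, 10)] * 3,
})
print(kdf.dtypes)  # pandas-style dtypes, backed by Spark column types
```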
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -57,7 +57,7 @@ The example below shows how data types are casted from Koalas DataFrame to PySpa
>>> kdf['int32'] = kdf['int32'].astype('int32')
>>> kdf['float32'] = kdf['float32'].astype('float32')
- # 3. Check the Koalas data types
+ # 3. Check the pandas APIs on Spark data types
Review comment:
pandas-on-Spark data types
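A small sketch of what the `astype` narrowing in this hunk does, assuming the `kdf` from the sketch above and the `Series.spark.data_type` accessor shown later in this page:
```python
# Hedged sketch: astype narrows both the pandas-side dtype and the
# underlying Spark type. Assumes kdf from the sketch above.
s = kdf["float64"].astype("float32")
print(s.dtype)            # float32
print(s.spark.data_type)  # FloatType
```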
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -72,22 +72,22 @@ The example below shows how data types are casted from Koalas DataFrame to PySpa
object_date object
dtype: object
- # 4. Convert Koalas DataFrame to PySpark DataFrame
+ # 4. Convert pandas APIs on Spark DataFrame to PySpark DataFrame
Review comment:
pandas-on-Spark DataFrame
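For context, the conversion in step 4 boils down to `DataFrame.to_spark()`; a minimal sketch assuming the `kdf` above:
```python
# Hedged sketch: to_spark() yields a plain PySpark DataFrame whose
# schema mirrors the pandas-on-Spark dtypes.
sdf = kdf.to_spark()
sdf.printSchema()  # e.g. int64 -> bigint, float64 -> double
```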
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -72,22 +72,22 @@ The example below shows how data types are casted from Koalas DataFrame to PySpa
object_date object
dtype: object
- # 4. Convert Koalas DataFrame to PySpark DataFrame
+ # 4. Convert pandas APIs on Spark DataFrame to PySpark DataFrame
>>> sdf = kdf.to_spark()
# 5. Check the PySpark data types
>>> sdf
DataFrame[int8: tinyint, bool: boolean, float32: float, float64: double, int32: int, int64: bigint, int16: smallint, datetime: timestamp, object_string: string, object_decimal: decimal(2,1), object_date: date]
-Type casting between pandas and Koalas
---------------------------------------
+Type casting between pandas and pandas APIs on Spark
+----------------------------------------------------
-When converting Koalas DataFrame to pandas DataFrame, and the data types are basically same as pandas.
+When converting pandas APIs on Spark DataFrame to pandas DataFrame, and the data types are basically same as pandas.
Review comment:
pandas-on-Spark DataFrame
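The pandas side of this paragraph is `DataFrame.to_pandas()`; a minimal sketch assuming the `kdf` above (note it collects the data to the driver):
```python
# Hedged sketch: to_pandas() materializes the distributed frame as a
# local pandas DataFrame; dtypes carry over essentially unchanged.
pdf = kdf.to_pandas()
print(type(pdf))   # <class 'pandas.core.frame.DataFrame'>
print(pdf.dtypes)  # basically the same dtypes as kdf.dtypes
```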
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -72,22 +72,22 @@ The example below shows how data types are casted from Koalas DataFrame to PySpa
object_date object
dtype: object
- # 4. Convert Koalas DataFrame to PySpark DataFrame
+ # 4. Convert pandas APIs on Spark DataFrame to PySpark DataFrame
>>> sdf = kdf.to_spark()
# 5. Check the PySpark data types
>>> sdf
DataFrame[int8: tinyint, bool: boolean, float32: float, float64: double, int32: int, int64: bigint, int16: smallint, datetime: timestamp, object_string: string, object_decimal: decimal(2,1), object_date: date]
-Type casting between pandas and Koalas
---------------------------------------
+Type casting between pandas and pandas APIs on Spark
+----------------------------------------------------
-When converting Koalas DataFrame to pandas DataFrame, and the data types are basically same as pandas.
+When converting pandas APIs on Spark DataFrame to pandas DataFrame, and the data types are basically same as pandas.
.. code-block:: python
- # Convert Koalas DataFrame to pandas DataFrame
+ # Convert pandas APIs on Spark DataFrame to pandas DataFrame
Review comment:
pandas-on-Spark DataFrame
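Going the other way uses `ks.from_pandas`; a hedged round-trip sketch under the same assumptions as the sketches above:
```python
# Hedged sketch: round-trip pandas -> pandas-on-Spark -> pandas.
import pandas as pd

pdf = pd.DataFrame({"float64": [0.1, 0.2], "object_string": ["a", "b"]})
kdf2 = ks.from_pandas(pdf)
assert (kdf2.to_pandas().dtypes == pdf.dtypes).all()
```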
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -177,7 +177,7 @@ datetime.date DateType
decimal.Decimal DecimalType(38, 18)
================= ===================
-For decimal type, Koalas uses Spark's system default precision and scale.
+For decimal type, pandas APIs on Spark uses Spark's system default precision and scale.
Review comment:
```suggestion
For decimal type, pandas APIs on Spark use Spark's system default precision and scale.
```
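The decimal behavior being reworded here can be checked directly; a minimal sketch assuming the `ks` alias above (the doc's mapping table gives DecimalType(38, 18)):
```python
# Hedged sketch: a Python decimal.Decimal maps to Spark's system default
# decimal precision and scale, DecimalType(38, 18) per the doc's table.
import decimal

s = ks.Series([decimal.Decimal("0.1"), decimal.Decimal("2.5")])
print(s.spark.data_type)  # DecimalType(38,18)
```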
##########
File path: python/docs/source/user_guide/pandas_on_spark/types.rst
##########
@@ -204,21 +204,21 @@ You can also check the underlying PySpark data type of `Series` or schema of `Da
>>> ks.Series([0.3, 0.1, 0.8]).spark.data_type
DoubleType
- >>> ks.Series(["welcome", "to", "Koalas"]).spark.data_type
+ >>> ks.Series(["welcome", "to", "pandas APIs on Spark"]).spark.data_type
Review comment:
```suggestion
>>> ks.Series(["welcome", "to", "pandas-on-Spark"]).spark.data_type
```
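The accessor in this suggestion also works at the DataFrame level; a hedged sketch (`DataFrame.spark.schema()` is an assumption from the pandas-on-Spark Spark accessor, alongside the `Series.spark.data_type` shown in the hunk):
```python
# Hedged sketch: inspect the underlying Spark type of a Series and the
# schema of a whole DataFrame via the spark accessor.
print(ks.Series([0.3, 0.1, 0.8]).spark.data_type)  # DoubleType
print(kdf.spark.schema())  # StructType with one field per column
```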