HyukjinKwon commented on a change in pull request #32835:
URL: https://github.com/apache/spark/pull/32835#discussion_r648795252



##########
File path: python/docs/source/user_guide/pandas_on_spark/options.rst
##########
@@ -227,24 +227,24 @@ This is conceptually equivalent to the PySpark example as 
below:
 Available options
 -----------------
 
-=============================== ============== =====================================================
+=============================== ============== ===================================================================
 Option                          Default        Description
-=============================== ============== =====================================================
-display.max_rows                1000           This sets the maximum number of rows Koalas should
+=============================== ============== ===================================================================
+display.max_rows                1000           This sets the maximum number of rows pandas APIs on Spark should
                                                output when printing out various output. For example,
                                                this value determines the number of rows to be shown
                                                at the repr() in a dataframe. Set `None` to unlimit
                                                the input length. Default is 1000.
 compute.max_rows                1000           'compute.max_rows' sets the limit of the current
-                                               Koalas DataFrame. Set `None` to unlimit the input
+                                               pandas APIs on Spark DataFrame. Set `None` to unlimit the input
                                                length. When the limit is set, it is executed by the
                                                shortcut by collecting the data into the driver, and
                                                then using the pandas API. If the limit is unset, the
                                                operation is executed by PySpark. Default is 1000.
 compute.shortcut_limit          1000           'compute.shortcut_limit' sets the limit for a
                                                shortcut. It computes specified number of rows and
                                                use its schema. When the dataframe length is larger
-                                               than this limit, Koalas uses PySpark to compute.
+                                               than this limit, pandas APIs on Spark uses PySpark to compute.

Review comment:
       ```suggestion
                                                  than this limit, pandas APIs on Spark use PySpark to compute.
       ```
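For context on the options discussed in this hunk: pandas APIs on Spark mirrors pandas' own options machinery (`set_option`, `get_option`, `reset_option`, `option_context`; exposed under `pyspark.pandas`). A minimal sketch of the same pattern using plain pandas, whose `display.max_rows` option behaves analogously (plain pandas is used here only so the snippet runs without a Spark session; this is an illustration, not the doc's own example):

```python
import pandas as pd

# Set the maximum number of rows shown when a DataFrame is printed.
pd.set_option("display.max_rows", 50)
assert pd.get_option("display.max_rows") == 50

# Temporarily override the option within a scoped block.
with pd.option_context("display.max_rows", 5):
    assert pd.get_option("display.max_rows") == 5

# Restore the library default.
pd.reset_option("display.max_rows")
```

With `pyspark.pandas`, the equivalent calls take the option names from the table above, e.g. `compute.max_rows` and `compute.shortcut_limit`.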




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


