HyukjinKwon commented on a change in pull request #32835:
URL: https://github.com/apache/spark/pull/32835#discussion_r648794764



##########
File path: python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst
##########
@@ -30,7 +30,7 @@ Reading and writing DataFrames
 
 In the example below, you will read and write a table in SQLite.
 
-Firstly, create the ``example`` database as below via Python's SQLite library. This will be read to Koalas later:
+Firstly, create the ``example`` database as below via Python's SQLite library. This will be read to pandas APIs on Spark later:

Review comment:
      > read to pandas APIs on Spark

    This reads a bit awkwardly; maybe "read into pyspark-on-Spark"?

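For context, the doc step this comment reviews creates the ``example`` database with Python's standard `sqlite3` module. A minimal sketch of that step (the table name and columns here are illustrative, not quoted from the diff):

```python
import sqlite3

# Create the ``example`` SQLite database the user guide reads later.
# Table/column names are assumptions for illustration only.
con = sqlite3.connect("example.db")
cur = con.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS stocks (date text, symbol text, price real)"
)
cur.execute("INSERT INTO stocks VALUES ('2021-01-01', 'AAPL', 100.0)")
con.commit()
con.close()
```

Once the database file exists, the user guide's pandas-on-Spark read step can point at it via JDBC.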
##########
File path: python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst
##########
@@ -48,13 +48,13 @@ Firstly, create the ``example`` database as below via Python's SQLite library. T
     con.commit()
     con.close()
 
-Koalas requires a JDBC driver to read so it requires the driver for your particular database to be on the Spark's classpath. For SQLite JDBC driver, you can download it, for example, as below:
+Pandas APIs on Spark requires a JDBC driver to read so it requires the driver for your particular database to be on the Spark's classpath. For SQLite JDBC driver, you can download it, for example, as below:

Review comment:
    ```suggestion
    Pandas APIs on Spark require a JDBC driver to read so it requires the driver for your particular database to be on the Spark's classpath. For SQLite JDBC driver, you can download it, for example, as below:
    ```
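On the classpath point itself: a hedged sketch of how a driver jar can be fetched and put on Spark's classpath. The artifact coordinates and version below are assumptions (the `org.xerial` sqlite-jdbc artifact on Maven Central), not taken from the doc; `--jars` is the standard `spark-submit`/`pyspark` flag for this.

```shell
# Download the SQLite JDBC driver (coordinates/version are an example, not from the doc)
curl -L -O https://repo1.maven.org/maven2/org/xerial/sqlite-jdbc/3.34.0/sqlite-jdbc-3.34.0.jar

# Put the jar on Spark's classpath when launching the session
pyspark --jars sqlite-jdbc-3.34.0.jar
```

Any mechanism that adds the jar to the driver and executor classpath (e.g. the `spark.jars` configuration) would serve the same purpose.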




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
