HyukjinKwon commented on a change in pull request #32797:
URL: https://github.com/apache/spark/pull/32797#discussion_r647061279



##########
File path: R/pkg/R/SQLContext.R
##########
@@ -455,6 +467,10 @@ read.parquet <- function(path, ...) {
 #'
 #' @param path Path of file to read. A vector of multiple paths is allowed.
 #' @param ... additional external data source specific named properties.
+#'            You can find the text-specific options for reading text files in
+#'            \href{

Review comment:
       ```suggestion
   #'            \url{
   ```
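
For context on what the suggestion changes: in Rd markup, `\href{URL}{text}` renders `text` as the link label, while `\url{URL}` renders the URL itself. A minimal roxygen sketch, reusing the JSON data source URL quoted further down purely as an illustration:

```r
# Rd link macros as they appear in roxygen2 comments (illustration only):
#' \href{https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}{Data Source Option}
# renders the label "Data Source Option" as a hyperlink, whereas
#' \url{https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}
# renders the URL itself as the link text.
```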

##########
File path: R/pkg/R/SQLContext.R
##########
@@ -602,6 +618,9 @@ loadDF <- function(path = NULL, source = NULL, schema = NULL, ...) {
 #' Create a SparkDataFrame representing the database table accessible via JDBC URL
 #'
 #' Additional JDBC database connection properties can be set (...)
+#' You can find the JDBC-specific option and parameter documentation for reading tables via JDBC in
+#' \href{https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option}{

Review comment:
       ```suggestion
   #' \url{https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html#data-source-option}{
   ```
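
Side note for readers of this hunk: these JDBC-specific options are the extra named arguments forwarded through `...` of `read.jdbc`. A minimal sketch, assuming a reachable JDBC endpoint; the URL, table name and credentials are placeholders:

```r
library(SparkR)
sparkR.session()

# Connection properties / JDBC data source options (e.g. user, password,
# fetchsize) are passed as named arguments through `...`.
# All values below are placeholders.
df <- read.jdbc("jdbc:postgresql://localhost/test", "tablename",
                user = "username", password = "password")
head(df)
```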

##########
File path: R/pkg/R/functions.R
##########
@@ -258,11 +258,19 @@ NULL
 #'          \item \code{to_json}, \code{from_json} and \code{schema_of_json}: this contains
 #'              additional named properties to control how it is converted and accepts the
 #'              same options as the JSON data source.
+#'              You can find the JSON-specific options for reading/writing JSON files in
+#'              \href{

Review comment:
       ```suggestion
   #'              \url{
   ```
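
As a concrete illustration of "accepts the same options as the JSON data source" for `from_json`: a minimal sketch, assuming a running SparkR session and that a DDL-string schema is accepted (as in recent Spark versions); the column name and data are made up:

```r
library(SparkR)
sparkR.session()

# One-row SparkDataFrame with a JSON string column (illustration only).
df <- sql("SELECT '{\"d\": \"26/08/2015\"}' AS raw")

# `dateFormat` is a JSON data source option forwarded through `...`.
parsed <- select(df, from_json(df$raw, "d DATE", dateFormat = "dd/MM/yyyy"))
head(parsed)
```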

##########
File path: R/pkg/R/functions.R
##########
@@ -258,11 +258,19 @@ NULL
 #'          \item \code{to_json}, \code{from_json} and \code{schema_of_json}: this contains
 #'              additional named properties to control how it is converted and accepts the
 #'              same options as the JSON data source.
+#'              You can find the JSON-specific options for reading/writing JSON files in
+#'              \href{
+#'              https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option}{
+#'              Data Source Option} in the version you use.
 #'          \item \code{to_json}: it supports the "pretty" option which enables pretty
 #'              JSON generation.
 #'          \item \code{to_csv}, \code{from_csv} and \code{schema_of_csv}: this contains
 #'              additional named properties to control how it is converted and accepts the
 #'              same options as the CSV data source.
+#'              You can find the CSV-specific options for reading/writing CSV files in
+#'              \href{

Review comment:
       ```suggestion
   #'              \url{
   ```
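
Likewise for the CSV variants: a minimal sketch of passing a CSV data source option through `...` of `from_csv`, under the same assumptions as above (running session, DDL-string schema accepted); column name and data are made up:

```r
library(SparkR)
sparkR.session()

# One-row SparkDataFrame with a delimited string column (illustration only).
df <- sql("SELECT '1;abc' AS raw")

# `sep` is a CSV data source option forwarded through `...`.
parsed <- select(df, from_csv(df$raw, "a INT, b STRING", sep = ";"))
head(parsed)
```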



