Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14980#discussion_r77740084
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -0,0 +1,853 @@
+---
+title: "SparkR - Practical Guide"
+output:
+ html_document:
+ theme: united
+ toc: true
+ toc_depth: 4
+ toc_float: true
+ highlight: textmate
+---
+
+## Overview
+
+SparkR is an R package that provides a light-weight frontend to use Apache Spark from R. In Spark 2.0.0, SparkR provides a distributed data frame implementation that supports data processing operations such as selection, filtering and aggregation, as well as distributed machine learning using [MLlib](http://spark.apache.org/mllib/).
+
+## Getting Started
+
+We begin with an example running on the local machine and provide an
overview of the use of SparkR: data ingestion, data processing and machine
learning.
+
+First, let's load and attach the package.
+```{r, message=FALSE}
+library(SparkR)
+```
+
+`SparkSession` is the entry point into SparkR, which connects your R program to a Spark cluster. You can create a `SparkSession` with `sparkR.session` and pass in options such as the application name, any Spark packages the application depends on, and so on.
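+
+As an illustrative sketch (the application name used here is just an example, not a required value), such options can be passed as named arguments:
+```{r, eval=FALSE}
+# Not run: start a session with an explicit application name
+sparkR.session(appName = "SparkR-vignette-example")
+```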
+
+In this vignette we use the default settings, under which Spark runs in local mode. If no existing Spark installation is found, the Spark package is automatically downloaded in the background. For more details about setup, see [Spark Session](#SetupSparkSession).
+
+```{r, message=FALSE, warning=FALSE}
+sparkR.session()
+```
+
+The operations in SparkR are centered around an R class called
`SparkDataFrame`. It is a distributed collection of data organized into named
columns, which is conceptually equivalent to a table in a relational database
or a data frame in R, but with richer optimizations under the hood.
+
+A `SparkDataFrame` can be constructed from a wide array of sources, such as structured data files, tables in Hive, external databases, or existing local R data frames. For example, here we create a `SparkDataFrame` from a local R data frame:
+
+```{r}
+cars <- cbind(model = rownames(mtcars), mtcars)
+carsDF <- createDataFrame(cars)
+```
+
+We can view the first few rows of the `SparkDataFrame` with the `showDF` or `head` function.
+```{r}
+showDF(carsDF)
+```
+
+Common data processing operations such as `filter` and `select` are supported on the `SparkDataFrame`.
+```{r}
+carsSubDF <- select(carsDF, "model", "mpg", "hp")
+carsSubDF <- filter(carsSubDF, carsSubDF$hp >= 200)
+showDF(carsSubDF)
+```
+
+SparkR can use many common aggregation functions after grouping.
+
+```{r}
+carsGPDF <- summarize(groupBy(carsDF, carsDF$gear), count = n(carsDF$gear))
+showDF(carsGPDF)
+```
+
+The results `carsDF`, `carsSubDF` and `carsGPDF` are all `SparkDataFrame` objects. To convert one back to a local R `data.frame`, we can use `collect`.
+```{r}
+carsGP <- collect(carsGPDF)
+class(carsGP)
+```
+
+SparkR supports a number of commonly used machine learning algorithms.
Under the hood, SparkR uses MLlib to train the model. Users can call `summary`
to print a summary of the fitted model, `predict` to make predictions on new
data, and `write.ml`/`read.ml` to save/load fitted models.
+
+SparkR supports a subset of R formula operators for model fitting, including `~`, `.`, `:`, `+`, and `-`. We use linear regression as an example.
+```{r}
+model <- spark.glm(carsDF, mpg ~ wt + cyl)
+```
+
+```{r}
+summary(model)
+```
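+
+`predict` can then be used to score a `SparkDataFrame`; as a minimal sketch, we simply reuse `carsDF` as the input here, purely for illustration:
+```{r, eval=FALSE}
+# Not run: predict() returns a SparkDataFrame with a "prediction" column appended
+predictions <- predict(model, carsDF)
+head(select(predictions, "mpg", "prediction"))
+```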
+
+The model can be saved with `write.ml` and loaded back with `read.ml`.
+```{r, eval=FALSE}
+write.ml(model, path = "/HOME/tmp/mlModel/glmModel")
+```
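+
+The saved model can later be loaded back and inspected; a sketch assuming the same path as above:
+```{r, eval=FALSE}
+# Not run: load the fitted model saved above
+savedModel <- read.ml("/HOME/tmp/mlModel/glmModel")
+summary(savedModel)
+```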
+
+Finally, we can stop the Spark session by running
+```{r, eval=FALSE}
+sparkR.session.stop()
+```
+
+## Setup
+
+### Installation
+
+Unlike many other R packages, using SparkR requires an additional installation of Apache Spark. The Spark installation is used to run a backend process that compiles and executes SparkR programs.
+
+If you don't have Spark installed on the computer, you may download it from the [Apache Spark website](http://spark.apache.org/downloads.html). Alternatively, we provide an easy-to-use function, `install.spark`, to complete this process.
+
+```{r, eval=FALSE}
+install.spark()
+```
+
+If you already have Spark installed, you don't have to install it again; instead, pass the `sparkHome` argument to `sparkR.session` to let SparkR know where the existing Spark installation is.
+
+```{r, eval=FALSE}
+sparkR.session(sparkHome = "/HOME/spark")
+```
+
+### Spark Session {#SetupSparkSession}
+
+**For Windows users**: Because file path prefixes differ across operating systems, a wrong prefix may be used by default. A current workaround is to specify `spark.sql.warehouse.dir` explicitly when starting the `SparkSession`.
+
+```{r, eval=FALSE}
+spark_warehouse_path <- file.path(path.expand('~'), "spark-warehouse")
+sparkR.session(spark.sql.warehouse.dir = spark_warehouse_path)
+```
+
+In addition to `sparkHome`, many other options can be specified in
`sparkR.session`. For a complete list, see the [SparkR API
doc](http://spark.apache.org/docs/latest/api/R/sparkR.session.html).
+
+In particular, the following Spark driver properties can be set in
`sparkConfig`.
+
+Property Name | Property group | spark-submit equivalent
+---------------- | ------------------ | ----------------------
+spark.driver.memory | Application Properties | --driver-memory
+spark.driver.extraClassPath | Runtime Environment | --driver-class-path
+spark.driver.extraJavaOptions | Runtime Environment | --driver-java-options
+spark.driver.extraLibraryPath | Runtime Environment | --driver-library-path
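+
+For example, driver memory can be increased through `sparkConfig` as sketched below (the `2g` value is only illustrative):
+```{r, eval=FALSE}
+# Not run: adjust the memory value to your environment
+sparkR.session(sparkConfig = list(spark.driver.memory = "2g"))
+```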
+
+
+
+### Cluster Mode
+SparkR can connect to remote Spark clusters. [Cluster Mode
Overview](http://spark.apache.org/docs/latest/cluster-overview.html) is a good
introduction to different Spark cluster modes.
+
+When connecting SparkR to a remote Spark cluster, make sure that the Spark version and Hadoop version on the local machine match the corresponding versions on the cluster. The current SparkR package is compatible with
+```{r, echo=FALSE, tidy = TRUE}
+paste("Spark", packageVersion("SparkR"))
+```
+This version should be used on both the local computer and the remote cluster.
+
+To connect, pass the URL of the master node to `sparkR.session`. A complete list of the supported master URL formats can be found in [Spark Master URLs](http://spark.apache.org/docs/latest/submitting-applications.html#master-urls).
+For example, to connect to a local standalone Spark master, we can call
+
+```{r, eval=FALSE}
+sparkR.session(master = "spark://local:7077")
+```
+
+For a YARN cluster, SparkR supports client mode with the master set to "yarn".
+```{r, eval=FALSE}
+sparkR.session(master = "yarn")
+```
+
+
+## Data Import
+
+### Local Data Frame
+The simplest way to create a `SparkDataFrame` is to convert a local R data frame. Specifically, we can use `as.DataFrame` or `createDataFrame` and pass in the local R data frame. As an example, the following creates a `SparkDataFrame` based on the `faithful` dataset from R.
+```{r}
+df <- as.DataFrame(faithful)
+head(df)
+```
+
+### Data Sources
+SparkR supports operating on a variety of data sources through the
`SparkDataFrame` interface. You can check the Spark SQL programming guide for
more [specific
options](https://spark.apache.org/docs/latest/sql-programming-guide.html#manually-specifying-options)
that are available for the built-in data sources.
+
+The general method for creating a `SparkDataFrame` from a data source is `read.df`. This method takes the path of the file to load and the type of data source; the currently active Spark session is used automatically. SparkR supports reading CSV, JSON and Parquet files natively, and through Spark Packages you can find data source connectors for popular file formats like Avro. These packages can be added with the `sparkPackages` parameter when initializing a SparkSession using `sparkR.session`.
+
+```{r, eval=FALSE}
+sparkR.session(sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")
+```
+
+The following example shows how to use data sources with a CSV input file. For more information, please refer to the SparkR [read.df](https://spark.apache.org/docs/latest/api/R/read.df.html) API documentation.
+```{r, eval=FALSE}
+# csvPath points to a CSV file accessible to Spark
+df <- read.df(csvPath, "csv", header = "true", inferSchema = "true", na.strings = "NA")
+```
+
+The data sources API natively supports JSON-formatted input files. Note that the file used here is not a typical JSON file: each line must contain a separate, self-contained, valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
+
+Let's take a look at the first two lines of the raw JSON file used here.
+
+```{r}
+filePath <- paste0(sparkR.conf("spark.home"),
+ "/examples/src/main/resources/people.json")
+readLines(filePath, n = 2L)
+```
+
+We use `read.df` to read that into a `SparkDataFrame`.
+
+```{r}
+people <- read.df(filePath, "json")
+count(people)
+head(people)
+```
+
+SparkR automatically infers the schema from the JSON file.
+```{r}
+printSchema(people)
+```
+
+If we want to read multiple JSON files, `read.json` can be used.
+```{r}
+people <- read.json(paste0(Sys.getenv("SPARK_HOME"),
+ c("/examples/src/main/resources/people.json",
+ "/examples/src/main/resources/people.json")))
+count(people)
+```
+
+The data sources API can also be used to save `SparkDataFrame`s in multiple file formats. For example, we can save the `SparkDataFrame` from the previous example to a Parquet file using `write.df`.
+```{r, eval=FALSE}
+write.df(people, path = "people.parquet", source = "parquet", mode = "overwrite")
+```
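+
+The saved Parquet output can be read back with `read.df` as well; a sketch assuming the path used above:
+```{r, eval=FALSE}
+# Not run: read the Parquet file written above back into a SparkDataFrame
+peopleParquet <- read.df("people.parquet", "parquet")
+head(peopleParquet)
+```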
+
+### Hive Tables
+You can also create SparkDataFrames from Hive tables. To do this, we need to create a SparkSession with Hive support, which can access tables in the Hive MetaStore. Note that Spark should have been built with Hive support; more details can be found in the [SQL programming guide](https://spark.apache.org/docs/latest/sql-programming-guide.html). In SparkR, by default, `sparkR.session` attempts to create a SparkSession with Hive support enabled (`enableHiveSupport = TRUE`).
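+
+As a sketch, Hive support can also be requested explicitly when starting the session:
+```{r, eval=FALSE}
+# Not run: start (or reuse) a session with Hive support explicitly enabled
+sparkR.session(enableHiveSupport = TRUE)
+```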
+
+```{r, eval=FALSE}
+sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
+
+txtPath <- paste0(sparkR.conf("spark.home"), "/examples/src/main/resources/kv1.txt")
+sqlCMD <- sprintf("LOAD DATA LOCAL INPATH '%s' INTO TABLE src", txtPath)
+sql(sqlCMD)
+
+results <- sql("FROM src SELECT key, value")
+
+# results is now a SparkDataFrame
+head(results)
+```
+
+
+## Data Processing
+
+**To dplyr users**: SparkR has a similar interface to dplyr for data processing. However, some notable differences are worth mentioning up front. Below, we use `df` to represent a `SparkDataFrame` and `col` to represent the name of a column.
+
+1. Indicating columns. SparkR uses either a character string of the column name or a Column object constructed with `$` to indicate a column. For example, to select `col` in `df`, we can write `select(df, "col")` or `select(df, df$col)`.
+
+2. Describing conditions. In SparkR, the Column object representation can be inserted into a condition directly, or we can use a character string to describe the condition without referring to the `SparkDataFrame`. For example, to select rows where `col` is greater than 1, we can write `filter(df, df$col > 1)` or `filter(df, "col > 1")`.
+
+Here are more concrete examples.
+
+dplyr | SparkR
+-------- | ---------
+`select(mtcars, mpg, hp)` | `select(carsDF, "mpg", "hp")`
+`filter(mtcars, mpg > 20, hp > 100)` | `filter(carsDF, carsDF$mpg > 20, carsDF$hp > 100)`
+
+Other differences are noted in the descriptions of the specific methods below.
+
+We use the `SparkDataFrame` `carsDF` created above. We can get basic
information about the `SparkDataFrame`.
+```{r}
+carsDF
+```
+
+Print out the schema in tree format.
+```{r}
+printSchema(carsDF)
+```
+
+### SparkDataFrame Operations
+
+#### Selecting rows, columns
+
+SparkDataFrames support a number of functions for structured data processing. Here we include some basic examples; a complete list can be found in the [API](https://spark.apache.org/docs/latest/api/R/index.html) docs.
+
+You can pass in column names as strings.
+```{r}
+head(select(carsDF, "mpg"))
+```
+
+Filter the SparkDataFrame to only retain rows with wait times shorter than
50 mins.
+```{r}
+head(filter(carsDF, carsDF$mpg < 20))
--- End diff --
this example doesn't match the description above?