Github user junyangq commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14980#discussion_r78594958
  
    --- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
    @@ -0,0 +1,853 @@
    +---
    +title: "SparkR - Practical Guide"
    +output:
    +  html_document:
    +    theme: united
    +    toc: true
    +    toc_depth: 4
    +    toc_float: true
    +    highlight: textmate
    +---
    +
    +## Overview
    +
    +SparkR is an R package that provides a lightweight frontend to use Apache Spark from R. In Spark 2.0.0, SparkR provides a distributed data frame implementation that supports data processing operations like selection, filtering, and aggregation, as well as distributed machine learning using [MLlib](http://spark.apache.org/mllib/).
    +
    +## Getting Started
    +
    +We begin with an example running on a local machine and provide an overview of the use of SparkR: data ingestion, data processing, and machine learning.
    +
    +First, let's load and attach the package.
    +```{r, message=FALSE}
    +library(SparkR)
    +```
    +
    +`SparkSession` is the entry point into SparkR, which connects your R program to a Spark cluster. You can create a `SparkSession` using `sparkR.session` and pass in options such as the application name, any Spark packages depended on, etc.
    +
    +We use default settings, in which case SparkR runs in local mode. It automatically downloads the Spark package in the background if no previous installation is found. For more details about setup, see [Spark Session](#SetupSparkSession).
    +
    +```{r, message=FALSE, warning=FALSE}
    +sparkR.session()
    +```
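    +
    +Options can also be passed explicitly. As a minimal sketch (the application name and package coordinate below are only illustrative, not required for the rest of this vignette), one could write:
    +
    +```{r, eval=FALSE}
    +# Hypothetical example: name the application and add a Spark package.
    +sparkR.session(appName = "SparkR-vignette-example",
    +               sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")
    +```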
    +
    +The operations in SparkR are centered around an R class called 
`SparkDataFrame`. It is a distributed collection of data organized into named 
columns, which is conceptually equivalent to a table in a relational database 
or a data frame in R, but with richer optimizations under the hood.
    +
    +A `SparkDataFrame` can be constructed from a wide array of sources such as structured data files, tables in Hive, external databases, or existing local R data frames. For example, we can create a `SparkDataFrame` from a local R data frame:
    +
    +```{r}
    +cars <- cbind(model = rownames(mtcars), mtcars)
    +carsDF <- createDataFrame(cars)
    +```
    +
    +We can view the first few rows of the `SparkDataFrame` with the `showDF` or `head` function.
    +```{r}
    +showDF(carsDF)
    +```
    +
    +Common data processing operations such as `filter` and `select` are supported on the `SparkDataFrame`.
    +```{r}
    +carsSubDF <- select(carsDF, "model", "mpg", "hp")
    +carsSubDF <- filter(carsSubDF, carsSubDF$hp >= 200)
    +showDF(carsSubDF)
    +```
    +
    +SparkR can use many common aggregation functions after grouping.
    +
    +```{r}
    +carsGPDF <- summarize(groupBy(carsDF, carsDF$gear), count = n(carsDF$gear))
    +showDF(carsGPDF)
    +```
    +
    +The results `carsDF`, `carsSubDF`, and `carsGPDF` are all `SparkDataFrame` objects. To convert one back to an R `data.frame`, we can use `collect`.
    +```{r}
    +carsGP <- collect(carsGPDF)
    +class(carsGP)
    +```
    +
    +SparkR supports a number of commonly used machine learning algorithms. 
Under the hood, SparkR uses MLlib to train the model. Users can call `summary` 
to print a summary of the fitted model, `predict` to make predictions on new 
data, and `write.ml`/`read.ml` to save/load fitted models.
    +
    +SparkR supports a subset of R formula operators for model fitting, including `~`, `.`, `:`, `+`, and `-`. We use linear regression as an example.
    +```{r}
    +model <- spark.glm(carsDF, mpg ~ wt + cyl)
    +```
    +
    +```{r}
    +summary(model)
    +```
    +
    +The model can be saved by `write.ml` and loaded back using `read.ml`.
    +```{r, eval=FALSE}
    +write.ml(model, path = "/HOME/tmp/mlModel/glmModel")
    +```
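    +
    +As a minimal sketch of the remaining workflow (using the same hypothetical path as above), the saved model can be loaded back and used for prediction:
    +
    +```{r, eval=FALSE}
    +# Load the fitted model from disk and predict on the carsDF data.
    +savedModel <- read.ml(path = "/HOME/tmp/mlModel/glmModel")
    +head(predict(savedModel, carsDF))
    +```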
    +
    +Finally, we can stop the Spark session by running
    +```{r, eval=FALSE}
    +sparkR.session.stop()
    +```
    +
    +## Setup
    +
    +### Installation
    +
    +Unlike many other R packages, SparkR requires an additional installation of Apache Spark. The Spark installation will be used to run a backend process that compiles and executes SparkR programs.
    +
    +If you don't have Spark installed on the computer, you may download it from the [Apache Spark Website](http://spark.apache.org/downloads.html). Alternatively, we provide an easy-to-use function `install.spark` to complete this process.
    +
    +```{r, eval=FALSE}
    +install.spark()
    +```
    +
    +If you already have Spark installed, you don't have to install it again; you can pass the `sparkHome` argument to `sparkR.session` to let SparkR know where the existing Spark installation is.
    +
    +```{r, eval=FALSE}
    +sparkR.session(sparkHome = "/HOME/spark")
    +```
    +
    +### Spark Session {#SetupSparkSession}
    +
    +**For Windows users**: Due to differences in file path prefixes across operating systems, a current workaround to avoid potentially incorrect prefixes is to specify `spark.sql.warehouse.dir` when starting the `SparkSession`.
    +
    +```{r, eval=FALSE}
    +spark_warehouse_path <- file.path(path.expand('~'), "spark-warehouse")
    +sparkR.session(spark.sql.warehouse.dir = spark_warehouse_path)
    +```
    +
    +In addition to `sparkHome`, many other options can be specified in 
`sparkR.session`. For a complete list, see the [SparkR API 
doc](http://spark.apache.org/docs/latest/api/R/sparkR.session.html).
    +
    +In particular, the following Spark driver properties can be set in `sparkConfig`; a short example follows the table.
    +
    +Property Name | Property group | spark-submit equivalent
    +---------------- | ------------------ | ----------------------
    +spark.driver.memory | Application Properties | --driver-memory
    +spark.driver.extraClassPath | Runtime Environment | --driver-class-path
    +spark.driver.extraJavaOptions | Runtime Environment | --driver-java-options
    +spark.driver.extraLibraryPath | Runtime Environment | --driver-library-path
    +
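    +For example, a minimal sketch of increasing the driver memory (the `2g` value is only illustrative):
    +
    +```{r, eval=FALSE}
    +# Set a driver property through sparkConfig when starting the session.
    +sparkR.session(sparkConfig = list(spark.driver.memory = "2g"))
    +```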
    +
    +
    +### Cluster Mode
    +SparkR can connect to remote Spark clusters. [Cluster Mode 
Overview](http://spark.apache.org/docs/latest/cluster-overview.html) is a good 
introduction to different Spark cluster modes.
    +
    +When connecting SparkR to a remote Spark cluster, make sure that the Spark version and Hadoop version on the machine match the corresponding versions on the cluster. The current SparkR package is compatible with
    +```{r, echo=FALSE, tidy = TRUE}
    +paste("Spark", packageVersion("SparkR"))
    +```
    +The same version should be used on both the local computer and the remote cluster.
    +
    +To connect, pass the URL of the master node to `sparkR.session`. A complete list of master URL formats can be found in [Spark Master URLs](http://spark.apache.org/docs/latest/submitting-applications.html#master-urls).
    +For example, to connect to a local standalone Spark master, we can call
    +
    +```{r, eval=FALSE}
    +sparkR.session(master = "spark://local:7077")
    +```
    +
    +For a YARN cluster, SparkR supports client mode with the master set to "yarn".
    +```{r, eval=FALSE}
    +sparkR.session(master = "yarn")
    +```
    +
    +
    +## Data Import
    +
    +### Local Data Frame
    +The simplest way is to convert a local R data frame into a `SparkDataFrame`. Specifically, we can use `as.DataFrame` or `createDataFrame` and pass in the local R data frame to create a `SparkDataFrame`. As an example, the following creates a `SparkDataFrame` based on the `faithful` dataset from R.
    +```{r}
    +df <- as.DataFrame(faithful)
    +head(df)
    +```
    +
    +### Data Sources
    +SparkR supports operating on a variety of data sources through the 
`SparkDataFrame` interface. You can check the Spark SQL programming guide for 
more [specific 
options](https://spark.apache.org/docs/latest/sql-programming-guide.html#manually-specifying-options)
 that are available for the built-in data sources.
    +
    +The general method for creating a `SparkDataFrame` from data sources is `read.df`. This method takes in the path of the file to load and the type of data source, and the currently active Spark session will be used automatically. SparkR supports reading CSV, JSON and Parquet files natively, and through Spark Packages you can find data source connectors for popular file formats like Avro. These packages can be added with the `sparkPackages` parameter when initializing the SparkSession using `sparkR.session`.
    +
    +```{r, eval=FALSE}
    +sparkR.session(sparkPackages = "com.databricks:spark-avro_2.11:3.0.0")
    +```
    +
    +We can see how to use data sources with an example CSV input file, where `csvPath` below is assumed to point to a CSV file on disk. For more information please refer to the SparkR [read.df](https://spark.apache.org/docs/latest/api/R/read.df.html) API documentation.
    +```{r, eval=FALSE}
    +df <- read.df(csvPath, "csv", header = "true", inferSchema = "true", 
na.strings = "NA")
    +```
    +
    +The data sources API natively supports JSON formatted input files. Note 
that the file that is used here is not a typical JSON file. Each line in the 
file must contain a separate, self-contained valid JSON object. As a 
consequence, a regular multi-line JSON file will most often fail.
    +
    +Let's take a look at the first two lines of the raw JSON file used here.
    +
    +```{r}
    +filePath <- paste0(sparkR.conf("spark.home"),
    +                         "/examples/src/main/resources/people.json")
    +readLines(filePath, n = 2L)
    +```
    +
    +We use `read.df` to read that into a `SparkDataFrame`.
    +
    +```{r}
    +people <- read.df(filePath, "json")
    +count(people)
    +head(people)
    +```
    +
    +SparkR automatically infers the schema from the JSON file.
    +```{r}
    +printSchema(people)
    +```
    +
    +If we want to read multiple JSON files, `read.json` can be used.
    +```{r}
    +people <- read.json(paste0(Sys.getenv("SPARK_HOME"),
    +                           c("/examples/src/main/resources/people.json",
    +                             "/examples/src/main/resources/people.json")))
    +count(people)
    +```
    +
    +The data sources API can also be used to save out `SparkDataFrames` into 
multiple file formats. For example we can save the `SparkDataFrame` from the 
previous example to a Parquet file using `write.df`.
    +```{r, eval=FALSE}
    +write.df(people, path = "people.parquet", source = "parquet", mode = 
"overwrite")
    +```
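    +
    +To read the saved file back, a minimal sketch (assuming the `people.parquet` path written above):
    +
    +```{r, eval=FALSE}
    +# Read the Parquet output back into a SparkDataFrame.
    +peopleParquet <- read.df("people.parquet", "parquet")
    +head(peopleParquet)
    +```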
    +
    +### Hive Tables
    +You can also create `SparkDataFrame`s from Hive tables. To do this we need to create a SparkSession with Hive support, which can access tables in the Hive MetaStore. Note that Spark should have been built with Hive support; more details can be found in the [SQL programming guide](https://spark.apache.org/docs/latest/sql-programming-guide.html). By default, SparkR will attempt to create a SparkSession with Hive support enabled (`enableHiveSupport = TRUE`).
    +
    +```{r, eval=FALSE}
    +sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
    +
    +txtPath <- paste0(sparkR.conf("spark.home"), 
"/examples/src/main/resources/kv1.txt")
    +sqlCMD <- sprintf("LOAD DATA LOCAL INPATH '%s' INTO TABLE src", txtPath)
    +sql(sqlCMD)
    +
    +results <- sql("FROM src SELECT key, value")
    +
    +# results is now a SparkDataFrame
    +head(results)
    +```
    +
    +
    +## Data Processing
    +
    +**To dplyr users**: SparkR has a similar interface to dplyr for data processing. However, some noticeable differences are worth mentioning up front. We use `df` to represent a `SparkDataFrame` and `col` to represent the name of a column here.
    +
    +1. Indicating columns. SparkR uses either a character string of the column name or a Column object constructed with `$` to indicate a column. For example, to select `col` in `df`, we can write `select(df, "col")` or `select(df, df$col)`.
    +
    +2. Describing conditions. In SparkR, the Column object representation can be inserted into the condition directly, or we can use a character string to describe the condition, without referring to the `SparkDataFrame` used. For example, to select rows with value > 1, we can write `filter(df, df$col > 1)` or `filter(df, "col > 1")`.
    +
    +Here are more concrete examples.
    +
    +dplyr | SparkR
    +-------- | ---------
    +`select(mtcars, mpg, hp)` | `select(carsDF, "mpg", "hp")`
    +`filter(mtcars, mpg > 20, hp > 100)` | `filter(carsDF, carsDF$mpg > 20, 
carsDF$hp > 100)`
    +
    +Other differences will be mentioned in the specific methods.
    +
    +We use the `SparkDataFrame` `carsDF` created above. We can get basic 
information about the `SparkDataFrame`.
    +```{r}
    +carsDF
    +```
    +
    +Print out the schema in tree format.
    +```{r}
    +printSchema(carsDF)
    +```
    +
    +### SparkDataFrame Operations
    +
    +#### Selecting rows, columns
    +
    +SparkDataFrames support a number of functions to do structured data 
processing. Here we include some basic examples and a complete list can be 
found in the [API](https://spark.apache.org/docs/latest/api/R/index.html) docs:
    +
    +You can pass in column names as strings to select columns.
    +```{r}
    +head(select(carsDF, "mpg"))
    +```
    +
    +Filter the SparkDataFrame to only retain rows with `mpg` less than 20.
    +```{r}
    +head(filter(carsDF, carsDF$mpg < 20))
    +```
    +
    +#### Grouping, Aggregation
    +
    +A common flow of grouping and aggregation is
    +
    +1. Use `groupBy` or `group_by` with respect to some grouping variables to 
create a `GroupedData` object
    +
    +2. Feed the `GroupedData` object to `agg` or `summarize` functions, with 
some provided aggregation functions to compute a number within each group.
    +
    +A number of widely used functions are supported to aggregate data after 
grouping, including `avg`, `countDistinct`, `count`, `first`, `kurtosis`, 
`last`, `max`, `mean`, `min`, `sd`, `skewness`, `stddev_pop`, `stddev_samp`, 
`sumDistinct`, `sum`, `var_pop`, `var_samp`, `var`.
    +
    +For example we can compute a histogram of the number of cylinders in the 
`mtcars` dataset as shown below.
    +
    +```{r}
    +numCyl <- summarize(groupBy(carsDF, carsDF$cyl), count = n(carsDF$cyl))
    +head(numCyl)
    +```
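    +
    +Several of the aggregation functions listed above can be combined in a single call. A brief sketch (the result column names on the left are arbitrary):
    +
    +```{r}
    +# Average mpg and maximum hp for each number of cylinders.
    +head(summarize(groupBy(carsDF, carsDF$cyl),
    +               avg_mpg = avg(carsDF$mpg),
    +               max_hp = max(carsDF$hp)))
    +```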
    +
    +#### Operating on Columns
    +
    +SparkR also provides a number of functions that can be directly applied to columns for data processing and during aggregation. The example below shows the use of basic arithmetic functions.
    +
    +```{r}
    +carsDF_km <- carsDF
    +carsDF_km$kmpg <- carsDF_km$mpg * 1.61
    +head(select(carsDF_km, "model", "mpg", "kmpg"))
    +```
    +
    +
    +### Window Functions
    +A window function is a variation of an aggregation function. In simple terms,
    +
    +* aggregation function: `n` to `1` mapping - returns a single value for a 
group of entries. Examples include `sum`, `count`, `max`.
    +
    +* window function: `n` to `n` mapping - returns one value for each entry 
in the group, but the value may depend on all the entries of the *group*. 
Examples include `rank`, `lead`, `lag`.
    +
    +Formally, the *group* mentioned above is called the *frame*. Every input 
row can have a unique frame associated with it and the output of the window 
function on that row is based on the rows confined in that frame.
    +
    +Window functions are often used in conjunction with the following 
functions: `windowPartitionBy`, `windowOrderBy`, `partitionBy`, `orderBy`, 
`over`. To illustrate this we next look at an example.
    +
    +We still use the `mtcars` dataset. The corresponding `SparkDataFrame` is 
`carsDF`. Suppose for each number of cylinders, we want to calculate the rank 
of each car in `mpg` within the group.
    +```{r}
    +carsSubDF <- select(carsDF, "model", "mpg", "cyl")
    +ws <- orderBy(windowPartitionBy("cyl"), "mpg")
    +carsRank <- withColumn(carsSubDF, "rank", over(rank(), ws))
    +showDF(carsRank)
    +```
    +
    +We explain the steps above in detail.
    +
    +* `windowPartitionBy` creates a window specification object `WindowSpec` 
that defines the partition. It controls which rows will be in the same 
partition as the given row. In this case, rows with the same value in `cyl` 
will be put in the same partition. `orderBy` further defines the ordering - the 
position a given row is in the partition. The resulting `WindowSpec` is 
returned as `ws`.
    +
    +More window specification methods include `rangeBetween`, which can define the boundaries of the frame by value, and `rowsBetween`, which can define the boundaries by row indices; a small sketch using `rowsBetween` follows these bullet points.
    +
    +* `withColumn` appends a Column called `"rank"` to the `SparkDataFrame`. `over` returns a windowing column. The first argument is usually a Column returned by a window function such as `rank()` or `lead(carsDF$wt)`, which calculates the corresponding values according to the partitioned-and-ordered table.
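    +
    +As a small sketch of `rowsBetween`, the window below averages `mpg` over the current row and its immediate neighbors within each `cyl` partition (the variable and column names here are illustrative):
    +
    +```{r}
    +# Restrict the frame to the previous, current, and next row by index.
    +wsRows <- rowsBetween(orderBy(windowPartitionBy("cyl"), "mpg"), -1, 1)
    +carsAvg <- withColumn(carsSubDF, "avg_mpg_nearby", over(avg(carsSubDF$mpg), wsRows))
    +head(carsAvg)
    +```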
    +
    +### User-Defined Function
    +
    +In SparkR, we support several kinds of user-defined functions (UDFs).
    +
    +#### Apply by Partition
    +
    +`dapply` can apply a function to each partition of a `SparkDataFrame`. The function to be applied to each partition should have only one parameter, a `data.frame` corresponding to that partition, and the output should be a `data.frame` as well. The schema specifies the row format of the resulting `SparkDataFrame` and must match the data types of the returned value. See [here](#DataTypes) for the mapping between R and Spark data types.
    +
    +We convert `mpg` to `kmpg` (kilometers per gallon). `carsSubDF` is a 
`SparkDataFrame` with a subset of `carsDF` columns.
    +
    +```{r}
    +carsSubDF <- select(carsDF, "model", "mpg")
    +schema <- structType(structField("model", "string"), structField("mpg", 
"double"),
    +                     structField("kmpg", "double"))
    +out <- dapply(carsSubDF, function(x) { x <- cbind(x, x$mpg * 1.61) }, 
schema)
    +head(collect(out))
    +```
    +
    +Like `dapply`, `dapplyCollect` applies a function to each partition of a `SparkDataFrame` and collects the result back. The output of the function should be a `data.frame`, but no schema is required in this case. Note that `dapplyCollect` can fail if the output of the UDF on all partitions cannot be pulled to the driver and fit in driver memory.
    +
    +```{r}
    +out <- dapplyCollect(
    +         carsSubDF,
    +         function(x) {
    +           x <- cbind(x, "kmpg" = x$mpg * 1.61)
    +         })
    +head(out, 3)
    +```
    +
    +#### Apply by Group
    +`gapply` can apply a function to each group of a `SparkDataFrame`. The function is applied to each group and should have only two parameters: the grouping key and an R `data.frame` corresponding to that key. The groups are chosen from the `SparkDataFrame`'s column(s). The output of the function should be a `data.frame`. The schema specifies the row format of the resulting `SparkDataFrame` and must represent the R function's output schema on the basis of Spark data types. The column names of the returned `data.frame` are set by the user. See [here](#DataTypes) for the mapping between R and Spark data types.
    +
    +```{r}
    +schema <- structType(structField("cyl", "double"), structField("max_mpg", 
"double"))
    +result <- gapply(
    +    carsDF,
    +    "cyl",
    +    function(key, x) {
    +        y <- data.frame(key, max(x$mpg))
    +    },
    +    schema)
    +head(arrange(result, "max_mpg", decreasing = TRUE))
    +```
    +
    +Like `gapply`, `gapplyCollect` applies a function to each group of a `SparkDataFrame` and collects the result back into an R `data.frame`. The output of the function should be a `data.frame`, but no schema is required in this case. Note that `gapplyCollect` can fail if the output of the UDF on all groups cannot be pulled to the driver and fit in driver memory.
    +
    +```{r}
    +result <- gapplyCollect(
    +    carsDF,
    +    "cyl",
    +    function(key, x) {
    +         y <- data.frame(key, max(x$mpg))
    +        colnames(y) <- c("cyl", "max_mpg")
    +        y
    +    })
    +head(result[order(result$max_mpg, decreasing = TRUE), ])
    +```
    +
    +#### Distribute Local Functions
    +
    +Similar to `lapply` in native R, `spark.lapply` runs a function over a list of elements and distributes the computation with Spark. It works in a manner similar to `doParallel` or `lapply` applied to the elements of a list. The results of all the computations should fit on a single machine. If that is not the case, you can do something like `df <- createDataFrame(list)` and then use `dapply`.
    +
    +```{r}
    +families <- c("gaussian", "poisson")
    +train <- function(family) {
    +  model <- glm(mpg ~ hp, mtcars, family = family)
    +  summary(model)
    +}
    +```
    +
    +This returns a list of the models' summaries.
    +```{r}
    +model.summaries <- spark.lapply(families, train)
    --- End diff --
    
    I will use svm with different constraints, which may be familiar to a 
larger audience.

