Github user NarineK commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8869#discussion_r40581218
  
    --- Diff: R/pkg/R/DataFrameStatFunctions.R ---
    @@ -0,0 +1,102 @@
    +#
    +# Licensed to the Apache Software Foundation (ASF) under one or more
    +# contributor license agreements.  See the NOTICE file distributed with
    +# this work for additional information regarding copyright ownership.
    +# The ASF licenses this file to You under the Apache License, Version 2.0
    +# (the "License"); you may not use this file except in compliance with
    +# the License.  You may obtain a copy of the License at
    +#
    +#    http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing, software
    +# distributed under the License is distributed on an "AS IS" BASIS,
    +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    +# See the License for the specific language governing permissions and
    +# limitations under the License.
    +#
    +
    +# DataFrameStatFunctions.R - Statistic functions for DataFrames.
    +
    +setOldClass("jobj")
    +
    +#' crosstab
    +#'
    +#' Computes a pair-wise frequency table of the given columns. Also known as a contingency
    +#' table. The number of distinct values for each column should be less than 1e4. At most 1e6
    +#' non-zero pair frequencies will be returned.
    +#'
    +#' @param col1 name of the first column. Distinct items will make the first item of each row.
    +#' @param col2 name of the second column. Distinct items will make the column names of the output.
    +#' @return a local R data.frame representing the contingency table. The first column of each row
    +#'         will be the distinct values of `col1` and the column names will be the distinct values
    +#'         of `col2`. The name of the first column will be `$col1_$col2`. Pairs that have no
    +#'         occurrences will have zero as their counts.
    +#'
    +#' @rdname statfunctions
    +#' @name crosstab
    +#' @export
    +#' @examples
    +#' \dontrun{
    +#' df <- jsonFile(sqlCtx, "/path/to/file.json")
    +#' ct <- crosstab(df, "title", "gender")
    +#' }
    +setMethod("crosstab",
    +          signature(x = "DataFrame", col1 = "character", col2 = "character"),
    +          function(x, col1, col2) {
    +            statFunctions <- callJMethod(x@sdf, "stat")
    +            sct <- callJMethod(statFunctions, "crosstab", col1, col2)
    +            collect(dataFrame(sct))
    +          })
    +
    +#' cov
    +#'
    +#' Calculate the sample covariance of two numerical columns of a DataFrame.
    +#'
    +#' @param x A SparkSQL DataFrame
    +#' @param col1 the name of the first column
    +#' @param col2 the name of the second column
    +#' @return the covariance of the two columns.
    +#'
    +#' @rdname statfunctions
    +#' @name cov
    +#' @export
    +#' @examples
    +#'\dontrun{
    +#' df <- jsonFile(sqlCtx, "/path/to/file.json")
    +#' cov <- cov(df, "title", "gender")
    +#' }
    +setMethod("cov",
    +          signature(x = "DataFrame", col1 = "character", col2 = "character"),
    --- End diff --
    
    Hi there, 
    I have some points about correlation and covariance.
    1. R calls the method 'cor' and not 'corr', so if we want to keep the same syntax as R, we might want to use 'cor'.
    2. The actual signature of cor (cov has a similar one) is: cor(x, y = NULL, use = "everything", method = c("pearson", "kendall", "spearman")),
    where x is a data frame and y can be another data frame, a vector, or a matrix,
    and in R I can get something like this:
    cor(longley)
                 GNP.deflator       GNP   Unemployed .... 
    GNP.deflator    1.0000000 0.9915892
    GNP             0.9915892 1.0000000
    Unemployed      0.6206334 0.6042609
    Armed.Forces    0.4647442 0.4464368
    Population      0.9791634 0.9910901
    Year            0.9911492 0.9952735
    Employed        0.9708985 0.9835516
    
    I wonder if we can get this in SparkR too.
    I see at least two options here:
    1. make K calls to the DataFrame API, one for each column pair, or
    2. extend the Scala DataFrame API so that it also accepts a list of columns ...
    I can help you with this if you think it makes sense and we want to add it.
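    For illustration, option 1 could look something like the sketch below. The `corMatrix` helper and the injected `pairCor` function are hypothetical, not part of this PR or of SparkR; `pairCor` stands in for whatever pairwise method ends up being exposed (e.g. the `cov`/`corr` methods under discussion here):

    ```r
    # Hypothetical sketch of option 1: build a full correlation matrix by
    # calling a pairwise function once per column pair. With a SparkR
    # DataFrame each pair would cost one Spark job.
    corMatrix <- function(df, cols, pairCor) {
      n <- length(cols)
      # diagonal is 1 by definition for correlations
      m <- matrix(1, n, n, dimnames = list(cols, cols))
      for (i in seq_len(n)) {
        for (j in seq_len(n)) {
          if (i < j) {
            # the matrix is symmetric, so fill both cells from one call
            m[i, j] <- m[j, i] <- pairCor(df, cols[i], cols[j])
          }
        }
      }
      m
    }

    # usage sketch against a local data.frame, mimicking cor(longley):
    # corMatrix(longley, names(longley),
    #           function(d, a, b) cor(d[[a]], d[[b]]))
    ```

    The same helper would work unchanged against a SparkR DataFrame once a pairwise method exists, e.g. `function(d, a, b) corr(d, a, b)`, at the cost of K*(K-1)/2 jobs for K columns.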
    
    Thanks,
    Narine


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
