[ https://issues.apache.org/jira/browse/SPARK-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952432#comment-14952432 ]
Shivaram Venkataraman commented on SPARK-11057:
-----------------------------------------------

[~Narine] Does this feature exist in Scala? If not, we should implement it in Scala first and then call it from SparkR. cc [~rxin]

> SparkSQL: corr and cov for many columns
> ---------------------------------------
>
>                 Key: SPARK-11057
>                 URL: https://issues.apache.org/jira/browse/SPARK-11057
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: Narine Kokhlikyan
>
> Hi there,
>
> As we know, R can calculate the correlation and covariance for all
> columns of a data frame, or between the columns of two data frames.
>
> The Apache Commons Math package offers this as well:
> http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/stat/correlation/PearsonsCorrelation.html#computeCorrelationMatrix%28org.apache.commons.math3.linear.RealMatrix%29
>
> In case we have only one DataFrame as input:
> ------------------------------------------------------
> For correlation:
> cor[i,j] = cor[j,i]
> and the main diagonal can be all 1s.
> ---------------------
> For covariance:
> cov[i,j] = cov[j,i]
> and for the main diagonal we can compute the variance of that specific column.
> See:
> http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/stat/correlation/Covariance.html#computeCovarianceMatrix%28org.apache.commons.math3.linear.RealMatrix%29
>
> Let me know what you think.
> I'm working on this and will make a pull request soon.
>
> Thanks,
> Narine

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
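The symmetry and diagonal properties described in the issue can be sketched in plain Python. This is only an illustration of the math (not Spark or SparkR code, and the function names are hypothetical): the covariance matrix is symmetric with the per-column variance on the diagonal, and the correlation matrix is symmetric with 1s on the diagonal.

```python
from statistics import mean

def cov_matrix(cols):
    """Sample covariance matrix for a list of equal-length numeric columns."""
    n = len(cols[0])
    k = len(cols)
    means = [mean(c) for c in cols]
    m = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i, k):  # compute the upper triangle only
            c = sum((cols[i][t] - means[i]) * (cols[j][t] - means[j])
                    for t in range(n)) / (n - 1)
            m[i][j] = m[j][i] = c  # symmetric: cov[i,j] == cov[j,i]
    return m  # diagonal m[i][i] is the variance of column i

def corr_matrix(cols):
    """Pearson correlation matrix derived from the covariance matrix."""
    c = cov_matrix(cols)
    k = len(cols)
    # corr[i,j] = cov[i,j] / (std_i * std_j); diagonal comes out as 1.0
    return [[c[i][j] / (c[i][i] ** 0.5 * c[j][j] ** 0.5)
             for j in range(k)] for i in range(k)]
```

Because the matrices are symmetric, an implementation only needs to compute the upper (or lower) triangle, roughly halving the number of pairwise aggregations over the DataFrame.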