[jira] [Commented] (SPARK-12372) Document limitations of MLlib local linear algebra

2015-12-18 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064476#comment-15064476
 ] 

Christos Iraklis Tsatsoulis commented on SPARK-12372:
-

You are very welcome

> Document limitations of MLlib local linear algebra
> --
>
> Key: SPARK-12372
> URL: https://issues.apache.org/jira/browse/SPARK-12372
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, MLlib
>Affects Versions: 1.5.2
>Reporter: Christos Iraklis Tsatsoulis
>
> This JIRA is now for documenting limitations of MLlib's local linear algebra 
> types.  Basically, we should make it clear in the user guide that they 
> provide simple functionality but are not a full-fledged local linear algebra 
> library. We should also recommend libraries for users to use in the meantime: 
> probably Breeze for Scala (and Java?) and numpy/scipy for Python.
> *Original JIRA title*: Unary operator "-" fails for MLlib vectors
> *Original JIRA text, as an example of the need for better docs*:
> Consider the following snippet in pyspark 1.5.2:
> {code:none}
> >>> from pyspark.mllib.linalg import Vectors
> >>> x = Vectors.dense([0.0, 1.0, 0.0, 7.0, 0.0])
> >>> x
> DenseVector([0.0, 1.0, 0.0, 7.0, 0.0])
> >>> -x
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: func() takes exactly 2 arguments (1 given)
> >>> y = Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0])
> >>> y
> DenseVector([2.0, 0.0, 3.0, 4.0, 5.0])
> >>> x-y
> DenseVector([-2.0, 1.0, -3.0, 3.0, -5.0])
> >>> -y+x
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: func() takes exactly 2 arguments (1 given)
> >>> -1*x
> DenseVector([-0.0, -1.0, -0.0, -7.0, -0.0])
> {code}
> Clearly, the unary operator {{-}} (minus) for vectors fails, giving errors 
> for expressions like {{-x}} and {{-y+x}}, despite the fact that {{x-y}} 
> behaves as expected.
> The last operation, {{-1*x}}, although mathematically "correct", includes 
> minus signs for the zero entries, which again is normally not expected.
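As a stopgap along the lines the description suggests, local MLlib vectors can be 
converted to numpy arrays and handled there; a minimal pyspark sketch (assuming 
small, local vectors for which the round trip via toArray() is acceptable):

{code:none}
import numpy as np
from pyspark.mllib.linalg import Vectors

x = Vectors.dense([0.0, 1.0, 0.0, 7.0, 0.0])
y = Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0])

# Do the local linear algebra in numpy, where unary minus is supported
# (note that numpy, too, prints -0.0 for negated zero entries).
xa, ya = x.toArray(), y.toArray()
z = -ya + xa                        # array([-2.,  1., -3.,  3., -5.])

# Convert back to an MLlib vector if one is needed downstream.
z_vec = Vectors.dense(z.tolist())
{code}

Breeze would play the analogous role on the Scala side, per the recommendation above.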






[jira] [Commented] (SPARK-12372) Unary operator "-" fails for MLlib vectors

2015-12-16 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060858#comment-15060858
 ] 

Christos Iraklis Tsatsoulis commented on SPARK-12372:
-

If this is the case, then a warning/clarification in the documentation wouldn't 
hurt - Spark users cannot be expected to know about the internal "ongoing 
discussions" between Spark developers (BTW, any relevant link would be very 
welcome - I could not find any mention in the MLlib & Breeze docs, nor in the 
recent preprint papers on linalg & MLlib).
All in all, I suggest re-opening the issue with a different type (it's not a 
bug, as you say), with the required resolution being a note in the relevant 
docs ("don't try this..., because...").

> Unary operator "-" fails for MLlib vectors
> --
>
> Key: SPARK-12372
> URL: https://issues.apache.org/jira/browse/SPARK-12372
> Project: Spark
>  Issue Type: Bug
>  Components: MLlib, PySpark
>Affects Versions: 1.5.2
>Reporter: Christos Iraklis Tsatsoulis
>
> Consider the following snippet in pyspark 1.5.2:
> {code:none}
> >>> from pyspark.mllib.linalg import Vectors
> >>> x = Vectors.dense([0.0, 1.0, 0.0, 7.0, 0.0])
> >>> x
> DenseVector([0.0, 1.0, 0.0, 7.0, 0.0])
> >>> -x
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: func() takes exactly 2 arguments (1 given)
> >>> y = Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0])
> >>> y
> DenseVector([2.0, 0.0, 3.0, 4.0, 5.0])
> >>> x-y
> DenseVector([-2.0, 1.0, -3.0, 3.0, -5.0])
> >>> -y+x
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: func() takes exactly 2 arguments (1 given)
> >>> -1*x
> DenseVector([-0.0, -1.0, -0.0, -7.0, -0.0])
> {code}
> Clearly, the unary operator {{-}} (minus) for vectors fails, giving errors 
> for expressions like {{-x}} and {{-y+x}}, despite the fact that {{x-y}} 
> behaves as expected.
> The last operation, {{-1*x}}, although mathematically "correct", includes 
> minus signs for the zero entries, which again is normally not expected.






[jira] [Created] (SPARK-12372) Unary operator "-" fails for MLlib vectors

2015-12-16 Thread Christos Iraklis Tsatsoulis (JIRA)
Christos Iraklis Tsatsoulis created SPARK-12372:
---

 Summary: Unary operator "-" fails for MLlib vectors
 Key: SPARK-12372
 URL: https://issues.apache.org/jira/browse/SPARK-12372
 Project: Spark
  Issue Type: Bug
  Components: MLlib, PySpark
Affects Versions: 1.5.2
Reporter: Christos Iraklis Tsatsoulis


Consider the following snippet in pyspark 1.5.2:

{code:none}
>>> from pyspark.mllib.linalg import Vectors
>>> x = Vectors.dense([0.0, 1.0, 0.0, 7.0, 0.0])
>>> x
DenseVector([0.0, 1.0, 0.0, 7.0, 0.0])
>>> -x
Traceback (most recent call last):
  File "", line 1, in 
TypeError: func() takes exactly 2 arguments (1 given)
>>> y = Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0])
>>> y
DenseVector([2.0, 0.0, 3.0, 4.0, 5.0])
>>> x-y
DenseVector([-2.0, 1.0, -3.0, 3.0, -5.0])
>>> -y+x
Traceback (most recent call last):
  File "", line 1, in 
TypeError: func() takes exactly 2 arguments (1 given)
>>> -1*x
DenseVector([-0.0, -1.0, -0.0, -7.0, -0.0])
{code}

Clearly, the unary operator {{-}} (minus) for vectors fails, giving errors for 
expressions like {{-x}} and {{-y+x}}, despite the fact that {{x-y}} behaves as 
expected.
The last operation, {{-1*x}}, although mathematically "correct", includes minus 
signs for the zero entries, which again is normally not expected.






[jira] [Updated] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christos Iraklis Tsatsoulis updated SPARK-11530:

Component/s: MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/
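Since only the eigenvalues of the covariance matrix are needed, the 
explained-variance calculation itself is simple once they are exposed; a minimal 
numpy sketch of the math on a toy matrix (purely illustrative, not the Spark ML API):

{code:none}
import numpy as np

# Toy data matrix: rows are observations, columns are features.
X = np.array([[2.0, 0.0, 3.0, 4.0, 5.0],
              [0.0, 1.0, 0.0, 7.0, 0.0],
              [1.0, 2.0, 1.0, 3.0, 2.0],
              [4.0, 1.0, 0.0, 1.0, 1.0],
              [3.0, 5.0, 2.0, 0.0, 4.0],
              [1.0, 1.0, 6.0, 2.0, 3.0]])

# Eigenvalues of the (symmetric) covariance matrix, sorted in decreasing order.
eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]

# Proportion of variance explained by the top k principal components.
k = 2
explained = eigvals[:k].sum() / eigvals.sum()
print(explained)
{code}

Returning these eigenvalues alongside the principal components would let users 
compute the same ratio directly from a fitted PCA model.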






[jira] [Comment Edited] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996533#comment-14996533
 ] 

Christos Iraklis Tsatsoulis edited comment on SPARK-11530 at 11/9/15 1:40 PM:
--

I edited it to target both; there are `PCA.scala` scripts for both ML & MLLib, 
but since I am using it via PySpark, where it is available only via ML, I 
initially omitted MLlib


was (Author: ctsats):
I edited it to target both; there are PCA.scala scripts for both 
ML & MLLib, but since I am using it via PySpark, where it is available only via 
ML, I initially omitted MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Comment Edited] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996533#comment-14996533
 ] 

Christos Iraklis Tsatsoulis edited comment on SPARK-11530 at 11/9/15 1:45 PM:
--

I edited it to target both; there are PCA.scala scripts for both ML & MLLib, 
but since I am using it via PySpark, where it is available only via ML, I 
initially omitted MLlib


was (Author: ctsats):
I edited it to target both; there are `PCA.scala` scripts for both ML & MLLib, 
but since I am using it via PySpark, where it is available only via ML, I 
initially omitted MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Comment Edited] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996533#comment-14996533
 ] 

Christos Iraklis Tsatsoulis edited comment on SPARK-11530 at 11/9/15 1:50 PM:
--

I edited it to target both; there are PCA.scala scripts for both ML & MLlib, 
but since I am using it via PySpark, where it is available only via ML, I 
initially omitted MLlib.


was (Author: ctsats):
I edited it to target both; there are PCA.scala scripts for both ML & MLLib, 
but since I am using it via PySpark, where it is available only via ML, I 
initially omitted MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Commented] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996533#comment-14996533
 ] 

Christos Iraklis Tsatsoulis commented on SPARK-11530:
-

I edited it to target both; there are ``PCA.scala`` scripts for both ML & 
MLLib, but since I am using it via PySpark, where it is available only via ML, 
I initially omitted MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Comment Edited] (SPARK-11530) Return eigenvalues with PCA model

2015-11-09 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996533#comment-14996533
 ] 

Christos Iraklis Tsatsoulis edited comment on SPARK-11530 at 11/9/15 1:37 PM:
--

I edited it to target both; there are PCA.scala scripts for both 
ML & MLLib, but since I am using it via PySpark, where it is available only via 
ML, I initially omitted MLlib


was (Author: ctsats):
I edited it to target both; there are ``PCA.scala`` scripts for both ML & 
MLLib, but since I am using it via PySpark, where it is available only via ML, 
I initially omitted MLlib

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML, MLlib
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Updated] (SPARK-11530) Return eigenvalues with PCA model

2015-11-06 Thread Christos Iraklis Tsatsoulis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christos Iraklis Tsatsoulis updated SPARK-11530:

Description: 
For data scientists & statisticians, PCA is of little use if they cannot 
estimate the _proportion of variance explained_ by selecting _k_ principal 
components (see here for the math details: 
https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
'Explained variance'). To estimate this, one only needs the eigenvalues of the 
covariance matrix.
Although the eigenvalues are currently computed during PCA model fitting, they 
are not _returned_; hence, as it stands now, PCA in Spark ML is of extremely 
limited practical use.
For details, see these SO questions
http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
 (pyspark)

http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)

and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/


  was:
For data scientists & statisticians, PCA is of little use if they cannot 
estimate the _proportion of variance explained_ by selecting _k_ principal 
components (see here for the math details: 
https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
'Explained variance'). To estimate this, one only needs the eigenvalues of the 
covariance matrix.
Although the eigenvalues are currently computed during PCA model fitting, they 
are not _returned_; hence, as it stands now, PCA in Spark ML is of extremely 
limited practical use.
See this SO question 
http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
 

and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/

for details.


> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Commented] (SPARK-11530) Return eigenvalues with PCA model

2015-11-06 Thread Christos Iraklis Tsatsoulis (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14994136#comment-14994136
 ] 

Christos Iraklis Tsatsoulis commented on SPARK-11530:
-

Thanks Sean. Unfortunately, I don't speak Scala - I'm actually a data scientist 
and not a developer. I thought it was worth raising the issue, even if I cannot 
resolve it myself. Hope that's OK...

> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> For details, see these SO questions
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  (pyspark)
> http://stackoverflow.com/questions/33559599/spark-pca-top-components (Scala)
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/






[jira] [Updated] (SPARK-11530) Return eigenvalues with PCA model

2015-11-05 Thread Christos Iraklis Tsatsoulis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christos Iraklis Tsatsoulis updated SPARK-11530:

Description: 
For data scientists & statisticians, PCA is of little use if they cannot 
estimate the _proportion of variance explained_ by selecting _k_ principal 
components (see here for the math details: 
https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
'Explained variance'). To estimate this, one only needs the eigenvalues of the 
covariance matrix.
Although the eigenvalues are currently computed during PCA model fitting, they 
are not _returned_; hence, as it stands now, PCA in Spark ML is of extremely 
limited practical use.
See this SO question 
http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
 

and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/

for details.

  was:
For data scientists & statisticians, PCA is of little use if they cannot 
estimate the _proportion of variance explained_ by selecting _k_ principal 
components (see here for the math details: 
https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
'Explained variance'). To estimate this, one only needs the eigenvalues of the 
covariance matrix.
Although the eigenvalues are currently computed during PCA model fitting, they 
are not _returned_; hence, as it stands now, PCA in Spark ML is of extremely 
limited practical use.
See this SO question 
http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/)
 

and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/

for details.


> Return eigenvalues with PCA model
> -
>
> Key: SPARK-11530
> URL: https://issues.apache.org/jira/browse/SPARK-11530
> Project: Spark
>  Issue Type: Improvement
>  Components: ML
>Affects Versions: 1.5.1
>Reporter: Christos Iraklis Tsatsoulis
>
> For data scientists & statisticians, PCA is of little use if they cannot 
> estimate the _proportion of variance explained_ by selecting _k_ principal 
> components (see here for the math details: 
> https://inst.eecs.berkeley.edu/~ee127a/book/login/l_sym_pca.html , section 
> 'Explained variance'). To estimate this, one only needs the eigenvalues of 
> the covariance matrix.
> Although the eigenvalues are currently computed during PCA model fitting, 
> they are not _returned_; hence, as it stands now, PCA in Spark ML is of 
> extremely limited practical use.
> See this SO question 
> http://stackoverflow.com/questions/33428589/pyspark-and-pca-how-can-i-extract-the-eigenvectors-of-this-pca-how-can-i-calcu/
>  
> and this blog post http://www.nodalpoint.com/pca-in-spark-1-5/
> for details.


