Re: [scikit-learn] Does NMF optimise over observed values

2016-08-29 Thread Raphael C
On Monday, August 29, 2016, Andreas Mueller wrote: > On 08/28/2016 01:16 PM, Raphael C wrote: > On Sunday, August 28, 2016, Andy wrote: > On 08/28/2016 12:29 PM, Raphael C wrote: > To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/m

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-29 Thread Tom DLT
If X is sparse, explicit zeros and missing-value zeros are **both** treated as zeros in the objective function. Changing the objective function wouldn't need a new interface, but I am not sure the code change would be completely trivial. The question is: do we want this new objective function
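
A minimal NumPy sketch of what such an "observed values only" objective could look like next to the current one (regularization terms ignored; the toy X, W and H below are made up for illustration). The mask is read straight off the stored entries of the sparse X, which is why no new interface would be needed:

import numpy as np
import scipy.sparse as sp

# Toy problem: a sparse X and arbitrary candidate factors W, H.
X = sp.csr_matrix(np.array([[5., 3., 0., 1.],
                            [4., 0., 0., 1.],
                            [1., 1., 0., 5.],
                            [1., 0., 0., 4.]]))
rng = np.random.RandomState(0)
W, H = rng.rand(4, 2), rng.rand(2, 4)

# Current behaviour: every entry of X counts, whether it is stored or not.
current = 0.5 * np.linalg.norm(X.toarray() - W @ H, 'fro') ** 2

# Hypothetical masked objective: only the entries actually stored in X
# contribute to the residual.
rows, cols = X.nonzero()
masked = 0.5 * np.sum((X.toarray()[rows, cols] - (W @ H)[rows, cols]) ** 2)

print(current, masked)  # the two losses generally differ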

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-29 Thread Andreas Mueller
On 08/28/2016 01:16 PM, Raphael C wrote: On Sunday, August 28, 2016, Andy wrote: On 08/28/2016 12:29 PM, Raphael C wrote: To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-an

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
On Sunday, August 28, 2016, Andy wrote: > On 08/28/2016 12:29 PM, Raphael C wrote: > To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ where it explains: > "A question might ha

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Andy
On 08/28/2016 12:29 PM, Raphael C wrote: To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ where it explains: " A question might have come to your mind by now: if we find two matrices \ma

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ where it explains: "A question might have come to your mind by now: if we find two matrices \mathbf{P} and \mathbf{Q} such th

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
Thank you for the quick reply. Just to make sure I understand: if X is sparse and n by n with X[0,0] = 1 and X[n-1, n-1] = 0 explicitly set (that is, only two values are set in X), then this is treated, for the purposes of the objective function, the same as the all-zeros n by n matrix with X[0,0] set to
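
If so, a quick check along these lines should confirm it (toy size n = 4 and the deterministic 'nndsvda' init are my own choices, so that the two fits are directly comparable):

import numpy as np
import scipy.sparse as sp
from sklearn.decomposition import NMF

n = 4
# Only X[0, 0] = 1 is stored.
X_implicit = sp.csr_matrix(([1.0], ([0], [0])), shape=(n, n))
# Same values, but with X[n-1, n-1] = 0 kept as an explicitly stored zero.
X_explicit = sp.csr_matrix(([1.0, 0.0], ([0, n - 1], [0, n - 1])), shape=(n, n))
print(X_implicit.nnz, X_explicit.nnz)   # 1 vs 2 stored entries

nmf = NMF(n_components=1, init='nndsvda', random_state=0)
W1 = nmf.fit_transform(X_implicit)
err1 = nmf.reconstruction_err_
W2 = nmf.fit_transform(X_explicit)
err2 = nmf.reconstruction_err_

# Identical factors and reconstruction error: the explicitly stored zero is
# treated exactly like the zeros that are simply not stored.
print(np.allclose(W1, W2), np.isclose(err1, err2))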

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Arthur Mensch
Zeros are considered as zeros in the objective function, not as missing values, i.e. there is no mask in the loss function. On 28 August 2016 at 16:58, "Raphael C" wrote: What I meant was, how is the objective function defined when X is sparse? Raphael On Sunday, August 28, 2016, Raphael C wrote:
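
For contrast, a masked fit in the spirit of the tutorial linked in the thread would use only the observed entries. A rough projected-gradient sketch (the step size, iteration count and toy matrix are arbitrary choices of mine, not scikit-learn code):

import numpy as np

rng = np.random.RandomState(0)
X = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 0., 4.],
              [0., 1., 5., 4.]])
observed = X > 0                    # here "observed" simply means nonzero
k, lr, n_steps = 2, 0.01, 5000
W, H = rng.rand(X.shape[0], k), rng.rand(k, X.shape[1])

for _ in range(n_steps):
    R = np.where(observed, X - W @ H, 0.0)  # residual on observed entries only
    W_new = np.maximum(W + lr * R @ H.T, 0.0)  # gradient step, clipped to >= 0
    H = np.maximum(H + lr * W.T @ R, 0.0)
    W = W_new

print(np.sqrt(np.mean((X - W @ H)[observed] ** 2)))  # error on observed entries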

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
What I meant was, how is the objective function defined when X is sparse? Raphael On Sunday, August 28, 2016, Raphael C wrote: > Reading the docs for http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html it says > The objective function is: > 0.5 * ||X - WH||_Fr

[scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
Reading the docs for http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html, it says the objective function is: 0.5 * ||X - WH||_Fro^2 + alpha * l1_ratio * ||vec(W)||_1 + alpha * l1_ratio * ||vec(H)||_1 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 + 0.5 * alpha * (1 - l1_r
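
(The preview above is cut off mid-formula; assuming the missing tail is the matching ridge term on H, as that docs page states it, the full objective, minimized over non-negative W and H, reads:)

\min_{W \ge 0,\, H \ge 0} \;
  \tfrac{1}{2}\,\lVert X - WH \rVert_{\mathrm{Fro}}^{2}
  + \alpha \cdot \mathrm{l1\_ratio} \cdot \lVert \mathrm{vec}(W) \rVert_{1}
  + \alpha \cdot \mathrm{l1\_ratio} \cdot \lVert \mathrm{vec}(H) \rVert_{1}
  + \tfrac{1}{2}\,\alpha\,(1 - \mathrm{l1\_ratio})\,\lVert W \rVert_{\mathrm{Fro}}^{2}
  + \tfrac{1}{2}\,\alpha\,(1 - \mathrm{l1\_ratio})\,\lVert H \rVert_{\mathrm{Fro}}^{2}

The question in this thread is whether the first (Frobenius) term sums over all entries of X or only over the observed ones.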