Re: [scikit-learn] Issue with DecisionTreeClassifier

2016-08-28 Thread Nelson Liu
Oops, my phone removed the underscore between the two words of the variable name, but I think you get the point. Nelson On Sun, Aug 28, 2016, 13:12 Ibrahim Dalal via scikit-learn < scikit-learn@python.org> wrote: > Dear Developers, > > DecisionTreeClassifier.decision_path() as used here > http://sci

Re: [scikit-learn] Issue with DecisionTreeClassifier

2016-08-28 Thread Nelson Liu
That should be: node indicator = estimator.tree_.decision_path(X_test) PR welcome :) On Sun, Aug 28, 2016, 13:12 Ibrahim Dalal via scikit-learn < scikit-learn@python.org> wrote: > Dear Developers, > > DecisionTreeClassifier.decision_path() as used here > http://scikit-learn.org/dev/auto_examples
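For readers hitting the same error: the linked example lives in the dev (0.18) docs, while `DecisionTreeClassifier.decision_path` only exists as a public method from scikit-learn 0.18 onward. A minimal sketch of that public API on 0.18+, on toy data of my own (not the example from the thread):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

estimator = DecisionTreeClassifier(random_state=0)
estimator.fit(X_train, y_train)

# decision_path returns a sparse indicator matrix: one row per sample, one
# column per tree node; a 1 marks each node the sample passes through.
node_indicator = estimator.decision_path(X_test)
print(node_indicator.shape)
```

On 0.17 and earlier the method is simply absent, which is exactly the AttributeError reported below.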

[scikit-learn] Issue with DecisionTreeClassifier

2016-08-28 Thread Ibrahim Dalal via scikit-learn
Dear Developers, DecisionTreeClassifier.decision_path() as used here http://scikit-learn.org/dev/auto_examples/tree/unveil_tree_structure.html is giving the following error: AttributeError: 'DecisionTreeClassifier' object has no attribute 'decision_path' Kindly help. Thanks

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
On Sunday, August 28, 2016, Andy wrote: > > > On 08/28/2016 12:29 PM, Raphael C wrote: > > To give a little context from the web, see e.g. http://www.quuxlabs.com/ > blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation- > in-python/ where it explains: > > " > A question might ha

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Andy
On 08/28/2016 12:29 PM, Raphael C wrote: To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ where it explains: " A question might have come to your mind by now: if we find two matrices \ma

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
To give a little context from the web, see e.g. http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ where it explains: " A question might have come to your mind by now: if we find two matrices [image: \mathbf{P}] and [image: \mathbf{Q}] such th
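The tutorial linked above optimises only over the observed entries. A condensed sketch of that idea (SGD over the non-zero entries, using the tutorial's convention that 0 means "missing"; the matrix and hyperparameters here are illustrative, not the tutorial's exact values):

```python
import numpy as np

# Toy ratings matrix; 0 marks a missing entry (the tutorial's convention --
# note scikit-learn's NMF instead treats zeros as real zeros).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_factors, lr, reg, n_epochs = 2, 0.01, 0.02, 2000
rng = np.random.default_rng(0)
P = rng.random((R.shape[0], n_factors))
Q = rng.random((R.shape[1], n_factors))

rows, cols = np.nonzero(R)  # optimise over observed entries only
for _ in range(n_epochs):
    for i, j in zip(rows, cols):
        err = R[i, j] - P[i] @ Q[j]          # residual on one observed cell
        P[i] += lr * (err * Q[j] - reg * P[i])
        Q[j] += lr * (err * P[i] - reg * Q[j])
```

After training, `P @ Q.T` approximates R on the observed cells while the "missing" cells are filled in by the low-rank structure.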

Re: [scikit-learn] Fwd: inconsistency between libsvm and scikit-learn.svc results

2016-08-28 Thread Michael Bommarito
Any chance it's related to the seed issue in the "Decoding Differences Between SKL SVM and Matlab Libsvm Even When Parameters the Same" thread? Thanks, Michael J. Bommarito II, CEO Bommarito Consulting, LLC *Web:* http://www.bommaritollc.com *Mobile:* +1 (646) 450-3387 On Sun, Aug 28, 2016 at 12:
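One way to test that hypothesis: with `probability=True`, Platt scaling runs an internal cross-validation whose folds depend on `random_state`, so fixing the seed should make repeated runs identical. A quick sketch on synthetic data (not the original poster's setup):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Same data, same parameters, same seed: the probability estimates from the
# internal Platt-scaling CV should then match exactly across runs.
a = SVC(probability=True, random_state=0).fit(X, y).predict_proba(X[:5])
b = SVC(probability=True, random_state=0).fit(X, y).predict_proba(X[:5])
print((a == b).all())
```

If results still differ from raw libsvm with the seed fixed, the cause is elsewhere (scaling, parameter defaults, data preparation).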

Re: [scikit-learn] Latent Semantic Analysis (LSA) and TruncatedSVD

2016-08-28 Thread Andy
If you do "with_mean=False" it should be the same, right? On 08/27/2016 12:20 PM, Olivier Grisel wrote: I am not sure this is exactly the same because we do not center the data in the TruncatedSVD case (as opposed to the real PCA case where whitening is the same as calling StandardScaler). Havi
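The centering point can be checked directly: PCA centers internally, TruncatedSVD does not, so TruncatedSVD on pre-centered data should recover the PCA components up to sign. A sketch on random dense data (the ARPACK solver is my choice here for exactness, not something from the thread):

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD

rng = np.random.RandomState(0)
X = rng.rand(100, 20)

# PCA subtracts the column means itself; TruncatedSVD works on X as given,
# so we center manually before comparing the two.
Xc = X - X.mean(axis=0)

pca = PCA(n_components=5).fit(X)
svd = TruncatedSVD(n_components=5, algorithm="arpack").fit(Xc)

# Singular vectors are only defined up to sign, so compare absolute values.
same = np.allclose(np.abs(pca.components_), np.abs(svd.components_), atol=1e-6)
print(same)
```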

Re: [scikit-learn] Fwd: inconsistency between libsvm and scikit-learn.svc results

2016-08-28 Thread Andy
On 08/27/2016 09:48 AM, Joel Nothman wrote: I don't think we should assume that this is the only possible reason for inconsistency. Could you give us a small snippet of data and code on which you find this inconsistency? I would also expect different settings or random states or data prepar

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
Thank you for the quick reply. Just to make sure I understand: if X is sparse and n by n, with X[0,0] = 1 and X[n-1, n-1] = 0 explicitly set (that is, only two values are set in X), is this then treated, for the purposes of the objective function, the same as the all-zeros n by n matrix with X[0,0] set to

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Arthur Mensch
Zeros are considered as zeros in the objective function, not as missing values; that is, there is no mask in the loss function. On 28 Aug 2016 at 16:58, "Raphael C" wrote: What I meant was, how is the objective function defined when X is sparse? Raphael On Sunday, August 28, 2016, Raphael C wrote: >
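This can be verified numerically: for the default Frobenius loss, the fitted model's `reconstruction_err_` equals the plain Frobenius norm ||X - WH||_F taken over every entry, zeros included. A small sketch (toy matrix of my own; assumes the default loss and no regularisation):

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 0.0]])

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)
H = model.components_

# The loss sums over all entries of X, zeros included -- no mask is applied,
# so the hand-computed full Frobenius norm matches the stored error.
manual = np.linalg.norm(X - W @ H)
print(manual, model.reconstruction_err_)
```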

Re: [scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
What I meant was, how is the objective function defined when X is sparse? Raphael On Sunday, August 28, 2016, Raphael C wrote: > Reading the docs for http://scikit-learn.org/stable/modules/generated/ > sklearn.decomposition.NMF.html it says > > The objective function is: > > 0.5 * ||X - WH||_Fr

[scikit-learn] Does NMF optimise over observed values

2016-08-28 Thread Raphael C
Reading the docs for http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html it says The objective function is: 0.5 * ||X - WH||_Fro^2 + alpha * l1_ratio * ||vec(W)||_1 + alpha * l1_ratio * ||vec(H)||_1 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 + 0.5 * alpha * (1 - l1_r
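The archive truncates the objective; written out in full as in the scikit-learn docs (with \(\rho\) standing for `l1_ratio`), it reads:

```latex
\frac{1}{2}\lVert X - WH \rVert_{\mathrm{Fro}}^{2}
+ \alpha\,\rho\,\lVert \operatorname{vec}(W) \rVert_{1}
+ \alpha\,\rho\,\lVert \operatorname{vec}(H) \rVert_{1}
+ \tfrac{1}{2}\,\alpha\,(1-\rho)\,\lVert W \rVert_{\mathrm{Fro}}^{2}
+ \tfrac{1}{2}\,\alpha\,(1-\rho)\,\lVert H \rVert_{\mathrm{Fro}}^{2}
```

The Frobenius term runs over every entry of X, which is what the question about observed versus unobserved values turns on.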

Re: [scikit-learn] GradientBoostingRegressor, question about initialisation with MeanEstimator

2016-08-28 Thread Алексей Драль
Hi Mathieu, I was looking for exactly this article. Thank you very much. 2016-08-28 5:30 GMT+01:00 Mathieu Blondel : > This comes from Algorithm 1, line 1, in "Greedy Function Approximation: a > Gradient Boosting Machine" by J. Friedman. > > Intuitively, this has the same effect as fitting a bia
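Friedman's Algorithm 1, line 1, sets F_0 = argmin_gamma sum_i L(y_i, gamma); for squared loss that constant is the mean of y, which is exactly what a mean-initialiser computes. A sketch (assuming a recent scikit-learn, where the fitted initial model is exposed as `init_` and is a `DummyRegressor` rather than the old `MeanEstimator`):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 10.0 * X[:, 0] + rng.randn(200)

gbr = GradientBoostingRegressor(n_estimators=5, random_state=0).fit(X, y)

# Under squared loss the optimal constant F_0 is simply the mean of y,
# so the fitted initial estimator predicts y.mean() for every input.
baseline = gbr.init_.predict(X[:1])[0]
print(baseline, y.mean())
```

Subsequent trees are then fit to the residuals relative to this constant baseline.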