Or you could just say X.reshape(100, 1), which is what I do.
Anne Dwyer
On Tue, Mar 26, 2013 at 9:44 AM, Lars Buitinck wrote:
2013/3/26 abinash.panda.ece10 :
> X = np.array([2*val for val in y])
> X.shape
> (100,)
The correct input format is, per the documentation:
X : numpy array or sparse matrix of shape [n_samples, n_features]
So that's
X = np.array([[2*val] for val in y])
or
X = np.atleast_2d(2 * y).T
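To make the shape fix concrete, here is a small sketch (against the current scikit-learn API; the (n_samples, n_features) convention is the same as in the thread) showing the three equivalent ways of getting a single-feature 2-D X, plus the fit that previously failed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

y = np.arange(100)

# scikit-learn expects X to be 2-D with shape (n_samples, n_features),
# so a single feature needs an explicit second axis.
X_list = np.array([[2 * val] for val in y])   # (100, 1), the list-of-lists form
X_2d = np.atleast_2d(2 * y).T                 # (100, 1), atleast_2d then transpose
X_reshaped = (2 * y).reshape(-1, 1)           # (100, 1), the reshape suggestion

clf = LinearRegression()
clf.fit(X_reshaped, y)                        # fits without a shape error
```

Since y = 0.5 * X exactly, the fitted coefficient comes out as 0.5 with a near-zero intercept.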
I have tried to fit some 1-D data using LinearRegression available in
linear_model. Encountered the following error:
>>> import numpy as np
>>> from sklearn import linear_model
>>> y = np.arange(100)
>>> X = np.array([2*val for val in y])
>>> clf = linear_model.LinearRegression()
>>> clf.fit(X, y)
Traceback (most recent call last):
...
Please review https://github.com/scikit-learn/scikit-learn/pull/1814
--
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam
AFAIK, you might not want all the missing values to be imputed at once,
especially if the dimensions of X are large. Maybe something like:
X_transformed = estimator.fit_transform(X)  # X contains missing values
X_subset = estimator.inverse_transform(X_transformed, row_subset)  # impute only a subset
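The `inverse_transform(X_transformed, row_subset)` call above is a proposed interface, not an existing estimator method. A plain-NumPy sketch of the underlying idea (names `fit_column_means` and `impute_rows` are invented for illustration): fit the imputation statistics once over all of X, then fill missing values only for the rows you ask for.

```python
import numpy as np

def fit_column_means(X):
    """Imputation statistics: per-column mean, ignoring NaNs."""
    return np.nanmean(X, axis=0)

def impute_rows(X, col_means, row_subset):
    """Fill NaNs with the fitted column means, but only for row_subset."""
    X_sub = np.array(X[row_subset], dtype=float, copy=True)
    rows, cols = np.nonzero(np.isnan(X_sub))
    X_sub[rows, cols] = col_means[cols]
    return X_sub

X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])
means = fit_column_means(X)          # column means over observed values
X_fixed = impute_rows(X, means, [0, 2])
```

This keeps the fit cheap (one pass over X) while deferring the actual imputation, which is the point when X is large.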
On Tue, Mar 26, 2013 at 3:55 AM, Mathieu Blondel wrote:
> On Tue, Mar 26, 2013 at 1:41 AM, Olivier Grisel
> wrote:
>
> > I am also +1 on a simple short-term solution, while still keeping the
> > longer-term goals of:
> > - proper multinomial penalized LR on one hand,
>
> It would still be nice to have it
> +1 for simple normalization in SGDClassifier.predict_proba and a
> meta-estimator approach for
> calibrating probability outputs (finishing @agramfort's PR would be a
> great topic for the next sprint).
that's my plan unless somebody beats me to it.
Alex
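To illustrate the "simple normalization" idea being +1'd: given the per-class one-vs-all decision values, one straightforward scheme is a softmax over the scores. This is only one possible normalization, sketched for illustration; it is not necessarily the scheme SGDClassifier.predict_proba ended up using, and proper calibration is exactly what the meta-estimator PR is for.

```python
import numpy as np

def ova_predict_proba(scores):
    """Normalize per-class OvA decision values into probabilities.

    One simple choice: softmax. Shift by the row max first for
    numerical stability; the result is unchanged mathematically.
    """
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Two samples, three classes worth of decision values.
scores = np.array([[2.0, 0.5, -1.0],
                   [-0.2, 0.1, 0.0]])
proba = ova_predict_proba(scores)
```

Each row sums to 1 and the argmax is preserved, so predict() and predict_proba() stay consistent; what softmax does not give you is calibrated probabilities, hence the separate calibration work.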
---
Hi all,
I noticed that there is little interest in multi-view analysis in sklearn.
The currently implemented modules are PLS and CCA. The documentation has no
examples (I think) for them. I can add something in that direction and also
help in implementing KPLS, KCCA, bi-linear models (BLM) and other
On Tue, Mar 26, 2013 at 3:28 PM, Gael Varoquaux
wrote:
> * For matrix factorization to be useful in the context of recommender
> systems, there needs to be an API for recommender systems. While I'd
> love to see this, I am afraid that it might be premature and should
> probably happen after t
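As a purely hypothetical sketch of what such a recommender API might look like (every name here is invented; this is a trivial per-item-mean baseline meant only to illustrate the interface shape of fitting on (user, item) pairs plus ratings):

```python
import numpy as np

class ItemMeanRecommender:
    """Hypothetical API sketch: fit on (user, item) index pairs and
    their ratings, then predict ratings for new pairs. The model is a
    trivial per-item-mean baseline, not a real estimator."""

    def fit(self, pairs, ratings):
        pairs = np.asarray(pairs)
        ratings = np.asarray(ratings, dtype=float)
        self.global_mean_ = ratings.mean()
        # Per-item mean rating over the observed pairs.
        self.item_means_ = {
            item: ratings[pairs[:, 1] == item].mean()
            for item in np.unique(pairs[:, 1])
        }
        return self

    def predict(self, pairs):
        pairs = np.asarray(pairs)
        # Unseen items fall back to the global mean.
        return np.array([self.item_means_.get(item, self.global_mean_)
                         for _, item in pairs])

pairs = [(0, 10), (1, 10), (0, 20)]
ratings = [4.0, 2.0, 5.0]
rec = ItemMeanRecommender().fit(pairs, ratings)
pred = rec.predict([(2, 10), (2, 99)])  # known item vs. unseen item
```

The open design question Gael raises is exactly what `fit`/`predict` should accept here (index pairs, a sparse ratings matrix, ...), which is why an API discussion has to come first.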