There is no tool in scikit-learn for that; as others said, it is not
really relevant for predictive performance.
Scikit-learn is not really meant for statistical inference.
Have a look at statsmodels.
http://statsmodels.sourceforge.net/
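Since the thread brings up VIF, here is a numpy-only sketch of the same computation (the `vif` helper and the synthetic data are mine; statsmodels also provides `variance_inflation_factor` in `statsmodels.stats.outliers_influence`):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on all the other columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)               # independent of the others
X = np.column_stack([x1, x2, x3])
print(vif(X))  # x1 and x2 get very large VIFs, x3 stays near 1
```

A common rule of thumb is to treat VIF values above 5 or 10 as a sign of problematic collinearity.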
On 04/11/2015 12:23 PM, Ady Wahyudi Paundu wrote:
> Hi all,
Like Luca mentioned, if your primary goal is to train a well-performing model
for regression or classification, you don't really have to worry much about
collinearity, e.g., if you are using regularized models. For classification via
decision trees, you probably want to reduce the number of features.
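The point above about regularized models can be illustrated with a tiny numpy sketch (the data and variable names are made up): with two perfectly collinear columns, the ordinary least-squares normal equations are singular, but ridge regression's `alpha * I` penalty keeps the system well posed.

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1])           # two perfectly collinear columns
y = 3 * x1 + 0.1 * rng.normal(size=100)

# Ordinary least squares needs X.T @ X to be invertible -- here it is not.
XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))       # 1, not 2

# Ridge adds alpha * I to the normal equations, making them solvable.
alpha = 1.0
w = np.linalg.solve(XtX + alpha * np.eye(2), X.T @ y)
print(w)  # the weight for y ~ 3*x1 is split between the duplicate columns
```

This is essentially what `sklearn.linear_model.Ridge` does, which is why collinearity is not fatal for a regularized predictive model.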
It is difficult to say what you should do. In particular, it depends on
what you want to do. If you are interested only in prediction performance,
you do not need to care about collinearity.
On Sat, Apr 11, 2015, 19:19 Ady Wahyudi Paundu wrote:
> Can I?
> I'm not that good at statistics, so I tried it in R.
Can I?
I'm not that good at statistics, so I tried it in R. The results from PCA
and VIF were totally different.
~ady
On Sunday, April 12, 2015, Luca Puggini wrote:
> Maybe you can try pca?
>
> On Sat, Apr 11, 2015, 18:24 Ady Wahyudi Paundu wrote:
>
>> Hi all,
>>
>> In scikit-learn, how can I pre-process data and remove multicollinearity?
Maybe you can try pca?
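The PCA suggestion can be sketched with plain numpy (synthetic data; in scikit-learn, `sklearn.decomposition.PCA` wraps the same idea): projecting the centered data onto its principal axes yields uncorrelated columns, so the collinearity between the original features disappears.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = 2 * x1 + 0.1 * rng.normal(size=500)  # strongly collinear with x1
X = np.column_stack([x1, x2])

# Center, then project onto the principal axes (right singular vectors).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T   # principal-component scores: uncorrelated columns

# Off-diagonal covariance of the scores is ~0: the collinearity is gone.
print(np.round(np.cov(scores, rowvar=False), 6))
```

Dropping the components with near-zero variance then gives a reduced, collinearity-free feature set, at the cost of interpretability of the new features.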
On Sat, Apr 11, 2015, 18:24 Ady Wahyudi Paundu wrote:
> Hi all,
>
> In scikit-learn, how can I pre-process data and remove multicollinearity?
>
> ~Ady
>
Hi all,
In scikit-learn, how can I pre-process data and remove multicollinearity?
~Ady