Can I?
I'm not that good at statistics, so I tried it in R. The results from PCA and VIF
were totally different.
~ady
On Sunday, April 12, 2015, Luca Puggini lucapug...@gmail.com wrote:
Maybe you can try PCA?
On Sat, Apr 11, 2015, 18:24 Ady Wahyudi Paundu awpau...@gmail.com
Hi all,
In scikit-learn, how do I pre-process data and remove multicollinearity?
~Ady
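Not from the thread itself, but a minimal sketch of the PCA suggestion in scikit-learn: the toy two-feature dataset below is made up, and the point is only that PCA scores are uncorrelated by construction, which removes multicollinearity among the transformed features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Fabricated example data: x2 is almost a linear function of x1,
# so the two raw features are highly collinear.
rng = np.random.RandomState(0)
x1 = rng.normal(size=100)
x2 = 2 * x1 + rng.normal(scale=0.1, size=100)
X = np.column_stack([x1, x2])

# Standardize first so each feature contributes on the same scale,
# then project onto the principal components.
X_scaled = StandardScaler().fit_transform(X)
X_pca = PCA().fit_transform(X_scaled)

# The component scores are pairwise uncorrelated (off-diagonal ~ 0).
corr = np.corrcoef(X_pca, rowvar=False)
print(np.round(corr, 6))
```

Note this only decorrelates the derived features; it does not tell you which original variables were redundant, which is where a VIF-style analysis (as in R) answers a different question than PCA.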
--
BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT
Develop your own process in accordance with the BPMN 2 standard
What is your dataset like? How are you building your individual classifier that
you are ensembling with AdaBoost? A common use case would be boosted decision
stumps (one-level decision trees).
http://en.wikipedia.org/wiki/Decision_stump
Or report macro and micro averages in classification_report. Micro-averaging is
equivalent to accuracy for multiclass, absent #4287
https://github.com/scikit-learn/scikit-learn/pull/4287.
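To illustrate the point above (my example labels, not from the thread): in the plain multiclass single-label case every prediction counts once, so micro-averaged precision, recall, and F1 all collapse to accuracy.

```python
from sklearn.metrics import accuracy_score, classification_report, f1_score

# Fabricated 3-class labels for illustration.
y_true = [0, 1, 2, 2, 2, 1, 0, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1, 0, 2, 0, 0]

# Per-class precision/recall/F1 plus the macro and weighted averages.
print(classification_report(y_true, y_pred))

# Micro-averaged F1 coincides with accuracy here: aggregated TP is the
# number of correct predictions, and every error is both an FP and an FN.
micro = f1_score(y_true, y_pred, average="micro")
macro = f1_score(y_true, y_pred, average="macro")
acc = accuracy_score(y_true, y_pred)
print(micro, macro, acc)
```

Macro-averaging, by contrast, weights every class equally regardless of support, so it diverges from accuracy whenever classes are imbalanced or some classes are predicted poorly.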
On 10 April 2015 at 01:00, Andreas Mueller t3k...@gmail.com wrote:
Hi Jack.
You mean in the classification report?
That
I've made some changes to the proposal.
https://github.com/scikit-learn/scikit-learn/wiki/GSoC-2015-Proposal:-scikit-learn:-Cross-validation-and-Meta-estimators-for-Semi-supervised-learning
On Wed, Apr 8, 2015 at 8:44 PM, Andreas Mueller t3k...@gmail.com wrote:
So what would you suggest?
On