Hi everyone,
I'm currently using scikit-learn to train and test multiple neural networks.
My issue: I'm splitting my dataset 90/10, training on the 90%, and
testing on the 10%.
For the 10% test data, I get predictions as follows:
predicted = neural_network.predict(test_data)
Here, the pre
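For context, the 90/10 split-and-predict workflow described above can be sketched end to end. The MLPClassifier settings and the synthetic data below are illustrative assumptions, not the poster's actual setup:

```python
# Minimal sketch of a 90/10 train/test split with scikit-learn.
# The classifier settings and the synthetic data are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# 90% training data, 10% held-out test data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0)

neural_network = MLPClassifier(max_iter=500, random_state=0)
neural_network.fit(X_train, y_train)

predicted = neural_network.predict(X_test)
print(predicted.shape)  # one prediction per test sample: (50,)
```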
Thanks Piotr, this was indeed the case. Works for me now :)
On Wed, Oct 26, 2016 at 11:26 AM, Suranga Kasthurirathne <
suranga...@gmail.com> wrote:
>
> Hi everyone,
>
> I'm currently using Scikit learn to train and test multiple neural
> networks.
>
> My issue -
Hi folks!
I'm using scikit-learn to build two neural networks with a 10% holdout, and
to compare their performance using precision. To compare the statistical
significance of the variance in precision, I'm using matplotlib's boxplots.
My problem is twofold:
1) The standard deviation in the precision of the
Hi Sebastian!
Thank you, you might be onto something here ;)
So, I may have to compare more than 2 models, so McNemar's test may not be an option :(
In regard to your second comment, this is how I input the results when
building my boxplots:
plt.boxplot(results)
So what does "results" look like?
[0.8543380834571
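For reference, a minimal sketch of what `results` can look like for `plt.boxplot`: one list of precision scores per model, e.g. collected across CV folds. All the numbers below are made-up placeholders, not the poster's values:

```python
# Sketch: "results" is a list of per-model score lists; plt.boxplot
# draws one box per inner list.
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

results = [
    [0.85, 0.83, 0.88, 0.84, 0.86],  # model A: precision per fold
    [0.79, 0.82, 0.80, 0.78, 0.81],  # model B: precision per fold
]

fig, ax = plt.subplots()
box = ax.boxplot(results)  # one box per inner list
ax.set_xticks([1, 2])
ax.set_xticklabels(["model A", "model B"])
ax.set_ylabel("precision")
print(len(box["boxes"]))  # 2
```

If `results` is instead a single flat list of scores, `plt.boxplot` draws only one box, which makes a two-model comparison impossible.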
Hi all,
I apologize - I've been looking for this answer all over the internet, and
it could be that I'm not googling the right terms.
For managing unbalanced datasets, Weka has SMOTE, and scikit-learn's
imbalanced-learn contrib package has RandomOverSampler.
In Weka, we can ask it to boost by a given percentage (say 100%) so an
unders
Well actually, I'm able to answer this myself. It's the ratio parameter
(see:
http://contrib.scikit-learn.org/imbalanced-learn/generated/imblearn.over_sampling.RandomOverSampler.html
)
:) :)
On Tue, Jan 10, 2017 at 12:36 PM, Suranga Kasthurirathne <
suranga...@gmail.com> wrote:
Hi all,
I'm using scikit-learn to build a number of random forest models using the
default number of trees.
However, when I print out the prediction probability (
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier
Hi there!
Thank you, yeah, it was the number of estimators. I was hoping there was
something easier I could do, but apparently not! Anyway, thank you, this
did solve the problem :)
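For anyone finding this thread later, a sketch of why `n_estimators` matters here: with scikit-learn's default fully grown trees, each tree's leaf is pure, so `predict_proba` averages 0/1 votes and can only produce multiples of 1/n_estimators. Raising `n_estimators` gives a finer-grained probability scale. The data below is synthetic:

```python
# Sketch: coarse RandomForest probabilities come from averaging
# pure-leaf (0/1) votes across n_estimators trees.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, random_state=0)

clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X, y)

proba = clf.predict_proba(X)[:, 1]
# With pure leaves, every probability is k/10 for an integer k
print(sorted(set(np.round(proba * 10).astype(int))))
```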
On Fri, Apr 14, 2017 at 11:17 AM, Andreas Mueller wrote:
>
>
> On 04/13/2017 02:45 PM, Gael Varoqua
Hello all,
I'm looking at the confusion matrix and performance measures (precision,
recall, F-measure, etc.) produced by scikit-learn.
It seems that scikit-learn calculates these measures for each outcome class,
and then combines them into some sort of average.
I would really like to see these measures pres
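A sketch of the per-class versus averaged behaviour: with `average=None`, scikit-learn's metric functions report one value per outcome class, and the `average` argument (`'macro'`, `'micro'`, `'weighted'`) controls how they are combined. The labels below are made up for illustration:

```python
# Sketch: per-class precision vs. a single combined average.
from sklearn.metrics import classification_report, precision_score

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2, 0, 2]

# One precision value per outcome class (ordered by class label)
per_class = precision_score(y_true, y_pred, average=None)
print(per_class)  # [0.5, 0.6667, 0.75] for classes 0, 1, 2

# Unweighted mean of the per-class values
print(precision_score(y_true, y_pred, average="macro"))

# Full per-class breakdown plus the averages in one table
print(classification_report(y_true, y_pred))
```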