Hello Mahesan,
Thank you for pointing that out. This was actually before the latest build.
It was happening because of the following code snippet in view-model.jag:
var actual = testResultDataPointsSample[i].predictedVsActual.actual;
var predicted = testResultDataPointsSample[i].predictedVsActual.predicted;
var labeledPredicted = labelPredicted(predicted, 0.5);

if (actual == labeledPredicted) {
    predictedVsActualPoint[2] = 'Correct';
} else {
    predictedVsActualPoint[2] = 'Incorrect';
}
where it should compare *actual == predicted* for deep learning. But it is
fixed in the latest commit, as CD mentioned, so it is working properly at
the moment. I've attached a screenshot of the new version.
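For reference, here is a minimal sketch of the fixed logic (plain JavaScript with made-up sample data, not the actual view-model.jag code): labelPredicted(predicted, 0.5) thresholds a probability, which only makes sense for a binary classifier, whereas the deep learning model already returns a class label, so actual and predicted are compared directly.

```javascript
// Illustrative sample data (not real test results): multiclass labels,
// as produced by the deep learning model.
var testResultDataPointsSample = [
    { predictedVsActual: { actual: 2.0, predicted: 2.0 } }, // correct
    { predictedVsActual: { actual: 1.0, predicted: 2.0 } }  // incorrect
];

// markPrediction is a hypothetical helper name; it mirrors the branch in
// drawPredictedVsActualChart but compares the labels directly instead of
// calling labelPredicted(predicted, 0.5).
function markPrediction(point) {
    var actual = point.predictedVsActual.actual;
    var predicted = point.predictedVsActual.predicted;
    return (actual == predicted) ? 'Correct' : 'Incorrect';
}

console.log(testResultDataPointsSample.map(markPrediction));
// [ 'Correct', 'Incorrect' ]
```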
On Wed, Jul 15, 2015 at 7:54 PM, Sinnathamby Mahesan <[email protected]>
wrote:
> Hi Thushan
> thank you for sending the attachments.
> I am just wondering why I see many red-dots in the graphs:
> For example, for the iris data set, according to the table only 3 were
> found incorrectly predicted,
> whereas the scatter diagram shows many reds as well as greens.
> Enlighten me if the way I see is wrong.
> :-)
> Regards
> Mahesan
>
> On 13 July 2015 at 07:14, Thushan Ganegedara <[email protected]> wrote:
>
>> Hi all,
>>
>> I have integrated H2O deep learning into WSO2 ML successfully. Following
>> are the stats on two tests conducted (screenshots attached).
>>
>> Iris dataset - 93.62% Accuracy
>> MNIST (Small) dataset - 94.94% Accuracy
>>
>> However, there were a few unusual issues that I had to spend a lot of
>> time identifying.
>>
>> *FrameSplitter does not work for any value other than 0.5; for any other
>> value, the following error is returned.*
>> (FrameSplitter is used to split the training data into train and
>> validation sets.)
>> barrier onExCompletion for
>> hex.deeplearning.DeepLearning$DeepLearningDriver@25e994ae
>> java.lang.RuntimeException: java.lang.RuntimeException:
>> java.lang.NullPointerException
>> at
>> hex.deeplearning.DeepLearning$DeepLearningDriver.trainModel(DeepLearning.java:382)
>>
>> *DeepLearningModel.score(double[] vec) method doesn't work. *
>> The predictions obtained with score(Frame f) and score(double[] v) are
>> shown below.
>>
>> *Actual, score(Frame f), score(double[] v)*
>> 0.0, 0.0, 1.0
>> 1.0, 1.0, 2.0
>> 2.0, 2.0, 2.0
>> 2.0, 1.0, 2.0
>> 1.0, 1.0, 2.0
>>
>> As you can see, score(double[] v) is quite poor.
>>
>> After fixing the above issues, everything seems to be working fine at the
>> moment.
>>
>> However, I have a concern regarding the following method in
>> view-model.jag -> function
>> drawPredictedVsActualChart(testResultDataPointsSample):
>>
>> var actual = testResultDataPointsSample[i].predictedVsActual.actual;
>> var predicted = testResultDataPointsSample[i].predictedVsActual.predicted;
>> var labeledPredicted = labelPredicted(predicted, 0.5);
>>
>> if (actual == labeledPredicted) {
>>     predictedVsActualPoint[2] = 'Correct';
>> } else {
>>     predictedVsActualPoint[2] = 'Incorrect';
>> }
>>
>> Why does it compare *actual and labeledPredicted* when it should be
>> comparing *actual and predicted*?
>>
>> Also, the *Actual vs Predicted graph for MNIST shows the axes in
>> "Meters"* (mnist.png), which doesn't make sense. I'm still looking into
>> this.
>>
>> Thank you
>>
>>
>>
>> --
>> Regards,
>>
>> Thushan Ganegedara
>> School of IT
>> University of Sydney, Australia
>>
>> _______________________________________________
>> Dev mailing list
>> [email protected]
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Sinnathamby Mahesan
>
>
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
--
Regards,
Thushan Ganegedara
School of IT
University of Sydney, Australia