Hi all,

Thank you very much for pointing that out. I'll get the latest update and check.

On Mon, Jul 13, 2015 at 3:03 PM, CD Athuraliya <[email protected]> wrote:

> Hi Thushan,
>
> That method has been updated. Please get the latest code. You might have to
> define your own case depending on the predicted values.
>
> CD Athuraliya
> Sent from my mobile device
> On Jul 13, 2015 10:24 AM, "Nirmal Fernando" <[email protected]> wrote:
>
>> Great work Thushan! On the UI issues, @CD could help you. AFAIK, actual
>> holds a pointer to the actual label, predicted is the probability, and
>> predictedLabel is the result of rounding it using a threshold.
>>
>> On Mon, Jul 13, 2015 at 7:14 AM, Thushan Ganegedara <[email protected]>
>> wrote:
>>
>>> Hi all,
>>>
>>> I have successfully integrated H2O deep learning into WSO2 ML. Below are
>>> the stats from 2 tests (screenshots attached).
>>>
>>> Iris dataset - 93.62% Accuracy
>>> MNIST (Small) dataset - 94.94% Accuracy
>>>
>>> However, there were a few unusual issues that I had to spend a lot of
>>> time identifying.
>>>
>>> *FrameSplitter does not work for any value other than 0.5. For any other
>>> value, the following error is returned:*
>>> (FrameSplitter is used to split trainingData into train and valid sets)
>>> barrier onExCompletion for
>>> hex.deeplearning.DeepLearning$DeepLearningDriver@25e994ae
>>> java.lang.RuntimeException: java.lang.RuntimeException:
>>> java.lang.NullPointerException
>>> at
>>> hex.deeplearning.DeepLearning$DeepLearningDriver.trainModel(DeepLearning.java:382)
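>>>
>>> For reference, the split is invoked roughly as below (a minimal sketch;
>>> the constructor arguments and the submit/join pattern follow my reading
>>> of H2O's hex.FrameSplitter API, so treat the details as assumptions):
>>>
>>> // water.fvec.Frame, water.Key, water.H2O, hex.FrameSplitter
>>> // Split trainingData into an 80% train frame and a 20% valid frame.
>>> // Any ratio other than 0.5 currently triggers the NPE shown above.
>>> FrameSplitter splitter = new FrameSplitter(trainingData,
>>>         new double[] { 0.8 },
>>>         new Key[] { Key.make("train"), Key.make("valid") },
>>>         null);
>>> H2O.submitTask(splitter).join();        // run the split and wait
>>> Frame[] splits = splitter.getResult();
>>> Frame train = splits[0];
>>> Frame valid = splits[1];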
>>>
>>> *DeepLearningModel.score(double[] vec) method doesn't work.*
>>> The predictions obtained with score(Frame f) and score(double[] v) are
>>> shown below.
>>>
>>> *Actual, score(Frame f), score(double[] v)*
>>> 0.0, 0.0, 1.0
>>> 1.0, 1.0, 2.0
>>> 2.0, 2.0, 2.0
>>> 2.0, 1.0, 2.0
>>> 1.0, 1.0, 2.0
>>>
>>> As you can see, the predictions from score(double[] v) are quite poor.
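>>>
>>> A possible workaround (a hypothetical sketch, assuming H2O's
>>> Vec.makeVec and Frame constructors behave as I expect; the column names
>>> are made up): wrap the single observation in a one-row Frame and call
>>> score(Frame), which produces the correct predictions above.
>>>
>>> // water.fvec.Frame, water.fvec.Vec
>>> double[] row = { 5.1, 3.5, 1.4, 0.2 };       // one Iris observation
>>> String[] names = { "sl", "sw", "pl", "pw" }; // assumed feature names
>>> Vec[] vecs = new Vec[row.length];
>>> for (int i = 0; i < row.length; i++)
>>>     vecs[i] = Vec.makeVec(new double[] { row[i] }, Vec.newKey());
>>> Frame single = new Frame(names, vecs);
>>> Frame preds = model.score(single);           // col 0 = predicted label
>>> double predictedLabel = preds.vec(0).at(0);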
>>>
>>> After fixing the above issues, everything seems to be working fine at
>>> the moment.
>>>
>>> However, I have a concern regarding the following method in
>>> view-model.jag -> function
>>> drawPredictedVsActualChart(testResultDataPointsSample):
>>>
>>> var actual = testResultDataPointsSample[i].predictedVsActual.actual;
>>> var predicted = testResultDataPointsSample[i].predictedVsActual.predicted;
>>> var labeledPredicted = labelPredicted(predicted, 0.5);
>>>
>>> if (actual == labeledPredicted) {
>>>     predictedVsActualPoint[2] = 'Correct';
>>> } else {
>>>     predictedVsActualPoint[2] = 'Incorrect';
>>> }
>>>
>>> Why does it compare *actual and labeledPredicted* when it should be
>>> comparing *actual and predicted*?
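>>>
>>> To illustrate the concern (a made-up example, not the actual
>>> labelPredicted implementation): thresholding at 0.5 only makes sense
>>> for a binary target, while the Iris and MNIST predictions here are
>>> multi-class labels.
>>>
>>> // Binary case: predicted is a probability, so thresholding is valid.
>>> double binaryProb = 0.83;
>>> double binaryLabel = binaryProb > 0.5 ? 1.0 : 0.0;    // -> 1.0
>>>
>>> // Multi-class case: predicted is already a class label (e.g. 2.0 for
>>> // Iris), so applying the same threshold collapses every non-zero
>>> // class to 1 and the comparison against actual goes wrong.
>>> double multiClassPredicted = 2.0;
>>> double wrong = multiClassPredicted > 0.5 ? 1.0 : 0.0; // -> 1.0, not 2.0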
>>>
>>> Also, the *Actual vs Predicted graph for MNIST shows the axes in
>>> "Meters"* (mnist.png), which doesn't make sense. I'm still looking into
>>> this.
>>>
>>> Thank you
>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>> Thushan Ganegedara
>>> School of IT
>>> University of Sydney, Australia
>>>
>>
>>
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>> Mobile: +94715779733
>> Blog: http://nirmalfdo.blogspot.com/
>>


-- 
Regards,

Thushan Ganegedara
School of IT
University of Sydney, Australia
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
