Hello,


I would like some explanation of how the confusion matrix is computed during 
the learning process with OTB, because I get totally different results 
depending on how I proceed.


On one side, I have split my reference data (polygons, 4 classes of forest 
defoliation) into two files: a learning data set (50% of the polygons) and a 
testing data set (the other 50%). To do so, I sampled polygons regularly 
according to their size in each defoliation class. Perhaps an important point 
is that I sampled polygons, not points or pixels. I train my random forest 
classifier on the learning data set, and then I compute the confusion matrix 
from the resulting classification map and the testing data set.
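For reference, here is the pixel-level bookkeeping I have in mind for this 
second step, as a small plain-Python sketch (illustrative only, not OTB code; 
the class labels and pixel arrays are made up):

```python
def confusion_matrix(reference, predicted, classes):
    """Count pixels for each (reference class, predicted class) pair."""
    counts = {c: {p: 0 for p in classes} for c in classes}
    for ref, pred in zip(reference, predicted):
        counts[ref][pred] += 1
    return counts

# Toy example: 4 defoliation classes (1-4), eight "pixels"
reference = [1, 1, 2, 2, 3, 3, 4, 4]
predicted = [1, 2, 2, 2, 3, 4, 4, 4]
cm = confusion_matrix(reference, predicted, classes=[1, 2, 3, 4])

# Overall accuracy = sum of the diagonal / total pixel count
correct = sum(cm[c][c] for c in cm)
total = sum(sum(row.values()) for row in cm.values())
print(correct / total)  # 0.75
```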


On the other side, it seems to me that the TrainImagesClassifier module can 
compute the classification rule and the confusion matrix simultaneously, 
using the entire reference data set (learning and testing polygons together) 
with the learning/validation ratio set to 0.5. Is that right? If so, I don't 
understand why I get completely different results. Could the reason be that 
OTB uses a different sampling procedure, for example a systematic sampling of 
pixels?
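To make the suspicion concrete: a 50/50 split taken over whole polygons is 
not the same thing as a 50/50 split taken over pooled pixels. A toy sketch of 
the difference (illustrative only; the polygon/pixel names are invented, and 
OTB's actual sampling strategy may differ):

```python
import random

# Hypothetical reference data: polygon id -> list of pixel sample ids
polygons = {f"poly{i}": [f"poly{i}-px{j}" for j in range(10 * (i + 1))]
            for i in range(4)}

random.seed(0)

# Split A: sample whole polygons (the manual workflow above)
poly_ids = sorted(polygons)
random.shuffle(poly_ids)
train_polys = poly_ids[: len(poly_ids) // 2]
train_a = [px for p in train_polys for px in polygons[p]]

# Split B: pool all pixels, then take half (pixel-level sampling)
all_px = [px for pxs in polygons.values() for px in pxs]
random.shuffle(all_px)
train_b = all_px[: len(all_px) // 2]

# With the pixel-level split, training and validation pixels can come
# from the same polygon, so spatial autocorrelation within a polygon
# can inflate the validation accuracy; the polygon-level split keeps
# each polygon entirely on one side.
polys_in_a = {px.split("-")[0] for px in train_a}
polys_in_b = {px.split("-")[0] for px in train_b}
print(len(polys_in_a), len(polys_in_b))
```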


Thank you for your answer.


Thierry Bélouard

-- 
Check the OTB FAQ at
http://www.orfeo-toolbox.org/FAQ.html

You received this message because you are subscribed to the Google
Groups "otb-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/otb-users?hl=en