I have noticed that every time I train and test a model on the same data with the SGD algorithm, I get a different confusion matrix. That is, if I build a model and look at the confusion matrix, it might report 90% correctly classified instances, but if I build the model again (with the SAME training and testing data as before) and test it, the confusion matrix changes and might report only 75% correctly classified instances.
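
For illustration, here is a minimal sketch of the effect using scikit-learn's SGDClassifier (a stand-in, not my actual setup): the dataset and the train/test split are both fixed, only the SGD training itself is repeated, yet the two runs can print different accuracies and confusion matrices because the per-epoch shuffling of the training data is not seeded.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    # Fixed dataset and fixed train/test split -- only SGD training varies.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    for run in range(2):
        # No random_state: the shuffle order differs between runs, so the
        # learned weights (and hence the confusion matrix) can differ too.
        clf = SGDClassifier(max_iter=5, tol=None)
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(f"run {run}: accuracy = {accuracy_score(y_te, pred):.3f}")
        print(confusion_matrix(y_te, pred))

In this sketch, passing a fixed random_state to SGDClassifier makes the two runs produce identical results.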

Is this expected behavior?
