Hello Sir/Madam,

I am going through the incremental learning algorithms in scikit-learn. SGD in scikit-learn (e.g. SGDClassifier, trained via partial_fit) is one such algorithm that allows learning incrementally by passing the training data in chunks/batches. My question is about memory: does scikit-learn keep all of the training batches in memory? Does it keep chunks/batches in memory up to a certain size? Or does it keep only the current chunk/batch in memory while training and discard the chunks/batches it has already trained on? And if it only keeps the current chunk, does that mean the model suffers from catastrophic forgetting?
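
For reference, this is the kind of training loop I have in mind (a minimal sketch with placeholder random data; in my real setting each chunk would come from a stream or from disk):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Placeholder data; stands in for chunks arriving from a stream/disk.
    rng = np.random.RandomState(0)
    X = rng.randn(1000, 20)
    y = rng.randint(0, 2, size=1000)

    clf = SGDClassifier(random_state=0)
    classes = np.unique(y)  # partial_fit needs the full label set up front

    for X_chunk, y_chunk in zip(np.array_split(X, 10),
                                np.array_split(y, 10)):
        # Each call updates the weights using only the current chunk.
        clf.partial_fit(X_chunk, y_chunk, classes=classes)

So I am asking what happens to the earlier chunks once partial_fit has consumed them.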

Thanks!

--
Regards,

Farzana Anowar