Greetings,

I train MLPRegressors on small datasets, usually with 10-50
observations. The default batch_size for the adam optimizer is
min(200, n_samples), and because my n_samples is always < 200, it
effectively becomes batch_size=n_samples. According to theory,
stochastic gradient-based optimizers like adam perform better in the
small-batch regime. Given the above, what would be a good batch_size
value in my case (e.g. 4)? Is there any rule of thumb for selecting
batch_size when n_samples is small, or must the choice be based on
trial and error?
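
For concreteness, here is the kind of trial-and-error search I have in
mind (a minimal sketch on toy data; the candidate batch sizes, CV
settings, and max_iter are just placeholders, not values I have
validated):

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Toy data standing in for my real ~10-50 observations.
X, y = make_regression(n_samples=40, n_features=8, noise=0.1, random_state=0)

# Candidate mini-batch sizes; 32 equals the full training fold with cv=5.
param_grid = {"batch_size": [2, 4, 8, 16, 32]}

search = GridSearchCV(
    MLPRegressor(solver="adam", max_iter=5000, random_state=0),
    param_grid,
    cv=5,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)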


-- 

======================================================================

Dr Thomas Evangelidis

Post-doctoral Researcher
CEITEC - Central European Institute of Technology
Masaryk University
Kamenice 5/A35/2S049,
62500 Brno, Czech Republic

email: tev...@pharm.uoa.gr

          teva...@gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/