Have you used sparse arrays?
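If the high-cardinality features end up one-hot encoded, the matrix is mostly zeros, and a scipy.sparse matrix keeps memory proportional to the number of non-zero entries rather than rows times columns. A minimal sketch (the row count and density below are made up; only the 3,600-column figure comes from the original question):

import numpy as np
from scipy import sparse
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
# Stand-in for a mostly-zero (e.g. one-hot encoded) feature matrix;
# CSR storage only holds the non-zero entries.
X = sparse.random(100000, 3600, density=0.01, format='csr', random_state=rng)
y = rng.randint(0, 2, size=100000)

clf = SGDClassifier(loss='log')
clf.fit(X, y)  # SGDClassifier accepts sparse input directly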
On Fri, Jun 2, 2017 at 7:39 PM, Stuart Reynolds wrote:
> Hmmm... is it possible to place your original data into a memmap?
> (perhaps will clear out 8Gb, depending on SGDClassifier internals?)
>
> https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html
Thanks for the answer. Not really. How can I do that?
On Jun 2, 2017, at 12:51 PM, Iván Vallés Pérez wrote:
> Are you monitoring your RAM consumption? I would say that it is the cause of
> the majority of kernel crashes.
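One way to check this from inside the notebook itself is psutil (an extra dependency, used here purely as an illustration):

import os
import psutil

# Resident memory of the current notebook process, in GB.
process = psutil.Process(os.getpid())
print('RSS: %.2f GB' % (process.memory_info().rss / 1e9))

Printing this right before and after the call to fit shows how much the fit itself allocates.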
Hmmm... is it possible to place your original data into a memmap?
(perhaps will clear out 8Gb, depending on SGDClassifier internals?)
https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html
https://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas
- Stuart
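For reference, a minimal numpy.memmap sketch along those lines (the file name and dtype are placeholders; the 900,000 x 3,600 shape is taken from the original question):

import numpy as np

# Create a disk-backed array once and fill it, e.g. chunk by chunk,
# from the real data source, then flush it to disk.
X = np.memmap('X.dat', dtype='float64', mode='w+', shape=(900000, 3600))
X.flush()

# Later, reopen it read-only: the OS pages data in on demand instead of
# holding the whole ~26 GB array in RAM at once.
X = np.memmap('X.dat', dtype='float64', mode='r', shape=(900000, 3600))

Whether this actually avoids the crash depends on whether SGDClassifier makes its own dense in-memory copy, as noted above.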
I also think that this is likely a memory-related issue. I just ran the
following snippet in a Jupyter notebook:
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss='log', penalty=None, alpha=0.0,
                      l1_ratio=0.0, fit_intercept=False, n_iter=1,
                      shuffle=False)  # shuffle value assumed; the original snippet is cut off here
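To put a number on it, the dense design matrix from the question is already very large on its own (assuming float64, numpy's default):

rows, cols = 900000, 3600
print(rows * cols * 8 / 1e9)  # ~25.9 GB before SGDClassifier allocates anything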
Are you monitoring your RAM consumption? I would say that it is the cause of
the majority of kernel crashes.
On Fri, Jun 2, 2017 at 12:45, Aymen J wrote:
> Hey Guys,
>
>
> So I'm trying to fit an SGD classifier on a dataset that has 900,000 rows and
> about 3,600 features (high cardinality) ...