I read a large dataset into memory and it takes about 2 GB of RAM (I have 4 GB of RAM).
sys.getsizeof(train_X) reports a size of 63963248.
And I evaluate the clustering with GridSearchCV as shown below:
def grid_search_clu(X):
    def cv_scorer(estimator, X):
        estimator.fit(X)
        # the original message is cut off here; presumably:
        cluster_labels = (estimator.labels_ if hasattr(estimator, 'labels_')
                          else estimator.predict(X))
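For reference, a complete version of this kind of clustering scorer wired into GridSearchCV might look like the sketch below. The silhouette metric, the KMeans estimator, and the parameter grid are illustrative assumptions, not taken from the message above; with only 4 GB of RAM, keeping n_jobs=1 avoids joblib worker processes that each hold their own copy of the data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.model_selection import GridSearchCV

def cv_scorer(estimator, X):
    # fit the clusterer on the fold and score the resulting partition
    estimator.fit(X)
    labels = (estimator.labels_ if hasattr(estimator, 'labels_')
              else estimator.predict(X))
    return silhouette_score(X, labels)

def grid_search_clu(X):
    # hypothetical parameter grid; n_jobs=1 keeps the search in one process
    param_grid = {'n_clusters': [2, 4, 8, 16]}
    gs = GridSearchCV(KMeans(), param_grid, scoring=cv_scorer, cv=3, n_jobs=1)
    gs.fit(X)  # no y is needed for clustering
    return gs

# small usage example on random data
X_demo = np.random.rand(1000, 10)
print(grid_search_clu(X_demo).best_params_)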
Dear,
Which dependencies are you talking about? I have installed NumPy, SciPy, and Joblib, which are the required packages, right?
I also used the paper below; it is a very nice paper about data-science software on the Raspberry Pi, but it isn't working: when I try $ pytest sklearn, the R
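As a general note (not from the original thread): sklearn.show_versions() prints the Python, OS, and dependency versions (NumPy, SciPy, joblib, etc.) that the installed scikit-learn build actually sees, which can help confirm the dependencies are picked up correctly.

import sklearn

# prints platform info plus the versions of scikit-learn's core dependencies
sklearn.show_versions()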
Hi folks,
We recently released a new version of PyCM, a library for statistical analysis of confusion matrices. I thought you might find it interesting.
http://www.pycm.ir
https://github.com/sepandhaghighi/pycm
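For anyone who hasn't tried it, basic usage follows the pattern shown in the project's README; the label vectors below are made up for illustration.

from pycm import ConfusionMatrix

# toy actual/predicted label vectors, purely illustrative
actual = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
predicted = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)
print(cm)  # prints the matrix plus overall and per-class statistics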
Changelog:
- Negative likelihood ratio interpretation (NLRI) added
-