Hi Nadim,
You may also want to take a look at *skope-rules* (
https://github.com/scikit-learn-contrib/skope-rules), which has recently been
added to scikit-learn-contrib.
The main goal of this package is to provide logical rules that satisfy
precision and recall constraints, by extracting them from a
Hi.
Unfortunately we don't have an implementation of a cost matrix in
sklearn directly, but you can change the decision threshold of the model's
predictions, using something like y_pred = tree.predict_proba(X_test)[:, 1] > 0.6
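To make the suggestion above concrete, here is a minimal sketch of threshold tuning with scikit-learn. The dataset, classifier settings, and the 0.4 threshold are illustrative assumptions, not from the original message; since the goal in this thread is fewer false negatives, the sketch *lowers* the threshold below the default 0.5 to trade precision for recall.

```python
# Sketch: trading precision for recall by moving the decision threshold.
# All data and parameter values here are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Imbalanced toy data standing in for the screening-test setting.
X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# tree.predict() is equivalent to thresholding the positive-class
# probability at 0.5.
y_default = tree.predict(X_test)

# Lowering the threshold flags more samples as positive, which can only
# reduce (never increase) the number of false negatives.
y_lowered = tree.predict_proba(X_test)[:, 1] > 0.4

print("recall @0.5:", recall_score(y_test, y_default))
print("recall @0.4:", recall_score(y_test, y_lowered))
```

Any sample predicted positive at the 0.5 threshold is also positive at 0.4, so recall at the lower threshold is always greater than or equal to the default; the cost is a possible drop in precision.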
What trade-off between precision and recall do you want? Have you looked at
the
Dear All,
I have a *screening* lab test and I am trying to minimize the false
negatives in the recall (TP/(TP+FN)); therefore I want to increase the
cost whenever an FN occurs during training. I understand that in R they
have some kind of loss matrix that penalizes FNs during fitting. My