Hello, when can we expect a PDF version of the User’s Guide for 0.16.2?
https://sourceforge.net/projects/scikit-learn/files/documentation/
Thanks very much.
Dale Smith, Ph.D.
Data Scientist
It is using the variance reduction algorithm to make the splits while the
tree is being built. The final tree can be evaluated using the Mean Squared
Error.
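A minimal sketch of what that looks like in practice: fit a DecisionTreeRegressor (its default criterion is the squared error, the "mse" discussed here, which drives the splits) and then evaluate the fitted tree with mean_squared_error. The toy sine data is only illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = np.sin(2 * np.pi * X).ravel()

# The default criterion is the squared error ("mse" in this release);
# it is minimized greedily at each split while the tree is grown.
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Evaluating the final tree with the same metric:
mse = mean_squared_error(y, tree.predict(X))
```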
On Thu, Jul 9, 2015 at 8:56 AM, Sebastian Raschka se.rasc...@gmail.com
wrote:
Hi, all,
sorry, but I have another question regarding the terminology in the
documentation.
In the DecisionTreeRegressor's documentation at
http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html#sklearn.tree.DecisionTreeRegressor
it says
criterion : string,
Thanks, so if I understand correctly, the criterion argument described as
The function to measure the quality of a split. The only supported criterion
is “mse” for the mean squared error.
only reports the quality but does not influence the split ...
Maybe the use of variance reduction, or
Hi Sebastian,
Indeed, N samples are drawn with replacement, where N=len(original
training set). I guess we could add an extra max_samples parameter,
just like we have for the Bagging estimators.
Gilles
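A hedged sketch of the draw described above: each tree sees N samples drawn with replacement from the N original ones. The `max_samples` parameter here is purely illustrative (mirroring the Bagging estimators), not something the forest estimators expose in this discussion.

```python
import numpy as np

def bootstrap_indices(n_samples, max_samples=None, random_state=None):
    """Draw indices with replacement; by default as many as there are samples."""
    rng = np.random.RandomState(random_state)
    size = n_samples if max_samples is None else max_samples
    return rng.randint(0, n_samples, size)

idx = bootstrap_indices(100, random_state=0)
# roughly 63% of the original samples appear at least once in a bootstrap draw
unique_frac = len(np.unique(idx)) / 100.0

# a hypothetical max_samples, as in the Bagging estimators:
idx_small = bootstrap_indices(100, max_samples=50, random_state=0)
```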
On 6 July 2015 at 23:00, Sebastian Raschka se.rasc...@gmail.com wrote:
Thanks, Gilles,
I think that's a good idea! It would make the implementation more flexible
and would add clarity as well!
Best,
Sebastian
On Jul 9, 2015, at 2:35 PM, Gilles Louppe g.lou...@gmail.com wrote:
Hi Sebastian,
Indeed, N samples are drawn with replacement, where N=len(original
Hi Sebastian,
Both terms are in fact strictly equivalent for regression. See
e.g. page 46 of http://arxiv.org/abs/1407.7502
Best,
Gilles
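A quick numerical check of that equivalence: within a node, the squared-error impurity around the node mean is exactly the variance of the targets, so minimizing the weighted MSE over a split is the same as maximizing the variance reduction.

```python
import numpy as np

# targets falling in one node (illustrative values)
y = np.array([1.0, 3.0, 2.0, 5.0])

# squared-error impurity around the node mean ...
mse = np.mean((y - y.mean()) ** 2)

# ... equals the variance of the targets in that node
assert np.isclose(mse, np.var(y))
```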
On 9 July 2015 at 18:56, Sebastian Raschka se.rasc...@gmail.com wrote:
Hi, all,
sorry, but I have another question regarding the terminology in the
Hi Herb,
Many classifiers have a `predict_proba` or a `predict_log_proba` method
which returns the probability for each class for each point.
Jacob
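A minimal sketch of `predict_proba`: it returns one row of class probabilities per input point, with columns ordered as in `clf.classes_`. LogisticRegression and the tiny dataset here are just illustrative; any classifier exposing `predict_proba` works the same way.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)  # shape (4, 2); each row sums to 1
pred = clf.predict(X)

# probability of the predicted class for each point
# (valid here because the labels 0/1 coincide with the column indices)
p_pred = proba[np.arange(len(X)), pred]
```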
On Thu, Jul 9, 2015 at 11:39 AM, Herbert Schulz hrbrt@gmail.com wrote:
Hey,
is there a way to get the probability for a predicted class?
Thank you very much!
Hey,
is there a way to get the probability for a predicted class?
For example:
my model predicts input1 as Class 3; with what probability is it class 3?
It predicts input2 as Class 1; with what probability is it class 1? (e.g., with 55% it is class 1.)
Thanks, I think my confusion came from the fact that they use x_i as target
variable, and I was thinking of feature/attribute when I saw the equation;
makes sense now!
Btw., that's a beautiful thesis and a useful reference, too!
Best,
Sebastian
On Jul 9, 2015, at 2:23 PM, Gilles Louppe