The best book is still the CART book by Breiman, Friedman, Olshen, and Stone.
Unfortunately, it was out of print the last time I checked at BarnesandNoble.com.
If you're a beginner, you may find the book by Zhang and Singer useful,
although I personally dislike it. Go to
Surely for a given dataset there is an optimal transformation of the form
f(x) = (x + a)^b for reducing heterogeneity of variance (or skew or both),
where a is an offset equal to the minimum score. Does anyone know how the
optimal value of b can be found? This transformation would encompass the
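One standard answer is the Box-Cox procedure, which chooses the exponent by maximum likelihood. A minimal sketch, assuming hypothetical sample data (scipy's `boxcox` uses the form (x^b - 1)/b, a monotone rescaling of (x + a)^b):

```python
import numpy as np
from scipy import stats

# Hypothetical right-skewed sample standing in for the poster's scores
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=500)

# The offset 'a' from the post: shift so every value is strictly positive
a = 1e-6 - x.min() if x.min() <= 0 else 0.0
shifted = x + a

# stats.boxcox picks the exponent b (lambda) by maximum likelihood,
# i.e. the value that makes the transformed data closest to normal
transformed, b_opt = stats.boxcox(shifted)
print(b_opt)
```

The transformed data should show noticeably less skew than the original sample.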
Hi, a quick question:
I am reading Silverman (Density Estimation, 1987), and hoping to apply
it to some work I am doing.
Let's say that I have a series of data, recurrent over several years.
For each year, I estimate a kernel density function, and plot the
results.
Given the density functions
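The per-year density estimates described above might be sketched as follows, with hypothetical data and scipy's Gaussian kernel estimator (not necessarily the kernel or bandwidth rule Silverman recommends):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical recurrent data: one batch of observations per year
rng = np.random.default_rng(1)
data_by_year = {year: rng.normal(loc=year - 1995.0, scale=1.0, size=200)
                for year in (1996, 1997, 1998)}

# Evaluate each year's kernel density estimate on a common grid
grid = np.linspace(-5.0, 10.0, 512)
densities = {year: gaussian_kde(sample)(grid)   # Scott's-rule bandwidth
             for year, sample in data_by_year.items()}
```

Evaluating every year's estimate on the same grid makes the resulting curves directly comparable.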
In article [EMAIL PROTECTED],
Kenmlin [EMAIL PROTECTED] wrote:
Whoever told you how to do this is completely wrong. For multiple regression,
you must find all parameters simultaneously. This is because X1, X2, and X3
are NOT independent.
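Joint estimation of all coefficients, as described, can be illustrated with ordinary least squares on hypothetical correlated predictors (a single solve produces the intercept and every slope at once):

```python
import numpy as np

# Hypothetical correlated predictors and a linear response
rng = np.random.default_rng(2)
n = 200
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
X3 = 0.7 * X1 + 0.3 * rng.normal(size=n)          # correlated with X1
y = 1.0 + 2.0 * X1 - 1.0 * X2 + 0.5 * X3 + 0.1 * rng.normal(size=n)

# One design matrix, one least-squares solve: all parameters estimated jointly
A = np.column_stack([np.ones(n), X1, X2, X3])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)
```

Regressing on each predictor separately would give different (biased) slope estimates here precisely because X1 and X3 are correlated.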
This is incorrect. In fact, the standard Gaussian
On Fri, 11 Feb 2000 10:02:54 GMT, [EMAIL PROTECTED] wrote:
...
Suppose I want to explain the number of deaths over time from a fixed pool of
people (got your attention? :)
snip, detail
I guess it is a panel data analysis problem with a binomial dependent
variable.
snip, the rest
It sounds to
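A binomial model of deaths over time, of the kind the question describes, might be sketched by maximum likelihood with a logit link. Everything here (pool size, time trend, starting values) is a hypothetical assumption, not the poster's actual data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical panel: a fixed pool at risk each period, deaths counted per period
rng = np.random.default_rng(4)
n_at_risk = np.full(20, 1000)
t = np.arange(20, dtype=float)
deaths = rng.binomial(n_at_risk, expit(-5.0 + 0.1 * t))

def neg_loglik(beta):
    # Binomial log-likelihood with a logit link, dropping the constant term
    p = expit(beta[0] + beta[1] * t)
    return -np.sum(deaths * np.log(p) + (n_at_risk - deaths) * np.log1p(-p))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)   # estimated intercept and time slope on the logit scale
```

A real analysis would also have to handle the panel structure (repeated observations on the same pool), which this two-parameter sketch ignores.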
On 13 Feb 2000 05:37:36 -0800, [EMAIL PROTECTED]
(Graham D Smith) wrote:
Surely for a given dataset there is an optimal transformation of the form
f(x) = (x + a)^b for reducing heterogeneity of variance (or skew or both),
where a is an offset equal to the minimum score. Does anyone know how
Why don't you use something like the Kolmogorov-Smirnov statistic
directly on the data? It seems that doing the density estimation first
may just complicate the testing.
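The suggestion above can be sketched with scipy's two-sample KS test, applied to two years' raw samples (hypothetical data, one distribution shifted):

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical raw samples for two years, the second shifted by half an SD
rng = np.random.default_rng(3)
year_a = rng.normal(0.0, 1.0, size=300)
year_b = rng.normal(0.5, 1.0, size=300)

# Two-sample KS test on the data itself -- no density estimation needed
stat, pvalue = ks_2samp(year_a, year_b)
print(stat, pvalue)
```

The test statistic is the maximum gap between the two empirical CDFs, so it sidesteps the kernel and bandwidth choices entirely.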
In article 886ii3$ser$[EMAIL PROTECTED],
[EMAIL PROTECTED] wrote:
Hi, a quick question:
I am reading Silverman (Density