Quoting Andrew Dunstan <[EMAIL PROTECTED]>: 
 
> After some more experimentation, I'm wondering about some sort of
> adaptive algorithm, a bit along the lines suggested by Marko Ristola,
> but limited to 2 rounds.
>
> The idea would be that we take a sample (either of fixed size, or some
> small proportion of the table), see how well it fits a larger sample
> (say a few times the size of the first sample), and then adjust the
> formula accordingly to project from the larger sample the estimate for
> the full population. Math not worked out yet - I think we want to
> ensure that the result remains bounded by [d,N].
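
Just so we're critiquing the same thing, here is the proposal as I read
it, as a rough Python sketch. The sample sizes, the linear growth
extrapolation, and the clamping are my own guesses, since the math isn't
worked out yet:

import random

def two_round_estimate(table, n_small=100, factor=4):
    # Round 1: a tiny sample; round 2: a sample `factor` times larger.
    N = len(table)
    small = random.sample(table, min(n_small, N))
    large = random.sample(table, min(factor * n_small, N))
    d_small = len(set(small))
    d_large = len(set(large))

    # How fast did the distinct count grow between the rounds?  1.0 means
    # every extra row was a new value (suggests high cardinality); near 0
    # means the tiny sample had already seen everything (low cardinality).
    growth = (d_large - d_small) / max(1, len(large) - len(small))

    # Placeholder projection: extrapolate that growth rate linearly over
    # the unsampled rows ("math not worked out yet").
    estimate = d_large + growth * (N - len(large))

    # Keep the result bounded by [d, N].
    return max(d_large, min(round(estimate), N))

# A 100,000-row "table" with 500 distinct values:
random.seed(1)
table = [random.randrange(500) for _ in range(100_000)]
print(two_round_estimate(table))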
 
Perhaps I can save you some time (yes, I have a degree in Math). If I
understand correctly, you're trying to extrapolate from the correlation
between a tiny sample and a larger sample. Introducing the tiny sample
into any decision can only produce a less accurate result than just
taking the larger sample on its own: GIGO. Whether the two samples are
consistent with one another has no bearing on whether the larger sample
is representative of the whole population. You can think of the tiny
sample as "anecdotal" evidence for wonder drugs.
--  
"Dreams come true, not free." -- S.Sondheim, ITW  

