On 11 December 2013 01:27, Sergey E. Koposov <m...@sai.msu.ru> wrote:
> For what it's worth.
>
> I'll quote Chaudhuri et al. first line from the abstract about the block
> sampling.
> "Block-level sampling is far more efficient than true uniform-random
> sampling over a large database, but prone to  significant errors if used to
> create database statistics."

This glosses over the point that both SQLServer and Oracle use this technique.

> And after briefly glancing through the paper, my opinion is that it works
> because after building one version of the statistics they cross-validate,
> see how well it does, and then collect more if the cross-validation error
> is large (for example because the data is clustered). Without this bit,
> as far as I can tell, a simple block-based sampler will be bound to make
> catastrophic mistakes depending on the distribution.

I don't think it's true that a block-based sampler is *bound* to make
"catastrophic mistakes". They can clearly happen, just as they can with
random samples, hence the need for a parameter to control the sample size.
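
To be concrete about what's being discussed, here is roughly the loop I
read the quoted summary as describing: build the statistics from a
block-level sample, cross-validate against a small independent row-level
sample, and read more blocks while the error stays large. This is only a
sketch, not the paper's actual algorithm; the estimator, the thresholds
and all the names are invented.

import random
from collections import Counter

def mcv_frequency(rows):
    # Relative frequency of the most common value in a sample.
    if not rows:
        return 0.0
    _, count = Counter(rows).most_common(1)[0]
    return count / len(rows)

def cross_validated_block_sample(blocks, initial_blocks=30,
                                 max_blocks=3000, max_error=0.05):
    # blocks: one list of column values per disk block (all non-empty).
    # Returns (estimated MCV frequency, number of blocks actually read).
    n = initial_blocks
    while True:
        chosen = random.sample(blocks, min(n, len(blocks)))
        estimate = mcv_frequency([row for blk in chosen for row in blk])

        # Cross-validation step: a small row-level sample (one row from
        # a random block each time), drawn independently of the block sample.
        holdout = [random.choice(random.choice(blocks)) for _ in range(500)]
        check = mcv_frequency(holdout)

        if abs(estimate - check) <= max_error or n >= min(len(blocks), max_blocks):
            return estimate, n
        n *= 2   # clustered data => large error => read more blocks

The sample-size parameter sits exactly where initial_blocks/max_blocks do
above, which is the knob being argued about.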

Realistically, I've never heard of an Oracle DBA doing advanced
statistical mathematics before setting the sample size for ANALYZE. You
use the default and bump it up if the sample turns out to be insufficient
for the data.
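
In PostgreSQL terms that workflow is nothing more than raising the
per-column statistics target and re-running ANALYZE. A minimal
illustration (table, column and connection details are made up; it is
driven from Python/psycopg2 purely to keep the example runnable):

import psycopg2   # assumed available

conn = psycopg2.connect(dbname="mydb")
conn.autocommit = True
cur = conn.cursor()

# Start with the default sample size (default_statistics_target = 100).
cur.execute("ANALYZE orders;")

# If estimates for a skewed column look bad, bump just that column's
# target and re-analyze; no statistical mathematics required.
cur.execute("ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;")
cur.execute("ANALYZE orders;")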

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

