On Wed, Dec 11, 2013 at 2:33 PM, Greg Stark <st...@mit.edu> wrote:

> I think we're all wet here. I don't see any bias towards larger or smaller
> rows. Larger rows will be on a larger number of pages but there will be
> fewer of them on any one page. The average effect should be the same.
> Smaller values might have a higher variance with block based sampling than
> larger values. But that actually *is* the kind of thing that Simon's
> approach of just compensating with later samples can deal with.

I think that looking at all rows in randomly-chosen blocks will not bias
avg size or histograms.  But it will bias n_distinct and MCV estimates for
some data distributions, unless we find some way to compensate for it.
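A quick simulation sketch of that bias (the table layout and numbers here are
illustrative assumptions, not from the thread): when duplicate values cluster
within blocks, a block sample of the same number of rows sees far fewer
distinct values than a uniform row sample would.

```python
import random

random.seed(42)

ROWS_PER_BLOCK = 100
N_BLOCKS = 1000

# Hypothetical table where values cluster by block: each block holds
# only 10 distinct values, repeated (e.g. data loaded in batches).
table = []
for b in range(N_BLOCKS):
    block_values = [b * 10 + i for i in range(10)]
    table.append([random.choice(block_values) for _ in range(ROWS_PER_BLOCK)])

SAMPLE_ROWS = 3000

# Row sampling: pick rows uniformly from the whole table.
flat = [v for block in table for v in block]
row_sample = random.sample(flat, SAMPLE_ROWS)

# Block sampling: pick whole blocks, take every row in them.
chosen = random.sample(range(N_BLOCKS), SAMPLE_ROWS // ROWS_PER_BLOCK)
block_sample = [v for b in chosen for v in table[b]]

print("distinct in row sample:  ", len(set(row_sample)))
print("distinct in block sample:", len(set(block_sample)))
```

With this clustered layout the block sample can see at most 10 distinct
values per sampled block, so any n_distinct extrapolation from it comes out
badly low unless we correct for the clustering.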

But even for avg size and histograms, what does block sampling get us?  We
get larger sample sizes for the same IO, but those samples are less
independent (assuming data is not randomly scattered over the table), so the
"effective sample size" is less than the true sample size.  So we can't
just sample 100 times fewer blocks because there are about 100 rows per
block--doing so would not bias our avg size or histogram boundaries, but it
would certainly make them noisier.
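That "noisier, not biased" point can be shown with another small sketch
(again with made-up numbers): if row sizes correlate within a block, both
sampling schemes estimate the true mean without bias, but the block-sampled
estimate of the same nominal row count has a much larger standard error.

```python
import random
import statistics

random.seed(1)

ROWS_PER_BLOCK = 100
N_BLOCKS = 1000

# Hypothetical table where row sizes correlate within a block:
# rows in one block scatter tightly around a block-specific mean.
table = []
for _ in range(N_BLOCKS):
    mu = random.gauss(100, 30)  # per-block mean row size
    table.append([random.gauss(mu, 5) for _ in range(ROWS_PER_BLOCK)])

flat = [v for block in table for v in block]

def row_sample_mean(n_rows):
    return statistics.fmean(random.sample(flat, n_rows))

def block_sample_mean(n_blocks):
    blocks = random.sample(range(N_BLOCKS), n_blocks)
    return statistics.fmean(v for b in blocks for v in table[b])

TRIALS = 200
SAMPLE_ROWS = 3000

row_err = statistics.pstdev(
    row_sample_mean(SAMPLE_ROWS) for _ in range(TRIALS))
block_err = statistics.pstdev(
    block_sample_mean(SAMPLE_ROWS // ROWS_PER_BLOCK) for _ in range(TRIALS))

print(f"sd of estimate, row sampling:   {row_err:.2f}")
print(f"sd of estimate, block sampling: {block_err:.2f}")
```

The 3000 block-sampled rows behave more like 30 independent observations
(one per block), which is the "effective sample size" shrinkage above.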


