Did you restart Postgres and drop the file system caches between runs?
My suspicion is that some sort of prefetching is happening. I know that
Postgres itself does not do prefetching, but I understand very little
about OS/filesystem-level prefetching.
----- Original Message -----
From: Tom Lane <[EMAIL PROTECTED]>
To: Jeff Davis <[EMAIL PROTECTED]>
Cc: jaba the mobzy <[EMAIL PROTECTED]>; email@example.com
Sent: Thursday, May 3, 2007 11:42:32 PM
Subject: Re: [HACKERS] Bitmap Heap Scan anomaly
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Thu, 2007-05-03 at 14:33 -0700, jaba the mobzy wrote:
>> mycorr_100 took 11.4 s to run although it had to fetch 100000 rows from
>> the base table.
>> mycorr_10 took 24.4 s to run although it had to fetch 10563 rows from
>> the base table.
> This is because the physical distribution of the data is different. The
> mycorr_10 table has the tuples where a and b are > 15.9M spread
> throughout the physical file, while mycorr_100 has them all collected
> together at the end of the file. Less disk seeking.
If the OP had generated the data randomly, as claimed, the rows
shouldn't be particularly more clumped in one table than the other.
But I sure agree that it sounds like a nonrandom distribution in the
mycorr_100 table. FWIW I tried to duplicate the behavior, and could
not, using tables made up like this:
create table src as
  select int4(16*1024*1024*random()) as key,
         int4(16*1024*1024*random()) as a,
         int4(16*1024*1024*random()) as b
  from generate_series(1, 1000000) g;  -- the row source was lost in transit; the count here is a guess
create table mycorr_10 as
  select key, a,
         case when random() < 0.1 then a else b end as b
  from src;
create table mycorr_100 as
  select key, a, a as b
  from src;
create index mycorr_10i on mycorr_10(a,b);
create index mycorr_100i on mycorr_100(a,b);
vacuum analyze mycorr_10;
vacuum analyze mycorr_100;
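One way to check whether the rows really are physically clumped, rather than guessing from runtimes, is to look at the planner's correlation statistic after the vacuum analyze above. This is just a sketch; it assumes the table and column names from the example:

```sql
-- correlation near 0 means the indexed values are scattered across the
-- file; near +/-1 means they are stored in physical order (clumped),
-- which is what makes the bitmap heap scan fetch mostly sequential pages
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('mycorr_10', 'mycorr_100')
  AND attname IN ('a', 'b');
```

If mycorr_100's b column shows a correlation far from zero while mycorr_10's is near zero, that would confirm the nonrandom-distribution theory.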
regards, tom lane