On Fri, 17 Dec 2004 23:37:37 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Frank Wiles <[EMAIL PROTECTED]> writes:
> > I've also seen a huge difference between select count(*) and
> > select count(1) in older versions,
>
> That must have been before my time, i.e., pre-6.4 or so. There is
> certainly zero difference now.
On Fri, Dec 17, 2004 at 10:39:18PM -0600, Bruno Wolff III wrote:
> It doesn't seem totally out of whack. You will be limited by the memory
> bandwidth and it looks like you get something on the order of a few
> hundred references to memory per row. That may be a little high, but
> it doesn't seem ri
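Rough arithmetic behind that estimate (an Athlon 64 2800+ is nominally 1.8 GHz;
the per-reference cycle counts below are only ballpark assumptions):

    3,000,000 rows / 3.5 s    ~  860,000 rows/s  ~  1.2 us per row
    1.2 us at ~1.8 GHz        ~  2,000 CPU cycles per row
    at a few cycles per cache hit, that is on the order of a few hundred
    memory references per row, consistent with the estimate quoted above.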
Frank Wiles <[EMAIL PROTECTED]> writes:
> I've also seen a huge difference between select count(*) and
> select count(1) in older versions,
That must have been before my time, i.e., pre-6.4 or so. There is
certainly zero difference now.
regards, tom lane
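A quick way to verify this on a given installation (the table name here is just
the sample table that appears later in this thread):

    EXPLAIN ANALYZE SELECT count(*) FROM alias;
    EXPLAIN ANALYZE SELECT count(1) FROM alias;
    -- On 7.4 and later, both forms should produce the same Aggregate over a
    -- Seq Scan plan with essentially identical runtimes.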
On Fri, Dec 17, 2004 at 22:56:27 +0100,
"Steinar H. Gunderson" <[EMAIL PROTECTED]> wrote:
>
> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,
> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
It doesn't seem totally out of whack. You will be limited b
On Fri, Dec 17, 2004 at 05:02:29PM -0600, Frank Wiles wrote:
> It depends more on your disk IO than the processor. Counting isn't
> processor intensive, but reading through the entire table on disk
> is. I've also seen a huge difference between select count(*) and
> select count(1) in older versions,
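One way to see how much data a count(*) actually has to read is to check the
table's size in pg_class (the figures are only as fresh as the last
VACUUM/ANALYZE; the table name is just an example and an 8 kB block size is
assumed):

    SELECT relpages, reltuples,
           relpages * 8 / 1024 AS approx_size_mb   -- 8 kB blocks -> MB
    FROM pg_class
    WHERE relname = 'alias';                       -- example table name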
On Fri, 17 Dec 2004 23:09:07 +0100
"Steinar H. Gunderson" <[EMAIL PROTECTED]> wrote:
> On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:
> > I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG
> > 7.4, everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
On Fri, Dec 17, 2004 at 10:56:27PM +0100, Steinar H. Gunderson wrote:
> I'm a bit unsure -- should counting ~3 million rows (no OIDs, PG 7.4,
> everything in cache, 32-byte rows) take ~3500ms on an Athlon 64 2800+?
(I realize I was a bit unclear here. This is a completely separate case, not
relate
On Fri, Dec 17, 2004 at 10:47:57AM -0500, Greg Stark wrote:
>> Must admit this puzzles me. Are you saying you can't saturate your disk I/O?
>> Or are you saying other DBMS store records in 0.5 to 0.2 times less space
>> than PG?
> I don't know what he's talking about either. Perhaps he's think
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> I think the one effect that's not being modeled is amortization of index
>> fetches across successive queries.
> And across multiple fetches in a single query, such as with a nested loop.
Right, that's effectively
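A sketch of the effect being discussed, with made-up table and column names: in
a nested loop the inner index is probed once per outer row, so its upper-level
pages stay cached and repeated probes cost far less than a cold index fetch.

    EXPLAIN ANALYZE
    SELECT *
    FROM small_outer o                        -- hypothetical driving table
    JOIN big_indexed b ON b.outer_id = o.id;  -- assumes an index on big_indexed(outer_id)
    -- If the planner picks a nested loop, the inner Index Scan node reports
    -- loops=N in the ANALYZE output: it runs once per outer row, mostly
    -- hitting index pages that are already cached.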
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Postgres is also more pessimistic about the efficiency of index scans. It's
> > willing to use a sequential scan down to well below 5% selectivity when
> > other databases use the more traditional rule of thumb of 10%.
Greg Stark <[EMAIL PROTECTED]> writes:
> Postgres is also more pessimistic about the efficiency of index scans. It's
> willing to use a sequential scan down to well below 5% selectivity when other
> databases use the more traditional rule of thumb of 10%.
However, other databases are probably basi
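One way to see where that crossover falls on a particular table is to compare
the planner's cost estimates with and without sequential scans allowed (the
predicate below is purely illustrative; pick one that matches roughly 5-10% of
the rows):

    EXPLAIN SELECT * FROM alias WHERE person_id < 50000;  -- illustrative predicate
    SET enable_seqscan = off;                             -- force the index path
    EXPLAIN SELECT * FROM alias WHERE person_id < 50000;
    SET enable_seqscan = on;
    -- Comparing the two estimated costs shows how selective a predicate must
    -- be before the planner picks the index on its own.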
Richard Huxton <[EMAIL PROTECTED]> writes:
> Not going to do anything in this case. The planner is roughly right about how
> many rows will be returned, it's just not expecting everything to be in RAM.
That doesn't make sense or else it would switch to the index at
random_page_cost = 1.0. If it w
David Brown wrote:
>> You might want to reduce random_page_cost a little.
>> Keep in mind that your test case is small enough to fit in RAM and
>> is probably not reflective of what will happen with larger tables.
> I am also running 8.0 rc1 for Windows. Despite many hours spent
> tweaking various planner cost constants,
> You might want to reduce random_page_cost a little.
> Keep in mind that your test case is small enough to fit in RAM and is
> probably not reflective of what will happen with larger tables.
I am also running 8.0 rc1 for Windows. Despite many hours spent tweaking
various planner cost constants,
Jon Anderson <[EMAIL PROTECTED]> writes:
> Any hints on what to do to make PostgreSQL use the index?
You might want to reduce random_page_cost a little.
Keep in mind that your test case is small enough to fit in RAM and is
probably not reflective of what will happen with larger tables.
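A sketch of that experiment (4 was the default value at the time, 2 is only an
example, and the query and index below are hypothetical):

    SHOW random_page_cost;                            -- 4 by default
    SET random_page_cost = 2;                         -- affects this session only
    EXPLAIN ANALYZE
    SELECT * FROM alias WHERE last_name = 'SMITH';    -- assumes an index on last_name
    -- If the plan flips from Seq Scan to Index Scan and gets faster, the lower
    -- setting better matches a mostly-cached table.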
I have a table 'Alias' with 541162 rows. It's created as follows:
CREATE TABLE alias
(
    id                        int4 NOT NULL,
    person_id                 int4 NOT NULL,
    last_name                 varchar(30),
    first_name                varchar(30),
    middle_name               varchar(30),
    questioned_identity_flag  varchar,
    CONSTRAINT alias_pkey PRIMARY KEY (id)
);
Afte
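Since the index definition itself isn't quoted above, the index and predicate
below are hypothetical, but the usual checklist for getting an index used looks
roughly like this:

    CREATE INDEX alias_last_name_idx ON alias (last_name);  -- hypothetical index
    ANALYZE alias;                                          -- refresh planner statistics
    EXPLAIN SELECT * FROM alias WHERE last_name = 'SMITH';  -- check the chosen plan
    -- If a Seq Scan is still chosen, compare with enable_seqscan = off and
    -- consider lowering random_page_cost as suggested earlier in the thread.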