>
> Maybe there's some hybrid type possible where you can scan the index to
> find large table regions that are known /not/ to contain tuples of interest
> and seek over them in your scan. I wouldn't know, really, but it sounds like
> it'd probably be more I/O than a pure seq scan (given the readin
Vladimir Sitnikov wrote:
Suppose you want to find all the values that contain '%123%'. Currently
PostgreSQL will do a seq scan, while the better option might be (and in this
case is) to loop through all the items in the index (it will cost 30 I/O),
find records that truly contain %123% (it will find 20 of
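The full-index-scan idea being described can be sketched roughly as below. The table and column names are invented, and the trigram index in the second half is not what the poster proposed; it is how much later releases (pg_trgm GIN support for LIKE '%...%' arrived in 9.1) achieve a similar effect:

```sql
-- With a middle-of-string pattern there is no usable index condition,
-- so the planner falls back to a sequential scan of the whole heap:
EXPLAIN SELECT * FROM items WHERE val LIKE '%123%';
-- Seq Scan on items  Filter: (val ~~ '%123%'::text)

-- The proposal above amounts to scanning every entry of the (much
-- smaller) index, testing the pattern there, and visiting the heap
-- only for the handful of matches. Later releases get close to this
-- with a trigram index:
CREATE EXTENSION pg_trgm;
CREATE INDEX items_val_trgm ON items USING gin (val gin_trgm_ops);
EXPLAIN SELECT * FROM items WHERE val LIKE '%123%';
-- Bitmap Heap Scan on items
--   -> Bitmap Index Scan on items_val_trgm
```

The plans shown in comments are indicative only; the actual choice depends on table size and statistics.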
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
> loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION
> ALL SELECT * FROM loan_tasks_pending;.
You seem to have neglected to mention a join or two.
regards, tom lane
--
Sent via pgsql-performance maili
Vladimir Sitnikov wrote:
Lutischán Ferenc wrote:
Is it possible to make an index on the table, and do a sequential index scan
on these values?
My understanding is that this isn't possible in PostgreSQL, because indexes
do not contain information about tuple visibility. Data read from the index
mig
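The visibility problem can be made concrete with a small sketch (hypothetical table; the point is that only heap tuples carry the xmin/xmax visibility information, so an index scan of that era always had to visit the heap):

```sql
-- A deleted or updated row's index entry is not removed immediately;
-- it stays in the index until VACUUM cleans it up.
CREATE TABLE t (id int);
CREATE INDEX t_id_idx ON t (id);
INSERT INTO t SELECT generate_series(1, 1000);

BEGIN;
DELETE FROM t WHERE id <= 500;  -- dead versions remain in heap and index

-- A scan that read only the index would still see entries for the
-- deleted rows. Visibility (which snapshot may see which tuple) lives
-- in the heap tuple headers, so every index scan must check the heap:
EXPLAIN SELECT count(*) FROM t WHERE id > 0;
ROLLBACK;
```

(Index-only scans, which consult a visibility map to skip most heap visits, were only added in PostgreSQL 9.2, well after this thread.)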
Dear Vladimir,
Thanks for the clear description of the problem. :-)
Please report it to the bug list.
I hope it will be accepted as a "performance bug" and will be solved.
Best Regards,
Ferenc
Vladimir Sitnikov wrote:
As far as I understand, it is discouraged to implement/suggest patches
dur
I found that a simple IN query on indexed tables takes too much time.
dok and rid both have indexes on the int column dokumnr, and dokumnr is not
null.
PostgreSQL can use the index on dok or even on rid, so it could be executed
fast.
How can I make this query run fast?
Andrus.
note: list contain a lot o
"Andrus" <[EMAIL PROTECTED]> writes:
> How can I make this query run fast?
Using something newer than 8.1 would help.
regards, tom lane
Hi,
I have a table with circa 60,000 rows and
when I run a query such as:
UPDATE table SET column = 0;
after 10 minutes I had to stop the query, but it was still running :(
I have Postgres 8.1 with all default settings in postgresql.conf
Where is the problem?
Thank you for any tips.
best regards.
Marek Fiala
--
Sorry, but you have to provide much more information about the table. The
information you've provided is really not sufficient - the rows might be
large or small. I guess it's the latter, with a lot of dead rows.
Try this:
ANALYZE table;
SELECT relpages, reltuples FROM pg_class WHERE reln
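The reasoning behind that check, sketched out (the table name is a placeholder): on 8.1 every UPDATE writes a new row version, so repeated whole-table updates without vacuuming leave most of the table as dead rows, and the scan has to wade through all of them.

```sql
-- Compare the physical size against the live row count. For ~60,000
-- small rows, relpages in the hundreds of thousands would indicate
-- severe bloat:
ANALYZE mytable;
SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';

-- If the table is bloated, reclaim the space (locks the table while
-- it runs), then make sure routine VACUUM / autovacuum keeps up:
VACUUM FULL VERBOSE mytable;
```

This is a diagnostic sketch under the dead-rows assumption above, not a confirmed diagnosis of the poster's table.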
Obviously, most of the total cost (cost=327569, time=39749ms) comes from
two operations in the execution plan:
(a) sequential scan on the 'rid' table (cost=195342, time=26347ms) that
produces almost 3,200,000 rows
(b) hash join of the two subresults (cost=24, time=14000ms)
How many rows are t
Hi All,
Thank you all in advance for your answers.
My application does bulk inserts on a table (50,000 rows concurrently by 10
threads). There are 3 indexes on the table whose updates take a lot of time.
(Deleting those indexes speeds up the inserts.)
One thing I have noticed in my task manager is pa
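The drop-and-rebuild approach the poster hints at can be sketched like this (index names, columns, and the data path are all invented; whether it is a net win depends on whether the table can be offline for queries during the load):

```sql
-- Building an index once over the finished table is usually much
-- cheaper than maintaining it for every one of the bulk-inserted rows.
DROP INDEX idx_bulk_a, idx_bulk_b, idx_bulk_c;

-- Load the data (COPY is typically faster than many INSERTs):
COPY bulk_table FROM '/path/to/data.csv' WITH CSV;

-- Rebuild the indexes in one pass each:
CREATE INDEX idx_bulk_a ON bulk_table (col_a);
CREATE INDEX idx_bulk_b ON bulk_table (col_b);
CREATE INDEX idx_bulk_c ON bulk_table (col_c);
```

Raising maintenance_work_mem for the session before the CREATE INDEX statements speeds up the rebuilds further.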
Why is this view 9x slower than the base table?
[EMAIL PROTECTED] explain analyze select count(*) from
loan_tasks_committed;
QUERY PLAN
---
On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION
ALL SELECT * FROM loan_tasks_pending;.
You seem to have neglected to mention a join or two.
Yeah, though I did show them at th
Jim 'Decibel!' Nasby wrote:
> On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:
>> "Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
>>> loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION
>>> ALL SELECT * FROM loan_tasks_pending;.
>>
>> You seem to have neglected to mention a join or
Tom,
Using something newer than 8.1 would help.
Thank you.
If
CREATE TEMP TABLE ids ( id int ) ON COMMIT DROP;
is created, ids are added to this table, and
the ids table is used in an inner join instead of the IN clause, or
the IN clause is replaced with
... dokumnr IN ( SELECT id FROM ids ) ...
, w
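The rewrite being described can be sketched as follows (dok and dokumnr come from the thread; the id values and the surrounding query shape are assumed):

```sql
-- Materialize the long literal list once per transaction:
CREATE TEMP TABLE ids ( id int ) ON COMMIT DROP;
INSERT INTO ids VALUES (1), (2), (3);  -- the real id list goes here
ANALYZE ids;                           -- give the planner row estimates

-- Either join against the temp table ...
SELECT d.* FROM dok d JOIN ids ON ids.id = d.dokumnr;

-- ... or keep the IN clause but feed it from the table:
SELECT d.* FROM dok d WHERE d.dokumnr IN (SELECT id FROM ids);
```

With a table instead of a huge literal IN list, the planner can choose a hash or merge join rather than re-evaluating a long OR chain per row. (Multi-row VALUES lists in INSERT require 8.2+; on 8.1 the ids would be inserted one statement at a time or via INSERT ... SELECT.)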
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
> How on earth did the seqscan suddenly take 4x longer? And why is the
> subquery scan then doubling the amount of time again?
Maybe the disk access is less sequential because of the need to fetch
the other table too?
re
On Nov 10, 2008, at 12:21 PM, Richard Huxton wrote:
Jim 'Decibel!' Nasby wrote:
On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
loan_tasks effectively does SELECT * FROM loan_tasks_committed
UNION
ALL SELECT * FROM loan_tasks_pending;.
You se
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
> Here's the commands to generate the test case:
> create table a(a int, b text default 'test text');
> create table c(c_id serial primary key, c_text text);
> insert into c(c_text) values('a'),('b'),('c');
> create table b(a int, c_id int referen
On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
Here's the commands to generate the test case:
create table a(a int, b text default 'test text');
create table c(c_id serial primary key, c_text text);
insert into c(c_text) values('a'),('b'),('c')
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
> On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:
>> On my machine this runs about twice as fast as the original view.
> Am I missing some magic? I'm still getting the subquery scan.
Hmm, I'm getting a core dump :-( ... this seems to be busted in HE