The listing 2 example:
SELECT D_TAX, D_NEXT_O_ID
  INTO :dist_tax, :next_o_id
  FROM OLD TABLE ( UPDATE DISTRICT
                      SET D_NEXT_O_ID = D_NEXT_O_ID + 1
                    WHERE D_W_ID = :w_id
                      AND D_ID = :d_id
                 ) AS
Hi
I saw this article about DB2:
http://www-106.ibm.com/developerworks/db2/library/techarticle/dm-0411rielau/?ca=dgr-lnxw06SQL-Speed
The listing 2 example:
SELECT D_TAX, D_NEXT_O_ID
  INTO :dist_tax, :next_o_id
  FROM OLD TABLE ( UPDATE DISTRICT
                      SET D_NEXT_O_ID
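For comparison, a rough PostgreSQL counterpart of that listing, assuming a server of 8.2 or later where UPDATE supports a RETURNING clause. RETURNING reports the post-update row, while DB2's OLD TABLE exposes the pre-update one, so the old counter is recovered by subtracting 1; the host-variable notation is kept from the listing:

-- Read back the tax rate and the pre-increment order counter
-- in the same statement that increments it.
UPDATE DISTRICT
   SET D_NEXT_O_ID = D_NEXT_O_ID + 1
 WHERE D_W_ID = :w_id
   AND D_ID = :d_id
RETURNING D_TAX, D_NEXT_O_ID - 1 AS next_o_id;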
I tried to do:
CREATE RULE ndicti AS ON INSERT TO ndict
DO INSTEAD INSERT INTO 'ndict_' || (NEW.word_id & 255)
VALUES( NEW.url_id, NEW.word_id, NEW.intag);
I got an error on 'ndict_'.
I could not find the right syntax.
In fact I discovered that
SELECT * FROM / INSERT INTO table
doesn't accept f
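The target of INSERT INTO has to be a plain table name; it cannot be built from an expression, in a rule or anywhere else. A minimal sketch of the usual workaround on a modern server (EXECUTE ... USING needs 8.4 or later), assuming sub-tables ndict_000 .. ndict_255 that all have the columns (url_id, word_id, intag), is a trigger that builds the statement dynamically:

CREATE OR REPLACE FUNCTION ndict_insert() RETURNS trigger AS $$
BEGIN
    -- Route the row to the sub-table selected by the low 8 bits of word_id.
    EXECUTE 'INSERT INTO ndict_' || lpad((NEW.word_id & 255)::text, 3, '0')
            || ' VALUES ($1, $2, $3)'
        USING NEW.url_id, NEW.word_id, NEW.intag;
    RETURN NULL;  -- suppress the insert into the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ndicti
    BEFORE INSERT ON ndict
    FOR EACH ROW EXECUTE PROCEDURE ndict_insert();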
Hello,
We're having a substantial problem with our FreeBSD 5.2 database server
running PostgreSQL - it's getting a lot of traffic (roughly 3,000
queries per second), but queries are slow, and it seems to be waiting
on things other than CPU time.
Could this be a 5.2 performance issue?
In spi
Hi,
I am testing a configuration where one table is divided into 256 sub-tables,
and I use RULEs to present a single view of the data.
For INSERT I have created 256 rules like:
CREATE RULE ndicti_000 AS ON INSERT TO ndict
WHERE (NEW.word_id & 255) = 000 DO INSTEAD
INSERT INTO ndict_000 VALUES( NEW.url_id, NEW.word_id, NEW.intag);
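A sketch of how the whole set could be generated instead of being written by hand, assuming a modern server (DO blocks need 9.0 or later) and sub-tables ndict_000 .. ndict_255 that all share the columns (url_id, word_id, intag):

DO $$
DECLARE
    i int;
BEGIN
    -- One rule per value of (word_id & 255), each targeting its own sub-table.
    FOR i IN 0..255 LOOP
        EXECUTE 'CREATE RULE ndicti_' || lpad(i::text, 3, '0')
             || ' AS ON INSERT TO ndict WHERE (NEW.word_id & 255) = ' || i::text
             || ' DO INSTEAD INSERT INTO ndict_' || lpad(i::text, 3, '0')
             || ' VALUES (NEW.url_id, NEW.word_id, NEW.intag)';
    END LOOP;
END;
$$;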
The planner is guessing that scanning in rec_id order will produce a
matching row fairly quickly (sooner than selecting all the matching rows
and sorting them would do). It's wrong in this case, but I'm not sure
it could do better without very detailed cross-column statistics.
Am I right to guess
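One common workaround, sketched here against the crc32 query quoted later in the thread, is to stop the ORDER BY expression from matching the index, so the planner collects the matching rows first and sorts them instead of walking the index in rec_id order hoping for an early match:

-- "rec_id + 0" no longer matches the index on rec_id, so the planner
-- cannot choose the "scan in index order and stop at the first hit" plan.
SELECT rec_id
  FROM url
 WHERE crc32 != 0
   AND crc32 = 764518963
   AND status IN (200, 304, 206)
 ORDER BY rec_id + 0
 LIMIT 1;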
Hi,
I apologize for the mistake.
So, I dumped the database, reloaded it, then ran VACUUM ANALYZE.
For each statement I quit postgres, start it, execute one command, then quit.
On 14 Apr 04, at 14:39, Pailloncy Jean-Gérard wrote:
dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0
I ran the following command three times to rule out disk-cache effects.
[...]
dps=> explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND
crc32=764518963 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;
QUERY PLAN
--
Hm, this is odd. That says you've got 349519 live index entries in only
463 actively-used index pages, or an average of 754 per page, which
AFAICS could not fit in an 8K page. Are you using a nondefault value of
BLCKSZ? If so, what?
Sorry, I forgot to specify that I use a BLCKSZ of 32768, the same blo
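To spell out the arithmetic behind that question (the per-entry size is an estimate, not a figure from the thread): a b-tree entry for a single int4 key takes very roughly 16-20 bytes once the index tuple header, line pointer and alignment are counted, so 754 entries need on the order of 12-15 KB per page. That cannot fit in the default 8 KB page, but fits easily in a 32 KB one, which is consistent with BLCKSZ = 32768.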
Hi,
In 7.4 a VACUUM should be sufficient ... or at least, if it isn't
After VACUUM:
dps=# explain analyze select next_index_time from url order by
next_index_time desc limit 1;
QUERY PLAN
--
Hi,
How can I optimize the last query? (It is roughly 2000 times slower than
the first one.)
I suppose there is some odd distribution of data in the index?
Looks to me like a whole lot of dead rows at the left end of the index.
Have you VACUUMed this table lately?
From pg_autovacuum:
[2004-04-10 05:45:39 AM] Pe
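If dead rows at the low end of the index are indeed the cause, a maintenance sketch (the index name here is only an assumption for illustration): VACUUM removes the dead heap rows, and REINDEX rebuilds the index so the dead entries at its low end disappear as well.

-- Reclaim dead rows and refresh statistics, then rebuild the affected index.
VACUUM VERBOSE ANALYZE url;
REINDEX INDEX url_next_index_time_idx;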
Hi,
I have tested the following query many times.
dps=# explain analyze select next_index_time from url order by
next_index_time desc limit 1;
QUERY PLAN
---
I did not expect so many answers to this question.
Thanks.
I found the "ORDER BY trick" to speed up the min/max functions by myself.
Jean-Gérard Pailloncy
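For readers finding this thread later, a minimal sketch of that trick on the url table from this thread, assuming an index on next_index_time exists (from 8.1 onward the planner applies this rewrite to min()/max() by itself):

-- Aggregate form: on the versions discussed here this scans the whole table.
SELECT max(next_index_time) FROM url;

-- "ORDER BY trick": walks the index on next_index_time backwards and
-- stops at the first live row.
SELECT next_index_time FROM url ORDER BY next_index_time DESC LIMIT 1;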
I have:
psql (PostgreSQL) 7.3.2
I made a modification to 'access/index/indexam.c' where I commented out:
#ifdef NOT_USED
    if (scan->keys_are_unique && scan->got_tuple)
    {
        if (ScanDirectionIsForward(direction))
        {
            if (scan->unique_tuple_po