As far as I know, to use a straight index Postgres requires either
ORDER BY pages, description -- or --
ORDER BY pages DESC, description DESC.
If you want the results by pages DESC, description ASC, then you have to
make an index on an expression or define your own operator, or something
similarly esoteric.
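For what it's worth, PostgreSQL 8.3 and later sidestep the whole problem: CREATE INDEX accepts a per-column sort order, so a mixed-direction ORDER BY can use a plain index. A sketch with hypothetical table and column names:

```sql
-- 8.3+ only: per-column ordering in the index definition matches the
-- mixed-direction ORDER BY directly (books/pages/description are
-- illustrative names, not from the original post):
CREATE INDEX books_pages_desc_idx ON books (pages DESC, description ASC);

-- This index can then satisfy, without a separate sort step:
SELECT * FROM books ORDER BY pages DESC, description ASC;
```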
Should I temporarily increase sort_mem, vacuum_mem, neither, or both
when doing a CLUSTER on a large (100 million row) table where as many as
half of the tuples are deadwood from UPDATEs or DELETEs? I have large
batch (10 million row) inserts, updates, and deletes, so I'm not sure
frequent vacuuming would keep up.
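Whichever setting turns out to matter, it can be raised for just the one session rather than globally, so the big CLUSTER gets the extra memory without affecting other backends. A sketch, with the table/index names and the value purely illustrative:

```sql
-- Session-local override: affects only this connection, and reverts
-- at RESET or disconnect. sort_mem is in kB on these older releases.
SET sort_mem = 262144;                -- 256MB, illustrative value
CLUSTER bigtable_pkey ON bigtable;    -- pre-8.0 CLUSTER syntax
RESET sort_mem;
```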
For the reasons indicated (that is, MVCC), PG cannot do a DISTINCT or
the equivalent GROUP BY from index values alone.
If this table is large, perhaps you could denormalize and maintain a
summary table with date (using truncation) and count, updated with
triggers on the original table. This table could then be queried
directly.
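The summary-table idea can be sketched like this; every name here (events, ts, event_counts, the function) is hypothetical, and the sketch ignores concurrent-insert races:

```sql
-- Per-day count maintained by a trigger on the base table, so the
-- report query reads a handful of rows instead of scanning millions.
CREATE TABLE event_counts (
    day date PRIMARY KEY,
    n   bigint NOT NULL
);

CREATE FUNCTION bump_event_count() RETURNS trigger AS '
BEGIN
    -- try the UPDATE first; fall back to INSERT for a new day
    UPDATE event_counts SET n = n + 1 WHERE day = NEW.ts::date;
    IF NOT FOUND THEN
        INSERT INTO event_counts (day, n) VALUES (NEW.ts::date, 1);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER bump_count AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE bump_event_count();
```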
>> How about this option:
>>
>> SELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING
>> (ip) RIGHT JOIN host_events USING (ip) WHERE
>> (network_events.name='blah' OR host_events.name = 'blah') AND
>> ip_info.ip IS NOT NULL;
MA> Nah, that seems to be much much worse.
Thanks.
Andrew Lazarus [EMAIL PROTECTED]
The index build had been running for hours when I killed it, and I
wasn't, in retrospect, sure an index that took something like 6GB by
itself would be helpful on a box with 2GB of RAM.
MK> I don't think that will work for the vector norm i.e:
MK> |x - y| = sqrt(sum over j ((x[j] - y[j])^2))
MK> Cheers
MK> Mark
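The norm MK quotes maps straightforwardly onto a PL/pgSQL function over float8 arrays. A sketch (the function name is illustrative, it assumes non-empty arrays of equal length, and it implies nothing about index support):

```sql
-- Euclidean distance |x - y| = sqrt(sum over j ((x[j] - y[j])^2))
CREATE FUNCTION dist(float8[], float8[]) RETURNS float8 AS '
DECLARE
    s float8 := 0;
    j integer;
BEGIN
    FOR j IN array_lower($1,1)..array_upper($1,1) LOOP
        s := s + ($1[j] - $2[j]) ^ 2;
    END LOOP;
    RETURN sqrt(s);
END;
' LANGUAGE plpgsql;
```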
Let me just thank the list, especially for the references. (I found
similar papers myself with Google, and to think I have a university
library alumni card and barely need it any more!)
I'll write again on the sorts of results I get.
Jamal Ghaffour wrote:
CREATE TABLE cookies (
    domain      varchar(50) NOT NULL,
    path        varchar(50) NOT NULL,
    name        varchar(50) NOT NULL,
    principalid varchar(50) NOT NULL,
    host        text NOT NULL,
    value       text NOT NULL,
    secure      bool NOT NULL,
    timestamp   timestamp with time zone NOT NULL DEF
Indeed there is: you can use an ARRAY constructor with SELECT. Here's
some PL/pgSQL code I have (simplified, and with the variable names
shrouded).

SELECT INTO m
    ARRAY(SELECT d FROM hp
           WHERE hp.ss = $1
           ORDER BY 1);
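For context, a minimal self-contained wrapper around that statement might look like this; the function name and the integer types are guesses, since only hp, ss, and d appear in the original:

```sql
-- ARRAY(SELECT ...) collects the ordered rows into a single array
-- value, which SELECT INTO assigns to the local variable.
CREATE FUNCTION ss_values(integer) RETURNS integer[] AS '
DECLARE
    m integer[];
BEGIN
    SELECT INTO m
        ARRAY(SELECT d FROM hp
               WHERE hp.ss = $1
               ORDER BY 1);
    RETURN m;
END;
' LANGUAGE plpgsql;
```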
sarlav kumar wrote:
Hi all,
Can someone please help me optimize this query? Is there a better way to
write this query? I am generating a report of transactions ordered by
time and with details of the sender and receiver etc.
SELECT distinct a.time::date ||''||substring(a.time::time::text,1,8)
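One small, hedged simplification for that first column: the date-plus-time concatenation with its double cast and substring can usually be replaced by a single to_char call. The format string below puts a space between date and time; drop the space to match the original's empty-string concatenation exactly. The table name is hypothetical:

```sql
-- Equivalent to a.time::date || '' || substring(a.time::time::text,1,8),
-- assuming a.time is a timestamp column:
SELECT to_char(a.time, 'YYYY-MM-DD HH24:MI:SS') AS txn_time
  FROM transactions a;
```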
I have a cascading delete trigger that is obviously using a seqscan.
(EXPLAIN ANALYZE shows that trigger taking over 1000s while all the
other triggers take <1s. The value in the test delete didn't even
appear in this child table, so an index scan would have been almost
instant.)
If I do
DELETE FROM chi
route.
What I've done is remove the FK (maybe it would be better to leave it,
albeit disabled, for documentation) and write my own AFTER DELETE
trigger that uses EXECUTE to delay planning until the actual value is
known. This appears to work correctly.
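A sketch of that EXECUTE-based replacement trigger, with all table and column names hypothetical; because EXECUTE plans the statement at run time with the actual key value, the planner can pick the child-table index instead of a generic seqscan plan:

```sql
CREATE FUNCTION delete_children() RETURNS trigger AS '
BEGIN
    -- dynamic SQL: planned per-call, with the real OLD.id value
    EXECUTE ''DELETE FROM child WHERE parent_id = ''
            || quote_literal(OLD.id);
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER child_cascade AFTER DELETE ON parent
    FOR EACH ROW EXECUTE PROCEDURE delete_children();
```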
--
Sincerely,
Andrew Lazarus
INSERT INTO bigtable SELECT * FROM remnant;
COMMIT;
ANALYSE bigtable;
works well because there is no possibility of my forgetting FKs.
--
Sincerely,
Andrew Lazarus
mailto:and...@pillette.com
All three tables have the same integer key,
and it's indexed.
I parenthesized the joins to do the two
small tables first.
I'm running an INSERT INTO ... SELECT query with this join (one record
added per record in the join); four hours in, and all I have to show
for it is 100 recycled transaction logs.
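The parenthesized-join idea only binds the planner if it is told to respect the written order. On releases that have join_collapse_limit (7.4 and up), a sketch, with all table names hypothetical:

```sql
-- With join_collapse_limit = 1 the planner keeps explicit JOINs in
-- the order written, so the two small tables really are joined first.
SET join_collapse_limit = 1;

INSERT INTO combined
SELECT *
  FROM (small_a JOIN small_b USING (id))
       JOIN big_c USING (id);
```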