When I issue a fairly large DELETE query involving multiple tables
with FOREIGN KEY CASCADE on them, Postgres eats up *all* the
memory on my system and the system crashes. I figure that there are
two problems: one is PG eating up all of the memory, the other is the
system crashing and not te ...
On 10/6/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Worky Workerson" <[EMAIL PROTECTED]> writes:
> When I issue a fairly large DELETE query involving multiple tables
> with FOREIGN KEY CASCADE on them, Postgres eats up *all* the
> memory on my system and the system crashes.
Note that whether you have CASCADE or not is not the issue --- if you
are doing a delete in a foreign-key-referenced relation at all, you
are going to have a trigger event per deleted row no matter what the
details of the FK are.
So the best/fastest* way to do this would be to remove the FK
relationships ...
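A hedged sketch of that approach, assuming a hypothetical parent table
ips referenced by a child table events on column ip, plus a staging
table old_ips holding the keys to remove (all names are illustrative,
not from this thread):

-- drop the FK so the bulk DELETE fires no per-row trigger events
ALTER TABLE events DROP CONSTRAINT events_ip_fkey;

-- delete the child rows by hand, then the parent rows
DELETE FROM events WHERE ip IN (SELECT ip FROM old_ips);
DELETE FROM ips WHERE ip IN (SELECT ip FROM old_ips);

-- restore the constraint once the bulk delete is done
ALTER TABLE events ADD CONSTRAINT events_ip_fkey
    FOREIGN KEY (ip) REFERENCES ips (ip) ON DELETE CASCADE;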
I'm having an issue with a query plan that seems to be taking *far*
longer than it should. I have the following schema (note: I had to
retype everything from an isolated lab, so I'm hoping it's correct
enough to show my problem):
CREATE TABLE ip_profiles (
    ip IP4 PRIMARY KEY,
    -- more col ...
This is probably a really basic question, but how do I trap when
spi_exec_query returns an error in PL/Perl? I would expect to be able
to use the return value to determine an error condition; however, I get
the message:
ERROR: error from Perl function: relation "foo" does not exist at line 7.
w ...
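One way to trap this inside the function is a Perl eval block, since
spi_exec_query raises an error rather than returning a status code. A
minimal sketch, with a hypothetical function name and query:

CREATE OR REPLACE FUNCTION try_foo() RETURNS text AS $$
    # eval traps the error that spi_exec_query raises on failure
    my $rv = eval { spi_exec_query('SELECT * FROM foo') };
    if ($@) {
        return "query failed: $@";  # $@ holds the error text
    }
    return "rows processed: " . $rv->{processed};
$$ LANGUAGE plperl;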
I'm wondering, given what I have read about PG and IN, whether there
is a better way to execute this query. I have (essentially) two tables in
a read-only DB, ip_info and events, where many events can map to one
ip_info. I'd like to get all the ip_info.info columns where a
specific event occurred.
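For illustration, the two usual formulations of that query, assuming
hypothetical columns ip_info(ip, info) and events(ip, type); rewriting
IN as a join (or EXISTS) is the commonly suggested alternative on
older PG versions:

-- the IN form
SELECT info FROM ip_info
WHERE ip IN (SELECT ip FROM events WHERE type = 'some_event');

-- the equivalent join form
SELECT DISTINCT i.info
FROM ip_info i JOIN events e ON e.ip = i.ip
WHERE e.type = 'some_event';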
Is it possible to configure the tsearch2 parser? I'd like a very dumb
parser that splits on everything that is not [a-zA-Z0-9.]. The
default parser seems to work well on my dataset except for the '/'
character ... it doesn't split mike/john into two lexemes. Any ideas?
Thanks!
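Not an answer from the thread, but a common workaround sketch:
preprocess the text before indexing instead of reconfiguring the
parser, assuming a hypothetical table docs(body text):

-- turn '/' into whitespace so the default parser splits mike/john
SELECT to_tsvector(replace(body, '/', ' ')) FROM docs;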
---
I read recently about the efforts underway to COPY from a view;
however, I was wondering what the current best practices are for
copying out of a view and importing that data into an actual table
elsewhere. I am currently doing psql -c "SELECT ..." and then using a
bit of perl to transform ...
On 6/20/06, Aaron Koning <[EMAIL PROTECTED]> wrote:
google pg_dump
Really? What command do you use? I've tried the following:
pg_dump -t viewname dbname
and I get the view definition, whereas I would like the data. Is
there an option to pg_dump that I'm missing?
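For what it's worth, a sketch of the usual alternatives (viewname is a
placeholder): on 8.2 and later, COPY accepts an arbitrary SELECT, so
it works for views directly:

-- run via psql: psql -d dbname -c "..." > out.csv
COPY (SELECT * FROM viewname) TO STDOUT WITH CSV;

On older versions, unaligned tuples-only psql output is a rough
substitute: psql -A -t -c "SELECT * FROM viewname" dbname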
--
I'm trying to trigger a whole bunch of partitions at once (initial DB
setup) using the same plpgsql trigger. The trigger is basically the
merge trigger from the docs (i.e. UPDATE, and IF NOT FOUND, RETURN
NEW ...). I need to use the TG_RELNAME variable within the "UPDATE"
in the trigger so that I can us ...
... ould be greatly appreciated.
Thanks!
CREATE OR REPLACE FUNCTION merge_db() RETURNS TRIGGER AS $$
BEGIN
    UPDATE db SET number_seen = number_seen + NEW.number_seen
        WHERE id = NEW.id AND content = NEW.content;
    IF FOUND THEN
        -- an existing row was updated; return NULL to cancel the INSERT
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
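plpgsql does not substitute variables into identifiers, so TG_RELNAME
has to be spliced in with dynamic SQL. A hedged sketch under that
assumption, with a hypothetical function name and the same columns as
above; note that EXECUTE does not set FOUND, hence GET DIAGNOSTICS:

CREATE OR REPLACE FUNCTION merge_partition() RETURNS TRIGGER AS $$
DECLARE
    n integer;
BEGIN
    -- run the UPDATE against whichever partition fired the trigger
    EXECUTE 'UPDATE ' || quote_ident(TG_RELNAME)
         || ' SET number_seen = number_seen + ' || NEW.number_seen::text
         || ' WHERE id = ' || quote_literal(NEW.id::text)
         || ' AND content = ' || quote_literal(NEW.content);
    GET DIAGNOSTICS n = ROW_COUNT;
    IF n > 0 THEN
        RETURN NULL;  -- row existed and was updated; skip the INSERT
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;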
I don't think it's possible. However, what is possible and achieves
roughly the same effect is to query the system catalogs (or
information schema) and, via dynamic sql, create trigger
functions/procedures by looping over the results of your query (see
the sketch below). Non-dynamic sql will usually be a bit faster than
dynamic sql.
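A sketch of that idea, assuming a hypothetical db_part_% naming
convention for the partitions and the merge_partition() function from
the previous post; feed the generated statements back through psql (or
a plpgsql EXECUTE loop) to install the trigger everywhere in one pass:

-- emit one CREATE TRIGGER statement per matching partition
SELECT 'CREATE TRIGGER merge_trg BEFORE INSERT ON '
       || quote_ident(schemaname) || '.' || quote_ident(tablename)
       || ' FOR EACH ROW EXECUTE PROCEDURE merge_partition();'
FROM pg_tables
WHERE tablename LIKE 'db_part_%';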