-- Original message --
From: Ken Allen [EMAIL PROTECTED]
I have a linked server on SQL server 2005. I can update or write to a
table in Postgres that does not have an OID. But the table I have has an
OID and I cannot write or update to that table. Anyone have
On 04/04/2008 05:43 PM, mark wrote:
stats with new settings are below..
These stats look good to me.
but even with this, update queries sometimes take more than a couple of
seconds...
checkpoint_completion_target = 0.8
It looks like this is set too high for your workload. If
On Thu, Apr 3, 2008 at 10:02 PM, Greg Smith [EMAIL PROTECTED] wrote:
On Wed, 2 Apr 2008, mark wrote:
with no clients connected to the database, when I try to shut down the
database [to apply new settings], it says the database can't be shut down...
for a long time, in both smart and normal mode...
On Wednesday 2. April 2008, Tomasz Ostrowski wrote:
Go for it. Even 64 (I like round numbers) would not be too much.
Geek test: Do you find the above statement odd?
Yes: 0, No: +10.
(Sorry for being massively off-topic :-))
--
Leif Biberg Kristensen | Registered Linux User #338009
On Thu, Apr 3, 2008 at 9:18 AM, Leif B. Kristensen [EMAIL PROTECTED] wrote:
On Wednesday 2. April 2008, Tomasz Ostrowski wrote:
Go for it. Even 64 (I like round numbers) would not be too much.
Geek test: Do you find the above statement odd?
Sadly, no.
--
Dave Page
EnterpriseDB UK Ltd:
Leif B. Kristensen wrote:
On Wednesday 2. April 2008, Tomasz Ostrowski wrote:
Go for it. Even 64 (I like round numbers) would not be too much.
Geek test: Do you find the above statement odd?
Yes: 0, No: +10.
(Sorry for being massively off-topic :-))
I had the same thought. ;)
Colin
--
On Wed, 2 Apr 2008, mark wrote:
with no clients connected to the database, when I try to shut down the
database [to apply new settings], it says the database can't be shut down...
for a long time, in both smart and normal mode... then I had to go to
immediate mode to shut down... but then when I start
On Tue, Apr 1, 2008 at 5:31 PM, Greg Smith [EMAIL PROTECTED] wrote:
On Tue, 1 Apr 2008, mark wrote:
current settings all default
#checkpoint_segments = 3
#checkpoint_timeout = 5min
#checkpoint_completion_target = 0.5
#checkpoint_warning = 30s
this is what I have on
On 2008-04-02 09:30, mark wrote:
Based on what Tomasz suggested a day ago, I had changed settings to
checkpoint_segments = 16
checkpoint_timeout = 20min
checkpoint_completion_target = 0.8
but I still get statements that take over 2 or 3 seconds to execute
On Wed, 2 Apr 2008, mark wrote:
this is really clear! Thanks!!
This is the first time someone new to this has ever said that about
checkpoint tuning, which is quite the victory for all of us who worked
toward the 8.3 functional and monitoring improvements in this area.
Please keep posting
On Wed, 2 Apr 2008, Tomasz Ostrowski wrote:
Even 64 (I like round numbers) would not be too much. This
would make 1GB of data in WALs.
It's much worse than that. Assume the system starts a checkpoint after
checkpoint_segments worth of WAL files have been used. It may take that
long before
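Greg's point can be made concrete with the 8.3 documentation's rule of thumb (quoted from memory, so treat the exact coefficients as approximate): segments keep accumulating while the spread-out checkpoint is still running, so the steady-state WAL footprint is roughly

```latex
\text{WAL on disk} \approx
  \left(2 + \text{checkpoint\_completion\_target}\right)
  \times \text{checkpoint\_segments} \times 16\,\text{MB}
```

With checkpoint_segments = 64 and checkpoint_completion_target = 0.8 that is about 2.8 GB, not the 1 GB a naive 64 x 16 MB estimate suggests.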
On Wed, Apr 2, 2008 at 1:19 AM, Greg Smith [EMAIL PROTECTED] wrote:
On Wed, 2 Apr 2008, mark wrote:
this is really clear! Thanks!!
This is the first time someone new to this has ever said that about
checkpoint tuning, which is quite the victory for all of us who worked
toward the 8.3
On 2008-03-31 21:16, mark wrote:
is the query I am running, and it takes over 10 seconds to complete
this query...
update users set number_recieved=number_recieved+1 where
uid=738889333;
Every time or only sometimes?
If it is sometimes then I think this query is waiting for a checkpoint.
On Mon, Mar 31, 2008 at 11:18 PM, Tomasz Ostrowski [EMAIL PROTECTED]
wrote:
On 2008-03-31 21:16, mark wrote:
is the query I am running, and it takes over 10 seconds to complete
this query...
update users set number_recieved=number_recieved+1 where
uid=738889333;
Every time or only
On Tue, Apr 1, 2008 at 12:44 AM, mark [EMAIL PROTECTED] wrote:
On Mon, Mar 31, 2008 at 11:18 PM, Tomasz Ostrowski [EMAIL PROTECTED]
wrote:
On 2008-03-31 21:16, mark wrote:
is the query I am running, and it takes over 10 seconds to complete
this query...
update users set
On 2008-04-01 09:44, mark wrote:
I am already running 8.3.1 [I mentioned it in the subject].
Missed that, sorry.
I'd first try to set the following on a write-busy 8.3 server to smooth
checkpoint spikes:
checkpoint_segments = 16
checkpoint_timeout = 20min
On Tue, Apr 1, 2008 at 1:48 AM, Tomasz Ostrowski [EMAIL PROTECTED]
wrote:
On 2008-04-01 09:44, mark wrote:
I am already running 8.3.1 [I mentioned it in the subject].
But I have no experience on anything with more than 1GB of RAM...
Should I reduce shared_buffers to less than 1GB? If that is
Tomasz Ostrowski [EMAIL PROTECTED] writes:
I'd also set
log_checkpoints=on
to get an idea how it behaves.
Yeah, that's really the *first* thing to do. You need to determine
whether the episodes of slowness are correlated with checkpoints
or not; there's no point fooling with the
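Pulling the thread's advice together, a first-pass postgresql.conf sketch for a write-busy 8.3 box might look like this (the values are the ones suggested above, not universal recommendations):

```ini
# Spread checkpoints out instead of letting them spike I/O
checkpoint_segments = 16
checkpoint_timeout = 20min
checkpoint_completion_target = 0.8
checkpoint_warning = 30s

# Do this first: log checkpoints so the episodes of slowness can be
# correlated (or not) with checkpoint activity
log_checkpoints = on
```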
On Tue, Apr 1, 2008 at 7:27 AM, Tom Lane [EMAIL PROTECTED] wrote:
Tomasz Ostrowski [EMAIL PROTECTED] writes:
I'd also set
log_checkpoints=on
to get an idea how it behaves.
Yeah, that's really the *first* thing to do. You need to determine
I set this on,
log_checkpoints = on
and
mark [EMAIL PROTECTED] writes:
I set this on,
log_checkpoints = on
and started postgres using this command
pg_ctl -D /mnt/work/database -l /mnt/work/logs/pgsql.log start
Nothing is getting logged in the logfile. Should I turn on some other
settings?
You're probably looking in the wrong
On Tue, 1 Apr 2008, mark wrote:
current settings all default
#checkpoint_segments = 3
#checkpoint_timeout = 5min
#checkpoint_completion_target = 0.5
#checkpoint_warning = 30s
this is what I have in pg_stat_bgwriter; how much should I increase
checkpoint_segments and checkpoint_completion_target?
is the query I am running, and it takes over 10 seconds to complete this
query...
update users set number_recieved=number_recieved+1 where uid=738889333;
The table has about 1.7 million rows. I have an index on column uid and also on
number_received. This is also slowing down the inserts that
On 31/03/2008 20:16, mark wrote:
is the query I am running, and it takes over 10 seconds to complete
this query...
update users set number_recieved=number_recieved+1 where uid=738889333;
The table has about 1.7 million rows. I have an index on column uid and
also on number_received. This
On Mon, Mar 31, 2008 at 12:23 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote:
On 31/03/2008 20:16, mark wrote:
is the query I am running, and it takes over 10 seconds to complete
this query...
update users set number_recieved=number_recieved+1 where uid=738889333;
table has about
On 31/03/2008 20:38, mark wrote:
I don't manually do vacuum...
It might be worth doing one and seeing if it makes a difference.
Ray.
---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
On 31/03/2008 20:38, mark wrote:
EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
uid=738889333;
QUERY PLAN
On Mon, Mar 31, 2008 at 12:48 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote:
On 31/03/2008 20:38, mark wrote:
EXPLAIN ANALYZE update users set number_recieved=number_recieved+1 where
uid=738889333;
QUERY PLAN
On 31/03/2008 20:51, mark wrote:
can you explain what the numbers mean in the EXPLAIN ANALYZE?
(cost=0.00..8.46 rows=1 width=1073) (actual time=0.094..0.161 rows=1
loops=1)
It's worth reading through the docs at that reference in my previous
email - it's well explained there.
As I
On Mon, Mar 31, 2008 at 12:59 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote:
On 31/03/2008 20:51, mark wrote:
can you explain what the numbers mean in the EXPLAIN ANALYZE?
(cost=0.00..8.46 rows=1 width=1073) (actual time=0.094..0.161 rows=1
loops=1)
It's worth reading through the docs
On Mon, Mar 31, 2008 at 12:59 PM, Raymond O'Donnell [EMAIL PROTECTED] wrote:
On 31/03/2008 20:51, mark wrote:
can you explain what the numbers mean in the EXPLAIN ANALYZE?
(cost=0.00..8.46 rows=1 width=1073) (actual time=0.094..0.161 rows=1
loops=1)
As I understand it, cost signifies
On 31/03/2008 21:11, mark wrote:
it says the actual time is 0.161 seconds or milliseconds... but the total run
time is 11 seconds. Any idea why the discrepancy?
Well, I interpret the docs as implying that the difference between the
time quoted in the top line of EXPLAIN ANALYZE's output and
Hi,
Here is a quick and easy one I have a question about:
I have a table (customers) that has a field (firstname) with the
following data.
firstname
Mike H
Josh
Jim B
Katie I
Jeff
Suzy
John R
Can someone provide the syntax for a SQL UPDATE statement that will
remove
On Mar 14, 2008, at 1:30 PM, Joshua wrote:
Hi,
Here is a quick and easy one I have a question about:
I have a table (customers) that has a field (firstname) with the
following data.
firstname
Mike H
Josh
Jim B
Katie I
Jeff
Suzy
John R
Can someone provide the syntax
On Fri, 14.03.2008 at 13:30:02 -0500, Joshua wrote:
Hi,
Here is a quick and easy one I have a question about:
I have a table (customers) that has a field (firstname) with the
following data.
firstname
Mike H
Josh
Jim B
Katie I
Jeff
Suzy
John
Hello,
I have a quick question... consider the following information:
I have a table 'customers' which looks like the following:
firstname | middlename
---|--
Johnathan C
Mark S
Joshua
Susan T
Jennifer
Marcus D
Mike
you need to split the string apart using either a regex (which is more
difficult to use) or split_part():
http://www.postgresql.org/docs/8.3/interactive/functions-string.html
The update will look something like this...
Update customer set custfirstname = split_part(Name, ' ', 1) ,
custmiddlename =
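The truncated UPDATE above might be completed along these lines, using the thread's customers table with firstname/middlename columns (a sketch; in SQL both SET expressions read the old value of firstname, so the order of the assignments does not matter):

```sql
UPDATE customers
   SET middlename = NULLIF(split_part(firstname, ' ', 2), ''),
       firstname  = split_part(firstname, ' ', 1)
 WHERE firstname LIKE '% %';
```

The WHERE clause leaves single-word names untouched, and NULLIF keeps a stray trailing space from producing an empty-string middle name.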
Hi.
If I have table A (x integer primary key);
and table B (y integer references A on delete cascade );
and that a new item (x=70) gets inserted into A and a lot of items go into B
that references the new item in A.
Now, if I really have to do:
delete from A where x=5;
update A set x=5 where
A B wrote:
Hi.
If I have table A (x integer primary key);
and table B (y integer references A on delete cascade );
There is an on update option to the create table command, but I can't
find out if that will help me. The docs are unfortunately a little short on
describing the on update
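For the record, ON UPDATE does help here: declaring the foreign key with ON UPDATE CASCADE makes the renumbering a single statement. A minimal sketch using the thread's tables:

```sql
CREATE TABLE a (x integer PRIMARY KEY);
CREATE TABLE b (y integer REFERENCES a
                   ON DELETE CASCADE
                   ON UPDATE CASCADE);

INSERT INTO a VALUES (70);
INSERT INTO b VALUES (70), (70);

-- Renumber the key; the referencing rows in b follow automatically.
UPDATE a SET x = 5 WHERE x = 70;
```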
I have to sync 2 tables with a pk (serial/identity). The source comes from
MS SQL; the destination is pg (ODBC is not an option currently).
One solution would be to truncate the destination and just copy the
new data in it but I prefer a slower sync but avoid a period where no
data is available.
So I
Hi
Does anyone have any recommendations on how to migrate Oracle's
'INSTEAD OF' triggers to PostgreSQL's 'INSTEAD OF' rules?
What I would like to do are:
- When there's an insert to the view, the rule would insert the data
into many tables.
- When there's an update, the rule would insert and
Most honourable members of the list,
this is a simple one, but I can't find the solution ( probably a
forest/tree problem).
update table set bolean_column = set_it_to_its_inverse where fk =
some_value;
or join me in the dark forest
cedric
On Thu, 06.12.2007 at 10:25:26 +0100, Cedric Boudin wrote:
Most honourable members of the list,
this is a simple one, but I can't find the solution ( probably a
forest/tree problem).
update table set bolean_column = set_it_to_its_inverse where fk =
some_value;
test=#
Cedric Boudin wrote:
Most honourable members of the list,
this is a simple one, but I can't find the solution (
probably a forest/tree problem).
update table set bolean_column = set_it_to_its_inverse where
fk = some_value;
I am usually a newbie around here, but this is one that I
Cedric Boudin wrote:
Most honourable members of the list,
this is a simple one, but I can't find the solution ( probably a
forest/tree problem).
update table set bolean_column = set_it_to_its_inverse where fk =
some_value;
or join me in the dark forest
cedric
On Thu, 06 Dec 2007 13:38:31 +0100
Cedric Boudin [EMAIL PROTECTED] wrote:
It was indeed a forest problem.
both:
set bolean_column= not bolean_column
and
set bolean_column= case when bolean_column then 'f'::bool else
't'::bool end;
do work perfectly.
What if boolean_column is NULL?
Ivan Sergio Borgonovo wrote:
On Thu, 06 Dec 2007 13:38:31 +0100
Cedric Boudin [EMAIL PROTECTED] wrote:
It was indeed a forest problem.
both:
set bolean_column= not bolean_column
and
set bolean_column= case when bolean_column then 'f'::bool else
't'::bool end;
do work perfectly.
On Thu, 06 Dec 2007 14:53:13 +0100
Cedric Boudin [EMAIL PROTECTED] wrote:
What if boolean_column is NULL?
btw set bolean_column= not bolean_column works as expected.
template1=# select (not 't'::boolean),(not 'f'::boolean),(not
NULL::boolean);
?column? | ?column? | ?column?
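To spell out the NULL behaviour the excerpt cuts off (three-valued logic: NOT NULL is NULL):

```sql
SELECT (NOT true) AS a, (NOT false) AS b, (NOT NULL::boolean) AS c;
--  a | b | c
-- ---+---+---
--  f | t |        (c is NULL)

-- If NULLs should be treated as false and flipped to true,
-- coalesce before negating (table and column names from the thread):
UPDATE some_table
   SET bolean_column = NOT COALESCE(bolean_column, false)
 WHERE fk = some_value;
```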
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ivan Sergio
Borgonovo
Sent: Thursday, December 06, 2007 10:19 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] simple update on boolean
On Thu, 06 Dec 2007 14:53:13 +0100
Cedric Boudin [EMAIL
Hi,
I need help with designing a set of queries I am running with psql -f
reports.sql
I have a tb_master with report headers, a tb_records table with its own
serial field and foreign key referencing an id of tb_master. The third
tb_details table has two foreign keys referencing the id's of
Hi all,
I ran into a problem: it's slow to update all rows in a table. My
procedure is:
ALTER TABLE user ADD COLUMN foo smallint;
UPDATE user SET foo = 2;
ALTER TABLE user ALTER COLUMN foo SET NOT NULL;
ALTER TABLE user ALTER COLUMN foo SET DEFAULT 2;
The 2nd step took me 2600s. It's a
Hi,
I would like to generate, with a query, a SQL statement like
UPDATE tab1
SET col_a = ?, col_b = ? ...
WHERE pk = ?
for each table of a given schema (in fact what I want is slightly more
complicated but if I can write the above, I will be able to get the rest).
Will anyone be kind enough to
On Thu, Oct 25, 2007 at 09:22:02AM -0300, Evandro Andersen wrote:
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
Yes.
DELETE...USING :)
http://www.postgresql.org/docs/current/static/sql-delete.html
Cheers,
David.
--
David
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
Evandro Andersen
Brazil
Postgresql 8.2
On 10/25/07, Evandro Andersen [EMAIL PROTECTED] wrote:
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
That doesn't work in Oracle 9i, I tried it.
No, there's nothing exactly like it in pgsql.
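There is a common workaround, though: select the rows in the desired order in a subquery and delete by ctid (valid only within the single statement, since ctids are not stable over time). A sketch against the thread's table:

```sql
DELETE FROM a
 WHERE ctid IN (SELECT ctid
                  FROM a
                 WHERE a1 = 10
                 ORDER BY a2
                 LIMIT 100);
```

Note that the ORDER BY only matters in combination with the LIMIT (or for controlling processing order); without a limit, the same rows are deleted regardless.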
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
Evandro Andersen
Brazil
Postgresql 8.2
On Thu, 25.10.2007 at 5:25:14 -0700, Evandro Andersen wrote:
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
Can you explain this a little bit more? I don't see the point. Either I
delete rows with A1 = 10 or
-Original Message-
From: Evandro Andersen [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 25, 2007 7:25 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Delete/Update with ORDER BY
In Oracle you can use this:
DELETE FROM A WHERE A1 = 10 ORDER BY A2
There is something in the Postgresql ?
Evandro
Whenever a row is updated in a table with a timestamp column, how do I write a
trigger to set that timestamp column to now() or the current timestamp?
thanks
On 10/23/07, ahnf [EMAIL PROTECTED] wrote:
Whenever a row is updated in a table with a timestamp column, how do I write
a trigger to set that timestamp column to now() or the current timestamp?
Straight from the horse's mouth:
On 10/23/07, ahnf [EMAIL PROTECTED] wrote:
Whenever a row is updated in a table with a timestamp column, how do I write
a trigger to set that timestamp column to now() or the current timestamp?
You should be able to take a virtual chainsaw to the example on this
page and do it:
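The usual pattern looks roughly like this (the column name "modified" is assumed; note it must be a BEFORE trigger so the assignment lands in the stored row):

```sql
CREATE OR REPLACE FUNCTION touch_modified() RETURNS trigger AS $$
BEGIN
    NEW.modified := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_modified
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE PROCEDURE touch_modified();
```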
On Mon, Oct 15, 2007 at 03:23:59PM -0400, Carlo Stonebanks wrote:
Would someone be kind enough to tell me if there is something wrong with
this approach:
Can't see anything obviously wrong, but if you want more help you'll
need to provide the EXPLAIN output.
Have a nice day,
--
Martijn van
Carlo Stonebanks wrote:
If I do an update using the FROM clause, and that clause has a sub-query
that refers to the table I am updating, will I be waiting for ever for a
table to lock to release?
You cannot block yourself with a lock, if that's what you mean.
The locks your session takes out
You cannot block yourself with a lock, if that's what you mean.
The locks your session takes out will only block other sessions.
Well, that's the GOOD news! The bad news is - I'm not sure whether there's
something wrong with the logic of this query. The sub-query inside the FROM
is correct,
If I do an update using the FROM clause, and that clause has a sub-query
that refers to the table I am updating, will I be waiting for ever for a
table to lock to release?
The update before seems to stall, and it occurs to me that perhaps this is
the problem.
BTW - I rarely do anything but
On Wed, Sep 12, 2007 at 01:56:13PM -0500, Josh Trutwin wrote:
Or would you have to compare each field in OLD, NEW to see if
anything actually changed?
you don't have to compare all columns (at least not in 8.2 and newer).
Please take a look at
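The link is lost in the excerpt, but the whole-row comparison presumably meant is along these lines (a sketch, reusing the thread's function name):

```sql
CREATE OR REPLACE FUNCTION my_update_proc() RETURNS trigger AS $$
BEGIN
    -- One row-level test instead of comparing every column by hand;
    -- IS DISTINCT FROM also handles NULLs sanely.
    IF NEW IS DISTINCT FROM OLD THEN
        -- ... do the real work only for genuinely changed rows ...
        RAISE NOTICE 'row actually changed';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```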
If I create an ON UPDATE trigger run on each row after update, does
the trigger fire only on rows affected by the update or for all rows?
For example:
CREATE TRIGGER my_update_trigger
AFTER UPDATE ON my_table
FOR EACH ROW
EXECUTE PROCEDURE my_update_proc;
UPDATE my_table SET my_val =
On Wednesday 12 September 2007 15:56:13 Josh Trutwin wrote:
If I create an ON UPDATE trigger run on each row after update, does
the trigger fire only on rows affected by the update or for all rows?
For example:
CREATE TRIGGER my_update_trigger
AFTER UPDATE ON my_table
FOR EACH ROW
On Wed, 2007-08-22 at 20:41 +0100, Richard Huxton wrote:
Murali Maddali wrote:
This is what I am doing: I am reading the data from SQL Server 2005 and
dumping it out to a PostgreSQL 8.2 database.
My 2 cents.. I'm doing roughly the same thing, but I'm using perl and
DBI to do it.
Fastest way
Hello Group,
I have asked this question already on the NpgSql forum, but didn't get a
response so far. Sorry for cross posting, but want to check if any one had
any suggestions for my problem.
I am trying to do my updates through NpgsqlDataAdapter (I also tried with
Odbc driver with no luck)
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 22, 2007 2:41 PM
To: Murali Maddali
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adapter update.
Murali Maddali wrote:
This is what I am doing: I am reading the data from SQL Server 2005
and dumping
, 2007 2:41 PM
To: Murali Maddali
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adapter update.
Murali Maddali wrote:
This is what I am doing: I am reading the data from SQL Server 2005
and dumping it out to a PostgreSQL 8.2 database.
while (r.Read
At 01:30 AM 8/24/2007, Murali Maddali wrote:
options like a different driver I can use or through stored procedures. I
have to compare each column in each row before doing the update.
Do you have to compare with all rows, or just one? Can your
comparison make use of an index?
Link.
Joshua D. Drake wrote:
I have added transactions to my code and it took about two and a half hours to
process around 48,000 records. Again, all this time is taken by the update method
on the adapter.
I don't know Perl well enough to set up the database link to SQL Server 2005, and also I
don't have permission to
Hello Group,
I have asked this question already on the NpgSql forum, but didn't get a
response so far. Sorry for cross posting, but want to check if any one had
any suggestions for my problem.
I am trying to do my updates through NpgsqlDataAdapter (I also tried with
Odbc driver with no luck)
Murali Maddali wrote:
This is what I am doing: I am reading the data from SQL Server 2005 and
dumping it out to a PostgreSQL 8.2 database.
while (r.Read())
    _save(r, srcTblSchema, destTbl, destConn);
r.Close();
Andrew,
Thanks for your input, comments below.
On Thu, 2007-07-26 at 13:30, Andrew Kroeger wrote:
After reading through this thread, I have an idea that should accomplish
what I believe are your 3 main goals (avoid any negative performance
impact on the user's inserts, do not lose any data
Csaba Nagy wrote:
On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
Unfortunately I don't think this will work. Multiple backends will happily
pick up the same ctid in their selects and then try to delete the same
records.
I'm pretty sure he said that the batch processing (and the delete)
On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
Unfortunately I don't think this will work. Multiple backends will happily
pick up the same ctid in their selects and then try to delete the same
records.
I'm pretty sure he said that the batch processing (and the delete) would
only be
First of all, thanks for all the suggestions.
put a SERIAL primary key on the table
Or:
Maybe add OIDs to the table, and delete based on the OID number?
No, this is not acceptable, it adds overhead to the insertions. Normally
the overhead will be small enough, but on occasions it is
How about using the following?
delete from table
where ctid in (select ctid from table limit num);
I actually checked this out before starting this thread, and the plan
looked like:
explain delete from my_table where ctid in (select ctid from my_table
limit 10);
Csaba Nagy wrote:
First of all, thanks for all the suggestions.
put a SERIAL primary key on the table
Or:
Maybe add OIDs to the table, and delete based on the OID number?
No, this is not acceptable, it adds overhead to the insertions. Normally
the overhead will be small enough, but on
Unfortunately the stuff that makes a ctid=value nice doesn't seem to be
used when you're doing an in. It's possible that a function that does
something like
for rec in select ctid from my_table limit 10 loop
delete from my_table where ctid=rec.ctid;
end loop
might do okay, but I haven't
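Fleshed out, Stephan's sketch might become something like this (assumed names; note another session could still delete a ctid between the SELECT and the DELETE, so treat the count as best-effort):

```sql
CREATE OR REPLACE FUNCTION delete_chunk(n integer) RETURNS integer AS $$
DECLARE
    rec record;
    n_deleted integer := 0;
BEGIN
    FOR rec IN SELECT ctid FROM my_table LIMIT n LOOP
        -- Each delete is a cheap "ctid = value" TID scan.
        DELETE FROM my_table WHERE ctid = rec.ctid;
        n_deleted := n_deleted + 1;
    END LOOP;
    RETURN n_deleted;
END;
$$ LANGUAGE plpgsql;
```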
Stephan Szabo [EMAIL PROTECTED] writes:
Unfortunately the stuff that makes a ctid=value nice doesn't seem to be
used when you're doing an in.
Yeah, see the header comments in tidpath.c:
* There is currently no special support for joins involving CTID; in
* particular nothing corresponding to
On Tue, 2007-07-24 at 18:19, Tom Lane wrote:
[ thinks for a bit ... ] Actually, you can do it as of 8.2 or so,
by abusing the ScalarArrayOp stuff: turn the subquery into an array.
An example in the regression database:
regression=# explain update tenk1 set ten=ten+1
regression-# where
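The truncated example presumably continued along these lines: collapse the subquery into an array so that ctid = ANY(...) can use a TID scan (a reconstruction of the trick, not Tom's exact text):

```sql
DELETE FROM my_table
 WHERE ctid = ANY (ARRAY(SELECT ctid FROM my_table LIMIT 10));
```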
Csaba Nagy [EMAIL PROTECTED] writes:
Unfortunately the stuff that makes a ctid=value nice doesn't seem to be
used when you're doing an in. It's possible that a function that does
something like
for rec in select ctid from my_table limit 10 loop
delete from my_table where ctid=rec.ctid;
On Tue, 24 Jul 2007, Gregory Stark wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
Unfortunately the stuff that makes a ctid=value nice doesn't seem to be
used when you're doing an in. It's possible that a function that does
something like
for rec in select ctid from my_table limit 10
On Tue, 24 Jul 2007, Csaba Nagy wrote:
How about using the following?
delete from table
where ctid in (select ctid from table limit num);
I actually checked this out before starting this thread, and the plan
looked like:
explain delete from my_table where ctid in (select ctid
Hi all,
This subject was touched a few times in the past, I looked into the
archives... the result is invariably key developers saying such a
feature is unsafe because the result is unpredictable, while the people
requesting is saying it is OK that way, it is expected... but no
compelling use
[EMAIL PROTECTED] (Csaba Nagy) writes:
In postgres we're currently not chunking, due to the fact that the code
to do it is simply overly contorted and inefficient compared to the
other DBs we use. At least all the solutions we could think of to do the
chunking in a safe way while the inserts
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 07/23/07 10:56, Csaba Nagy wrote:
Hi all,
This subject was touched a few times in the past, I looked into the
archives... the result is invariably key developers saying such a
feature is unsafe because the result is unpredictable, while the
On Mon, 2007-07-23 at 17:56 +0200, Csaba Nagy wrote:
Now I don't put too much hope I can convince anybody that the limit on
the delete/update commands has valid usage scenarios, but then can
anybody help me find a good solution to chunk-wise process such a buffer
table where insert speed is
Roger,
This statement runs but the table isn't actually updated. Slightly odd.
I ran the subquery, which makes perfect sense, and nothing was
returned. If I remove the AND temp.id = junctions.id part I get loads
of results returned, but if I run the update with the adjusted
subquery I get ERROR:
On Apr 23, 4:25 am, Iain Adams [EMAIL PROTECTED] wrote:
Hi,
This is probably a really easy question but I have been toiling with
this SQL for ages.
Okay I have a table with geom_id and id in, this table is called temp.
I also have another table called junctions with a column, old_id. Now
Hi,
This is probably a really easy question but I have been toiling with
this SQL for ages.
Okay I have a table with geom_id and id in, this table is called temp.
I also have another table called junctions with a column, old_id. Now
old_id corresponds to geom_id. I want to update the junctions
Hi,
I got stuck with the following problem and strangely couldn't find
anything similar in the list:
Assume two tables:
table: warehouse, columns: id, incoming, outgoing
table: articles, columns: id, articles_in_stock
All I want is to create a rule so that every time articles leave or
come in
Hi,
I got stuck with the following problem and strangely couldn't find
anything similar in the list:
Assume two tables:
table: warehouse, columns: id, incoming, outgoing
table: articles, columns: id, articles_in_stock
All I want is to create a rule so that every time articles leave or
come in
I think you need delete, update, and insert rules for warehouse.
Subtract out the old on delete update. Add the new in update and
insert. Aggregates would be an incredible waste of processor time.
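In trigger form, that advice might look like this (an untested sketch; it assumes warehouse.id identifies the article and that incoming/outgoing are quantities):

```sql
CREATE OR REPLACE FUNCTION sync_stock() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('UPDATE', 'DELETE') THEN   -- back out the old movement
        UPDATE articles
           SET articles_in_stock = articles_in_stock
                                   - (OLD.incoming - OLD.outgoing)
         WHERE id = OLD.id;
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN   -- apply the new movement
        UPDATE articles
           SET articles_in_stock = articles_in_stock
                                   + (NEW.incoming - NEW.outgoing)
         WHERE id = NEW.id;
    END IF;
    RETURN NULL;   -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER warehouse_stock
    AFTER INSERT OR UPDATE OR DELETE ON warehouse
    FOR EACH ROW EXECUTE PROCEDURE sync_stock();
```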
pgsql-user wrote:
Hi,
I got stuck with the following problem and strangely couldn't find
Hi All
I have three tables
Create Table Pipe
( pipe_id serial ,
fluid_id int4
) ;
Create Table Equipment
(Equipment_id serial,
fluid_id int4
) ;
Create Table Processes
( Fluid_id serial,
fluid varchar (15),
ip_op_equipment varchar (5)
) ;
The interface inserts the name
When I trigger 'after insert' the function doesn't work because the
ip_op_equipment condition is an update. When I manually enter data directly
into the table, this trigger works fine when both the fluid and
ip_op_equipment are entered as one entry.
When I trigger 'after update' every row in
that you referred to?
Bob
- Original Message -
From: Tomas Vondra [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Friday, November 17, 2006 3:34 PM
Subject: Re: [GENERAL] After Update Triggers
When I trigger 'after insert' the function doesn't work because the
ip_op_equipment