Hi,
I did: pkill -x postgres
so it should send SIGTERM.
Regards
Pupillo
On Tue, Dec 6, 2016 at 2:44 AM, Tom DalPozzo wrote:
> Hi,
> about SSD light:
>
> I guessed it was WAL -> actual db files data traffic. It explains why the
> light stops blinking after shutting down the server (I did it via kill
> command) .
>
Do you kill with -15 (the
Hi,
about SSD light:
I guessed it was WAL -> actual db files data traffic. It explains why the
light stops blinking after shutting down the server (I did it via kill
command) . But if so, I expected the light to restart blinking after
restarting the server (in order to continue WAL->db activity).
On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo wrote:
> Hi,
> I've two tables, t1 and t2, both with one bigint id indexed field and one
> 256 char data field; t1 has always got 1 row, while t2 is increasing as
> explained in the following.
>
> My libpq client continuously
I tried to tune some parameters without appreciable changes in this
behaviour.
I tried to play with:
checkpoint timeout
wal size
shared buffers
commit delay
checkpoint completion target
No meaningful info found in the log file.
Regards
2016-12-04 4:02 GMT+01:00 Tomas Vondra
On Fri, 2016-12-02 at 13:45 -0800, Adrian Klaver wrote:
>
> On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> >
> >
> > Hi,
> > I've two tables, t1 and t2, both with one bigint id indexed field
> > and
> > one 256 char data field; t1 has always got 1 row, while t2 is
> > increasing as explained
On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
Hi,
I've two tables, t1 and t2, both with one bigint id indexed field and
one 256 char data field; t1 has always got 1 row, while t2 is
increasing as explained in the following.
My libpq client continuously updates one row in t1 (every time
Hi,
I've two tables, t1 and t2, both with one bigint id indexed field and one
256 char data field; t1 has always got 1 row, while t2 is increasing as
explained in the following.
My libpq client continuously updates one row in t1 (every time targeting a
different row) and inserts a new row in
On 12/02/2016 04:23 AM, Alexander Farber wrote:
Thank you, Rob -
On Fri, Dec 2, 2016 at 11:12 AM, Rob Sargent wrote:
> On Dec 2, 2016, at 2:52 AM, Alexander Farber
>
Thank you, Rob -
On Fri, Dec 2, 2016 at 11:12 AM, Rob Sargent wrote:
>
> > On Dec 2, 2016, at 2:52 AM, Alexander Farber
> wrote:
> >
> > CREATE OR REPLACE FUNCTION words_unban_user(
> > in_uid integer)
> > RETURNS integer AS
>
> And finally here is the failing usage of the function :
>
> words=> SELECT uid FROM words_unban_user(1);
> ERROR: column "uid" does not exist
> LINE 1: SELECT uid FROM words_unban_user(1);
> ^
>
>
Shouldn't you be doing:-
SELECT words_unban_user(1);
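Rob's point, spelled out: the function returns a scalar integer, not a row set with a column named uid, so there is nothing called "uid" to select from its result. A sketch of the options (function name from the thread):

```sql
-- Call the scalar-returning function directly:
SELECT words_unban_user(1);

-- Or alias the scalar result if a column named "uid" is wanted:
SELECT words_unban_user(1) AS uid;

-- Or (changing the function) declare it RETURNS TABLE (uid integer),
-- after which the original call shape works:
--   SELECT uid FROM words_unban_user(1);
```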
Hello,
why does this fail in PostgreSQL 9.5 please?
Here is my custom SQL function :
CREATE OR REPLACE FUNCTION words_unban_user(
in_uid integer)
RETURNS integer AS
$func$
UPDATE words_users SET
banned_until = null,
banned_reason = null
WHERE uid
> Why do you want to lock these results?
Because we are migrating very old pipeline where we save data in file
texts. So we have decided to keep some logics inside perl code. That means
to update a record:
1. we have to read/lock it
2. return the result to the application
3. the application
said assemlal wrote:
> Hello,
>
> PG: 9.4
> CentOS 6
>
> I am writing functions to lock results.
>
> Let's take an example:
>
> CREATE OR REPLACE FUNCTION usp_locking_my_result($1 VARCHAR(50), $2
> VARCHAR(50))
> RETURNS TEXT AS $$
> SELECT value
> FROM my_table
> WHERE field1 = $1 AND field2
Hello,
PG: 9.4
CentOS 6
I am writing functions to lock results.
Let's take an example:
CREATE OR REPLACE FUNCTION usp_locking_my_result($1 VARCHAR(50), $2
VARCHAR(50))
RETURNS TEXT AS $$
SELECT value
FROM my_table
WHERE field1 = $1 AND field2 = $2 FOR UPDATE;
$$ LANGUAGE SQL;
What happens if
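As written, the function does not even parse: $1 and $2 are how positional parameters are *referenced* in the body, but they must not appear as names in the parameter list. A hedged corrected sketch (table and column names taken from the thread):

```sql
-- Hypothetical corrected version: parameters are declared by type only
-- (or given ordinary names) and referenced positionally in the body.
CREATE OR REPLACE FUNCTION usp_locking_my_result(varchar(50), varchar(50))
RETURNS text AS $$
    SELECT value
    FROM   my_table
    WHERE  field1 = $1 AND field2 = $2
    FOR UPDATE;
$$ LANGUAGE sql;
```

As for what the lock buys: the row lock taken by FOR UPDATE is held only until the surrounding transaction ends, so if the function is called in autocommit mode the lock is released as soon as the statement finishes.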
Andreas Terrius writes:
> Is there any way to check whether the row already exists before checking
> constraints ? I still want it to fail if it turns out to be a new row
> (which would violate the not null constraint), but updates the row if it
> already exists.
I'm not
On 11/21/2016 2:32 AM, Andreas Terrius wrote:
Is there any way to check whether the row already exists before
checking constraints ? I still want it to fail if it turns out to be a
new row (which would violate the not null constraint), but updates the
row if it already exists.
just do an
Is there any way to check whether the row already exists before checking
constraints ? I still want it to fail if it turns out to be a new row
(which would violate the not null constraint), but updates the row if it
already exists.
Since if that is not possible, I would need to do a query to
On 11/21/2016 02:32 AM, Andreas Terrius wrote:
Is there any way to check whether the row already exists before checking
constraints ? I still want it to fail if it turns out to be a new row
(which would violate the not null constraint), but updates the row if it
already exists.
Since if that is
Adrian Klaver writes:
> ... So looks like constraints are checked before you get to the ON CONFLICT
> section.
Right. ON CONFLICT is a means for dealing with duplicate-key errors in
the specified (or inferred) unique index. It is *not* an all-purpose
error catcher.
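Tom's point in miniature, using the jobs table from later in the thread:

```sql
CREATE TABLE jobs (
    id            integer PRIMARY KEY,
    employee_name text NOT NULL
);

INSERT INTO jobs VALUES (1, 'alice');

-- Fails with a not-null violation: the proposed row is checked against
-- ALL constraints before any unique-index conflict is considered.
INSERT INTO jobs (id) VALUES (1)
ON CONFLICT (id) DO UPDATE SET id = EXCLUDED.id;

-- Succeeds: the proposed row is valid, so the duplicate key routes
-- execution into the DO UPDATE branch instead.
INSERT INTO jobs (id, employee_name) VALUES (1, 'bob')
ON CONFLICT (id) DO UPDATE SET employee_name = EXCLUDED.employee_name;
```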
On 11/19/2016 11:33 AM, Kim Rose Carlsen wrote:
>> AFAIK, EXCLUDED is only available in a trigger function:
>
>>
>> https://www.postgresql.org/docs/9.5/static/trigger-definition.html
>>
>> You are using EXCLUDED in a regular function so it would not be found.
>>
>> Can you also show the failure
> AFAIK, EXCLUDED is only available in a trigger function:
>
> https://www.postgresql.org/docs/9.5/static/trigger-definition.html
>
> You are using EXCLUDED in a regular function so it would not be found.
>
> Can you also show the failure for your alternate method?
From the manual
On 11/17/2016 10:13 PM, Andreas Terrius wrote:
Hi,
Basically I wanted to do a partial update inside pg (9.5), but it seems
that a partial update fails when not all constraints are fulfilled
(such as the not null constraint)
Below are the sql queries I used,
CREATE TABLE jobs (id integer
Hi,
Basically I wanted to do a partial update inside pg (9.5), but it seems
that a partial update fails when not all constraints are fulfilled (such
as the not null constraint)
Below are the sql queries I used,
CREATE TABLE jobs (
id integer PRIMARY KEY,
employee_name TEXT NOT NULL,
On Monday, September 26, 2016 9:44 AM, Tom Lane wrote:
>> Paul Jones writes:
>> For a freshly pg_restore'd 9.2 database, would VACUUM ANALYZE update
>> statistics any better than just an ANALYZE?
>
> VACUUM would have caused the page-all-visible flags
Paul Jones writes:
> For a freshly pg_restore'd 9.2 database, would VACUUM ANALYZE update
> statistics any better than just an ANALYZE?
Not as far as the statistics kept in pg_stat go.
> After a restore, we ran a bunch of ANALYZEs on each table individually
> using GNU
For a freshly pg_restore'd 9.2 database, would VACUUM ANALYZE update
statistics any better than just an ANALYZE?
After a restore, we ran a bunch of ANALYZEs on each table individually
using GNU 'parallel' (for speed). Many of these tables are child tables
in a partition. Following the ANALYZEs,
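Tom's answer below can be made concrete: ANALYZE alone populates the planner statistics, while only VACUUM sets the page-all-visible flags in the visibility map, which index-only scans (new in 9.2) need to avoid heap fetches. A sketch (hypothetical table name):

```sql
-- Updates planner statistics only; sufficient for query planning:
ANALYZE my_table;

-- Additionally sets the page-all-visible flags in the visibility map,
-- letting index-only scans skip heap fetches on a freshly restored table:
VACUUM (ANALYZE) my_table;
```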
Thank you Brian and others, but -
On Fri, Sep 9, 2016 at 5:22 PM, Brian Dunavant wrote:
> I'm making the assumption that you only have one ip/user in words_users.
>
> with lockrow as (
>SELECT g.gid, u.ip
>FROM words_games g join words_users u
> ON (g.player1
Good afternoon,
I have a question please.
In one table I store user ids and their IP addresses -
CREATE TABLE words_users (
uid SERIAL PRIMARY KEY,
ip inet NOT NULL
);
And in another table I keep 2-player games and timestamps of last moves
(NULL
On 04/02/2016 06:07 PM, Seamus Abshere wrote:
hi,
I want to write a function that updates arbitrary columns and here's my
pseudocode:
CREATE OR REPLACE FUNCTION update_pet(id int, raw_data jsonb) RETURNS
VOID AS $$
DECLARE
data record;
BEGIN
SELECT jsonb_populate_record(null::pets,
hi,
I want to write a function that updates arbitrary columns and here's my
pseudocode:
CREATE OR REPLACE FUNCTION update_pet(id int, raw_data jsonb) RETURNS
VOID AS $$
DECLARE
data record;
BEGIN
SELECT jsonb_populate_record(null::pets, raw_data) INTO data;
UPDATE pets [... from data ...]
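One way to finish the pseudocode without dynamic SQL is to merge the incoming jsonb over the current row and write the merged record back. A sketch (requires 9.5 for to_jsonb and the jsonb || operator; the column names "name" and "age" are hypothetical stand-ins for the real pets columns):

```sql
CREATE OR REPLACE FUNCTION update_pet(_id int, raw_data jsonb)
RETURNS void AS $$
DECLARE
    merged pets%ROWTYPE;
BEGIN
    -- Merge the incoming keys over the existing row...
    SELECT * INTO merged
    FROM jsonb_populate_record(
             NULL::pets,
             (SELECT to_jsonb(p) || raw_data FROM pets p WHERE p.id = _id));
    -- ...then write the merged record back, column by column.
    UPDATE pets
    SET    name = merged.name, age = merged.age
    WHERE  id = _id;
END;
$$ LANGUAGE plpgsql;
```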
Hello,
I am using a foreign data wrapper where i get a portion of my data
pre-loaded , i.e I get a set of rows before hand . So now i want to run
multiple update queries on this loaded data , write the changes to file ,
load the next set and continue with updates again.
How should i
Hello!
Is there any non-functional difference between these two forms of Update?
WITH name AS ( SELECT )
UPDATE tbl SET ...
FROM name
WHERE tbl.id = name.id
and
UPDATE tbl SET ...
FROM ( WITH qry AS ( SELECT ) SELECT * FROM qry ) AS name
WHERE tbl.id = name.id
They both better give the same
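The two shapes side by side, made concrete (hypothetical tables tbl and src; a sketch, not from the thread). They are semantically equivalent, and in the releases current at the time both CTEs acted as optimization fences, so there should be no planner difference either; the top-level form is simply easier to read:

```sql
-- Form 1: top-level CTE
WITH name AS (
    SELECT id, val FROM src
)
UPDATE tbl SET val = name.val
FROM  name
WHERE tbl.id = name.id;

-- Form 2: the same CTE buried in a FROM subquery
UPDATE tbl SET val = name.val
FROM (WITH qry AS (SELECT id, val FROM src)
      SELECT * FROM qry) AS name
WHERE tbl.id = name.id;
```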
On 4/13/15 8:12 PM, David G. Johnston wrote:
Hello!
Is there any non-functional difference between these two forms of Update?
WITH name AS ( SELECT )
UPDATE tbl SET ...
FROM name
WHERE tbl.id = name.id
and
UPDATE tbl SET ...
FROM ( WITH qry AS ( SELECT ) SELECT *
Dear sir/madam.
We build a program that can automatically generate triggers in C-language
for synchronous incremental matview update. It supports queries with
limitations:
- inner join
- group by with sum, count, avg, min, max
- the join has to be of the form: t1 1-n t2 1-n t3 1-n t4
The
Hello list,
For a big table with more than 1,000,000 records, may I know which
update is quicker please?
(1) update t1
set c1 = a.c1
from a
where pk and
t1.c1 <> a.c1;
..
update t1
set c_N = a.c_N
from a
where pk and
On Tue, Sep 23, 2014 at 8:35 PM, Emi Lu em...@encs.concordia.ca wrote:
Hello list,
For a big table with more than 1,000,000 records, may I know which update is
quicker please?
(1) update t1
set c1 = a.c1
from a
where pk and
t1.c1 <> a.c1;
..
On 09/23/2014 12:35 PM, Emi Lu wrote:
Hello list,
For a big table with more than 1,000,000 records, may I know which
update is quicker please?
(1) update t1
set c1 = a.c1
from a
where pk and
t1.c1 <> a.c1;
..
update t1
set c_N = a.c_N
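Because PostgreSQL's MVCC writes a complete new row version for every updated row no matter how many columns change, N per-column statements rewrite the table N times. One combined statement is the faster shape (a sketch using the thread's names; the row-wise IS DISTINCT FROM guard skips rows that would not actually change, avoiding dead-tuple churn):

```sql
-- One pass over t1: each matched row gets exactly one new row version.
UPDATE t1
SET    c1  = a.c1,
       c2  = a.c2,
       c_N = a.c_N                -- ...and every other column being refreshed
FROM   a
WHERE  t1.pk = a.pk
  AND  (t1.c1, t1.c2, t1.c_N) IS DISTINCT FROM (a.c1, a.c2, a.c_N);
```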
Hi
Please help, we are using postgreSQL 9.2.4. I need to update over 9000
rows. See the query below: A table called contact has got *addr_id *field
as null which is incorrect. So now I need to update contact table for each
account (cus_acno is in cus table) where contact_addr_id is null. For
update contacts set addr_id=b.addr_id
from
(select distinct(cus_acno), contact.con_id, address.addr_id from
address join person using (addr_id) join cus using (per_id) join
link_contact using (cus_acno) join contact using (con_id) where
contact.addr_id is
Thank very much
*From:* pgsql-general-ow...@postgresql.org [mailto:
pgsql-general-ow...@postgresql.org] *On Behalf Of *Sim Zacks
*Sent:* Thursday, May 15, 2014 10:42 AM
*To:* pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] Correct update statement
update contacts set addr_id=b.addr_id
Ok, thanks. I'll keep that in mind.
On Tue, Apr 1, 2014 at 7:45 PM, Andrew Sullivan a...@crankycanuck.ca wrote:
On Tue, Apr 01, 2014 at 07:00:16PM -0400, Tom Lane wrote:
one of the clients, in a way that isn't visible to the deadlock detector.
One way for that to happen without any
Hello,
I'm using postgresql 9.0.13, and I have a simple query that seems to be
stuck. I did a
postgres=# select procpid, query_start, waiting, current_query from
pg_stat_activity;
procpid | query_start | waiting |
current_query
32605 | 2014-04-01
Do these queries update more than one row? I ran into a similar issue
a year ago, where two multi-row updates would deadlock because they
processed rows in a different order. I'd love to see UPDATE support
ORDER BY to fix this, but it doesn't yet. (If I ever try contributing
to Postgres, this is a
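Since UPDATE itself has no ORDER BY, the usual workaround is to force a deterministic lock order with a locking subquery (a sketch; the accounts table and its columns are hypothetical):

```sql
-- Both concurrent sessions lock the target rows in ascending id order,
-- so they queue behind one another instead of deadlocking.
UPDATE accounts
SET    balance = balance + 10
WHERE  id IN (
        SELECT id
        FROM   accounts
        WHERE  region = 7
        ORDER  BY id
        FOR UPDATE);
```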
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Tuesday, April 01, 2014 3:51 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] simple update query stuck
Hello,
I'm using postgresql 9.0.13, and I have a simple query
they both be trying to update the same row, resulting in a deadlock?
*From:* pgsql-general-ow...@postgresql.org [mailto:
pgsql-general-ow...@postgresql.org] *On Behalf Of *Si Chen
*Sent:* Tuesday, April 01, 2014 3:51 PM
*To:* pgsql-general@postgresql.org
*Subject:* [GENERAL] simple update query
On Tue, Apr 1, 2014 at 1:10 PM, Igor Neyman iney...@perceptron.com wrote:
From: pgsql-general-ow...@postgresql.org [mailto:
pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Tuesday, April 01, 2014 3:51 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] simple update query stuck
On Tue, Apr 01, 2014 at 01:37:17PM -0700, Si Chen wrote:
You are right. That was the problem. I tried the query from
http://wiki.postgresql.org/wiki/Lock_Monitoring and found a COMMIT
transaction that was blocking it.
I restarted postgresql again, and (it seems) everything went back to
Andrew Sullivan-8 wrote
On Tue, Apr 01, 2014 at 01:37:17PM -0700, Si Chen wrote:
You are right. That was the problem. I tried the query from
http://wiki.postgresql.org/wiki/Lock_Monitoring and found a COMMIT
transaction that was blocking it.
I restarted postgresql again, and (it seems)
Andrew Sullivan a...@crankycanuck.ca writes:
Probably you could have killed one of the queries. But it sounds like
what's happening is that you have multiple queries that are all trying
to update the same rows in a different order. It may be that none of
these is strictly deadlocked, in that
On Tue, Apr 01, 2014 at 07:00:16PM -0400, Tom Lane wrote:
one of the clients, in a way that isn't visible to the deadlock detector.
One way for that to happen without any external interconnections is if the
client is waiting for a NOTIFY that will never arrive because the would-be
sender is
Victor Yegorov vyego...@gmail.com writes:
Could you kindly explain me why the query as it is updates no records?
It's a bug, that's why. See
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=324577f39bc8738ed0ec24c36c5cb2c2f81ec660
or for 9.2,
Greetings.
I've hit a strange issue. This is a simplified setup.
First, run create.sql to get a couple of tables.
Then, run the update query.
Tested on 9.2.6 and 9.3.2.
Now:
- if I remove the UNION ALL part of the inner query, UPDATE works;
- if I move the `raw` subquery into the CTE, UPDATE
Hello.
What happens if I issue UPDATE SET FROM... but with incomplete/buggy WHERE
condition and thus SEVERAL rows from the from_list match ONE row to update?
Thank you in advance,
Ladislav Lenart
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
Ladislav Lenart lenart...@volny.cz writes:
What happens if I issue UPDATE SET FROM... but with incomplete/buggy WHERE
condition and thus SEVERAL rows from the from_list match ONE row to update?
Any given row will be updated at most once. However, the from_list row
it gets updated against will
On 5.12.2013 15:14, Tom Lane wrote:
Ladislav Lenart lenart...@volny.cz writes:
What happens if I issue UPDATE SET FROM... but with incomplete/buggy WHERE
condition and thus SEVERAL rows from the from_list match ONE row to update?
Any given row will be updated at most once. However, the
I have a relatively simple data load script, which upserts (UPDATE existing
rows, INSERT new rows), which should be supported by the primary key index,
the only index on this table:
UPDATE destination SET ... FROM staging WHERE staging.pk =
destination.pk;
INSERT INTO destination SELECT
Matt bsg...@gmail.com writes:
In most cases, EXPLAIN and runtime tell me the index is utilized. However,
sometime on back to back runs (5 minute intervals) the runtime explodes and
EXPLAIN shows me that the PK index is not used, as both the staging table
and destination table a sequentially
Since this is an upsert, the staging table simply accepts a copy of
pipe-delimited data via a COPY FROM STDIN, and a subsequent SQL script
performs the UPDATE/INSERT. The staging table is then truncated (or delete
without where) for the next run: Truncate staging, COPY into staging,
update then
With make I can do make -n and it just tells me what it would do but
doesn't actually do anything.
How could I do that with SQL?
I want to write a really complicated (for me) SQL UPDATE statement. I'm sure I
won't get it right the first time. Is there an easy way to not really make the
On 10/01/2013 10:16 AM, Perry Smith wrote:
With make I can do make -n and it just tells me what it would do but
doesn't actually do anything.
How could I do that with SQL?
I want to write a really complicated (for me) SQL UPDATE statement. I'm sure I
won't get it right the first time. Is
On Oct 1, 2013, at 12:23 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On 10/01/2013 10:16 AM, Perry Smith wrote:
With make I can do make -n and it just tells me what it would do but
doesn't actually do anything.
How could I do that with SQL?
I want to write a really complicated
2013/10/1 Perry Smith pedz...@gmail.com
On Oct 1, 2013, at 12:23 PM, Adrian Klaver adrian.kla...@gmail.com
wrote:
Assuming you are not doing this in a function, you can. Do UPDATE, then
SELECT to see your changes or not and then ROLLBACK.
Ah... yes. I forgot you can see the changes
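The transaction-based "dry run" discussed above, as a sketch (statement and table are hypothetical stand-ins for the complicated UPDATE being rehearsed):

```sql
BEGIN;

UPDATE jobs SET status = 'archived'        -- the statement under test
WHERE  finished_at < now() - interval '30 days';

-- Inspect the would-be result while still inside the transaction:
SELECT status, count(*) FROM jobs GROUP BY status;

ROLLBACK;   -- discard everything; replace with COMMIT once it looks right
```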
Hi Kevin,
Sorry, that's the wrong way around. I should have said:
Your BEFORE UPDATE trigger could leave the synced value in NEW
alone if force_sync was **true**, and set synced to false
otherwise. It could then set NEW.force_sync to false, to leave you
ready for the next update.
Thanks
Clemens Eisserer linuxhi...@gmail.com wrote:
Here is what I am trying to achieve: Set synced to false at any
update, except when it has been set explicitly to true.
This does not seem to be possible, without checking the value SET
by UPDATE?
Right; since there is no way to check whether a
Kevin Grittner kgri...@ymail.com wrote:
Your BEFORE UPDATE trigger could leave the synced value in NEW
alone if force_sync was false, and set synced to false
otherwise. It could then set NEW.force_sync to false, to leave you
ready for the next update.
Sorry, that's the wrong way around. I
Hi,
Sorry for this newbie-question, I am trying for quite some time now to get
the following trigger-function to work properly:
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.synced IS NULL THEN
NEW.synced := false;
END IF;
RETURN NEW;
Clemens Eisserer linuxhi...@gmail.com wrote:
I am trying for quite some time now to get the following
trigger-function to work properly:
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.synced IS NULL THEN
NEW.synced := false;
END
On 5/4/13 at 6:59 AM, Clemens Eisserer wrote:
Sorry for this newbie-question, I am trying for quite some time now to get
the following trigger-function to work properly:
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.synced IS NULL THEN
NEW.synced :=
Hi,
NEW reflects what the row will look like after the UPDATE. There
is no way to tell which columns were specified in the SET clause of
the UPDATE; a column which is omitted from that clause will look
exactly the same as a column which is set to the value it already
had.
Thanks a lot for
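Kevin's corrected scheme, assembled into a complete trigger (a sketch; assumes boolean columns synced and force_sync on a hypothetical table my_table):

```sql
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS $$
BEGIN
    IF NEW.force_sync THEN
        NULL;                      -- explicit override: keep NEW.synced as set
    ELSE
        NEW.synced := false;       -- any ordinary update marks the row unsynced
    END IF;
    NEW.force_sync := false;       -- reset the flag for the next update
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_update_synced
BEFORE UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE update_synced_column();
```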
On 2013-01-25, Tim Uckun timuc...@gmail.com wrote:
I agree that seems like the most likely cause. Each update to the
row holding the hstore column requires adding new index entries for
all the hstore elements, and autovacuum will need to clean up the
old ones in the background. The best
That would be one solution, but I think a better one would be to not
store make_id in imports in the first place, but instead to always
fetch it by joining imports to models at query time.
My problem here is that the incoming data is quite messy so the join
conditions become weird (lots of
Sorry I haven't been responsive for a little while.
I ran your script but creating a new schema instead of my existing
schema. My timings were similar to yours (more or less) except fo the
vacuums which took roughly 147891 ms.
On Tue, Jan 22, 2013 at 3:21 PM, Kevin Grittner kgri...@mail.com
What if you do:
alter table cars.imports set (fillfactor=50);
Before the vacuum full, and then try the update again?
This makes a dramatic difference when combined with a vacuum.
UPDATE 98834
Time: 3408.210 ms
Ten times faster!
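Why the fillfactor helps: with free space reserved on every page, an updated row version can stay on the same page and qualify as a HOT (heap-only tuple) update, which skips index maintenance entirely. The recipe from the thread:

```sql
-- Reserve half of each page for updated row versions:
ALTER TABLE cars.imports SET (fillfactor = 50);
-- Rewrite the table so the new fillfactor takes effect:
VACUUM FULL cars.imports;
-- Subsequent UPDATEs can now be HOT: the new version lands on the same
-- page and no new index entries need to be written.
```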
relid | 26710
schemaname    | cars
relname | imports
seq_scan | 280
seq_tup_read | 25873543
idx_scan | 4
idx_tup_fetch | 2749
n_tup_ins | 98926
n_tup_upd | 6350466
n_tup_del | 92
n_tup_hot_upd | 625286
n_live_tup
I agree that seems like the most likely cause. Each update to the
row holding the hstore column requires adding new index entries for
all the hstore elements, and autovacuum will need to clean up the
old ones in the background. The best solution would be to either
normalize the data instead
On 25/01/13 08:57, Tim Uckun wrote:
What if you do:
alter table cars.imports set (fillfactor=50);
Before the vacuum full, and then try the update again?
This makes a dramatic difference when combined with a vacuum.
UPDATE 98834
Time: 3408.210 ms
Ten times faster!
That suggests (to me, at
That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems primary candidate.
Try running iostat (I think that's available on a Mac) with/without the
fillfactor and with/without the GIN index while you do the updates. It's
possible your SSD is just
On 25/01/13 11:38, Tim Uckun wrote:
That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems primary candidate.
Try running iostat (I think that's available on a Mac) with/without the
fillfactor and with/without the GIN index while you do the updates.
Tim Uckun wrote:
I dropped the index and the numbers shot up tenfold or more. I don't
know why postgres feels the need to update the GIN index on the hstore
field when I am only updating an integer field but it looks like I
need to split the hstore into a different table.
Every UPDATE that
On Fri, Jan 25, 2013 at 3:38 AM, Tim Uckun timuc...@gmail.com wrote:
That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems primary candidate.
Try running iostat (I think that's available on a Mac) with/without the
fillfactor and with/without the
Jeff Janes wrote:
one hstore field can easily be equivalent to 50 text fields with
an index on each one.
I'm pretty sure that that is your bottleneck.
I agree that seems like the most likely cause. Each update to the
row holding the hstore column requires adding new index entries for
all
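The mechanics behind this: MVCC means the UPDATE writes a whole new row version, and unless the update is HOT, every index on the table gets an entry for that version, including one GIN entry per hstore key. For a bulk update of an unrelated integer column, a common workaround is to drop the expensive index for the duration (a sketch; the index and hstore column names are hypothetical):

```sql
DROP INDEX IF EXISTS imports_data_gin_idx;

UPDATE cars.imports i
SET    make_id = md.make_id
FROM   cars.models md
WHERE  i.model_id = md.id;

-- Rebuild once, instead of maintaining the index row by row:
CREATE INDEX imports_data_gin_idx ON cars.imports USING gin (data);
```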
On 01/21/2013 05:02 PM, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
Oh well. I can't find it but maybe it got lost in shipping or eaten by a
spam filter.
On Tue, Jan 22, 2013 at 1:23 PM, Steve Crawford
On 01/21/2013 06:21 PM, Kevin Grittner wrote:
Kevin Grittner wrote:
...
shared_buffers | 32MB | configuration file
...
I did absolutely no tuning from the default configuration.
But Tim has his shared_buffers set to 1600kB
(shared_buffers;1600kB;configuration file)
Steve Crawford escribió:
On 01/21/2013 05:02 PM, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
Oh well. I can't find it but maybe it got lost in shipping or eaten
by a spam filter.
This is what we have the archives are for:
On 01/22/2013 10:59 AM, Alvaro Herrera wrote:
Steve Crawford escribió:
On 01/21/2013 05:02 PM, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
Oh well. I can't find it but maybe it got lost in shipping or eaten
by a spam filter.
On 21/01/13 08:04, Tim Uckun wrote:
This is the query I am running
update cars.imports i
set make_id = md.make_id
from cars.models md where i.model_id = md.id;
Here is the analyse
Looks like it's the actual update that's taking all the time.
This query takes fifty seconds on a
Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.
SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;
Takes about 300 ms
CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models md
ON i.model_id = md.id;
On 21/01/13 10:30, Tim Uckun wrote:
Can you try a couple of things just to check timings. Probably worth EXPLAIN
ANALYSE.
SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id =
md.id;
Takes about 300 ms
CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models
Richard Huxton wrote:
The only differences I can think of are WAL logging (transaction
log) and index updates (the temp table has no indexes).
What about foreign keys? Are there any tables which reference the
updated column in a foreign key declaration? Do they have indexes
on that column?
Just to close this up and give some guidance to future googlers...
There are no foreign keys. The table is empty when I start. I copy the
data into it from a text file.
Removing indexes didn't help much (made a very slight difference).
running a query CREATE TEMP TABLE tt AS SELECT using a
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future googlers...
Careful, future googlers.
Conclusion. Updates on postgres are slow
Nope.
(given the default
postgresql.conf). I presume this is due to MVCC or the WAL or
something and there are probably
On Mon, Jan 21, 2013 at 1:23 PM, Richard Huxton d...@archonet.com wrote:
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future googlers...
Careful, future googlers.
Conclusion. Updates on postgres are slow
Nope.
(given the default
Nope.
If you have any suggestions I am all ears. For the purposes of this
discussion we can narrow down the problem this update statement.
Update imports set make_id = null.
There are 98K records in the table. There is no index on the make_id
field. Standard untouched postgresql.conf from the
I'd be curious to see results of the same update on a standard HDD
vs the SSD, and maybe on a more typical database deployment hardware
vs a macbook air.
I haven't tried it on any other machine yet. CREATE TEMP TABLE tt as
SELECT ... takes eight seconds so presumably the disk is not the
Richard Huxton wrote:
On 21/01/13 20:09, Tim Uckun wrote:
Just to close this up and give some guidance to future
googlers...
Careful, future googlers.
+1
Conclusion. Updates on postgres are slow
Nope.
Agreed.
(given the default postgresql.conf). I presume this is due to
MVCC or the
Tim Uckun wrote:
If you have any suggestions I am all ears. For the purposes of this
discussion we can narrow down the problem this update statement.
Update imports set make_id = null.
Well, that simplifies things.
First off, what does it say for rows affected? (Hint, if you really
are
On Mon, Jan 21, 2013 at 1:48 PM, Tim Uckun timuc...@gmail.com wrote:
I'd be curious to see results of the same update on a standard HDD
vs the SSD, and maybe on a more typical database deployment hardware
vs a macbook air.
I haven't tried it on any other machine yet. CREATE TEMP TABLE tt
Kevin Grittner wrote:
First off, what does it say for rows affected? (Hint, if you really
are using a default configuration and it doesn't say 0 rows
affected, please show us the actual query used.)
Never mind that bit -- I got myself confused. Sorry for the noise.
-Kevin
First off, what does it say for rows affected? (Hint, if you really
are using a default configuration and it doesn't say 0 rows
affected, please show us the actual query used.)
update imports set make_id = null
Query returned successfully: 98834 rows affected, 49673 ms execution time.