Rikard Pavelic writes:
> Is it possible to speed up deletes which have null references so they don't
> check if a reference is valid?
You're thinking about the problem backwards. Since the table is
self-referential, each row is both a PK (referenced) row and an FK
(referencing) row. In its rol
Is it possible to speed up deletes which have null references so they don't
check if a reference is valid?
I had this scenario:
--large table not referenced from other places
CREATE TABLE large_table
(
id bigserial primary key,
ref_1 bigint not null,
ref_2 bigint not null,
at_1 timestamptz n
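With that schema, and assuming ref_1/ref_2 are the self-references Tom describes, the usual remedy is to index the referencing columns so the FK check fired for each deleted row becomes an index probe (index names invented here):

```sql
-- Sketch: the DELETE-side FK triggers must find any rows that still
-- reference the deleted id; without these indexes each such check
-- can scan the whole table.
CREATE INDEX idx_large_table_ref_1 ON large_table (ref_1);
CREATE INDEX idx_large_table_ref_2 ON large_table (ref_2);
```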
On 10/16/2012 04:41 PM, Filippos Kalamidas wrote:
the first thing you should probably do is run an 'analyze' on one of
these tables and then run again the delete statement. if there are no
stats for these tables, it's normal not to have very good plans.
Yep, and the fact that the stats are that
the first thing you should probably do is run an 'analyze' on one of these
tables and then run again the delete statement. if there are no stats for
these tables, it's normal not to have very good plans.
On Tue, Oct 16, 2012 at 11:24 AM, Sylvain CAILLET wrote:
> Hi Craig,
>
> Here are the outpu
Hi Craig,
Here are the outputs :
flows=# explain analyze delete from agg_t377_incoming_a40_dst_net_f5 where
start_date < 1346487911000;
QUERY PLAN
---
Seq
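That seq scan is what you would expect if start_date carries no index; a sketch of the first thing to try (index name invented):

```sql
-- Sketch: a btree index lets a range predicate such as
-- start_date < ... use an index scan when it is selective enough.
CREATE INDEX idx_agg_start_date
    ON agg_t377_incoming_a40_dst_net_f5 (start_date);
```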
On 10/16/2012 03:50 PM, Sylvain CAILLET wrote:
Hi to all,
I've got a trouble with some delete statements. My db contains a little
more than 1 tables and runs on a dedicated server (Debian 6 - bi
quad - 16Gb - SAS disks raid 0). Most of the tables contain between 2
and 3 million rows and no
Hi Sékine,
You're right: my question is why the planner doesn't use the index! My DELETE
statements have a WHERE clause like: start_date<134648610. They are run to
delete rows that are too old.
My postgresql version is 8.4. Below is an example of a table (they all have the
same structure) :
Hi Sylvain,
Might sound like a nasty question, and gurus will correct me if I'm wrong,
but the first thing to investigate is why the index is not used:
- You have 2-3 million rows per table, so the planner should use the index;
a seq scan is preferred only for small tables.
- Maybe the WHERE clause of your DEL
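Two quick checks along those lines (a seq scan is also the right plan when the predicate matches most of the table):

```sql
-- Sketch: refresh statistics, then compare the planner's estimated
-- row count against the table size; if most rows match the
-- predicate, a seq scan is genuinely cheaper than an index scan.
ANALYZE agg_t377_incoming_a40_dst_net_f5;
EXPLAIN DELETE FROM agg_t377_incoming_a40_dst_net_f5
 WHERE start_date < 1346487911000;
```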
On Fri, Jun 26, 2009 at 1:34 AM, Janet Jacobsen wrote:
> Thank you for the answers. Very helpful.
>
> Between the time that I sent my original post and saw your reply,
> I tried to drop a couple of foreign key constraints. The alter
> table statements also showed up as "waiting" when I ran ps aux
Hi. I posted a question about a very slow DELETE on a table
with 12 M rows last week, and I wanted to (1) thank everyone
who provided a reply since each clue helped to find the solution,
and (2) give the solution.
The slow DELETE was due to another user having a lock on
the table - which several
2009/6/27 Scott Carey :
> In addition to the above, note that long lived transactions cause all sorts
> of other problems in the database. In particular, table and index bloat can
> become severe due to this sort of poor client behavior if there is a lot of
> update or delete activity. You can fi
On 6/26/09 6:36 PM, "Robert Haas" wrote:
> 2009/6/26 Janet Jacobsen :
>> Hi. The user in question is using psycopg2, which he uses
>> psycopg2:
>>> import psycopg2
>>> conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s"
>>> ...)
>>> pg_cursor = conn.cursor()
>>> pg_cursor
2009/6/26 Janet Jacobsen :
> Hi. The user in question is using psycopg2, which he uses
> psycopg2:
>> import psycopg2
>> conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s" ...)
>> pg_cursor = conn.cursor()
>> pg_cursor.execute()
>> rows = pg_cursor.fetchall()
> Note that
> (1
Hi. The user in question is using psycopg2, which he uses
psycopg2:
> import psycopg2
> conn = psycopg2.connect("dbname=%s user=%s host=%s password=%s port=%s" ...)
> pg_cursor = conn.cursor()
> pg_cursor.execute()
> rows = pg_cursor.fetchall()
Note that
(1) he said that he does not set an isolat
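Even a bare SELECT through psycopg2 opens a transaction, and until the script calls conn.commit() or conn.rollback() the session sits "idle in transaction", holding locks that block ALTER TABLE and plain CREATE INDEX. A sketch for spotting such sessions, using the 8.3-era pg_stat_activity columns (newer releases renamed procpid/current_query to pid/query and added a state column):

```sql
-- Sketch: list backends sitting in an open transaction without
-- running anything (8.3-era catalog columns).
SELECT procpid, usename, xact_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY xact_start;
```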
On Fri, Jun 26, 2009 at 9:34 AM, Janet Jacobsen wrote:
> I assume that killing the user's process released the lock on the
> table. This user has only SELECT privileges. Under what
> conditions would a SELECT lock a table. The user connects
> to the database via a (Python?) script that runs on
Thank you for the answers. Very helpful.
Between the time that I sent my original post and saw your reply,
I tried to drop a couple of foreign key constraints. The alter
table statements also showed up as "waiting" when I ran ps aux.
I took your suggestion to run pg_locks and pg_stat_activit
Greg Stark wrote:
"waiting" means it's blocked trying to acquire a lock. Some open
transaction has the table you're trying to index locked. Look in
pg_locks and pg_stat_activity to find out who.
Or you might find CREATE INDEX CONCURRENTLY fits your situation.
http://www.postgresql.org/docs/8.3
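A sketch of that pg_locks / pg_stat_activity lookup, with 8.3-era column names and a made-up table name (your_table):

```sql
-- Sketch: show who holds or is waiting for locks on a given table.
-- granted = false marks the waiters; the granted rows in the same
-- output are the sessions blocking them.
SELECT l.pid, l.mode, l.granted, a.usename, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE l.relation = 'your_table'::regclass;
```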
On Fri, Jun 26, 2009 at 3:33 AM, Janet Jacobsen wrote:
> (1) is my interpretation of the posts correct, i.e., if I am deleting
> rows from
> table1, where the pkey of table 1 is a fkey in table 2, then do I need
> to create an
> index on the fkey field in table 2?
Exactly right. The index on the t
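A sketch of that fix, with hypothetical names (table2.table1_id standing for the fkey column that references table1):

```sql
-- Sketch: without this index, every row deleted from table1 forces
-- a sequential scan of table2 to verify nothing still references it.
CREATE INDEX idx_table2_table1_id ON table2 (table1_id);
```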
Hi. We are running Postgres 8.3.7 on an eight-processor Linux system.
Because the machine has very little local disk, the database files are on
a file system running GPFS.
The machine is mostly dedicated to processing images. After the images
are processed, the image attributes and processing pa
On Friday 04 July 2008, [EMAIL PROTECTED] wrote:
> > My next question is: what is the difference between "select" and
> > "delete"? There is another table that has one foreign key to reference
> > the test (parent) table that I am deleting from and this foreign key
> > does not have an index on it
> My next question is: what is the difference between "select" and "delete"?
> There is another table that has one foreign key to reference the test
> (parent) table that I am deleting from and this foreign key does not have
> an index on it (a 330K row table).
The difference is that with SELECT y
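One way to see the cost difference directly, using the table and predicate from the thread: EXPLAIN ANALYZE on the DELETE reports per-trigger timing that a SELECT never incurs.

```sql
-- Sketch: the "Trigger for constraint ..." lines in the output show
-- time spent checking the foreign key for each deleted row; a plain
-- SELECT fires no such triggers. Wrap in a transaction to avoid
-- actually removing the rows while measuring.
BEGIN;
EXPLAIN ANALYZE DELETE FROM test WHERE cola = 'abc';
ROLLBACK;
```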
side?
Thanks a lot,
Jessica
- Original Message
From: Craig Ringer <[EMAIL PROTECTED]>
To: Jessica Richard <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Sent: Friday, July 4, 2008 1:16:31 AM
Subject: Re: [PERFORM] slow delete
Jessica Richard wrote:
> I have a tab
by the way, there is a foreign key on another table that references the
primary key col0 on table test.
Is there an index on the referencing field in the other table ? Postgres
must find the rows referencing the deleted rows, so if you forget to index
the referencing column, this can tak
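A sketch, with the referencing table's name invented (referring_table, whose col0 column carries the foreign key to test.col0):

```sql
-- Sketch: index the referencing column so the FK check fired for
-- each deleted row of test is an index probe, not a table scan.
CREATE INDEX idx_referring_col0 ON referring_table (col0);
```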
Jessica Richard wrote:
I have a table with 29K rows total and I need to delete about 80K out of it.
I assume you meant 290K or something.
I have a b-tree index on column cola (varchar(255) ) for my where clause
to use.
my "select count(*) from test where cola = 'abc'" runs very fast,
but m
I have a table with 29K rows total and I need to delete about 80K out of it.
I have a b-tree index on column cola (varchar(255) ) for my where clause to use.
my "select count(*) from test where cola = 'abc'" runs very fast,
but my actual "delete from test where cola = 'abc';" takes forever, neve
First off, thanks, Tom.
Although I've been very careful about this kind of thing, it looks like I
missed one index on a referencing column. Still, I don't allow an entire
delete of a table if it has referencing columns with values, so at the
moment of the deletion, it has no rows at all.
I checked da
"Octavio Alvarez" <[EMAIL PROTECTED]> writes:
> Please tell me if this timing makes sense to you for a Celeron 433 w/
> RAM=256MB dedicated testing server. I expected some slowness, but not this
> high.
I'll bet you have foreign keys referencing this table, and the
referencing columns do not have
Octavio Alvarez wrote:
Please tell me if this timing makes sense to you for a Celeron 433 w/
RAM=256MB dedicated testing server. I expected some slowness, but not this
high.
Well, DELETE is generally slow. If you want to delete the entire table
(and you're really sure),
use TRUNCATE.
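A sketch, using the table name from the thread; note TRUNCATE removes all rows, cannot be limited by a WHERE clause, and is refused if other tables hold foreign keys pointing at this one (unless you truncate them together or use CASCADE):

```sql
-- Sketch: reclaim the whole table at once instead of row by row.
TRUNCATE TABLE t_active_subjects;
```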
J
Please tell me if this timing makes sense to you for a Celeron 433 w/
RAM=256MB dedicated testing server. I expected some slowness, but not this
high.
db_epsilon=# \d t_active_subjects
Table "public.t_active_subjects"
Column | Type |