On Thu, Nov 29, 2012 at 11:58 AM, Claudio Freire wrote:
> On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis wrote:
>>
>> I tried a quick test with 2M tuples and 3 indexes over int8, numeric,
>> and text (generated data). There was also an unindexed bytea column.
>> Using my laptop, a full update of the
On Thu, Nov 29, 2012 at 10:14 AM, Mike Blackwell wrote:
>
>
>
> On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes wrote:
>>
>>
>> But if you do keep the drop index inside the transaction, then you
>> would probably be better off using truncate rather than delete, and
>> rebuild the index non-concurrently and move that inside the
>> transaction as well.
On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis wrote:
> On Thu, 2012-11-29 at 00:48 -0300, Claudio Freire wrote:
>> Not really that fast if you have indices (and who doesn't have a PK or two).
>>
>> I've never been able to update (update) 2M rows in one transaction in
>> reasonable times (read: less than several hours) without dropping
>> indices.
On Thu, 2012-11-29 at 00:48 -0300, Claudio Freire wrote:
> Not really that fast if you have indices (and who doesn't have a PK or two).
>
> I've never been able to update (update) 2M rows in one transaction in
> reasonable times (read: less than several hours) without dropping
> indices. Doing it
On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes wrote:
>
> But if you do keep the drop index inside the transaction, then you
> would probably be better off using truncate rather than delete, and
> rebuild the index non-concurrently and move that inside the
> transaction as well.
>
>
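A sketch of that suggestion as SQL (table, index, and column names here are placeholders, not from the thread):

```sql
BEGIN;
-- Drop the index so the load doesn't have to maintain it row by row.
DROP INDEX big_table_idx;
-- TRUNCATE reclaims the old heap immediately; DELETE would leave
-- dead tuples that can't be vacuumed until the transaction ends.
TRUNCATE big_table;
COPY big_table FROM STDIN;
-- A plain (non-concurrent) CREATE INDEX may run inside the transaction.
CREATE INDEX big_table_idx ON big_table (col);
COMMIT;
```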
Hmm From
On Thu, Nov 29, 2012 at 8:38 AM, Mike Blackwell wrote:
>
> What I'm trying at this point is:
>
> BEGIN;
> DROP INDEX -- only one unique index exists
> DELETE FROM table;
> COPY table FROM STDIN;
> COMMIT;
> CREATE INDEX CONCURRENTLY;
>
> Do I understand correctly that DROP/CREATE index are not transactional?
Ah. So it does. Testing with two psql sessions locks as you said, and
moving the DROP INDEX to a separate transaction gives the results I was
looking for.
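The variant Mike landed on, sketched with placeholder names; the short first transaction releases the lock taken by DROP INDEX before the long load begins:

```sql
-- Short transaction: the exclusive lock from DROP INDEX is released
-- right away instead of being held for the whole load.
BEGIN;
DROP INDEX big_table_idx;
COMMIT;

BEGIN;
DELETE FROM big_table;
COPY big_table FROM STDIN;
COMMIT;

-- Outside any transaction block: CONCURRENTLY cannot run inside one.
CREATE INDEX CONCURRENTLY big_table_idx ON big_table (col);
```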
Thanks,
Mike
On Thu, Nov 29, 2012 at 9:38 AM, Mike Blackwell wrote:
> On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire
> wrote:
>>
>> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis wrote:
>> >
>> > The main problem with a long-running delete or update transaction is
>> > that the dead tuples (deleted tuples or the old version of an updated
>> > tuple) can't be removed until the transaction finishes.
On Wed, Nov 28, 2012 at 9:48 PM, Claudio Freire wrote:
> On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis wrote:
> >
> > The main problem with a long-running delete or update transaction is
> > that the dead tuples (deleted tuples or the old version of an updated
> > tuple) can't be removed until the transaction finishes.
On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis wrote:
>
> The main problem with a long-running delete or update transaction is
> that the dead tuples (deleted tuples or the old version of an updated
> tuple) can't be removed until the transaction finishes. That can cause
> temporary "bloat", but 1.5M
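While such a transaction is open, the accumulating dead tuples can be watched in the standard statistics view (the table name below is a placeholder):

```sql
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'big_table';
```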
On Tue, 2012-11-27 at 16:04 -0600, Mike Blackwell wrote:
> I need to delete about 1.5 million records from a table and reload it
> in one transaction. The usual advice when loading with inserts seems
> to be group them into transactions of around 1k records. Committing
> at that point would leave the table in an inconsistent state.
On Tue, 2012-11-27 at 22:16 -0300, Claudio Freire wrote:
> Updates are faster if batched, if your business logic allows it,
> because batching creates less bloat and creates more opportunities for
> HOT updates. I don't think it applies to inserts, though, and I
> haven't heard that it does.
Huge upd
Commit marks are written to disk after each transaction, so transaction
size has an impact on performance.
Date: Wed, 28 Nov 2012 09:18:20 -0600
Subject: Re: [PERFORM] Savepoints in transactions for speed?
From: mike.blackw...@rrd.com
To: klaussfre...@gmail.com
CC: pgsql-performance@postgresql.org
On Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire wrote:
> Updates are faster if batched, if your business logic allows it,
> because batching creates less bloat and creates more opportunities for
> HOT updates. I don't think it applies to inserts, though, and I
> haven't heard that it does.
>
Ah. Th
Savepoints are not created for performance. If you have one very long-running
transaction that fails at the end, it will all be rolled back. So be pretty
sure about your data quality, or use savepoints.
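For clarity, what a savepoint buys is partial rollback, not speed. A minimal sketch (table and savepoint names are placeholders):

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT batch_done;
INSERT INTO t VALUES (2);          -- suppose this statement fails
ROLLBACK TO SAVEPOINT batch_done;  -- undoes only work after the savepoint
COMMIT;                            -- the first insert still commits
```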
On Tue, Nov 27, 2012 at 6:26 PM, Steve Atkins wrote:
>
> On Nov 27, 2012, at 2:04 PM, Mike Blackwell
> wrote:
>
> > I need to delete about 1.5 million records from a table and reload it in
> one transaction. The usual advice when loading with inserts seems to be
> group them into transactions of around 1k records.
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell wrote:
>
> > Postgresql isn't going to run out of resources doing a big transaction, in
> > the way some other databases will.
>
> I thought I had read something at one point about keeping the transaction
> size on the order of a couple thousand because there were issues when it
> got larger.
Steve Atkins wrote:
> Postgresql isn't going to run out of resources doing a big transaction,
> in the way some other databases will.
I thought I had read something at one point about keeping the transaction
size on the order of a couple thousand because there were issues when it
got larger. As th
Mike,
Is there anything that the 1.5 million rows have in common that would allow you
to use partitions? If so, you could load the new data into a partition at
your leisure, start a transaction, alter the partition table with the old data
to no longer inherit from the parent, alter the new partition to inherit from
the parent, and commit.
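A sketch of that partition swap using inheritance-style partitioning (the kind available in 2012-era PostgreSQL); all table names are placeholders:

```sql
-- Load the replacement data at leisure, outside any long transaction.
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);
COPY big_table_new FROM STDIN;

-- The swap itself is quick: it only touches the catalogs.
BEGIN;
ALTER TABLE big_table_old NO INHERIT big_table;
ALTER TABLE big_table_new INHERIT big_table;
COMMIT;

DROP TABLE big_table_old;
```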
On Nov 27, 2012, at 2:04 PM, Mike Blackwell wrote:
> I need to delete about 1.5 million records from a table and reload it in one
> transaction. The usual advice when loading with inserts seems to be group
> them into transactions of around 1k records. Committing at that point would
> leave
On 27/11/12 22:04, Mike Blackwell wrote:
I need to delete about 1.5 million records from a table and reload it
in one transaction.
The data to reload the table is coming from a Perl DBI connection to a
different database (not PostgreSQL) so I'm not sure the COPY
alternative applies here.
No
I need to delete about 1.5 million records from a table and reload it in
one transaction. The usual advice when loading with inserts seems to be
group them into transactions of around 1k records. Committing at that
point would leave the table in an inconsistent state. Would issuing a
savepoint e