Csaba Nagy wrote:
> On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
>>> Unfortunately I don't think this will work. Multiple backends will happily
>>> pick up the same ctid in their selects and then try to delete the same
>>> records.
>> I'm pretty sure he said that the batch processing (and the
Andrew,
Thanks for your input, comments below.
On Thu, 2007-07-26 at 13:30, Andrew Kroeger wrote:
> After reading through this thread, I have an idea that should accomplish
> what I believe are your 3 main goals (avoid any negative performance
> impact on the user's inserts, do not lose any data
On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
> > Unfortunately I don't think this will work. Multiple backends will happily
> > pick up the same ctid in their selects and then try to delete the same
> > records.
>
> I'm pretty sure he said that the batch processing (and the delete) would
> on
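One way to guarantee that single-batch-processor assumption is to take an advisory lock around the whole batch run, so a second job instance simply skips its turn. A minimal sketch, not something proposed in the thread (the lock key 42 is arbitrary):

select pg_try_advisory_lock(42);   -- returns false if another batch run already holds it
-- ... run the chunked deletes only if it returned true ...
select pg_advisory_unlock(42);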
On Tue, 24 Jul 2007, Csaba Nagy wrote:
> > How about using the following?
> >
> > delete from <table>
> > where ctid in (select ctid from <table> limit <num>);
> >
>
> I actually checked this out before starting this thread, and the plan
> looked like:
>
> explain delete from my_table where ctid in (select ctid from my_table limit 10);
On Tue, 24 Jul 2007, Gregory Stark wrote:
> "Csaba Nagy" <[EMAIL PROTECTED]> writes:
>
> >> Unfortunately the stuff that makes a ctid= nice doesn't seem to be
> >> used when you're doing an in. It's possible that a function that does
> >> something like
> >> for rec in select ctid from my_table
"Csaba Nagy" <[EMAIL PROTECTED]> writes:
>> Unfortunately the stuff that makes a ctid= nice doesn't seem to be
>> used when you're doing an in. It's possible that a function that does
>> something like
>> for rec in select ctid from my_table limit 10 loop
>> delete from my_table where ctid=rec.ctid;
On Tue, 2007-07-24 at 18:19, Tom Lane wrote:
> [ thinks for a bit ... ] Actually, you can do it as of 8.2 or so,
> by abusing the ScalarArrayOp stuff: turn the subquery into an array.
> An example in the regression database:
>
> regression=# explain update tenk1 set ten=ten+1
> regression-# whe
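A minimal sketch of the array form being described (the WHERE clause below is an assumption, since the example above is cut off):

explain update tenk1 set ten=ten+1
  where ctid = any (array(select ctid from tenk1 limit 10));

The ARRAY(subquery) constructor turns the subselect into a tid[] value, so the comparison against ctid becomes a ScalarArrayOp and the 8.2 planner gets a chance to use a TID-based scan instead of joining the table to itself on ctid.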
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Unfortunately the stuff that makes a ctid= nice doesn't seem to be
> used when you're doing an in.
Yeah, see the header comments in tidpath.c:
* There is currently no special support for joins involving CTID; in
* particular nothing corresponding to b
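To make the comment concrete, a small sketch of the two cases (the literal ctid value is hypothetical; my_table as used elsewhere in the thread):

-- a plain equality on ctid is planned as a TID scan:
delete from my_table where ctid = '(0,1)';

-- an IN over a subquery on ctid is not; the planner has no join-style
-- support for ctid, which is why the chunked delete gets a poor plan:
delete from my_table where ctid in (select ctid from my_table limit 10);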
> Unfortunately the stuff that makes a ctid= nice doesn't seem to be
> used when you're doing an in. It's possible that a function that does
> something like
> for rec in select ctid from my_table limit 10 loop
> delete from my_table where ctid=rec.ctid;
> end loop
> might do okay, but I haven'
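A minimal sketch of such a function (the function name is made up; my_table is from the thread; it shares the concurrency caveat raised earlier, since two backends running it at once could pick up the same ctids):

create or replace function delete_chunk(batch_size integer) returns integer as $$
declare
    rec record;
    n   integer := 0;
begin
    -- grab a batch of physical row ids, then delete them one by one;
    -- each delete is a plain ctid equality, so it is planned as a cheap TID scan
    for rec in select ctid from my_table limit batch_size loop
        delete from my_table where ctid = rec.ctid;
        n := n + 1;
    end loop;
    return n;
end;
$$ language plpgsql;

Calling select delete_chunk(1000); repeatedly until it returns 0 would drain the table in batches.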
Csaba Nagy wrote:
> First of all, thanks for all the suggestions.
>
>> put a SERIAL primary key on the table
> Or:
>> Maybe add OIDs to the table, and delete based on the OID number?
>
> No, this is not acceptable: it adds overhead to the insertions. Normally
> the overhead will be small enough,
> How about using the following?
>
> delete from <table>
> where ctid in (select ctid from <table> limit <num>);
>
I actually checked this out before starting this thread, and the plan
looked like:
> explain delete from my_table where ctid in (select ctid from my_table
limit 10);
First of all, thanks for all the suggestions.
> put a SERIAL primary key on the table
Or:
> Maybe add OIDs to the table, and delete based on the OID number?
No, this is not acceptable: it adds overhead to the insertions. Normally
the overhead will be small enough, but on occasions it is noticeable
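For comparison, the suggestion being rejected here would look roughly like this sketch (table and column names are made up; the insert-side cost is the sequence fetch behind the serial column):

delete from event_buffer
 where id in (select id from event_buffer order by id limit 1000);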
On Mon, 2007-07-23 at 17:56 +0200, Csaba Nagy wrote:
> Now I don't put too much hope I can convince anybody that the limit on
> the delete/update commands has valid usage scenarios, but then can
> anybody help me find a good solution to chunk-wise process such a buffer
> table where insert speed i
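To make the request concrete, a sketch of the wished-for syntax next to the subquery workaround that exists today (the table name is hypothetical; as discussed elsewhere in the thread, the plan the workaround gets on ctid is the real problem):

-- wished-for, but not valid PostgreSQL:
-- delete from event_buffer limit 1000;

-- what can be written today, at the cost of a poorly planned subselect on ctid:
delete from event_buffer
 where ctid in (select ctid from event_buffer limit 1000);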
On 07/23/07 10:56, Csaba Nagy wrote:
> Hi all,
>
> This subject was touched a few times in the past, I looked into the
> archives... the result is invariably key developers saying such a
> feature is unsafe because the result is unpredictable, while t
[EMAIL PROTECTED] (Csaba Nagy) writes:
> In postgres we're currently not chunking, due to the fact that the code
> to do it is simply overly contorted and inefficient compared to the
> other DBs we use. At least all the solutions we could think of to do the
> chunking in a safe way while the insert
Hi all,
This subject was touched a few times in the past, I looked into the
archives... the result is invariably key developers saying such a
feature is unsafe because the result is unpredictable, while the people
requesting it are saying it is OK that way, it is expected... but no
compelling use case