Well, I have solved it by running with more RAM, and now it works correctly.

Thanks
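
For reference, if the memory growth came from deferred foreign-key checks
being queued up for the whole transaction (as discussed below), a possible
alternative to adding RAM is forcing immediate checking inside the
transaction. This is only a sketch, assuming the constraints and insert
order allow immediate checks; `mytable` and the statements are placeholders:

```sql
BEGIN;
-- Check deferrable constraints row by row as statements run,
-- instead of queueing every check in memory until COMMIT:
SET CONSTRAINTS ALL IMMEDIATE;
-- ... the ~400,000 inserts/updates go here ...
COMMIT;
```

This keeps everything in a single transaction, as required, while bounding
the deferred-trigger queue.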



2010/10/28 Cédric Villemain <cedric.villemain.deb...@gmail.com>

> 2010/10/28 Trenta sis <trenta....@gmail.com>:
> >
> >
> > There are about 100,000 inserts and 300,000 updates. Without a
> > transaction it seems to work, but within a transaction it does not.
> > With only about 300,000 updates it can finish correctly, but the last
> > 20% is slow because it is using swap...
> >
> > Is there any tuning to do in this configuration, or is it correct?
>
> You should post your queries and the definitions of the tables involved.
>
> >
> > thanks
> >
> > 2010/10/28 Craig Ringer <cr...@postnewspapers.com.au>
> >>
> >> On 10/28/2010 02:38 AM, Trenta sis wrote:
> >>>
> >>> Hi,
> >>>
> >>> I have a Linux server (Debian) with Postgres 8.3 and I have problems
> >>> with a massive update, about 400,000 updates/inserts.
> >>> If I execute about 100,000 it all seems OK, but when I execute
> >>> 400,000 I have the same problem with or without a transaction (I
> >>> need to do it within a transaction): memory and disk usage increase.
> >>> With an execution of 400,000 inserts/updates the server begins
> >>> working well, but after 100 seconds of execution RAM usage
> >>> increases, then swap, and finally all RAM and swap are used and the
> >>> execution can't finish.
> >>
> >> Do you have lots of triggers on the table? Or foreign key
> >> relationships that are DEFERRABLE?
> >>
> >> --
> >> Craig Ringer
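
A quick way to check both of the questions above against the system
catalogs (a sketch; `mytable` is a placeholder for the table being
updated):

```sql
-- List triggers defined on the table:
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'mytable'::regclass;

-- List constraints and whether they are DEFERRABLE / INITIALLY DEFERRED:
SELECT conname, contype, condeferrable, condeferred
FROM pg_constraint
WHERE conrelid = 'mytable'::regclass;
```

Deferred constraint checks are queued in memory until they fire, which is
one way a single large transaction can grow RAM usage with the row count.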
> >
> >
> >
>
>
>
> --
> Cédric Villemain               2ndQuadrant
> http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support
>
