Jeff, I think adding the new table is the best way to handle this issue.
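(Just to make that concrete, a rough sketch of what the separate table could look like; the table name document_svm_confidence and the integer id type are assumptions, only id and "svmConfidence" come from the thread:)

    -- hypothetical side table holding only the new field, keyed by document id
    CREATE TABLE document_svm_confidence (
        id              integer PRIMARY KEY,   -- same values as document.id
        "svmConfidence" double precision
    );

    -- load it from the staging table instead of updating the 124-million-row table
    INSERT INTO document_svm_confidence (id, "svmConfidence")
    SELECT id, svmconfidence FROM svm_confidence_id_tmp;

    -- read the value back with a join when it is needed
    SELECT d.*, c."svmConfidence"
    FROM document AS d
    LEFT JOIN document_svm_confidence AS c ON c.id = d.id;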
From: Jeff Janes
To: acanada
Cc: postgres performance list
Date: 03/18/2014 02:05 AM
Subject: Re: [PERFORM] Adding new field to big table
Sent by: pgsql-performance-ow...@postgresql.org
On Monday, March 17, 2014, acanada wrote:
> Hello,
>
> Jeff and Jeffrey thank you for your tips.
> This is the explain of the query:
> x=> explain update document as d set "svmConfidence" = st.svmconfidence
>     from svm_confidence_id_tmp as st where st.id = d.id;
>                             QUERY PLAN
On Fri, Mar 14, 2014 at 10:06 AM, acanada wrote:
> Hello Jeff,
>
> The lock time is not a problem. The problem is that it takes too much time. I
> will need to add more fields to this table in the near future and I'd like
> to know if the process can be accelerated by any parameter, workaround or
> whatever...
> Thank you for your answer.
> Cheers,
> Andrés
On Fri, Mar 14, 2014 at 4:30 AM, acanada wrote:
> Hello,
>
> I'm having time issues when adding new fields to a big table. I hope you
> can point me to some hints to speed up the updates of a table with 124 million
> rows...
>
> This is what I do:
>
> First I create a tmp_table with the data that will be added to the big table:
>
> \d+ svm_confidence_id_tmp
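(A minimal sketch of the steps being described, assuming svm_confidence_id_tmp holds one (id, svmconfidence) row per document; the column types here are assumptions:)

    -- staging table with the values to be copied into document
    CREATE TABLE svm_confidence_id_tmp (
        id            integer PRIMARY KEY,
        svmconfidence double precision
    );

    -- adding the column itself is a quick catalog-only change when no default is given
    ALTER TABLE document ADD COLUMN "svmConfidence" double precision;

    -- filling it is the slow part: every matched row in the
    -- 124-million-row table gets a new row version written
    UPDATE document AS d
    SET    "svmConfidence" = st.svmconfidence
    FROM   svm_confidence_id_tmp AS st
    WHERE  st.id = d.id;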