On 8/7/07, Decibel! <[EMAIL PROTECTED]> wrote:
> On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
> > Mark Makarowsky wrote:
> > >I have a table with 4,889,820 records in it.  The
> > >table also has 47 fields.  I'm having problems with
> > >update performance.  Just as a test, I issued the
> > >following update:
> > >
> > >update valley set test='this is a test'
> > >
> > >This took 905641 ms.  Isn't that kind of slow?
> >
> > The limiting factor here will be how fast you can write to your disk.
>
> Well, very possibly how fast you can read, too. Using your assumption of
> 1k per row, 5M rows means 5G of data, which might well not fit in
> memory. And if the entire table's been updated just once before, even
> with vacuuming you're now at 10G of data.

Where you often have to update just one column of a wide table, it's
usually a good idea to move that column into its own dependent table.
A rough sketch is below.
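For example (a sketch only; "valley" and "test" come from the original
post, but the "id" primary key and the new table name are assumptions):

  -- Narrow table holding only the frequently-updated column,
  -- keyed by the parent table's (assumed) primary key.
  CREATE TABLE valley_test (
      valley_id integer PRIMARY KEY REFERENCES valley (id),
      test      text
  );

  -- One-time copy of the existing values.
  INSERT INTO valley_test (valley_id, test)
  SELECT id, test FROM valley;

  -- Subsequent updates rewrite only these narrow rows,
  -- not all 47 columns of the wide table.
  UPDATE valley_test SET test = 'this is a test';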

Or just don't update that one column in every row of the table...
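For instance, skipping rows that already hold the target value (again a
sketch using the table and column from the original post) avoids
rewriting rows that don't need to change:

  UPDATE valley SET test = 'this is a test'
   WHERE test IS DISTINCT FROM 'this is a test';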
