Adi Alurkar <[EMAIL PROTECTED]> writes:
> 1) Add a new config parameter, e.g. work_maintenance_max_mem; this will be 
> the max memory PostgreSQL *can* claim if need be.

> 2) During the dump phase of the DB, PostgreSQL estimates the 
> maintenance_work_mem that would be required to create the index in 
> memory (if possible) and adds a
> SET maintenance_work_mem = "the calculated value"  (IF this value is less 
> than work_maintenance_max_mem.)
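
(For concreteness, I read the proposal as saying the dump script itself would
carry a line like the one below, with the value filled in by pg_dump; the
value shown is purely illustrative:

    SET maintenance_work_mem = '256MB';   -- estimate pg_dump computed at dump time
    -- ... COPY data, then the CREATE INDEX statements that would use it ...
)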

This seems fairly pointless to me.  How is this different from just
setting maintenance_work_mem as large as you can stand before importing
the dump?
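
That is just a one-liner in the restoring session before you feed it the
dump; the value and file name below are only illustrative, and on older
servers the setting has to be given in kilobytes rather than with a
memory-unit suffix:

    -- in psql, in the session that will run the dump script
    SET maintenance_work_mem = '512MB';
    \i dump.sql

or equivalently PGOPTIONS="-c maintenance_work_mem=512MB" on the psql
command line.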

Making any decisions at dump time seems wrong to me in the first place;
pg_dump should not be expected to know what conditions the restore will
be run under.  I'm not sure that's what you're proposing, but I don't
see what the point is in practice.  It's already the case that
maintenance_work_mem is treated as the maximum memory you can use,
rather than what you will use even if you don't need it all.

                        regards, tom lane
