On Wed, Sep 11, 2013 at 11:14 PM, Roberto Grandi <
roberto.gra...@trovaprezzi.it> wrote:

> Hi Guys,
>
> we found a suitable solution for our process: every 5-6 hours we run a
> CLUSTER statement on our big table. This "locks" activities but allows us
> to recover all available space.
>


If you can tolerate the locks, that is fine, but it just doesn't seem like
this should be necessary.  A manual vacuum should get the job done with
weaker locking.  Did you try running a manual vacuum every 5-6 hours
instead?  It would probably not reclaim the space, but it would make the
space available for reuse and so cap the steady-state size of the file,
hopefully to about the same size as the max size under the CLUSTER regime.
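
As a minimal sketch of that alternative (table name is a placeholder; your
actual table name will differ):

```sql
-- A plain VACUUM takes only a ShareUpdateExclusiveLock, so ordinary
-- reads and writes (including COPY ... FROM / COPY ... TO) continue
-- to run against the table, unlike CLUSTER's ACCESS EXCLUSIVE lock.
-- VERBOSE just reports how many dead tuples were found and removed.
VACUUM VERBOSE big_table;
```

The dead space is marked reusable rather than returned to the OS, which is
why the file stops growing instead of shrinking.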


> When testing this task we discovered another issue, and that's why I'm
> coming back to you for your experience:
>
> during our process we run multiple simultaneous "COPY ... FROM"
> statements in order to load data into our table, but at the same time we
> also run "COPY ... TO" statements in parallel to export data for other
> clients.
>
> We found that COPY ... TO queries are sometimes pending for more than 100
> minutes and the destination file stays at 0 Kb. Can you advise me
> how to solve this issue?
>
>

Are your COPY ... FROM statements also blocking, just in a way you are not
detecting (because there is no growing file to watch the size of)?  What
does pg_locks say?
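
For example, something along these lines would show waiting sessions and
what they are running (assuming PostgreSQL 9.2+, where the join column is
`pid` and the statement text column is `query`; older releases use
`procpid` and `current_query` in pg_stat_activity):

```sql
-- List ungranted lock requests together with the blocked statement,
-- to see whether the stuck COPY ... TO (or a COPY ... FROM) is
-- actually waiting on a lock rather than doing work.
SELECT l.pid, l.locktype, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```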

Cheers,

Jeff
