Hello

I have run into PG crashes when dealing with very large amounts of data.
It seems that PG tries to finish its task as quickly as it can and will use
as much of the machine's resources as it can get.

Later I tried cgroups to cap resource usage so that PG could not consume
too much memory too quickly, and with the limit in place PG works fine.

I edited the following files:

/etc/cgconfig.conf

mount {
    cpuset    = /cgroup/cpuset;
    cpu    = /cgroup/cpu;
    cpuacct    = /cgroup/cpuacct;
    memory    = /cgroup/memory;
    devices    = /cgroup/devices;
    freezer    = /cgroup/freezer;
    net_cls    = /cgroup/net_cls;
    blkio    = /cgroup/blkio;
}

group test1 {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }

        admin {
            uid = root;
            gid = root;
        }
    }

    memory {
        memory.limit_in_bytes = 300M;
    }
}
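
To check that the config was picked up, something like the following should
show the mounted hierarchies and the new group (a sketch, assuming the
libcgroup tools are installed; the paths match the mount section above):

lssubsys -am
ls /cgroup/memory/test1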

/etc/cgrules.conf
# <user>       <controllers>    <destination>
postgres       memory           test1/
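
To try the group without waiting for cgred to classify a new login, cgexec
from libcgroup can start a command directly inside it (just a sketch; the
shell is an arbitrary example command):

cgexec -g memory:test1 bash    # this shell and its children run under the 300M limit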
Then enable the services at boot and restart them, then log in as postgres:

chkconfig cgconfig on
chkconfig cgred on
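
(The chkconfig lines only enable the services at boot; to apply the new
config immediately, on a RHEL/CentOS-style init this would presumably be:)

service cgconfig restart
service cgred restart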

After that I can see PG running under the 300M memory limit.
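
One way to verify this (a sketch, again assuming the libcgroup tools; the
pgrep invocation just picks the oldest postgres process, i.e. the postmaster):

cat /proc/$(pgrep -o -u postgres postgres)/cgroup
cgget -r memory.limit_in_bytes -r memory.usage_in_bytes test1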

Best Regards
jian gao