On Mon, 2008-02-04 at 19:29 +0000, Simon Riggs wrote:
> On Mon, 2008-02-04 at 10:57 -0800, Jeff Davis wrote:
> 
> > I tried bringing this up on LKML several times (Ron Mayer linked to one
> > of my posts: http://lkml.org/lkml/2007/2/9/275). If anyone has an inside
> > connection to the Linux developer community, I suggest that they raise
> > this issue.
> > 
> > If you want to experiment, start a postgres process with shared_buffers
> > set at 25% of the available memory, and then start about 100 idle
> > connections. Then, start a process that just slowly eats memory, such
> > that it will invoke the OOM killer after a couple of minutes (badness()
> > takes into account the time the process has been alive, as well, so you
> > can't just eat memory in a tight loop).
> > 
> > The postgres process will always be killed first; the kernel then
> > realizes that killing it didn't alleviate the memory pressure much,
> > and only afterwards kills the runaway process.
> 
> I think the badness() thing sucks badly too, but if we don't keep our
> own house in order then they're not going to listen.

I think I am missing something; can you elaborate? What is PostgreSQL
doing wrong?
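
For anyone who wants to try the reproduction above, a minimal sketch of
the "slowly eats memory" process might look like the following. This is
only an illustration, not code from the original report: the chunk size
and sleep interval are guesses you would tune so the OOM killer fires
after a couple of minutes on your machine.

/*
 * Hypothetical memory-eater sketch: allocate and touch memory one chunk
 * at a time, sleeping between chunks so the process lives long enough
 * for badness() to weigh its runtime.
 */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    const size_t chunk = 16 * 1024 * 1024;  /* 16 MB per step -- a guess */

    for (;;)
    {
        char   *p = malloc(chunk);

        if (p == NULL)
            break;              /* allocation failed; just hold what we have */
        memset(p, 0xAA, chunk); /* touch every page so it is really committed */
        sleep(2);               /* pace the allocation over a couple of minutes */
    }
    pause();                    /* keep the memory until the kernel kills us */
    return 0;
}

Run it alongside the idle backends and watch in dmesg which pid the OOM
killer picks first.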

Regards,
        Jeff Davis

