Note that as of oneiric, pam_limits directly probes the kernel for
default limits to set (by reading /proc/1/limits), and it is our stated
policy that pam_limits should not impose any policy by default other
than the one given by the kernel itself. As such, though a feature to
allow specifying [...]
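For reference, the kernel defaults that pam_limits picks up can be
inspected directly through procfs; a trimmed example (exact values vary
by kernel and boot parameters):

    $ cat /proc/1/limits
    Limit                     Soft Limit           Hard Limit           Units
    Max data size             unlimited            unlimited            bytes
    Max address space         unlimited            unlimited            bytes
    ...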
The above fix I have written does what it is coded to do, but does NOT
create the required behaviour. If a user has a process that starts
eating memory, then the entire session will be terminated, and the user
will be booted back to the login screen.
--
Please set memory limits by default
I have the percentage-based memory limits working fine. Currently, if
the character suffixing the value in limits.conf is a %, it will
calculate the max virtual memory size based on the physical memory
size. I'm tweaking it and bug-testing it at the moment.
Do any of the other memory-limiting [...]
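As a rough sketch of the same calculation done outside pam_limits
(reading MemTotal from /proc/meminfo; the 80% figure is only an
example, not a value from the patch):

    # hypothetical stand-in for the patch's % handling: derive a
    # virtual memory cap from physical RAM (both values in kB)
    memtotal_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    ulimit -v $(( memtotal_kb * 80 / 100 ))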
In Karmic, memory limits are still not set. ulimit -a reports that data
seg size, max memory size, and virtual memory are all unlimited.
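The quick way to check this in a session (bash's ulimit output; label
spacing approximate, values depend on the system):

    $ ulimit -a | egrep 'data seg|max memory|virtual memory'
    data seg size           (kbytes, -d) unlimited
    max memory size         (kbytes, -m) unlimited
    virtual memory          (kbytes, -v) unlimited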
** Changed in: pam (Ubuntu)
       Status: Incomplete => New
--
Please set memory limits by default
https://bugs.launchpad.net/bugs/182960
** Changed in: pam (Ubuntu)
       Status: New => Triaged
** Changed in: pam (Ubuntu)
   Importance: Undecided => Wishlist
--
Please set memory limits by default
https://bugs.launchpad.net/bugs/182960
Thank you for taking the time to report this bug and helping to make
Ubuntu better. You reported this bug a while ago and there hasn't been
any activity in it recently. We were wondering if this is still an issue
for you. Can you try with the latest Ubuntu release? Thanks in advance.
** Changed [...]
Like Steve, I used to think that the Linux kernel would kill off the
process responsible if it saw that too much memory was being used. But
just two days ago, on our Jaunty server with lots of GB of RAM and no
swap, a number of system processes were killed during a long-running,
memory-consuming [...]
A memory leak at a rate of a GB/s would quickly exhaust all system
memory and trigger the OOM killer. If the memory is being allocated in
very small quantities, then it may swap for a while, but the system
will recover in due course.
Which is why, independent of whether we should be setting [...]
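For anyone bitten by this today, the per-process workaround is to run
the suspect job under a hard address-space cap, so a runaway allocation
fails with ENOMEM instead of dragging the machine into swap (./leaky is
a hypothetical program; 1048576 kB = 1 GiB):

    # the subshell keeps the limit from affecting the parent shell
    ( ulimit -v 1048576; exec ./leaky )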
I believe the problem is when an application allocates memory in very
small chunks. The OOM killer works well if you ask for 5 GB and there
is only 1 GB in the system, but nothing prevents an app from taking up
all available memory, forcing the system to grind to a halt, and even
if it might recover [...]
@Steve
I stand ready to be corrected, but my understanding is that Linux will
only kill processes when it actually runs out of memory. This only
happens when it runs out of swap. If a process is out of control, then
it will cause the system to thrash the swap and become unusable, but
long before [...]
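A kernel-side knob that bears on this (standard sysctls, nothing
pam-specific): with strict overcommit the kernel refuses allocations up
front, so a leaky process gets a failed malloc() rather than thrashing
the box until the OOM killer fires:

    # commit limit becomes swap + 80% of RAM; 2 = strict overcommit
    $ sudo sysctl -w vm.overcommit_memory=2
    $ sudo sysctl -w vm.overcommit_ratio=80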
A memory leak alone doesn't bring down a Linux system; the Linux kernel
will kill off the process responsible if it sees that too much memory is
being used.
So whatever problem you're having that requires a hard reboot is more
than this.
How did you determine that the problem you're seeing is [...]
Not sure if this is being worked on, but I think this is a VERY
important issue to fix!
Every once in a while, something I run has a massive memory leak
(GB/s), which brings down the entire system, requiring a hard reboot.
This seems FAR from ideal behavior!
While there are valid reasons [...]
A plausible solution would be to enable pam_limits by default, and to
add support for setting virtual memory limits as a percentage.
In that vein, it would also be handy to be able to set the max number
of processes based on the number of cores. A sketch of what that might
look like follows.
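To make the proposal concrete, here is what such entries in
/etc/security/limits.conf might look like (the % suffix and any
core-count scaling are hypothetical extensions; stock pam_limits only
accepts fixed values):

    # hypothetical: cap address space ('as') at 80% of physical RAM
    *    hard    as      80%
    # today nproc only takes a fixed count, e.g.:
    *    hard    nproc   1024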
--
Please set memory limits by default