To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz, nfs-ganesha-devel@lists.sourceforge.net
Date: 11/02/2015 01:24 PM
Subject: Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling
change until we have solutions for all other issues
that were discussed.
Marc.
From: mala...@linux.vnet.ibm.com
To: Marc Eshel/Almaden/IBM@IBMUS
Cc: Frank Filz, nfs-ganesha-devel@lists.sourceforge.net
Date: 11/02/2015 01:24 PM
Subject: Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling
Marc Eshel [es...@us.ibm.com] wrote:
> Yes, it looks like I am outvoted; memory management is complicated. Let me
> first say that under no condition should we reboot the node; any action
> should be limited to the Ganesha process. When we fail to get heap memory,
> then yes, kill the proc
and tune appropriately.
Frank
From: Marc Eshel [mailto:es...@us.ibm.com]
Sent: Monday, November 2, 2015 11:56 AM
To: Frank Filz
Cc: nfs-ganesha-devel@lists.sourceforge.net
Subject: RE: [Nfs-ganesha-devel] Topic for discussion - Out of Memory Handling
Yes, it looks like I am outvoted
abort before trying to reduce the cache size.
Marc.
From: "Frank Filz"
To: Marc Eshel/Almaden/IBM@IBMUS
Cc:
Date: 11/02/2015 11:24 AM
Subject: RE: [Nfs-ganesha-devel] Topic for discussion - Out of
Memory Handling
There seems to be overwhelming support for log and abort
Frank Filz [ffilz...@mindspring.com] wrote:
>
> No matter what we decide to do, another thing we need to look at is more
> memory throttling. Cache inode has a limit on the number of inodes. This is
> helpful, but is incomplete. Other candidates for memory throttling would be:
>
> Number of clients
Good discussions so far. Frankly, I don't see the point of adding code
to handle ENOMEM in some places but abort in other places. This may make
sense only if we handle the vast majority of failures and abort in only a
few very rare cases.
I am inclined to believe that recovering from a vast
Marc.
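The memory-throttling candidates quoted above (a cap on inodes, a cap on clients) all reduce to the same pattern: a bounded counter that refuses new entries past a configured limit, so the process fails a request instead of allocating without bound. A minimal sketch of that pattern; the names `client_count` and `client_limit` are illustrative, not Ganesha's actual fields:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical cap on tracked clients; a real server would read
 * this from its configuration. */
static atomic_long client_count;
static long client_limit = 1024;

/* Try to reserve a slot for a new client record. Returns false when
 * the limit is reached so the caller can reject the request rather
 * than grow memory use unboundedly. */
static bool client_slot_get(void)
{
	long old = atomic_load(&client_count);

	do {
		if (old >= client_limit)
			return false;
		/* On CAS failure, 'old' is reloaded and we re-check. */
	} while (!atomic_compare_exchange_weak(&client_count, &old, old + 1));
	return true;
}

/* Release a previously reserved slot. */
static void client_slot_put(void)
{
	atomic_fetch_sub(&client_count, 1);
}
```

The compare-and-swap loop keeps the check-and-increment atomic, so concurrent worker threads cannot overshoot the limit.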
From: "Frank Filz" <ffilz...@mindspring.com>
To: <nfs-ganesha-devel@lists.sourceforge.net>
Date: 10/28/2015 11:55 AM
Subject: [Nfs-ganesha-devel] Topic for discussion - Out of Memory
Handling
We have had various discussions over the years as to how to best handle out
of memory conditions.
Frank Filz wrote on Thu, Oct 29, 2015 at 12:00:31PM -0700:
> > In another situation the Linux OOM killer might have already killed
> > other important processes trying to free memory for the NFS server. You
> > wouldn't want to recover the NFS process here, since you don't know if
> > that sy
The discussion so far has discussed how to handle a failure to allocate more
memory from the system allocator (your native C library malloc, or some
replacement library). However, my opinion is that if such a system allocation
fails, it's already too late for any meaningful reaction. You've exhausted
On 10/29/15 10:52 AM, Matt W. Benjamin wrote:
> I think this is probably the best approach for now, ++.
>
> Matt
- "Frank Filz" wrote:
> From: Swen Schillig [mailto:s...@vnet.ibm.com]
> > Regardless of what's decided on how to react to out of memory
> > conditions, we must check and detect them, fast and reliably, always.
> > It is not accept
From: Swen Schillig [mailto:s...@vnet.ibm.com]
> Besides, memory allocations or other operations which could possibly end in
> blocking or otherwise fatal situations must be avoided while holding a lock
> anyway.
> We must try to prevent such "no-way-out" situations.
So the problem in this unloc
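The usual way to honor the rule quoted above is to do the allocation before entering the critical section, so a slow or failing malloc can never stall (or abort) while the lock is held. A sketch under assumed names; `struct entry`, `table_lock`, and `table_insert` are hypothetical, not Ganesha code:

```c
#include <pthread.h>
#include <stdlib.h>

struct entry {                  /* hypothetical cached object */
	struct entry *next;
	int key;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *table_head;

/* Allocate outside the critical section: on ENOMEM we return an
 * error with no lock held, and the lock is only taken around the
 * cheap, non-blocking list update. */
static int table_insert(int key)
{
	struct entry *e = malloc(sizeof(*e));

	if (e == NULL)
		return -1;      /* report failure; nothing to unwind */
	e->key = key;

	pthread_mutex_lock(&table_lock);
	e->next = table_head;
	table_head = e;
	pthread_mutex_unlock(&table_lock);
	return 0;
}
```

If the insert turns out to be unnecessary (say, a duplicate is found under the lock), the caller simply frees the pre-allocated entry after unlocking.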
From: Swen Schillig [mailto:s...@vnet.ibm.com]
> Regardless of what's decided on how to react to out of memory conditions, we
> must check and detect them, fast and reliably, always.
> It is not acceptable to silently ignore such a condition and risk crashing
> or modifying other memory areas.
Yes, we s
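One way to get that detection reliably everywhere is to route every allocation through a single checked wrapper, which is also where the log-and-abort policy debated in this thread would live. A sketch only; the name `gsh_malloc_sketch` is illustrative and not Ganesha's actual API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Checked allocation with the log-and-abort policy: the failure is
 * detected in exactly one place, logged, and the process is aborted
 * rather than letting a NULL pointer propagate into other code. */
static void *gsh_malloc_sketch(size_t size)
{
	void *p = malloc(size);

	if (p == NULL) {
		fprintf(stderr,
			"allocation of %zu bytes failed, aborting\n", size);
		abort();
	}
	return p;
}
```

Callers never see NULL, so the hundreds of scattered (and often untested) error paths the thread complains about collapse into this one check.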
On 10/29/2015 08:29 AM, Kaleb S. KEITHLEY wrote:
> On 10/28/2015 02:55 PM, Frank Filz wrote:
> > We have had various discussions over the years as to how to best handle
> > out of memory conditions.
>
> a) I see that, e.g., jemalloc's mallctl(3) has the ability to
> selectively purge unused dirty pages from one or all arenas.
> I don't know what that actually
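For context on the jemalloc suggestion: that knob is reached through mallctl() with a documented name such as "arena.<i>.purge". On a plain glibc build the closest analogue I know of is malloc_trim(3), which hands free heap pages back to the kernel. A glibc-only sketch of that idea, not a statement about what Ganesha does:

```c
#include <stdlib.h>
#if defined(__GLIBC__)
#include <malloc.h>
#endif

/* Best-effort attempt to return unused heap to the OS under memory
 * pressure. On glibc, malloc_trim(0) trims the top of the heap and
 * releases free()d pages, returning 1 if anything was released and
 * 0 otherwise; on other libcs this sketch is a no-op. */
static int release_unused_heap(void)
{
#if defined(__GLIBC__)
	return malloc_trim(0);
#else
	return 0;
#endif
}
```

Like jemalloc's purge, this only shrinks the process footprint; it does nothing for a malloc that has already failed, which is Kaleb's open question above.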
On Wed, 2015-10-28 at 11:55 -0700, Frank Filz wrote:
> We have had various discussions over the years as to how to best handle out
> of memory conditions.
>
> In the meantime, our code is littered with attempts to handle the situation;
> however, it is not clear to me that these really solve anything. I
> Sent: Wednesday, October 28, 2015 7:38 PM
> To: Frank Filz
> Cc: nfs-ganesha-devel@lists.sourceforge.net
> Subject: Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory
> Handling
>
> I don't believe that we need to restart Ganesha on every out-of-memory
> call, for many reasons, but I
et of
locks).
Frank
From: Marc Eshel [mailto:es...@us.ibm.com]
Sent: Wednesday, October 28, 2015 7:38 PM
To: Frank Filz
Cc: nfs-ganesha-devel@lists.sourceforge.net
Subject: Re: [Nfs-ganesha-devel] Topic for discussion - Out of Memory
Handling
I don't believe that we need to restart
Date: 10/28/2015 11:55 AM
Subject: [Nfs-ganesha-devel] Topic for discussion - Out of Memory
Handling
We have had various discussions over the years as to how to best handle
out of memory conditions.
In the meantime, our code is littered with attempts to handle the
situation, how
thoughts about some of them.
Thanks
Frank
> -----Original Message-----
> From: Frank Filz [mailto:ffilz...@mindspring.com]
> Sent: Wednesday, October 28, 2015 11:55 AM
> To: nfs-ganesha-devel@lists.sourceforge.net
> Subject: [Nfs-ganesha-devel] Topic for discussion - Out of
We have had various discussions over the years as to how to best handle out
of memory conditions.
In the meantime, our code is littered with attempts to handle the situation;
however, it is not clear to me that these really solve anything. If we don't
have 100% recoverability, likely we just delay the