Michael Haggerty <mhag...@alum.mit.edu> writes:

> One likely reason for fdopen() to fail is the lack of memory for
> allocating a FILE structure. When that happens, try freeing some
> memory and calling fdopen() again in the hope that it will work the
> second time.

In codepaths where we are likely to be under memory pressure, the
above might help, but I have to wonder

    (1) if update-server-info and daemon fall into that category; and

    (2) if Git continues to work under memory pressure severe enough
        to cause even fdopen() to fail.

In other words, I do not see a reason not to make this change, but I
am not sure how much it would help us in practice.
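
For readers following along, the idea under discussion is roughly the
sketch below.  This is my own illustration, not the code from the
series; the free_some_memory() helper and the ENOMEM check are
placeholders for whatever the real patches actually do.

    #include <errno.h>
    #include <stdio.h>

    /*
     * Placeholder for whatever "free some memory" means in the real
     * series; purely illustrative.
     */
    static void free_some_memory(void)
    {
    }

    FILE *fdopen_with_retry(int fd, const char *mode)
    {
            FILE *fp = fdopen(fd, mode);

            if (!fp && errno == ENOMEM) {
                    /*
                     * Release what we can and give fdopen() a
                     * second chance to allocate its FILE structure.
                     */
                    free_some_memory();
                    fp = fdopen(fd, mode);
            }
            return fp;
    }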

We call fopen() from a lot more places than we call fdopen().  Do we
want to do the same there, or is there a good reason why this does
not matter to callers of fopen()?  And if so, why doesn't the same
reason apply to callers of fdopen()?

> Michael Haggerty (5):
>   xfdopen(): if first attempt fails, free memory and try again
>   fdopen_lock_file(): use fdopen_with_retry()
>   copy_to_log(): use fdopen_with_retry()
>   update_info_file(): use fdopen_with_retry()
>   buffer_fdinit(): use fdopen_with_retry()
>
>  daemon.c              |  4 ++--
>  git-compat-util.h     | 11 +++++++++++
>  lockfile.c            |  2 +-
>  server-info.c         |  2 +-
>  vcs-svn/line_buffer.c |  2 +-
>  wrapper.c             | 28 +++++++++++++++++++++++++---
>  6 files changed, 41 insertions(+), 8 deletions(-)