[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-12 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Okay, I made a patch that I hope addresses all the criticisms and also fixes 
a problem I noticed myself.

--
Added file: http://bugs.python.org/file35599/fixed-setcloexec.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21627
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-12 Thread Steven Stewart-Gallus

Changes by Steven Stewart-Gallus sstewartgallu...@mylangara.bc.ca:


Removed file: http://bugs.python.org/file35599/fixed-setcloexec.patch




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-12 Thread Steven Stewart-Gallus

Changes by Steven Stewart-Gallus sstewartgallu...@mylangara.bc.ca:


Added file: http://bugs.python.org/file35600/fixed-setcloexec.patch




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-06 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Okay, now I'm confused. How would I conditionally compile and use the 
setcloexec object and header on POSIX platforms and not on Windows?
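
One conventional approach (a sketch only, not necessarily how CPython's build is actually wired): guard the whole implementation with the MS_WINDOWS macro so the file compiles to an empty translation unit on Windows, and list it in the build unconditionally. The _Py_setcloexec name and file layout here are assumptions taken from the earlier messages in this thread:

```c
/* Python/setcloexec.c -- hypothetical layout; MS_WINDOWS is assumed to
 * be the macro CPython defines for Windows builds. On Windows this file
 * compiles to nothing, so it can be built on every platform. */
#ifndef MS_WINDOWS

#include <fcntl.h>

/* Mark fd close-on-exec; returns 0 on success, -1 on error. */
int
_Py_setcloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

#endif /* !MS_WINDOWS */
```

A matching Include/setcloexec.h would declare the function inside the same #ifndef guard, so callers need no per-platform conditionals of their own.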

--




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-04 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Okay, so the Python directory seems to be where wrappers over low-level or 
missing functionality are placed. So I guess I should make a _Py_setcloexec 
function in Python/setcloexec.c and a declaration in Include/setcloexec.h?

--




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-02 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

It occurred to me that my current patch is wrong: using _Py_set_inheritable 
is incorrect because EBADF can occur in the brute-force version, and in that 
case _Py_set_inheritable raises an error, which I am not sure is 
async-signal-safe. I could test ahead of time, but that is a bit hacky. The 
most principled approach would be to extract a common set_cloexec or 
set_inheritable function. If set_inheritable is used, I would have to report 
either a Windows error or an errno error, which would be messy. I'm not sure 
where the best place to put the set_cloexec function would be.

--




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-06-01 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Thank you for the very quick patch Gregory P. Smith.

It's fair enough if you don't bother to fix the EINTR issue.

One small note:

 +Confirm that issue21618 is fixed (mail fail under valgrind).

That's a typo, right? Shouldn't it be may instead of mail?

--




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-01 Thread Steven Stewart-Gallus

New submission from Steven Stewart-Gallus:

Hello,

I noticed some possibly bad behaviour while working on Python issue
21618 (see http://bugs.python.org/issue21618). Python has the
following code in _posixsubprocess.c for closing extra files before
spawning a process:

static void
_close_open_fd_range_safe(int start_fd, int end_fd, PyObject* py_fds_to_keep)
{
    int fd_dir_fd;
    if (start_fd >= end_fd)
        return;

    fd_dir_fd = _Py_open(FD_DIR, O_RDONLY);
    if (fd_dir_fd == -1) {
        /* No way to get a list of open fds. */
        _close_fds_by_brute_force(start_fd, end_fd, py_fds_to_keep);
        return;
    } else {
        char buffer[sizeof(struct linux_dirent64)];
        int bytes;
        while ((bytes = syscall(SYS_getdents64, fd_dir_fd,
                                (struct linux_dirent64 *)buffer,
                                sizeof(buffer))) > 0) {
            struct linux_dirent64 *entry;
            int offset;
            for (offset = 0; offset < bytes; offset += entry->d_reclen) {
                int fd;
                entry = (struct linux_dirent64 *)(buffer + offset);
                if ((fd = _pos_int_from_ascii(entry->d_name)) < 0)
                    continue;  /* Not a number. */
                if (fd != fd_dir_fd && fd >= start_fd && fd < end_fd &&
                    !_is_fd_in_sorted_fd_sequence(fd, py_fds_to_keep)) {
                    while (close(fd) < 0 && errno == EINTR);
                }
            }
        }
        close(fd_dir_fd);
    }
}

In the code, FD_DIR is /proc/self/fd on Linux. I'm not sure this code
is correct: it seems to have the same problem as modifying a list
while iterating over it.

I can think of a few solutions but they all have problems.

One could allocate a list of open files once and then iterate through
that list closing the files, but allocation is not signal-safe, so
this solution is incorrect. One possible workaround is to use mmap to
allocate the memory: it is a direct system call, and I see no reason
for it not to be signal-safe, but I'm not sure. One neat hack would be
to mmap the memory ahead of time and rely on lazy paging to allocate
it on demand. Another solution is to search for the largest open file
descriptor and then close every possible descriptor below it. So far
most of the possible solutions just seem really hacky to me, though.

I feel the best solution is to let the OS close the files and to set
the O_CLOEXEC flag on the files that need to be closed.

--
messages: 219510
nosy: sstewartgallus
priority: normal
severity: normal
status: open
title: Concurrently closing files and iterating over the open files directory 
is not well specified




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-01 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Okay, I've made a simple proof of concept patch.

--




[issue21627] Concurrently closing files and iterating over the open files directory is not well specified

2014-06-01 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Gah, I had trouble figuring out Mercurial.

Close files after reading open files and not concurrently

This commit removes the old behaviour of closing files concurrently
with reading the system's list of open files and instead closes the
files after the list has been read.

Because no memory can be allocated in that specific context the
approach of setting the CLOEXEC flag and letting the following exec
close the files has been used.

For consistency, the brute force approach to closing the files has
also been modified to set the CLOEXEC flag.

--
keywords: +patch
Added file: http://bugs.python.org/file35441/cloexec.patch




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-31 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

I agree that this is not a likely scenario but I can think of one
mildly plausible scenario. Suppose some web server runs a Python CGI
script but has a bug that leaks a file descriptor into the script. The
web server sandboxes the Python CGI script a little bit with resource
limits so the leaked file descriptor is higher than the script's file
descriptor maximum. The Python CGI script then runs a sandboxed
(perhaps it's run as a different user) utility and leaks the file
descriptor again (because the descriptor is above the resource
limits). This utility is somehow exploited by an attacker over the
internet by being fed bad input. Because of the doubly leaked file
descriptor, the attacker could possibly break out of a chroot or feed
bad input through a sensitive file descriptor. Anyway, the bug should
be fixed regardless.

Thanks for correcting me on the location of the fd closing code. Some
observations.

Strangely, there seems to be a _close_fds method in the Python
subprocess module that is not used anywhere. Either it should be
removed or fixed similarly. For understandability if it is fixed it
should simply delegate to the C code.

The bug I mentioned earlier, about concurrently modifying the fd
directory while reading from it, occurs in _close_open_fd_range_safe,
which is a genuine security issue (although I don't know how likely it
is to happen in practice). Because _close_open_fd_range_safe can't
allocate memory, the code there will be pretty ugly, but oh well.

There doesn't seem to be any point to caching max_fd in a variable on
module load. Why not just use sysconf every time it is needed? Is
there some need for really fast performance? Does sysconf allocate
memory or something?

Anyways, the code should be refactored to not use max_fd on the
platforms that support that.

Thank you for your thoughts. Also, should I keep discussion of some of
the bugs I observed here or raise them in other issues so they don't
get lost?

--




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-31 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

I found another problem with _close_open_fd_range_safe. POSIX leaves
the state of a file descriptor passed to close undefined if the close
fails with EINTR. I believe, but haven't double-checked, that close
should not be retried on EINTR on any of our supported platforms. If
you must have absolute portability, block all signals so that close
can't fail with EINTR, and unblock them after the close. This isn't a
serious problem in practice, because the worst case is an extra close
of an already-closed descriptor, but it's still bothersome.

--




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-30 Thread Steven Stewart-Gallus

Changes by Steven Stewart-Gallus sstewartgallu...@mylangara.bc.ca:


--
type:  - security
versions: +Python 3.4




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-30 Thread Steven Stewart-Gallus

New submission from Steven Stewart-Gallus:

The sysconf(_SC_OPEN_MAX) approach to closing fds is flawed: it is hacky and 
slow (see http://bugs.python.org/issue1663329). Moreover, it is incorrect, 
because fds can be inherited from previous processes that had higher resource 
limits. This is especially important because it is a possible security 
problem. I would recommend using the closefrom system call on the BSDs, or 
the /dev/fd directory on the BSDs and /proc/self/fd on Linux (remember not to 
close fds while reading directory entries from the fd directory, as 
concurrently reading and modifying the entries gives weird results). A C 
program that illustrates the problem of inheriting fds past lowered resource 
limits is shown below.

#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

int main() {
    struct rlimit const limit = {
        .rlim_cur = 0,
        .rlim_max = 0
    };
    if (-1 == setrlimit(RLIMIT_NOFILE, &limit)) {
        fprintf(stderr, "error: %s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    puts("Printing to standard output even though the resource limit is "
         "lowered past standard output's number!");

    return EXIT_SUCCESS;
}

--
messages: 219440
nosy: sstewartgallus
priority: normal
severity: normal
status: open
title: POpen does not close fds when fds have been inherited from a process 
with a higher resource limit




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-30 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Okay, here's a stub patch that addresses FreeBSD, NetBSD and Linux. I'm not 
sure how to address the other platforms.

--
keywords: +patch
Added file: http://bugs.python.org/file35419/python.patch




[issue21618] POpen does not close fds when fds have been inherited from a process with a higher resource limit

2014-05-30 Thread Steven Stewart-Gallus

Steven Stewart-Gallus added the comment:

Oh right! I forgot a possible problem with my proposed patch: it is 
incompatible with Valgrind (see https://bugs.kde.org/show_bug.cgi?id=331311). 
Either this patch won't be applied, or Valgrind compatibility will be judged 
inessential, or the Valgrind developers will start emulating /proc/self/fd.

--
