fork/open race results in wasted fd

2001-05-31 Thread Brian J. Watson
Two tasks (A & B) share the same files_struct. A calls open() at the same time as B calls fork(). After A runs get_unused_fd() but before it calls fd_install(), B runs copy_files(). It looks like the result is that one of the bits is set in B's open_fds field with no corresponding file pointer in its

Re: active after unmount?

2001-04-19 Thread Brian J. Watson
> Unmounting a SCSI disk device succeeded, and yielded: > > Red Hat Linux release 6.2 (Zoot) > Kernel 2.4.3 on a 2-processor i686 > > chico login: VFS: Busy inodes after unmount. Self-destruct in 5 seconds. Have > a nice day... > This message comes out of kill_super(). I would guess that

Why does do_signal() repost deadly signals?

2001-04-18 Thread Brian J. Watson
If a signal's default behavior is to kill a process, do_signal() reposts that signal before calling do_exit(). Why does it do that? Our guess is that it prevents the exiting process from blocking for an extremely long period of time. One example might be a process with an open NFS file. The

Re: kernel space getcwd()? (using current() to find out cwd)

2001-04-17 Thread Brian J. Watson
"Brian J. Watson" wrote: > path = __d_path(pwd, pwdmnt, NULL, NULL, path, PAGE_SIZE); Oops! That's no good. Here's the new and improved version: char * kgetcwd(char **bufp) { char *path, *buf = (char *) __get_free_page(GFP_USER); struct vfsmnt *pwdmnt;

Re: kernel space getcwd()? (using current() to find out cwd)

2001-04-17 Thread Brian J. Watson
> This is probably a stupid question, and probably directed to the wrong > list. Apologies in advance, but I'm stumped > > I've been working on a kernel module to report on "changed files". It > works just fine -- I wrap the original system calls with my > [...] At least in the 2.4 kernels,

Re: [PATCH] trylock for rw_semaphores: 2.4.1

2001-02-20 Thread Brian J. Watson
Ben LaHaise wrote: > How about the following instead? Warning: compiled, not tested. > > -ben > > +/* returns 1 if it successfully obtained the semaphore for write */ > +static inline int down_write_trylock(struct rw_semaphore *sem) > +{ > + int old = RW_LOCK_BIAS, new =

[PATCH] trylock for rw_semaphores: 2.4.1

2001-02-19 Thread Brian J. Watson
Here is an x86 implementation of down_read_trylock() and down_write_trylock() for read/write semaphores. As with down_trylock() for exclusive semaphores, they don't block if they fail to get the lock. They just return 1, as opposed to 0 in the success case. The algorithm should be robust. It

Re: scheduler

2000-10-26 Thread Brian J. Watson
Anonymous wrote: > > In redhat where is the process scheduler located? Does this scheduler > implement round robin? It doesn't matter whether it's RedHat, or any other distribution. They're all the same kernel. Look at schedule() in kernel/sched.c to see the heart of the scheduler. My
