On Tue, 26 Sep 2000, Thomas Zehetbauer wrote:
> But if you can get rid of the stacks, and you _can_ get rid of the
> stacks sometimes, then why not have one thread per widget in a GUI? Or
> one thread per animated object on a web page? Some notions of
For this to work without opening up a security hole we must be able to
distribute the
> quote> SCO's Juergen Kienhoefer tells us that by mapping clone processes
> quote> directly onto UnixWare's native threads, huge performance gains
> quote> can be realised. "Basically thread creation is about a thousand
> quote> times faster than on native Linux," he said. The performance boost
Hi!
> > CODA's local caching is pretty much a necessity: you can't have
> > read/write server in userland on localhost at smaller granularity due
> > to deadlock issues.
>
> For distributed filesystem - yes (but not necessary). For other kinds of
> stuff - provably not true.
Okay, how do you
On Fri, 1 Sep 2000, Pavel Machek wrote:
> CODA's local caching is pretty much a necessity: you can't have
> read/write server in userland on localhost at smaller granularity due
> to deadlock issues.
For distributed filesystem - yes (but not necessary). For other kinds of
stuff - provably not
Hi!
> > * podfuk has file granularity
> >
> > I could see it working with some kind of directory format, however.]
>
> Yep. And quite a few mailreaders can work with that. I'm less than sure
> that CODA's local caching is appropriate, though. More RPC-oriented
> stub
CODA's local caching
On Thu, 31 Aug 2000, Erik McKee wrote:
> Hello!
>
> This is one of my first posts here, so try to be gentle, please ;)
>
> Seems like if a thread which shares a VM with all the other threads of the
> same family does an execve, the following would be likely to occur, using
> the standard definition of execve. The VM would be overwritten with
[EMAIL PROTECTED] (Linus Torvalds) wrote on 27.08.00 in
<[EMAIL PROTECTED]>:
> On Sun, 27 Aug 2000, Alexander Viro wrote:
> >
> > Linus, there is no need for a new mask for execve().
>
> What you're saying is "there are other ways to accomplish this". And I
> kind of agree. I still think the dynamic mask
On Tue, 29 Aug 2000, Pavel Machek wrote:
> What does this have to do with private namespaces?
Albert asked what to do if /var/spool/mail dies and every user
has his own namespace. Well, don't let him play with /var/spool/mail
directly...
> mailfsd /dev/coda0 --enable-imap &
> mount
On Tue, 29 Aug 2000, Pavel Machek wrote:
> Hi!
>
[...]
> > > So you need some hackery to make mount(8) cause change in all
> > > namespaces at once. Whatever is done, this will be gross.
> > > I suppose you'd require a loopback of some sort, so that one
> > > might rip the real filesystem out
On Tue, 29 Aug 2000, Albert D. Cahalan wrote:
> Proposal 'c' is that way. If you agree that "VM is just one of
> the resources" but think that there are "uses for sharing between
> different programs", proposal 'a' is for you.
Sorry, no. VM is the resource that execve() replaces. That is what
Alan Cox wrote:
> It isn't a sensible answer. Think about a threaded web server firing off
> cgi scripts. You should probably kill those with the same mm. Especially if
> you have an unclone(CLONE_MM) since you can then unshare the VM for a thread
> and exec stuff off it
Think about a single
Alexander Viro writes:
> On Tue, 29 Aug 2000, Albert D. Cahalan wrote:
[interaction w/ setuid, close-on-exec, and personality pseudo-roots]
> We _have_ this interaction. In case you've missed it, sys_personality()
> that really happens to change personality unshares fs_struct. Has to.
> Rule:
From: "David Howells" <[EMAIL PROTECTED]>
>
> Would it be possible to make fork() or clone() from a process whose tgid!=pid
> reparent the child to the thread group leader automatically? Thus, when the
> creating thread goes away, the child is still a child of the "process", and
> SIGCHLD is still going to go to the process (leader thread).
On Tue, Aug 29, 2000 at 02:12:17PM +0200, Marc Lehmann wrote:
> On Mon, Aug 28, 2000 at 09:20:49AM -0600, [EMAIL PROTECTED] wrote:
> > You can't rely on signal timing anyway -- that is quite clear in the
> > spec and in the implementation.
>
> there is no "spec" on how it should be done. Again,