>> >Yes. This is possible. They are in uninterruptible sleep, then. 
>> >This basically means a kernel call is hanging, which is uninterruptible
>> >(i.e. the kernel is of the opinion this request _has_ to complete before
>> >anything else happens).
>
>> Ok, why does the kernel think this? Why can't the kernel simply say
>> "ok processes, you're out" now, and when the IO operation finishes,
>> say "Thank you device, but I no longer need it"?
>
>Hmm. I assume the main reason is the extra bookkeeping overhead you
>describe. As long as the process exists, it has an associated state, 
>that basically says where it is blocking. If you really killed the 
>process, you would have to kind of "shadow" that information to wait 
>for the operation to complete.
>If you just marked it as killed, it is basically a kind of "pre-Zombie",
>that is just used to hold that completion information.
>Also from an accounting point of view, it might even be considered good
>that the connection between the process and the blocked resource persists.

Ahh. So, what if kill -9 had the corresponding effect of
zombifying the process?

Free all its memory, close all its file descriptors, but keep
the state around as a zombie for tracking IO completions or whatever.

>> For The Record: This should *NEVER* be a kernel decision as to "how
>> long is too long". The kernel has no idea how long is too long.
>
>Well ... I thought about the device drivers. They usually have a rough
>estimate (e.g. that rewinding a tape might take long). However, in a
>layered scheme this gets more complicated, right.
>
>> On the other hand, whatever is doing the IO can tell how long something
>> should take.
>
>You mean the application ? No. I don't think it has an idea. Say it outputs
>on stdout. It has no idea where that is connected. Might be a 1200 baud
>serial connection, might be a harddisk, might be /dev/null ...

No, the application doesn't do IO. The application requests IO.

IO is done by device drivers in a standard kernel. Or, if you have
good, well-done layered IO, the IO may be done in another user
process (don't laugh -- the original FICUS code permitted this
as a user-level NFS server. Version 8 STREAMS permitted this, as
I recall -- it made no distinction between a STREAMS source
in the kernel and one in user space. Many OSes (ok, some) implement all
IO by message passing, making no distinction between user-space
and kernel drivers.)

Now, whether that user-level process is generating the data itself or
using another kernel-level IO channel, either way it should be able to
produce a time estimate and return it. It may be computation-intensive
"IO" (e.g. /dev/encrypt), where the user process says "here's how
long my 'IO' should take", or it may be real IO, where another
driver says "here's the time estimate".

In either case, the process that calls read() or write() cannot
say "this is how long it will take", but it should be able to
call "read_est()" or "write_est()" (better names, anyone?) to get
a time estimate.

And, if you are going to layer the IO, that becomes a necessity --
anything doable in kernel should be doable in userland.
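Neither read_est() nor write_est() exists in any Unix I know of; the following is only a userland sketch of the shape such an estimate could take. The channel struct, field names, and numbers are all invented for illustration -- the point is just that the layer owning the device has the figures, while the caller of read()/write() does not:

```c
#include <stddef.h>

/* Invented for illustration: the layer that owns a device knows its
   throughput and fixed costs, so it can answer "how long will this
   take?" even though the application calling write() cannot. */
struct io_channel {
    const char *name;
    double bytes_per_sec;     /* sustained transfer rate */
    double fixed_cost_sec;    /* seek, rewind, protocol setup, ... */
};

/* Estimated seconds to write nbytes on this channel. */
double write_est(const struct io_channel *ch, size_t nbytes) {
    return ch->fixed_cost_sec + (double)nbytes / ch->bytes_per_sec;
}
```

A 1200 baud line moves roughly 120 bytes per second, so write_est() on 1200 bytes reports about ten seconds, while the same call against a disk channel reports almost nothing -- exactly the information the application writing to stdout lacks.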

>> >This state is also one of the major reasons, why I think cooperative console
>> >switching is a bad idea. A graphical backup application would not be able to
>> >ack a console switch while rewinding a tape ... GRR.
>
>> Sure it can.
>> What, do you mean to tell me that you routinely write programs where
>> the user interface is blocked by long IO operations?
>
>_I_ don't. But look at everyone else. Look at netscape, look at about any
>Windows program, ...

Yea, well, they all write to Microsoft's standard programming model ...

>Yes, I would start an IO-thread/process, but who else would ...

Me :-). I probably urge more multithreading than anyone else I know.

Of course, I probably (over-)use classes more than anyone else I know
also. So, I've got just one place to put a lock/unlock call when
I change something to multithreaded.

>= Andreas Beck                    |  Email :  <[EMAIL PROTECTED]> =

Michael Gersten
