Gee... if this text is only worth 2 cents, why are books so expensive ?

Anyway, let's give it a try:

On Mon, 20 Jan 2003, Brian S. Julin wrote:

>
> This has to be considered in the context of both threaded and non-threaded
> applications, because we don't want to *require* the use of threads with
> KGI, but we want threads to also work.  Also we should consider the
> framebuffer/mmio and the accel FIFOs separately.

True. I haven't checked yet what happens to the children of a task when it
is suspended. The problem may be that the parent thread (which receives
the signal) isn't the first thread of the program to be scheduled again,
which means another thread could draw before the signal is handled. (I
don't know this for sure.)

> Consider a single-threaded application using the framebuffer directly.
> The VT is switched away and the code then tries to write a byte to the
> directbuffer or to the MMIO registers.
>
> If no signal handler has been set up, the application is put to sleep,
> and as far as I know, it wakes up when SIGCONT comes and the write access
> proceeds uninterrupted (??).  Hopefully the state of the MMIO
> (if available/used) is fully restored.  The application then needs to be
> sent a SIGWINCH to tell it its screen data has been corrupted if indeed
> it has (we have the option in the future of saving screen data in some
> cases, like if all the consoles switched-to since the suspend were just
> text VTs.)  Can all kernels queue these two signals back-to-back in the
> fault handler?  My guess is probably, yes.

Signals sure can be stacked, yes.

> If the above app installed a signal handler, it is called.  The application
> can stay in the signal handler as long as it wants as far as I am aware,
> but I am not sure if there is a way to get out of the signal handler
> without execution resuming at the offending write access (??).  From
> what I read in the signal() manpage, although BSD will block this signal
> from re-occurring, other OSes will install SIG_DFL as the handler,
> and so if any more of these signals are generated the application will
> be put to sleep.
>
> If the above app SIG_IGN's the signal, I don't know what happens.
> It would be best if in this case we could just block the access(es) and
> resume them when the app regains the focus.

If the program does nothing, the kernel stops the thread. If the program
does something, the whole idea relies on the assumption that whatever it
does is the right thing. The design assumes there is a userspace library
that knows what it is doing, but in fact the entire acceleration part of
KGI relies on that anyway.

> I'm not sure how easy this would be to use in the threaded case because
> I don't know how the corner cases of threads work.  The default behavior,
> I believe, will be to put all threads in the application to sleep.
>
> A threaded application with its own signal handler would catch the
> signal, but I am not sure if this pauses all threads or just the
> one causing the signal (?).  If threads were still running they could
> violate again and that would cause a SIG_DFL action to occur, making
> the whole process fall asleep.  (Please folks correct me if I'm wrong here.)

I have no idea, but note that there are also different thread
implementations: some schedule within one task (pthreads?) and some
create multiple tasks (the old way Linux did it).

> In the signal handler, if the app did not know which threads were using
> the directbuffer, this would be of little use.  It would have to keep
> track itself and stop the correct threads before exiting the signal
> handler, and then start them again on SIGCONT.  However, in a threaded
> application, just blocking/resuming bad accesses would be better
> (only the threads causing offending accesses would be blocked.)  So
> these applications could SIG_IGN the signals, and threads would
> just block on their access and resume when the focus returns.  Then
> the app would get the SIGWINCH.
>
> All in all I think SIGTTOU/SIGTTIN are worth looking into and may
> be the right solution for direct framebuffer/MMIO.

The signalling is definitely needed, but I think we need to unmap some
resources too, just to make sure nothing happens.

> I don't think we should use this on the accelerators, though, personally.
>
> In the case of the accelerators we can set up a shared control page that the
> application can check without any blocking or polling, or context switches,
> before attempting to write to a new page in the accel queue.  This check can
> be rolled into the macros for writing to the accel queues.  This gives even
> single-threaded applications the ability to choose between calling select()
> to do a blocking wait with a finite timeout, accessing the next page anyway
> to get an infinitely blocking poll, or just going about their business and
> trying again later.
>
> This means that KGI must guarantee that writes to the currently mapped-in
> accel buffer *pages* (the "pinged" buffers) never block.   Accesses
> to pages that are not mapped in (the "ponged" buffers) would block.
> As I said above, a control page could allow the KGI driver to put an
> advisory lock where the application can read it to determine if the next
> page is available without incurring a context switch -- this would be "pinging"
> the page.  Buffers never become "ponged" without the application designating
> them as "pongable" by writing into the control page (also a
> context-switch-free action.)  After the application has done this it should
> know better than to try to access these pages again without successfully
> "pinging" them.  This is a different strategy than the old "ping-pong"
> buffers and IMO a better one because it allows SMP machines to hit that
> sweet spot of simultaneous some-cpus-in-kernel-space-other-cpus-in-user-space.
>
> If the buffer is in normal RAM implementing the above is not a problem.
> We have plenty of those to go around and can just keep the page mapped
> into the processes space.  The process can continue to fill the "pinged"
> pages with commands or data.
>
> In those more rare cases where the accel queues are in VRAM (we would
> probably only do this in cases where AGP or DMA are not an option)
> we would have to replace the VRAM pages with RAM pages on the sly
> such that the application does not notice.

Phew... it's too late for this...

Jos
