The hardware doesn't care where the modesetting is done: it just
needs code to twiddle the registers. For example, this code twiddles
registers up to Haswell:
https://git.9front.org/plan9front/9front/HEAD/sys/src/cmd/aux/vga/igfx.c/f.html
There's nothing, other than people spending time on it, preventing it from
working on newer hardware.
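To give a flavor (register names, offsets, and bit values below are
made up for illustration; igfx.c has the real ones), modesetting is
mostly a sequence of MMIO writes: program the scan timings, then
enable the pipe. A minimal sketch in Plan 9 C:

	enum {	/* hypothetical pipe registers, not real igfx offsets */
		Htotal	= 0x60000,
		Vtotal	= 0x6000c,
		Pipeconf= 0x70008,
		Pipeen	= 1<<31,
	};

	static void
	wr32(uchar *mmio, ulong reg, ulong val)
	{
		*(ulong*)(mmio+reg) = val;
	}

	static void
	setmode(uchar *mmio, int ha, int ht, int va, int vt)
	{
		wr32(mmio, Htotal, (ht-1)<<16 | (ha-1));	/* active/total horizontal */
		wr32(mmio, Vtotal, (vt-1)<<16 | (va-1));	/* active/total vertical */
		wr32(mmio, Pipeconf, Pipeen);	/* turn the pipe on */
	}

The hard part is not the writes themselves but knowing the correct
sequence and values for each hardware generation, which is what the
manuals linked below document.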
Quoth Clout Tolstoy <[email protected]>:
> So it seems we need something like kernel mode setting, but then older
> hardware that doesn't support KMS would still need something in user land.
>
> Idk if this abstraction fully applies to plan9.
>
> And then something for a direct rendering manager...
>
> At least this is what I infer from how Linux does it, and that might not
> be the best approach here.
>
> On Fri, Nov 28, 2025, 12:24 PM <[email protected]> wrote:
>
> > Quoth Clout Tolstoy <[email protected]>:
> > > This talk loops back to my hopes for the gpufs, but the drivers are
> > > in the way. Maybe matrox cards could be a good entry point for proper
> > > driver support?
> >
> > Intel cards seem to be the best documented, for example this
> > seems to have links to fairly complete manuals:
> >
> >
> > https://cdrdv2-public.intel.com/772631/graphics-for-linux_developer-reference_1.0-772629-772631.pdf
> >
> > including register references.
> >
> > AMD is less well documented, but also has docs:
> >
> >
> > https://gpuopen.com/amd-gpu-architecture-programming-documentation/
> >
> > Their docs focus almost exclusively on the instruction set, and say
> > very little about the graphics pipeline and modesetting. However,
> > there is an open source driver, and there are old docs for 2010-era
> > cards that can serve as a starting point.
> >
> > But in the end, someone motivated needs to sit down and write code:
> > there is no simple, easily obtained, fast, and well-documented
> > hardware out there. We get a couple of the checkboxes, and then some
> > motivated soul needs to start working through the rest.
> >
> > Even modesetting would be valuable.
> >
> > But it's still a lot of work, and it won't get done until someone
> > actually sits down and starts wiggling their fingers.
> > > On Fri, Nov 28, 2025, 10:05 AM Paul Lalonde <[email protected]>
> > > wrote:
> > >
> > > > I'm going to be Debbie Downer here.
> > > >
> > > > I worked on GPUs most of my career, from the early days of graphics
> > > > acceleration to the first 3D games consoles, to the start of shader
> > > > processing through to modern Nvidia architectures.
> > > >
> > > > A GPU is *absurdly* more complex than is visible from its API surface,
> > > > which itself is maddeningly complex. Vulkan gives an inkling of what
> > > > has to be going on under the hood and what kinds of constraints exist
> > > > to make a performant graphics engine work.
> > > >
> > > > The reason that plain compute looks more attractive is that the
> > > > hardware surface is so much smaller. You don't have to worry about
> > > > texture engines, rasterizers, tiling, ROPs, in-order completion
> > > > rules, and myriad constraints on the memory subsystem to satisfy all
> > > > these hardware units. Yes, compute leaves half the functionality that
> > > > makes graphics possible on the floor, but it also leaves all of its
> > > > complexity there.
> > > >
> > > > I like the 9p approach to 3D graphics - retained mode APIs have a
> > > > long history because of their usability. I'd be happy to use such a
> > > > thing on Plan9. But a driver for *any* modern GPU is not an
> > > > achievable target for a small band of independent developers.
> > > >
> > > > Paul
> > > >
> > > > On Fri, Nov 28, 2025 at 8:03 AM Shawn Rutledge <[email protected]>
> > > > wrote:
> > > >>
> > > >> On Nov 28, 2025, at 13:50, [email protected] wrote:
> > > >>
> > > >> Quoth ron minnich <[email protected]>:
> > > >>
> > > >> A very quick test
> > > >> (base) Rons-Excellent-Macbook-Pro:snake rminnich$ GOOS=plan9 GOARCH=amd64 go build .
> > > >> package github.com/hajimehoshi/ebiten/v2/examples/snake
> > > >> 	imports github.com/hajimehoshi/ebiten/v2
> > > >> 	imports github.com/hajimehoshi/ebiten/v2/internal/inputstate
> > > >> 	imports github.com/hajimehoshi/ebiten/v2/internal/ui
> > > >> 	imports github.com/hajimehoshi/ebiten/v2/internal/glfw: build constraints exclude all Go files in /Users/rminnich/Documents/ebiten/internal/glfw
> > > >> (base) Rons-Excellent-Macbook-Pro:snake rminnich$ pwd
> > > >> /Users/rminnich/Documents/ebiten/examples/snake
> > > >>
> > > >> So there's a glfw issue, whatever that is :-)
> > > >>
> > > >>
> > > >> GLFW is, IIRC, an OpenGL-based library.
> > > >>
> > > >> a portable language doesn't help when all graphical toolkits
> > > >> rely on interfaces that are not available :)
> > > >>
> > > >>
> > > >> GLFW is one of the lightest libraries for wrapping OpenGL rendering
> > > >> into a real window in a cross-platform way, and handling input.
> > > >> https://www.glfw.org/
> > > >>
> > > >> The Plan 9 community needs to start at the bottom IMO: get serious
> > > >> about supporting GPUs in some way. So far the talk about GPUs has
> > > >> been hand-waving along the lines of using it as some sort of
> > > >> parallel computer for limited “compute” use cases, as opposed to the
> > > >> original application of rendering graphics. But sure, if you can
> > > >> make it into a general parallel computer, and then still develop
> > > >> shader kernels (or whatever we call them in that kind of paradigm)
> > > >> that can render certain kinds of graphics, maybe it would be
> > > >> possible to accelerate some of the draw operations. At least we have
> > > >> a chance to be original, ignore accepted wisdom about how to make
> > > >> graphics fast, and do it another way which might be more general.
> > > >> Maybe.
> > > >>
> > > >> There is also the idea that if GPUs turn out to be indispensable
> > > >> for general computing (especially AI), we won’t want to “waste”
> > > >> their power on basic graphics anymore. Nearly every computer has a
> > > >> GPU by now, and if you run Plan 9 on it, you are letting the Ferrari
> > > >> sit there in the garage doing nothing for the rest of its life:
> > > >> that’s a shame. But if you could use it for serious computing yet
> > > >> actually use it only for drawing 2D graphics, that’s like using the
> > > >> Ferrari only for short shopping trips: an improvement over total
> > > >> idleness, but also a bit of a shame. If you find out that you can
> > > >> make money by racing the Ferrari, or something, maybe you don’t
> > > >> drive it to the store anymore. We won’t mind wasting the CPU to draw
> > > >> rectangles and text if it turns out that the real work is all done
> > > >> on the fancy new parallel computer. I’m not sure how that will turn
> > > >> out. I’ve always wanted a GPU with a long-lived open architecture,
> > > >> optimized for 2D; but gaming was the main market selling GPUs until
> > > >> Bitcoin and LLMs came along, so we have that kind of architecture:
> > > >> more powerful than we need in the easy cases, but also less
> > > >> convenient. Given that, I suppose finding the most-general API to
> > > >> program them would make some sense.
> > > >>
> > > >> Probably someone could pick a relatively easy target to start with:
> > > >> a GPU that is sufficiently “open” to have a blob-free mainline Linux
> > > >> driver already, and try to get it somehow going on 9. None of them
> > > >> are really open hardware, but for example there are enough docs for
> > > >> the videocore IV on raspberry pi’s, maybe other embedded ones like
> > > >> imagination tech, Radeon and Intel on PCs, etc. (And I also don’t
> > > >> have any such low-level experience yet, I just read about it and
> > > >> think: if only I had more lives, maybe I could find time for that…)
> > > >>
> > > >> You could use draw to render fancy graphics already (I guess that
> > > >> is what you are thinking), but it would be lots of CPU work,
> > > >> consequently slow, and without antialiasing (except for text). Draw
> > > >> can render lines and polygons at any angle, and Bézier curves, but
> > > >> thou shalt not antialias them, because that would be a change and we
> > > >> don’t like change - that’s the vibe I’m getting from the community.
> > > >> So someone could go ahead and port ebiten, but it might be a lot of
> > > >> work, and it won’t look as good even if you put up with the speed, I
> > > >> suspect, unless they already have a CPU renderer. Do they, or is it
> > > >> OpenGL-only? But you can maybe find such a rendering engine that can
> > > >> draw vector graphics with AA. At that point, you just generate each
> > > >> frame (pixmap) using such an engine, and blit it afterwards. Not
> > > >> really in the spirit of the mainstream accelerated graphics approach
> > > >> (OpenGL and Vulkan), nor how Plan 9 typically does things either.
> > > >> I’d rather have full AA support with draw API, and get help from the
> > > >> GPU to do it, somehow.
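> > > >>
> > > >> (The blit step could be as small as this sketch, assuming a CPU
> > > >> renderer has already filled buf with XRGB32 pixels and initdraw has
> > > >> run; this is plain libdraw, nothing exotic:
> > > >>
> > > >> 	#include <u.h>
> > > >> 	#include <libc.h>
> > > >> 	#include <draw.h>
> > > >>
> > > >> 	/* load one CPU-rendered frame into an image and draw it */
> > > >> 	void
> > > >> 	blitframe(uchar *buf, int w, int h)
> > > >> 	{
> > > >> 		Rectangle r;
> > > >> 		Image *img;
> > > >>
> > > >> 		r = Rect(0, 0, w, h);
> > > >> 		img = allocimage(display, r, XRGB32, 0, DNofill);
> > > >> 		if(img == nil)
> > > >> 			sysfatal("allocimage: %r");
> > > >> 		loadimage(img, r, buf, w*h*4);
> > > >> 		draw(screen, screen->r, img, nil, ZP);
> > > >> 		flushimage(display, 1);
> > > >> 		freeimage(img);
> > > >> 	}
> > > >>
> > > >> In a real loop you would allocate img once and reuse it per frame.)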
> > > >>
> > > >> With APIs like draw, you assume you can draw when you want. For
> > > >> accelerated graphics, you wait for a callback to prepare a series of
> > > >> draw calls, which need to be minimized in number and maximized in
> > > >> data: don’t draw one primitive at a time, try to group them as much
> > > >> as possible. (60FPS is 16 ms per frame: you don’t have time for too
> > > >> many draw calls; but the GPU is highly parallel and doesn’t mind if
> > > >> you throw in lots of data with each call.) So the coding style has
> > > >> to change, unless the “turtle” API is used only to queue up the
> > > >> commands, and then batches are generated from the queue. I.e. build
> > > >> a scene graph. If you take for granted that there will be a scene
> > > >> graph, then IMO it works quite well to use a declarative style. What
> > > >> you really wanted to say was “let there be rectangle, with these
> > > >> dimensions” (and other primitives: text, images, paths at least)
> > > >> rather than “go draw 4 lines right now”. Switch to retained mode.
> > > >> Then it can go directly into the scene graph, optimization of the
> > > >> draw calls can be done algorithmically, and you don’t spend time
> > > >> redrawing anything that doesn’t need it. But everybody seems to like
> > > >> immediate mode better. They keep doing it on GPUs too, and those
> > > >> programs always seem to take a constant few percent of CPU just to
> > > >> maintain a static scene, because they keep redoing work that was
> > > >> already done.
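> > > >>
> > > >> (To make that concrete, a retained-mode node needn’t be more than
> > > >> something like this sketch; the names are invented:
> > > >>
> > > >> 	typedef struct Node Node;
> > > >> 	struct Node {
> > > >> 		int	kind;	/* Nrect, Ntext, Nimage, Npath */
> > > >> 		Rectangle r;	/* geometry */
> > > >> 		ulong	color;
> > > >> 		char	*text;	/* payload for Ntext */
> > > >> 		int	dirty;	/* set when a property changes */
> > > >> 		Node	*child;
> > > >> 		Node	*next;	/* sibling */
> > > >> 	};
> > > >>
> > > >> The program mutates nodes and sets dirty; each frame the renderer
> > > >> walks the tree, batches nodes of the same kind into as few draw
> > > >> calls as possible, and skips clean subtrees entirely.)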
> > > >>
> > > >> I will keep working on my 9p scene graph approach, and then I can
> > > >> write renderers with any technology on any OS, as long as a 9p
> > > >> client is available (either integrated into the renderer, or by
> > > >> mounting the filesystem on the OS). Maybe I or someone could try
> > > >> ebiten for that. At least that way it can make good use of the GPU
> > > >> on platforms where it’s supported. But I do fear that 9p may become
> > > >> a bottleneck at some point: I just want to see how far it’s possible
> > > >> to go that way.
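> > > >>
> > > >> (The layout is still undecided; purely for illustration, the served
> > > >> tree might look like
> > > >>
> > > >> 	scene/1/type	rect
> > > >> 	scene/1/geom	0 0 100 50
> > > >> 	scene/1/color	#336699
> > > >> 	scene/2/type	text
> > > >> 	scene/2/text	hello
> > > >>
> > > >> with the renderer reading the tree once and then reacting to writes.)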
> > > >>