> On Nov 28, 2025, at 13:50, [email protected] wrote:
> 
> Quoth ron minnich <[email protected] <mailto:[email protected]>>:
>> A very quick test
>> (base) Rons-Excellent-Macbook-Pro:snake rminnich$ GOOS=plan9 GOARCH=amd64
>> go build  .
>> package github.com/hajimehoshi/ebiten/v2/examples/snake
>> imports github.com/hajimehoshi/ebiten/v2
>> imports github.com/hajimehoshi/ebiten/v2/internal/inputstate
>> imports github.com/hajimehoshi/ebiten/v2/internal/ui
>> imports github.com/hajimehoshi/ebiten/v2/internal/glfw: build constraints
>> exclude all Go files in /Users/rminnich/Documents/ebiten/internal/glfw
>> (base) Rons-Excellent-Macbook-Pro:snake rminnich$ pwd
>> /Users/rminnich/Documents/ebiten/examples/snake
>> 
>> So there's a glfw issue, whatever that is :-)
> 
> GLFW is, IIRC, an OpenGL-based library.
> 
> a portable language doesn't help when all graphical toolkits
> rely on interfaces that are not available :)

GLFW is one of the lightest cross-platform libraries for opening a real window 
with an OpenGL rendering context and handling input.  https://www.glfw.org/

The Plan 9 community needs to start at the bottom IMO: get serious about 
supporting GPUs in some way.  So far the talk about GPUs has been hand-waving 
along the lines of using them as some sort of parallel computer for limited 
“compute” use cases, as opposed to their original application of rendering 
graphics.  But sure, if you can turn the GPU into a general parallel computer, 
and then still develop shader kernels (or whatever we call them in that kind 
of paradigm) that can render certain kinds of graphics, maybe it would be 
possible to accelerate some of the draw operations.  At least we have a chance 
to be original, ignore accepted wisdom about how to make graphics fast, and do 
it another way that might be more general.  Maybe.

There is also the idea that if GPUs turn out to be indispensable for general 
computing (especially AI), we won’t want to “waste” their power on basic 
graphics anymore.  Nearly every computer has a GPU by now, and if you run Plan 
9 on it, you are letting the Ferrari sit there in the garage doing nothing for 
the rest of its life: that’s a shame.  But if you could use it for serious 
computing, yet actually used it only for drawing 2D graphics, that’s like 
using the Ferrari only for short shopping trips: an improvement over total 
idleness, but still a bit of a shame.  If you find out that you can make money 
by racing the Ferrari, or something, maybe you don’t drive it to the store 
anymore.  We
won’t mind wasting the CPU to draw rectangles and text if it turns out that the 
real work is all done on the fancy new parallel computer.  I’m not sure how 
that will turn out.  I’ve always wanted a GPU with a long-lived open 
architecture, optimized for 2D; but gaming was the main market selling GPUs 
until Bitcoin and LLMs came along, so we have that kind of architecture: more 
powerful than we need in the easy cases, but also less convenient.  Given that, 
I suppose finding the most-general API to program them would make some sense.

Probably someone could pick a relatively easy target to start with: a GPU 
that is sufficiently “open” to already have a blob-free mainline Linux driver, 
and try to get it going on 9 somehow.  None of them are really open hardware, 
but there are enough docs for, say, the VideoCore IV on Raspberry Pis, and 
maybe other embedded parts from Imagination Technologies, or Radeon and Intel 
on PCs, etc.  (And I don’t have any such low-level experience yet either; I 
just read about it and think: if only I had more lives, maybe I could find 
time for that…)

You could already use draw to render fancy graphics (I guess that is what you 
are thinking), but it would be a lot of CPU work, consequently slow, and 
without antialiasing (except for text).  Draw can render lines and polygons at 
any angle, and Bézier curves, but thou shalt not antialias them, because that 
would be a change and we don’t like change - that’s the vibe I’m getting from 
the community.  So someone could go ahead and port ebiten, but it might be a 
lot of work, and I suspect it won’t look as good even if you put up with the 
speed, unless ebiten already has a CPU renderer.  Does it, or is it 
OpenGL-only?  Failing that, maybe you can find a rendering engine that draws 
vector graphics with AA on the CPU.  At that point, you just generate each 
frame (a pixmap) with such an engine, and blit it afterwards.  That is not 
really in the spirit of the mainstream accelerated graphics approach (OpenGL 
and Vulkan), nor how Plan 9 typically does things either.  I’d rather have 
full AA support in the draw API, and get help from the GPU to do it, somehow.
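
For what it’s worth, Go already has a small CPU rasterizer that fits this 
“render on the CPU, then blit” model.  Here is a minimal sketch, assuming only 
the standard library plus golang.org/x/image/vector (nothing ebiten- or Plan 
9-specific): it rasterizes an antialiased shape into an image.RGBA in memory, 
which you would then push to the screen however the target platform allows.

package main

import (
	"image"
	"image/color"
	"image/draw"
	"image/png"
	"log"
	"os"

	"golang.org/x/image/vector"
)

func main() {
	const w, h = 256, 256

	// The frame we will eventually blit somewhere: plain CPU memory.
	frame := image.NewRGBA(image.Rect(0, 0, w, h))
	draw.Draw(frame, frame.Bounds(), image.NewUniform(color.White), image.Point{}, draw.Src)

	// Describe a path; the rasterizer accumulates per-pixel coverage,
	// which is where the antialiased edges come from.
	z := vector.NewRasterizer(w, h)
	z.MoveTo(30, 220)
	z.LineTo(128, 36)
	z.LineTo(226, 220)
	z.ClosePath()

	// Composite a solid color through the coverage mask into the frame.
	blue := image.NewUniform(color.RGBA{0x33, 0x66, 0xcc, 0xff})
	z.Draw(frame, frame.Bounds(), blue, image.Point{})

	// Stand-in for the blit: just dump the frame to a PNG so the sketch
	// runs anywhere.  On the real target this is where the per-frame
	// copy into the window would go.
	f, err := os.Create("frame.png")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := png.Encode(f, frame); err != nil {
		log.Fatal(err)
	}
}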

With APIs like draw, you assume you can draw whenever you want.  With 
accelerated graphics, you wait for a callback to prepare a series of draw 
calls, which should be few in number and each carry as much data as possible: 
don’t draw one primitive at a time, try to group them as much as you can.  
(60 FPS means about 16 ms per frame: you don’t have time for too many draw 
calls; but the GPU is highly parallel and doesn’t mind if you throw lots of 
data in with each call.)  So the coding style has to change, unless the 
“turtle” API is used only to queue up the commands, and batches are then 
generated from the queue.  I.e. build a scene graph.  If you take for granted 
that there will be a scene graph, then IMO it works quite well to use a 
declarative style.  What you really wanted to say was “let there be a 
rectangle, with these dimensions” (and other primitives: text, images, paths 
at least) rather than “go draw 4 lines right now”.  Switch to
retained mode.  Then it can go directly into the scene graph, optimization of 
the draw calls can be done algorithmically, and you don’t spend time redrawing 
anything that doesn’t need it.  But everybody seems to like immediate mode 
better.  They keep doing it on GPUs too, and those programs always seem to take 
a constant few percent of CPU just to maintain a static scene, because they 
keep redoing work that was already done.
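
To make the contrast concrete, here is a tiny hypothetical sketch of the 
declarative, retained-mode style in Go (the names - Scene, Rect, Batches - are 
invented for illustration, not from any existing library): you describe 
primitives once, they live in the graph, and the renderer only rebuilds its 
batches when something actually changed, so a static scene costs essentially 
nothing per frame.

package scene

import "image/color"

// Node is anything that can live in the scene graph.
type Node interface {
	Bounds() (x, y, w, h int)
}

// Rect is a retained primitive: “let there be a rectangle, with these
// dimensions”.  Keep the pointer around and mutate it later if needed.
type Rect struct {
	X, Y, W, H int
	Fill       color.RGBA
}

func (r *Rect) Bounds() (x, y, w, h int) { return r.X, r.Y, r.W, r.H }

// Scene is the graph (flat here, for brevity) plus a dirty flag.
type Scene struct {
	nodes []Node
	dirty bool
}

func (s *Scene) Add(n Node) {
	s.nodes = append(s.nodes, n)
	s.dirty = true
}

// Invalidate is called after mutating a node, e.g. moving a rectangle.
func (s *Scene) Invalidate() { s.dirty = true }

// Batches regroups the primitives into as few draw calls as possible,
// but only when something changed; nil means “nothing to redraw”.
// A real renderer would also sort by primitive type, texture, clip, etc.
func (s *Scene) Batches() [][]Node {
	if !s.dirty {
		return nil
	}
	s.dirty = false
	return [][]Node{s.nodes}
}

The immediate-mode version of the same program would re-issue every draw call 
on every frame, changed or not - which is exactly where that constant few 
percent of CPU goes.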

I will keep working on my 9p scene graph approach, and then I can write 
renderers with any technology on any OS, as long as a 9p client is available 
(either integrated into the renderer, or by mounting the filesystem on the 
OS).  Maybe I or someone could try ebiten for that.  At least that way it can 
make
good use of the GPU on platforms where it’s supported.  But I do fear that 9p 
may become a bottleneck at some point: I just want to see how far it’s possible 
to go that way.
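
To give a flavour of the general idea, a scene graph served over 9p could be 
as simple as a tree of per-node directories (the names here are invented, just 
for illustration):

/scene/ctl                # e.g. write "commit" to publish a consistent frame
/scene/root/rect.0/geom   # "10 10 200 100"
/scene/root/rect.0/fill   # "#3366cc"
/scene/root/text.0/value  # "hello"
/scene/root/text.0/font   # a font name or path

A renderer on any OS mounts or dials that tree with its 9p client, builds its 
own batches from what it reads, and rereads only what changed; the bottleneck 
question is how chatty that traffic gets at 60 FPS.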

