On Wednesday, 9 July 2014 at 16:21:55 UTC, Ola Fosheim Grøstad
wrote:
My point was that the current move is from heavy graphic contexts with few API calls to explicit command buffers with many API calls. I would think it fits better to tiling where you defer rendering and sort polygons and therefore get context switches anyway (classical PowerVR on iDevices). It fits better to rendering a display graph directly, or UI etc.




Actually, it seems to be moving to fewer and fewer API calls where
possible (see AZDO) with lightweight contexts.


Yes, this is what they do. It is closer to what you want for general computing on the GPU. So there is probably a long term strategy for unifying computation and graphics in there somewhere. IIRC Apple claims Metal can be used for general computing as well as 3D.


Yeah, it seems like that is where everything is heading, and fast;
that is why I wish Aurora would try to follow suit.


Why?

Imagination Technologies (PowerVR) purchased the raytracing accelerator (hardware design/patents) that three former Apple employees designed and has just completed the design for mobile devices so it is close to production. The RTU (ray tracing unit) has supposedly been worked into the same series of GPUs that is used in the iPhone. Speculation, sure, but not unlikely either.

http://www.imgtec.com/powervr/raytracing.asp


This is actually really cool, I just don't see real-time ray
tracing being usable (games and the like) for at least another
5-10 years, though I will certainly be very happy to be wrong.


Why?

Intel has always been willing to cooperate when AMD holds the strong cards (ATI is stronger than Intel's 3D division).

http://www.phoronix.com/scan.php?page=news_item&px=MTcyODY


You may be right, I don't know; it just doesn't seem like
something they would do. Just a gut feeling, no real basis
to back it up.


I doubt it. ;-)

Apple wants unique AAA titles on their iDevices to keep Android/Winphone at bay and to defend the high profit margins. They have no interest in portable low-level access and will just point at OpenGL ES 2 for that.


They will all be incompatible of course, no way we could get a
decent standard... nooooooo.  All I am saying is that as they get
closer and closer to the hardware, they will all start looking
relatively similar. After all, if they are all trying to get
close to the same thing (the hardware), then by association they
are getting closer to each other. There will be stupid little
nuances that make them incompatible, but they will still be doing
basically the same thing. Hardware-specific APIs (Mantle)
complicate this a little bit, but not by much; all the GPU
hardware (excluding niche stuff like hardware ray tracers :P) out
there has basically the same interface.

True, but that is not a very stable abstraction level. Display PostScript/PDF etc. is much more stable. It is also a very useful abstraction level since it means you can use the same graphics API for sending a drawing to the screen, to the printer or to a file.


I think it's a fine abstraction level; buffers and shaders are not
hard concepts at all. All the APIs that Aurora is going to be based
on offer them, and all modern GPUs support them. If shaders were
written in a DSL, then in the case where Aurora needs to fall back
to software rendering they could be translated to D code and mixed
right in. When they need to be turned into some API-specific shader,
they could be translated at compile time (the differences should
mostly just be syntax). If the DSL were a subset of D, that would
simplify it even further, as well as make the learning curve much
smaller. It's a perfectly fine level of abstraction for any sort of
graphics that also happens to be supported very well by modern GPUs.
I don't see the problem.


Well, having the abstractions for opening a drawing context, input devices, etc. would be useful, but not really a language-level task IMO. Solid cross-platform behaviour on that level will never happen (just think about what you have to wrap up on Android).

Well, then in that case Aurora should be designed as a software
renderer, with hardware support as a possible addition later on.
But that comes back to the point that it is a little iffy what
Aurora is actually trying to be. Personally, I would be
disappointed if it went down that route.
