On Thursday, 10 July 2014 at 00:22:39 UTC, Tofu Ninja wrote:
> Actually it seems to be moving to fewer and fewer API calls where possible (see AZDO), with lightweight contexts.

Yeah, AZDO appears to work within the OpenGL framework as is. However, I get the feeling that there will be more moves from Intel/AMD towards integrating the FPUs of the GPU with the CPU. Xeon Phi, Intel's HPC CPU, was originally meant to support rendering (Larrabee).
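For the curious, the core AZDO trick is mapping buffers once and batching draws. A rough sketch in D, assuming an OpenGL 4.4 binding (e.g. DerelictGL3) is already loaded; the buffer size is made up for illustration:

// Rough sketch of AZDO-style persistent mapping, assuming an
// OpenGL 4.4 binding (e.g. DerelictGL3) is loaded.
enum GLsizeiptr BUF_SIZE = 4 * 1024 * 1024; // arbitrary size

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);

// Immutable storage, mapped once and kept mapped across frames:
GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT
                 | GL_MAP_COHERENT_BIT;
glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, null, flags);
void* region = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE, flags);

// Per frame: write vertices straight into `region`, then issue one
// glMultiDrawElementsIndirect instead of many small draw calls.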

> Yeah, it seems like that is where everything is going, very fast; that is why I wish Aurora could try to follow that.

Depends on what Aurora is meant to target. The video says it is meant to be more of a playful environment that allows Pac-Man mock-ups and possibly GUIs in the long run, but I am not sure who would want that. There are so many better IDE/REPL environments for that already: Swift, Flash & Co, HTML5/WebGL/Dart/PNaCl, Python, and lots of advanced cross-platform mobile frameworks with engines at all kinds of proficiency levels.

Seems to me that a language like D needs two separate frameworks, plus a layer combining them:

1. A bare-bones, high-performance 3D library with multiple backends that follow the hardware trends (kind of what you are suggesting). Suitable for creating games and HPC stuff.

2. A stable high-level API with geometry libraries for dealing with established abstractions: font files, vector primitives, PDF generation and parsing, with a canvas abstraction covering screen/GUI, print, file… Suitable for applications/web.

3. An engine layering 2 on top of 1, for portable interactive graphics at a higher abstraction level than 1 (a rough sketch of the layering follows below).
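To make the layering in 3 concrete, here is a minimal D sketch; every type and function name below is hypothetical, invented only to show how 2 could be implemented in terms of 1:

// Hypothetical sketch of the three layers; none of these names refer
// to any existing D library.
struct Vec2 { float x, y; }
struct ImageBlock { int w, h; const(ubyte)[] pixels; } // decoded JPEG/PNG

// 1. Bare-bones, backend-per-platform, close to the hardware.
interface RenderBackend
{
    void submit(const(float)[] triangles); // raw vertex stream
    void present();
}

// 2. Stable high-level API over established abstractions.
interface Canvas
{
    void drawPath(const(Vec2)[] path);
    void drawImage(ImageBlock img, Vec2 at);
}

// 3. The engine layer: implements 2 in terms of 1.
class GpuCanvas : Canvas
{
    private RenderBackend backend;
    this(RenderBackend backend) { this.backend = backend; }

    void drawPath(const(Vec2)[] path)
    {
        backend.submit(tessellate(path)); // vectors -> triangles
    }

    void drawImage(ImageBlock img, Vec2 at)
    {
        // real code would upload img as a texture and emit a quad
    }
}

float[] tessellate(const(Vec2)[] path)
{
    // a real implementation would triangulate the path; this stub
    // just flattens the points so the sketch stays self-contained
    float[] flat;
    foreach (p; path) { flat ~= p.x; flat ~= p.y; }
    return flat;
}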

> This is actually really cool, I just don't see real-time ray tracing being usable (games and the like) for at least another 5-10 years, though I will certainly be very happy to be wrong.

I think it is only meant for shiny details on the mobile platforms. I could see it being used for mirrors in a car game, spherical knobs, etc.

If it works out OK when it hits silicon, then I can see Apple using it to strengthen iOS as a "portable console platform", which probably means having proprietary APIs that squeeze every drop out of the hardware.

> You may be right, I don't know; it just doesn't seem to me like something they would do. Just a gut feeling, no real basis to back it up.

Intel and AMD/ATI have a common "enemy" in the GPU/HPC field: Nvidia/CUDA.

> [...] basically the same thing. Hardware-specific APIs (Mantle) complicate this a little bit, but not by much; all the GPU hardware out there (excluding niche stuff like hardware ray tracers :P) has basically the same interface.

Well, DirectX's model has forced the same kind of pipeline, but if they get rid of DX… There are also other performant solutions out there: FPGAs, and crossbar-style multicore (many simple CPUs with local memory on a grid, with memory buses between them).

> [...] time (the differences should mostly just be syntax). If the DSL was a subset of D, then that would simplify it even further, as well as make the learning curve much smaller. It's a perfectly fine level of abstraction for any sort of graphics that also happens to be supported very well by modern GPUs. I don't see the problem.

Well, it all depends on the application area. For pixel-based rendering, sure, shaders are the only way. I agree.
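To illustrate the quoted idea of a shader DSL as a subset of D: a fragment shader could just be an ordinary D function restricted to constructs a translator could map 1:1 onto GLSL. No such translator exists; the type and function below are invented purely for illustration:

// Purely hypothetical: a D function constrained to constructs that a
// (nonexistent) D-to-GLSL translator could handle.
struct Vec4 { float x, y, z, w; }

Vec4 fragment(Vec4 texel, float brightness)
{
    // scale RGB, keep alpha -- trivially expressible in GLSL too
    return Vec4(texel.x * brightness,
                texel.y * brightness,
                texel.z * brightness,
                texel.w);
}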

For more stable, application-oriented APIs: vectors all the way, and wrap up JPEG/PNG files in "image block" abstractions.

> [...] hardware support as a possible addition later on. But that comes back to the point that it is a little iffy what Aurora is actually trying to be. Personally I would be disappointed if it went down that route.

Well, OS-level abstractions are hard to get right, and the people who have managed to do it charge quite a bit of money for it:

https://www.madewithmarmalade.com/shop

I guess it is possible if you only target desktop Windows/Mac, but other than that I think PNaCl/Pepper would be a more valuable cross-platform target.
