Right--we will just go with the idea that I do not understand the 
architectural difference here.

As well, I really did not understand it as being any more significant than a 
CPU vs an FPU--or something like that, really. Even then, there are typically 
ways to emulate floating point operations without an FPU, or, in the case of 
some early Pentiums, with a faulty FPU. Yes, you do lose some speed, but... 
well, I dunno, when one platform only really requires 100MHz for a decent 
running speed... and the other requires 2GHz... I really have to wonder what 
these fundamental differences actually are.
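To put rough numbers on that 100MHz-vs-2GHz comparison, a back-of-envelope cycle budget shows the scale a software renderer is up against. This is a minimal sketch; the 640x480 resolution and 30 frames per second are assumptions of mine, not figures from this thread:

```python
# Back-of-envelope cycle budget for a software renderer.
# Assumed numbers (not from this thread): 640x480 screen, 30 frames/sec.
width, height, fps = 640, 480, 30
cpu_hz = 100_000_000  # the 100MHz platform mentioned above

pixels_per_second = width * height * fps       # 9,216,000 pixels/sec
cycles_per_pixel = cpu_hz / pixels_per_second  # roughly 10.9 cycles

print(f"{pixels_per_second:,} pixels/sec -> "
      f"{cycles_per_pixel:.1f} CPU cycles per pixel")
```

At roughly 11 cycles per pixel, every texture fetch, blend, and store for that pixel has to fit in the budget, which is the kind of gap that per-pixel parallel hardware is built to close.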

How about this: are there any terms I can Google that would give me a way to 
learn more about these really large differences you speak of?

I mean, unless these graphics cards have some phenomenal BIOS that contains 
most of the functions for drawing primitives, I really cannot confess to 
knowing what you are talking about here.

So, I will request some Google search terms, here.

I mean, we should have a remarkably large amount of room to play with in the 
cycles for doing what we want software-wise. I mean--yeah...

Going to have to ask you to explain these fundamental differences--or point 
to a place that can.

As I understood it, all a chip--be it CPU, FPU or GPU--could do was load, 
store, do stuff with the registers it has, and possibly perform a few simple 
operations based on the data in the registers, or on data being pointed to by 
the arguments.
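For what it is worth, my reading of the "massively parallel" point is that it is a difference in execution model rather than in what any single ALU can do. A toy sketch of the two models (Python's `map` standing in for the GPU's wide execution units; an illustration, not real GPU code):

```python
# Both compute the same per-pixel "shade" (darken by half); the difference
# being discussed is in HOW the work is scheduled, not WHAT each step does.

pixels = [float(v) for v in range(16)]  # stand-in for a row of pixel values

def shade(p):
    """The one-pixel kernel: the only code either model actually runs."""
    return p * 0.5

# CPU model: one pixel at a time, in sequence, on one core.
out_sequential = []
for p in pixels:
    out_sequential.append(shade(p))

# GPU (SIMT) model: the same kernel is conceptually launched for every
# pixel at once across thousands of lanes; map() stands in for that here.
out_parallel = list(map(shade, pixels))

assert out_sequential == out_parallel  # same result, different schedule
```

The argument usually attached to "massively parallel" is that the GPU's scheduler and memory system are built around launching that one-pixel kernel over millions of pixels concurrently, which a sequential loop cannot reproduce at any clock speed.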

So, yeah, going to have to ask for a link on what exactly these differences 
are (or at least some decent keywords)--as my current comprehension of how 
hardware itself works does not allow for having any clue what you are talking 
about here.

Especially when you are saying that an OS that only really requires 100MHz 
for base running, when using this software rendering, cannot compete with an 
OS that requires 2GHz of CPU plus a GPU.

I mean--this is kind of helluva wack right here. Especially when my suggestion 
was that, apart from the GPU, the hardware this was being tested on would be 
at a similar level.

That is, the OS with the lower overhead is running on a current platform 
setup, as is the OS with the higher overhead--just the higher-overhead one 
has the GPU.

Yeah... please, you guys are making no sense here. At this point, you are 
talking about a fundamental difference that I cannot fathom even really exists 
in any of my understanding of how computer hardware works, or even how it 
evolved over the decades.

You may as well be chiding me for not understanding the 96 Hour Day or 
something at this point.

So, again, I ask for your Axioms.

~Katrina

On Friday, June 18, 2010 02:48:00 am joshua simmons wrote:
> You will never get any speed out of a software renderer, and using Linux
> won't change that.
> 
> I don't think you quite understand the fundamental differences between CPU
> architecture and a massively parallel GPU architecture.
> 
> On 18 Jun 2010 18:41, "Katrina Payne" <fullmetalhar...@nimhlabs.com> wrote:
> 
> The idea of a GPU is a method to take load off of the main CPU, to put it
> onto another processor that has the sole purpose of processing the graphics
> you are doing.
> 
> A form of delegating between multiple chips, as I understood it.
> 
> This way, you have one chip working specifically on the graphics, and the
> other
> doing everything else.
> 
> And you are right---a software renderer cannot compete with a GPU on an even
> field.
> 
> You missed the point that Linux typically does not take up as much in system
> resources as the latest versions of Windows do.
> 
> The idea being, to get a software renderer on Linux, to work on the same
> level
> as a hardware renderer on Windows.
> 
> Like I said, you can typically get Linux to run on a GBA... you cannot fit
> anything else into there (maybe pong, I guess?). A GBA typically clocks in
> at about 67.5MHz IIRC, with next to no RAM.
> 
> Windows 7 kind of requires 1GiB of RAM at a minimum, and you are going to
> need at least 1 or 2 GHz to get it running.
> 
> My idea, again, in case you missed it, was to try to take up this saved
> overhead, use it for software rendering, to make it comparable to the
> hardware
> rendering on Windows.
> 
> The idea being:
> 
> If you can get that kind of comparable speed on Linux with Software
> Rendering... this would make graphics card companies more inclined to make
> drivers for Linux--as this shows how much more room there is to fit games
> into.
> 
> I mean, no idea how this point was lost, when what started this train of
> thought was that Nvidia and ATI had issues supporting Linux with their
> drivers.
> 
> The software rendering engine would never be used as more than a form of
> insane PoC idea. Or at least, never commercially.
> 
> It would be a demo, aimed at getting the attention of hardware driver
> developers, to get them to target Linux with these drivers.
> 
> A publicity stunt was what I was suggesting.
> 
> ~Katrina
> 
> 
> On Tuesday, June 15, 2010 02:45:33 pm Adam Buckland wrote:
> > I was under the impression that the wh...

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlcoders
