Reply follows inline

On Friday, June 18, 2010 04:40:24 am Bob Somers wrote:
> Yeah, no offense, but I don't think you fully understand the
> differences between the CPU/GPU.

Yeah, I do not think I do either.

> The fact that you can get Linux running on a lower powered machine
> doesn't mean much when it comes to raw graphics horsepower. These
> resource "savings" are almost entirely on the CPU/RAM side. A software
> renderer would run just as poorly on a Linux machine as a Windows
> machine because a CPU is not designed for graphics processing, it's
> designed for serial, general purpose computing.

Yes, I know. However, the CPU never gets used for graphics work anyway.

All serial general purpose computing comes down to is adding numbers together, 
subtracting them, multiplying (sometimes), dividing (sometimes), taking the 
modulus (sometimes), and pushing and popping them to and from memory.

Typically, other hardware reads what is in this memory and reproduces it 
directly, without any thought.

For example, on some simple console systems, all the display hardware does is 
read from a certain region of memory and follow what is in there: drawing 
tiles on the screen, or handling various OAM/sprite values to put onto the 
screen.

All the CPU does is put that data there--and the hardware does what it wants 
with it--typically without question.

> The hardware graphics pipeline gets you matrix/vector computations,
> per-vertex lighting, view projection transformations, clipping and
> culling, scan conversion, texture lookups, and in modern hardware,
> vertex and fragment shader engines and geometry tessellation, all
> massively parallel in hardware. Even a low-range GPU can crank through
> graphics operations like a hot knife through butter compared to a
> high-range CPU.

Right... okay... I hear that it does this...

But how?

How does the GPU do this in a way that the CPU cannot? Are there special 
opcodes that perform these actions?

I mean, the idea of a texture lookup opcode seems kind of silly to me.

What is it that the GPU does that makes it better?

It is nice you say this as a summary--but... what are the insides of the 
beast?

> It's not a matter of having extra "resources". The point is that those
> extra "resources" won't get you very far compared to a hardware
> graphics pipeline, because they're not specialized. Modern CPUs run
> best when context switching is kept to a minimum, because they have
> huge cores that offer a lot of general purpose functionality. GPUs
> have (nowadays) hundreds of small, highly-specialized cores designed
> specifically for the operations in the graphics pipeline.

Except I figured they were both just large numbers of transistors, grouped 
together in a certain way--just to an insanely dense degree.

So, you are suggesting that a GPU is not so much a Processing Unit, and more 
comparable to some form of MIDI or other sound output device?

I mean, the abstract is really nice and all... but that is all it is, an 
abstract that you are using to argue that a GPU can handle stuff a CPU cannot.

So please: I am going to ask for specifics on this matter.

Until then, I am still going to have no idea what you are talking about...

And I will likely start going glassy-eyed in much the same way a scientist 
would when you say something is "Magic".

This is a very nice abstract on this...

But... it tells me nothing about the exact differences in the engine here. It 
just says, "they are there"--and really, I am not certain I can believe that 
at face value.

No offense to you particularly, but I have had some fools try to get me to 
trust that kind of claim. Usually, upon investigation, I learned they were the 
biggest charlatans on the planet.

Now, assuming you are right, this will likely take more than an email this 
mailing list can handle--so perhaps link me to a white paper somewhere? Or 
something on the matter.

Until you do, your explanation may as well be a verbose variant of:

Q: What is the fundamental difference between a GPU and a CPU?

A: Magic.

> There's not a whole lot consumers can do to get ATI to up their game
> on their Linux drivers, other than contact them and complain about
> driver support. Honestly the best thing that could happen right now to
> level the playing field is to have a major game publisher (anybody at
> Valve reading this? :D) announce Linux support, preferably with a
> "runs best on nVidia because their Linux drivers don't suck" campaign.
> Big companies like ATI don't respond to something until it bites them
> in their pocketbook.
> 
> --Bob

That is another method--but I am not certain it will happen. It would be nice 
though.

> On Fri, Jun 18, 2010 at 1:48 AM, joshua simmons <simmons...@gmail.com> wrote:
> > You will never get any speed out of a software renderer, and using Linux
> > won't change that.
> >
> > I don't think you quite understand the fundamental differences between CPU
> > architecture and a massively parallel gpu architecture.
> >
> > On 18 Jun 2010 18:41, "Katrina Payne" <fullmetalhar...@nimhlabs.com> wrote:
> >
> > The idea of a GPU is a method to take load off of the main CPU, to put it
> > onto another processor that has the only purpose of processing the graphics
> > you are doing.
> >
> > A form of delegating between multiple chips, as I understood it.
> >
> > This way, you have one chip working specifically on the graphics, and the
> > other doing everything else.
> >
> > And you are right---a software render cannot compete with a GPU on an even
> > field.
> >
> > You missed the point where Linux does not take up as much system
> > resources, typically, as the latest versions of Windows does.
> >
> > The idea being, to get a software renderer on Linux, to work on the same
> > level as a hardware renderer on Windows.
> >
> > Like I said, you can typically get Linux to run on a GBA... you cannot fit
> > anything else into there (maybe pong, I guess?). A GBA typically clocks in
> > at about 67.5MHz IIRC, with next to no RAM.
> >
> > Windows 7 kind of requires 1GiB at a minimum for RAM, and you are going to
> > need at least 1 or 2 GHz to get it running.
> >
> > My idea, again, in case you missed it, was to try to take up this saved
> > overhead, use it for software rendering, to make it comparable to the
> > hardware rendering on Windows.
> >
> > The idea being:
> >
> > If you can get that kind of comparable speed on Linux with Software
> > Rendering... this would make graphics card companies more inclined to make
> > drivers for Linux--as this shows how much more resources you can fit games
> > into.
> >
> > I mean, no idea how this point was lost, when what started this train of
> > thought was that Nvidia and ATI had issues supporting Linux with their
> > drivers.
> >
> > The software rendering engine would never be more than used as a form of
> > insane PoC idea. Or at least, never commercially.
> >
> > It would be a demo, that would be aimed at getting the attention of
> > hardware driver developers to target Linux for these drivers.
> >
> > A publicity stunt was what I was suggesting.
> >
> > ~Katrina


_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlcoders