On 1/17/2012 10:58 AM, karl ramberg wrote:


On Tue, Jan 17, 2012 at 5:43 PM, Loup Vaillant <[email protected]> wrote:

    David Barbour wrote:



        On Tue, Jan 17, 2012 at 12:30 AM, karl ramberg
        <[email protected]> wrote:

           I don't think you can do this project without an
           understanding of art. It's a fine-gridded mesh that makes
           us pick between practically similar artifacts with ease,
           and leaves the engineer baffled. From an engineering
           standpoint there is not much difference between a random
           splash of paint and a painting by Jackson Pollock. You can
           get far with surprisingly few resources if done correctly.

           Karl


        I think, even with an understanding of art and several art
        history classes in university, it is difficult to tell the
        difference between a random splash of paint and a painting by
        Jackson Pollock.

        Regards,

        Dave


    If I recall correctly, there is a method: zoom in.  Pollock's
    paintings
    are remarkable in that they tend to display the same amount of entropy
    no matter how much you zoom in (well, up to 100, actually).  Like a
    fractal.

    (Warning: this is a distant memory, so don't count me as a reliable
    source.)

    Loup.


My point here was not to argue about a specific artist or genre, but that the domain of art is very different from that of engineering. What makes some music lifeless, and some the most awe-inspiring you have heard in your whole life?


game art doesn't need to be particularly "awe inspiring", so much as "basically works and is not total crap".

for example, if the game map is just:
spawn near the start;
kill a few guys standing in the way;
hit the exit.

pretty much no one will be impressed.

in much the same way, music need not be the "best thing possible", but if it generally sounds terrible or is just a repeating drum loop, this isn't so good either.


the issue, though, is that the level of effort needed to reach "mediocre" is often itself still considerable, as one is comparing oneself against a mountain of other people, many trying to do the minimum they can get away with, and many others actually trying to make something decent.

it is more of a problem when one's effort is already spread fairly thin:
between all of the coding, graphics and sound creation, 3D modeling and map creation, ...

it can all add up fairly quickly (even if one cuts many corners in many places).

what I have thus far "technically sort of works", but still falls a bit short of what was the norm in commercial games in the late-90s / early-2000s era.

it is also on a much longer development time-frame. many commercial games go from concept to release in 6 months to 1 year, rather than requiring years; but then again, most companies don't have to build everything "from the ground up" (they have their own base of general art assets, will often license the engine from someone else, ...), and have a team of people on the project (vs a single-handed effort), ...


a lot of this is still true of the 3D engine as well. for example, my Scripting VM is still sort of lame (I am using an interpreter rather than a JIT, ...), and my renderer architecture kind of sucks and doesn't perform as well as could be hoped (ideally, things would be more modular and cleanly written, ...).

note: mostly I am using an interpreter because JITs are a lot more effort to develop and maintain IME, and the interpreter is "fast enough". the interpreter mostly uses "indirect threaded code", as this is a little faster and more flexible than directly dispatching bytecode via a "switch()" (although the code is a little bigger, given each opcode handler needs its own function).
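a minimal sketch of what "indirect threaded code" means here (all of the names and opcodes below are hypothetical, not my actual VM): each decoded instruction carries a direct pointer to its handler function, so the dispatch loop is just one indirect call per opcode, with no central switch():

```c
/* indirect-threaded dispatch sketch: each instruction holds a pointer
   to its handler, so the main loop does an indirect call per opcode
   instead of a switch() over bytecode. names are hypothetical. */

typedef struct VMInst  VMInst;
typedef struct VMState VMState;
typedef void (*VMOp)(VMState *vm, VMInst *inst);

struct VMInst  { VMOp op; int arg; };
struct VMState { int stack[64]; int sp; VMInst *ip; int running; };

/* each opcode gets its own handler function */
static void op_push(VMState *vm, VMInst *i)
    { vm->stack[vm->sp++] = i->arg; vm->ip++; }
static void op_add(VMState *vm, VMInst *i)
    { (void)i; vm->sp--; vm->stack[vm->sp - 1] += vm->stack[vm->sp]; vm->ip++; }
static void op_halt(VMState *vm, VMInst *i)
    { (void)i; vm->running = 0; }

static int vm_run(VMState *vm, VMInst *code)
{
    vm->sp = 0; vm->ip = code; vm->running = 1;
    while (vm->running)
        vm->ip->op(vm, vm->ip);   /* indirect call, no switch() */
    return vm->stack[vm->sp - 1];
}
```

the tradeoff mentioned above is visible here: dispatch is a single indirect call, but every opcode needs its own function, so the code footprint grows with the opcode count.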


likewise, after the Doom3 source code came out, I was left to realize just how drastically the engines differed internally (I had sort of assumed that Carmack was doing similar stuff with the internals).

the issue is mostly that my engine pulls off worse framerates on current hardware using the stock Doom3 maps than the Doom3 engine does (which leads to uncertainty regarding whether scenes can be sufficiently large/complex while still performing adequately).


for example:
my engine uses a mostly object-based scene-graph, where "objects" are roughly split into static objects ("brushes", "patches", "meshes", ...) and dynamic objects (3D models for characters and entities, "brush-models", ...);
it then does basic (dynamic) visibility culling (frustum and occlusion checks) and makes use of a dynamically-built BSP-tree;
most of the rendering is done (fairly directly) via a thin layer of wrappers over OpenGL (the "shader system");
many rendering operations are implemented via "queries" (such as generating a list of every "object" within a given sphere or bounding-box, ...);
...
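to illustrate what such a "query" looks like (types and names are hypothetical), here is the naive linear form of a sphere query; the BSP tree described below exists mostly to avoid walking the full object list like this:

```c
/* naive sphere query over scene objects (hypothetical types/names):
   collect every object whose bounding sphere touches the query
   sphere. the BSP tree exists to prune this to a subtree walk. */

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 origin; float radius; } SceneObj;

static int query_sphere(SceneObj *objs, int n_objs,
                        Vec3 center, float radius,
                        SceneObj **out, int max_out)
{
    int i, n = 0;
    for (i = 0; i < n_objs && n < max_out; i++)
    {
        float dx = objs[i].origin.x - center.x;
        float dy = objs[i].origin.y - center.y;
        float dz = objs[i].origin.z - center.z;
        float r  = radius + objs[i].radius;     /* sphere-sphere test */
        if (dx*dx + dy*dy + dz*dz <= r*r)
            out[n++] = &objs[i];
    }
    return n;
}
```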

OTOH, the Doom3 engine seems to instead use a system of deformable meshes, handled via a sort of "somewhat elongated" rendering pipeline: front-end 3D models are mapped to meshes; the renderer figures out the interactions between meshes and light-sources (what is lit by which lights, what may cast shadows on what, ...); it proceeds to do some amount of clipping and culling; ... (things like shadow volumes are generated as meshes, which are then clipped and subsequently drawn, ...).


as noted, there is a "dynamic BSP" in my engine, which is not based on raw polygons (unlike a "proper" BSP tree), but is instead more of an "object-sorting binary tree" (most objects are treated as bounding volumes).

it was partly based on an observation (back in 2005 or so) that I could essentially build BSPs in real-time via a fairly simple algorithm: add up the origins of all the objects in the set and divide by the count, giving a "centroid"; take the difference of each object's origin from the centroid, and add up the absolute differences (this was the original version; I later "improved" it by finding the "major axis" for each delta, potentially inverting the vector based on this axis, adding this into a 3x3 matrix, and then calculating a division plane, IIRC as a weighted sum); at each stage, the set of objects is split in half (with any objects which cross the plane being linked directly into the node).

(the BSP tree is needed mostly to speed up queries, as otherwise the 3D engine becomes CPU-bound, with lots of time spent in performing such queries...).
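the original (pre-"improved") split step can be sketched roughly like this (structure and names are hypothetical, not my actual code): the centroid of the object origins gives the split position, and the axis with the largest summed absolute deviation is taken as the plane normal; objects straddling the resulting plane would be linked into the node itself:

```c
#include <math.h>

/* centroid-based split-plane pick (sketch, hypothetical names):
   returns the split axis (0=x, 1=y, 2=z) and writes the plane
   position along that axis to *pos. */

typedef struct { float org[3]; float radius; } BspObj;

static int bsp_pick_plane(BspObj *objs, int n, float *pos)
{
    float cen[3] = {0, 0, 0}, dev[3] = {0, 0, 0};
    int i, j, axis = 0;

    for (i = 0; i < n; i++)             /* sum of origins... */
        for (j = 0; j < 3; j++)
            cen[j] += objs[i].org[j];
    for (j = 0; j < 3; j++)
        cen[j] /= n;                    /* ...over count: the centroid */

    for (i = 0; i < n; i++)             /* summed absolute deviations */
        for (j = 0; j < 3; j++)
            dev[j] += fabsf(objs[i].org[j] - cen[j]);

    for (j = 1; j < 3; j++)             /* axis with the largest spread */
        if (dev[j] > dev[axis])
            axis = j;

    *pos = cen[axis];
    return axis;
}
```

because each split only needs origins and one pass of sums, the tree can be rebuilt cheaply enough to do in real-time, which is the whole point vs a polygon-level BSP compiled offline.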


the general rendering process itself works like:
    make sure the BSP/... is built and up-to-date;
    perform visibility culling (typically using antiportal and basic occlusion checks, followed by doing a flat-colored render-pass and seeing what shows up as visible; all of this is marked via bitmaps);
    mark/initialize any geometry which is to use "vertex lighting" (a speed/quality/distance tradeoff), and update lighting for any such geometry (calculate how much light hits each vertex, ...);
    query all light sources which are within visible parts of the scene;
    for each light:
        query all objects which are within the light's range and potentially cast a visible shadow;
        draw shadow volumes (using depth-pass shadowing);
        query all visible (non-vertex-lit) objects within range of the light;
        draw the light-geometry for these objects (may involve use of normal+specular maps);
    draw surface geometry for visible objects (texture-maps, gloss effects, ...);
    draw any vertex-lit objects (single-pass, vertex color used for lighting);
    draw any visible decals / ...;
    draw any visible alpha-blended objects/geometry (windows / water / shader-effects / ...);
    draw misc stuff (crepuscular rays, volumetric fog, ...);
    ...
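a rough sketch of why the per-light loop above is the expensive part (all names hypothetical): with this kind of multipass lighting, each visible light adds both a shadow-volume pass and an additive lighting pass on top of the base surface pass, so the pass count grows linearly with the number of visible lights:

```c
/* pass-count sketch for multipass lighting (hypothetical names):
   one base surface pass, plus a shadow pass and an additive light
   pass per visible light. culled lights cost nothing. */

typedef struct { int visible; } SceneLight;

static int count_render_passes(SceneLight *lights, int n_lights)
{
    int i, passes = 1;              /* base surface-geometry pass */
    for (i = 0; i < n_lights; i++)
    {
        if (!lights[i].visible)
            continue;               /* culled by the visibility query */
        passes++;                   /* shadow-volume pass */
        passes++;                   /* additive light-geometry pass */
    }
    return passes;
}
```

this is also why the light-source query matters: every light the culling step can reject saves two whole passes over the affected geometry.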

a lot of low-level rendering is directed through a "shader system", which basically figures out how to go about drawing the specific geometry, applying various effects (vertex warping, gloss, ...), ... some texture/shader effects apply at the scale of whole objects (rather than simply to individual polygon faces or similar). the shader-system also partly manages things like specular and normal maps (hackishly, it passes the geometry back to the light-rendering code).


it could all be better, but improving it requires time and effort, and so competes against everything else I am working on (and has thus far been a lot of "going for low-hanging fruit" rather than "doing it right").

side note: the scene structure is fairly dynamic and so can be altered in real-time.


or such...

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
