I'm really beginning to think there are two sides to this issue.

1)  Design Methodology.
2)  Code Optimization.

1)  Eytan alluded to this here: "and then nullify them nothing happens but
when you recreate them still the memory stays stable".  Today most sites are
page based, and I think that within a common site URL, browsers attempt to
"hold history active" until you leave for an entirely new destination, then
purge memory.

Everything I have been trying to do is based on using the DynAPI to create a
single interface linked to a dynamically created, updated and evolving site.
Think of the JS as providing "interface" only.  Then you would use dynamic
content engines to pump new content into "reusable components" using set"xx"
methods (heck, you could even reskin an object on the fly).  This is the
direction the web is moving with CSS and XML anyway.

There's no reason you couldn't initialize a series of display widgets then
call them in, resize them and modify content based on user activities.  I
think page based browsing + DynAPI (with any level of pizzazz) is a recipe
for failure.  It will eventually overload the system, since every page is
going to want its own widget set.  Design would be based on creating sites
with as tight a "widget depth" (how many total layers) as possible.  Reuse
widgets where possible before creating new instances of a prototype.

2) Optimization.  I came from a game developer www.angelstudios.com.  I was
COO, and I found the business of "making games" slow and boring (the idea
of waiting and working two years to see if you have a winner, only to watch
the "bounty" get slyly skimmed into the publisher's pockets, grows old
fast).  Hence, I left the industry for the fast-paced insanity of the web.
But game developers are optimization freaks; they have to be.

I did a little research (I wasn't a programmer; I made sure they didn't eat
too many Twinkies during a given month, because at Angel we bought all the
Twinkies for our employees; it kept them coding longer).  Just from a
"tricks of the trade" standpoint, our code could be significantly optimized
with a skew toward the Pentium processor family.  I'm sure with additional
research we could learn even more.  Here's what I found so far:

a)  APIs, in general, are non-optimized.  DirectX works (as an API)
because it was designed to be as "thin" as possible.

b)  Use global variables.  Don't pass parameters to time-critical
functions; instead, use a global parameter-passing area.  Example:

    void Plot(int x, int y, int color) {
        // plots a pixel on the screen
        video_buffer[x + y*MEMORY_PITCH] = color;
    }

Optimized

    int gx, gy, gcolor;  // define globals

    void Plot_G(void) {
        video_buffer[gx + gy*MEMORY_PITCH] = gcolor;
    }

I realize we are an API, but implementing aspects of this where appropriate
will help.

c)  Always use 32-bit variables rather than 8- or 16-bit ones.  The
Pentium and later processors are all 32-bit, which means they don't like
8- or 16-bit data words; in fact, smaller data can slow them down due to
caching and other related memory anomalies.  Example:

    struct CPOINT {
        short x, y;
        unsigned char c;
    };

Optimized

    struct CPOINT {
        int x, y;
        int c;
    };

I don't think this applies a great deal to the current API but I am sharing
it anyway.

d)  Program in a RISC-like (reduced instruction set computer) manner.  In
other words, make code simple rather than complex.  Pentium-class
processors like simple instructions rather than complex ones.  Longer,
simpler code is faster than short, complex code structures.  Example:

    if ((x += 2*buffer[index++]) > 10) {
        // do stuff...
    }

Optimized

    x += 2*buffer[index];
    index++;

    if (x > 10) {
        // do stuff...
    }

We do sin here a bit....

e)  Use binary shifts for simple multiplication or division of integers by
powers of 2.  Since all data in a computer is stored in binary form,
shifting the bit pattern to the left or right is equivalent to
multiplication or division.
Example:

    int y_pos = 10;
    // multiply y_pos by 64
    y_pos = (y_pos << 6);  // 2^6 = 64

Similarly,

    //to divide y_pos by 8
    y_pos = (y_pos >> 3); // 1/2^3 = 1/8

Disadvantage: the code is confusing to some readers.

f)  Write efficient algorithms.

g)  Don't write complex data structures for simple objects.

h)  Don't go crazy with inheritance and layers of software.  Keep the base
as lean as possible.

i)  Pentium-class processors use an internal data and code cache.  Be
aware of this, and try to keep your functions relatively small so they can
fit into the cache (16KB-32KB+).  Store data in an accessible way; that
minimizes cache thrashing and access to main memory or the secondary
cache, which is 10 times slower than the internal cache.
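As a sketch of what cache-aware data access looks like (an illustrative C fragment, not DynAPI code; the grid size and function names are made up): summing a 2-D array row by row walks memory sequentially, so every cache line fetched is fully used, while summing it column by column strides far ahead on each access and thrashes the cache.  Both functions compute the same total; only the access pattern differs.

```c
#define N 512               /* illustrative grid size */

static int grid[N][N];      /* C stores rows contiguously (row-major) */

/* Fill the grid with some values so the sums are non-trivial. */
void fill_grid(void) {
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            grid[y][x] = x + y;
}

/* Cache-friendly: the inner loop touches consecutive addresses. */
long sum_row_major(void) {
    long total = 0;
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            total += grid[y][x];
    return total;
}

/* Cache-hostile: the inner loop jumps N ints between accesses. */
long sum_col_major(void) {
    long total = 0;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            total += grid[y][x];
    return total;
}
```

Same result either way; on a large enough grid the row-major version is markedly faster because of the cache.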

j)  Use precomputed "look-up tables" when possible.
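The classic game-developer version of this trick: compute an expensive result once at startup, then replace the computation in the hot loop with a single array read.  A small illustrative C sketch (the table name and size are made up; a real game would typically precompute sines/cosines rather than squares):

```c
#define TABLE_SIZE 256              /* illustrative table size */

static int squares[TABLE_SIZE];     /* precomputed results */

/* Fill the table once at startup. */
void init_squares(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        squares[i] = i * i;
}

/* Hot path: one memory read instead of a multiply (or, in the
   classic case, instead of an expensive sin()/cos() call). */
int square(int i) {
    return squares[i];
}
```

The trade is memory for speed, which fits the cache advice above: keep the table small enough to stay resident.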


So, I think that freeing memory and optimizing speed will be the marriage
of a quality site design/interface and a well-constructed code base to
draw from.  While some users will elect to take a more conventional path,
I think this will ultimately lead to the best "final result".  But hey!
I'm just a designer...

Ray


_______________________________________________
Dynapi-Dev mailing list
[EMAIL PROTECTED]
http://lists.sourceforge.net/lists/listinfo/dynapi-dev
