Hi,
 
Just a few comments on the limitations imposed by RAM and processor constraints...
 
Firstly, in practice, boosting the amount of RAM is not very easy on modern computers. On a standard PC-type architecture, the most RAM you can get -- so far as I know -- is 32GB, on a Penguin 64-bit Linux server based on AMD Opteron chips.  To go beyond that, you're into super-high-priced machines sold by IBM and the like, which are out of the range of nearly all (maybe all) AI R&D groups.  On the Novamente project, we have been working within the 4GB limit that comes with 32-bit architecture, but during early 2004 we will be upgrading the software to 64-bit and running it on a machine with 16GB of RAM.
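As a quick sanity check on the 4GB figure -- it's just the address-space
arithmetic of 32-bit vs. 64-bit pointers (plain Python, nothing
Novamente-specific):

    # A 32-bit pointer can name at most 2**32 distinct byte addresses,
    # which is where the 4GB ceiling comes from; a 64-bit pointer lifts
    # that ceiling far beyond any RAM you could actually install.
    GiB = 2 ** 30
    print(2 ** 32 / GiB)   # -> 4.0 GiB addressable with 32-bit pointers
    print(2 ** 64 / GiB)   # -> about 1.7e10 GiB with 64-bit pointers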
 
To expand beyond 32GB within reasonable cost parameters requires one to move to a distributed processing framework, which obviously increases access time (to RAM on distant machines) very considerably (and requires a lot of specialized engineering -- which we know how to do for Novamente based on our earlier similar work with Webmind, and may well undertake eventually).
 
Similarly, to get processing speed beyond 2.5GHz or so, one has to move to a distributed processing framework, which requires a whole lot of complexity and mess (of a nature that depends on the type of computing you're doing).
 
Anyway, from a pragmatic perspective, the situation with RAM and processor speed is kinda symmetrical.  After you push to the limits of available machines, you start messing with specialized and tricky distributed-processing solutions...
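To put rough numbers on "very considerably": a remote access pays a network
round trip on top of the memory lookup itself. The sketch below uses ballpark
latency figures and made-up node names, not measurements from Novamente or
Webmind:

    # Order-of-magnitude guesses, not benchmarks.
    LOCAL_DRAM_NS = 100            # ~100ns for a main-memory access
    LAN_ROUND_TRIP_NS = 100_000    # ~0.1ms for a LAN round trip

    def access_cost_ns(key, nodes, local_node):
        """Hash the key to the node that owns it; remote hits pay the trip."""
        owner = nodes[hash(key) % len(nodes)]
        if owner == local_node:
            return LOCAL_DRAM_NS
        return LAN_ROUND_TRIP_NS + LOCAL_DRAM_NS

    nodes = ["node0", "node1", "node2", "node3"]
    print(access_cost_ns("atom:1234", nodes, "node0"))  # local, or ~1000x slower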
 
-- Ben G


Brad Wyble <[EMAIL PROTECTED]> wrote:
>
> This is exactly backward, which makes using it as an unqualified
> presumption a little odd. Fetching an object from true RAM is substantially
> more expensive than executing an instruction in the CPU, and the gap has
> only gotten worse with time.


That wasn't my point, which you may have missed. The point is that with
our current technology track it's far cheaper to double your memory
than to double your CPU speed. I'm not referring to the number of memory
bits processed by the CPU, but to the total number of pigeonholes available.
These are not one and the same.

Therefore you can make gains in representational power by boosting the
amount of RAM and having each bit of memory be a more precise
representation. You can afford to have, for example, a neuron encoding
blue sofas and a neuron encoding red sofas, while a more restricted-RAM
approach would need to rely on a distributed representation, one with only
sofa neurons and color neurons (apologies for the poor example, but I'm
in a hurry).
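A toy version of that contrast, with made-up feature names (just to
illustrate the trade-off, not anyone's actual encoding):

    # Localist: one unit per specific conjunction. Needs many more units
    # (i.e. more RAM), but each active unit is unambiguous.
    localist = {"blue_sofa": 1, "red_sofa": 0, "blue_car": 0, "red_car": 1}

    # Distributed: separate units for object and color. Far fewer units,
    # but a scene with a blue sofa and a red car activates the same units
    # as one with a red sofa and a blue car -- the precision is lost.
    distributed = {"sofa": 1, "car": 1, "blue": 1, "red": 1}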

Your points are correct, but they refer to the bottleneck of getting
information from RAM to the CPU, not to the total amount of RAM available.


> Back to the problem of the human brain, a big part of the problem in the
> silicon case is that the memory is too far from the processing, which adds
> hard latency to the system. The human brain has the opposite problem: the
> "processing" is done in the same place as the "memory" it operates on (great
> for latency), but the operational speed of the processing architecture is
> fundamentally very slow. The reason the brain seems so fast compared to
> silicon for many tasks is that the brain can support a spectacular number of
> effective memory accesses per second that silicon can't touch.

Both technologies have their advantages and disadvantages. The brain's
memory capacity (in terms of the number of addressable bits) cannot be
increased easily, while a computer's can be. I merely suggest that this
fundamental difference is something to consider if one is intent on
implementing AGI on a von Neumann architecture.




-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


