On Dec 7, 2007 7:41 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> > No, my proposal requires lots of regular PCs with regular network
> connections.
>
> A properly connected set of regular PCs would usually have far more
> power than a single regular PC.
> That makes your hardware request special.
> My point is - AGI can successfully run on a single regular PC.
> Special hardware would be required later, when you try to scale
> out a working AGI prototype.
>

I believe Matt's proposal is not so much about raw memory or sheer
computational horsepower - it's about access to learning experience.
A supercomputer atop an ivory tower (or in the deepest government
sub-basement) has immense memory and speed (and a dense mesh of
interconnects, etc.) - but without interaction with anything outside itself,
it's really just a powerful navel-gazer.

Trees do not first grow a thick trunk and deep roots and only later switch to
growing leaves to capture sunlight.  As I see it, each node in Matt's proposed
network provides IO to us [existing examples of intelligence/teachers].
A node can ask its owner questions - "What does my owner know of A?" - and the
answer becomes part of its local KB.  Hundreds of distributed agents are now
able to query Matt's node about A (clearly Matt does not have time to answer
500 queries on topic A himself).

During the course of "processing" the local KB on topic A, there is a
reference to topic B.  Matt's node automatically queries every node that
previously asked about topic A (seeking the most likely authorities on the
inference).  My node asks me, "What do you know of B?  Is A->B?"  I contribute
to my node's local KB, and it weights the inference for A->B.  This answer is
returned to Matt's node (among potentially hundreds of other relative
weights), and Matt's node strengthens the A->B inference based on the received
responses.  At this point, the weights for A->B are spread across the network,
depending on the local KB of each node and the historical traffic of
query/answer flow.

After some time, I ask my node about topic C.  It knows nothing of topic C, so
it asks me directly to deposit information into the local KB (initial
context).  Through the course of 'conversation' with other nodes, my answer
comes back as the aggregate of the P2P knowledge within a query radius.  For a
simple question I may only allow 1 hour of think time; for a deeper research
project the query radius may be allowed to extend over 2 weeks of
interconnect.  During my research, my node will necessarily become
"interested" in topic C - and will likely become known among the network as
the local expert.  ("Local expert" for a topic would be a useful designation
for weighting each node as a primary query target, as well as for 'trusting'
the weight of the answers it returns.)
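
To make the flow concrete, here is a minimal Python sketch of what one such
node might look like.  Every name in it (Node, handle_query,
propagate_inference, the crude yes/no weighting) is my own invention for
illustration - it is not Matt's design, just the shape of the query/answer
traffic I have in mind.

# Hypothetical sketch of one node in the proposed P2P knowledge network.
# All class and method names are illustrative, not part of Matt's proposal.

import time
from collections import defaultdict


class Node:
    def __init__(self, ask_owner):
        # local knowledge base: topic -> set of statements
        self.kb = defaultdict(set)
        # inferred links: (topic_a, topic_b) -> accumulated weight
        self.inferences = defaultdict(float)
        # which peers have asked about which topics (likely authorities)
        self.interested_peers = defaultdict(set)
        # how often this node has handled a topic ("local expert" score)
        self.expertise = defaultdict(int)
        # callback that asks the human owner a question and returns text
        self.ask_owner = ask_owner

    def handle_query(self, peer, topic):
        """Answer a peer's query from the local KB, asking the owner
        if the topic is unknown, and remember who asked."""
        self.interested_peers[topic].add(peer)
        self.expertise[topic] += 1
        if not self.kb[topic]:
            # Seed the local KB with the owner's knowledge (initial context).
            self.kb[topic].add(self.ask_owner(f"What do you know of {topic}?"))
        return self.kb[topic]

    def propagate_inference(self, topic_a, topic_b):
        """When processing topic A surfaces a reference to topic B,
        poll the peers that previously asked about A and accumulate
        their weights for the A->B link."""
        for peer in self.interested_peers[topic_a]:
            self.inferences[(topic_a, topic_b)] += peer.weigh_inference(topic_a, topic_b)

    def weigh_inference(self, topic_a, topic_b):
        """Ask the owner whether A relates to B and return a weight
        (a crude yes/no here; a real node would do far better)."""
        answer = self.ask_owner(f"What do you know of {topic_b}? Is {topic_a}->{topic_b}?")
        self.kb[topic_b].add(answer)
        return 1.0 if "yes" in answer.lower() else 0.0

    def research(self, topic, peers, think_time_seconds):
        """Gather an answer on a topic from peers, bounded by how long
        the owner is willing to wait (the 'query radius')."""
        deadline = time.time() + think_time_seconds
        gathered = set(self.kb[topic])
        # Prefer peers with the highest expertise score for this topic.
        for peer in sorted(peers, key=lambda p: p.expertise[topic], reverse=True):
            if time.time() > deadline:
                break
            gathered |= peer.handle_query(self, topic)
        self.kb[topic] |= gathered
        return gathered

In this toy version the query radius is just a wall-clock budget and "local
expert" is just a hit counter, but it captures the query/answer flow described
above.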

I don't think this is vastly different from how people (as working examples
of intelligence nodes) gather knowledge from peers.

Perhaps this approach to "intelligence" aims less at an absolute definition
and more at a "best effort / most useful answer to date".  Even if this scheme
does not extend to emergent AGI, it builds a useful infrastructure that can be
used by currently existing intelligences as well as by whatever AGI does
eventually come into existence.

Matt, is this coherent with your view or am I off base?
