Ben, 

 

I see the credit problem as the result of a process of inference, not as a
precondition, and certainly not as something that needs to be engineered
into a GI system. The problem with your approach is that you are using your
own inference, the inference in your own brain, to draw conclusions such as
the credit problem, and then you try to engineer them into a system. This is
narrow AI. Even if you were to succeed in doing that, it would still be
narrow AI, because the inference that reached the conclusion would have
stayed in your brain. You think you have identified a problem that needs to
be solved, and then again you use your own inference to solve it. That leads
you to another problem, which you also solve in your brain. You solve every
problem yourself without ever allowing the system to do it on its own. Are
you really trying to address grounding by way of geometric figures and
centers of gravity? 

 

Did you once say you have a son? How do you educate him? Do you solve every
problem for him? No, at some point you trust him: you see him confronting a
problem and you let *him* solve it. And soon enough you discover that he has
become an adult and no longer needs you. What would he be if you had kept
solving every problem for him? 

 

I like to think that AGI requires the ultimate sacrifice: the sacrifice of
yourself. At some point you have to just let go, get out of the scene, and
let the system be and do its thing. So yes, there are many problems I have
not yet "resolved" and do not feel compelled to resolve myself. Evolution
did not create the brain to solve problems or to assign credit. It created
the brain to survive. It put emergent inference into it for a reason.
Putting credit before inference is a reversal of causality. And of course it
is difficult, because you are left solving all the problems yourself, which
is what the narrow AI people have been doing all along. 

 

Toy problems? Yes, I have solved only toy problems so far. But there is no
scale in causets. Where do you set the scale? What property of causets would
you use to set it? I solved a set with 18 elements, then one with 33, then
one with 1143. So where do I stop? Will you say it will not work for, say,
3541 elements? The size of the set only affects the time it takes to solve
it. Size plays a role in my toy problems only because I lack the resources
to go large scale. 

 

Sergio

 

 

 

From: Ben Goertzel [mailto:[email protected]] 
Sent: Saturday, June 09, 2012 3:41 PM
To: AGI
Subject: Re: [agi] Representations and data structures

 


I'm currently happy with a more flexible weighted, labeled hypergraph
representation scheme. I think that causation is only one among a host of
different sorts of relationship that a human-like GI needs to internally
represent, and I don't see a need to give it a role at the foundation of the
representational scheme.

Inferred causation is important, because it drives the direct choice of
actions. However, the difficulty of the assignment-of-credit problem (which
your formalism certainly does not resolve in any way you've yet articulated)
seems to imply that directly tying everything in the mind to actions via
explicit causal relations is not a feasible way to proceed in a scalable
system, though it will work in toy examples...

-- Ben G

On Sat, Jun 9, 2012 at 3:42 PM, Sergio Pissanetzky <[email protected]>
wrote:

In Computer Science, data structures are usually designed for computational
efficiency. I use a FIFO queue if I am planning to use a FIFO algorithm, so
the queue supports the algorithm efficiently. I use a database if I need to
store large amounts of information and retrieve different views of it with
the least possible computational effort. 

 

Such representations suffer from one limitation: they completely divorce
data from meaning, that is, from what is sometimes called "metadata."
Database tables contain symbols, but not their meaning. If I have a database
of employees with their names and addresses, and I want the address of a
certain employee, I have to write a SQL query that references the exact
tables where that information resides. Or write a case-specific front end
that "knows" everything that the database does not. Meaning remains with the
user. 
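A minimal sketch of this point (the schema and names here are my own
hypothetical example, not anything from the discussion): the rows hold only
symbols, while the knowledge of which table and column mean "the address of
an employee" lives entirely in the query the user writes.

```python
# The database stores symbols; the metadata ("address of an employee")
# exists only in the SQL I write. Schema is hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, address TEXT)")
db.execute("INSERT INTO employees VALUES ('Ada', '12 Queen St')")

# This query encodes the meaning the database itself does not have:
# that 'employees.address' holds addresses keyed by 'employees.name'.
row = db.execute(
    "SELECT address FROM employees WHERE name = ?", ("Ada",)
).fetchone()
print(row[0])  # 12 Queen St
```

Swap the table for one named `t1` with columns `c1`, `c2` and the data is
unchanged, but the query no longer means anything to a reader: the meaning
was never in the data.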

 

In Physics, representations are chosen based on their mathematical
properties, and on how well those properties represent the physics of the
system that is being represented. Efficiency in computation is not
considered. Frequently, the representations satisfy group-theoretical
requirements necessary to account for the symmetries of the physical system.
For example, I use tensors to represent mechanical systems because tensor
representations are invariant under coordinate transformations and can
adequately represent physical quantities that have magnitudes and
directions, such as position, velocity, force, and moments of inertia. I use
spinors to represent quantum systems with spin because spinors have the
adequate transformation properties. I use the Lorentz group to represent
relativistic systems again because the representation remains invariant
under transformations in spacetime. 

 

For complex causal systems, causal sets are the adequate representation.
Causal sets have many intrinsic properties that correspond to observed
properties of complex systems: attractors, hierarchical structures,
potential wells with energy levels, the butterfly effect, and deterministic
chaos. Causal sets are isomorphic to algorithms, and as such they can
represent behavior. Any algorithm, any computer program, can be considered a
causal set. 
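To make the program-as-causal-set claim concrete, here is a small sketch of
my own (the three-statement program and the read/write convention are my
illustration, not Sergio's formalism): statement b causally follows
statement a when b reads a variable that a writes, and every linear order
consistent with that relation is a behavior-equivalent execution.

```python
# A straight-line program viewed as a causal set. Causal relation:
# (a, b) means a must precede b because b reads what a writes.
from itertools import permutations

# Hypothetical program:  s1: x = 1   s2: y = 2   s3: z = x + y
writes = {"s1": {"x"}, "s2": {"y"}, "s3": {"z"}}
reads = {"s1": set(), "s2": set(), "s3": {"x", "y"}}

causes = {(a, b) for a in writes for b in reads
          if a != b and writes[a] & reads[b]}

def consistent(order):
    """True if the linear order respects every causal pair."""
    pos = {s: i for i, s in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in causes)

valid = [order for order in permutations(writes) if consistent(order)]
print(sorted(causes))  # [('s1', 's3'), ('s2', 's3')]
print(valid)           # s1 and s2 commute; s3 is always last
```

The two valid orders are exactly the two ways of running the program that
produce the same behavior, which is the sense in which the causal set, not
any one listing of the statements, represents the algorithm.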

 

With the addition of a functional that represents physical action, and that
marks the exact point where Physics enters the pure Mathematics of causal
sets, causal sets exhibit transformations that are behavior-preserving.
These transformations are a type of inference known as "emergent inference."
Inference is any process that can derive new facts from existing facts.
Emergent inference maps unstructured causal sets, of the kind obtained from
sensors, to structured causal sets; the information collected from the
sensors is the existing fact, and the resulting structure is the new fact.
Seen as a map, emergent inference can be considered a function. The function
is deterministic, uncomputable, and unpredictable. The mapping is bijective,
and the sets are countably infinite. There exists an inverse function, which
is deterministic and computable. The structures are the same as those used
in object-oriented analysis, and can be represented by UML diagrams. The
behavior-preserving transformations are equivalent to refactoring. Causal
sets also apply to models of cognition. 
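One way to sketch how an action functional could turn an unstructured set
into structure, on one plausible reading of the paragraph above (the cost
definition and the tiny causal set here are my assumptions, not necessarily
the exact functional Sergio uses): score each linear arrangement by the
total positional distance between causally related elements, and keep the
arrangement of least action. Minimization pulls related elements together,
and the grouping that emerges is the "new fact."

```python
# Hypothetical action functional over a small causal set made of two
# independent causal chains. Minimizing it clusters each chain.
from itertools import permutations

elements = ["a1", "a2", "b1", "b2"]
causes = {("a1", "a2"), ("b1", "b2")}  # a1 -> a2, b1 -> b2

def action(order):
    """Sum of positional distances between causally related pairs."""
    pos = {e: i for i, e in enumerate(order)}
    return sum(abs(pos[y] - pos[x]) for x, y in causes)

best = min(permutations(elements), key=action)
print(best, action(best))  # ('a1', 'a2', 'b1', 'b2') 2
```

In the least-action arrangement each causal pair sits adjacent, so the two
chains separate into two groups; a brute-force search like this is only for
illustration, since the number of arrangements grows factorially.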

 

Sergio

 

 


AGI |  <https://www.listbox.com/member/archive/303/=now> Archives
<https://www.listbox.com/member/archive/rss/303/212726-11ac2389> | Modify
Your Subscription



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche





