On Mon, Jan 20, 2020 at 8:04 AM 'Nil Geisweiller' via opencog <
[email protected]> wrote:

>
> I can't really tell, I've never used StateLink. As Linas suggested Value
> might be better for various reasons. But as an AI-via-reasoning
> fundamentalist I would think ideally you should avoid immutable
> structures. That means for instance recording perceptions through time,
> via for instance AtTimeLink, as opposed to changing states.
>
> Doing this makes it easier to acquire a consciousness of time, but it's
> obviously much more expensive. You have to learn (or have the system
> learn) what to store in long-term memory, what to discard, etc.
>

Nil, this is an important point, so let me belabor it a bit. First,
Values allow gigapixels per second of data to be processed, without piping
it all into the AtomSpace.  But, if needed, they can be sampled, using
e.g. ValueOfLink, TruthValueOfLink, ConfidenceOfLink, StrengthOfLink; the
last two take an ordinary, highly mutable SimpleTV, grab the numerical
values in it, and stuff them into NumberNodes.  The analogy here is
that the neurons in my eyeballs are recording gigapixels per second, but
that data is not pumped into the hippocampus, turned into memories,
recorded as a movie-of-my-life, 60fps 24x7. No, only a few select images
are remembered.
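To make the sampling idea concrete, here is a toy sketch in plain Python (a hypothetical model, not the actual AtomSpace or ValueOfLink API; all the class and method names here are made up for illustration). Mutable values stream at a high rate off to the side of the graph, and only the occasional sample gets frozen into an immutable atom:

```python
# Toy model of the Atoms-vs-Values split (hypothetical; not the real
# OpenCog API). Atoms are immutable graph entries; Values are mutable
# data attached to them, updated at high rates without growing the graph.

class Atom:
    """An immutable node: its identity is the pair (type, name)."""
    def __init__(self, type_, name):
        self.type = type_
        self.name = name
    def __repr__(self):
        return f'({self.type} "{self.name}")'

class AtomSpace:
    def __init__(self):
        self._atoms = {}    # (type, name) -> Atom, deduplicated
        self._values = {}   # (atom, key) -> mutable value, held off-graph

    def add(self, type_, name):
        return self._atoms.setdefault((type_, name), Atom(type_, name))

    def set_value(self, atom, key, value):
        # Cheap, high-frequency update: never creates new atoms.
        self._values[(atom, key)] = value

    def value_of(self, atom, key):
        # Rough analogue of ValueOfLink: sample the current value and
        # freeze the snapshot into an immutable NumberNode-like atom.
        v = self._values[(atom, key)]
        return self.add("NumberNode", str(v))

aspace = AtomSpace()
eye = aspace.add("ConceptNode", "retina")

# A million updates can stream through without growing the AtomSpace ...
for frame in range(1_000_000):
    aspace.set_value(eye, "brightness", frame % 256)

# ... and only the rare sample becomes a permanent, immutable atom.
snapshot = aspace.value_of(eye, "brightness")
```

The point of the sketch: the update loop touches only the mutable side-store, so the graph itself stays at two atoms no matter how much data flows through.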

Another way to think of immutable Atoms vs. mutable Values is like pipes
and water. Imagine, say, a refinery: pipes everywhere. These are
"immutable", processing the stuff flowing through them.  A better analogy,
maybe: a GPU, processing pixels, with an effectively immutable pipeline
algo (i.e. you don't change the GPU algo more often than once every few
seconds, and in practice it only changes every few hours, or minutes. It's
"immutable".)  You *can* perform "reasoning" on the GPU algo - after all,
it's just a bunch of code, and your reasoning engine can deduce new GPU
algos, as needed.  A good example is TensorFlow: if you look at the
TensorFlow specification language, it looks a whole lot like Atomese. No
surprise. What does it do? It specifies the wiring diagram for some
specific neural net. What are those neurons doing? Gigapixels per second
of something.

So Atoms are for wiring diagrams, Values are for the currents that flow
through the diagrams. Classical theorem-proving, and forward/backward
chaining build new wiring diagrams. Classical (predicate) logic has very
simple Values: True, False, which can be assigned to nodes in the wiring
diagram, independently of the wiring itself.  PLN generalizes the "flowing
things" to probability+confidence pairs. Values generalize a bit more: any
kind of "stuff" at all - colors, 3D coordinates, really anything.
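A minimal sketch of that wiring-vs-flow split, again in plain Python (hypothetical; not the actual PLN machinery, and the multiplicative combination rule below is just one simple stand-in for real PLN formulas). The wiring - here, a single crude AND gate - is built once and stays fixed, while the values flowing through it are reassigned freely:

```python
# Hypothetical sketch: a fixed "wiring diagram" evaluated under
# changing value assignments. The graph never changes; only the
# values flowing through it do.

# Wiring: C holds whenever both A and B hold (a crude AND gate).
wiring = [("C", ("A", "B"))]

def propagate(wiring, values):
    """Classical logic: push True/False through the fixed wiring."""
    out = dict(values)
    for target, (x, y) in wiring:
        out[target] = out[x] and out[y]
    return out

print(propagate(wiring, {"A": True, "B": False}))  # C comes out False
print(propagate(wiring, {"A": True, "B": True}))   # C comes out True

def propagate_pln(wiring, values):
    """Same wiring, but the flowing "stuff" is now (strength,
    confidence) pairs, combined multiplicatively here as a toy
    stand-in for the real PLN formulas."""
    out = dict(values)
    for target, (x, y) in wiring:
        s1, c1 = out[x]
        s2, c2 = out[y]
        out[target] = (s1 * s2, c1 * c2)
    return out

print(propagate_pln(wiring, {"A": (0.9, 0.8), "B": (0.5, 1.0)}))
```

Swapping `propagate` for `propagate_pln` changes nothing about the wiring; only the kind of value flowing through it is generalized, which is the whole point.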

Then things like StateLink and ValueOfLink are just extra gadgets that one
needs, in practice, to do assorted specialty things.  AtTimeLink
is OK for forming occasional memories of a changing external world, but I
doubt you need it for "consciousness" of time.  I think you can gain that
just fine by taking snapshots of Values with ValueOfLink and/or holding
slowly-mutating state in StateLink.

So the above is just the "vision"; it's important to understand that
vision.  Of course, doing something with it, in practice, is a lot harder
:-)

 -- Linas

-- 
cassette tapes - analog TV - film cameras - you
