*Todor:*

Mike, while some of the things you say are correct and agree with the school
of thought that I and several others on the list share
(BTW, the paper about embodiment was a good one), you fail to recognize
this: you say things that others agree with, yet you always make it "no,
it's not like what I say!".
Everybody is against you, and what you say is always "different". :)

And once you promote embodiment (sensori-motor generalizing hierarchies -
the preferred school of thought of me, Sergio, Boris and Jeff Hawkins),
the next time you cite guys who are definitely AI-niks with a mask (IMHO),
such as that Hofstadter paper about fonts you cited, which was displaying
your confusion about generalization.
Hofstadter seems to have confused generalization and specialization, just
like you and many others still do.

Also that guy Bart Kosko and his word play - I read a book of his on
fuzzy logic a long time ago, and do you know what I understood from it?
That fuzzy logic is not as fuzzy as they say, and it's nothing special;
it doesn't solve the problem, it's classical logic in new shoes -
classical logic, but with more degrees of freedom.
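To illustrate what I mean by "more degrees of freedom" - a minimal sketch (my own illustration, not code from Kosko's book): classical truth values are just fuzzy membership degrees restricted to {0, 1}:

```python
def classical_and(a: bool, b: bool) -> bool:
    # Binary matching: truth values only from {True, False}.
    return a and b

def fuzzy_and(a: float, b: float) -> float:
    # A common fuzzy conjunction (the min t-norm); degrees in [0.0, 1.0].
    return min(a, b)

# With crisp inputs the two coincide; fuzzy logic only adds intermediate
# degrees between 0 and 1 - the underlying machinery stays classical.
print(fuzzy_and(1.0, 0.0))   # 0.0, same as classical_and(True, False)
print(fuzzy_and(0.8, 0.6))   # 0.6, a degree classical logic cannot express
```

Same connectives, just with a continuous range of truth values - "new shoes", not a new kind of logic.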

I wrote about this in my "Theory of Mind and Universe", Part 4, first
published in early 2004, in the section:

"21. Fuzzy logic and is it really fuzzy? [Also Truth, comparison … ]", p.21
- p.25:

http://research.twenkid.com/agi_english/Teenage_Theory_of_Universe_and_Mind_4.pdf

or as I re-published the excerpt in the blog:
http://artificial-mind.blogspot.com/2012/06/fuzzy-logic-and-is-it-really-fuzzy.html

*Notice the definition of "truth" there:*

That excerpt says a lot on the topic and about that "fuzzy fuzzy logic"
bullshit; it explains the principles of recognition and generalization in
generalizing sensori-motor hierarchies.


...

You're right - and my school of thought agrees - that classical logic etc.
are derived and justified based on sensory experience; they are lower
resolution. And the sensory mobility you talk about (adjustment of
coordinates) is crucial and important for development.

However, once working rules are derived, Ben is right that they can be
applied without precise simulation at sensory-level detail - it's not
needed, unless the higher-level rules happen to fail;
then they need to be re-adjusted. Generalization - loss of detail - allows
working with abstract concepts and solving big abstract problems that
cannot be solved directly (like building a spaceship). Yes, abstract
domains of high generality have to be converted (grounded) down to the
lowest-level representations (both sensory and motor) in order to make
sense in reality. That's correct, but spaceships and most of engineering,
for example, are initially born in sketches, blueprints and computer
models, which are the result of derivation from sensory input - digested,
compressed and morphed.

Humans do have these hierarchies, which allow them to convert logical
expressions into "real-world problem solving" by filling the gaps with
experience (lower-level regularities already collected, plus the ones
experienced at the moment).


>This differs from the factual statements we make about
>the real world - statements such as "Pine needles are green" or "Chlorophyll
>molecules reflect green light." These factual statements are approximations.
>They are technically vague or fuzzy. And they often come juxtaposed with
>probabilistic uncertainty: "Pine needles are green with high probability."
>Note that this last statement involves triple uncertainty. There is first
>the vagueness of green pine needles, because there is no bright line between
>greenness and non-greenness - it is a matter of degree. There is second only
>a probability whether pine needles have the vague property of greenness. And
>there is last the magnitude of the probability itself. The magnitude is the
>vague or fuzzy descriptor "high," because here, too, there is no bright line
>between high probability and not-high probability.

*Todor:*

Sorry to say this about that VIP fuzzy-logic guy, but that's fuzzy
fuzzy-logic blah-blah-ing, see the citation above...

- Yes, natural language is abstract and imprecise compared to sensory
input, which has higher resolution and a wider scope - many cases that
can't be encompassed in a single sentence. What a discovery.

- Yes, if there's no grounding/mapping to sensory experience, and one is
using only NL to talk about colours, the details cannot be expressed
precisely (namely, if the colours are represented with one word, and not,
for example, as a numerical interval, or as photo/video samples in
controlled conditions etc., which is sensory data) - it's like a blind man
who's never seen talking about colours.

NL is an integrating medium for all modalities; it's the most abstract, and
it points to labels of records of sensory inputs.

That's why for precise needs it's not used, or is used with auxiliary
tools (diagrams, sample data, specific terms, video and audio records,
motion-capture records etc.)

...

There are calibrated devices and physical measurements which are not vague
or fuzzy.

Measure the peak wavelength of the reflection of light with a given
frequency in a given environment.

...

*"Green" and "not green" is a logicist's nonsense*

Colours are not about "green" and "not green"; it's about green vs. red,
yellow, blue etc., and also about the context where this is applied, i.e.
the adjacent areas, the specific environment.

If something is too general, short or superficial (a random sentence
told from nowhere), it *apparently* and by definition is not intended or
not applicable for practical usage.

Colour has a precise mapping to wavelengths, which is mapped to differences
in the sensation of different types of cells in the retina. It's
"technically not vague".

"Non-green" is logicists' bullshit - stones, "running", f*, beautiful,
34859fje8fjw39fj342 - it's all "non-green", and what's the meaning and use
of this?

There are modalities, defined by sensory matrices, that have coordinates
and intensities per element.
In a continuous domain of a sensory modality there are continuous steps -
"smooth" "greenness" or so, "fuzzy" "logic".

However, in these cases, if one is referring to those matrices, he should
*cite* them in terms of that domain and its resolution, such as:
0x20FF56, 0x10FF10, 0x002000, or 0x016A0C etc. - these are 24-bit RGB
colours. The one with the biggest difference of the second component
compared to the others is the most "green".
But this is not logic, this is simple measurement.
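As a sketch of that "simple measurement" (my own toy code; the colour values are the ones above):

```python
def green_dominance(rgb: int) -> int:
    """How much the green channel of a 24-bit RGB colour exceeds
    the stronger of the other two channels - measurement, not logic."""
    r = (rgb >> 16) & 0xFF
    g = (rgb >> 8) & 0xFF
    b = rgb & 0xFF
    return g - max(r, b)

colours = [0x20FF56, 0x10FF10, 0x002000, 0x016A0C]
most_green = max(colours, key=green_dominance)
print(f"{most_green:06X}")   # 10FF10
```

No truth degrees involved - just comparing numbers read off a sensory matrix.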


"Pine needles are green with high probability."

That is to say that they are not red or blue, and compared to those steps
the difference is discrete enough.

...

*Nooo, optical illusions!*

:)) Yeah, some blah-blahbers like to use optical illusions to explain how
magical the brain is.

In fact, optical illusions are predictable and common, which means that
they are the result of general and repeatable patterns of operation of the
brain - different people, different experience, the same "illusions".

In my interpretation, this is to say that they are not really illusions;
they always appear in ambiguous scenes, which are artificial extremities
that don't normally exist in real perceptions.

The brain is trained to see scenes, usually spatial scenes with light
sources (thus light spreading according to the objects and the geometry of
the scene). It's not just a PC connected to two flat-bed scanners which
always work under the same lighting conditions in orthogonal projections,
and which record the colours exactly as they come onto the CCD line...

Also, even web cameras adjust the white balance and do colour corrections,
accenting or attenuating one or another colour aspect.
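For example, a minimal sketch of such a correction (the "grey-world" channel scaling here is my illustration, not a claim about how any particular camera does it):

```python
def grey_world_balance(pixels):
    """Scale each RGB channel so the average colour becomes neutral grey."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(avg) / 3
    gains = [grey / a if a else 1.0 for a in avg]   # per-channel correction
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A scene tinted towards red is pulled back towards neutral:
print(grey_world_balance([(200, 100, 100), (180, 90, 90)]))
# [(133, 133, 133), (120, 120, 120)]
```

So even before any "logic" is applied, the recorded colour already depends on a correction model, not on raw photon counts.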

Our consciousness sees what the spatial shape and colour would be if they
follow the reconstructed 3D model and the light has the particular
reconstructed (supposed) properties.
In cases where the sources, intensities and colour of the light, and also
the 3D structure of the scene, cannot be unambiguously reconstructed,
colours may be seen "wrongly" compared to a measurement with a "calibrated
colourimeter" or so, or compared to the lowest-level input to the retina if
each reconstructed "pixel" is seen individually with the rest set to black.

In fact, this only shows that the higher levels of the brain cannot
access and fix some of the predictions/models at the lowest levels of
visual processing - big deal.


>In logic, **you already know how to solve the problem.** In real world
>reasoning, you DO NOT KNOW EXACTLY HOW TO SOLVE THE PROBLEM - you have to
>decide on an approach.

Both are wrong. You know only the methodology to follow, and in "RWR" you
also know the methodology - you may just not realize it in its whole (the
highest, more abstract part of the system may not realize all the details -
it usually doesn't, just as a marshal doesn't know the coordinates of every
single soldier in his army).
Every single action of the - my term - "causality-control units", the
low-level units (from the given perspective), is known to them; it's not
random.

I think I've stressed this in the past: every human action, our whole
external behavior, is eventually reducible to a set of vectors describing
the muscle motions in given space-time coordinates.

The intelligent output is just a sequence of muscle moves, which in
essence are coordinate adjustments.

And every single step in any activity, or in solving a problem, is known
"exactly" in its local scope - move a hand so and so, stretch, grab, turn
around so and so, etc.
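A toy sketch of such a reduction (the representation and all the numbers are hypothetical, just to make the claim concrete): an action as a sequence of (time, effector, dx, dy, dz) adjustments.

```python
# Hypothetical decomposition of "grab the cup" into local coordinate
# adjustments; every step is exact in its own scope, even though the
# high level never represents these numbers explicitly.
grab_cup = [
    (0.0, "shoulder",  0.00, 0.10, 0.00),   # raise the arm
    (0.3, "elbow",     0.15, 0.00, 0.05),   # extend towards the cup
    (0.6, "fingers",  -0.02, 0.00, 0.00),   # close the grip
]

# Net displacement per spatial axis across the whole action:
total = [round(sum(step[i] for step in grab_cup), 3) for i in (2, 3, 4)]
print(total)   # [0.13, 0.1, 0.05]
```

The high-level command is abstract; the low-level units are exact.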

...

*There are always criteria for halting*


>In logic & algorithms there are normally criteria for halting, in RWR you
>can go on forever. [Check out tame vs wicked problem solving, structured vs
>unstructured.]

*Todor:*

This is also not true. In algorithms you can go on forever, too -
"iterative algorithms".
An AGI or SIGI (self-improving general intelligence) system should be an
iterative algorithm per se; it's not "push-the-button-get-the-result"
software, it's supposed to run all the time.
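A minimal example of what I mean (my own illustration): Newton's method is an iterative algorithm that in principle could run forever, yet it halts on an implicit criterion.

```python
def newton_sqrt(x: float, tol: float = 1e-10) -> float:
    """Iteratively refine an estimate of sqrt(x); x must be positive."""
    guess = x
    while True:                        # could iterate forever...
        better = 0.5 * (guess + x / guess)
        if abs(better - guess) < tol:  # ...but an implicit criterion halts it
            return better
        guess = better

print(newton_sqrt(2.0))   # 1.41421356...
```

The loop has no fixed iteration count - the "criterion for halting" is a condition on the state, exactly like the implicit short-term goals discussed below.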

You see many of us talk about "hierarchies"; that's another thing
preventing "halting": if a problem is solved at a low level of abstraction,
it can go higher (more cases, wider scope).
Or vice versa - if the problem is solved in an abstract form (low
resolution, wide scope), then it has to be converted down to the highest
resolution (high resolution, short scope).

An example of this is science. The advance of science turns abstract or
not-precise-enough theories and representations of physical or biological
laws into representations precise enough to allow building atoms and
molecules one by one, such as the DNA-builder they created recently.

At a high level of abstraction it was just an idea that there is something
called "genetic information", stored somewhere in the cell.
Then they found the molecule. Then they found segments of the molecule.
Then they started to understand the implications of different changes in
different segments, etc.

Also, while you "can go on forever", that doesn't normally happen in *one
iteration*.

There are also short-term goals; there are always criteria to stop, even if
they are implicit and most people lack enough self-reflection to
understand it.


I've written about that here (see all the parts):
http://artificial-mind.blogspot.com/2010/01/semantic-analysis-of-sentence.html

Part 1 (also in Bulgarian) (this post): Semantic analysis of a sentence.
Reflections about the meaning of the meaning and Artificial Intelligence.

Part 2 (also in Bulgarian): Causes and reasons for human actions. Searching
for causes. Whether higher or lower levels control. Control units.
Reinforcement learning.

Part 3 (also in Bulgarian): Motivation is dependent on local and specific
stimuli, not general ones. Pleasure and displeasure as goal-state
indicators. Reinforcement learning.

Part 4: Intelligence: search for the biggest cumulative reward for a given
period ahead, based on a given model of the rewards. Reinforcement learning.
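To make the Part 4 formulation concrete - a brute-force sketch (my toy code, not from the article): choose the action sequence with the biggest predicted cumulative reward over a fixed horizon.

```python
from itertools import product

def best_plan(actions, reward_model, horizon):
    """reward_model(step, action) -> predicted reward for that step;
    exhaustively search all action sequences of the given length."""
    return max(product(actions, repeat=horizon),
               key=lambda plan: sum(reward_model(t, a)
                                    for t, a in enumerate(plan)))

# Hypothetical toy model: "b" pays off only at the last step.
def model(t, a):
    if a == "b":
        return 2.0 if t == 2 else 0.0
    return 1.0   # "a" always pays a little

print(best_plan(["a", "b"], model, horizon=3))   # ('a', 'a', 'b')
```

Brute force is of course only feasible for toy horizons; the point is the criterion (maximal cumulative reward over a period ahead), not the search method.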


Regarding art - it's systematic, it's not random wandering, but I
understand why many people believe so.
The reason art impresses people is that they cannot imagine and see how the
piece of art was created incrementally, at the resolution of perception and
causality/control that they believe they would need in order not to be
impressed (my hypothesis, from an old article not yet translated).

For non-artists, non-actors, non-writers, non-photographers, non-comedians,
etc., art might look like "magic": "spirits give you something which comes
from Heaven."

For the ones who do all those activities as authors, and have enough
introspective capability, and also technical talents and capabilities,
art is systematic and predictable - not magic at all. The variables that
define it are beyond average people's comprehension or control, but that's
a problem of their limited cognitive capabilities.

There are places to choose directions and where to start or where to end,
but the choice is never random (and when it is random, it's not
intelligence and it doesn't matter what the choice was); the choices are
all systematic and determined by the other decisions, experience, personal
details and the goals of the piece of art. Random art is meaningless.

...

The ones with obsessive-compulsive disorder, or maniacs, do cycle the same
thing "forever", but even they have some "criteria" for finishing parts of
the behavior; otherwise they would be having endless convulsions.
Notice that I'm talking from an embodied perspective - moving the body
requires coordination, "algorithms" and criteria for success in very tiny
steps.


>Bart Kosko:
>The catch is that we can really only prove tautologies. The great binary
>truths of mathematics are still logically equivalent to the tautology 1 = 1
>or Green is green.

*Todor:*

Yeah, but he doesn't get that it's about the MAPPING between different
levels of generality (the sensory experience "green" is mapped to the word
"green" - pronunciation, sound (an inter-modality mapping); prevalence in a
scene; an attention marker as an attribute of the scene that is pointed at,
emphasized by another person, etc.).

"Green" doesn't make sense in the mathematical world; it's a pointer to the
visual modality. "Truth" in logic is not really meaningful in reality if
not converted - that's true; see my definition of truth and the discussion
about the nonsensical pseudo-logical paradoxes.

Also, all sensory, mathematical or whatever knowledge is in principle
always "available there"; however, it can't be perceived or thought all at
once - there must be a sequence and a focus.
Proofs or whatever are focus on particular aspects, chains of thought,
emphases of a particular POV, etc. The one who proves something initially
doesn't know / is not that certain about the outcome which he proves or
finds evidence (matches) for.

Both regarding sensory experiences and logical expressions (very compressed
and selected sensory experiences), it's about comparison and finding
matches.
In classical logic it's binary matching; in reality it has more degrees of
freedom, more dimensions, more steps, and various resolution, scope,
time-span, ...

Another thing is that "tautologies" are somewhat what the mind does: it
finds or predicts correlations/repetitions between already-experienced and
current or upcoming sensory input;
it also tries to make its own plans/predictions match the real future -
the planned sensory input to match the real one.


*-- Todor "Tosh/Twenkid" Arnaudov*
http://research.twenkid.com
http://artificial-mind.blogspot.com


