Oh foo. If you stopped engaging me in conversation, I could get some real
work done that I need to do. However, lacking in willpower, I respond:

On Fri, Feb 22, 2019 at 1:18 AM Rob Freeman <[email protected]>
wrote:

>
>
> So this is just a property of sets.
>

This is a property of infinite sets.  Finite sets don't have such
problems.  Much or most of math is about dealing with the infinite.  Examples:

* You cannot count to infinity. But you can just say "the set of all
natural numbers" and claim it exists (as an axiom).

* Every real number has an infinite number of digits. You cannot write them
down, but you can give some of them a name -- "pi", "sqrt 2" -- so that
others can know what you are talking about.

* The complex exponential function exp(z) is "entire" on the complex plane
z: it has no poles ... except at infinity, where it has an "essential
singularity": it's totally tangled up there, in such a way that you cannot
compactify or close or complete. The value of exp(z) as z \to \infty
depends on the direction you go in.

* Limits. Function spaces are tame when they have limits e.g. Banach
spaces.  The tame ones are work-horses for practical applications. The
whack ones are weird, and are objects of current study.

* Complicated examples, e.g. the Hauptvermutung, about triangulation as an
approximation.

All I'm saying is that similar tensions about completeness/incompleteness
when something goes to infinity arise in logic as well. One simple
example, maybe:

* A normal, finite state machine, as commonly understood, works on finite
sets.  However, there is a way to define them so that they also work on
infinite, smooth spaces: Euclidean spaces R^n, probability spaces
(simplexes), and "homogeneous spaces" (spheres, quotients of continuous
groups, etc.)  These have the name of "geometric finite state machines".
When the homogeneous space is U(n), then it's called a "quantum finite
automaton" (as in "quantum computing").

* These "geometric finite automata" (GFA) are a lot like.. the ordinary
ones, but they have subtle differences, involving the languages they can
recognize...

* Turing machines are a kind of "extension" of finite state machines. I
have never seen any exposition showing a formulation of Turing machines
acting on homogeneous spaces. I assume that such expositions exist.
Studying them should be interesting. Based on results on GFA, I expect that
the languages they recognize will be different. I expect that the
difference between "recursively enumerable" and "recursive" will be
different, not like in ordinary Turing machines. Decidability will be
different. I expect that Turing machines acting on homogeneous spaces
might be kind-of oracular in various ways, viz. they might be oracles for
ordinary Turing machines. I dunno.

* So, insofar as Goedel's various theorems, and the other
completeness/incompleteness theorems in logic, are mapped to recognizable or
recursive languages, I expect that they would not apply, or that they
would be altered, when one instead considers geometric Turing machines. In
particular, the incompleteness theorem might not hold, since, if I
understand correctly, it depends on a recursive enumeration.
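To make the "quantum finite automaton" bullet a bit more concrete, here is a minimal toy sketch. It is not any standard library's API -- all the names are mine -- and it uses real rotations of the unit circle (unitaries on R^2) rather than the full complex case, which is enough to show the "state moves on a smooth homogeneous space, acceptance is a squared projection" idea:

```python
import math

# Toy "geometric finite automaton": the state is a point on the unit
# circle, each input symbol applies a rotation (a unitary on R^2), and
# the acceptance probability is the squared projection onto the
# "accept" axis.  A hand-rolled sketch of a measure-once quantum
# finite automaton; names and angles are invented for illustration.

def step(state, theta):
    """Apply a rotation by angle theta to the 2-vector state."""
    x, y = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

def accept_prob(word, theta=math.pi / 8):
    """Run the automaton: every symbol of `word` rotates the state."""
    state = (1.0, 0.0)          # start state, "pointing at accept"
    for _ in word:
        state = step(state, theta)
    return state[0] ** 2        # probability of measuring "accept"

# Eight symbols rotate by pi total, landing on the negated start
# vector, so an 8-symbol word is accepted with probability 1; a
# 4-symbol word lands perpendicular and is accepted with probability 0.
```

The point of the toy: acceptance is a continuous function of the word length, not a yes/no membership test, which is exactly the kind of subtle difference in "the languages they can recognize" mentioned above.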

The moral of the story is that "weird shit happens" when you go to
infinity, everything is tangled there, it's really cool, and it's a mistake
to think that what we currently understand is complete. It's not.

(And of course, since we live in a quantum world, i.e. in the homogeneous
space that is called "complex projective space", the atoms and photons and
etc. are naturally interacting in this ... geometric way. They're not
miniature Turing machines or miniature finite automata -- although they
might be, probably are, miniature geometric finite automata or miniature
geometric Turing machines, where the geometry is that of complex projective
space... )  (I kind of doubt this has anything to do with intelligence and
thinking, but I could be wrong. Anyway, I think atoms evade Goedel
incompleteness in the above-described hand-waving exercise.)



> That they are able to say more than one thing. The distributional analysis
> of language structure also depends on the properties of sets.
>

Language is more-or-less finite. I can kind-of see ways where infinity
sneaks in, but... I've already left the bounds of what's concrete and
provable. Questions about decidability and completeness for natural human
language (or for intelligence in general) are a red herring, I think.

If forced to make my claim provable, I would point out that neural nets,
probability, etc. all work with real numbers that have an infinite number
of decimal places. The space they work in is a homogeneous space called
"the simplex" (i.e. sum_n p_n = 1 --- probabilities sum to one), and thus
the corresponding geometric automata and stack machines and Turing machines
are already evading Goedel's theorem.  I don't even have to invoke "quantum"
or "gravity" for this argument; simply working with real numbers already
alters conclusions and invalidates any theorems that depend on discrete,
finite sets.
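The simplex picture can be sketched the same way: the machine's "state" is a probability distribution, and each input symbol applies a stochastic matrix, so the state never leaves the simplex. Again a toy under invented assumptions -- the matrices below mean nothing in particular:

```python
# Toy automaton whose state lives on the probability simplex
# (p_0 + p_1 = 1): each symbol applies a row-stochastic matrix, which
# keeps the state on the simplex.  Illustrative only; the transition
# matrices are made up for this sketch.

def apply_stochastic(p, M):
    """Multiply row-distribution p by stochastic matrix M."""
    n = len(p)
    return [sum(p[i] * M[i][j] for i in range(n)) for j in range(n)]

# Symbol 'a' nudges probability mass toward state 1; 'b' nudges it back.
TRANS = {
    'a': [[0.5, 0.5],
          [0.0, 1.0]],
    'b': [[1.0, 0.0],
          [0.5, 0.5]],
}

def run(word, p=(1.0, 0.0)):
    """Start at the vertex (1, 0) and apply one matrix per symbol."""
    p = list(p)
    for sym in word:
        p = apply_stochastic(p, TRANS[sym])
    return p

# After any word, the entries of the state still sum to one: the
# dynamics are confined to the simplex, not to a finite set of states.
```

The contrast with an ordinary finite automaton is that the reachable "states" form a continuum inside the simplex, so arguments that enumerate a finite (or even countable) state set don't go through unchanged.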

>
>
>
> In the concrete. I'm still not clear what your jigsaw pieces will look
> like.
>

I pointed at three or four PDFs -- the Sleator-Temperley paper, the Coecke
paper, and my own sheaves paper, which draws one explicitly on page 5 or
10.  There are more out there; this is not something I invented. It's
already "well known", just not "popularly known".


>
> Ben agreed with my "network of observed language sequences", with context
> links which "form little diamonds in the network". But what I'm reading
> sounds like you are enhancing the nodes and the links of that network far
> beyond observed words.
>

From the network of "all possible grammatically correct sentences" one can
extract the "syntactic rules of grammar", as the local structure of that
network.  But one can also extract synonymous words/phrases ("X borders on
Y" and "Y is next to X" -- these are sections in the sheaf), and one can
disambiguate word senses (e.g. the "eat pizza with" examples -- these fail
to disambiguate precisely because you failed to look at the whole language
network, at what else you can do with Bob, or with a fork: you can go
fly-fishing with Bob, but never with a fork; you can stab a piece of meat
with a fork, but never with Bob.)  All this stuff is there in the language
network, and can be extracted -- it does not require common-sense,
real-world, visual, I-walked-into-a-lamppost experience.  People have done
this stuff with language in various toy problems. Mostly I'm proposing to
assemble it into a complete, working system.
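The disambiguation argument can be caricatured in a few lines: look across the whole (tiny, invented) corpus at what else each word does in "with X" contexts. The corpus and the crude verb extraction are fabricated purely for illustration:

```python
# Toy version of the "look at the whole language network" argument:
# collect which verbs a word co-occurs with in "with <word>" contexts.
# "fork" shows up in instrument contexts (eat, stab); "Bob" shows up
# in companion contexts (eat, go).  Corpus invented for this sketch.

CORPUS = [
    "you eat pizza with a fork",
    "you stab the meat with a fork",
    "you eat pizza with Bob",
    "you go fly-fishing with Bob",
]

def with_contexts(word):
    """Return the verbs of sentences containing 'with ... <word>'."""
    verbs = set()
    for sent in CORPUS:
        toks = sent.split()
        if "with" in toks and word in toks[toks.index("with"):]:
            verbs.add(toks[1])   # crude: the verb is token 2 here
    return verbs

# with_contexts("fork") and with_contexts("Bob") give different verb
# sets, which is the distributional signal that separates the
# instrument reading from the companion reading.
```

A real system would use parsed dependencies and counts rather than exact-match toy sentences, but the shape of the evidence is the same.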

>
> For instance you say, "If you don't know what the pieces are, but are
> setting out to discover them by looking for statistical regularities in
> language"?
>
> Why would you need to do that? Why would you want to do that?! Given what
> I've been saying above about multiple interpretation of sets, why would you
> think that is possible? Especially if the "statistical regularities" might
> resolve in ways which contradict each other, by the above? Why not just use
> observed words, linked in the sequences they are observed to occur, and
> find the regularities you need, on demand?
>

What's the diff?  Yes, I'm using the "observed words", just like everyone
else. And doing something with them, just like everyone else.

--linas


>
> -Rob
>


-- 
cassette tapes - analog TV - film cameras - you

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-Mbdf458272e27f56c9abfecc9
Delivery options: https://agi.topicbox.com/groups/agi/subscription
