glen∉ℂ wrote:
Great question.
I also appreciate the specificity of the question, despite wanting to
tease it apart into 3 parts: A) convincing evidence; B) superior
intelligence; C) cultural inheritance.
I agree with Dave's pushback against treating "finite sequences from a
finite alphabet" as central to our SAM. *If* Wolpert is actually relying
on it as crucially as he seems to be, then the "grow vs. specify"
accusation isn't a strawman.
Static (specification) vs dynamic (growth) is an important and, I think,
fundamental distinction. A genome *is* a finite specification, while the
embryology of its earliest expressive development and the "cultural
embedding" it continues to form within are not precisely finite (maybe
finite-huge in scale but not finite in pre-stateability?).
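As a toy analogy in Python (mine, and very loose), an L-system makes the
same point: the rewrite rules are a finite static specification, but the
"developmental" expression grows without bound.

# Loose toy analogy (mine, not Wolpert's): a finite rewrite spec
# whose expression grows without bound.
RULES = {"A": "AB", "B": "A"}   # the entire static specification

def grow(axiom, steps):
    """Dynamically expand the axiom by applying RULES `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Two rules and one symbol of "genome"; the expressed string keeps growing:
# grow("A", 0..4) -> A, AB, ABA, ABAAB, ABAABABA
print(grow("A", 4))

Of course development isn't string rewriting; the point is only that
"finite specification" and "finite expression" come apart.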
But the question Wolpert wants to ask remains; and your concise
phrasing nails it. If there is an "effective computing" artifact that
demonstrates maximal intelligence with minimal cultural grounding,
what is it? One valid answer is that there is no such thing.
I do think the question is of the same kind as "what is art" and "what
is pornography", where the answer "I know it when I see it" isn't fully
responsive but is possibly as good as it gets?
All forms of "intelligence" are not abstract; they are
embedded-embodied-concrete, tightly grounded to context. (Where I'm
probably relying on my definition of "concrete" more than Dave's.)
In pursuit of an abstract definition of B) above, it is tempting to
gesture toward "fitness for survival", but with a *larger* sense of
"self" and a long-now sense of "time". Ice-nine, cancer, and grey goo
have high fitness by some measure, but in each case most would be loath
to call them "intelligent". An expansive fitness with an arbitrarily
broad sense of "what means self" might be the most abstract way of
thinking of "superior intelligence"?
But I think that answer, however valid, is unsound. There are ways of
behaving that *translate* across contexts. The berserker physicists
who take that to the extreme notwithstanding, anyone who travels
experiences this. As Wolpert explicitly mentions, perhaps the "level"
at which this occurs is our bodies? As long as the society I visit on
Alpha Centauri was built by hominid-similars, I think some set of my
behaviors will translate, however small that set.
I think you are arguing for the definition of "self" in this case to be
confined to the contents of our skin-bag (torus, really), and maybe on a
good day some of the cells recently shed from its surface or expelled
from one end of its digestive canal or the other?
But maybe there's a lower level, perhaps capturing less concrete
detail than a homo-built society, of water- and carbon-based life? I.e.,
any society built by water- and carbon-based life will allow some
translation of behaviors to our society?
It is familiar to define it as carbon-based life, but that seems like a
coincidence of history and awareness (if perchance there are
non-carbon-based life-forms we are unaware of within our light-cone)?
I don't grok Dave's antipathy, though. It seems to me like Wolpert is
*asking* these questions and challenging our berserker Scientismists
and Mathematicians in the very same spirit as Dave does. Wolpert
wouldn't write (and distribute) papers like this if he *weren't* a bit
skeptical of the universality of our SAM.
Speaking for my inner DaveW, I think *my* antipathy is not really to
Wolpert's specific questions/formulation, but to the *larger* expanse
of Wolperts-at-large whose biases are (naturally) ethnocentric, or more
accurately human-chauvinistic and contemporary-Western-civilization
centric? I am more acutely antipathic in this regard *because* I often
*am one*... there is no more zealous anti-smoker than a former smoker,
especially one who perchance sneaks a guilty fag in private now and
then?
On 9/14/22 22:29, Marcus Daniels wrote:
What would be convincing evidence of a superior intelligence
independent of cultural inheritance?
On Sep 14, 2022, at 7:34 PM, Steve Smith <[email protected]> wrote:
On 9/14/22 7:31 PM, Marcus Daniels wrote:
ML gets better every day because it learns more like a newborn
child than a university student. This isn't 1970s AI anymore.
It all seems like a strawman argument, whether you know it or not.
And as I have referenced watching a puppy and a kitten grow together
from 3 and 4 months respectively, I believe that, broadly,
contemporary ML is learning like they are. Current fetishes for NLP
to drive NLG and visual art miss a *lot* that animals (even ones
domesticated by us for millennia) do so well as they express what
their genes and gestation already prepare them for.
I'd claim the puppy already knows a modest vocabulary of human
utterances/gestures, though I think human language is very tonal to
animals, to the point that maybe I could say "YES" in the same tone I
say "NO" and vice versa, and the tone, not the phoneme, would
dominate.
The kitten is (as I feel all cats are) almost entirely uninterested
in our *intentional* communications and *much more* aware of the
implications of our *actions* than of our words. The puppy does seem
to have a much stronger sense of anticipating our interests and
seeking our approval. The cat is more interested in her own interests
and treats us as facilitators of, or constraints on, obtaining those.
Paw prints of either species qualify as "art" in our house anytime
they get involved in a painting project or the setting of plaster,
cement, or clay. Our appreciation of same reflects *our* training
more than *theirs*.
-----Original Message-----
From: Friam <[email protected]> On Behalf Of Prof David West
Sent: Wednesday, September 14, 2022 5:54 PM
To: [email protected]
Subject: Re: [FRIAM] Wolpert - discussion thread placeholder
Regarding Wolpert's first four questions:
In my opinion, all four reflect a kind of arrogance that I have
accused Scientists and Mathematicians of many times in the past—an
attitude that modern formal and abstract math and science are a
kind of ultimate achievement of our species. Any and all other
forms/means of understanding are discounted or denied. This is
analogous to the arrogance of Simon and Newell (mentioned
previously) that a machine that thought like a university professor
was necessarily intelligent.
Ignored in the AI instance is the learning ability of a newborn
child. Ignored in the case of SAM is the very real Science and
Mathematics exhibited by our species beginning in the Neolithic:
metallurgy, agriculture, animal husbandry, pottery, weaving,
cooking, food preservation, etc.
Levi-Strauss writes extensively of two different kinds of science:
concrete and abstract; the former grounded in perception and
imagination, the latter divorced from same. The object of all
science is connections and explanations, based on experimentation
and empirical evidence, but "concrete science" relies far more
heavily on sensible intuition than on formal "proof."
SAM, for Wolpert, seems to be restricted to that which came into
being in the past few hundred years. This fetish makes questions
like "Why do we have that cognitive ability despite its fitness
costs?" somewhat nonsensical. What fitness costs? Mutually assured
destruction with nuclear weapons? Certainly there were no
evolutionary fitness costs; and, in fact, those cognitive abilities
were essential, the prime mover of our species out of the
Neolithic.
A more reasonable question is what caused a small subset of our
species to 'go berserk' and take a subset of the SAM that had served
our species so well for so long to such abstract extremes? An answer
might be found, and is argued, in Ian McGilchrist's works on
recent "left-brained" dominance. [Left-brain is such a limited
shorthand for what McGilchrist argues in some 700 pages of prose
that I am trepidatious using it lest it evoke the wrong-headed
popularization of the notion.]
If we ignore the aberrant contemporary SAM and ask whether we can
find evidence that other species, e.g., cephalopods and cetaceans,
have an equivalent to the concrete SAM that was widespread among our
own species as far back as the Neolithic, the answer is yes. Tool
making, modification of environment, herding, even
quasi-domestication of other species can be found.
The cognitive abilities of dolphins and octopi (et al.) are well
documented and include language, reasoning, knowledge of spatial
relationships, planning, and even, when given LSD (famously the
research by John Lilly with dolphins, and more recently with
octopi), altered states. There is little or no reason not to
assume them to be SAM-sufficient for their environments and needs,
just as humans were prior to, roughly, the Renaissance.
to be continued ...
davew
On Mon, Sep 12, 2022, at 6:29 AM, glen∉ℂ wrote:
My question of how well we can describe graph-based ... what? ...
"statements"? "theorems"? Whatever. It's treated fairly well in List's
paper:
Levels of Description and Levels of Reality: A General Framework by
Christian List http://philsci-archive.pitt.edu/21103/
in section "6.3 Indexical versus non-indexical and first-personal
versus third-personal descriptions". We tend to think of the 3rd
person graph of possible worlds/states as if it's more universal
... a
complete representation of the world. But there's something captured
by the index/control-pointer *walking* some graph, with or without a
scoping on how many hops away the index/subjective-locus can "see".
I liken this to Dave's (and Frank's to some extent) consistent
insistence that one's inner life is a valid thing in the world, Dave
w.r.t. psychedelics and meditation and Frank's defense of things like
psychodynamics. Wolpert seems to be suggesting a "deserialization" of
the graph when he focuses on "finite sequences of elements from a
finite set of symbols". I.e. walking the graph with the index at a
given node. With the 3rd person ... whole graph of graphs, the
serialization of that bushy thing can only produce an infinitely long
sequence of elements from a (perhaps) infinite set. Is the bushiness
*dense* (greater than countable, as Wolpert asks)? Or sparse?
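Here's a toy sketch of the contrast in Python (the graph, node names,
and 1-hop horizon are my inventions, purely illustrative):

from collections import deque

# 3rd-person description: the complete graph of states, all at once.
GRAPH = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "a"],   # a loop back to "a"
    "d": [],
}

def visible_from(graph, index, hops):
    """1st-person description: the subgraph within `hops` of the index."""
    seen = {index}
    frontier = deque([(index, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    # Keep only nodes and edges the scoped walker can "see".
    return {n: [m for m in graph[n] if m in seen] for n in graph if n in seen}

# The walker at "a" with a 1-hop horizon sees only a scoped fragment:
print(visible_from(GRAPH, "a", 1))
# {'a': ['b', 'c'], 'b': [], 'c': ['a']}

The whole GRAPH dict is the 3rd-person description; what the indexed
walker can act on is only ever the scoped fragment.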
I'm sure I'm not wording all this well. But that's why I'm glad y'all
are participating, to help clarify these things.
On 9/12/22 06:13, glen∉ℂ wrote:
While math can represent circular definitions (what Robert Rosen
complained about), there are deep problems in the foundations of
math ... things like the iterative conception of sets ... that
are attempts to do what Wolpert asks for in the later questions.
And it's unclear to me that commutative categories reduce to
"finite sequences of elements from a finite set", prolly 'cause
I'm just ignorant. But diagrammatic loops in graphs don't look to
me like finite sequences.
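FWIW, here's a Python toy of the distinction I'm groping at (the graph
and names are mine): a loop has a perfectly finite *specification*, but
unrolling it into a linear sequence of visited symbols never terminates.

from itertools import islice

LOOP = {"x": "y", "y": "x"}   # finite spec: two symbols, one cycle

def unroll(graph, start):
    """Generate the unbounded sequence of symbols a walker visits."""
    node = start
    while True:
        yield node
        node = graph[node]

# The spec is two entries; the walk is infinite, we can only sample it:
print(list(islice(unroll(LOOP, "x"), 8)))
# ['x', 'y', 'x', 'y', 'x', 'y', 'x', 'y']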
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/