Glen-

I think you are (accidentally?) winning me over to the post/trans-humanist fetish. It's your talk of "play," and my realizing how much I *already* play with automatons in the form of (see driving anecdotes) other drivers, roadway systems, (smart or dumb) traffic lights, etc., and bureaucracies. I admit to always being taken in by (modern) science fiction stories with robot/android-human relationships... playing what might amount to a continuous, infinite game of Turing Test with them. It's the same kind of "play" I currently engage in with dogs, cats, horses, watercourses, etc. As a good animist, I can't see how I could reject the opportunity to "Play" with machine intelligences!

When I get a full-body prosthetic to make up for my slowly failing organic musculo-skeletal system, I will probably find great enjoyment in "playing" with it the way I currently "play" with my bicycle and other vehicles, testing (softly these days) their performance envelope and response modes.

Jump cut to Ripley in her Space-Mining-Waldo-Exoskeleton, with or without an Alien opponent.

On 4/13/22 9:36 AM, glen wrote:
But we don't "create the neural structure over and over"; at least, we don't create the *same* neural structure over and over. One way in which big-data-trained self-attending ANN structures now mimic meat intelligence is in that very intense training period. Development (from zygote to (dysfunctional) adult) is the training. Adulting is the testing/execution. But these transformer-based mechanisms don't seem, in my ignorance, to be as flexible as those grown in meat. Do we have self-attending machines that can change what parts of self they're attending? Change from soft to hard? Allow for self-attending the part that's self-attending (and up and around in a loopy way)? To what extent can we make them modal, swapping from learning mode to perform mode? As SteveS points out, can machine intelligence "play" or "practice" in the sense normal animals like us do? Are our modes even modes? Or is all performance a type of play? To what extent can we make them "social", collecting/integrating multiple transformer-based ANNs so as to form a materially open problem-solving collective?

Anyway, it seems to me the neural structure is *not* an encoding of a means to do things. It's a *complement* to the state(s) of the world in which the neural structure grew. Co-evolutionary processes seem different from encoding. Adversaries don't encode models of their opponents so much as they mold their selves to smear into, fit with, innervate, anastomose [⛧], their adversaries. This is what makes 2 party games similar to team games and distinguishes "play" (infinite or meta-games) from "gaming" (finite, or well-bounded payoff games).

Again, I'm not suggesting machine intelligence can't do any of this; or even that they aren't doing it to some small extent now. I'm only suggesting they'll have to do *more* of it in order to be as capable as meat intelligence.

[⛧] I like "anastomotic" for adversarial systems as opposed to "innervated" for co-evolution because anastomotic tissue seems (to me) to result from a kind of high pressure, biomechanical stress. Perhaps an analogy of soft martial arts styles to innervate and hard styles to anastomose?

On 4/12/22 20:43, Marcus Daniels wrote:
Today, humans go to some lengths to record history, to preserve companies and their assets. But for some reason preserving the means to do things -- the essence of a mind -- has a different status. Why not seek to inherit minds too? Sure, I can see that the same knowledge base can be represented in different ways. But studying those neural representations could also be informative. What if neural structures have similar topological properties given some curriculum? What a waste to create that neural structure over and over...

-----Original Message-----
From: Friam <[email protected]> On Behalf Of Steve Smith
Sent: Tuesday, April 12, 2022 7:22 PM
To: [email protected]
Subject: Re: [FRIAM] Selective cultural processes generate adaptive heuristics


On 4/12/22 5:53 PM, Marcus Daniels wrote:
I am not saying such a system would not need to be predatory or parasitic, just that it can be arranged to preserve the contents of a library.

And I can't help knee-jerking that when a cell attempts to live forever (and/or replicate itself perfectly) it becomes a tumour in the organ(ism) that gave rise to it, and even metastasizes, spreading its hubris to other organs/systems.

Somehow, I think the inter-planetary post-human singularians are more like metastatic cells than "the future of humanity". Maybe that is NOT a dead-end, but my mortality-chauvinistic "self" rebels. Maybe if I live long enough I'll come around... or maybe there will be a CAS-mediated edit to fix that pessimism in me.


On Apr 12, 2022, at 4:29 PM, glen <[email protected]> wrote:

Dude. Every time I think we could stop, you say something I object to. >8^D You're doing it on purpose. I'm sure of it ... like pulling the wings off flies and cackling like a madman.

No, the maintenance protocol must be *part of* the meat-like intelligence. That's why I mention things like suicide or starving yourself because your wife stops feeding you. To me, a forever-autopoietic system seems like a perpetual motion machine ... there's something being taken for granted by the conception ... some unlimited free energy or somesuch.



.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
