[FRIAM] Song stuck in head...

2022-09-16 Thread Gillian Densmore
Been watching a bit of the TV show. The first episode ("pilot") has part of
David Bowie's "Fame". Oh no! you'll say. And is that stuck in your head?
Well, sort of.
I feel like I've heard the opening guitar-and-drum bit that loops through
it somewhere else. A TV show or another song.
Any guesses where that might be from?
And the bigger question: what causes that thing where you hear part of a
song and then go, "Didn't I hear something like that someplace else?" Even
when you turn out to be partially, or entirely, wrong?
Is that an aspect of the Mandela effect? I.e., how a lot of people think
Sinbad was in the genie movie Kazaam, only to find out that's not quite
right (Kazaam actually starred Shaquille O'Neal). Sinbad did sport
drop-crotch pants for a bit, but alas, he wasn't in a genie movie.
And has anyone figured out what causes that on such a mass scale? Sure,
there's *some* evidence of some sort of quantum phenomenon, but as far as
I know it's just a SWAG: a Scientific Wild Guess.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] truth-preserving math

2022-09-16 Thread Jon Zingale
Thank you for this article. Subject matter related to quantum fields,
the Schwinger effect, and the Casimir effect is very much where my mind
has been lately. I continue an attempt to reconcile recent thoughts on
the different candidate GUTs, the resolution of infinite contributions
of virtual-particle energies, analytic continuation, and the Riemann
zeta function with some ephemeral thoughts I have on continuum computation,
derivatives of Turing machines, Gödel, and possible thermodynamic
limitations of infinitesimal bits. It would be really great if these ideas
would settle into something tangible. I would like to contribute something
to that discussion.
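
For concreteness, here is the textbook version of the zeta-function move
(a standard sketch only; the 1-D massless scalar field is a toy model
chosen for simplicity, not anything specific to Jon's program):

    % Zero-point energy of a massless scalar field confined to a 1-D
    % interval of length d: a divergent sum over mode numbers n.
    E(d) = \frac{\pi \hbar c}{2 d} \sum_{n=1}^{\infty} n
    % \sum_{n \ge 1} n^{-s} = \zeta(s) converges only for Re(s) > 1, but
    % its analytic continuation gives \zeta(-1) = -1/12, trading the
    % infinite mode sum for a finite, attractive Casimir energy:
    E(d) \to \frac{\pi \hbar c}{2 d}\,\zeta(-1) = -\frac{\pi \hbar c}{24 d}

The same move with \zeta(-3) = 1/120 recovers the familiar parallel-plate
result in three dimensions.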


Re: [FRIAM] Wolpert - discussion thread placeholder

2022-09-16 Thread Marcus Daniels
Given how normal extreme inequality is, the they/us distinction is probably
already happening.  Technology could accelerate it, though.  Some people will
have direct and indirect cognitive assists, some will have designer babies and 
some won’t, etc.  Over a few generations we might not really recognize one 
another.  Whether that is utopian or dystopian or neither is subjective.


Re: [FRIAM] Wolpert - discussion thread placeholder

2022-09-16 Thread Steve Smith

Responding first to Marcus's point:

   "I think there will be a transition toward a more advanced form of
   life, but I don’t think there will be a clear connection between how
   they think and how humans think.  Human culture won’t be important
   to how they scale, but may be relevant to a bootstrap."

I believe we are "in transition" toward a more advanced form of life, 
though it is hard to demarcate any particular beginning of that 
transition.  The post/trans-humanists among us often seem to have a 
utopian/dystopian urge about all this that I am resistant to. Kotler's
works (Abundance, The Rise of Superman, Tomorrowland, The Art of
Impossible, etc.) are representative of this genre, but since I know him
also to be
a grounded, thoughtful, compassionate person, I try hard to listen 
between the lines of what normally reads to me as egoist utopian 
fantasy.   His works are always well researched and he's fairly good at 
being clear what is speculation and what is fact in his 
writing/reporting, even though his bias is still a very techno-utopian 
optimism.


I really liked Spike Jonze's movie "Her" as a compassionate-utopian
story of a fairly abrupt AI transition/emergence ...  a fantasy by any 
measure of course, but an interesting twist on compassionate abandonment 
by our "children".


With Glen's re-statements, I found specifically the following:

Simulation in place of Symbols - I don't know all that Marcus intended
or Glen imputes with this, but I think it might be very important in some
fundamental way.  I wonder at the possibility that this fits into Glen's 
stuck-bit about "episodic" vs "diachronic" identity (and experience?) modes.


I haven't been able to parse the following very completely, and I look
forward to more discussion:


   - percolation from concrete, participative, perceptual intuition and
   imagination (or perhaps the inverse, a wandering from
   abstract/formal *toward* embodiment as we see with the rise of GANs,
   zero-shot, and online learning AI)

and in fact, all of these as well... good stuff.


   - a more heterarchical, high-dimensional, or high-order
   understanding of "fitness costs" - fitness of fitnesses
   - holes or dense regions in a taxonomy of SAMs - including my
   favorite: cross-species mind-reading
   - game-theoretic (infinite and meta-gaming) logics of cognition
   (including simulation of simulation and fitness of fitnesses)

I introduced "deictec error" because I think it is maybe core to *my* 
struggles with this whole topic, so I'm glad Glen referenced it, and 
also look forward to possibly more discussion of that in regard to the rest.


- Steve



[FRIAM] truth-preserving math

2022-09-16 Thread glen∉ℂ

70-year-old quantum prediction comes true, as something is created from nothing
https://bigthink.com/starts-with-a-bang/something-from-nothing/

It seems like this is another example where the arrogance of the abstraction 
reigns. Because the math relating holes and electrons is the same (?) as that 
relating electrons and positrons, does it mean studying one gives us insight 
into the other? Does the metaphysics really translate?
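
For a rough sense of the math that's being translated (a standard
back-of-the-envelope sketch of Schwinger's result; the effective-parameter
gloss at the end is my reading of the analogy, not the article's wording):

    % Critical (Schwinger) field above which the QED vacuum sparks into
    % electron-positron pairs:
    E_c = \frac{m_e^2 c^3}{e \hbar} \approx 1.3 \times 10^{18}\ \mathrm{V/m}
    % Below E_c the pair-production rate is non-perturbatively suppressed:
    \Gamma \propto E^2 \exp(-\pi E_c / E)
    % In a solid, m_e and c give way to an effective mass and a Fermi
    % velocity (roughly c/300 in graphene), collapsing E_c down to a
    % lab-accessible scale.

Whether identical formulas mean the hole really *is* a positron in any
metaphysical sense is, of course, the question above.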

Arrogant or not, it's super effing cool.



Re: [FRIAM] Wolpert - discussion thread placeholder

2022-09-16 Thread glen∉ℂ

I do see us trying to identify the distinguishing markers of ... "cognition we can't 
imagine". That's fantastic. I'll try to collate some of them going backwards from 
Marcus':

- novelty - dissimilarity from "cognition as we know it"
- graded separation from human culture/sociality
- simulation in place of symbols (I failed to come up with a better phrase)
- accelerated look-ahead
- percolation from concrete, participative, perceptual intuition and 
imagination (or perhaps the inverse, a wandering from abstract/formal *toward* 
embodiment as we see with the rise of GANs, zero-shot, and online learning AI)
- a more heterarchical, high-dimensional, or high-order understanding of "fitness 
costs" - fitness of fitnesses
- holes or dense regions in a taxonomy of SAMs - including my favorite: 
cross-species mind-reading
- game-theoretic (infinite and meta-gaming) logics of cognition (including 
simulation of simulation and fitness of fitnesses)

It seems like all these are attempts to at least circumscribe what we can know about what 
we can imagine. And if so, it's like a convex hull beyond which is what we can't imagine. 
I wanted to place "deictic error" in there. But it seems to apply to several of 
the other categories. In particular, part of Dave and SteveS' irritation with the 
arrogance of abstraction is that symbols only ever *hook* to their groundings. Logics 
over those symbols may or may not preserve the grounding. Like the rather obvious idiocy 
of classical logic suggesting that anything can be concluded from inconsistent premises. 
When/if an entity can fully replace all shunted/truncated symbols with (perhaps 
participatory) simulations, it might reach the tight coupling with the simulated 
(possible) worlds in the same way Dave implies we couple tightly (concretely) with our 
(actual) world.
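
For concreteness, the classical derivation at issue (ex falso quodlibet),
in standard natural deduction for arbitrary propositions P and Q:

    1. P              (premise)
    2. \neg P         (premise)
    3. P \lor Q       (\lor-introduction from 1; Q is arbitrary)
    4. Q              (disjunctive syllogism from 2 and 3)

Nothing about Q's grounding ever enters the derivation, which is the sense
in which the logic preserves truth-assignments but not the *hook* to the
world.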


On 9/15/22 21:16, Marcus Daniels wrote:

I think there will be a transition toward a more advanced form of life, but I 
don’t think there will be a clear connection between how they think and how 
humans think.  Human culture won’t be important to how they scale, but may be 
relevant to a bootstrap.  I would be surprised if compression, deconstruction, 
and reductionism went unused by this species.  I would be surprised if such a 
species would struggle with quantification.   I would also be surprised if they 
did not use simulation in place of symbols.   I think they will have dreams of 
entire human lives, of the rise and fall of nations, and regard our aspirations 
like I regard my dog dreaming of her encounters at the park.


On Sep 15, 2022, at 4:11 PM, Prof David West  wrote:


Just to be clear, I have zero antipathy towards Wolpert or his efforts at steelmanning. I think 
Wolpert does an excellent job of phrasing as questions what I perceive "Scientists" and 
"Computationalists" to merely assert as Truth. I have long tilted at that particular 
windmill and I applaud Wolpert, and glen for bringing him to our attention, for exposing the 
assertions such that counterarguments might be made.

And when it comes to "computationalism" and AI: I know it is not the 1970s and things
have "advanced" significantly. And although I do not comprehend the details as well as
most of you, I do understand sufficiently, I believe, to advance the claim that they are suffering
from the exact same blind spot (with variable details) as Simon and Newell, et al., who championed
GOFAI. Plus, you all have heard of Simon and Newell, but most of you are unfamiliar with McGilchrist
and similar contemporary critics.

My antipathy toward "Scientists" and "Computationalists" arises from what I 
perceive as an absolute refusal to credit any science, math, or ways/means of acquiring/expressing 
knowledge and understanding other than theirs. Dismissing neolithic and pre-modern science is one 
example. Failing to acknowledge the intelligence (and probably SAM) of other species—especially 
octopi—simply because they do not build atomic bombs or computers, is another.

A really good book that would inform a discussion of Wolpert's questions, #4 in
particular, is /Other Minds: The Octopus, the Sea, and the Deep Origins of
Consciousness/ by Peter Godfrey-Smith.  A blurb follows.

/Although mammals and birds are widely regarded as the smartest creatures on 
earth, it has lately become clear that a very distant branch of the tree of 
life has also sprouted higher intelligence: the cephalopods, consisting of the 
squid, the cuttlefish, and above all the octopus. In captivity, octopuses have 
been known to identify individual human keepers, raid neighboring tanks for 
food, turn off light bulbs by spouting jets of water, plug drains, and make 
daring escapes. How is it that a creature with such gifts evolved through an 
evolutionary lineage so radically distant from our own? What does it mean that 
evolution built minds not once but at least twice? The octopus is the closest 
we will come to meeting an intelligent alien. What can we learn from the
encounter?/