On Wednesday, October 30, 2019 at 3:44:23 PM UTC-5, Alan Grayson wrote:
>
> On Wednesday, October 30, 2019 at 4:30:19 AM UTC-6, Philip Thrift wrote:
>>
>> On Wednesday, October 30, 2019 at 5:01:14 AM UTC-5, Alan Grayson wrote:
>>>
>>> https://en.wikipedia.org/wiki/Susan_Schneider
>>>
>> It's a common AI view that the program (programming) of consciousness 
>> (like what's running in your brain right now) is substrate independent.
>>
>> @philipthrift
>>
> What is "substrate independent"? AG 
>



As defined by Mad Max Tegmark. (I have the opposite view.)

https://www.edge.org/response-detail/27126

Substrate-Independence

What do waves, computations and conscious experiences have in common, that 
provides crucial clues about the future of intelligence? They all share an 
intriguing ability to take on a life of their own that’s rather independent 
of their physical substrate. 

*Waves* have properties such as speed, wavelength and frequency, and we 
physicists can study the equations they obey without even needing to know 
what substance they are waves in. When you hear something, you're detecting 
sound waves caused by molecules bouncing around in the mixture of gases we 
call air, and we can calculate all sorts of interesting things about these 
waves—how their intensity fades as the square of the distance, how they 
bend when they pass through open doors, how they reflect off of walls and 
cause echoes, etc.—without knowing what air is made of.
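One of those calculable facts, the inverse-square falloff of intensity, can be sketched in a few lines of Python (an illustrative check, not part of the essay; the free-field point-source formula I = P / (4πr²) is assumed):

```python
import math

def intensity(power_watts, r_meters):
    """Free-field intensity of a point source: I = P / (4 * pi * r^2)."""
    return power_watts / (4 * math.pi * r_meters ** 2)

# Doubling the distance quarters the intensity -- and nothing about
# what the gas is made of enters the calculation.
ratio = intensity(1.0, 2.0) / intensity(1.0, 1.0)
print(ratio)   # 0.25
```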


We can ignore all details about oxygen, nitrogen, carbon dioxide, etc., 
because the only property of the wave's substrate that matters and enters 
into the famous wave equation is a single number that we can measure: the 
wave speed, which in this case is about 340 meters per second. Indeed, this 
wave equation that MIT students are now studying was first discovered and 
put to great use long before physicists had even established that atoms and 
molecules existed! 
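The point that the substrate enters only through a single number can be made concrete with a toy solver (my sketch, not Tegmark's; grid sizes and the illustrative speeds are made up). The 1D wave equation u_tt = c² u_xx is discretized below, and the only medium-dependent input anywhere is c:

```python
# Leapfrog finite-difference solver for the 1D wave equation
#   u_tt = c**2 * u_xx
# The ONLY substrate-dependent input is the wave speed c; nothing
# about molecules or material composition appears anywhere.

def simulate_wave(c, nx=200, nt=400, dx=1.0, dt=0.5):
    """Evolve an initial bump; c is the wave speed of the medium.
    Stability requires the Courant number c * dt / dx <= 1."""
    r2 = (c * dt / dx) ** 2
    u_prev = [0.0] * nx
    u = [0.0] * nx
    u[nx // 2] = 1.0          # displacement bump in the middle,
    u_prev[nx // 2] = 1.0     # released from rest
    for _ in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

# "Any gas whatsoever will suffice": same code, different single number.
# (The speeds here are in arbitrary grid units, not m/s.)
slow_medium = simulate_wave(c=1.0)
fast_medium = simulate_wave(c=1.5)
```

Swapping media changes one argument and nothing else, which is exactly what substrate-independence amounts to here.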


Alan Turing famously proved that *computations* are substrate-independent 
as well: There’s a vast variety of different computer architectures that 
are “universal” in the sense that they can all perform the exact same 
computations. So if you're a conscious superintelligent character in a 
future computer game, you'd have no way of knowing whether you ran on a 
desktop, a tablet or a phone, because you would be substrate-independent.


Nor could you tell whether the logic gates of the computer were made of 
transistors, optical circuits or other hardware, or even what the 
fundamental laws of physics were. Because of this substrate-independence, 
shrewd engineers have been able to repeatedly replace the technologies 
inside our computers with dramatically better ones without changing the 
software, making computation twice as cheap roughly every couple of years 
for over a century, cutting the computer cost a whopping million million 
million times since my grandmothers were born. It’s precisely this 
substrate-independence of computation that implies that artificial 
intelligence is possible: Intelligence doesn't require flesh, blood or 
carbon atoms. 
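The arithmetic behind that "million million million" can be checked directly (a rough sanity check of the quoted claim, not Tegmark's own calculation): a factor of 10^18 requires about 60 halvings, and at one halving per couple of years that is roughly 120 years, i.e. since the early 1900s.

```python
import math

factor = 10 ** 18                 # "a million million million"
doublings = math.log2(factor)     # cost-halvings needed for that factor
years = doublings * 2             # at roughly one halving every 2 years

print(round(doublings, 1))   # 59.8 halvings
print(round(years))          # 120 years
```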

This example illustrates three important points.

First, substrate-independence doesn't mean that a substrate is unnecessary, 
but that most details of it don't matter. You obviously can't have sound 
waves in a gas if there's no gas, but any gas whatsoever will suffice. 
Similarly, you obviously can't have computation without matter, but any 
matter will do as long as it can be arranged into logic gates, connected 
neurons or some other building block enabling universal computation.
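The "any matter will do" point can be made concrete with a sketch (my illustration, not from the essay): take NAND as the single primitive gate and build every other Boolean operation from it. Any substrate that physically realizes NAND, whether transistors, optical circuits, or connected neurons, thereby supports the same computations.

```python
# One primitive gate; everything below is built from it alone.
def nand(a, b):
    return not (a and b)

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

# A half-adder, the first step toward arbitrary arithmetic,
# expressed purely in NANDs:
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```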


Second, the substrate-independent phenomenon takes on a life of its own, 
independent of its substrate. A wave can travel across a lake, even though 
none of its water molecules do—they mostly bob up and down.


Third, it's often only the substrate-independent aspect that we're 
interested in: A surfer usually cares more about the position and height of 
a wave than about its detailed molecular composition, and if two 
programmers are jointly hunting a bug in their code, they're probably not 
discussing transistors.

Since childhood, I’ve wondered how tangible physical stuff such as flesh 
and blood can give rise to something that feels as intangible, abstract and 
ethereal as intelligence and consciousness. We’ve now arrived at the 
answer: these phenomena feel so non-physical because they're 
substrate-independent, taking on a life of their own that doesn't depend on 
or reflect the physical details. We still don’t understand intelligence to 
the point of building machines that can match all human abilities, but AI 
researchers are striking ever more abilities from their can’t-do list, from 
image classification to Go-playing, speech recognition, translation and 
driving.


But what about *consciousness,* by which I mean simply "subjective 
experience"? When you’re driving a car, you’re having a conscious 
experience of colors, sounds, emotions, etc. But why are you 
experiencing anything at all? Does it feel like anything to be a 
self-driving car? This is what David Chalmers calls the "hard problem," and 
it’s distinct from merely asking how intelligence works. 


I've been arguing for decades that consciousness is the way information 
feels when being processed in certain complex ways. This leads to a radical 
idea that I really like: If consciousness is the way that information feels 
when it’s processed in certain ways, then it must be substrate-independent; 
*it's only the structure of the information processing that matters, not the 
structure of the matter doing the information processing.* In other words, 
consciousness is substrate-independent twice over!


We know that when particles move around in spacetime in patterns obeying 
certain principles, they give rise to substrate-independent phenomena—*e.g.* 
waves and computations. We've now taken this idea to another level: *If the 
information processing itself obeys certain principles, it can give rise to 
the higher-level substrate-independent phenomenon that we call 
consciousness.* This places your conscious experience not one but two 
levels up from the matter. No wonder your mind feels non-physical! We don’t 
yet know what principles information processing needs to obey to be 
conscious, but concrete proposals have been made that neuroscientists are 
trying to test experimentally.


However, one lesson from substrate-independence is already clear: we should 
reject carbon-chauvinism and the common view that our intelligent machines 
will always be our unconscious slaves. Computation, intelligence and 
consciousness are patterns in the spacetime arrangement of particles that 
take on a life of their own, and it's not the particles but the patterns 
that really matter! Matter doesn't matter.



Matter matters.

@philipthrift
 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/158631e4-db33-4ac2-9235-219b619ac297%40googlegroups.com.
