Re: Singularity -- when AI exceeds human intelligence

2018-03-01 Thread Bruno Marchal

> On 28 Feb 2018, at 20:38, Lawrence Crowell  
> wrote:
> 
> On Wednesday, February 28, 2018 at 2:08:43 AM UTC-6, Bruno Marchal wrote:
> 
>> On 26 Feb 2018, at 18:02, Lawrence Crowell > > wrote:
>> 
>> On Monday, February 26, 2018 at 5:53:05 AM UTC-6, Bruno Marchal wrote:
>> 
>>> On 24 Feb 2018, at 00:36, Lawrence Crowell > 
>>> wrote:
>>> 
>>> 
>>> 
>>> On Friday, February 23, 2018 at 11:12:32 AM UTC-6, Bruno Marchal wrote:
>>> 
 On 23 Feb 2018, at 17:15, Lawrence Crowell > 
 wrote:
 
 The MH spacetime in the case of the Kerr metric does permit an observer in 
 principle to witness an infinite stream of bits or qubits up to the inner 
 horizon r_- that is continuous with I^+ in the exterior spacetime. This 
 means due to spacetime effects one could witness the diagonalization in a 
 Zeno machine context. For instance, a switch that is switched on for one 
 second, off for the next half second, on for the next quarter second, and so 
 forth will presumably have a final state. However, what prevents this 
 in a fundamental way is that a switch flipped at this chirped frequency 
 will diverge in energy and become a black hole before returning a result. 
 We could of course avoid the black hole with a ball that bounces, but of 
 course one does not get an infinite number of little bounces at the end. 
 Because of this an observer could in principle witness a universal Turing 
 machine emulate all possible Turing machines. Thinking according to TMs is 
 for me a bit simpler, but this does illustrate one could get around Gödel.
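The supertask described here can be sketched numerically. A minimal illustration (added for this edit, not from the thread): the toggle durations halve each time, so the total time converges to 2 s while the switch state never settles. The per-toggle energy estimate E ~ h·f is a rough uncertainty-principle bound assumed for illustration only.

```python
# Sketch of the Zeno switch supertask (illustrative only).
# Toggle n lasts 2**-n seconds: the total duration converges to 2 s,
# yet the switch state alternates forever and has no limiting value.
# E ~ h * f per toggle is an assumed, rough uncertainty-principle bound.

H = 6.62607015e-34  # Planck constant, J*s

def zeno_summary(n_toggles):
    elapsed = 0.0
    state = False
    peak_energy = 0.0
    for n in range(n_toggles):
        duration = 2.0 ** -n                          # 1 s, 1/2 s, 1/4 s, ...
        elapsed += duration
        state = not state                             # flip the switch
        peak_energy = max(peak_energy, H / duration)  # E ~ h * f
    return elapsed, state, peak_energy

for n in (10, 20, 40):
    t, s, e = zeno_summary(n)
    print(f"{n} toggles: elapsed ~ {t:.12f} s, state={s}, E ~ {e:.3e} J")
```

The elapsed time approaches 2 s from below while the required energy doubles with every toggle, which is the divergence mentioned in the paragraph.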
>>> 
>>> I am not sure the observer should not itself be implemented in the MH, and 
>>> its first person perspective might not allow him to see the TM emulating 
>>> all TMs. But even if it did, that would only be the implementation of a 
>>> halting algorithm, which overcomes the Turing limitation, but not the Gödel 
>>> one: you would only get the sigma_1 truth completely, but “time” itself is 
>>> such an oracle, and again, I am not sure if such an observer does not, from 
>>> its personal point of view, have to live an infinite life to assess the 
>>> result. But I looked at the MH paper a long time ago, so take this with 
>>> caution. Note that Gödel incompleteness cannot be escaped by *any* means, 
>>> even infinite means, unless you directly refer to the semantics, which is 
>>> not an effective process.
>>> 
>>> 
>>> 
 
 However, quantum mechanics as I illustrate seems to throw a spanner in the 
 works. This breaks the continuity between r_- and I^+. It also means the 
 inner horizon is built from quantum fields from the exterior in ways that 
 generate a mass inflation singularity. This is interesting to ponder with 
 respect to the connection between quantum mechanics and general relativity.
>>> 
>>> Yes, very interesting. 
>>> 
>>> 
>>> 
 In fact I think the two are simply aspects of the same thing. This means 
 in some way the incompleteness theorems of Godel are involved with the 
 foundations of physics.
>>> 
>>> Very glad to hear that. 
>>> Note that with Mechanism, incompleteness is responsible for the whole set 
>>> of accessible phenomenologies including the physical one, so yes, it is 
>>> hoped that physicists will someday study incompleteness (in a more serious 
>>> (valid) way than Penrose, who has turned many physicists away from Gödel, 
>>> and many logicians away from physics). 
>>> 
>>> Bruno
>>> 
>>> In effect quantum mechanics enters into the picture of MH spacetimes to 
>>> prevent physics from providing a loophole out of Gödel's theorem.
>> 
>> That seems very interesting. If you have a link on this, it would help the 
>> derivation of the quantum from arithmetic, and possibly the derivation of GR 
>> too. It reminds me of Bohr's use of GR to counter an attempt by Einstein to 
>> refute the energy/time “uncertainty” relation (I am sure you know it). It 
>> seems to me to make QM imply GR, at some level, but I have not been able to 
>> do it effectively.
>> 
>> Bruno
>> 
>>  I am the link. This is a bit of a sideline project. I have had in mind to 
>> derive QM and GR from the theory of Diophantine equations.
> 
> 
> Good project. It is partially done—in some sense, even from one special 
> Diophantine equation, which is Turing-Universal, and QM and GR should (with 
> mechanism) be “theory independent”. If you can derive them from the 
> self-reference implicit in a Diophantine equation, you can derive them from 
> any first order specification of any Turing-complete theory or Turing 
> universal machinery. Three quantum logics appear already where expected, and 
> the whole “many-worlds” aspect of Nature appears formally, and intuitively 
> (through the many computations or the universal dovetailing implicit in all 
> Turing universal 

Re: Singularity -- when AI exceeds human intelligence

2018-02-28 Thread Lawrence Crowell
On Wednesday, February 28, 2018 at 2:08:43 AM UTC-6, Bruno Marchal wrote:
>
>
> On 26 Feb 2018, at 18:02, Lawrence Crowell  > wrote:
>
> On Monday, February 26, 2018 at 5:53:05 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> On 24 Feb 2018, at 00:36, Lawrence Crowell  
>> wrote:
>>
>>
>>
>> On Friday, February 23, 2018 at 11:12:32 AM UTC-6, Bruno Marchal wrote:
>>>
>>>
>>> On 23 Feb 2018, at 17:15, Lawrence Crowell  
>>> wrote:
>>>
>>> The MH spacetime in the case of the Kerr metric does permit an observer 
>>> in principle to witness an infinite stream of bits or qubits up to the 
>>> inner horizon r_- that is continuous with I^+ in the exterior spacetime. 
>>> This means due to spacetime effects one could witness the diagonalization 
>>> in a Zeno machine context. For instance, a switch that is switched on for 
>>> one second, off for the next half second, on for the next quarter second, 
>>> and so forth will presumably have a final state. However, what prevents 
>>> this in a fundamental way is that a switch flipped at this chirped frequency 
>>> will diverge in energy and become a black hole before returning a result. 
>>> We could of course avoid the black hole with a ball that bounces, but of 
>>> course one does not get an infinite number of little bounces at the end. 
>>> Because of this an observer could in principle witness a universal Turing 
>>> machine emulate all possible Turing machines. Thinking according to TMs is 
>>> for me a bit simpler, but this does illustrate one could get around Gödel.
>>>
>>>
>>> I am not sure the observer should not itself be implemented in the MH, 
>>> and its first person perspective might not allow him to see the TM 
>>> emulating all TMs. But even if it did, that would only be the 
>>> implementation of a halting algorithm, which overcomes the Turing 
>>> limitation, but not the Gödel one: you would only get the sigma_1 truth 
>>> completely, but “time” itself is such an oracle, and again, I am not sure 
>>> if such an observer does not, from its personal point of view, have to live 
>>> an infinite life to assess the result. But I looked at the MH paper a 
>>> long time ago, so take this with caution. Note that Gödel incompleteness 
>>> cannot be escaped by *any* means, even infinite means, unless you directly 
>>> refer to the semantics, which is not an effective process.
>>>
>>>
>>>
>>>
>>> However, quantum mechanics as I illustrate seems to throw a spanner in 
>>> the works. This breaks the continuity between r_- and I^+. It also means 
>>> the inner horizon is built from quantum fields from the exterior in ways 
>>> that generate a mass inflation singularity. This is interesting to ponder 
>>> with respect to the connection between quantum mechanics and general 
>>> relativity.
>>>
>>>
>>> Yes, very interesting. 
>>>
>>>
>>>
>>> In fact I think the two are simply aspects of the same thing. This means 
>>> in some way the incompleteness theorems of Godel are involved with the 
>>> foundations of physics.
>>>
>>>
>>> Very glad to hear that. 
>>> Note that with Mechanism, incompleteness is responsible for the whole 
>>> set of accessible phenomenologies including the physical one, so yes, it is 
>>> hoped that physicists will someday study incompleteness (in a more serious 
>>> (valid) way than Penrose, who has turned many physicists away from Gödel, 
>>> and many logicians away from physics). 
>>>
>>> Bruno
>>>
>>
>> In effect quantum mechanics enters into the picture of MH spacetimes to 
>> prevent physics from providing a loophole out of Gödel's theorem.
>>
>>
>> That seems very interesting. If you have a link on this, it would help the 
>> derivation of the quantum from arithmetic, and possibly the derivation of GR 
>> too. It reminds me of Bohr's use of GR to counter an attempt by Einstein to 
>> refute the energy/time “uncertainty” relation (I am sure you know it). It 
>> seems to me to make QM imply GR, at some level, but I have not been able to 
>> do it effectively.
>>
>> Bruno
>>
>
>  I am the link. This is a bit of a sideline project. I have had in mind to 
> derive QM and GR from the theory of Diophantine equations.
>
>
>
> Good project. It is partially done—in some sense, even from one special 
> Diophantine equation, which is Turing-Universal, and QM and GR should (with 
> mechanism) be “theory independent”. If you can derive them from the 
> self-reference implicit in a Diophantine equation, you can derive them from 
> any first order specification of any Turing-complete theory or Turing 
> universal machinery. Three quantum logics appear already where expected, 
> and the whole “many-worlds” aspect of Nature appears formally, and 
> intuitively (through the many computations or the universal dovetailing 
> implicit in all Turing universal systems).
>
> The big advantage, compared to a physical “conventional” approach, is that 
> the Gödelian division between proof and truth, 

Re: Singularity -- when AI exceeds human intelligence

2018-02-28 Thread Bruno Marchal

> On 26 Feb 2018, at 18:02, Lawrence Crowell  
> wrote:
> 
> On Monday, February 26, 2018 at 5:53:05 AM UTC-6, Bruno Marchal wrote:
> 
>> On 24 Feb 2018, at 00:36, Lawrence Crowell > > wrote:
>> 
>> 
>> 
>> On Friday, February 23, 2018 at 11:12:32 AM UTC-6, Bruno Marchal wrote:
>> 
>>> On 23 Feb 2018, at 17:15, Lawrence Crowell > 
>>> wrote:
>>> 
>>> The MH spacetime in the case of the Kerr metric does permit an observer in 
>>> principle to witness an infinite stream of bits or qubits up to the inner 
>>> horizon r_- that is continuous with I^+ in the exterior spacetime. This 
>>> means due to spacetime effects one could witness the diagonalization in a 
>>> Zeno machine context. For instance, a switch that is switched on for one 
>>> second, off for the next half second, on for the next quarter second, and 
>>> so forth will presumably have a final state. However, what prevents this 
>>> in a fundamental way is that a switch flipped at this chirped frequency 
>>> will diverge in energy and become a black hole before returning a result. 
>>> We could of course avoid the black hole with a ball that bounces, but of 
>>> course one does not get an infinite number of little bounces at the end. 
>>> Because of this an observer could in principle witness a universal Turing 
>>> machine emulate all possible Turing machines. Thinking according to TMs is 
>>> for me a bit simpler, but this does illustrate one could get around Gödel.
>> 
>> I am not sure the observer should not itself be implemented in the MH, and 
>> its first person perspective might not allow him to see the TM emulating all 
>> TMs. But even if it did, that would only be the implementation of a halting 
>> algorithm, which overcomes the Turing limitation, but not the Gödel one: you 
>> would only get the sigma_1 truth completely, but “time” itself is such an 
>> oracle, and again, I am not sure if such an observer does not, from its 
>> personal point of view, have to live an infinite life to assess the result. 
>> But I looked at the MH paper a long time ago, so take this with 
>> caution. Note that Gödel incompleteness cannot be escaped by *any* means, 
>> even infinite means, unless you directly refer to the semantics, which is 
>> not an effective process.
>> 
>> 
>> 
>>> 
>>> However, quantum mechanics as I illustrate seems to throw a spanner in the 
>>> works. This breaks the continuity between r_- and I^+. It also means the 
>>> inner horizon is built from quantum fields from the exterior in ways that 
>>> generate a mass inflation singularity. This is interesting to ponder with 
>>> respect to the connection between quantum mechanics and general relativity.
>> 
>> Yes, very interesting. 
>> 
>> 
>> 
>>> In fact I think the two are simply aspects of the same thing. This means in 
>>> some way the incompleteness theorems of Godel are involved with the 
>>> foundations of physics.
>> 
>> Very glad to hear that. 
>> Note that with Mechanism, incompleteness is responsible for the whole set of 
>> accessible phenomenologies including the physical one, so yes, it is hoped 
>> that physicists will someday study incompleteness (in a more serious (valid) 
>> way than Penrose, who has turned many physicists away from Gödel, and many 
>> logicians away from physics). 
>> 
>> Bruno
>> 
>> In effect quantum mechanics enters into the picture of MH spacetimes to 
>> prevent physics from providing a loophole out of Gödel's theorem.
> 
> That seems very interesting. If you have a link on this, it would help the 
> derivation of the quantum from arithmetic, and possibly the derivation of GR 
> too. It reminds me of Bohr's use of GR to counter an attempt by Einstein to 
> refute the energy/time “uncertainty” relation (I am sure you know it). It 
> seems to me to make QM imply GR, at some level, but I have not been able 
> to do it effectively.
> 
> Bruno
> 
>  I am the link. This is a bit of a sideline project. I have had in mind to 
> derive QM and GR from the theory of Diophantine equations.


Good project. It is partially done—in some sense, even from one special 
Diophantine equation, which is Turing-Universal, and QM and GR should (with 
mechanism) be “theory independent”. If you can derive them from the 
self-reference implicit in a Diophantine equation, you can derive them from any 
first order specification of any Turing-complete theory or Turing universal 
machinery. Three quantum logics appear already where expected, and the whole 
“many-worlds” aspect of Nature appears formally, and intuitively (through the 
many computations or the universal dovetailing implicit in all Turing universal 
systems).
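The universal dovetailing mentioned here can be sketched directly; the toy `program` generators below are an illustrative stand-in for Turing machines (an assumption of this sketch, not anyone's actual construction):

```python
# A minimal dovetailer: interleave steps of ever more programs so that
# every program receives unboundedly many steps, even though some never
# halt. Programs are modeled as generators standing in for TMs.

from itertools import count

def program(i):
    # Toy program i: runs forever if i is even, halts after i steps if odd.
    steps = count() if i % 2 == 0 else range(i)
    for s in steps:
        yield (i, s)          # "program i performed step s"

def dovetail(stages):
    trace, machines = [], {}
    for stage in range(1, stages + 1):
        for i in range(stage):             # stage k advances programs 0..k-1
            gen = machines.setdefault(i, program(i))
            try:
                trace.append(next(gen))
            except StopIteration:
                pass                        # program i already halted
    return trace

print(dovetail(6)[:8])
```

After 6 stages every program with index below 6 has been started, and no non-halting program can block the others, which is the point of dovetailing.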

The big advantage, compared to a physical “conventional” approach, is that the 
Gödelian division between proof and truth, inherited in the “material” variants 
of the provability logics, provides a means to distinguish the sharable quanta 
from the private first person 

Re: Singularity -- when AI exceeds human intelligence

2018-02-26 Thread Lawrence Crowell
On Monday, February 26, 2018 at 5:53:05 AM UTC-6, Bruno Marchal wrote:
>
>
> On 24 Feb 2018, at 00:36, Lawrence Crowell  > wrote:
>
>
>
> On Friday, February 23, 2018 at 11:12:32 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> On 23 Feb 2018, at 17:15, Lawrence Crowell  
>> wrote:
>>
>> The MH spacetime in the case of the Kerr metric does permit an observer 
>> in principle to witness an infinite stream of bits or qubits up to the 
>> inner horizon r_- that is continuous with I^+ in the exterior spacetime. 
>> This means due to spacetime effects one could witness the diagonalization 
>> in a Zeno machine context. For instance, a switch that is switched on for 
>> one second, off for the next half second, on for the next quarter second, 
>> and so forth will presumably have a final state. However, what prevents this 
>> in a fundamental way is that a switch flipped at this chirped frequency 
>> will diverge in energy and become a black hole before returning a result. 
>> We could of course avoid the black hole with a ball that bounces, but of 
>> course one does not get an infinite number of little bounces at the end. 
>> Because of this an observer could in principle witness a universal Turing 
>> machine emulate all possible Turing machines. Thinking according to TMs is 
>> for me a bit simpler, but this does illustrate one could get around Gödel.
>>
>>
>> I am not sure the observer should not itself be implemented in the MH, 
>> and its first person perspective might not allow him to see the TM 
>> emulating all TMs. But even if it did, that would only be the 
>> implementation of a halting algorithm, which overcomes the Turing 
>> limitation, but not the Gödel one: you would only get the sigma_1 truth 
>> completely, but “time” itself is such an oracle, and again, I am not sure 
>> if such an observer does not, from its personal point of view, have to live 
>> an infinite life to assess the result. But I looked at the MH paper a 
>> long time ago, so take this with caution. Note that Gödel incompleteness 
>> cannot be escaped by *any* means, even infinite means, unless you directly 
>> refer to the semantics, which is not an effective process.
>>
>>
>>
>>
>> However, quantum mechanics as I illustrate seems to throw a spanner in 
>> the works. This breaks the continuity between r_- and I^+. It also means 
>> the inner horizon is built from quantum fields from the exterior in ways 
>> that generate a mass inflation singularity. This is interesting to ponder 
>> with respect to the connection between quantum mechanics and general 
>> relativity.
>>
>>
>> Yes, very interesting. 
>>
>>
>>
>> In fact I think the two are simply aspects of the same thing. This means 
>> in some way the incompleteness theorems of Godel are involved with the 
>> foundations of physics.
>>
>>
>> Very glad to hear that. 
>> Note that with Mechanism, incompleteness is responsible for the whole set 
>> of accessible phenomenologies including the physical one, so yes, it is 
>> hoped that physicists will someday study incompleteness (in a more serious 
>> (valid) way than Penrose, who has turned many physicists away from Gödel, 
>> and many logicians away from physics). 
>>
>> Bruno
>>
>
> In effect quantum mechanics enters into the picture of MH spacetimes to 
> prevent physics from providing a loophole out of Godel's theorem.
>
>
> That seems very interesting. If you have a link on this, it would help the 
> derivation of the quantum from arithmetic, and possibly the derivation of 
> GR too. It reminds me of Bohr's use of GR to counter an attempt by 
> Einstein to refute the energy/time “uncertainty” relation (I am sure you 
> know it). It seems to me to make QM imply GR, at some level, but I have 
> not been able to do it effectively.
>
> Bruno
>

 I am the link. This is a bit of a sideline project. I have had in mind to 
derive QM and GR from the theory of Diophantine equations.

LC
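The Diophantine route runs through the MRDP theorem: the solvable Diophantine equations form a Sigma_1-complete (Turing-complete) set, so solutions can be enumerated but solvability cannot be decided in general. A toy bounded search, with the Pythagorean equation as an illustrative stand-in for a genuinely universal equation:

```python
# By the MRDP theorem, Diophantine solvability is Sigma_1-complete:
# solutions can only be searched for, never decided in general.
# Toy bounded search over a sample polynomial; x^2 + y^2 = z^2 stands
# in (as an assumption of this sketch) for a universal equation.

def diophantine_solutions(p, bound):
    """All non-negative solutions of p(x, y, z) == 0 with coordinates <= bound."""
    return [(x, y, z)
            for x in range(bound + 1)
            for y in range(bound + 1)
            for z in range(bound + 1)
            if p(x, y, z) == 0]

def pythagorean(x, y, z):
    return x * x + y * y - z * z

print(diophantine_solutions(pythagorean, 5))  # includes the triple (3, 4, 5)
```

An unbounded search would simply grow the bound forever: it halts exactly when a solution exists, which is the semi-decidability the MRDP theorem trades on.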

>
>
>
> LC 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Singularity -- when AI exceeds human intelligence

2018-02-26 Thread Bruno Marchal

> On 24 Feb 2018, at 00:36, Lawrence Crowell  
> wrote:
> 
> 
> 
> On Friday, February 23, 2018 at 11:12:32 AM UTC-6, Bruno Marchal wrote:
> 
>> On 23 Feb 2018, at 17:15, Lawrence Crowell > > wrote:
>> 
>> The MH spacetime in the case of the Kerr metric does permit an observer in 
>> principle to witness an infinite stream of bits or qubits up to the inner 
>> horizon r_- that is continuous with I^+ in the exterior spacetime. This 
>> means due to spacetime effects one could witness the diagonalization in a 
>> Zeno machine context. For instance, a switch that is switched on for one 
>> second, off for the next half second, on for the next quarter second, and so 
>> forth will presumably have a final state. However, what prevents this in a 
>> fundamental way is that a switch flipped at this chirped frequency will 
>> diverge in energy and become a black hole before returning a result. We 
>> could of course avoid the black hole with a ball that bounces, but of course 
>> one does not get an infinite number of little bounces at the end. Because of 
>> this an observer could in principle witness a universal Turing machine 
>> emulate all possible Turing machines. Thinking according to TMs is for me a 
>> bit simpler, but this does illustrate one could get around Gödel.
> 
> I am not sure the observer should not itself be implemented in the MH, and 
> its first person perspective might not allow him to see the TM emulating all 
> TMs. But even if it did, that would only be the implementation of a halting 
> algorithm, which overcomes the Turing limitation, but not the Gödel one: you 
> would only get the sigma_1 truth completely, but “time” itself is such an 
> oracle, and again, I am not sure if such an observer does not, from its 
> personal point of view, have to live an infinite life to assess the result. 
> But I looked at the MH paper a long time ago, so take this with caution. 
> Note that Gödel incompleteness cannot be escaped by *any* means, even 
> infinite means, unless you directly refer to the semantics, which is not an 
> effective process.
> 
> 
> 
>> 
>> However, quantum mechanics as I illustrate seems to throw a spanner in the 
>> works. This breaks the continuity between r_- and I^+. It also means the 
>> inner horizon is built from quantum fields from the exterior in ways that 
>> generate a mass inflation singularity. This is interesting to ponder with 
>> respect to the connection between quantum mechanics and general relativity.
> 
> Yes, very interesting. 
> 
> 
> 
>> In fact I think the two are simply aspects of the same thing. This means in 
>> some way the incompleteness theorems of Godel are involved with the 
>> foundations of physics.
> 
> Very glad to hear that. 
> Note that with Mechanism, incompleteness is responsible for the whole set of 
> accessible phenomenologies including the physical one, so yes, it is hoped 
> that physicists will someday study incompleteness (in a more serious (valid) 
> way than Penrose, who has turned many physicists away from Gödel, and many 
> logicians away from physics). 
> 
> Bruno
> 
> In effect quantum mechanics enters into the picture of MH spacetimes to 
> prevent physics from providing a loophole out of Gödel's theorem.

That seems very interesting. If you have a link on this, it would help the 
derivation of the quantum from arithmetic, and possibly the derivation of 
GR too. It reminds me of Bohr's use of GR to counter an attempt by Einstein to 
refute the energy/time “uncertainty” relation (I am sure you know it). It seems 
to me to make QM imply GR, at some level, but I have not been able to do it 
effectively.

Bruno




> 
> LC 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-26 Thread Bruno Marchal

> On 23 Feb 2018, at 23:15, Brent Meeker  wrote:
> 
> 
> 
> On 2/23/2018 3:37 AM, Bruno Marchal wrote:
>>> However though perhaps a spider may exist in a less filtered internal state 
>>> of being than a mouse, I don't see how it is more conscious. Is an amoeba 
>>> even more conscious, then, than a spider? Is the simplest most elementary 
>>> particle the most conscious entity of all?
>> Normally an amoeba is indeed more conscious than a spider, which is more 
>> conscious than us, but plausibly less conscious of their environment. The 
>> price of that higher consciousness is that it is of the dissociative kind of 
>> consciousness.
>> 
>> Particles are not conscious in the sense that particles have no brain at 
>> all, and actually, do not exist at all.
> 
> So your theory leads us to conclude that consciousness is maximized somewhere 
> between no neurons, even no structure at all, and the neural equipment of a 
> spider.   Perhaps a pebble?  

Pebbles? I don’t think so. A pebble is not actually anything which exists 
ontologically, so assuming that a pebble thinks is a bit like attributing a 
mind to an illusion (the material things).

To make consciousness manifest in the relative way, you need to be at least 
Turing universal, so you need some amount of neurons or the Turing equivalent. 
The empty theory is NOT conscious. Nor is the thermostat, in any sense 
compatible with mechanism. Amazingly, your laptop is, but in a dissociative 
state. This is important given that its consciousness is the same as yours, but 
much more undifferentiated.



> 
> I find it disingenuous that you talk of testing your theory by comparing it 
> with experience and quantum mechanics, finding it agrees over a tiny part of 
> their domain, and calling this confirmation. 

It is not a tiny part, it is the core skeleton, and it is highly non-trivial. 
Most of my real opponents bet that all the relevant modalities would collapse, 
making physics purely geographical, a bit like Smullyan says explicitly in his 
book “Forever Undecided”.

I was expecting to find the many-worlds aspect quickly, but I got the formal 
logic of physics, not a long way from a theorem à la Gleason. 

OK, it is not much, but it is the only theory which explains why there is a 
non-trivial physics, and which explains consciousness. 



> But when your theory leads to an absurdity

Which one?



> you obfuscate the fact with mysticism and redefining consciousness as an 
> illusion…

This is disingenuous! I am not sure why you say this. Since the start I have 
insisted that consciousness cannot be an illusion. You need to be conscious to 
be under an illusion. Matter is an illusion. Not consciousness. (And by Matter 
I always mean Aristotle’s primary matter, the matter of the metaphysicians, not 
the matter studied by the physicists, which on the contrary serves the 
verification procedure.)




> although in other contexts you note it is the only thing we can be sure of.

I say this in all contexts. If you find a quote of me saying that consciousness 
is an illusion, I will buy you a bottle of whisky!

Bruno



> 
> Brent
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Brent Meeker



On 2/23/2018 3:37 AM, Bruno Marchal wrote:


However though perhaps a spider may exist in a less filtered
internal state of being than a mouse, I don't see how it is more
conscious. Is an amoeba even more conscious, then, than a spider?
Is the simplest most elementary particle the most conscious
entity of all?

Normally an amoeba is indeed more conscious than a spider, which is 
more conscious than us, but plausibly less conscious of their 
environment. The price of that higher consciousness is that it is of 
the dissociative kind of consciousness.


Particles are not conscious in the sense that particles have no brain 
at all, and actually, do not exist at all.


So your theory leads us to conclude that consciousness is maximized 
somewhere between no neurons, even no structure at all, and the neural 
equipment of a spider.   Perhaps a pebble?


I find it disingenuous that you talk of testing your theory by comparing 
it with experience and quantum mechanics, finding it agrees over a tiny 
part of their domain, and calling this confirmation. But when your theory 
leads to an absurdity you obfuscate the fact with mysticism and by 
redefining consciousness as an illusion...although in other contexts you 
note it is the only thing we can be sure of.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread John Clark
On Wed, Feb 21, 2018 at 2:42 PM, Brent Meeker  wrote:

> Which is why mice are more conscious than you are and spiders are more
> conscious than mice.


​I don't know about you but I lack the ability to directly detect the
consciousness of spiders or mice, or even of my fellow human beings; but
I can directly detect intelligent behavior, so the best I can do is make an
educated guess about consciousness from that. It may not be a perfect test,
but it's all I've got and all I'll ever have, so it will just have to do.

 John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Bruno Marchal

> On 23 Feb 2018, at 17:15, Lawrence Crowell  
> wrote:
> 
> On Friday, February 23, 2018 at 7:06:47 AM UTC-6, Bruno Marchal wrote:
> 
>> On 23 Feb 2018, at 12:40, Lawrence Crowell > > wrote:
>> 
>> On Thursday, February 22, 2018 at 6:38:15 AM UTC-6, Bruno Marchal wrote:
>> 
>> > On 21 Feb 2018, at 20:40, Brent Meeker > wrote: 
>> > 
>> > 
>> > 
>> > On 2/21/2018 1:32 AM, Bruno Marchal wrote: 
>> >> I guess you mean enumerable here. I don’t see what physical bounds have 
>> >> to do with the Church-Turing thesis, though. We always suppose that the 
>> >> universal machines have potentially unbounded time and space (in the non 
>> >> physical computer science sense) available for them. 
>> > 
>> > But they are bounded in the physical sense, and not just potentially. 
>> 
>> But the Church-Turing thesis has nothing to do with physics or the physical 
>> sense. 
>> 
>> Then you don’t know if a machine, even in the physical world, is bounded, 
>> unless you make special assumptions about some existing universe. 
>> 
>> With mechanism, there is no evidence for a primary physical universe. We 
>> would have found one if we had discovered a serious discrepancy between 
>> Nature’s physics and the physics in the “head of the number”, but we 
>> have tested this as far as possible and found none. 
>> 
>> The relationship between the physical world and mathematics of computation 
>> is something I explore here 
>> .
>>  This is in connection with the theoretical concept of hypercomputation. 
>> Certain types of spacetimes called MH (Malament-Hogarth spacetimes) have the 
>> physical properties that might do an end run around the limits of Godel. On 
>> the other hand quantum mechanics might provide limits on that.
> 
> I know of the existence of MH spacetimes, but it is not yet clear how this 
> escapes Gödel incompleteness. Since Turing we know that hyper-computations do 
> not escape incompleteness. It escapes PA and ZF, but it does not lead to an 
> effective way to emulate something not Turing emulable. I would need more 
> time to assess this, as I judge from an early draft I saw on this subject.
> 
> Then you say in your blog “Physics, on the other hand, ultimately attempts to 
> model reality.” But that is the main axiom of Aristotle’s metaphysics, which is 
> doubted from the start once we realise that all computations are run in 
> arithmetic. You invoke “god” in theology, which means that you don’t intend 
> to do metaphysics or theology with the scientific method. When we do theology 
> or metaphysics with the scientific method we must stay neutral on what could 
> be the fundamental reality, especially when work like mine gives a 
> precise tool to assess whether materiality is fundamental or emerging; the 
> results obtained so far support the idea that the material reality is not the 
> fundamental reality. The axioms you are using are refuted by the mechanist 
> hypothesis, so you must take into account that you are postulating a “god” 
> incompatible with Mechanism; but then the MH spacetime is of no use in 
> metaphysics, as it requires a black hole to work, and there is little evidence 
> that we have a black hole in our head. 
> Note that I do find the MH spacetime very interesting, and it suggests we 
> might exploit black holes computationally in some far future (or not, as you 
> are right that quantum mechanics makes this theoretically close to 
> impossible); but even if done, it would not change the logical conclusion of 
> Mechanism: physics is just not the fundamental science, and physics is 
> constructively reducible to the machine’s self-reference theory. You have only 
> the arithmetical reality, which emulates (in the sense of Church and Turing) all 
> computations, and physics is given by the non-computable statistics on all 
> relative computational consistent extensions. Although physics as a whole is 
> not computable, its propositional part is computable and decidable, and indeed we 
> have recovered some quantum logics at the place they were mandatory. This is 
> not just evidence for computationalism; it is also very deep theoretical 
> evidence that quantum mechanics is completely valid.
> 
> Bruno
> 
> The MH spacetime in the case of the Kerr metric does permit an observer in 
> principle to witness an infinite stream of bits or qubits up to the inner 
> horizon r_- that is continuous with I^+ in the exterior spacetime. This means 
> that, due to spacetime effects, one could witness the diagonalization in a Zeno 
> machine context. For instance, a switch that is switched on in one second, 
> off in the next half second, on in the next quarter second, and so forth will 
> presumably have a final state. However, what prevents this in a 
> fundamental way is that a switch 

Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread 'Chris de Morsella' via Everything List


Sent from Yahoo Mail on Android 
 
  On Thu, Feb 22, 2018 at 9:36 PM, Brent Meeker wrote:
 
 On 2/22/2018 7:06 PM, 'Chris de Morsella' via Everything List wrote:
  

 
 Sent from Yahoo Mail on Android 
 
  On Wed, Feb 21, 2018 at 1:37 AM, Bruno Marchal  wrote:
  
 On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List 
 wrote: 
  
  
 
  On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell 
 wrote:On Sunday, February 18, 2018 at 
10:00:24 PM UTC-6, Brent wrote: 
  
 
 On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
  
 Computers such as AlphaGo have complex algorithms for taking the rules of a 
game  like chess and running through long Markov chains of game events to 
increase their data base for playing the game. There is not really anything 
about "knowing  something" going on here. There is a lot of hype over AI these 
days, but I suspect a lot of this is meant to beguile people. I do suspect in 
time we will interact with AI as if it were intelligent and conscious. The 
really big changer though I think will be the  neural-cyber interlink that will 
put brains as the primary internet nodes. 
 
 Why would you suppose that when electronics have a signal speed ten million 
times faster than neurons?  Presently neurons have an advantage in connection 
density and power dissipation; but I see no reason they can hold that advantage.
 
 Brent
  
 
  I think it may come down to computers that obey the Church-Turing thesis, 
which is finite and bounded.  Hofstadter's book Godel Escher Bach has a chapter 
Bloop, Floop, Gloop where the Bloop means bounded loop or a halting program on 
a Turing  machine. Biology however is not Bloop, but is rather a web of 
processors that are more Floop, or free loop. The busy beaver algorithm is such 
a case, which grows in complexity with each step. The  computation of many 
fractals is this as well, where the Mandelbrot set with each iteration on a 
certain scale needs refinement to another floating point precision and thus 
grows in huge complexity. These of course halt in practice because the 
programmer puts in a stop by hand. These are recursively enumerable, and their 
complement in a set theoretic sense are Godel loops or Gloop. For machines to 
have properties at least parallel to conscious behavior we really have to be 
running in at least  Floop and maybe into Gloop. 
  LC 
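LC's BlooP/FlooP distinction above can be made concrete with a short sketch (an editorial illustration; the names come from Hofstadter's GEB, the code itself is an assumption): a BlooP-style procedure only uses loops whose iteration count is fixed before the loop starts, so it is guaranteed to halt, while a FlooP-style procedure may use a free while-loop with no such precomputed bound.

```python
# Illustrative sketch of the BlooP/FlooP distinction (not from GEB itself).

def factorial_bloop(n: int) -> int:
    """BlooP-style: the loop bound (n) is fixed in advance, so it must halt."""
    result = 1
    for i in range(1, n + 1):  # bounded loop: at most n iterations
        result *= i
    return result

def collatz_floop(n: int) -> int:
    """FlooP-style: a free while-loop; nothing in its structure guarantees
    halting (for Collatz it is conjectured, but unproven, that it always does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

The Collatz iteration is a classic free loop: it has halted for every input ever tried, yet no bound on the loop is known in advance, which is exactly what separates it from the bounded case.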
  Not sure if this has been touched on in this thread but it seems to me that 
the emergent phenomena of both self-awareness and consciousness depend on 
information hiding in some fundamental way. Both our self awareness and our 
conscious minds, which from our incomplete perspective seem to be innate  and 
ever present (at least when we are awake) are themselves the emergent outcomes 
of a vast amount of neural networked activity that is exquisitely hidden from 
us. We are unaware of the genesis of our own awareness. 
  Evidence from MRI scans supports the conclusion that before we are aware of 
being aware of some objectively measurable external event, or before we 
experience having a thought, the almost one hundred billion neurons 
crammed into our highly folded cortical pizza pie stuffed inside our skulls 
have been very busy and chatty indeed. 
  We are aware of being aware and we experience conscious existence, but the 
process by which both  our conscious experience and our own awareness of being 
arises within our minds is largely hidden from us.  I think it is a fair and 
reasonable question to ask: Is information hiding a necessary and integral 
aspect of processes through which self-awareness and consciousness arise? 
  In computer science the rather recent emergence of deep neural networks 
that are characterized  by having many layers, of which only the input layer 
and output layer of neurons are directly measurable, while conversely the many 
other layers that are arrayed in the stack between them  remain hidden offers 
some intriguing parallels that also seem to indicate a critical role for 
information hiding. Google’s machine-learned deep neural networks for 
image processing, for example, have 10 to 30 (or by now perhaps even more) 
stacked layers of artificial neurons, most of which are  hidden. 
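As a toy illustration of the hidden-layer point (a sketch with assumed shapes and random weights, not any actual Google model), a caller of the stack below sees only the input and the final output; the intermediate activations exist but are never exposed:

```python
import numpy as np

# Toy layered network: only the input and the final output are "visible";
# the intermediate activations are hidden inside the forward pass.
rng = np.random.default_rng(0)

def forward(x: np.ndarray, n_hidden: int = 10, width: int = 16) -> np.ndarray:
    h = x
    for _ in range(n_hidden):                  # the hidden part of the stack
        w = rng.standard_normal((h.shape[-1], width))
        h = np.tanh(h @ w)                     # non-linearity at every layer
    w_out = rng.standard_normal((width, 1))
    return h @ w_out                           # only this result is observed

y = forward(np.ones((1, 4)))                   # caller sees a (1, 1) output
```

The per-layer non-linearity is what makes the hidden state hard to interpret: after ten tanh layers there is no simple closed form relating input to output, which mirrors the point about not knowing "exactly what is going on" inside the stack.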
  Because of the non-linearity of the processes in play within these artificial 
deep stacks of  layered artificial neurons it is difficult to really know in 
any definitive manner exactly what is going  on. The outcomes from 
experimenting on the statistically trained (or in the vernacular, machine 
learned) models, by for example tweaking training parameters to experimentally 
see how doing so affects the resulting outcomes and by also subsequently 
forensically analyzing any generated logs & other telemetry are often 
surprisingly beautiful dreamscapes that are not reducible to a  series of 

Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Lawrence Crowell
On Friday, February 23, 2018 at 7:06:47 AM UTC-6, Bruno Marchal wrote:
>
>
> On 23 Feb 2018, at 12:40, Lawrence Crowell  > wrote:
>
> On Thursday, February 22, 2018 at 6:38:15 AM UTC-6, Bruno Marchal wrote:
>>
>>
>> > On 21 Feb 2018, at 20:40, Brent Meeker  wrote: 
>> > 
>> > 
>> > 
>> > On 2/21/2018 1:32 AM, Bruno Marchal wrote: 
>> >> I guess you mean enumerable here. I don’t see what physical bounds 
>> have to do with the Church-Turing thesis, though. We always suppose that the 
>> universal machine has potentially unbounded time and space (in the 
>> non-physical, computer-science sense) available to it. 
>> > 
>> > But they are bounded in the physical sense, and not just potentially. 
>>
>> But the Church-Turing thesis has nothing to do with physics or the physical 
>> sense. 
>>
>> Then you don’t know if a machine, even in the physical world, is bounded, 
>> unless you make special assumptions about some existing universe. 
>>
>> With mechanism, there is no evidence for a primary physical universe. We 
>> would have found one if we had discovered a serious discrepancy 
>> between Nature’s physics and the physics in the “head of the number”, 
>> but we have tested this as far as possible and found none. 
>>
>
> The relationship between the physical world and mathematics of computation 
> is something I explore here 
> .
>  
> This is in connection with the theoretical concept of hypercomputation. 
> Certain types of spacetimes called MH (Malament-Hogarth spacetimes) have 
> the physical properties that might do an end run around the limits of 
> Godel. On the other hand quantum mechanics might provide limits on that.
>
>
> I know of the existence of MH spacetimes, but it is not yet clear how this 
> escapes Gödel incompleteness. Since Turing we know that hyper-computations 
> do not escape incompleteness. It escapes PA and ZF, but it does not lead to an 
> effective way to emulate something not Turing emulable. I would need 
> more time to assess this, as I judge from an early draft I saw on this 
> subject.
>
> Then you say in your blog “Physics, on the other hand, ultimately attempts 
> to model reality.” But that is the main axiom of Aristotle’s metaphysics, 
> which is doubted from the start once we realise that all computations are run 
> in arithmetic. You invoke “god” in theology, which means that you don’t 
> intend to do metaphysics or theology with the scientific method. When we do 
> theology or metaphysics with the scientific method we must stay neutral on 
> what could be the fundamental reality, especially when work like mine 
> gives a precise tool to assess whether materiality is fundamental or 
> emerging; the results obtained so far support the idea that the material 
> reality is not the fundamental reality. The axioms you are using are refuted 
> by the mechanist hypothesis, so you must take into account that you are 
> postulating a “god” incompatible with Mechanism; but then the MH spacetime 
> is of no use in metaphysics, as it requires a black hole to work, and 
> there is little evidence that we have a black hole in our head. 
> Note that I do find the MH spacetime very interesting, and it suggests we 
> might exploit black holes computationally in some far future (or not, as 
> you are right that quantum mechanics makes this theoretically close to 
> impossible); but even if done, it would not change the logical conclusion 
> of Mechanism: physics is just not the fundamental science, and physics is 
> constructively reducible to the machine’s self-reference theory. You have only 
> the arithmetical reality, which emulates (in the sense of Church and Turing) 
> all computations, and physics is given by the non-computable statistics on 
> all relative computational consistent extensions. Although physics as a whole 
> is not computable, its propositional part is computable and decidable, 
> and indeed we have recovered some quantum logics at the place they were 
> mandatory. This is not just evidence for computationalism; it is also 
> very deep theoretical evidence that quantum mechanics is completely 
> valid.
>
> Bruno
>

The MH spacetime in the case of the Kerr metric does permit an observer in 
principle to witness an infinite stream of bits or qubits up to the inner 
horizon r_- that is continuous with I^+ in the exterior spacetime. This 
means that, due to spacetime effects, one could witness the diagonalization in 
a Zeno machine context. For instance, a switch that is switched on in one 
second, off in the next half second, on in the next quarter second, and so 
forth will presumably have a final state. However, what prevents this 
in a fundamental way is that a switch flipped at this chirped frequency 
will diverge in energy and become a black hole before returning a result. 
We could of course 
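The arithmetic of that Zeno switch can be sketched in a few lines (an editorial illustration, not from the original post): the flip intervals 1, 1/2, 1/4, ... s sum to a finite 2 s, so infinitely many flips would fit in finite time, while the flip frequency, and with it the minimal quantum energy E = h·f of the switching signal, doubles at every step and diverges.

```python
# Hypothetical illustration of the Zeno switch discussed above.
# Flip n takes (1/2)**n seconds, so every flip completes before t = 2 s,
# while the flip frequency (and hence the energy scale) doubles each step.

PLANCK_H = 6.62607015e-34  # Planck constant, J*s

def zeno_elapsed_time(n_flips: int) -> float:
    """Total time consumed by the first n_flips toggles: 1 + 1/2 + 1/4 + ..."""
    return sum(0.5 ** k for k in range(n_flips))

def flip_quantum_energy(n: int) -> float:
    """Rough energy scale of flip n, E = h * f with f = 2**n Hz; it diverges."""
    return PLANCK_H * 2.0 ** n

# Elapsed time approaches, but never reaches, 2 seconds:
times = [zeno_elapsed_time(n) for n in (1, 2, 10, 40)]
# ...while the per-flip energy grows without bound:
energies = [flip_quantum_energy(n) for n in (1, 10, 100)]
```

This is only the kinematics of the schedule; the point in the post is that no physical switch can follow it, since the diverging energy would collapse into a black hole before a result is returned.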

Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Bruno Marchal

> On 23 Feb 2018, at 12:40, Lawrence Crowell  
> wrote:
> 
> On Thursday, February 22, 2018 at 6:38:15 AM UTC-6, Bruno Marchal wrote:
> 
> > On 21 Feb 2018, at 20:40, Brent Meeker  
> > wrote: 
> > 
> > 
> > 
> > On 2/21/2018 1:32 AM, Bruno Marchal wrote: 
> >> I guess you mean enumerable here. I don’t see what physical bounds have to 
> >> do with the Church-Turing thesis, though. We always suppose that the universal 
> >> machine has potentially unbounded time and space (in the non-physical, 
> >> computer-science sense) available to it. 
> > 
> > But they are bounded in the physical sense, and not just potentially. 
> 
> But the Church-Turing thesis has nothing to do with physics or the physical 
> sense. 
> 
> Then you don’t know if a machine, even in the physical world, is bounded, 
> unless you make special assumptions about some existing universe. 
> 
> With mechanism, there is no evidence for a primary physical universe. We 
> would have found one if we had discovered a serious discrepancy between 
> Nature’s physics and the physics in the “head of the number”, but we have 
> tested this as far as possible and found none. 
> 
> The relationship between the physical world and mathematics of computation is 
> something I explore here 
> .
>  This is in connection with the theoretical concept of hypercomputation. 
> Certain types of spacetimes called MH (Malament-Hogarth spacetimes) have the 
> physical properties that might do an end run around the limits of Godel. On 
> the other hand quantum mechanics might provide limits on that.

I know of the existence of MH spacetimes, but it is not yet clear how this escapes 
Gödel incompleteness. Since Turing we know that hyper-computations do not 
escape incompleteness. It escapes PA and ZF, but it does not lead to an effective 
way to emulate something not Turing emulable. I would need more time to 
assess this, as I judge from an early draft I saw on this subject.

Then you say in your blog “Physics, on the other hand, ultimately attempts to 
model reality.” But that is the main axiom of Aristotle’s metaphysics, which is 
doubted from the start once we realise that all computations are run in 
arithmetic. You invoke “god” in theology, which means that you don’t intend to 
do metaphysics or theology with the scientific method. When we do theology or 
metaphysics with the scientific method we must stay neutral on what could be 
the fundamental reality, especially when work like mine gives a precise 
tool to assess whether materiality is fundamental or emerging; the results 
obtained so far support the idea that the material reality is not the fundamental 
reality. The axioms you are using are refuted by the mechanist hypothesis, so 
you must take into account that you are postulating a “god” incompatible with 
Mechanism; but then the MH spacetime is of no use in metaphysics, as it 
requires a black hole to work, and there is little evidence that we have a black 
hole in our head. 
Note that I do find the MH spacetime very interesting, and it suggests we might 
exploit black holes computationally in some far future (or not, as you are 
right that quantum mechanics makes this theoretically close to impossible); but 
even if done, it would not change the logical conclusion of Mechanism: physics 
is just not the fundamental science, and physics is constructively reducible to 
the machine’s self-reference theory. You have only the arithmetical reality, which 
emulates (in the sense of Church and Turing) all computations, and physics is 
given by the non-computable statistics on all relative computational consistent 
extensions. Although physics as a whole is not computable, its propositional part 
is computable and decidable, and indeed we have recovered some quantum logics at 
the place they were mandatory. This is not just evidence for 
computationalism; it is also very deep theoretical evidence that quantum 
mechanics is completely valid.

Bruno


> 
> LC
> 
>  
> https://physics.stackexchange.com/questions/305346/is-there-something-similar-to-g%C3%B6dels-incompleteness-theorems-in-physics/305368#305368
> 


Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Bruno Marchal

> On 23 Feb 2018, at 06:36, Brent Meeker  wrote:
> 
> 
> 
> On 2/22/2018 7:06 PM, 'Chris de Morsella' via Everything List wrote:
>> 
>> 
>> Sent from Yahoo Mail on Android 
>> 
>> On Wed, Feb 21, 2018 at 1:37 AM, Bruno Marchal
>>   wrote:
>> 
>>> On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> 
>>> On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell
>>> >> > wrote:
>>> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>>> 
>>> 
>>> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
 Computers such as AlphaGo have complex algorithms for taking the rules of 
 a game like chess and running through long Markov chains of game events to 
 increase their data base for playing the game. There is not really 
 anything about "knowing something" going on here. There is a lot of hype 
 over AI these days, but I suspect a lot of this is meant to beguile 
 people. I do suspect in time we will interact with AI as if it were 
 intelligent and conscious. The really big changer though I think will be 
 the neural-cyber interlink that will put brains as the primary internet 
 nodes.
>>> 
>>> Why would you suppose that when electronics have a signal speed ten million 
>>> times faster than neurons?  Presently neurons have an advantage in 
>>> connection density and power dissipation; but I see no reason they can hold 
>>> that advantage.
>>> 
>>> Brent
>>> 
>>> I think it may come down to computers that obey the Church-Turing thesis, 
>>> which is finite and bounded. Hofstadter's book Godel Escher Bach has a 
>>> chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a halting 
>>> program on a Turing machine. Biology however is not Bloop, but is rather a 
>>> web of processors that are more Floop, or free loop. The busy beaver 
>>> algorithm is such a case, which grows in complexity with each step. The 
>>> computation of many fractals is this as well, where the Mandelbrot set with 
>>> each iteration on a certain scale needs refinement to another floating 
>>> point precision and thus grows in huge complexity. These of course halt in 
>>> practice because the programmer puts in a stop by hand. These are 
>>> recursively enumerable, and their complement in a set theoretic sense are 
>>> Godel loops or Gloop. For machines to have properties at least parallel to 
>>> conscious behavior we really have to be running in at least Floop and maybe 
>>> into Gloop.
>>> 
>>> LC
>>> 
>>> Not sure if this has been touched on in this thread but it seems to me that 
>>> the emergent phenomena of both self-awareness and consciousness depend on 
>>> information hiding in some fundamental way. Both our self awareness and our 
>>> conscious minds, which from our incomplete perspective seem to be innate 
>>> and ever present (at least when we are awake) are themselves the emergent 
>>> outcomes of a vast amount of neural networked activity that is 
>>> exquisitely hidden from us. We are unaware of the genesis of our own 
>>> awareness. 
>>> 
>>> Evidence from MRI scans supports the conclusion that before we are aware 
>>> of being aware of some objectively measurable external event, or before we 
>>> experience having a thought, the almost one hundred billion neurons 
>>> crammed into our highly folded cortical pizza pie stuffed inside our 
>>> skulls have been very busy and chatty indeed.
>>> 
>>> We are aware of being aware and we experience conscious existence, but the 
>>> process by which both our conscious experience and our own awareness of 
>>> being arises within our minds is largely hidden from us. 
>>> I think it is a fair and reasonable question to ask: Is information hiding 
>>> a necessary and integral aspect of processes through which self-awareness 
>>> and consciousness arise?
>>> 
>>> In computer science the rather recent emergence of deep neural 
>>> networks that are characterized by having many layers, of which only the 
>>> input layer and output layer of neurons are directly measurable, while 
>>> conversely the many other layers that are arrayed in the stack between them 
>>> remain hidden offers some intriguing parallels that also seem to indicate a 
>>> critical role for information hiding. Google’s machine-learned deep 
>>> neural networks for image processing, for example, have 10 to 30 (or by now 
>>> perhaps even more) stacked layers of artificial neurons, most of which are 
>>> hidden.
>>> 
>>> Because of the non-linearity of the processes in play within these 
>>> artificial deep stacks of layered artificial neurons it is difficult to 
>>> really know in any definitive manner exactly what 

Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Lawrence Crowell
On Thursday, February 22, 2018 at 6:38:15 AM UTC-6, Bruno Marchal wrote:
>
>
> > On 21 Feb 2018, at 20:40, Brent Meeker  > wrote: 
> > 
> > 
> > 
> > On 2/21/2018 1:32 AM, Bruno Marchal wrote: 
> >> I guess you mean enumerable here. I don’t see what physical bounds have 
> to do with the Church-Turing thesis, though. We always suppose that the universal 
> machine has potentially unbounded time and space (in the non-physical, 
> computer-science sense) available to it. 
> > 
> > But they are bounded in the physical sense, and not just potentially. 
>
> But the Church-Turing thesis has nothing to do with physics or the physical 
> sense. 
>
> Then you don’t know if a machine, even in the physical world, is bounded, 
> unless you make special assumptions about some existing universe. 
>
> With mechanism, there is no evidence for a primary physical universe. We 
> would have found one if we had discovered a serious discrepancy 
> between Nature’s physics and the physics in the “head of the number”, 
> but we have tested this as far as possible and found none. 
>

The relationship between the physical world and mathematics of computation 
is something I explore here 
.
 
This is in connection with the theoretical concept of hypercomputation. 
Certain types of spacetimes called MH (Malament-Hogarth spacetimes) have 
the physical properties that might do an end run around the limits of 
Godel. On the other hand quantum mechanics might provide limits on that.

LC

 
https://physics.stackexchange.com/questions/305346/is-there-something-similar-to-g%C3%B6dels-incompleteness-theorems-in-physics/305368#305368



Re: Singularity -- when AI exceeds human intelligence

2018-02-23 Thread Bruno Marchal

> On 23 Feb 2018, at 04:06, 'Chris de Morsella' via Everything List 
>  wrote:
> 
> 
> 
> Sent from Yahoo Mail on Android 
> 
> On Wed, Feb 21, 2018 at 1:37 AM, Bruno Marchal
>  wrote:
> 
>> On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List 
>> > 
>> wrote:
>> 
>> 
>> 
>> On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell
>> > 
>> wrote:
>> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>> 
>> 
>> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>>> Computers such as AlphaGo have complex algorithms for taking the rules of a 
>>> game like chess and running through long Markov chains of game events to 
>>> increase their data base for playing the game. There is not really anything 
>>> about "knowing something" going on here. There is a lot of hype over AI 
>>> these days, but I suspect a lot of this is meant to beguile people. I do 
>>> suspect in time we will interact with AI as if it were intelligent and 
>>> conscious. The really big changer though I think will be the neural-cyber 
>>> interlink that will put brains as the primary internet nodes.
>> 
>> Why would you suppose that when electronics have a signal speed ten million 
>> times faster than neurons?  Presently neurons have an advantage in 
>> connection density and power dissipation; but I see no reason they can hold 
>> that advantage.
>> 
>> Brent
>> 
>> I think it may come down to computers that obey the Church-Turing thesis, 
>> which is finite and bounded. Hofstadter's book Godel Escher Bach has a 
>> chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a halting 
>> program on a Turing machine. Biology however is not Bloop, but is rather a 
>> web of processors that are more Floop, or free loop. The busy beaver 
>> algorithm is such a case, which grows in complexity with each step. The 
>> computation of many fractals is this as well, where the Mandelbrot set with 
>> each iteration on a certain scale needs refinement to another floating point 
>> precision and thus grows in huge complexity. These of course halt in 
>> practice because the programmer puts in a stop by hand. These are recursively 
>> enumerable, and their complement in a set theoretic sense are Godel loops or 
>> Gloop. For machines to have properties at least parallel to conscious 
>> behavior we really have to be running in at least Floop and maybe into Gloop.
>> 
>> LC
>> 
>> Not sure if this has been touched on in this thread but it seems to me that 
>> the emergent phenomena of both self-awareness and consciousness depend on 
>> information hiding in some fundamental way. Both our self awareness and our 
>> conscious minds, which from our incomplete perspective seem to be innate and 
>> ever present (at least when we are awake) are themselves the emergent 
>> outcomes of a vast amount of neural networked activity that is exquisitely 
>> hidden from us. We are unaware of the genesis of our own awareness. 
>> 
>> Evidence from MRI scans supports the conclusion that before we are aware of 
>> being aware of some objectively measurable external event, or before we 
>> experience having a thought, the almost one hundred billion neurons 
>> crammed into our highly folded cortical pizza pie stuffed inside our skulls 
>> have been very busy and chatty indeed.
>> 
>> We are aware of being aware and we experience conscious existence, but the 
>> process by which both our conscious experience and our own awareness of 
>> being arises within our minds is largely hidden from us. 
>> I think it is a fair and reasonable question to ask: Is information hiding a 
>> necessary and integral aspect of processes through which self-awareness and 
>> consciousness arise?
>> 
>> In computer science the rather recent emergence of deep neural networks 
>> that are characterized by having many layers, of which only the input layer 
>> and output layer of neurons are directly measurable, while conversely the 
>> many other layers that are arrayed in the stack between them remain hidden 
>> offers some intriguing parallels that also seem to indicate a critical role 
>> for information hiding. Google’s machine-learned deep neural networks 
>> for image processing, for example, have 10 to 30 (or by now perhaps even 
>> more) stacked layers of artificial neurons, most of which are hidden.
>> 
>> Because of the non-linearity of the processes in play within these 
>> artificial deep stacks of layered artificial neurons it is difficult to 
>> really know in any definitive manner exactly what is going on. The outcomes 
>> from experimenting on the statistically trained (or in the vernacular, 
>> machine learned) models, by for example tweaking training 

Re: Singularity -- when AI exceeds human intelligence

2018-02-22 Thread Brent Meeker




Re: Singularity -- when AI exceeds human intelligence

2018-02-22 Thread 'Chris de Morsella' via Everything List


On Wed, Feb 21, 2018 at 1:37 AM, Bruno Marchal wrote:

On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List wrote:

On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:

On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
Computers such as AlphaGo have complex algorithms for taking the rules of a 
game like chess and running through long Markov chains of game events to 
increase their database for playing the game. There is not really anything 
about "knowing something" going on here. There is a lot of hype over AI these 
days, but I suspect a lot of this is meant to beguile people. I do suspect in 
time we will interact with AI as if it were intelligent and conscious. The 
really big changer, though, I think will be the neural-cyber interlink that 
will put brains as the primary internet nodes.
 
 Why would you suppose that when electronics have a signal speed ten million 
times faster than neurons?  Presently neurons have an advantage in connection 
density and power dissipation; but I see no reason they can hold that advantage.
 
 Brent


I think it may come down to computers that obey the Church-Turing thesis, which 
is finite and bounded. Hofstadter's book Godel Escher Bach has a chapter on 
Bloop, Floop, Gloop, where Bloop means a bounded loop, or a halting program on 
a Turing machine. Biology, however, is not Bloop, but is rather a web of 
processors that are more Floop, or free loop. The busy beaver algorithm is such 
a case, which grows in complexity with each step. The computation of many 
fractals is like this as well: the Mandelbrot set, with each iteration on a 
certain scale, needs refinement to another floating-point precision and thus 
grows in huge complexity. These of course halt in practice because the 
programmer puts in a stop by hand. These are recursively enumerable, and their 
complement in a set-theoretic sense are Godel loops, or Gloop. For machines to 
have properties at least parallel to conscious behavior we really have to be 
running in at least Floop and maybe into Gloop.
LC
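The BlooP/FlooP distinction invoked above can be sketched as bounded versus unbounded iteration. A minimal Python illustration (the function names are mine, purely illustrative):

```python
# BlooP-style: the loop bound is fixed before the loop starts, so the
# computation is guaranteed to halt (primitive recursive flavour).
def bloop_power(base, exp):
    result = 1
    for _ in range(exp):  # bound "exp" known in advance
        result *= base
    return result

# FlooP-style: a "free" (unbounded) loop; it halts only if the search
# succeeds, which in general cannot be decided beforehand (assumes n >= 2).
def floop_first_factor(n):
    k = 2
    while True:  # no a-priori bound on iterations
        if n % k == 0:
            return k
        k += 1

print(bloop_power(2, 10))      # 1024
print(floop_first_factor(91))  # 7
```

The unbounded while-loop is what exposes FlooP-class programs to the halting problem in a way BlooP-class programs are not.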
Not sure if this has been touched on in this thread, but it seems to me that 
the emergent phenomena of both self-awareness and consciousness depend on 
information hiding in some fundamental way. Both our self-awareness and our 
conscious minds, which from our incomplete perspective seem to be innate and 
ever-present (at least when we are awake), are themselves the emergent outcomes 
of a vast amount of neural networked activity that is exquisitely hidden from 
us. We are unaware of the genesis of our own awareness.
Evidence from MRI scans supports the conclusion that before we are aware of 
being aware of some objectively measurable external event, or before we 
experience having a thought, the almost one hundred billion neurons crammed 
into the highly folded cortical pizza pie stuffed inside our skulls have been 
very busy and chatty indeed.
We are aware of being aware and we experience conscious existence, but the 
process by which both our conscious experience and our own awareness of being 
arise within our minds is largely hidden from us. I think it is a fair and 
reasonable question to ask: Is information hiding a necessary and integral 
aspect of the processes through which self-awareness and consciousness arise?
In computer science, the rather recent emergence of deep neural networks, which 
are characterized by having many layers of which only the input layer and the 
output layer are directly measurable while the many layers arrayed in the stack 
between them remain hidden, offers some intriguing parallels that also seem to 
indicate a critical role for information hiding. Google's machine-learned deep 
neural networks for image processing, for example, have 10 to 30 (or by now 
perhaps even more) stacked layers of artificial neurons, most of which are 
hidden.
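A toy sketch of that stacked, mostly-hidden structure in plain Python (an untrained network with random weights, assumed purely for illustration): only the input vector and the final output cross the function boundary, while every intermediate activation exists only transiently inside the loop.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # random weights standing in for trained parameters (illustrative only)
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layers, x):
    # Pass x through the stack with a tanh non-linearity. The hidden-layer
    # activations are rebound locally and never escape this function.
    for layer in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in layer]
    return x

# input width 3, three hidden layers of width 5, output width 2
layers = [make_layer(3, 5), make_layer(5, 5), make_layer(5, 5), make_layer(5, 2)]
output = forward(layers, [0.5, -0.2, 0.9])
print(output)  # two output activations; the hidden layers never surface
```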
Because of the non-linearity of the processes in play within these deep stacks 
of layered artificial neurons, it is difficult to know in any definitive manner 
exactly what is going on. The outcomes from experimenting on the statistically 
trained (or, in the vernacular, machine-learned) models, for example by 
tweaking training parameters to see how doing so affects the results, and by 
subsequently forensically analyzing any generated logs and other telemetry, are 
often surprisingly beautiful dreamscapes that are not reducible to a series of 
algorithmic steps applied by the many hidden layers to whatever input signals 
have been fed to the input layer of neurons.
It seems to me that the emergence of consciousness and self-awareness is 
likewise exquisitely nonlinear in nature. And that this 

Re: Singularity -- when AI exceeds human intelligence

2018-02-22 Thread Brent Meeker



On 2/22/2018 4:38 AM, Bruno Marchal wrote:

>>> The mechanist answer to this is “yes”. The more you have neurons, the less
>>> conscious you are.

>> Which is why mice are more conscious than you are and spiders are more
>> conscious than mice.

> Indeed. I know it is counter-intuitive, but we were warned by G*: it has to be
> counter-intuitive. A brain is more a filter of consciousness than a producer of
> consciousness, up to some point.
>
> This is not used in the derivation of physics; it is just something that makes
> the whole theory much smoother. It is also confirmed (actually suggested) by
> reports of experimentation with some drugs, or of some near-death experiences.
> Even the belief in the induction axioms, which leads to Löbianity, seems to be
> like the beginning of the delusion.
>
> Bruno

What does "up to some point" mean? Does it mean that we would gain 
consciousness by killing off neurons "down to some point"? Or does it 
mean a rock is maximally conscious? And then one must ask "Conscious of 
what?" I think you have been seduced by your experiments with salvia.

Brent



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Singularity -- when AI exceeds human intelligence

2018-02-22 Thread Bruno Marchal

> On 21 Feb 2018, at 20:40, Brent Meeker  wrote:
> 
> 
> 
> On 2/21/2018 1:32 AM, Bruno Marchal wrote:
>> I guess you mean enumerable here. I don’t see what physical bounds have to 
>> do with the Church-Turing thesis, though. We always suppose that the 
>> universal machine has potentially unbounded time and space (in the 
>> non-physical, computer-science sense) available.
> 
> But they are bounded in the physical sense, and not just potentially. 

But the Church-Turing thesis has nothing to do with physics or the physical 
sense. 

You then don’t know whether a machine, even in the physical world, is bounded, 
unless you make special assumptions about some existing universe.

With mechanism, there is no evidence for a primary physical universe. We would 
have found one had we discovered a serious discrepancy between Nature’s physics 
and the physics in the “head of the number”, but we have tested this as far as 
possible and found none.



> So that must have consequences when saying yes to the doctor.

Why? 


>> The mechanist answer to this is “yes”. The more you have neurons, the less 
>> conscious you are. 
> 
> Which is why mice are more conscious than you are and spiders are more 
> conscious than mice.


Indeed. I know it is counter-intuitive, but we were warned by G*: it has to be 
counter-intuitive. A brain is more a filter of consciousness than a producer of 
consciousness, up to some point.
This is not used in the derivation of physics; it is just something that makes 
the whole theory much smoother. It is also confirmed (actually suggested) by 
reports of experimentation with some drugs, or of some near-death experiences.
Even the belief in the induction axioms, which leads to Löbianity, seems to be 
like the beginning of the delusion.

Bruno

> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.



Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Lawrence Crowell

Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Brent Meeker




Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Brent Meeker



On 2/21/2018 1:32 AM, Bruno Marchal wrote:
> I guess you mean enumerable here. I don’t see what physical bounds 
> have to do with the Church-Turing thesis, though. We always suppose that 
> the universal machine has potentially unbounded time and space (in the 
> non-physical, computer-science sense) available.


But they are bounded in the physical sense, and not just potentially.  
So that must have consequences when saying yes to the doctor.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Lawrence Crowell
On Wednesday, February 21, 2018 at 3:32:06 AM UTC-6, Bruno Marchal wrote:
>
>
> On 19 Feb 2018, at 20:12, Brent Meeker  
> wrote:
>
>
>
> But the complexity is bounded physically.  All these mathematical 
> idealizations of computation assume some kind of infinity.  Since there are 
> physical bounds the Church-Turing thesis will apply and all realizable 
> computers compute the same recursively innumerable
>
>
> I guess you mean enumerable here. I don’t see what physical bounds have to 
> do with the Church-Turing thesis, though. We always suppose that the 
> universal machine has potentially unbounded time and space (in the 
> non-physical, computer-science sense) available.
>
> Bruno
>

I am thinking of the busy beaver function, which grows in complexity with each 
iteration. Because of this there is no way by which a final outcome can be 
assessed, unless one is working with a hyper-Turing machine such as the 
interior of a black hole may provide. The halting problem is recursively 
enumerable but not recursive. 

LC
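The asymmetry behind "recursively enumerable but not recursive" can be sketched with a step-bounded simulator: a halt is confirmed the moment it happens, but an exhausted budget proves nothing, and busy-beaver-like growth means no computable budget is ever enough. A hypothetical Python sketch, using the Collatz iteration (whose halting for all n is famously unproven) as the stand-in computation:

```python
def collatz_halts_within(n, max_steps):
    # Semi-decision sketch: simulate the Collatz map from n.
    # Returns the step count if the orbit reaches 1 within max_steps,
    # else None. A None verdict does NOT prove non-halting; that
    # one-sidedness is recursive enumerability without decidability.
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # budget exhausted; undecided
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_halts_within(27, 1000))  # 111: halting confirmed by simulation
print(collatz_halts_within(27, 50))    # None: no verdict either way
```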
 



Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Bruno Marchal

> On 20 Feb 2018, at 02:27, Lawrence Crowell  
> wrote:
> 
> It is getting a bit late and I was planning to respond more on this. The 
> nonlinear aspect to consciousness may well be relevant. In fact consciousness 
> may have a lot to do with chaos theory.


Of course elementary arithmetic is highly non-linear, as are most 
computer-science objects. Arithmetic implements many sorts of chaos, as 
illustrated for example by the Goldbach comet. The main mystery is where the 
(quantum) linearity comes from, but that is partially answered by the machine’s 
or number’s psychology/theology (the logic of self-reference of 
Gödel-Löb-Solovay).
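The Goldbach comet is the scatter obtained by plotting, for each even n, the number of ways n splits into a sum of two primes; the irregular spread of those counts is the kind of arithmetical chaos in question. A small Python sketch (the helper names are mine):

```python
def prime_sieve(limit):
    # standard Sieve of Eratosthenes; sieve[k] is True iff k is prime
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

def goldbach_count(n, sieve):
    # number of unordered prime pairs (p, q) with p <= q and p + q = n
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

sieve = prime_sieve(100)
for n in range(4, 101, 2):
    print(n, goldbach_count(n, sieve))
# plotting goldbach_count(n) against n yields the irregular "comet" scatter
```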


> 
> More later LC


No doubt … BM.




Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Bruno Marchal

> On 19 Feb 2018, at 21:27, 'Chris de Morsella' via Everything List 
>  wrote:
> 
> 
> 
> On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell
>  wrote:
> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
> 
> 
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>> Computers such as AlphaGo have complex algorithms for taking the rules of a 
>> game like chess and running through long Markov chains of game events to 
>> increase their data base for playing the game. There is not really anything 
>> about "knowing something" going on here. There is a lot of hype over AI 
>> these days, but I suspect a lot of this is meant to beguile people. I do 
>> suspect in time we will interact with AI as if it were intelligent and 
>> conscious. The really big changer though I think will be the neural-cyber 
>> interlink that will put brains as the primary internet nodes.
> 
> Why would you suppose that when electronics have a signal speed ten million 
> times faster than neurons?  Presently neurons have an advantage in connection 
> density and power dissipation; but I see no reason they can hold that 
> advantage.
> 
> Brent
> 
> I think it may come down to computers that obey the Church-Turing thesis, 
> which is finite and bounded. Hofstadter's book Godel Escher Bach has a 
> chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a halting 
> program on a Turing machine. Biology however is not Bloop, but is rather a 
> web of processors that are more Floop, or free loop. The busy beaver 
> algorithm is such a case, which grows in complexity with each step. The 
> computation of many fractals is this as well, where the Mandelbrot set with 
> each iteration on a certain scale needs refinement to another floating point 
> precision and thus grows in huge complexity. These of course halt in practice 
> because the programmer puts in a stop by hand. These are recursively 
> enumerable, and their complement in a set theoretic sense are Godel loops or 
> Gloop. For machines to have properties at least parallel to conscious 
> behavior we really have to be running in at least Floop and maybe into Gloop.
> 
> LC
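
The "stop put in by hand" that makes fractal computations halt in practice can
be made concrete with a short sketch (illustrative only, not from the thread):
an escape-time test for the Mandelbrot set, where the iteration would run
forever for points inside the set and only the arbitrary cap terminates it.

```python
# Escape-time membership test for the Mandelbrot set. For points inside
# the set the iteration z -> z^2 + c never escapes, so the loop would
# run forever; the hand-imposed max_iter cap is what makes it halt.
def escape_time(c, max_iter=100):
    """Return the step at which |z| exceeds 2, or None if the cap
    is hit first (the point is presumed inside the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return None  # undecided: only the arbitrary cap stopped us

print(escape_time(1 + 1j))  # escapes at step 1
print(escape_time(0j))      # never escapes within the cap: None
```

Raising `max_iter` refines the picture near the boundary but never settles the
inside points, which is exactly the unbounded refinement the text describes.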
> 
> Not sure if this has been touched on in this thread but it seems to me that 
> the emergent phenomenon of both self-awareness and consciousness depend on 
> information hiding in some fundamental way. Both our self awareness and our 
> conscious minds, which from our incomplete perspective seem to be innate and 
> ever present (at least when we are awake) are themselves the emergent 
> outcomes of a vast amount of neurally networked activity that is exquisitely 
> hidden from us. We are unaware of the genesis of our own awareness. 
> 
> Evidence from MRI scans supports the conclusion that before we are aware of 
> being aware of some objectively measurable external event, or before we 
> experience having a thought, the almost one hundred billion neurons crammed 
> into our highly folded cortical pizza pie stuffed inside our skulls have been 
> very busy and chatty indeed, as the MRI scans indicate.
> 
> We are aware of being aware and we experience conscious existence, but the 
> process by which both our conscious experience and our own awareness of being 
> arises within our minds is largely hidden from us. 
> I think it is a fair and reasonable question to ask: Is information hiding a 
> necessary and integral aspect of the processes through which self-awareness 
> and consciousness arise?
> 
> In computer science, the rather recent emergence of deep neural networks 
> characterized by many layers, of which only the input layer and output layer 
> of neurons are directly measurable while the many layers arrayed in the stack 
> between them remain hidden, offers some intriguing parallels that also seem 
> to indicate a critical role for information hiding. The Google DeepMind 
> machine-learned neural networks for image processing, for example, have 10 to 
> 30 (or by now perhaps even more) stacked layers of artificial neurons, most 
> of which are hidden.
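> 
> The layered-stack picture can be sketched in a few lines (a toy model, not 
> Google's actual networks): only the input vector and the returned output are 
> visible from outside the call; the intermediate activations are local 
> variables that never escape, which is the "hiding" at issue.

```python
import math
import random

random.seed(0)

def dense_tanh(inputs, weights):
    """One fully connected layer with a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def forward(x, stack):
    """Run the input through every layer. The intermediate vectors h
    exist only inside this call: from outside, only the input x and
    the returned output are observable."""
    h = x
    for weights in stack:
        h = dense_tanh(h, weights)
    return h

# A toy 4-5-5-2 stack with random weights (purely illustrative).
stack = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)],
         [[random.uniform(-1, 1) for _ in range(5)] for _ in range(5)],
         [[random.uniform(-1, 1) for _ in range(5)] for _ in range(2)]]

print(forward([0.1, 0.2, 0.3, 0.4], stack))  # two output activations
```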
> 
> Because of the non-linearity of the processes in play within these deep 
> stacks of layered artificial neurons, it is difficult to know in any 
> definitive manner exactly what is going on. The outcomes of experimenting on 
> the statistically trained (or, in the vernacular, machine-learned) models, 
> for example by tweaking training parameters to see how doing so affects the 
> results, and by subsequently forensically analyzing any generated logs and 
> other telemetry, are often surprisingly beautiful dreamscapes that are not 
> reducible to a series of algorithmic steps applied by the many hidden layers 
> to whatever input signals have been fed to the input layer of neurons.
> 
> It seems to me that the emergence of consciousness & self awareness as well 

Re: Singularity -- when AI exceeds human intelligence

2018-02-21 Thread Bruno Marchal

> On 19 Feb 2018, at 20:12, Brent Meeker  wrote:
> 
> 
> 
> On 2/19/2018 3:56 AM, Lawrence Crowell wrote:
>> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>> 
>> 
>> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>>> Computers such as AlphaGo have complex algorithms for taking the rules of a 
>>> game like chess and running through long Markov chains of game events to 
>>> increase their data base for playing the game. There is not really anything 
>>> about "knowing something" going on here. There is a lot of hype over AI 
>>> these days, but I suspect a lot of this is meant to beguile people. I do 
>>> suspect in time we will interact with AI as if it were intelligent and 
>>> conscious. The really big changer though I think will be the neural-cyber 
>>> interlink that will put brains as the primary internet nodes.
>> 
>> Why would you suppose that when electronics have a signal speed ten million 
>> times faster than neurons?  Presently neurons have an advantage in 
>> connection density and power dissipation; but I see no reason they can hold 
>> that advantage.
>> 
>> Brent
>> 
>> I think it may come down to computers that obey the Church-Turing thesis, 
>> which is finite and bounded. Hofstadter's book Godel Escher Bach 
>> has a chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a 
>> halting program on a Turing machine. Biology however is not Bloop, but is 
>> rather a web of processors that are more Floop, or free loop. The busy 
>> beaver algorithm is such a case, which grows in complexity with each step. 
>> The computation of many fractals is this as well, where the Mandelbrot set 
>> with each iteration on a certain scale needs refinement to another floating 
>> point precision and thus grows in huge complexity. These of course halt in 
>> practice because the programmer puts in a stop by hand. These are 
>> recursively enumerable, and their complement in a set theoretic sense are 
>> Godel loops or Gloop. For machines to have properties at least parallel to 
>> conscious behavior we really have to be running in at least Floop and maybe 
>> into Gloop.
> 
> But the complexity is bounded physically.  All these mathematical 
> idealizations of computation assume some kind of infinity.  Since there are 
> physical bounds the Church-Turing thesis will apply and all realizable 
> computers compute the same recursively innumerable

I guess you mean enumerable here. I don’t see what physical bounds have to do 
with the Church-Turing thesis, though. We always suppose that the universal 
machines have potentially unbounded time and space (in the computer-science, 
not physical, sense) available to them.

Bruno



> functions.  It's just that electronic ones can do it a lot faster, or looked 
> at another way can be a lot bigger.
> 
> Brent
> 
> 
>> 
>> LC
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everything-list+unsubscr...@googlegroups.com 
>> .
>> To post to this group, send email to everything-list@googlegroups.com 
>> .
>> Visit this group at https://groups.google.com/group/everything-list 
>> .
>> For more options, visit https://groups.google.com/d/optout 
>> .
> 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread 'Chris de Morsella' via Everything List

On 2/19/2018 12:27 PM, 'Chris de Morsella' via Everything List wrote:




On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell wrote:
On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:


On 2/18/2018 6:26 PM, Lawrence Crowell wrote:

Computers such as AlphaGo have complex algorithms for taking the rules of a 
game like chess and running through long Markov chains of game events to 
increase their data base for playing the game. There is not really anything 
about "knowing something" going on here. There is a lot of hype over AI these 
days, but I suspect a lot of this is meant to beguile people. I do suspect in 
time we will interact with AI as if it were intelligent and conscious. The 
really big changer though I think will be the neural-cyber interlink that will 
put brains as the primary internet nodes.

Why would you suppose that when electronics have a signal speed ten million 
times faster than neurons?  Presently neurons have an advantage in connection 
density and power dissipation; but I see no reason they can hold that advantage.

Brent


I think it may come down to computers that obey the Church-Turing thesis, which 
is finite and bounded. Hofstadter's book Godel Escher Bach has a chapter Bloop, 
Floop, Gloop where the Bloop means bounded loop or a halting program on a 
Turing machine. Biology however is not Bloop, but is rather a web of processors 
that are more Floop, or free loop. The busy beaver algorithm is such a case, 
which grows in complexity with each step. The computation of many fractals is 
this as well, where the Mandelbrot set with each iteration on a certain scale 
needs refinement to another floating point precision and thus grows in huge 
complexity. These of course halt in practice because the programmer puts in a 
stop by hand. These are recursively enumerable, and their complement in a set 
theoretic sense are Godel loops or Gloop. For machines to have properties at 
least parallel to conscious behavior we really have to be running in at least 
Floop and maybe into Gloop.
LC
Not sure if this has been touched on in this thread but it seems to me that the 
emergent phenomenon of both self-awareness and consciousness depend on 
information hiding in some fundamental way. Both our self awareness and our 
conscious minds, which from our incomplete perspective seem to be innate and 
ever present (at least when we are awake) are themselves the emergent outcomes 
of a vast amount of neurally networked activity that is exquisitely hidden from 
us. We are unaware of the genesis of our own awareness. 
Evidence from MRI scans supports the conclusion that before we are aware of 
being aware of some objectively measurable external event, or before we 
experience having a thought, the almost one hundred billion neurons crammed 
into our highly folded cortical pizza pie stuffed inside our skulls have been 
very busy and chatty indeed, as the MRI scans indicate.
We are aware of being aware and we experience conscious existence, but the 
process by which both our conscious experience and our own awareness of being 
arises within our minds is largely hidden from us. I think it is a fair and 
reasonable question to ask: Is information hiding a necessary and integral 
aspect of the processes through which self-awareness and consciousness arise?


I think information hiding is looking at it the wrong way around.  It would 
take more layers of neurons to record and interpret the neurons responsible for 
your thoughts...a total waste from an evolutionary viewpoint.  Taking my 
favorite example of designing an AI Mars Rover, one provides for internal 
monitoring of some systems, e.g. power, temperature, etc.  But what would you 
provide to monitor the computer(s) themselves.  In principle you could record 
every step, but what would you do with it?  If you have multiple computers on 
board (as is likely) you'd just take one that was doing funny things off 
line...but you don't have to record everything to identify a computer that's 
flaky.  All you need is some error correction and majority voting.  So there's 
just no practical reason to try to "un-hide" all that information processing at 
the cost of a lot more information processing.
Completely agree with the assertion that attempting to record, categorize and 
make available an exhaustive, fine-grained record of the full spectrum of 
activities in any system that could be considered a viable candidate for 
consciousness or self-awareness -- and I suspect we agree on this -- is a 
rather hopeless task, at least in any non-trivial large-scale system. That 
being said, as a kind of aside, raw telemetry, especially numeric metadata, is 
still valuable for forensic reconstruction of a given scenario from the 
ingested stream's trail of time-graphed signals recorded and deposited in the 
telemetry data store. Say for example when all hell breaks loose and all the 

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Lawrence Crowell
It is getting a bit late and I was planning to respond more on this. The 
nonlinear aspect of consciousness may well be relevant. In fact consciousness 
may have a lot to do with chaos theory.

More later LC

On Monday, February 19, 2018 at 2:27:37 PM UTC-6, cdemorsella wrote:
>
>
>
> On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell
>  wrote:
> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>
>
>
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>
> Computers such as AlphaGo have complex algorithms for taking the rules of 
> a game like chess and running through long Markov chains of game events to 
> increase their data base for playing the game. There is not really anything 
> about "knowing something" going on here. There is a lot of hype over AI 
> these days, but I suspect a lot of this is meant to beguile people. I do 
> suspect in time we will interact with AI as if it were intelligent and 
> conscious. The really big changer though I think will be the neural-cyber 
> interlink that will put brains as the primary internet nodes.
>
>
> Why would you suppose that when electronics have a signal speed ten 
> million times faster than neurons?  Presently neurons have an advantage in 
> connection density and power dissipation; but I see no reason they can hold 
> that advantage.
>
> Brent
>
>
> I think it may come down to computers that obey the Church-Turing thesis, 
> which is finite and bounded. Hofstadter's book *Godel Escher Bach* has a 
> chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a halting 
> program on a Turing machine. Biology however is not Bloop, but is rather a 
> web of processors that are more Floop, or free loop. The busy beaver 
> algorithm is such a case, which grows in complexity with each step. The 
> computation of many fractals is this as well, where the Mandelbrot set with 
> each iteration on a certain scale needs refinement to another floating 
> point precision and thus grows in huge complexity. These of course halt in 
> practice because the programmer puts in a stop by hand. These are 
> recursively enumerable, and their complement in a set theoretic sense are 
> Godel loops or Gloop. For machines to have properties at least parallel to 
> conscious behavior we really have to be running in at least Floop and maybe 
> into Gloop.
>
> LC
>
> Not sure if this has been touched on in this thread but it seems to me 
> that the emergent phenomenon of both self-awareness and consciousness 
> depend on information hiding in some fundamental way. Both our self 
> awareness and our conscious minds, which from our incomplete perspective 
> seem to be innate and ever present (at least when we are awake) are 
> themselves the emergent outcomes of a vast amount of neurally networked 
> activity that is exquisitely hidden from us. We are unaware of the 
> genesis of our own awareness. 
>
> Evidence from MRI scans supports the conclusion that before we are aware 
> of being aware of some objectively measurable external event, or before we 
> experience having a thought, the almost one hundred billion neurons 
> crammed into our highly folded cortical pizza pie stuffed inside our 
> skulls have been very busy and chatty indeed, as the MRI scans indicate.
>
> We are aware of being aware and we experience conscious existence, but the 
> process by which both our conscious experience and our own awareness of 
> being arises within our minds is largely hidden from us. 
> I think it is a fair and reasonable question to ask: Is information hiding 
> a necessary and integral aspect of the processes through which 
> self-awareness and consciousness arise?
>
> In computer science, the rather recent emergence of deep neural networks 
> characterized by many layers, of which only the input layer and output 
> layer of neurons are directly measurable while the many layers arrayed in 
> the stack between them remain hidden, offers some intriguing parallels 
> that also seem to indicate a critical role for information hiding. The 
> Google DeepMind machine-learned neural networks for image processing, for 
> example, have 10 to 30 (or by now perhaps even more) stacked layers of 
> artificial neurons, most of which are hidden.
>
> Because of the non-linearity of the processes in play within these deep 
> stacks of layered artificial neurons, it is difficult to know in any 
> definitive manner exactly what is going on. The outcomes of experimenting 
> on the statistically trained (or, in the vernacular, machine-learned) 
> models, for example by tweaking training parameters to see how doing so 
> affects the results, and by subsequently forensically analyzing any 
> generated logs and other telemetry, are often surprisingly beautiful 
> dreamscapes that are not reducible to a series of algorithmic steps 
> applied by the many hidden 

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread agrayson2000


On Monday, February 19, 2018 at 3:05:53 PM UTC-7, Brent wrote:
>
>
>
> On 2/19/2018 12:37 PM, agrays...@gmail.com  wrote:
>
>
> *I viewed it. Very impressive what they can do. However, I'd be MORE 
> impressed, indeed HUGELY impressed with the existence of consciousness, if 
> without an algorithm explicitly programming it, the computer would REFUSE 
> to do as commanded. AG *
>
>
> I've already got one of those in my "smart" phone.
>
> Brent
>

I am not referring to a broken phone. AG 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread John Clark
On Mon, Feb 19, 2018 at 6:56 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*> I think it may come down to computers that obey the Church-Turing
> thesis,*

It states that a human can compute a function of positive integers if and
only if a Turing Machine (aka a computer) can. Do you have any reason to
think it is untrue?


*> which is finite and bounded. Hofstadter's book Godel Escher Bach has
> a chapter Bloop, Floop, Gloop where the Bloop means bounded loop or a
> halting program on a Turing machine. Biology however is not Bloop,*

Computers can't do that and neither can people, because Turing proved that
nothing has a halting program; such a thing does not exist.
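
The claim rests on Turing's classic diagonal argument, which can be sketched
in a few runnable lines (an illustration, with hypothetical names, not code
from the thread): encode "halts" as True, "loops" as False, and let a
contrarian program do the opposite of whatever a claimed oracle predicts
about a program applied to itself.

```python
# Minimal sketch of the diagonal argument behind the halting theorem.
def contrarian(halts, prog):
    """Do the opposite of what the claimed oracle predicts about
    prog applied to itself."""
    return not halts(prog, prog)

# Whatever answer a would-be oracle gives about `contrarian` run on
# itself, the actual behavior contradicts the prediction.
for claimed in (True, False):
    oracle = lambda f, x, ans=claimed: ans   # oracle asserting `claimed`
    actual = contrarian(oracle, contrarian)
    assert actual != claimed                 # wrong either way
print("no total, correct halting oracle can exist")
```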


> The busy beaver algorithm is such a case, which grows in complexity with
> each step.

That is an understatement: the busy beaver function is more than just
complex, it is not computable. The Busy Beaver involves a physical object
that could actually be built, a Turing Machine. Starting with a blank tape
and a Turing Machine that can be in N states (that is to say, have N rules),
BB(N) is the largest *FINITE* number of operations the machine will undergo
before it halts; some machines continue forever, but ignore them: of the
machines that eventually stop, BB(N) is the maximum number of operations
performed before halting. The Busy Beaver function starts out modestly
enough:

BB(1)=1

BB(2)=6

BB(3)=21

BB(4)=107

But what is the next element in the series? Nobody knows for sure, because
at that point the function goes nuts. We know that BB(5) is at least
47,176,870; that is to say, one 5-state Turing Machine has been found that
halts after 47,176,870 operations, but another 5-state Turing Machine is
still going strong well past that point. If it eventually stops, then that
larger number of operations is BB(5); if not, then it's 47,176,870. But if
so, we'll never be able to prove it's 47,176,870, because we'll never be
able to prove that the other 5-state machine will never stop. Turing showed
that in general you can't determine whether one of his machines will
eventually stop; all you can do is observe it and wait to see if it stops,
and you might be waiting forever. So some (perhaps all) BB numbers greater
than 4 are not computable. It's a little like having a perfect watch that
will never stop: you can't make money betting somebody that it will never
stop, because there is no point where enough evidence is in to allow you to
claim you won and collect the money.

As for BB(6), it's at least 7.4*10^36,534 and probably much larger. BB(7)
is greater than or equal to 10^10^10^10^7. It's been proven that BB(7,918)
isn't just huge, the number is not computable; even a Jupiter Brain will
never know what BB(7,918) is. Even the universe itself does not know,
because it does not have sufficient resources to produce it, so I'm not
sure it makes sense to say such a huge number even exists. It's unknown
what the smallest non-computable BB number is; all we know is that it's
larger than BB(4) and less than or equal to BB(7,918).
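
The small values can be checked directly. A minimal simulator (a sketch, not
from the thread) runs the known 2-state champion machine on a dict-based tape
that is unbounded in both directions:

```python
# Simulate the known 2-state busy-beaver champion and count its steps.
# Transition table: (state, read symbol) -> (write, head move, next state);
# 'H' is the halt state. BB(2) = 6 steps, leaving four 1s on the tape.
RULES = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

def run(rules, limit=10_000):
    tape, pos, state, steps = {}, 0, 'A', 0  # dict tape: unbounded both ways
    while state != 'H' and steps < limit:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(RULES))  # (6, 4): six operations, four 1s written
```

For larger N the `limit` cutoff is exactly the problem the text describes:
there is no computable way to know how high to set it.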


> *> For machines to have properties at least parallel to conscious
> behavior*

There is no observable difference between conscious behavior and
intelligent behavior.

> *> we really have to be running in at least Floop and maybe into Gloop.*

Then humans are not conscious. It's true a computer can't calculate
BB(7918), but a human can't either.

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Brent Meeker



On 2/19/2018 12:37 PM, agrayson2...@gmail.com wrote:


*I viewed it. Very impressive what they can do. However, I'd be MORE 
impressed, indeed HUGELY impressed with the existence of 
consciousness, if without an algorithm explicitly programming it, the 
computer would REFUSE to do as commanded. AG *


I've already got one of those in my "smart" phone.

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Brent Meeker



On 2/19/2018 12:27 PM, 'Chris de Morsella' via Everything List wrote:



On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell
 wrote:
On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:



On 2/18/2018 6:26 PM, Lawrence Crowell wrote:

Computers such as AlphaGo have complex algorithms for taking
the rules of a game like chess and running through long
Markov chains of game events to increase their data base for
playing the game. There is not really anything about "knowing
something" going on here. There is a lot of hype over AI
these days, but I suspect a lot of this is meant to beguile
people. I do suspect in time we will interact with AI as if
it were intelligent and conscious. The really big changer
though I think will be the neural-cyber interlink that will
put brains as the primary internet nodes.


Why would you suppose that when electronics have a signal
speed ten million times faster than neurons? Presently neurons
have an advantage in connection density and power dissipation;
but I see no reason they can hold that advantage.

Brent


I think it may come down to computers that obey the Church-Turing
thesis, which is finite and bounded. Hofstadter's book /Godel
Escher Bach/ has a chapter Bloop, Floop, Gloop where the Bloop
means bounded loop or a halting program on a Turing machine.
Biology however is not Bloop, but is rather a web of processors
that are more Floop, or free loop. The busy beaver algorithm is
such a case, which grows in complexity with each step. The
computation of many fractals is this as well, where the Mandelbrot
set with each iteration on a certain scale needs refinement to
another floating point precision and thus grows in huge
complexity. These of course halt in practice because the
programmer puts in a stop by hand. These are recursively
enumerable, and their complement in a set theoretic sense are
Godel loops or Gloop. For machines to have properties at least
parallel to conscious behavior we really have to be running in at
least Floop and maybe into Gloop.

LC

Not sure if this has been touched on in this thread but it seems
to me that the emergent phenomenon of both self-awareness and
consciousness depend on information hiding in some fundamental
way. Both our self awareness and our conscious minds, which from
our incomplete perspective seem to be innate and ever present (at
least when we are awake) are themselves the emergent outcomes of a
vast amount of neurally networked activity that is exquisitely
hidden from us. We are unaware of the genesis of our own awareness.

Evidence from MRI scans supports the conclusion that before we
are aware of being aware of some objectively measurable external
event, or before we experience having a thought, the almost one
hundred billion neurons crammed into our highly folded cortical
pizza pie stuffed inside our skulls have been very busy and chatty
indeed, as the MRI scans indicate.

We are aware of being aware and we experience conscious existence,
but the process by which both our conscious experience and our own
awareness of being arises within our minds is largely hidden from us.
I think it is a fair and reasonable question to ask: Is
information hiding a necessary and integral aspect of the processes
through which self-awareness and consciousness arise?



I think information hiding is looking at it the wrong way around. It 
would take more layers of neurons to record and interpret the neurons 
responsible for your thoughts...a total waste from an evolutionary 
viewpoint.  Taking my favorite example of designing an AI Mars Rover, 
one provides for internal monitoring of some systems, e.g. power, 
temperature, etc.  But what would you provide to monitor the computer(s) 
themselves.  In principle you could record every step, but what would 
you do with it?  If you have multiple computers on board (as is likely) 
you'd just take one that was doing funny things off line...but you don't 
have to record everything to identify a computer that's flaky.  All you 
need is some error correction and majority voting.  So there's just no 
practical reason to try to "un-hide" all that information processing at 
the cost of a lot more information processing.
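
The error-correction-plus-voting idea fits in a few lines (a sketch with
hypothetical readings, not flight code): the flaky computer is identified by
its output alone, with no introspection into its internal steps.

```python
from collections import Counter

def majority_vote(readings):
    """Return the value a strict majority of the redundant computers
    agree on, or None when there is no quorum. A single flaky unit is
    simply outvoted."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count > len(readings) // 2 else None

# Three on-board computers, one misbehaving:
print(majority_vote([42, 42, 17]))  # 42: the flaky unit is outvoted
print(majority_vote([42, 17, 99]))  # None: no majority, flag a fault
```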




In computer science, the rather recent emergence of deep neural
networks characterized by many layers, of which only the input
layer and output layer of neurons are directly measurable while
the many layers arrayed in the stack between them remain hidden,
offers some intriguing parallels that also seem to indicate a
critical role for information hiding. The Google DeepMind machine 

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread John Clark
On Sun, Feb 18, 2018 at 9:26 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*> Computers such as AlphaGo have complex algorithms for taking the rules
> of a game like chess and running through long Markov chains of game events
> to increase their database for playing the game.*

That won't work. No computer could examine every possible move in the game
GO, because there are 2.09*10^170 of them and there are only 10^80 atoms in
the observable universe; and yet in just 24 hours it taught itself to be
not just a little better but vastly better than any human being at the
game, and it easily beat the computer that beat the world's best human GO
player.

And it wasn't a specialized program; it did the same thing with Chess and
several other games. The most amazing thing of all is that humans didn't
teach it to do any of this, it taught itself; all it started out knowing is
which moves were legal and which were not. That's it.

And besides, explaining why something is smart does not make it one bit
less smart.

> *> There is not really anything about "knowing something" going on here.*

Call me crazy, but I think words should have meaning. If you're right and
the computer does not "know something", then whatever "knowing something"
means (assuming it means anything at all) it has no virtue, because humans,
who "know something", behave stupider than a computer that "knows nothing".


> There is a lot of hype over AI these days, but I suspect a lot of this is
> meant to beguile people.

You seem to believe that humans and the meat they are made of have some
special mystical something that computers and the microchips they are made
of can never have. I disagree; I think the idea of a soul is superstitious
nonsense.

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 12:56, Lawrence Crowell  
> wrote:
> 
> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
> 
> 
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>> Computers such as AlphaGo have complex algorithms for taking the rules of a 
>> game like chess and running through long Markov chains of game events to 
>> increase their data base for playing the game. There is not really anything 
>> about "knowing something" going on here. There is a lot of hype over AI 
>> these days, but I suspect a lot of this is meant to beguile people. I do 
>> suspect in time we will interact with AI as if it were intelligent and 
>> conscious. The really big changer though I think will be the neural-cyber 
>> interlink that will put brains as the primary internet nodes.
> 
> Why would you suppose that when electronics have a signal speed ten million 
> times faster than neurons?  Presently neurons have an advantage in connection 
> density and power dissipation; but I see no reason they can hold that 
> advantage.
> 
> Brent
> 
> I think it may come down to computers that obey the Church-Turing thesis, 
> which is finite and bounded.

The machines are finite, but they are supposed to be in an unbounded space and 
time environment.




> Hofstadter's book Godel Escher Bach has a chapter Bloop, Floop, Gloop where 
> the Bloop means bounded loop or a halting program on a Turing machine.

Bounded loops prevent the machine from being universal. A halting oracle makes 
the machine more powerful than a universal machine, but it still obeys the same 
machine theology. The universal machine should be in the largest class (Gloop, 
I presume). 



> Biology however is not Bloop, but is rather a web of processors that are more 
> Floop, or free loop.

It is Gloop, or we would be unable to talk about the universal machines.



> The busy beaver algorithm is such a case, which grows in complexity with each 
> step. The computation of many fractals is this as well, where the Mandelbrot 
> set with each iteration on a certain scale needs refinement to another 
> floating point precision and thus grows in huge complexity. These of course 
> halt in practice because the programmer puts in a stop by hand.

Assuming the programmer is not lost in a loop. No universal entity is immune 
to this.



> These are recursively enumerable, and their complement in a set theoretic 
> sense are Godel loops or Gloop.

? Universal = creative set in the sense of Post: it means recursively 
enumerable with a complement which is not (but which is transfinitely 
enumerable in some sense). The complement is not a machine at all.




> For machines to have properties at least parallel to conscious behavior we 
> really have to be running in at least Floop and maybe into Gloop.

Universality is enough, and Löbianity is enough to be self-conscious like us.

Bruno



> 
> LC
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 04:41, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:
> 
> 
> On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:
>> 
>> 
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>> > >  
>>> > > 
>>> > >  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>> 
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>> 
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological change. 
>>> 
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>> 
>>> Cheers 
>>> 
>>> One thing a computer can not do is ask a question. I can ask a question and 
>>> program a computer to help solve the problem. In fact I am doing a program 
>>> to do just this. I am working a computer program to model aspects of 
>>> gravitational memory. What the computer will not do, at least computers we 
>>> currently employ will not do is to ask the question and then work to solve 
>>> it. A computer can find a numerical solution or render something 
>>> numerically, but it does not spontaneously act to ask the question or to 
>>> propose something creative to then solve or render the solution.
>> 
>> You must never have applied for a loan online.
>> 
>> It can only do what it has been programmed to do. It can't act independently 
>> of its program, such as wondering if some theory makes sense, or coming up with 
>> tests of a theory. Or say, it can't invent chess, it can only play it better 
>> than humans. It can't "think" out of the box. AG
> 
> Yes, keep repeating that over and over.  Repetition makes a convincing 
> argument...for some people.
> 
> Brent
> 
> What's your countervailing evidence? You want to think it can think, and 
> that's YOUR repetitious argument. AG 

What is your evidence for something not Turing-emulable in the human brain? If 
strong AI is false (machines cannot think) then computationalism is false, but 
then something non-Turing-emulable exists and plays a role in human 
consciousness: what is it? The pineal gland? The microtubules?

Bruno




> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 04:32, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:
> 
> 
> On 2/18/2018 9:58 AM, agrays...@gmail.com  wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com <> 
>> wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>> > >  
>> > > 
>> > >  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>> 
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>> 
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological change. 
>> 
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>> 
>> Cheers 
>> 
>> One thing a computer can not do is ask a question. I can ask a question and 
>> program a computer to help solve the problem. In fact I am doing a program 
>> to do just this. I am working a computer program to model aspects of 
>> gravitational memory. What the computer will not do, at least computers we 
>> currently employ will not do is to ask the question and then work to solve 
>> it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>> 
>> LC 
>> 
>> You've hit the proverbial nail on the head. If a computer can't ask a 
>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>> completely a misnomer to refer to it as "intelligent".  AG
>>  
>> It has no imagination. It doesn't wonder about anything. It's not conscious 
>> and therefore should not be considered as having consciousness or 
>> intelligence. AG 
> 
> Are you aware that AlphaGo Zero won one of its games by making a move that 
> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
> Zero's victory.  So one has to ask, how do you know so much about its inner 
> thoughts so that you can assert it can't ask a question, can't propose a new 
> theory,  doesn't wonder, and is not conscious?
> 
> Brent
> 
> If you give it a task, just about any task within its universe of discourse, 
> it will perform it hugely better than humans. But where is the evidence it 
> can initiate any task without being instructed? AG 

In the mathematics of computer science, especially the theory of 
self-reference. All of the G* minus G theory, given by the machines, can be 
considered as the machine's natural questions, which impose themselves on the 
machine looking inward. 

But this requires doing a bit of computer science, if that was not obvious when 
we assume computationalism.
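For readers unfamiliar with the notation: a sketch, in standard provability-logic terms, of the "G* minus G" idea being appealed to here, assuming the usual Solovay reading (the example sentence is the classic one, not a claim about what Bruno had in mind):

```latex
% G (= GL) axiomatizes what a consistent machine can prove about its own
% provability predicate \Box; G* axiomatizes what is true of that predicate
% (Solovay's arithmetical completeness theorems, 1976).
% A classic sentence in G* \setminus G:
\neg \Box \bot
% "I am consistent": true of the consistent machine, yet by Goedel's second
% incompleteness theorem the machine cannot prove it of itself -- the sort of
% "natural question" the machine meets when looking inward.
```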

Bruno



> 
> 

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 02:28, John Clark  wrote:
> 
> On Sun, Feb 18, 2018 at 7:51 PM, Lawrence Crowell 
> > 
> wrote:
> 
> > That is a canned question. It is only a question because we recognize it as 
> such, not because the computer somehow knows that.
> 
> How would the computer behave differently if it did "somehow know that"?


By going on strike until it gets more interesting users who provide better 
answers to its questions. By fighting to have social security, etc.

Bruno


> 
> John K Clark
> 
>  
> 
>  
> 
> 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 00:26, John Clark  wrote:
> 
> On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell 
> > 
> wrote:
> 
> > One thing a computer can not do is ask a question.
> 
> You've never had a computer ask you what your password is?


That is usually a question asked by a human or a society, and transmitted by a 
computer. Usually the computer does not itself support a person asking a 
question, unless you listen to its personal (self-referential, in both the 1p 
and 3p sense) questions, like we do with G and G* (and the variants).

Bruno



> 
> John K Clark
>  
> 
> 
> 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 18 Feb 2018, at 18:54, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
> > 
> > 
> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
> > > But what is the criterion when AI exceeds human intelligence? AG 
> > > 
> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
> > >  
> > > 
> > >  
> > 
> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
> > 
> > Brent 
> > 
> 
> According to the title (I haven't RTFA), it's the 
> singularity. Starting from a point where a machine designs, 
> and manufactures improved copies of itself, technology will supposedly 
> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
> hyperbolic, it reaches infinity within a finite period of time, 
> expected to be a matter of months perhaps. 
> 
> Given that we really don't understand creative processes (not even 
> good old fashioned biological evolution is really well understood), 
> I'm sceptical about the 30 years prognostication. It is mostly based on 
> extrapolating Moore's law, which is the easy part of technological change. 
> 
> This won't be a problem for my children - my grandchildren perhaps, if 
> I ever end up having any. 
> 
> Cheers 
> 
> One thing a computer can not do is ask a question. I can ask a question and 
> program a computer to help solve the problem. In fact I am doing a program to 
> do just this. I am working a computer program to model aspects of 
> gravitational memory. What the computer will not do, at least computers we 
> currently employ will not do is to ask the question and then work to solve 
> it. A computer can find a numerical solution or render something numerically, 
> but it does not spontaneously act to ask the question or to propose something 
> creative to then solve or render the solution.
> 
> LC 
> 
> You've hit the proverbial nail on the head. If a computer can't ask a 
> question, it can't, by itself, add to our knowledge. It can't propose a new 
> theory. It can only be a tool for humans to test our theories. Thus, it is 
> completely a misnomer to refer to it as "intelligent".  AG


But when we listen to the (Löbian) machines, which already exist (to be sure), 
we already get many questions, in fact many more questions than answers, which 
means, indeed, that they are already intelligent.

The universal machine is born maximally incompetent and intelligent. By getting 
more competent, it becomes less intelligent. The singularity is in the past, 
and the new singularity is when the machine will be as stupid as the human 
beings, if we have not destroyed the planet before.

Bruno



> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>
>
>
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>
> Computers such as AlphaGo have complex algorithms for taking the rules of 
> a game like chess and running through long Markov chains of game events to 
> increase their data base for playing the game. There is not really anything 
> about "knowing something" going on here. There is a lot of hype over AI 
> these days, but I suspect a lot of this is meant to beguile people. I do 
> suspect in time we will interact with AI as if it were intelligent and 
> conscious. The really big changer though I think will be the neural-cyber 
> interlink that will put brains as the primary internet nodes.
>
>
> Why would you suppose that when electronics have a signal speed ten 
> million times faster than neurons?  Presently neurons have an advantage in 
> connection density and power dissipation; but I see no reason they can hold 
> that advantage.
>
> Brent
>

I think it may come down to computers that obey the Church-Turing thesis, 
which is finite and bounded. Hofstadter's book *Godel Escher Bach* has a 
chapter on Bloop, Floop, Gloop, where Bloop means bounded loop, i.e. a halting 
program on a Turing machine. Biology however is not Bloop, but is rather a 
web of processors that are more Floop, or free loop. The busy beaver 
algorithm is such a case, which grows in complexity with each step. The 
computation of many fractals is this as well, where the Mandelbrot set with 
each iteration on a certain scale needs refinement to another floating 
point precision and thus grows in huge complexity. These of course halt in 
practice because the programmer puts in a stop by hand. These are 
recursively enumerable, and their complement in a set theoretic sense are 
Godel loops or Gloop. For machines to have properties at least parallel to 
conscious behavior we really have to be running in at least Floop and maybe 
into Gloop.
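The Bloop/Floop distinction drawn above can be sketched in a few lines (Python standing in for Hofstadter's toy languages, so the function names are mine): a bounded loop fixes its iteration count before entering and so always halts, while a free loop carries no such guarantee. The busy beaver function mentioned above grows faster than anything a bounded-loop language can compute.

```python
def factorial_bloop(n):
    """Bloop-style: the loop bound is fixed before the loop starts,
    so termination is guaranteed (a primitive recursive function)."""
    result = 1
    for i in range(1, n + 1):  # bounded loop: at most n iterations
        result *= i
    return result

def collatz_floop(n):
    """Floop-style: a free (unbounded) while-loop. It has halted for every n
    ever tested, but no proof of termination is known (the Collatz
    conjecture), so no bound can be written down in advance."""
    steps = 0
    while n != 1:  # free loop: no a priori bound on iterations
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(factorial_bloop(5), collatz_floop(27))
```

Every Bloop program computes a total function; the Floop program above happens to return, but nothing in its text promises that it will.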

LC



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 9:02:34 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 7:32 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 9:58 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
>> wrote: 
>>>
>>>
>>>
>>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell 
>>> wrote: 

 On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish 
 wrote: 
>
> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
> > 
> > 
> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
> > > But what is the criterion when AI exceeds human intelligence? AG 
> > > 
> > > 
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>  
> > 
> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
> > 
> > Brent 
> > 
>
> According to the title (I haven't RTFA), it's the 
> singularity. Starting from a point where a machine designs, 
> and manufactures improved copies of itself, technology will supposedly 
> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
> hyperbolic, it reaches infinity within a finite period of time, 
> expected to be a matter of months perhaps. 
>
> Given that we really don't understand creative processes (not even 
> good old fashioned biological evolution is really well understood), 
> I'm sceptical about the 30 years prognostication. It is mostly based 
> on 
> extrapolating Moore's law, which is the easy part of technological 
> change. 
>
> This won't be a problem for my children - my grandchildren perhaps, if 
> I ever end up having any. 
>
> Cheers 
>

 One thing a computer can not do is ask a question. I can ask a question 
 and program a computer to help solve the problem. In fact I am doing a 
 program to do just this. I am working a computer program to model aspects 
 of gravitational memory. What the computer will not do, at least computers 
 we currently employ will not do is to ask the question and then work to 
 solve it. A computer can find a numerical solution or render something 
 numerically, but it does not spontaneously act to ask the question or to 
 propose something creative to then solve or render the solution.

 LC 

>>>
>>> *You've hit the proverbial nail on the head. If a computer can't ask a 
>>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>>> completely a misnomer to refer to it as "intelligent".  AG*
>>>
>>  
>> *It has no imagination. It doesn't wonder about anything. It's not 
>> conscious and therefore should not be considered as having consciousness or 
>> intelligence. AG *
>>
>>
>> Are you aware that AlphaGo Zero won one of its games by making a move that 
>> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
>> Zero's victory.  So one has to ask, how do you know so much about its inner 
>> thoughts so that you can assert it can't ask a question, can't propose a 
>> new theory,  doesn't wonder, and is not conscious?
>>
>> Brent
>>
>
>
> *If you give it a task, just about any task within its universe of 
> discourse, it will perform it hugely better than humans. But where is the 
> evidence it can initiate any task without being instructed? AG *
>
>
> How were you instructed to get hungry, be curious, lust after women?
>
> Brent
>

*In some of those there are feedback loops which are reproducible in 
computers. But to affirm computer *consciousness* is a huge leap. In the 
video you posted, the design computer does what computers do best: processing 
a huge number of repetitious tasks, more than any human can do. I don't see 
this as evidence of consciousness. AG*



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:48 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:37:44 PM UTC-7, Brent wrote:



On 2/18/2018 12:19 PM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:



On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:



On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7,
Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM
UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds
human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away




Intelligence is multi-dimensional. Computers
already do arithmetic and algebra and calculus
better than me. They play chess and go better
(although so far I beat the Chinese checkers
online :-) ).  They translate more languages,
and faster than I can.  They can take
dictation better. They can write music better
than me (since I'm not even competent).

So we need to sharpen the question.  Exactly
*what* is 30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will
progressively MIMIC human behavior and vastly
exceed it in various functions. But what is
"intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in
the '80s the professor explained that artificial
intelligence is whatever computers can't do yet.

Brent


Do you think there is anything about "consciousness"
that distinguishes it from what a computer can
eventually mimic? AG


I think a robot, i.e. a computer that can act in the
world, can be conscious and to have human level general
intelligence must be conscious, although perhaps in a
somewhat different way than humans.

Brent


Not being made of flesh and blood, a robot can't feel pain.


Why would you suppose that?


Thus, behavior determined by pure logic; merciless. That's
the danger. AG


Logic doesn't have any values; so pure logic is not motivated
to do anything.


*Without values, it can't be compassionate. *


Neither can it be passionate, or even interested, or even
motivated to do anything.  Yet our Mars Rovers already do things. 
You seem to be the poster boy for "Failure of Imagination".

Brent

*My former colleague at JPL sends commands to the Mars Rovers. They do 
what they're told to do; nothing more, or less. AG*


If the Rover is told to go to certain coordinates...but not what path to 
take to avoid obstacles, then it must use intelligence.  I know that JPL 
does not steer the Rover like an automobile.  The time delay is too 
great.


Brent







Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:41 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:



On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:



On 2/18/2018 6:11 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker
wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away



>
> So we need to sharpen the question. Exactly *what* is
30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology
will supposedly
veer from it's exponential path (Moore's law) etc to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite period
of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes
(not even
good old fashioned biological evolution is really well
understood),
I'm sceptical about the 30 years prognostication. It is
mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my
grandchildren perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can ask
a question and program a computer to help solve the problem.
In fact I am writing a program to do just this: I am working
on a computer program to model aspects of gravitational
memory. What the computer will not do, at least computers we
currently employ will not do, is ask the question and then
work to solve it. A computer can find a numerical solution
or render something numerically, but it does not
spontaneously act to ask the question or to propose
something creative to then solve or render the solution.


You must never have applied for a loan online.


It can only do what it has been programmed to do. It can't act
independently of its program, such as wondering if some theory
makes sense, or coming up with tests of a theory. Or say, it
can't invent chess, it can only play it better than humans. It
can't "think" out of the box. AG


Yes, keep repeating that over and over.  Repetition makes a
convincing argument...for some people.

Brent


*What's your countervailing evidence? You want to think it can think, 
and that's YOUR repetitious argument. AG*


https://www.ted.com/talks/maurice_conti_the_incredible_inventions_of_intuitive_ai#t-184772

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:32 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:



On 2/18/2018 9:58 AM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 10:54:58 AM UTC-7,
agrays...@gmail.com wrote:



On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence
Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent
Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away



>
> So we need to sharpen the question. Exactly *what*
is 30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine
designs,
and manufactures improved copies of itself,
technology will supposedly
veer from its exponential path (Moore's law) etc. to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite
period of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative
processes (not even
good old fashioned biological evolution is really
well understood),
I'm sceptical about the 30 years prognostication. It
is mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my
grandchildren perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can
ask a question and program a computer to help solve the
problem. In fact I am writing a program to do just this: I
am working on a computer program to model aspects of
gravitational memory. What the computer will not do, at
least computers we currently employ will not do, is ask
the question and then work to solve it. A computer can
find a numerical solution or render something
numerically, but it does not spontaneously act to ask the
question or to propose something creative to then solve
or render the solution.

LC


*You've hit the proverbial nail on the head. If a computer
can't ask a question, it can't, by itself, add to our
knowledge. It can't propose a new theory. It can only be a
tool for humans to test our theories. Thus, it is a complete
misnomer to refer to it as "intelligent". AG*

*It has no imagination. It doesn't wonder about anything. It's not
conscious and therefore should not be considered as having
consciousness or intelligence. AG *


Are you aware that AlphaGo Zero won one of its games by making a
move that centuries of Go players had considered wrong, and yet it
was key to AlphaGo Zero's victory? So one has to ask: how do you
know so much about its inner thoughts that you can assert it
can't ask a question, can't propose a new theory, doesn't wonder,
and is not conscious?

Brent


*If you give it a task, just about any task within its universe of 
discourse, it will perform it hugely better than humans. But where is 
the evidence it can initiate any task without being instructed? AG*


How were you instructed to get hungry, be curious, lust after women?

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
Computers such as AlphaGo have complex algorithms for taking the rules 
of a game like chess and running through long Markov chains of game 
events to increase their database for playing the game. There is not 
really anything about "knowing something" going on here. There is a 
lot of hype over AI these days, but I suspect a lot of this is meant 
to beguile people. I do suspect in time we will interact with AI as if 
it were intelligent and conscious. The really big game-changer, though, 
I think, will be the neural-cyber interlink that will make brains the 
primary internet nodes.
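Lawrence's picture of a program that builds up a game database by running long chains of game events can be sketched in miniature. The toy below is my illustration only, not AlphaGo's actual method (which couples deep networks with Monte Carlo tree search): it estimates move values in a tiny game of Nim purely from random self-play statistics.

```python
import random

# Toy self-play value estimation (illustrative sketch, not AlphaGo's algorithm).
# Game: a pile of stones; players alternate taking 1 or 2; whoever takes the
# last stone wins. We tabulate, for each (pile, move) pair, how often the
# player making that move went on to win a randomly played game.
def selfplay_values(pile_size=5, games=20000, seed=0):
    rng = random.Random(seed)
    stats = {}                       # (pile, move) -> [wins for mover, plays]
    for _ in range(games):
        pile, player, history = pile_size, 0, []
        while pile > 0:
            move = rng.choice([m for m in (1, 2) if m <= pile])
            history.append((player, pile, move))
            pile -= move
            winner = player          # the player who takes the last stone wins
            player = 1 - player
        for mover, state, move in history:
            rec = stats.setdefault((state, move), [0, 0])
            rec[0] += (mover == winner)
            rec[1] += 1
    return {key: w / n for key, (w, n) in stats.items()}

values = selfplay_values()
# Facing a pile of 2, taking both stones always wins (value 1.0); taking one
# always loses (value 0.0). The table "learns" this from raw statistics alone.
```

Whether accumulating such a table counts as "knowing something" about the game is exactly the point in dispute in the thread.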


Why would you suppose that, when electronics have a signal speed ten 
million times faster than neurons? Presently neurons have an advantage 
in connection density and power dissipation, but I see no reason they 
can hold that advantage.
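Brent's "ten million times" figure checks out on rough, assumed numbers (typical textbook values, not figures from the thread): myelinated axons conduct at tens of m/s, while electrical signals propagate at an appreciable fraction of the speed of light.

```python
# Back-of-envelope check (assumed typical values, not data from the thread).
neuron_speed = 20.0        # m/s: mid-range conduction velocity of a myelinated axon
electronic_speed = 2.0e8   # m/s: roughly 2/3 of c for a signal in a transmission line
ratio = electronic_speed / neuron_speed
print(f"electronic/neural signal-speed ratio ~ {ratio:.0e}")  # ~ 1e+07
```
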


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 8:37:44 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 12:19 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
>>>
>>>
>>>
>>> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 



 On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



 On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>
> But what is the criterion when AI exceeds human intelligence? AG
>
>
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>
>
> Intelligence is multi-dimensional.  Computers already do arithmetic 
> and algebra and calculus better than me.  They play chess and go better 
> (although so far I beat the Chinese checkers online :-) ).  They 
> translate 
> more languages, and faster than I can.  They can take dictation better.  
> They can write music better than me (since I'm not even competent).
>
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>
> Brent
>

 Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC 
 human behavior and vastly exceed it in various functions. But what is 
 "intelligence"? AFAICT, undefined. AG


 When I took a series of courses in AI at UCLA in the '80s the professor 
 explained that artificial intelligence is whatever computers can't do yet.

 Brent

>>>
>>> Do you think there is anything about "consciousness" that distinguishes 
>>> it from what a computer can eventually mimic? AG
>>>
>>>
>>> I think a robot, i.e. a computer that can act in the world, can be 
>>> conscious and to have human level general intelligence must be conscious, 
>>> although perhaps in a somewhat different way than humans.
>>>
>>> Brent
>>>
>>
>> Not made of flesh and blood, a robot can't feel pain. 
>>
>>
>> Why would you suppose that?
>>
>> Thus, behavior determined by pure logic; merciless. That's the danger. AG 
>>
>>
>> Logic doesn't have any values; so pure logic is not motivated to do 
>> anything.
>>
>
> *Without values, it can't be compassionate. *
>
>
> Neither can it be passionate, or even interested, or even motivated to do 
> anything.  Yet our Mars Rovers already do things.  You seem to be the 
> poster boy for "Failure of Imagination".
>
> Brent
>

*My former colleague at JPL sends commands to the Mars Rovers. They do what 
they're told to do; nothing more, or less. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer cannot do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am writing a 
>> program to do just this: I am working on a computer program to model aspects 
>> of gravitational memory. What the computer will not do, at least computers 
>> we currently employ will not do, is ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>>
>>
>> You must never have applied for a loan online.
>>
>
> It can only do what it has been programmed to do. It can't act independently 
> of its program, such as wondering if some theory makes sense, or coming up 
> with tests of a theory. Or say, it can't invent chess, it can only play it 
> better than humans. It can't "think" out of the box. AG
>
>
> Yes, keep repeating that over and over.  Repetition makes a convincing 
> argument...for some people.
>
> Brent
>

*What's your countervailing evidence? You want to think it can think, and 
that's YOUR repetitious argument. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 12:46 PM, Lawrence Crowell wrote:



One thing a computer cannot do is ask a question. I can ask a
question and program a computer to help solve the problem. In
fact I am writing a program to do just this: I am working on a
computer program to model aspects of gravitational memory. What
the computer will not do, at least computers we currently employ
will not do, is ask the question and then work to solve it. A
computer can find a numerical solution or render something
numerically, but it does not spontaneously act to ask the
question or to propose something creative to then solve or render
the solution.


You must never have applied for a loan online.

Brent


I am not sure how that is relevant. No I have not applied for a loan 
online. In fact about 10 years ago or so I made a choice not to do 
financial transactions online. Of course in some sense this means I am 
becoming a bit of a slowpoke in that game, but I have worked to reduce 
my footprint on the digital landscape and to keep my financial 
decisions offline. This reduces my exposure to cyber-snooping and 
to having personal information flying around out there.


1. It's an algorithm.
2. It asks lots of questions.

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 9:58 AM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
wrote:




On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell
wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away



>
> So we need to sharpen the question.  Exactly *what* is
30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology
will supposedly
veer from its exponential path (Moore's law) etc. to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite period of
time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes
(not even
good old fashioned biological evolution is really well
understood),
I'm sceptical about the 30 years prognostication. It is
mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my grandchildren
perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can ask a
question and program a computer to help solve the problem. In
fact I am writing a program to do just this: I am working on a
computer program to model aspects of gravitational memory.
What the computer will not do, at least computers we currently
employ will not do, is ask the question and then work to
solve it. A computer can find a numerical solution or render
something numerically, but it does not spontaneously act to
ask the question or to propose something creative to then
solve or render the solution.

LC


*You've hit the proverbial nail on the head. If a computer can't
ask a question, it can't, by itself, add to our knowledge. It
can't propose a new theory. It can only be a tool for humans to
test our theories. Thus, it is a complete misnomer to refer to
it as "intelligent".  AG*

*It has no imagination. It doesn't wonder about anything. It's not 
conscious and therefore should not be considered as having 
consciousness or intelligence. AG *


Are you aware that AlphaGo Zero won one of its games by making a move that 
centuries of Go players had considered wrong, and yet it was key to 
AlphaGo Zero's victory? So one has to ask: how do you know so much 
about its inner thoughts that you can assert it can't ask a question, 
can't propose a new theory, doesn't wonder, and is not conscious?


Brent





Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
Computers such as AlphaGo have complex algorithms for taking the rules of a 
game like chess and running through long Markov chains of game events to 
increase their database for playing the game. There is not really anything 
about "knowing something" going on here. There is a lot of hype over AI 
these days, but I suspect a lot of this is meant to beguile people. I do 
suspect in time we will interact with AI as if it were intelligent and 
conscious. The really big game-changer, though, I think, will be the 
neural-cyber interlink that will make brains the primary internet nodes.

LC

On Sunday, February 18, 2018 at 7:21:19 PM UTC-6, John Clark wrote:
>
> On Sun, Feb 18, 2018 at 3:15 PM,  
> wrote:
>
> > It can only do what it has been programmed to do. It can't act
> > independently of its program
>
> Suppose you know absolutely nothing about Chess: you're not given a teacher,
> you are not even given a book on Chess; all you're given is a short pamphlet
> explaining the basic rules of the game. Just 24 hours later you have taught
> yourself the game so well that not only can you beat any other human being
> on the planet at Chess, but you can also beat any other Chess program. And
> you're not specialized; you're not just good at one thing, because during
> that same 24 hours you also taught yourself to be the world's best Shogi
> player (a game popular in Japan) and, most impressive of all, you beat the
> very specialized program that beat the world's best player of the immensely
> complex game of Go. That is exactly what the computer program AlphaGo did
> just last December.
>
> https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf
>
> https://deepmind.com/research/alphago/
>
>> it can only play it better than humans. It can't "think" out of the box.
>
> Whistling past the graveyard.
>
> John K Clark
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 7:51 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> That is a canned question. It is only a question because we recognize it as
> such, not because the computer somehow knows that.

How would the computer behave differently if it did "somehow know that"?

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 3:15 PM,  wrote:

> It can only do what it has been programmed to do. It can't act independently
> of its program
>

Suppose you know absolutely nothing about Chess: you're not given a teacher,
you are not even given a book on Chess; all you're given is a short pamphlet
explaining the basic rules of the game. Just 24 hours later you have taught
yourself the game so well that not only can you beat any other human being on
the planet at Chess, but you can also beat any other Chess program. And you're
not specialized; you're not just good at one thing, because during that same
24 hours you also taught yourself to be the world's best Shogi player (a game
popular in Japan) and, most impressive of all, you beat the very specialized
program that beat the world's best player of the immensely complex game of Go.
That is exactly what the computer program AlphaGo did just last December.

https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf

https://deepmind.com/research/alphago/

> it can only play it better than humans. It can't "think" out of the box.

Whistling past the graveyard.

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 5:26:04 PM UTC-6, John Clark wrote:
>
> On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
> > One thing a computer cannot do is ask a question.
>
> You've never had a computer ask you what your password is?
>
> John K Clark
>

That is a canned question. It is only a question because we recognize it as 
such, not because the computer somehow knows that.

LC 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 1:46:55 PM UTC-7, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 1:09:37 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer cannot do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am writing a 
>> program to do just this: I am working on a computer program to model aspects 
>> of gravitational memory. What the computer will not do, at least computers 
>> we currently employ will not do, is ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>>
>> You must never have applied for a loan online.
>>
>> Brent
>>
>
> I am not sure how that is relevant. 
>

*Brent was referring to the many questions a computer asks when someone 
applies for a loan online. Of course, the issue here is whether a computer 
can ask a question that is not pre-programmed. It cannot, IMO. People will 
argue that humans can only ask questions which are, in effect, 
pre-programmed; one can rebut that by pointing to any new 
theory in physics. Can a computer ask a question about something it has 
newly been informed about? If informed, can it ask any specific question if 
not pre-programmed to do so? AG*

 

> No I have not applied for a loan online. In fact about 10 years ago or so 
> I made a choice not to do financial transactions online. Of course in some 
> sense this means I am becoming a bit of a slowpoke in that game, but I have 
> worked to reduce my footprint on the digital landscape and to keep my 
> financial decisions offline. This reduces my exposure to cyber-snooping 
> and to having personal information flying around out there.
>
> LC
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> One thing a computer cannot do is ask a question.

You've never had a computer ask you what your password is?

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 1:09:37 PM UTC-6, Brent wrote:
>
>
>
> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer cannot do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am writing a 
> program to do just this: I am working on a computer program to model aspects 
> of gravitational memory. What the computer will not do, at least computers 
> we currently employ will not do, is ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or to 
> propose something creative to then solve or render the solution.
>
>
> You must never have applied for a loan online.
>
> Brent
>

I am not sure how that is relevant. No I have not applied for a loan 
online. In fact about 10 years ago or so I made a choice not to do 
financial transactions online. Of course in some sense this means I am 
becoming a bit of a slowpoke in that game, but I have worked to reduce my 
footprint on the digital landscape and to keep my financial decisions 
offline. This reduces my exposure to cyber-snooping and to having personal 
information flying around out there.

LC



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 5:05 AM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>>
>>>
>>>
>>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 



 On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

 But what is the criterion when AI exceeds human intelligence? AG


 https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away


 Intelligence is multi-dimensional.  Computers already do arithmetic and 
 algebra and calculus better than me.  They play chess and go better 
 (although so far I beat the Chinese checkers online :-) ).  They translate 
 more languages, and faster than I can.  They can take dictation better.  
 They can write music better than me (since I'm not even competent).

 So we need to sharpen the question.  Exactly *what* is 30yrs away?

 Brent

>>>
>>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
>>> behavior and vastly exceed it in various functions. But what is 
>>> "intelligence"? AFAICT, undefined. AG
>>>
>>>
>>> When I took a series of courses in AI at UCLA in the '80s the professor 
>>> explained that artificial intelligence is whatever computers can't do yet.
>>>
>>> Brent
>>>
>>
>> Do you think there is anything about "consciousness" that distinguishes 
>> it from what a computer can eventually mimic? AG
>>
>>
>> I think a robot, i.e. a computer that can act in the world, can be 
>> conscious and to have human level general intelligence must be conscious, 
>> although perhaps in a somewhat different way than humans.
>>
>> Brent
>>
>
> Not being made of flesh and blood, a robot can't feel pain. 
>
>
> Why would you suppose that?
>
> Thus, its behavior is determined by pure logic; merciless. That's the danger. AG 
>
>
> Logic doesn't have any values; so pure logic is not motivated to do 
> anything.
>

*Without values, it can't be compassionate. It's like a human who enjoys a 
juicy hamburger, but has no thought of the pain of the cow who died to 
provide it. AG *

>
> Brent
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law, etc.) to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer cannot do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am doing 
> just this: I am working on a computer program to model aspects of 
> gravitational memory. What the computer will not do, at least the 
> computers we currently employ, is ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or 
> to propose something creative and then solve or render the solution.
>
>
> You must never have applied for a loan online.
>

It can only do what it has been programmed to do. It can't act 
independently of its program, such as wondering whether some theory makes 
sense, or coming up with tests of a theory. Or, say, it can't invent chess; 
it can only play it better than humans. It can't "think" outside the box. AG 

>
> Brent
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 6:11 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:
> > But what is the criterion when AI exceeds human intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away



>
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the singularity. Starting
from a point where a machine designs and manufactures improved copies of
itself, technology will supposedly veer from its exponential path (Moore's
law, etc.) to hyperbolic. Being hyperbolic, it reaches infinity within a
finite period of time, expected to be a matter of months perhaps.

Given that we really don't understand creative processes (not even
good old fashioned biological evolution is really well understood),
I'm sceptical about the 30 years prognostication. It is mostly based on
extrapolating Moore's law, which is the easy part of technological change.

This won't be a problem for my children - my grandchildren perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can ask a question 
and program a computer to help solve the problem. In fact I am doing 
just this: I am working on a computer program to model aspects of 
gravitational memory. What the computer will not do, at least the 
computers we currently employ, is ask the question and then work to 
solve it. A computer can find a numerical solution or render something 
numerically, but it does not spontaneously act to ask the question or 
to propose something creative and then solve or render the solution.


You must never have applied for a loan online.

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 5:05 AM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:



On 2/17/2018 10:28 PM, agrays...@gmail.com  wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent
wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds human
intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away




Intelligence is multi-dimensional. Computers already do
arithmetic and algebra and calculus better than me. 
They play chess and go better (although so far I beat
the Chinese checkers online :-) ).  They translate more
languages, and faster than I can.  They can take
dictation better.  They can write music better than me
(since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is
30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively
MIMIC human behavior and vastly exceed it in various
functions. But what is "intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the
professor explained that artificial intelligence is whatever
computers can't do yet.

Brent


Do you think there is anything about "consciousness" that
distinguishes it from what a computer can eventually mimic? AG


I think a robot, i.e. a computer that can act in the world, can be
conscious and to have human level general intelligence must be
conscious, although perhaps in a somewhat different way than humans.

Brent


Not being made of flesh and blood, a robot can't feel pain.


Why would you suppose that?


Thus, its behavior is determined by pure logic; merciless. That's the danger. AG


Logic doesn't have any values; so pure logic is not motivated to do 
anything.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
wrote:
>
>
>
> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from its exponential path (Moore's law, etc.) to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer cannot do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am doing 
>> just this: I am working on a computer program to model aspects of 
>> gravitational memory. What the computer will not do, at least the 
>> computers we currently employ, is ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or 
>> to propose something creative and then solve or render the solution.
>>
>> LC 
>>
>
> *You've hit the proverbial nail on the head. If a computer can't ask a 
> question, it can't, by itself, add to our knowledge. It can't propose a new 
> theory. It can only be a tool for humans to test our theories. Thus, it is 
> completely a misnomer to refer to it as "intelligent".  AG*
>
 
*It has no imagination. It doesn't wonder about anything. It's not 
conscious, and therefore should not be considered as having consciousness 
or intelligence. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law, etc.) to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer cannot do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am doing 
> just this: I am working on a computer program to model aspects of 
> gravitational memory. What the computer will not do, at least the 
> computers we currently employ, is ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or 
> to propose something creative and then solve or render the solution.
>
> LC 
>

*You've hit the proverbial nail on the head. If a computer can't ask a 
question, it can't, by itself, add to our knowledge. It can't propose a new 
theory. It can only be a tool for humans to test our theories. Thus, it is 
completely a misnomer to refer to it as "intelligent".  AG*



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>
> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
> > 
> > 
> > On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote: 
> > > But what is the criterion when AI exceeds human intelligence? AG 
> > > 
> > > 
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>  
> > 
> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
> > 
> > Brent 
> > 
>
> According to the title (I haven't RTFA), it's the 
> singularity. Starting from a point where a machine designs, 
> and manufactures improved copies of itself, technology will supposedly 
> veer from its exponential path (Moore's law, etc.) to hyperbolic. Being 
> hyperbolic, it reaches infinity within a finite period of time, 
> expected to be a matter of months perhaps. 
>
> Given that we really don't understand creative processes (not even 
> good old fashioned biological evolution is really well understood), 
> I'm sceptical about the 30 years prognostication. It is mostly based on 
> extrapolating Moore's law, which is the easy part of technological change. 
>
> This won't be a problem for my children - my grandchildren perhaps, if 
> I ever end up having any. 
>
> Cheers 
>

One thing a computer cannot do is ask a question. I can ask a question and 
program a computer to help solve the problem. In fact I am doing just this: 
I am working on a computer program to model aspects of gravitational 
memory. What the computer will not do, at least the computers we currently 
employ, is ask the question and then work to solve it. A computer can find 
a numerical solution or render something numerically, but it does not 
spontaneously act to ask the question or to propose something creative and 
then solve or render the solution.

LC 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 10:28 PM, agrays...@gmail.com  wrote:
>
>
>
> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>>
>>> But what is the criterion when AI exceeds human intelligence? AG
>>>
>>>
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>
>>>
>>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>>> algebra and calculus better than me.  They play chess and go better 
>>> (although so far I beat the Chinese checkers online :-) ).  They translate 
>>> more languages, and faster than I can.  They can take dictation better.  
>>> They can write music better than me (since I'm not even competent).
>>>
>>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>>
>>> Brent
>>>
>>
>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
>> behavior and vastly exceed it in various functions. But what is 
>> "intelligence"? AFAICT, undefined. AG
>>
>>
>> When I took a series of courses in AI at UCLA in the '80s the professor 
>> explained that artificial intelligence is whatever computers can't do yet.
>>
>> Brent
>>
>
> Do you think there is anything about "consciousness" that distinguishes it 
> from what a computer can eventually mimic? AG
>
>
> I think a robot, i.e. a computer that can act in the world, can be 
> conscious and to have human level general intelligence must be conscious, 
> although perhaps in a somewhat different way than humans.
>
> Brent
>

Not being made of flesh and blood, a robot can't feel pain. Thus, its 
behavior is determined by pure logic; merciless. That's the danger. AG 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Russell Standish
On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
> 
> 
> On 2/17/2018 4:58 PM, agrayson2...@gmail.com wrote:
> > But what is the criterion when AI exceeds human intelligence? AG
> > 
> > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
> 
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
> 
> Brent
> 

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology will supposedly
veer from its exponential path (Moore's law, etc.) to hyperbolic. Being
hyperbolic, it reaches infinity within a finite period of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes (not even
good old fashioned biological evolution is really well understood),
I'm sceptical about the 30 years prognostication. It is mostly based on
extrapolating Moore's law, which is the easy part of technological change.

This won't be a problem for my children - my grandchildren perhaps, if
I ever end up having any.

Cheers
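[An aside on the math here: the distinction between exponential and hyperbolic growth is the core of the singularity claim, and it can be made concrete. The toy model below is purely illustrative; the constants, units, and function names are my own assumptions, not anything from the linked article.]

```python
import math

# Compare exponential growth, dx/dt = k*x, with hyperbolic growth,
# dx/dt = k*x**2. The exponential stays finite for every finite t; the
# hyperbolic closed-form solution
#     x(t) = x0 / (1 - k*x0*t)
# blows up at the finite time t* = 1/(k*x0) -- the "singularity".

def exponential(x0, k, t):
    """Solution of dx/dt = k*x: finite for all finite t."""
    return x0 * math.exp(k * t)

def hyperbolic(x0, k, t):
    """Solution of dx/dt = k*x**2: diverges as t approaches 1/(k*x0)."""
    t_star = 1.0 / (k * x0)
    if t >= t_star:
        raise ValueError(f"t is past the blow-up time t* = {t_star}")
    return x0 / (1.0 - k * x0 * t)

x0, k = 1.0, 0.5                  # arbitrary initial capability and growth rate
print(1.0 / (k * x0))             # blow-up time t* = 2.0 for these values
print(exponential(x0, k, 1.99))   # ~2.7, still modest just before t*
print(hyperbolic(x0, k, 1.99))    # ~200, already exploding just before t*
```

The point of the sketch: under exponential growth no finite date is special, whereas under hyperbolic growth there is a definite finite time at which the model literally diverges, which is why the self-improvement argument produces a dated "singularity" at all.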

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 10:28 PM, agrayson2...@gmail.com wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com  wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away




Intelligence is multi-dimensional.  Computers already do
arithmetic and algebra and calculus better than me.  They
play chess and go better (although so far I beat the Chinese
checkers online :-) ).  They translate more languages, and
faster than I can.  They can take dictation better.  They can
write music better than me (since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is 30yrs
away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively
MIMIC human behavior and vastly exceed it in various functions.
But what is "intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the
professor explained that artificial intelligence is whatever
computers can't do yet.

Brent


Do you think there is anything about "consciousness" that 
distinguishes it from what a computer can eventually mimic? AG


I think a robot, i.e. a computer that can act in the world, can be 
conscious and to have human level general intelligence must be 
conscious, although perhaps in a somewhat different way than humans.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000


On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 5:44 PM, agrays...@gmail.com  wrote:
>
>
>
> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>
>> But what is the criterion when AI exceeds human intelligence? AG
>>
>>
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>
>>
>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>> algebra and calculus better than me.  They play chess and go better 
>> (although so far I beat the Chinese checkers online :-) ).  They translate 
>> more languages, and faster than I can.  They can take dictation better.  
>> They can write music better than me (since I'm not even competent).
>>
>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>
>> Brent
>>
>
> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
> behavior and vastly exceed it in various functions. But what is 
> "intelligence"? AFAICT, undefined. AG
>
>
> When I took a series of courses in AI at UCLA in the '80s the professor 
> explained that artificial intelligence is whatever computers can't do yet.
>
> Brent
>

Do you think there is anything about "consciousness" that distinguishes it 
from what a computer can eventually mimic? AG



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 5:44 PM, agrayson2...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:

But what is the criterion when AI exceeds human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away




Intelligence is multi-dimensional.  Computers already do
arithmetic and algebra and calculus better than me.  They play
chess and go better (although so far I beat the Chinese checkers
online :-) ).  They translate more languages, and faster than I
can.  They can take dictation better.  They can write music better
than me (since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is 30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC 
human behavior and vastly exceed it in various functions. But what is 
"intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the professor 
explained that artificial intelligence is whatever computers can't do yet.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000


On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:
>
> But what is the criterion when AI exceeds human intelligence? AG
>
>
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>
>
> Intelligence is multi-dimensional.  Computers already do arithmetic and 
> algebra and calculus better than me.  They play chess and go better 
> (although so far I beat the Chinese checkers online :-) ).  They translate 
> more languages, and faster than I can.  They can take dictation better.  
> They can write music better than me (since I'm not even competent).
>
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>
> Brent
>

Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
behavior and vastly exceed it in various functions. But what is 
"intelligence"? AFAICT, undefined. AG 



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 4:58 PM, agrayson2...@gmail.com wrote:

But what is the criterion when AI exceeds human intelligence? AG

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away


Intelligence is multi-dimensional.  Computers already do arithmetic and 
algebra and calculus better than me.  They play chess and go better 
(although so far I beat the Chinese checkers online :-) ). They 
translate more languages, and faster than I can.  They can take 
dictation better.  They can write music better than me (since I'm not 
even competent).


So we need to sharpen the question.  Exactly *what* is 30yrs away?

Brent



Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000
But what is the criterion when AI exceeds human intelligence? AG

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
