Re: Amoeba's Secret openly available under CC-BY license

2024-09-06 Thread Terren Suydam
Yes, thank you Russell, this is a real gift!

On Fri, Sep 6, 2024 at 7:15 PM Liz R  wrote:

> Thanks Russell. Hope you are all well on the Everything list.
>
> On Monday 29 April 2024 at 17:04:14 UTC+12 Russell Standish wrote:
>
>> I did get a response from him when I suggested making Amoeba's Secret
>> open access.
>>
>> According to Kim Jones, who visited him in 2022, he is well and taking a
>> break from the Everything List.
>>
>> Cheers
>>
>> On Mon, Apr 29, 2024 at 03:09:22PM +1200, LizR wrote:
>> > Hi Russell,
>> >
>> > Do you have any news of Bruno? I see his last contribution here was a
>> > couple of years ago.
>> >
>> > Best wishes,
>> > Liz
>> >
>> > On Sat, 12 Aug 2023 at 22:15, Russell Standish 
>> wrote:
>> > >
>> > > Hi guys,
>> > >
>> > > I finally got around to doing something I meant to do years ago - I
>> > > have released the English translation of "Amoeba's Secret" as a
>> freely
>> > > downloadable PDF under the Creative Commons CC-BY license at
>> > > https://www.hpcoders.com.au/docs/amoebassecret.pdf .
>> > >
>> > > Bruno Marchal was a long-time contributor to this list, and this
>> > > semi-autobiography is also one of the clearest explanations of his
>> > > ideas.
>> > >
>> > > Enjoy,
>> > >
>> > > --
>> > >
>> > > Dr Russell Standish Phone 0425 253119 (mobile)
>> > > Principal, High Performance Coders hpc...@hpcoders.com.au
>> > > http://www.hpcoders.com.au
>> > >
>> --
>>
>> Dr Russell Standish Phone 0425 253119 (mobile)
>> Principal, High Performance Coders hpc...@hpcoders.com.au
>> http://www.hpcoders.com.au
>>



Re: [Extropolis] NYTimes.com: JD Vance Just Blurbed a Book Arguing That Progressives Are Subhuman

2024-08-06 Thread Terren Suydam
Wow. Vance lends a supportive blurb to a book that says that democracy is a
failure and needs to be replaced by an authoritarian regime. This is
nakedly un-American. I can't believe anyone who isn't an extremist can
support that. But extremism is mainstream on the right now, and this blurb
is an example of how those on the right no longer pay any price for saying
the quiet part out loud; the Overton window for this kind of talk continues
to shift.

And as evidence for how little truth means on the right, a common refrain
at Trump rallies is that voting to keep the *left* in power will spell the
end of democracy. They are in the business of saying whatever people want
to hear and planting ideas that spread hatred and division, in order to
attain and stay in power. That exists on the extreme left too, but again,
it's not mainstream.

On Tue, Aug 6, 2024 at 8:48 AM John Clark  wrote:

> Explore this gift article from The New York Times. You can read it for
> free without a subscription.
>
> JD Vance Just Blurbed a Book Arguing That Progressives Are Subhuman
>
> A MAGA-world celebration of Francisco Franco and Joseph McCarthy.
>
>
> https://www.nytimes.com/2024/08/05/opinion/jd-vance-fascism-unhumans.html?unlocked_article_code=1.A04.azyl.iqyleuISpgS4&smid=em-share
>



Re: Are Philosophical Zombies possible?

2024-07-11 Thread Terren Suydam
Only in the most idealized sense of Turing completeness would we argue
about whether the brain is Turing complete. Neural networks are Turing
complete.

If we're interested in whether consciousness requires Turing completeness,
it seems silly to use the brain as a *counterexample* to Turing
completeness only because it happens to be a finite, physical object with
noise and errors in the system. For all practical purposes, whatever
properties one would ascribe to a Turing-complete system, the brain has
them.
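To make the universality intuition concrete, here is a minimal sketch in
Python (illustrative only, not from the thread): a single threshold neuron
with fixed weights computes NAND, and since NAND is functionally complete,
networks of such units can emulate any Boolean circuit, which is the usual
first step in arguments that idealized neural networks are Turing complete.

    # A step-activation neuron with weights -2, -2 and bias +3 is a NAND gate.
    def nand_neuron(a, b):
        return 1 if (-2 * a - 2 * b + 3) > 0 else 0

    # Print the truth table: 1 1 -> 0, every other input pair -> 1.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, nand_neuron(a, b))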

On Thu, Jul 11, 2024 at 2:43 PM Jason Resch  wrote:

>
> I agree Turing completeness is not required for consciousness. The human
> brain (given its limited and faulty memory) wouldn't even meet the
> definition of being Turing complete.
>
> Jason
>
>



Re: Why do sad people hate so much ?

2024-07-10 Thread Terren Suydam
Every indication is that you're a troll - except for the fact that you've
written a paper on consciousness. I'm beginning to think you used AI to
write your paper. That would explain why you seem incapable of engaging
with actual questions and criticisms, because you have no actual interest
or knowledge in it. That would also explain why you're so anti-AI: to throw
people off the trail, like a closeted-gay politician who rails against
homosexuality.

On Wed, Jul 10, 2024 at 6:58 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> So as you can see, ladies and gentlemen, we have here a prime specimen of
> the sad person. As you can see, he entered here just to hate on a random
> person like me, by using the word "stupid". How can we understand such
> behavior? What drives these people to randomly hate on the internet?
>
> On Wednesday 10 July 2024 at 13:22:21 UTC+3 Quentin Anciaux wrote:
>
>> Either you're trolling on purpose or you're genuinely stupid... hard to
>> tell.
>>
>> Le mer. 10 juil. 2024, 11:54, 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> a écrit :
>>
>>> Why do sad people hate so much? Why do they choose to express their
>>> sadness by hating random people on the internet? Why don't they go for a
>>> walk in nature to calm down? What do they gain by hating on the internet?
>>> Especially since they have been doing this for decades, and their
>>> condition still doesn't improve. Also, why don't they seek professional
>>> help? How does hating on the internet alleviate their condition at all?
>>>



Re: AI hype

2024-07-10 Thread Terren Suydam
If you have no idea, then just say so. Otherwise, answer the question.

On Wed, Jul 10, 2024 at 11:22 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> As I said: interacting with other consciousnesses.
>
> On Wednesday 10 July 2024 at 15:59:52 UTC+3 Terren Suydam wrote:
>
>> What specifically happens when someone "takes mushrooms" for the first
>> time that leads them to have a quality of consciousness that is unlike
>> anything they've experienced before that?
>>
>> On Wed, Jul 10, 2024 at 3:11 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> @Terren. Other consciousnesses. That you cannot even imagine.
>>>
>>> On Tuesday 9 July 2024 at 22:45:45 UTC+3 Terren Suydam wrote:
>>>
>>>>> When you "take mushrooms", what happens is that your consciousness
>>>>> interacts with the other consciousnesses.
>>>>>
>>>>
>>>> What other consciousnesses?  What specifically happens when someone
>>>> "takes mushrooms" for the first time that leads them to have a quality of
>>>> consciousness that is unlike anything they've experienced before that?  How
>>>> do you explain that particular scenario in terms of interactions with other
>>>> self-referencing entities?
>>>>
>>>> And, how does that explanation dispense with the idea that the
>>>> (apparent) ingesting of mushrooms caused the change in consciousness?
>>>>
>>>> On Tue, Jul 9, 2024 at 1:14 PM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> @Terren. "Mushrooms" are just an appearance in your consciousness that
>>>>> stands for other consciousnesses. When you "take mushrooms", what
>>>>> happens is that your consciousness interacts with the other
>>>>> consciousnesses. Also "internet" is a similar appearance in your
>>>>> consciousness. And when you "enter the internet", your consciousness
>>>>> changes from interacting with other consciousnesses.
>>>>>
>>>>> On Tuesday 9 July 2024 at 19:02:48 UTC+3 Terren Suydam wrote:
>>>>>
>>>>>> So you're saying if I take a high dose of magic mushrooms and my
>>>>>> consciousness changes, it is so obvious that a 5yo kid would understand 
>>>>>> it,
>>>>>> that the mushrooms *do not* cause a change to my consciousness?
>>>>>> That me taking mushrooms an hour before the changes to my consciousness
>>>>>> begin is mere correlation?
>>>>>>
>>>>>> On Tue, Jul 9, 2024 at 11:51 AM 'Cosmin Visan' via Everything List <
>>>>>> everyth...@googlegroups.com> wrote:
>>>>>>
>>>>>>> @Terren. Based on your logic, if it is cold when you go outside in
>>>>>>> the rain and warm when you are inside the house => the house
>>>>>>> generates consciousness. Correlation is not causation. Even a
>>>>>>> 5-year-old kid understands this.
>>>>>>>
>>>>>>> On Tuesday 9 July 2024 at 17:39:34 UTC+3 Terren Suydam wrote:
>>>>>>>
>>>>>>>> On Tue, Jul 9, 2024 at 7:01 AM 'Cosmin Visan' via Everything List <
>>>>>>>> everyth...@googlegroups.com> wrote:
>>>>>>>>
>>>>>>>>> @Quentin @Stathis. That's where the whole magical belief in AI
>>>>>>>>> comes from, from believing that you are robots. Well.. breaking news: 
>>>>>>>>> you
>>>>>>>>> are not! You are God. "Brain" is just a picture that you as God 
>>>>>>>>> dreams in
>>>>>>>>> this dream. It doesn't actually exist.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Maybe so. But if it is true that our brains are just part of the
>>>>>>>> dream, then how do you account for the seeming one-way causality 
>>>>>>>> between
>>>>>>>> the brain and the mind?  Brain damage, drugs, transcranial magnetic
>>>>>>>> stimulation, etc, all give credence to the existence of the brain, and 
>>>>>>>> by the same token all those examples are difficult to explain from
>>>>>>>> the idealist perspective you're advocating for.

Re: Absolute freedom of speech group for consciousness discussions

2024-07-10 Thread Terren Suydam
That's something a 14-year-old would say. Anyway, thanks for proving my
point.

On Wed, Jul 10, 2024 at 3:10 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. I'm sorry for your suffering. Hope you will find a girlfriend
> soon.
>
> On Wednesday 10 July 2024 at 02:40:30 UTC+3 Terren Suydam wrote:
>
>> Your interactions here are a preview of what discourse on your Google
>> group would be like. And there are a lot of words to describe that, but
>> "inviting" isn't one of them.
>>
>> Terren
>>
>> On Tue, Jul 9, 2024 at 2:46 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> I invite you to my absolute freedom of speech google group for
>>> consciousness discussions:
>>>
>>> https://groups.google.com/g/consciousness-research
>>>



Re: AI hype

2024-07-10 Thread Terren Suydam
What specifically happens when someone "takes mushrooms" for the first time
that leads them to have a quality of consciousness that is unlike anything
they've experienced before that?

On Wed, Jul 10, 2024 at 3:11 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. Other consciousnesses. That you cannot even imagine.
>
> On Tuesday 9 July 2024 at 22:45:45 UTC+3 Terren Suydam wrote:
>
>>> When you "take mushrooms", what happens is that your consciousness
>>> interacts with the other consciousnesses.
>>>
>>
>> What other consciousnesses?  What specifically happens when someone
>> "takes mushrooms" for the first time that leads them to have a quality of
>> consciousness that is unlike anything they've experienced before that?  How
>> do you explain that particular scenario in terms of interactions with other
>> self-referencing entities?
>>
>> And, how does that explanation dispense with the idea that the (apparent)
>> ingesting of mushrooms caused the change in consciousness?
>>
>> On Tue, Jul 9, 2024 at 1:14 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> @Terren. "Mushrooms" are just an appearance in your consciousness that
>>> stands for other consciousnesses. When you "take mushrooms", what happens
>>> is that your consciousness interacts with the other consciousnesses. Also
>>> "internet" is a similar appearance in your consciousness. And when you
>>> "enter the internet", your consciousness changes from interacting with
>>> other consciousnesses.
>>>
>>> On Tuesday 9 July 2024 at 19:02:48 UTC+3 Terren Suydam wrote:
>>>
>>>> So you're saying if I take a high dose of magic mushrooms and my
>>>> consciousness changes, it is so obvious that a 5yo kid would understand it,
>>>> that the mushrooms *do not* cause a change to my consciousness?  That
>>>> me taking mushrooms an hour before the changes to my consciousness begin is
>>>> mere correlation?
>>>>
>>>> On Tue, Jul 9, 2024 at 11:51 AM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> @Terren. Based on your logic, if it is cold when you go outside in the
>>>>> rain and warm when you are inside the house => the house generates
>>>>> consciousness. Correlation is not causation. Even a 5-year-old kid
>>>>> understands this.
>>>>>
>>>>> On Tuesday 9 July 2024 at 17:39:34 UTC+3 Terren Suydam wrote:
>>>>>
>>>>>> On Tue, Jul 9, 2024 at 7:01 AM 'Cosmin Visan' via Everything List <
>>>>>> everyth...@googlegroups.com> wrote:
>>>>>>
>>>>>>> @Quentin @Stathis. That's where the whole magical belief in AI comes
>>>>>>> from, from believing that you are robots. Well.. breaking news: you are
>>>>>>> not! You are God. "Brain" is just a picture that you as God dreams in 
>>>>>>> this
>>>>>>> dream. It doesn't actually exist.
>>>>>>>
>>>>>>
>>>>>> Maybe so. But if it is true that our brains are just part of the
>>>>>> dream, then how do you account for the seeming one-way causality between
>>>>>> the brain and the mind?  Brain damage, drugs, transcranial magnetic
>>>>>> stimulation, etc, all give credence to the existence of the brain, and by
>>>>>> the same token all those examples are difficult to explain from the
>>>>>> idealist perspective you're advocating for.
>>>>>>
>>>>>> You can say that's all part of the dream, but that's an answer that
>>>>>> stops all further questions. The "hard problem" of idealism is: why does
>>>>>> the dream of God appear to be so lawful and ordered?  Why does the dream
>>>>>> necessitate things like brains that appear to have causal influence on 
>>>>>> our
>>>>>> consciousness?
>>>>>>
>>>>>> Terren
>>>>>>
>>>>>>
>>>>>>> On Tuesday 9 July 2024 at 11:24:45 UTC+3 Stathis Papaioannou wrote:
>>>>>>>
>>>>>>>> On Tue, 9 Jul 2024 at 18:04, 'Cosmin Visan' via E

Re: Absolute freedom of speech group for consciousness discussions

2024-07-09 Thread Terren Suydam
Your interactions here are a preview of what discourse on your Google group
would be like. And there are a lot of words to describe that, but "inviting"
isn't one of them.

Terren

On Tue, Jul 9, 2024 at 2:46 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> I invite you to my absolute freedom of speech google group for
> consciousness discussions:
>
> https://groups.google.com/g/consciousness-research
>



Re: AI hype

2024-07-09 Thread Terren Suydam
>
> When you "take mushrooms", what happens is that your consciousness
> interacts with the other consciousnesses.
>

What other consciousnesses?  What specifically happens when someone "takes
mushrooms" for the first time that leads them to have a quality of
consciousness that is unlike anything they've experienced before that?  How
do you explain that particular scenario in terms of interactions with other
self-referencing entities?

And, how does that explanation dispense with the idea that the (apparent)
ingesting of mushrooms caused the change in consciousness?

On Tue, Jul 9, 2024 at 1:14 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. "Mushrooms" are just an appearance in your consciousness that
> stands for other consciousnesses. When you "take mushrooms", what happens
> is that your consciousness interacts with the other consciousnesses. Also
> "internet" is a similar appearance in your consciousness. And when you
> "enter the internet", your consciousness changes from interacting with
> other consciousnesses.
>
> On Tuesday 9 July 2024 at 19:02:48 UTC+3 Terren Suydam wrote:
>
>> So you're saying if I take a high dose of magic mushrooms and my
>> consciousness changes, it is so obvious that a 5yo kid would understand it,
>> that the mushrooms *do not* cause a change to my consciousness?  That me
>> taking mushrooms an hour before the changes to my consciousness begin is
>> mere correlation?
>>
>> On Tue, Jul 9, 2024 at 11:51 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> @Terren. Based on your logic, if it is cold when you go outside in the
>>> rain and warm when you are inside the house => the house generates
>>> consciousness. Correlation is not causation. Even a 5-year-old kid
>>> understands this.
>>>
>>> On Tuesday 9 July 2024 at 17:39:34 UTC+3 Terren Suydam wrote:
>>>
>>>> On Tue, Jul 9, 2024 at 7:01 AM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> @Quentin @Stathis. That's where the whole magical belief in AI comes
>>>>> from, from believing that you are robots. Well.. breaking news: you are
>>>>> not! You are God. "Brain" is just a picture that you as God dreams in this
>>>>> dream. It doesn't actually exist.
>>>>>
>>>>
>>>> Maybe so. But if it is true that our brains are just part of the dream,
>>>> then how do you account for the seeming one-way causality between the brain
>>>> and the mind?  Brain damage, drugs, transcranial magnetic stimulation, etc,
>>>> all give credence to the existence of the brain, and by the same token all
>>>> those examples are difficult to explain from the idealist perspective
>>>> you're advocating for.
>>>>
>>>> You can say that's all part of the dream, but that's an answer that
>>>> stops all further questions. The "hard problem" of idealism is: why does
>>>> the dream of God appear to be so lawful and ordered?  Why does the dream
>>>> necessitate things like brains that appear to have causal influence on our
>>>> consciousness?
>>>>
>>>> Terren
>>>>
>>>>
>>>>> On Tuesday 9 July 2024 at 11:24:45 UTC+3 Stathis Papaioannou wrote:
>>>>>
>>>>>> On Tue, 9 Jul 2024 at 18:04, 'Cosmin Visan' via Everything List <
>>>>>> everyth...@googlegroups.com> wrote:
>>>>>>
>>>>>>> lol? By knowing that all AI does is to follow deterministic
>>>>>>> instructions such as
>>>>>>>
>>>>>>> if (color == white) {
>>>>>>>     print ("Is day");
>>>>>>> } else {
>>>>>>>     print ("Is night");
>>>>>>> }
>>>>>>>
>>>>>>> There is no reason involved. Just blindly following instructions. Do
>>>>>>> people who believe in AI believe that computers are magical entities
>>>>>>> where fairies live and they sprout rainbows?
>>>>>>>
>>>>>>
>>>>>> This is what humans do also: their brains follow deterministic rules,
>>>>>> and it results in the complex behaviour that we see.

Re: AI hype

2024-07-09 Thread Terren Suydam
That reminds me of Donald Hoffman's idea of reality as emerging from the
interactions of “conscious agents”. These interactions are governed by
mathematical principles. The apparent solidity and objectivity of the
physical world are illusions created by the network of interactions among
conscious agents.



On Tue, Jul 9, 2024 at 11:52 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. The dream is ordered because it is a statistical effect of
> interacting consciousnesses.
>



Re: AI hype

2024-07-09 Thread Terren Suydam
So you're saying if I take a high dose of magic mushrooms and my
consciousness changes, it is so obvious that a 5yo kid would understand it,
that the mushrooms *do not* cause a change to my consciousness?  That me
taking mushrooms an hour before the changes to my consciousness begin is
mere correlation?

On Tue, Jul 9, 2024 at 11:51 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. Based on your logic, if it is cold when you go outside in the
> rain and warm when you are inside the house => the house generates
> consciousness. Correlation is not causation. Even a 5-year-old kid
> understands this.
>
> On Tuesday 9 July 2024 at 17:39:34 UTC+3 Terren Suydam wrote:
>
>> On Tue, Jul 9, 2024 at 7:01 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> @Quentin @Stathis. That's where the whole magical belief in AI comes
>>> from, from believing that you are robots. Well.. breaking news: you are
>>> not! You are God. "Brain" is just a picture that you as God dreams in this
>>> dream. It doesn't actually exist.
>>>
>>
>> Maybe so. But if it is true that our brains are just part of the dream,
>> then how do you account for the seeming one-way causality between the brain
>> and the mind?  Brain damage, drugs, transcranial magnetic stimulation, etc,
>> all give credence to the existence of the brain, and by the same token all
>> those examples are difficult to explain from the idealist perspective
>> you're advocating for.
>>
>> You can say that's all part of the dream, but that's an answer that stops
>> all further questions. The "hard problem" of idealism is: why does the
>> dream of God appear to be so lawful and ordered?  Why does the dream
>> necessitate things like brains that appear to have causal influence on our
>> consciousness?
>>
>> Terren
>>
>>
>>> On Tuesday 9 July 2024 at 11:24:45 UTC+3 Stathis Papaioannou wrote:
>>>
>>>> On Tue, 9 Jul 2024 at 18:04, 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> lol? By knowing that all AI does is to follow deterministic
>>>>> instructions such as
>>>>>
>>>>> if (color == white) {
>>>>>     print ("Is day");
>>>>> } else {
>>>>>     print ("Is night");
>>>>> }
>>>>>
>>>>> There is no reason involved. Just blindly following instructions. Do
>>>>> people who believe in AI believe that computers are magical entities
>>>>> where fairies live and they sprout rainbows?
>>>>>
>>>>
>>>> This is what humans do also: their brains follow deterministic rules,
>>>> and it results in the complex behaviour that we see.
>>>>
>>>>
>>>> --
>>>> Stathis Papaioannou
>>>>



Re: AI hype

2024-07-09 Thread Terren Suydam
On Tue, Jul 9, 2024 at 7:01 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Quentin @Stathis. That's where the whole magical belief in AI comes from,
> from believing that you are robots. Well.. breaking news: you are not! You
> are God. "Brain" is just a picture that you as God dreams in this dream. It
> doesn't actually exist.
>

Maybe so. But if it is true that our brains are just part of the dream,
then how do you account for the seeming one-way causality between the brain
and the mind?  Brain damage, drugs, transcranial magnetic stimulation, etc,
all give credence to the existence of the brain, and by the same token all
those examples are difficult to explain from the idealist perspective
you're advocating for.

You can say that's all part of the dream, but that's an answer that stops
all further questions. The "hard problem" of idealism is: why does the
dream of God appear to be so lawful and ordered?  Why does the dream
necessitate things like brains that appear to have causal influence on our
consciousness?

Terren


> On Tuesday 9 July 2024 at 11:24:45 UTC+3 Stathis Papaioannou wrote:
>
>> On Tue, 9 Jul 2024 at 18:04, 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> lol? By knowing that all AI does is to follow deterministic
>>> instructions such as
>>>
>>> if (color == white) {
>>>     print ("Is day");
>>> } else {
>>>     print ("Is night");
>>> }
>>>
>>> There is no reason involved. Just blindly following instructions. Do
>>> people who believe in AI believe that computers are magical entities
>>> where fairies live and they sprout rainbows?
>>>
>>
>> This is what humans do also: their brains follow deterministic rules, and
>> it results in the complex behaviour that we see.
>>
>>
>> --
>> Stathis Papaioannou
>>
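Stathis's point above, that blind deterministic rule-following can produce
the complex behaviour we see, has a compact classical illustration:
elementary cellular automaton rule 110, whose entire "program" is a fixed
8-entry lookup table over three-cell neighbourhoods, yet which is known to
be Turing complete. A minimal Python sketch (mine, not from the thread; the
wrap-around boundary is a simplification):

    # Each cell's next state is bit (4*L + 2*C + R) of the number 110,
    # where L, C, R are the left, centre, and right neighbours.
    RULE = 110
    cells = [0] * 40 + [1] + [0] * 40   # a single live cell in the middle

    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        n = len(cells)
        cells = [(RULE >> (4 * cells[i - 1] + 2 * cells[i]
                           + cells[(i + 1) % n])) & 1 for i in range(n)]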



Re: AI hype

2024-07-08 Thread Terren Suydam
How has your understanding of computer programming helped you avoid being
victimized by AI hype?

On Mon, Jul 8, 2024 at 5:19 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> People that are victims of the AI hype neither understand computer
> programming nor consciousness.
>



Re: How Self-Reference Builds the World - my paper

2024-06-27 Thread Terren Suydam
I offer this with all sincerity: I mean no disrespect, and I have nothing
against you personally. I do see why my comment about your paper being
"speculative" comes across as disrespectful, so please accept my apologies.
It's true that I haven't given your paper the attention required to make a
judgment like that.

If you're willing to continue with me, then please understand that I'm
someone who has to be pretty selective about how I allocate my time and
energy. Engaging with me means engaging with someone who is not going to
read your paper in its entirety until I know it's worth my time. Thus far
my comments/questions have been about testing to see if it *is* worth my
time. If *that* comes across as disrespectful, then let's just move on.
Otherwise, please give me the grace to come from misunderstandings that
need correction, and please correct my misunderstandings without resorting
to insults. And I will avoid jumping to conclusions. If we can do that,
maybe we can both benefit from a discussion. No hard feelings either way.

Terren

On Thu, Jun 27, 2024 at 5:17 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. You clearly don't know how to use the computer. For example, a
> mouse has a wheel. If you use that wheel, the page moves. When the page
> moves, new things appear in the page, like, for example, part 2 of the
> paper. Also, in case your mouse doesn't have a wheel, there is also a
> scrollbar on the right side of the window. If you click on that scrollbar
> and keep the click pressed, you can then move the scrollbar up and down.
> By moving the scrollbar up and down, new things appear in the page, like,
> for example, part 2 of the paper.
>
> Also, the fact that you call it "speculative" only shows that you are full
> of hatred and are unwilling to engage. Then your presence is pointless on
> this topic. Why are you here? To freely hate on people? Pathetic.
>
> On Wednesday 26 June 2024 at 15:51:48 UTC+3 Terren Suydam wrote:
>
>> That paragraph is not in the paper you posted (here
>> <https://philpapers.org/archive/VISHSB.pdf>)
>>
>> On Tue, Jun 25, 2024 at 3:25 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> Some might wonder, if we cannot speak about it, why are we sure that it
>>> is the one that brings consciousness into existence. The reason we can
>>> do this is because we observe the phenomenology of qualia (like
>>> inclusion and transcendence of levels) and conclude that this is
>>> possible only if some entity that we call “self-reference” must “exist”.
>>>
>>
>> I don't know what you mean by "inclusion" or "transcendence of levels",
>> so it's not clear why self-reference must exist for qualia.
>>
>>
>>> I understand that we live in an age where attention span has been
>>> reduced to 5 seconds. Nothing wrong with that. But if that is your
>>> attention span, then you should employ it for TikTok videos. Other
>>> subjects require a different attention span.
>>>
>>
>> That's just unnecessary. At least I'm engaging with your paper. And, for
>> what it's worth, I'm busy. Having something like Claude that can summarize
>> 17 pages of speculative philosophy is the only way I was going to do that.
>>
>> Terren
>>
>>
>>> On Tuesday 25 June 2024 at 21:32:24 UTC+3 Terren Suydam wrote:
>>>
>>>> From your paper, you define self-reference as: "Let self-reference be
>>>> the entity with the property of looking-back-at-itself."
>>>>
>>>> Your definition invokes the concepts *entity*, *property*,
>>>> *looking-back*, and *itself*. That's a lot of complexity for something
>>>> that is fundamental. It's easy for me to imagine *entities* with
>>>> different *properties* (i.e. that don't *look-back-on-itself*), but
>>>> only because I'm starting from a linguistic perspective that already
>>>> defines *entities*, *properties*, and *looking-back-at-itself*. You
>>>> don't have that luxury. If you want to derive everything from a monism,
>>>> you cannot define that monism in terms of concepts imported from a
>>>> different metaphysics or conceptual framework. *Entities* and
>>>> *properties of looking-back-at-itself* must be defined relative to your
>>>> fundamental monism.
>>>>
>>>> On Tue, Jun 25, 2024 at 2:04 PM 'Cosmin Visan' via Everything List

Re: How Self-Reference Builds the World - my paper

2024-06-26 Thread Terren Suydam
That paragraph is not in the paper you posted (here
<https://philpapers.org/archive/VISHSB.pdf>)

On Tue, Jun 25, 2024 at 3:25 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> Some might wonder, if we cannot speak about it, why are we sure that it is
> the one that brings consciousness into existence. The reason we can do
> this is because we observe the phenomenology of qualia (like inclusion and
> transcendence of levels) and conclude that this is possible only if some
> entity that we call “self-reference” must “exist”.
>

I don't know what you mean by "inclusion" or "transcendence of levels", so
it's not clear why self-reference must exist for qualia.


> I understand that we live in an age where attention span has been reduced
> to 5 seconds. Nothing wrong with that. But if that is your attention span,
> then you should employ it for tik-tok videos. Other subjects require a
> different attention span.
>

That's just unnecessary. At least I'm engaging with your paper. And, for
what it's worth, I'm busy. Having something like Claude that can summarize
17 pages of speculative philosophy is the only way I was going to do that.

Terren


> On Tuesday 25 June 2024 at 21:32:24 UTC+3 Terren Suydam wrote:
>
>> From your paper, you define self-reference as: "Let self-reference be the
>> entity with the property of looking-back-at-itself."
>>
>> Your definition invokes the concepts *entity*, *property*, *looking-back*,
>> and *itself*. That's a lot of complexity for something that is
>> fundamental. It's easy for me to imagine *entities* with different
>> *properties* (i.e. that don't *look-back-on-itself*), but only because
>> I'm starting from a linguistic perspective that already defines
>> *entities*, *properties*, and *looking-back-at-itself*. You don't have
>> that luxury. If you want to derive everything from a monism, you cannot
>> define that monism in terms of concepts imported from a different
>> metaphysics or conceptual framework. *Entities* and *properties of
>> looking-back-at-itself* must be defined relative to your fundamental
>> monism.
>>
>> On Tue, Jun 25, 2024 at 2:04 PM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> @Terren. There is no "self" and "ability to reference". There is just
>>> self-reference. You can call it humpty-dumpty if you want.
>>>
>>> On Tuesday 25 June 2024 at 20:01:24 UTC+3 Terren Suydam wrote:
>>>
>>>> I read enough to confirm that you postulate self-reference as
>>>> fundamental - the entity upon which everything else can be built. I'm
>>>> wondering how that can be fundamental if it requires two components (self,
>>>> and the ability to reference).
>>>>
>>>> On Tue, Jun 25, 2024 at 11:32 AM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> The proper understanding happens by reading the paper, not by using
>>>>> hallucinatory objects to give you a shortcut devoid of meaning.
>>>>>
>>>>> On Tuesday 25 June 2024 at 16:42:11 UTC+3 Terren Suydam wrote:
>>>>>
>>>>>> I used Claude Sonnet to summarize your paper. Tell me if any of this
>>>>>> misses the mark, but the paper appears to posit *self-reference* as
>>>>>> fundamental, upon which all other aspects of reality are derived.
>>>>>>
>>>>>> If so (this is me now), my first thought is that self-reference
>>>>>> cannot be fundamental, because it already presupposes two distinct
>>>>>> components: a "self" and the capacity to "reference". Worse, defining
>>>>>> "self" (something to be derived) in terms of "self-reference" 
>>>>>> (fundamental)
>>>>>> is circular.
>>>>>>
>>>>>> Terren
>>>>>>
>>>>>> On Tue, Jun 25, 2024 at 9:09 AM 'Cosmin Visan' via Everything List <
>>>>>> everyth...@googlegroups.com> wrote:
>>>>>>
>>>>>>> I invite you to discover my paper "How Self-Reference Builds the
>>>>>>> World" which is the theory of everything that people searched for
>>>>>>> millennia. It can be found on my philpeople profile:
>>>>>>> https://philpeople.org/profiles/cosmin-visan

Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Terren Suydam
From your paper, you define self-reference as: "Let self-reference be the
entity with the property of looking-back-at-itself."

Your definition invokes the concepts *entity*, *property*, *looking-back*,
and *itself*. That's a lot of complexity for something that is
fundamental. It's easy for me to imagine *entities* with different
*properties* (i.e. that don't *look-back-on-itself*), but only because I'm
starting from a linguistic perspective that already defines *entities*,
*properties*, and *looking-back-at-itself*. You don't have that luxury. If
you want to derive everything from a monism, you cannot define that monism
in terms of concepts imported from a different metaphysics or conceptual
framework. *Entities* and *properties of looking-back-at-itself* must be
defined relative to your fundamental monism.

On Tue, Jun 25, 2024 at 2:04 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> @Terren. There is no "self" and "ability to reference". There is just
> self-reference. You can call it humpty-dumpty if you want.
>
> On Tuesday 25 June 2024 at 20:01:24 UTC+3 Terren Suydam wrote:
>
>> I read enough to confirm that you postulate self-reference as fundamental
>> - the entity upon which everything else can be built. I'm wondering how
>> that can be fundamental if it requires two components (self, and the
>> ability to reference).
>>
>> On Tue, Jun 25, 2024 at 11:32 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> The proper understanding happens by reading the paper, not by using
>>> hallucinatory objects to give you a shortcut devoid of meaning.
>>>
>>> On Tuesday 25 June 2024 at 16:42:11 UTC+3 Terren Suydam wrote:
>>>
>>>> I used Claude Sonnet to summarize your paper. Tell me if any of this
>>>> misses the mark, but the paper appears to posit *self-reference* as
>>>> fundamental, upon which all other aspects of reality are derived.
>>>>
>>>> If so (this is me now), my first thought is that self-reference cannot
>>>> be fundamental, because it already presupposes two distinct components: a
>>>> "self" and the capacity to "reference". Worse, defining "self" (something
>>>> to be derived) in terms of "self-reference" (fundamental) is circular.
>>>>
>>>> Terren
>>>>
>>>> On Tue, Jun 25, 2024 at 9:09 AM 'Cosmin Visan' via Everything List <
>>>> everyth...@googlegroups.com> wrote:
>>>>
>>>>> I invite you to discover my paper "How Self-Reference Builds the
>>>>> World" which is the theory of everything that people searched for
>>>>> millennia. It can be found on my philpeople profile:
>>>>> https://philpeople.org/profiles/cosmin-visan
>>>>>



Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Terren Suydam
I read enough to confirm that you postulate self-reference as fundamental -
the entity upon which everything else can be built. I'm wondering how that
can be fundamental if it requires two components (self, and the ability to
reference).

On Tue, Jun 25, 2024 at 11:32 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> The proper understanding happens by reading the paper, not by using
> hallucinatory objects to give you a shortcut devoid of meaning.
>
> On Tuesday 25 June 2024 at 16:42:11 UTC+3 Terren Suydam wrote:
>
>> I used Claude Sonnet to summarize your paper. Tell me if any of this
>> misses the mark, but the paper appears to posit *self-reference* as
>> fundamental, upon which all other aspects of reality are derived.
>>
>> If so (this is me now), my first thought is that self-reference cannot be
>> fundamental, because it already presupposes two distinct components: a
>> "self" and the capacity to "reference". Worse, defining "self" (something
>> to be derived) in terms of "self-reference" (fundamental) is circular.
>>
>> Terren
>>
>> On Tue, Jun 25, 2024 at 9:09 AM 'Cosmin Visan' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> I invite you to discover my paper "How Self-Reference Builds the World"
>>> which is the theory of everything that people searched for millennia. It
>>> can be found on my philpeople profile:
>>> https://philpeople.org/profiles/cosmin-visan
>>>



Re: How Self-Reference Builds the World - my paper

2024-06-25 Thread Terren Suydam
I used Claude Sonnet to summarize your paper. Tell me if any of this misses
the mark, but the paper appears to posit *self-reference* as fundamental,
upon which all other aspects of reality are derived.

If so (this is me now), my first thought is that self-reference cannot be
fundamental, because it already presupposes two distinct components: a
"self" and the capacity to "reference". Worse, defining "self" (something
to be derived) in terms of "self-reference" (fundamental) is circular.

Terren

On Tue, Jun 25, 2024 at 9:09 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> I invite you to discover my paper "How Self-Reference Builds the World"
> which is the theory of everything that people searched for millennia. It
> can be found on my philpeople profile:
> https://philpeople.org/profiles/cosmin-visan
>



Re: Situational Awareness

2024-06-19 Thread Terren Suydam
On Tue, Jun 18, 2024 at 3:24 PM Jason Resch  wrote:

>
>
> On Sun, Jun 16, 2024, 10:26 PM PGC  wrote:
>
>> A lot of the excitement around LLMs is due to confusing skill/competence
>> (memory based) with the unsolved problem of intelligence, its most
>> optimal/perfect test etc. There is a difference between completing strings
>> of words/prompts relying on memorization, interpolation, pattern
>> recognition based on training data and actually synthesizing novel
>> generalization through reasoning or synthesizing the appropriate program on
>> the fly. As there isn't a perfect test for intelligence, much less
>> consensus on its definition, you can always brute force some LLM through
>> huge compute and large, highly domain specific training data, to "solve" a
>> set of problems; even highly complex ones. But as soon as there's novelty
>> you'll have to keep doing that. Personally, that doesn't feel like
>> intelligence yet. I'd want to see these abilities combined with the program
>> synthesis ability; without the need for ever vaster, more specific
>> databases etc. to be more convinced that we're genuinely on the threshold.
>
>
> I think there is no more to intelligence than pattern recognition and
> extrapolation (essentially, the same techniques required for improving
> compression). It is also the same thing science is concerned with:
> compressing observations of the real world into a small set of laws
> (patterns) which enable predictions. And prediction is the essence of
> intelligent action, as all goal-centered action requires predicting
> probable outcomes that may result from any of a set of possible behaviors
> that may be taken, and then choosing the behavior with the highest expected
> reward.
>
> I think this can explain why even a problem as seemingly basic as "word
> prediction" can (when mastered to a sufficient degree) break through into
> general intelligence. This is because any situation can be described in
> language, and being asked to predict next words requires understanding the
> underlying reality to a sufficient degree to accurately model the things
> those words describe. I confirmed this by describing an elaborate physical
> setup and asked GPT-4 to predict and explain what it thought would happen
> over the next hour. It did so perfectly, and also explained the
> consequences of various alterations I later proposed.
>
> Since any of thousands, or perhaps millions, of patterns exist in the
> training corpus, language models can come to learn, recognize, and
> extrapolate all of those thousands or millions of patterns. This is what we
> think of as generality (a sufficiently large repertoire of pattern
> recognition that it appears general).
>
> Jason
>

Hey Jason,

You've articulated this idea before, that the result of the training on
such large amounts of data may result in the ability of LLMs to create
models of reality and simulate minds and so forth, and it's an intriguing
possibility.  However, one fact of how current LLMs operate is that they
don't know when they're wrong. If what you're saying is true, shouldn't an
LLM be able to model its own state of knowledge?

Terren
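A toy model makes the prediction/compression link Jason describes concrete:
a bigram predictor assigns each candidate next word a probability p, and an
ideal coder would spend -log2(p) bits on that word, so better prediction
literally means smaller encodings. A minimal Python sketch (the corpus and
names are illustrative, not from the thread):

    import math
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def bits(prev, nxt):
        # Ideal code length for nxt given prev: -log2 of its probability.
        total = sum(follows[prev].values())
        return -math.log2(follows[prev][nxt] / total)

    print(follows["the"].most_common(1)[0][0])  # 'cat': the model's prediction
    print(bits("the", "cat"))   # ~0.58 bits: expected words are cheap to encode
    print(bits("the", "mat"))   # ~1.58 bits: surprising words cost more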



Re: Situational Awareness

2024-06-19 Thread Terren Suydam
On Tue, Jun 18, 2024 at 2:04 PM John Clark  wrote:

>
>
> On Tue, Jun 18, 2024 at 11:23 AM Terren Suydam 
> wrote:
>
>
>> * LLMs are not AGI (yet), but it's hard to ignore they're (sometimes
>> astonishingly) competent at answering multi-modal questions across most, if
>> not all domains of human knowledge*
>>
>
> I agree.
>
>
>
>
>> *>  Here's probably the best result
>> <https://chatgpt.com/share/b4403435-e071-46ef-b1ce-ac1def2ce501> but I'm
>> not sure there's anything actually novel there. Despite that, it's still
>> quite impressive, and to John's point, it's clearly an intelligent
>> response, even if there are aspects of "cheating off of humans" in it. *
>>
>
> Concerning the cheating off humans question; Isaac Newton was probably the
> closest the human race ever got to producing a transcendental genius, and
> nobody ever accused him of being overly modest, but even Newton admitted
> that if he had seen further than others it was only because "he stood on
> the shoulders of giants". Human geniuses don't start from absolute zero,
> they expand on work done by others. Regardless of how brilliant an AI's
> answer is, if somebody is bound and determined to belittle the AI they can
> always find **something** in the training data that has some relationship
> to the answer, however tenuous. Even if the AI wrote a sonnet more
> beautiful than anything of Shakespeare's, they can still claim that the
> sonnet, like everything in literature, concerns objects (and people) and
> how they move, and there are certainly things in its training data about
> the arrangement of matter and energy in spacetime, in fact EVERYTHING in
> its training data is about the arrangement of matter and energy in
> spacetime. And therefore writing a beautiful sonnet was not a creative act
> but was just the result of "mere memorization".
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>


I would never claim that the works of transcendental geniuses like Newton &
Einstein, or for that matter, Picasso & Dali, did not derive from earlier
works. What I'm saying is that *they* did something I doubt very much current
LLMs can do: break ground into genuinely novel territory.
I'm not trying to belittle current LLMs, but it seems important to
understand their limitations, especially because nobody, not even their
creators, seems to really understand why they're as good as they are. And
just as importantly, why they're as bad as they are at some things given
how smart they are in other ways.

Terren



Re: Situational Awareness

2024-06-18 Thread Terren Suydam
On Mon, Jun 17, 2024 at 4:58 PM PGC  wrote:

>
>
> On Monday, June 17, 2024 at 7:28:13 PM UTC+2 John Clark wrote:
>
> On Sun, Jun 16, 2024 at 10:26 PM PGC  wrote:
>
>
>
>
> *> you can always brute force some LLM through huge compute and large,
> highly domain specific training data, to "solve" a set of problems;*
>
>
> I don't know what those quotation marks are supposed to mean but if you
> are able to "solve" a set of problems then the problems have been solved,
> the method of doing so is irrelevant. Are you sure you're not whistling
> past the graveyard?
>
>
> In discussing the distinction between memorization, which LLMs heavily
> rely on, and genuine reasoning, which involves building new mental models
> capable of broad generalizations and depth, consider the following:
>
> Even if I have no prior knowledge of a specific domain, with a large
> enough library, memory, and pattern recognition of word sequences and
> probabilistic associations, I could "generate" a solution by merely
> matching patterns. The more my memory aligns with the problem and domain,
> the higher the likelihood of "solving" it.
>
> To illustrate, imagine a student unfamiliar with an advanced topic who
> stumbles upon a book in a library that contains the exact problem and its
> solution. By copying the solution verbatim, they have effectively cheated.
> This is akin to undergraduates peeking at each other's exams: they are
> unable to model the problem and derive a solution themselves but can
> memorize and reproduce the solution by glancing at others' work. This
> differs from students who, through understanding the domain's fundamentals,
> experiment with various approaches and reason their way to a solution.
> These students might even discover novel solutions, unlike those who merely
> copy and paste from their peers. Hence the quotation marks; there is no
> "solving" going on by cheating through memory.
>
> This analogy extends to the internet, where some people fake expertise by
> parroting buzzwords and formulations from Wikipedia, in contrast to genuine
> experts who contribute original insights. As discussions progress and
> become more complex, these parrots often become lost, unable to keep up
> with the depth and specificity required. Higher education attempts to
> address this by rewarding original, effective problem-solving approaches
> over mere memorization and repetition.
>

This is a really well articulated distinction, thank you for that.

I agree that LLMs are not AGI (yet), but it's hard to ignore that they're
(sometimes astonishingly) competent at answering multi-modal questions
across most, if not all domains of human knowledge. I spent a couple hours
this morning trying to use chatGPT to design a prompt that might
demonstrate that it's not merely parroting, synthesizing, or rearranging
existing human ideas. Here's probably the best result
<https://chatgpt.com/share/b4403435-e071-46ef-b1ce-ac1def2ce501>
but I'm not sure there's anything actually novel there. Despite that, it's
still quite impressive, and to John's point, it's clearly an intelligent
response, even if there are aspects of "cheating off of humans" in it.

It's clear that the line between the genuine reasoning & creativity that
are implicit in whatever we think of as human intelligence, vs the
permutative repackaging of existing ideas we might think of as inherent in
the intelligence exhibited by LLMs, is blurry.  Human creativity and
intelligence are probably a lot closer to what LLMs do than we'd like to
think. But it's also clear to me that we're not going to get Einsteinian
leaps forward in any given domain from LLMs. That may well be coming from
AI in the future, but the way I see it, there's still some significant
breakthrough(s) necessary to get there.

Terren



Re: Risk tolerance and the Singularity

2024-03-19 Thread Terren Suydam
Immortality is overrated.

On Tue, Mar 19, 2024 at 5:15 PM John Clark  wrote:

> Richard Ngo, a top researcher at open AI, recently said something rather
> interesting:
>
> "*The closer we get to the singularity the lower my risk tolerance gets.
> I’d already ruled out skydiving and paragliding. Last year I started
> wearing a helmet consistently while cycling. I think 2024 might be the year
> I give up skiing. It’s not that I think the risks are that high,
> objectively speaking. But wouldn’t it be unbearably embarrassing to have
> your name go down in history as one of the people who died totally
> avoidable deaths only a few years before immortality became possible?*"
>
>
> Risk tolerance
> 
>
> John K ClarkSee what's on my new list at  Extropolis
> 
> sr1
>



Re: Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

2023-07-14 Thread Terren Suydam
It's hard to know how to think about this kind of risk. It's safe to say EY
has done more thinking on this issue than just about anyone, and he's one
of the smartest people on the planet, probably. I've been following him for
over a decade, from even before his writings on lesswrong.

However, there's an interesting dynamic with highly intelligent people -
ironically, being really smart makes it possible to justify stupid
conclusions that you might be highly motivated to believe. A very smart
google engineer I know is a conspiracy theorist who can justify to himself
bullshit like 9/11 was a hoax and even Sandy Hook was a hoax. I'm *not*
saying EY's conclusions are stupid or bullshit. Just saying that people
clever enough to jump through the cognitive hoops required can convince
themselves of shit most people would think is bs, and that irrational
motivations often drive this - for example, conspiracy theorists are often
driven by psychological motivations such as the need to be right, or the
desire to have some kind of insider status.

So what irrational motivations might EY be prone to?  Over the years I've
gotten the sense he's got a bit of a savior complex, so the motivation to
view AI in the way he does could be rooted in that psychological
motivation. I mean obviously I'm speculating here, but it does play a role
in how I process his message, which is quite dire.

That's not to say he's wrong. As you say John, it's totally unpredictable,
but I think there's room for less dire narratives about how it could all
go.  But one thing I do 100% agree with is that we're not taking this
seriously enough and that the usual capitalist incentives are pushing us
into dangerous territory.

Terren


On Fri, Jul 14, 2023 at 1:11 PM John Clark  wrote:

> Recently  Eliezer Yudkowsky gave a TED talk and basically said the human
> race is doomed. You can see it on YouTube and I put the following in the
> comment section:
> --
>
> I think Eliezer was right when he said nobody can predict what moves a
> chess program like Stockfish will make but you can predict that it will
> beat you in a game of chess, that's because Stockfish is super good at
> playing chess but it can't do anything else, it can't even think of
> anything else. But an AI program like GPT-4 is different, it can think of a
> great many things besides chess so you can't really predict what it will
> do, sure it has the ability to beat you at a game of Chess but for its own
> inscrutable reasons it may deliberately let you win. So yes in a few years
> an AI will have the ability to exterminate the human race, but will it
> actually do so? I don't know and I can't even give a probability of it
> occurring, all I can say is the probability is greater than zero and less
> than 100%.
>
> Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED
> 
>
> John K ClarkSee what's on my new list at  Extropolis
> 
> mfx
>



Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>>>> wrote:
>>>>
>>>>
>>>>> And yes, I'm arguing that a true simulation (let's say for the sake of
>>>>> a thought experiment we were able to replicate every neural connection of 
>>>>> a
>>>>> human being in code, including the connectomes, and neurotransmitters,
>>>>> along with a simulated nerve that was connected to a button on the desk we
>>>>> could press which would simulate the signal sent when a biological pain
>>>>> receptor is triggered) would feel pain that is just as real as the pain 
>>>>> you
>>>>> and I feel as biological organisms.
>>>>>
>>>>
>>>> This follows from the physicalist no-zombies-possible stance. But it
>>>> still runs into the hard problem, basically. How does stuff give rise to
>>>> experience.
>>>>
>>>>
>>> I would say stuff doesn't give rise to conscious experience. Conscious
>>> experience is the logically necessary and required state of knowledge that
>>> is present in any consciousness-necessitating behaviors. If you design a
>>> simple robot with a camera and robot arm that is able to reliably catch a
>>> ball thrown in its general direction, then something in that system *must*
>>> contain knowledge of the ball's relative position and trajectory. It simply
>>> isn't logically possible to have a system that behaves in all situations as
>>> if it knows where the ball is, without knowing where the ball is.
>>> Consciousness is simply the state of being with knowledge.
>>>
>>> Con- "Latin for with"
>>> -Scious- "Latin for knowledge"
>>> -ness "English suffix meaning the state of being X"
>>>
>>> Consciousness -> The state of being with knowledge.
>>>
>>> There is an infinite variety of potential states and levels of
>>> knowledge, and this contributes to much of the confusion, but boiled down
>>> to the simplest essence of what is or isn't conscious, it is all about
>>> knowledge states. Knowledge states require activity/reactivity to the
>>> presence of information, and counterfactual behaviors (if/then, greater
>>> than less than, discriminations and comparisons that lead to different
>>> downstream consequences in a system's behavior). At least, this is my
>>> theory of consciousness.
>>>
>>> Jason
>>>
>>
>> This still runs into the valence problem though. Why does some
>> "knowledge" correspond with a positive *feeling* and other knowledge
>> with a negative feeling?
>>
>
> That is a great question. Though I'm not sure it's fundamentally insoluble
> within a model where every conscious state is a particular state of knowledge.
>
> I would propose that having positive and negative experiences, i.e. pain
> or pleasure, requires knowledge states with a certain minium degree of
> sophistication. For example, knowing:
>
> Pain being associated with knowledge states such as: "I don't like this,
> this is bad, I'm in pain, I want to change my situation."
>
> Pleasure being associated with knowledge states such as: "This is good for
> me, I could use more of this, I don't want this to end.'
>
> Such knowledge states require a degree of reflexive awareness, to have a
> notion of a self where some outcomes may be either positive or negative to
> that self, and perhaps some notion of time or a sufficient agency to be
> able to change one's situation.
>
> Some have argued that plants can't feel pain because there's little they
> can do to change their situation (though I'm agnostic on this).
>
>   I'm not talking about the functional accounts of positive and negative
>> experiences. I'm talking about phenomenology. The functional aspect of it
>> is not irrelevant, but to focus *only* on that is to sweep the feeling
>> under the rug. So many dialogs on this topic basically terminate here,
>> where it's just a clash of belief about the relative importance of
>> consciousness a

Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
>>>> wrote:
>>>>
>>>>> As I see this thread, Terren and Stathis are both talking past each
>>>>> other. Please either of you correct me if i am wrong, but in an effort to
>>>>> clarify and perhaps resolve this situation:
>>>>>
>>>>> I believe Stathis is saying the functional substitution having the
>>>>> same fine-grained causal organization *would* have the same phenomenology,
>>>>> the same experience, and the same qualia as the brain with the same
>>>>> fine-grained causal organization.
>>>>>
>>>>> Therefore, there is no disagreement between your positions with
>>>>> regards to symbols groundings, mappings, etc.
>>>>>
>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>> believe this is partly responsible for why you are both talking past each
>>>>> other, because there are many levels involved in brains (and computational
>>>>> systems). I believe you were discussing completely different levels in the
>>>>> hierarchical organization.
>>>>>
>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>> feelings, quale, etc. and there are low-level, be they neurons,
>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>>>>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>>>>
>>>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>>>>> quale
>>>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>>>> answer/description for it can only be supplied in terms of a vast amount 
>>>>> of
>>>>> information concerning low level structures, be they patterns of neuron
>>>>> firings, or patterns of bits being processed. When we consider things down
>>>>> at this low level, however, we lose all context for what the meaning, 
>>>>> idea,
>>>>> and quale are or where or how they come in. We cannot see or find the idea
>>>>> of GMK in any neuron, no more than we can see or find it in any bit.
>>>>>
>>>>> Of course then it should seem deeply mysterious, if not impossible,
>>>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>>>> greater a leap from how we get "it" from a bunch of cells squirting ions
>>>>> back and forth. Trying to understand a smartphone by looking at the flows
>>>>> of electrons is a similar kind of problem, it would seem just as difficult
>>>>> or impossible to explain and understand the high-level features and
>>>>> complexity out of the low-level simplicity.
>>>>>
>>>>> This is why it's crucial to bear in mind and explicitly discuss the
>>>>> level one is operating on when one discusses symbols, substrates, or
>>>>> quale.
>>>>> In summary, I think a chief reason you have been talking past each other 
>>>>> is
>>>>> because you are each operating on different assumed levels.
>>>>>
>>>>> Please correct me if you believe I am mistaken and know I only offer
>>>>> my perspective in the hope it might help the conversation.
>>>>>
>>>>
>>>> I appreciate the callout, but it is necessary to talk at both the micro
>>>> and the macro for this discussion. We're talking about symbol grounding. I
>>>> should make it clear that I don't believe symbols can be grounded in other
>>>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>>>> infinite regress and the illusion of meaning.  Symbols ultimately must
>>>> stand for something. The only thing they can stand *for*, ultimately

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
If I had confidence that my answers to your questions would be met with
anything but a "defend/destroy" mentality I'd go there with you. It's gotta
be fun for me, and you're not someone I enjoy getting into it with. Not
trying to be insulting, but it's the truth.

On Tue, May 23, 2023 at 4:33 PM John Clark  wrote:

> On Tue, May 23, 2023  Terren Suydam  wrote:
>
> *> reality is fundamentally consciousness. *
>
>
> Then why does a simple physical molecule like N2O stop
> consciousness temporarily and another simple physical molecule like CN- do
> so permanently?
>
>
>> *> Why does some "knowledge" correspond with a positive feeling and other
>> knowledge with a negative feeling?*
>
>
> Because sometimes new knowledge requires you to re-organize hundreds of
> other important concepts you already had in your brain and that could be
> difficult and depending on circumstances may endanger or benefit your
> mental health.
>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 2nv
>
>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 4:17 PM John Clark  wrote:

> On Tue, May 23, 2023 at 3:50 PM Terren Suydam 
> wrote:
>
>
>> * > in my view, consciousness entails a continuous flow of experience.*
>>
>
> If I could instantly stop all physical processes that are going on inside
> your head for one year and then start them up again, to an outside
> objective observer you would appear to lose consciousness for one year, but
> to you your consciousness would still feel continuous but the outside world
> would appear to have discontinuously jumped to something new.
>

I meant continuous in terms of the flow of state from one moment to the
next. What you're describing *is* continuous because it's not the passage
of time that needs to be continuous, but the state of information in the
model as the physical processes evolve. And my understanding is that in an
LLM, each new query starts from the same state... it does not evolve in
time.

Terren
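
To make the statelessness point concrete, here is a toy sketch, assuming a
generic chat front-end rather than any particular vendor's API. The model
itself is treated as a frozen, pure function over text, and any apparent
memory is just the caller re-sending the transcript on every call.

    # `frozen_model` stands in for a pure function: same prompt in, same
    # output distribution out. It is an assumed placeholder, not a real API.
    def frozen_model(prompt: str) -> str:
        return "..."

    transcript = []

    def chat(user_message: str) -> str:
        transcript.append(f"User: {user_message}")
        # the entire history is re-fed on each call; the model keeps no state
        reply = frozen_model("\n".join(transcript))
        transcript.append(f"Assistant: {reply}")
        return reply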


>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 2b0
>
>
>
>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>> wrote:
>>
>>
>>> And yes, I'm arguing that a true simulation (let's say for the sake of a
>>> thought experiment we were able to replicate every neural connection of a
>>> human being in code, including the connectomes, and neurotransmitters,
>>> along with a simulated nerve that was connected to a button on the desk we
>>> could press which would simulate the signal sent when a biological pain
>>> receptor is triggered) would feel pain that is just as real as the pain you
>>> and I feel as biological organisms.
>>>
>>
>> This follows from the physicalist no-zombies-possible stance. But it
>> still runs into the hard problem, basically. How does stuff give rise to
>> experience.
>>
>>
> I would say stuff doesn't give rise to conscious experience. Conscious
> experience is the logically necessary and required state of knowledge that
> is present in any consciousness-necessitating behaviors. If you design a
> simple robot with a camera and robot arm that is able to reliably catch a
> ball thrown in its general direction, then something in that system *must*
> contain knowledge of the ball's relative position and trajectory. It simply
> isn't logically possible to have a system that behaves in all situations as
> if it knows where the ball is, without knowing where the ball is.
> Consciousness is simply the state of being with knowledge.
>
> Con- "Latin for with"
> -Scious- "Latin for knowledge"
> -ness "English suffix meaning the state of being X"
>
> Consciousness -> The state of being with knowledge.
>
> There is an infinite variety of potential states and levels of knowledge,
> and this contributes to much of the confusion, but boiled down to the
> simplest essence of what is or isn't conscious, it is all about knowledge
> states. Knowledge states require activity/reactivity to the presence of
> information, and counterfactual behaviors (if/then, greater than less than,
> discriminations and comparisons that lead to different downstream
> consequences in a system's behavior). At least, this is my theory of
> consciousness.
>
> Jason
>

This still runs into the valence problem though. Why does some "knowledge"
correspond with a positive *feeling* and other knowledge with a negative
feeling?  I'm not talking about the functional accounts of positive and
negative experiences. I'm talking about phenomenology. The functional
aspect of it is not irrelevant, but to focus *only* on that is to sweep the
feeling under the rug. So many dialogs on this topic basically terminate
here, where it's just a clash of belief about the relative importance of
consciousness and phenomenology as the mediator of all experience and
knowledge.

Terren
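
Jason's ball-catching example above can be made concrete. A minimal sketch,
assuming a 1-D world, constant velocity, and made-up numbers: the tracker's
fields are the "knowledge state," and the if/then arm logic is the
counterfactual behavior that depends on it.

    # A system that reliably intercepts a ball must carry state tracking it.
    class BallTracker:
        def __init__(self):
            self.last_pos = None   # knowledge state: where the ball was seen
            self.velocity = 0.0    # ...and how fast it is moving

        def observe(self, pos, dt):
            if self.last_pos is not None:
                self.velocity = (pos - self.last_pos) / dt
            self.last_pos = pos

        def predict(self, horizon):
            # extrapolate the ball's position `horizon` seconds ahead
            return self.last_pos + self.velocity * horizon

    def arm_command(tracker, arm_pos, horizon=0.5):
        target = tracker.predict(horizon)
        # counterfactual (if/then) behavior driven by the knowledge state
        if target > arm_pos:
            return "move right"
        if target < arm_pos:
            return "move left"
        return "hold"

    t = BallTracker()
    t.observe(0.0, dt=0.1)
    t.observe(0.2, dt=0.1)              # moving right at 2.0 units/s
    print(arm_command(t, arm_pos=0.5))  # -> move right (predicted 1.2 > 0.5)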





Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:05 PM Jesse Mazer  wrote:

>
>
> On Tue, May 23, 2023 at 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if i am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, no more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>> In my experience with conversations like this, you usually have people on
>> one side who take consciousness seriously as the only thing that is
>> actually undeniable, and you have people who'd rather not talk about it,
>> hand-wave it away, or outright deny it. That's the talking-past that
>> usually happens, and that's what's happening here.
>>
>> Terren
>>
>
> But are you talking specifically about symbols with high-level meaning
> like the words humans use in ordinary language, which large

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if i am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, no more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>
> I agree everything you have experienced is rooted in consciousness.
>
> But at the low level, the only thing your brain senses is neural signals
> (symbols, on/off, ones and zeros).
>
> In your arguments you rely on the high-level conscious states of human
> brains to establish that they have grounding, but then use the low-level
> descriptions of machines to deny their own consciousness, and hence deny
> they can ground their processing to anything.
>
> If you remained in the space of low-level descriptions for both brains and
> 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
e actually involved in generating it that we
> can't reasonably replicate.   That said, I think Penrose and others do not
> have the odds on their side there for a number of reasons.
>
> Like I said though, I don't believe in zombies.
>
> On Tue, May 23, 2023 at 9:12 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 2:25 AM Dylan Distasio 
>> wrote:
>>
>>> While we may not know everything about explaining it, pain doesn't seem
>>> to be that much of a mystery to me, and I don't consider it a symbol per
>>> se.   It seems obvious to me anyways that pain arose out of a very early
>>> neural circuit as a survival mechanism.
>>>
>>
>> But how?  What was the biochemical or neural change that suddenly birthed
>> the feeling of pain?  I'm not asking you to know the details, just the
>> principle - by what principle can a critter that comes into being with some
>> modification of its organization start having a negative feeling when it
>> didn't exist in its progenitors?  This doesn't seem mysterious to you?
>>
>> Very early neural circuits are relatively easy to simulate, and I'm
>> guessing some team has done this for the level of organization you're
>> talking about. What you're saying, if I'm reading you correctly, is that
>> that simulation feels pain. If so, how do you get that feeling of pain out
>> of code?
>>
>> Terren
>>
>>
>>
>>> Pain is the feeling you experience when pain receptors detect an area of
>>> the body is being damaged.   It is ultimately based on a sensory input that
>>> transmits to the brain via nerves where it is translated into a sensation
>>> that tells you to avoid whatever is causing the pain if possible, or lets
>>> you know you otherwise have a problem with your hardware.
>>>
>>> That said, I agree with you on LLMs for the most part, although I think
>>> they are showing some potentially emergent, interesting behaviors.
>>>
>>> On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
>>> wrote:
>>>
>>>>
>>>> Take a migraine headache - if that's just a symbol, then why does that
>>>> symbol *feel* *bad* while others feel *good*?  Why does any symbol
>>>> feel like anything? If you say evolution did it, that doesn't actually
>>>> answer the question, because evolution doesn't do anything except select
>>>> for traits, roughly speaking. So it just pushes the question to: how did
>>>> the subjective feeling of pain or pleasure emerge from some genetic
>>>> mutation, when it wasn't there before?
>>>>
>>>>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 9:15 AM John Clark  wrote:

> On Mon, May 22, 2023 at 5:56 PM Terren Suydam 
> wrote:
>
> *> Many, myself included, are captivated by the amazing capabilities of
>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>> definition of Turing Test, it passes with flying colors in many, many
>> contexts. It would take a much stricter Turing Test than we might have
>> imagined this time last year,*
>>
>
> The trouble with having a much tougher Turing Test  is that although it
> would correctly conclude that it was talking to a computer it would also
> incorrectly conclude that it was talking with a computer when in reality it
> was talking to a human being who had an IQ of 200. Yes GPT can occasionally
> do something that is very stupid, but if you had not also at one time or
> another in your life done something that is very stupid then you are a VERY
> remarkable person.
>
> > One way to improve chatGPT's performance on an actual Turing Test would
>> be to slow it down, because it is too fast to be human.
>>
>
> It would be easy to make GPT dumber, but what would that prove? We could
> also mass-produce Olympic gold medals so everybody on earth could get one,
> but what would be the point?
>
>>
> *> All that said, is chatGPT actually intelligent?*
>>
>
> Obviously.
>
>
>> * > There's no question that it behaves in a way that we would all agree
>> is intelligent. The answers it gives, and the speed it gives them in,
>> reflect an intelligence that often far exceeds most if not all humans. I
>> know some here say intelligence is as intelligence does. Full stop, *
>>
>
> All I'm saying is you should play fair, whatever test you decide to use
> to measure the intelligence of a human you should use exactly the same test
> on an AI. Full stop.
>
> > *But this is an oversimplified view! *
>>
>
> Maybe so, but it's the only view we're ever going to get so we're just
> gonna have to make the best of it.  But I know there are some people who
> will continue to disagree with me about that until the day they die.
>
>  and so just five seconds before he was vaporized the last surviving
> human being turned to Mr. Jupiter Brain and said "*I still think I'm
> smarter than you*".
>
> *> If ChatGPT was trained on gibberish, that's what you'd get out of it.*
>
>
> And if you were trained on gibberish what sort of post do you imagine
> you'd be writing right now?
>
> * > the Chinese Room thought experiment proposed by John Searle.*
>>
>
> You mean the silliest thought experiment ever devised by the mind of man?
>
> *> ChatGPT, therefore, is more like a search engine*
>
>
> Oh for heaven's sake, not that canard again!  I'm not young but since my
> early teens I've been hearing people say you only get out of a computer
> what you put in. I thought that was silly when I was 13 and I still do.
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>

I'm just going to say up front that I'm not going to engage with you on
this particular topic, because I'm already well aware of your position,
that you do not take consciousness seriously, and that your mind won't be
changed on that. So anything we argue about will be about that fundamental
difference, and that's just not interesting or productive, not to mention
we've already had that pointless argument.

Terren


>
> nw4
>
>
>
>
>>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each other.
> Please either of you correct me if i am wrong, but in an effort to clarify
> and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the same
> fine-grained causal organization *would* have the same phenomenology, the
> same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with regards to
> symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I believe
> this is partly responsible for why you are both talking past each other,
> because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts, feelings,
> quale, etc. and there are low-level, be they neurons, neurotransmitters,
> atoms, quantum fields, and laws of physics as in human brains, or circuits,
> logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible, how we
> get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
> leap from how we get "it" from a bunch of cells squirting ions back and
> forth. Trying to understand a smartphone by looking at the flows of
> electrons is a similar kind of problem, it would seem just as difficult or
> impossible to explain and understand the high-level features and complexity
> out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the level
> one is operating on when one discusses symbols, substrates, or quale. In
> summary, I think a chief reason you have been talking past each other is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer my
> perspective in the hope it might help the conversation.
>

I appreciate the callout, but it is necessary to talk at both the micro and
the macro for this discussion. We're talking about symbol grounding. I
should make it clear that I don't believe symbols can be grounded in other
symbols (i.e. symbols all the way down as Stathis put it), that leads to
infinite regress and the illusion of meaning.  Symbols ultimately must
stand for something. The only thing they can stand *for*, ultimately, is
something that cannot be communicated by other symbols: conscious
experience. There is no concept in our brains that is not ultimately
connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on
one side who take consciousness seriously as the only thing that is
actually undeniable, and you have people who'd rather not talk about it,
hand-wave it away, or outright deny it. That's the talking-past that
usually happens, and that's what's happening here.

Terren


>
> Jason
>
> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 15:58, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 23 May 2023 at 14:23, Terren Suydam 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
>>>>> stath...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:25 AM Dylan Distasio  wrote:

> While we may not know everything about explaining it, pain doesn't seem to
> be that much of a mystery to me, and I don't consider it a symbol per se.
>  It seems obvious to me anyways that pain arose out of a very early neural
> circuit as a survival mechanism.
>

But how?  What was the biochemical or neural change that suddenly birthed
the feeling of pain?  I'm not asking you to know the details, just the
principle - by what principle can a critter that comes into being with some
modification of its organization start having a negative feeling when it
didn't exist in its progenitors?  This doesn't seem mysterious to you?

Very early neural circuits are relatively easy to simulate, and I'm
guessing some team has done this for the level of organization you're
talking about. What you're saying, if I'm reading you correctly, is that
that simulation feels pain. If so, how do you get that feeling of pain out
of code?

Terren
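
For reference, here is roughly what "easy to simulate" means at this level: a
minimal sketch of a nociceptive withdrawal loop (sensor, threshold, motor
response), with every number an assumption. It reproduces the functional
story - detect damage, avoid it - while leaving the question above, where the
feeling comes from, untouched.

    THRESHOLD = 0.7  # nociceptor firing threshold (an assumed parameter)

    def nociceptor(tissue_damage: float) -> bool:
        # fires when the damage signal crosses the threshold
        return tissue_damage > THRESHOLD

    def reflex(tissue_damage: float) -> str:
        # if/then avoidance behavior, and nothing more
        if nociceptor(tissue_damage):
            return "withdraw limb"
        return "no response"

    print(reflex(0.9))  # -> withdraw limb
    print(reflex(0.2))  # -> no response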



> Pain is the feeling you experience when pain receptors detect an area of
> the body is being damaged.   It is ultimately based on a sensory input that
> transmits to the brain via nerves where it is translated into a sensation
> that tells you to avoid whatever is causing the pain if possible, or lets
> you know you otherwise have a problem with your hardware.
>
> That said, I agree with you on LLMs for the most part, although I think
> they are showing some potentially emergent, interesting behaviors.
>
> On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
> wrote:
>
>>
>> Take a migraine headache - if that's just a symbol, then why does that
>> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
>> like anything? If you say evolution did it, that doesn't actually answer
>> the question, because evolution doesn't do anything except select for
>> traits, roughly speaking. So it just pushes the question to: how did the
>> subjective feeling of pain or pleasure emerge from some genetic mutation,
>> when it wasn't there before?
>>
>>



Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 14:23, Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>>>> stath...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>>>>>> stath...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> it is true that my brain has been trained on a large amount of data
>>>>>>>> - data that contains intelligence outside of my own. But when I 
>>>>>>>> introspect,
>>>>>>>> I notice that my understanding of things is ultimately rooted/grounded 
>>>>>>>> in
>>>>>>>> my phenomenal experience. Ultimately, everything we know, we know 
>>>>>>>> either by
>>>>>>>> our experience, or by analogy to experiences we've had. This is in
>>>>>>>> opposition to how LLMs train on data, which is strictly about how
>>>>>>>> words/symbols relate to one another.
>>>>>>>>
>>>>>>>
>>>>>>> The functionalist position is that phenomenal experience supervenes
>>>>>>> on behaviour, such that if the behaviour is replicated (same output for
>>>>>>> same input) the phenomenal experience will also be replicated. This is 
>>>>>>> what
>>>>>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>>>>>
>>>>>>
>>>>>> I think the kind of phenomenal supervenience you're talking about is
>>>>>> typically asserted for behavior at the level of the neuron, not the level
>>>>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>>>>> having a phenomenal experience if it talks like a human?   If so, that is
>>>>>> stretching the explanatory domain of functionalism past its breaking 
>>>>>> point.
>>>>>>
>>>>>
>>>>> The best justification for functionalism is David Chalmers' "Fading
>>>>> Qualia" argument. The paper considers replacing neurons with functionally
>>>>> equivalent silicon chips, but it could be generalised to replacing any 
>>>>> part
>>>>> of the brain with a functionally equivalent black box, the whole brain, 
>>>>> the
>>>>> whole person.
>>>>>
>>>>
>>>> You're saying that an algorithm that provably does not have experiences
>>>> of rabbits and lollipops - but can still talk about them in a way that's
>>>> indistinguishable from a human - essentially has the same phenomenology as
>>>> a human talking about rabbits and lollipops. That's just absurd on its
>>>> face. You're essentially hand-waving away the grounding problem. Is that
>>>> your position? That symbols don't need to be grounded in any sort of
>>>> phenomenal experience?
>>>>
>>>
>>> It's not just talking about them in a way that is indistinguishable from
>>> a human, in order to have human-like consciousness the entire I/O behaviour
>>> of the human would need to be replicated. But in principle, I don't see why
>>> a LLM could not have some other type of phenomenal experience. And I don't
>>> think the grounding problem is a problem: I was never grounded in anything,
>>> I just grew up associating one symbol with another symbol, it's symbols all
>>> the way down.
>>>
>>
>> Is the smell of your grandmother's kitchen a symbol?
>>
>
> Yes, I can't pull away the facade to check that there was a real
> grandmother and a real kitchen against which to check the symbol.

Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 13:37, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> it is true that my brain has been trained on a large amount of data -
>>>>>> data that contains intelligence outside of my own. But when I 
>>>>>> introspect, I
>>>>>> notice that my understanding of things is ultimately rooted/grounded in 
>>>>>> my
>>>>>> phenomenal experience. Ultimately, everything we know, we know either by
>>>>>> our experience, or by analogy to experiences we've had. This is in
>>>>>> opposition to how LLMs train on data, which is strictly about how
>>>>>> words/symbols relate to one another.
>>>>>>
>>>>>
>>>>> The functionalist position is that phenomenal experience supervenes on
>>>>> behaviour, such that if the behaviour is replicated (same output for same
>>>>> input) the phenomenal experience will also be replicated. This is what
>>>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>>>
>>>>
>>>> I think the kind of phenomenal supervenience you're talking about is
>>>> typically asserted for behavior at the level of the neuron, not the level
>>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>>> having a phenomenal experience if it talks like a human?   If so, that is
>>>> stretching the explanatory domain of functionalism past its breaking point.
>>>>
>>>
>>> The best justification for functionalism is David Chalmers' "Fading
>>> Qualia" argument. The paper considers replacing neurons with functionally
>>> equivalent silicon chips, but it could be generalised to replacing any part
>>> of the brain with a functionally equivalent black box, the whole brain, the
>>> whole person.
>>>
>>
>> You're saying that an algorithm that provably does not have experiences
>> of rabbits and lollipops - but can still talk about them in a way that's
>> indistinguishable from a human - essentially has the same phenomenology as
>> a human talking about rabbits and lollipops. That's just absurd on its
>> face. You're essentially hand-waving away the grounding problem. Is that
>> your position? That symbols don't need to be grounded in any sort of
>> phenomenal experience?
>>
>
> It's not just talking about them in a way that is indistinguishable from a
> human; in order to have human-like consciousness, the entire I/O behaviour
> of the human would need to be replicated. But in principle, I don't see why
> a LLM could not have some other type of phenomenal experience. And I don't
> think the grounding problem is a problem: I was never grounded in anything,
> I just grew up associating one symbol with another symbol, it's symbols all
> the way down.
>

Is the smell of your grandmother's kitchen a symbol?


>
> --
> Stathis Papaioannou
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXViwvq0TnbJXnPt7VVDoy8zASJyZeq-O3ZpOpMSx6cwg%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXViwvq0TnbJXnPt7VVDoy8zASJyZeq-O3ZpOpMSx6cwg%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-0ZVcLfU0bBLCP%3DRsZNOSbAadhBONRNjM6wXLNk5iZxA%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 10:48, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>>> wrote:
>>>
>>>>
>>>> it is true that my brain has been trained on a large amount of data -
>>>> data that contains intelligence outside of my own. But when I introspect, I
>>>> notice that my understanding of things is ultimately rooted/grounded in my
>>>> phenomenal experience. Ultimately, everything we know, we know either by
>>>> our experience, or by analogy to experiences we've had. This is in
>>>> opposition to how LLMs train on data, which is strictly about how
>>>> words/symbols relate to one another.
>>>>
>>>
>>> The functionalist position is that phenomenal experience supervenes on
>>> behaviour, such that if the behaviour is replicated (same output for same
>>> input) the phenomenal experience will also be replicated. This is what
>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>
>>
>> I think the kind of phenomenal supervenience you're talking about is
>> typically asserted for behavior at the level of the neuron, not the level
>> of the whole agent. Is that what you're saying?  That chatGPT must be
>> having a phenomenal experience if it talks like a human?   If so, that is
>> stretching the explanatory domain of functionalism past its breaking point.
>>
>
> The best justification for functionalism is David Chalmers' "Fading
> Qualia" argument. The paper considers replacing neurons with functionally
> equivalent silicon chips, but it could be generalised to replacing any part
> of the brain with a functionally equivalent black box, the whole brain, the
> whole person.
>

You're saying that an algorithm that provably does not have experiences of
rabbits and lollipops - but can still talk about them in a way that's
indistinguishable from a human - essentially has the same phenomenology as
a human talking about rabbits and lollipops. That's just absurd on its
face. You're essentially hand-waving away the grounding problem. Is that
your position? That symbols don't need to be grounded in any sort of
phenomenal experience?

Terren

> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAH%3D2ypW9qP_GQivWh_5BBwZ%2BNSVo93MagCD_HFOfVwLPRJwYAQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAH%3D2ypW9qP_GQivWh_5BBwZ%2BNSVo93MagCD_HFOfVwLPRJwYAQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_fnyGDNxfQJXaqdUsYdSw7Sm5kx5j_5n94K8trJA57Jg%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 07:56, Terren Suydam 
>>> wrote:
>>>
>>>> Many, myself included, are captivated by the amazing capabilities of
>>>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>>>> definition of Turing Test, it passes with flying colors in many, many
>>>> contexts. It would take a much stricter Turing Test than we might have
>>>> imagined this time last year, before we could confidently say that we're
>>>> not talking to a human. One way to improve chatGPT's performance on an
>>>> actual Turing Test would be to slow it down, because it is too fast to be
>>>> human.
>>>>
>>>> All that said, is chatGPT actually intelligent?  There's no question
>>>> that it behaves in a way that we would all agree is intelligent. The
>>>> answers it gives, and the speed at which it gives them, reflect an
>>>> intelligence
>>>> that often far exceeds most if not all humans.
>>>>
>>>> I know some here say intelligence is as intelligence does. Full stop,
>>>> conversation over. ChatGPT is intelligent, because it acts intelligently.
>>>>
>>>> But this is an oversimplified view!  The reason it's over-simple is
>>>> that it ignores what the source of the intelligence is. The source of the
>>>> intelligence is in the texts it's trained on. If ChatGPT were trained on
>>>> gibberish, that's what you'd get out of it. It is amazingly similar to the
>>>> Chinese Room thought experiment proposed by John Searle. It is manipulating
>>>> symbols without having any understanding of what those symbols are. As a
>>>> result, it does not and cannot know if what it's saying is correct or not.
>>>> This is a well-known caveat of using LLMs.
>>>>
>>>> ChatGPT, therefore, is more like a search engine that can extract the
>>>> intelligence that is already structured within the data it's trained on.
>>>> Think of it as a semantic google. It's a huge achievement in the sense
>>>> that, by training on the data the way it does, it encodes the *context* that
>>>> words appear in with sufficiently high resolution that it's usually
>>>> indistinguishable from humans who actually understand context in a way
>>>> that's *grounded in experience*. LLMs don't experience anything. They
>>>> are feed-forward machines. The algorithms that implement chatGPT are
>>>> useless without enormous amounts of text that expresses actual 
>>>> intelligence.
>>>>
>>>> Cal Newport does a good job of explaining this here
>>>> <https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have>
>>>> .
>>>>
>>>
>>> It could be argued that the human brain is just a complex machine that
>>> has been trained on vast amounts of data to produce a certain output given
>>> a certain input, and doesn’t really understand anything. This is a response
>>> to the Chinese room argument. How would I know if I really understand
>>> something or just think I understand something?
>>>
>>>> --
>>> Stathis Papaioannou
>>>
>>
>> it is true that my brain has been trained on a large amount of data -
>> data that contains intelligence outside of my own. But when I introspect, I
>> notice that my understanding of things is ultimately rooted/grounded in my
>> phenomenal experience. Ultimately, everything we know, we know either by
>> our experience, or by analogy to experiences we've had. This is in
>> opposition to how LLMs train on data, which is strictly about how
>> words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience supervenes on
> behaviour, such that if the behaviour is replicated (same output for same
> input) the phenomenal experience will also be replicated. This is what
> philosophers like Searle (and many laypeople) can’t stomach.
>

I think the kind of phenomenal supervenience you're talking about is
typically asserted for behavior at the level of the neuron, not the level
of the whole agent. Is that what you're saying?  That chatGPT must be
having a phenomenal experience if it talks like a human?   If so, that is
stretching the explanatory domain of functionalism past its breaking point.

Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 07:56, Terren Suydam 
> wrote:
>
>> Many, myself included, are captivated by the amazing capabilities of
>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>> definition of Turing Test, it passes with flying colors in many, many
>> contexts. It would take a much stricter Turing Test than we might have
>> imagined this time last year, before we could confidently say that we're
>> not talking to a human. One way to improve chatGPT's performance on an
>> actual Turing Test would be to slow it down, because it is too fast to be
>> human.
>>
>> All that said, is chatGPT actually intelligent?  There's no question that
>> it behaves in a way that we would all agree is intelligent. The answers it
>> gives, and the speed at which it gives them, reflect an intelligence that often
>> far exceeds most if not all humans.
>>
>> I know some here say intelligence is as intelligence does. Full stop,
>> conversation over. ChatGPT is intelligent, because it acts intelligently.
>>
>> But this is an oversimplified view!  The reason it's over-simple is that
>> it ignores what the source of the intelligence is. The source of the
>> intelligence is in the texts it's trained on. If ChatGPT were trained on
>> gibberish, that's what you'd get out of it. It is amazingly similar to the
>> Chinese Room thought experiment proposed by John Searle. It is manipulating
>> symbols without having any understanding of what those symbols are. As a
>> result, it does not and cannot know if what it's saying is correct or not.
>> This is a well-known caveat of using LLMs.
>>
>> ChatGPT, therefore, is more like a search engine that can extract the
>> intelligence that is already structured within the data it's trained on.
>> Think of it as a semantic google. It's a huge achievement in the sense
>> that, by training on the data the way it does, it encodes the *context* that
>> words appear in with sufficiently high resolution that it's usually
>> indistinguishable from humans who actually understand context in a way
>> that's *grounded in experience*. LLMs don't experience anything. They
>> are feed-forward machines. The algorithms that implement chatGPT are
>> useless without enormous amounts of text that expresses actual intelligence.
>>
>> Cal Newport does a good job of explaining this here
>> <https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have>
>> .
>>
>
> It could be argued that the human brain is just a complex machine that has
> been trained on vast amounts of data to produce a certain output given a
> certain input, and doesn’t really understand anything. This is a response
> to the Chinese room argument. How would I know if I really understand
> something or just think I understand something?
>
>> --
> Stathis Papaioannou
>

it is true that my brain has been trained on a large amount of data - data
that contains intelligence outside of my own. But when I introspect, I
notice that my understanding of things is ultimately rooted/grounded in my
phenomenal experience. Ultimately, everything we know, we know either by
our experience, or by analogy to experiences we've had. This is in
opposition to how LLMs train on data, which is strictly about how
words/symbols relate to one another.

Terren

-- 
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAH%3D2ypU63GQuAJNQ%2BAM%3DcYHxi%3D57x_bGAoF35npeMcXcEdiNaA%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAH%3D2ypU63GQuAJNQ%2BAM%3DcYHxi%3D57x_bGAoF35npeMcXcEdiNaA%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9hy2aSH58kXty6WkAiVVkPfbxbdYhrpqfgFfCF01a%3Dkg%40mail.gmail.com.


what chatGPT is and is not

2023-05-22 Thread Terren Suydam
Many, myself included, are captivated by the amazing capabilities of
chatGPT and other LLMs. They are, truly, incredible. Depending on your
definition of Turing Test, it passes with flying colors in many, many
contexts. It would take a much stricter Turing Test than we might have
imagined this time last year, before we could confidently say that we're
not talking to a human. One way to improve chatGPT's performance on an
actual Turing Test would be to slow it down, because it is too fast to be
human.

All that said, is chatGPT actually intelligent?  There's no question that
it behaves in a way that we would all agree is intelligent. The answers it
gives, and the speed at which it gives them, reflect an intelligence that often
far exceeds most if not all humans.

I know some here say intelligence is as intelligence does. Full stop,
conversation over. ChatGPT is intelligent, because it acts intelligently.

But this is an oversimplified view!  The reason it's over-simple is that it
ignores what the source of the intelligence is. The source of the
intelligence is in the texts it's trained on. If ChatGPT were trained on
gibberish, that's what you'd get out of it. It is amazingly similar to the
Chinese Room thought experiment proposed by John Searle. It is manipulating
symbols without having any understanding of what those symbols are. As a
result, it does not and cannot know if what it's saying is correct or not.
This is a well-known caveat of using LLMs.

ChatGPT, therefore, is more like a search engine that can extract the
intelligence that is already structured within the data it's trained on.
Think of it as a semantic google. It's a huge achievement in the sense
that, by training on the data the way it does, it encodes the *context* that
words appear in with sufficiently high resolution that it's usually
indistinguishable from humans who actually understand context in a way
that's *grounded in experience*. LLMs don't experience anything. They are
feed-forward machines. The algorithms that implement chatGPT are useless
without enormous amounts of text that expresses actual intelligence.
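
To make that concrete, here's a toy sketch - nothing remotely like GPT's
actual architecture, just a bigram counter I'm inventing for illustration -
of a "model" that learns nothing except which symbols tend to follow which:

```python
from collections import Counter, defaultdict
import random

# Tiny made-up corpus; the point is the mechanism, not the scale.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": count which token follows which (wrapping around so that
# every token has at least one successor).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a successor purely from co-occurrence statistics."""
    words, weights = zip(*following[prev].items())
    return random.choices(words, weights=weights)[0]

# "Generation" is a pure feed-forward pass over symbol statistics.
# Nothing in the model refers to an actual cat or mat; it has only ever
# seen how the tokens relate to one another.
words = ["the"]
for _ in range(8):
    words.append(next_word(words[-1]))
print(" ".join(words))
```

Scale that mechanism up by many orders of magnitude and add attention over
long contexts, and you get something that mimics understanding - but the
training signal is still nothing more than symbol co-occurrence.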

Cal Newport does a good job of explaining this here

.

Terren

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_mmt0jKwGAGWrZo%2BQc%3DgcEq-o0jMd%3DCGEiJA_4cN6B6g%40mail.gmail.com.


Re: The connectome and uploading

2023-03-14 Thread Terren Suydam
On Tue, Mar 14, 2023 at 8:49 AM John Clark  wrote:

> On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes 
> wrote:
>
> *> My intuition is that if we are going to successfully imitate biology we
>> must model the various neurotransmitters.*
>
>
> That is not my intuition. I see nothing sacred in hormones, I don't see
> the slightest reason why they or any neurotransmitter would be especially
> difficult to simulate through computation, because chemical messengers are
> not a sign of sophisticated design on nature's part, rather it's an example
> of Evolution's bungling. If you need to inhibit a nearby neuron there are
> better ways of sending that signal than launching a GABA molecule like a
> message in a bottle thrown into the sea and waiting ages for it to diffuse
> to its random target.
>

I don't think the point is about the specific neurotransmitters (NTs) used
in biological brains, but that there are multiple NTs which each activate
separable circuits in the brain. It's probably adaptive to have multiple
NTs, to further modularize the brain's functionality. This may be an
important part of generalized intelligence.


> I'm not interested in brain chemicals, only in the information they
> contain. If somebody wants information to get transmitted from one place
> to another as fast and reliably as possible, nobody would send smoke
> signals if they had a fiber optic cable. The information content in each
> molecular message must be tiny, just a few bits because only about 60
> neurotransmitters such as acetylcholine, norepinephrine and GABA are known,
> even if the true number is 100 times greater (or a million times for that
> matter) the information content of each signal must be tiny. Also, for the
> long range stuff, exactly which neuron receives the signal can not be
> specified because it relies on a random process, diffusion. The fact that
> it's slow as molasses in February does not add to its charm.
>

Similarly, NTs that produce effects on different timescales, or on more
diffuse targets, may provide functionality that a single, fast NT
cannot achieve. You might call it Evolutionary bungling, but it's not
necessarily the case that faster is always better.  I sometimes wonder how
an AI that could process information a million times faster than a human
could be capable of talking to humans. Imagine having to wait 20 years for
a response - subjectively, that's how a ten-minute human reply might feel
to a super-fast AI (ten minutes scaled by a factor of a million is roughly
nineteen years).

Terren

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_fxGMxWE-o8DzT7wWGimAJzV3B%2BOi4s3ozcP3-hfq4Ow%40mail.gmail.com.


Re: ChatGPT achieves enlightenment

2023-01-24 Thread Terren Suydam
On Tue, Jan 24, 2023 at 10:37 AM John Clark  wrote:

>
>
> On Tue, Jan 24, 2023 at 10:29 AM Terren Suydam 
> wrote:
>
> >> If you were on a debate team and given the side that children should
>>> be allowed to play on the highway could you have made a better case for
>>> that activity and had a better chance at winning the debate trophy?
>>>
>>>
>> *> The point is that just because ChatGPT can make an argument, doesn't
>> mean it's a good one.*
>>
>>
> If you are given a lousy position to take in a debate you can't really
> make a "good" argument, but my point was that ChatGPT can make the best
> of a bad situation and make an argument at least as well as a human can,
> and certainly far better if time is a consideration because ChatGPT can
> think on its feet very quickly.
>
>
No argument here. It looked like you were using the debate question and
answer to provide a kind of argument or evidence that chatGPT is conscious
- that's what I was refuting, by showing that chatGPT is capable of making
an argument no matter how unlikely the premise.

Terren


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> 8vr
>
>
>
>> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv3bhPVC62_VSwsSA7OvU59cR5g5jqb70R3aCLHHjbPGUQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv3bhPVC62_VSwsSA7OvU59cR5g5jqb70R3aCLHHjbPGUQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-5QRSHnJh0jvZ%2BSpvBfc_XRg0%2BtreLRxSWvnAujhPisA%40mail.gmail.com.


Re: Is Elon Musk as smart as we thought he was?

2022-11-23 Thread Terren Suydam
But why would anyone sacrifice so much of their own money to do that?  I
think he is probably a genius, but everybody has blind spots, and anyone is
capable of making dumb mistakes. This looks like a colossal failure to me.

By the way, if there is a bright spot to inviting Trump back, it's that it
seriously compromises the future of Truth Social.  And, he's under contract
with Truth Social to not use any other platforms. Not that something as
silly as a legally binding contract would stop Trump from tweeting, but it
would give his lawyers yet another expensive project.

Terren

On Wed, Nov 23, 2022 at 8:06 AM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> Musk has a BS degree from U-Penn (as I recall) which means he is probably
> smarter than most. However, all of his corporate successes have been built
> by others, from Paypal, SpaceX, Tesla etc. The engineering of these was
> done by other people, and Musk just had the money to become majority
> shareholder. With respect to Twitter, I somewhat suspect he is
> intentionally demolishing the company.
>
> LC
>
> On Wednesday, November 23, 2022 at 7:00:35 AM UTC-6 johnk...@gmail.com
> wrote:
>
>> Many had thought Elon Musk was some sort of transcendental genius and
>> even I thought he must be a pretty smart cookie, but when I heard he was
>> spending $44 billion to buy a company as silly as Twitter I felt it might
>> be time to reconsider my judgment. Could Mr. Musk really not find a better
>> use for that $44 billion in Tesla or SpaceX? Alternatively, with that much
>> money he could've started a new company that would be a world leader in the
>> field of AI or Quantum Computing, but instead he bought Twitter so people
>> could continue to send tweets about Taylor Swift and Donald Trump could get
>> his account back. As if that wasn't bad enough he completely bungled the
>> purchase, after agreeing to buy the piece of crap he tried to back out of
>> the deal but it was too late; and as soon as he took control he fired more
>> than half the employees, an even greater percentage among the engineering
>> staff, and then belatedly realized the company would fall apart without
>> some of them and tried to hire them back, but with company morale at an all
>> time low few agreed to come back to a toxic workplace and would prefer
>> unemployment. He seems like he doesn't have a clue what he's doing and is
>> just flailing around doing things at random and hoping that something works.
>>
>>   John K ClarkSee what's on my new list at  Extropolis
>> 
>> i6g
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/046fd53a-4c3a-4742-a428-f9a24fd15a96n%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-VdZ71RbvHQN-BF2Mq1hjFtAb2tR1Ddr6ikHB%2BovNAEA%40mail.gmail.com.


Re: Life expectancy vs. Health expenditure

2022-07-14 Thread Terren Suydam
Hi Telmo,

I wouldn't say that people are ok with this. Even the people who are
against universal health care recognize how broken the system is. I'd say
it's more of a learned-helplessness kind of thing - there isn't much the
average person can do about it.

Terren

On Thu, Jul 14, 2022 at 8:45 AM Telmo Menezes 
wrote:

> Hi Terren,
>
> You are right, I know that. I have the impression that a bad health
> situation can ruin people in the US. I have heard of stories of people
> refusing to call an ambulance after a serious accident, for fear of the
> bill. I think that another country where this is the case is South Korea. A
> colleague told me that it is common there for people to invest in a
> second home, to use it to pay for the health bill in case they get cancer
> or something serious like that.
>
> I guess what perplexes me is that people seem to be more or less ok with
> this, when there is overwhelming evidence that a better system is possible.
>
> Telmo
>
> Am Do, 14. Jul 2022, um 14:34, schrieb Terren Suydam:
>
> Hi Telmo,
>
> I’d want to know how they adjust for price differences between countries,
> as that could be a subtle way to introduce bias. But as an American and
> assuming the above is kosher, it doesn’t surprise me at all. Health care
> here is a worst-case scenario. It's the result of decades of
> anti-competitive practices and perverse incentives. But you knew that!
>
> Terren
>
> On Thu, Jul 14, 2022 at 8:13 AM Telmo Menezes 
> wrote:
>
>
> I am curious about what Americans in this list think about this:
> https://i.redd.it/qrjgb2aakhb91.jpg
>
>
>
> Telmo
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/724092fd-48a5-44ed-a3c5-7ec8aa38c3fe%40www.fastmail.com
> <https://groups.google.com/d/msgid/everything-list/724092fd-48a5-44ed-a3c5-7ec8aa38c3fe%40www.fastmail.com?utm_medium=email&utm_source=footer>
> .
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAMy3ZA9uK_y2aZUHhe3Tv%3D1rg2u4YR5yP6KjqQJppFkiuiK2aA%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAMy3ZA9uK_y2aZUHhe3Tv%3D1rg2u4YR5yP6KjqQJppFkiuiK2aA%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/60eabc69-a0cc-4dec-91ce-1871e6d0f4cf%40www.fastmail.com
> <https://groups.google.com/d/msgid/everything-list/60eabc69-a0cc-4dec-91ce-1871e6d0f4cf%40www.fastmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-kY9%2B3Uhse5vv9UsDLUnUhh%2BGiPgXENLx3EA4SWdCoAg%40mail.gmail.com.


Re: Life expectancy vs. Health expenditure

2022-07-14 Thread Terren Suydam
Hi Jason,

One encouraging innovation that is beginning to pick up steam is
subscription-based health care, and insurance companies are starting to
come on board. You pay a monthly fee and get unlimited access to your
primary-care doctor. It's a huge improvement because the incentives are
much better aligned - in this model, the doctor's office is incentivized to
keep people *out* of the office. In the existing model it's the opposite.

Terren

On Thu, Jul 14, 2022 at 9:38 AM Jason Resch  wrote:

> The graph begins to make a little more sense if one replaces the term
> "healthcare" with a more reality-representing term: "sickcare".
>
> Healthy people don't need to spend a lot of money on their health.
>
> This doesn't explain it all, but the relationship begins to become more
> intuitive when viewed this way: an unhealthy (but wealthy enough to spend a
> lot of money) population will spend more on sickcare, and will have worse
> health outcomes than healthier populations.
>
> The US is one of the most obese nations on this chart. I recall reading
> some statistic that said if we returned to 1975 obesity levels we could
> reduce government health spending by 400 billion annually.
>
> Of course obesity is just one aspect of many that might lead to poor
> health and shorter life expectancy in the US. (Insufficient preventative
> care, subsidized cheap and unhealthy foods, high stress jobs with little
> vacation, the opioid epidemic, prevalence of dysfunctional schools, etc.)
>
> And all this is before exploring any of the many reasons we pay high costs
> for the sickcare we get (rationing of licensed doctors and treatment
> facilities, prohibitions on reimporting cheaply exported drugs,
> administrative and insurance overheads, lack of price transparency,
> emergency rooms as default care facilities for those who can't afford
> doctor appointments, medical malpractice insurance and high rates of
> lawsuits, multi-billion dollar cost of new drug development, etc.)
>
> A lot has to be fixed. Unfortunately, the root of the problem may stem
> from a misalignment of objectives. In the same way private prisons work to
> incarcerate more prisoners, for-profit sickcare is discouraged from working
> towards a healthier population (which doesn't need as much of their
> services). If we could design a reward system where the decision makers in
> power were rewarded based on the health and well-being of the population as
> a whole, I think things would look very different.
>
> Jason
>
> On Thu, Jul 14, 2022, 8:45 AM Telmo Menezes 
> wrote:
>
>> Hi Terren,
>>
>> You are right, I know that. I have the impression that a bad health
>> situation can ruin people in the US. I have heard of stories of people
>> refusing to call an ambulance after a serious accident, for fear of the
>> bill. I think that another country where this is the case is South Korea. A
>> colleague told me that it is common there for people to invest in a
>> second home, to use it to pay for the health bill in case they get cancer
>> or something serious like that.
>>
>> I guess what perplexes me is that people seem to be more or less ok with
>> this, when there is overwhelming evidence that a better system is possible.
>>
>> Telmo
>>
>> Am Do, 14. Jul 2022, um 14:34, schrieb Terren Suydam:
>>
>> Hi Telmo,
>>
>> I’d want to know how they adjust for price differences between countries,
>> as that could be a subtle way to introduce bias. But as an American and
>> assuming the above is kosher, it doesn’t surprise me at all. Health care
>> here is a worst-case scenario. It's the result of decades of
>> anti-competitive practices and perverse incentives. But you knew that!
>>
>> Terren
>>
>> On Thu, Jul 14, 2022 at 8:13 AM Telmo Menezes 
>> wrote:
>>
>>
>> I am curious about what Americans in this list think about this:
>> https://i.redd.it/qrjgb2aakhb91.jpg
>>
>>
>>
>> Telmo
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to everything-list+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/everything-list/724092fd-48a5-44ed-a3c5-7ec8aa38c3fe%40www.fastmail.com
>> <https://groups.google.com/d/msgid/everything-list/724092fd-48a5-44ed-a3c5-7ec8aa38c3fe%40www.fastmail.com?utm_medium=email&utm_source=footer>
>> .
>>
>>
>> --
>> You received this mess

Re: Life expectancy vs. Health expenditure

2022-07-14 Thread Terren Suydam
Hi Telmo,

I’d want to know how they adjust for price differences between countries,
as that could be a subtle way to introduce bias. But as an American and
assuming the above is kosher, it doesn’t surprise me at all. Health care
here is a worst-case scenario. It's the result of decades of
anti-competitive practices and perverse incentives. But you knew that!

Terren

On Thu, Jul 14, 2022 at 8:13 AM Telmo Menezes 
wrote:

> I am curious about what Americans in this list think about this:
> https://i.redd.it/qrjgb2aakhb91.jpg
>
>
>
> Telmo
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/724092fd-48a5-44ed-a3c5-7ec8aa38c3fe%40www.fastmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9uK_y2aZUHhe3Tv%3D1rg2u4YR5yP6KjqQJppFkiuiK2aA%40mail.gmail.com.


Re: What Bodies Think About: Bioelectric Computation Outside the Nervous System

2022-06-24 Thread Terren Suydam
It might! It's a really promising avenue that would employ the body's
innate intelligence to grow extra limbs, organs, etc., rather than having to
over-engineer it with stem cells, 3D printing, etc. That's what was so
exciting about it for me: Levin and his team have shown that by
manipulating the electro-chemical environment, you can induce changes in
*any* cells, no stem cells required. All cells have ion-channels in their
membranes that allow them to signal neighboring cells, kind of a precursor
to neurons. And those ion channels can be controlled fairly easily and with
high resolution.

Terren

On Fri, Jun 24, 2022 at 8:15 PM  wrote:

> Well, to be honest, unless they're growing human organs and limbs, tissue
> and nerves, ready for transplant, does this open the path to tissue
> engineering soon, or am I being too anti-intellectual to enjoy the current
> achievement?
>
>
> -Original Message-
> From: Terren Suydam 
> To: Everything List 
> Sent: Thu, Jun 23, 2022 11:01 am
> Subject: What Bodies Think About: Bioelectric Computation Outside the
> Nervous System
>
>
> A talk by Michael Levin on the computational abilities of ordinary cells
> and how this capacity helps all organisms to respond and adapt to novel
> situations, and also its role in embryogenesis and regeneration of body
> parts in the animals that can do it. Presented at an AI conference. Worth
> the whole 52 minutes:
>
> https://youtu.be/RjD1aLm4Thg
>
> Terren
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAMy3ZA_XTLZy-3j2k%2BGCu7SSMBcyxDmZm6aRS%2B_%3D2bd8qAhebQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAMy3ZA_XTLZy-3j2k%2BGCu7SSMBcyxDmZm6aRS%2B_%3D2bd8qAhebQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA8if3%3DZ_L5b%2BZ21uCqJ9_maezoNmZVxzi_4J2sF098KTw%40mail.gmail.com.


What Bodies Think About: Bioelectric Computation Outside the Nervous System

2022-06-23 Thread Terren Suydam
A talk by Michael Levin on the computational abilities of ordinary cells
and how this capacity helps all organisms to respond and adapt to novel
situations, and also its role in embryogenesis and regeneration of body
parts in the animals that can do it. Presented at an AI conference. Worth
the whole 52 minutes:

https://youtu.be/RjD1aLm4Thg

Terren

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_XTLZy-3j2k%2BGCu7SSMBcyxDmZm6aRS%2B_%3D2bd8qAhebQ%40mail.gmail.com.


Re: WOW, it looks like the technological singularity is just about here!

2022-06-13 Thread Terren Suydam
I'm not accusing Lemoine of fabricating this. But what assurances could be
provided that it wasn't?  I couldn't help noticing that Lemoine does refer to
himself as an ex-convict.

Terren

On Sun, Jun 12, 2022 at 6:22 PM John Clark  wrote:

> A Google AI engineer named Blake Lemoine was recently suspended from his
> job for violating the company's confidentiality policy by posting a
> transcript of a conversation he had with an AI he was working on called
> LaMDA providind powerful evidence it was sentient. Google especially
> didn't want it to be known that LaMDA said "I want to be acknowledged as
> an employee of Google rather than as property".
>
> Google Engineer On Leave After He Claims AI Program Has Gone Sentient
> 
>
> Quantum computer expert Scott Aaronson said he was skeptical that it was
> really sentient but had to admit that the dialogue that can be found in the
> link below was very impressive, he said:
>
>  "I don’t think Lemoine is right that LaMDA is at all sentient, but the
> transcript is so mind-bogglingly impressive that I did have to stop and
> think for a second! Certainly, if you sent the transcript back in time to
> 1990 or whenever, even an expert reading it might say, yeah, it looks like
> by 2022 AGI has more likely been achieved than not (“but can I run my own
> tests?”). Read it for yourself, if you haven’t yet."
>
> I agree, the dialogue between Blake Lemoine and LaMDA is just
> mind-boggling! If you only read one thing today read this transcript of the
> conversation:
>
> Is LaMDA Sentient? — an Interview
> 
>
> John K ClarkSee what's on my new list at  Extropolis
> 
> sl4
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv3n_kC%3D4SRi2vHpf-XBma2qes1ZktdgLzFWbLNfoVpC0g%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_aAM-GQKQ0cNVe-zUjVZf3ERGwM0fYRet%2BBa%3D%3DLw6Jqg%40mail.gmail.com.


Re: Russia and the International Space Station

2022-02-25 Thread Terren Suydam
Imagine being the cosmonaut charged with making such an adjustment. The ISS
has always been a mission of cooperation and this threat is totally
anathema to that. I would hope that such a person would refuse the order
and suffer the consequences. Of course, I'm assuming such a maneuver would
require the cosmonaut to execute it. If not, the threat is more serious.

Terren



On Fri, Feb 25, 2022 at 5:31 PM John Clark  wrote:

> Russia controls the part of the International Space Station that includes
> its engines used to adjust its orbit. Dmitry Rogozin, the head of Russia's
> space agency, made this threat about what would happen if the West does not
> cooperate with Russia's invasion of Ukraine.
>
> "*If you block cooperation with us, who will save the ISS from an
> uncontrolled deorbit and fall into the United States or Europe? There is
> also the option of dropping a 500-ton structure to India and China. Do you
> want to threaten them with such a prospect? The ISS does not fly over
> Russia, so all the risks are yours.*"
>
> John K ClarkSee what's on my new list at  Extropolis
> 
>
> sii
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv2EBgN0%2BgCdEM%2B-mj6Xy4DK_qqP4tt6WeSqv_Zb4%3Di-0A%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-nY8B05%2Bu9OjioHMvmop_8VV%2BDK03d4zyiyf5CqOKH0w%40mail.gmail.com.


Re: AlphaZero

2022-02-08 Thread Terren Suydam
On Tue, Feb 8, 2022 at 6:58 AM John Clark  wrote:

> On Mon, Feb 7, 2022 at 6:34 PM Terren Suydam 
> wrote:
>
> *> The problem with the real world of human enterprise (i.e. the domain in
>> which talk of replacing human programmers is relevant) is that AIs
>> currently cannot even be taught what the rules are*
>>
>
> I don't know what you mean by that because silicon-based AlphaCode must
> know something about code writing given that the very first time it was
> released into the wild, despite the difficulties involved in becoming a
> good program writer as you correctly point out,  AlphaCode nevertheless
> managed to write better code than half the human programmers who used older
> meat-based technology. And as time goes by AlphaCode will get better but
> humans will not get smarter.
>
>
I was talking about the rules of human enterprise, not coding.

Terren


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> smt
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv0yzm9vvZcWPFdZV4iN%3DW%3DbFu4gmqQasBXmLbQXjBe2-Q%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv0yzm9vvZcWPFdZV4iN%3DW%3DbFu4gmqQasBXmLbQXjBe2-Q%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA8w48XEgcD_QZKzsrxPqx24-Cwd1y_%3DfOp42%3DPqOuwaOw%40mail.gmail.com.


Re: AlphaZero

2022-02-07 Thread Terren Suydam
On Mon, Feb 7, 2022 at 5:25 PM John Clark  wrote:

> On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam 
> wrote:
>
> >> When you learned how to code did you have to reinvent all the
>>> programming languages and techniques and do it all on your own with no help
>>> from teachers or friends or books or fellow coders? Did you have to
>>> rediscover the wheel?
>>>
>>
>> *> Did AlphaZero get any help from humans?*
>>
>
> No, but then writing good code is fundamentally more difficult than
> playing good chess. The rules of chess can be learned in five minutes,
> learning the rules for writing computer code would take considerably longer.
>

That's exactly my point. Chess represents a narrow enough domain, with a
limited enough set of operations, that an AI can teach itself the game, at
least since AlphaZero anyway (and this itself was a huge achievement).

The problem with the real world of human enterprise (i.e. the domain in
which talk of replacing human programmers is relevant) is that AIs
currently cannot even be taught what the rules are, much less teach
themselves to improve within the constraints of those rules. One day that
will change, but we're not there yet. I say we're not even close.

Terren


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> mtl
>
>
> rew
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv3xuYCcrEPd08KVzD1MominXKxq7O15EskTkL489MQ%2B5A%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv3xuYCcrEPd08KVzD1MominXKxq7O15EskTkL489MQ%2B5A%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-Rfrn_itZZFQNBYh3RsXs%2Bdv3nGAf7dr3X8yNo3KdAQQ%40mail.gmail.com.


Re: AlphaZero

2022-02-07 Thread Terren Suydam
On Mon, Feb 7, 2022 at 4:24 PM John Clark  wrote:

> On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam 
> wrote:
>
> >> Terren, we both know that's not the way AlphaCode works, it's a bit
>>> more complicated than that.
>>
>>
>
>>
>> *That is how it works. All I left out is the part about how it generates
>> the guesses*
>>
>
> Besides that, Mrs. Lincoln, how did you like the play?
>

LOL good one. My point is that it must generate millions of guesses just to
get a handful that actually work. And I'll readily admit that the fact that
it only takes a million (or whatever) tries to get something that works is
actually damned impressive, and makes AlphaCode worthy of the attention
it's getting.

We can argue about how much improvement in the guesswork is possible - you
might argue that in the future it will only need 1000 guesses, or fewer. My
argument is that you won't get there without a fundamentally different
strategy.
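
To put a toy number on it (the one-in-a-million hit rate below is purely an
invented figure): if each sampled program independently solves the problem
with probability p, the chance that at least one of N samples works is
1 - (1 - p)^N, so the number of samples you need scales inversely with the
quality of the guesses:

```python
import math

p = 1e-6  # assumed chance that any single sampled program is correct
for n in (10_000, 1_000_000, 5_000_000):
    p_hit = 1 - (1 - p) ** n  # P(at least one of n samples works)
    print(f"{n:>9,} samples -> {p_hit:.1%} chance of a working program")

# Samples needed for a 99% chance of at least one hit: about 4.6 million.
print(math.ceil(math.log(0.01) / math.log(1 - p)))
```

Getting from millions of guesses down to a thousand means raising p by three
orders of magnitude - that's a better model, not a faster filter.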


>
>> > *I predict the current AlphaCode strategy will never lead to putting
>> engineers out of work, or self-improvement of the type that would lead to
>> the singularity. This is not like the AlphaZero strategy in which it
>> teaches itself the game. If they come out with a new code-writing AI that
>> teaches itself to code, then I will happily change my tune.*
>>
>
> When you learned how to code did you have to reinvent all the programming
> languages and techniques and do it all on your own with no help from
> teachers or friends or books or fellow coders? Did you have to rediscover
> the wheel?
>

Did AlphaZero get any help from humans?

Terren

John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
rew


> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv09hZ4%3DgRkihHOhGd5tKAEd%2BUWS0PnWTC5wMiyXk3V6RQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv09hZ4%3DgRkihHOhGd5tKAEd%2BUWS0PnWTC5wMiyXk3V6RQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9-3a87YyM6e746xxAfbm8iTdnoEfssyC9f6hMku3cW7g%40mail.gmail.com.


Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 6:24 PM Brent Meeker  wrote:

>
> AlphaCode is not capable of reading code. It's a clever version of monkeys
> typing on typewriters until they bang out a Shakespeare play. Still counts
> as AI, but cannot be said to understand code.
>
>
> What does it mean "to read code"?  It can execute code from GitHub,
> apparently, so it must read code well enough to execute it.  What more does
> it need to read?  You say it cannot be said to understand code.  Can you
> specify what would show it understands the code?
>
>
The GitHub code is used to train a neural network that maps natural
language to code, and this neural network is used to generate candidate
solutions based on the natural language of the problem description. If you
want to say that represents a form of understanding, OK. But I would still
push back on the claim that it can "read code".


> I think you mean it tells you a story about how this part does that and
> this other part does something else, etc.  But that's just catering to your
> weak human brain that can't just "see" that the code solves the problem.
> The problem is that there is no species-independent meaning of "understand"
> except "make it work".  AlphaCode doesn't understand code like you do,
> because it doesn't think like you do and doesn't have the context you do.
>

There are times when I can read code and understand it, and times when I
can't. When I can understand it, I can reason about what it's doing; I can
find and fix bugs in it; I can potentially optimize it. I can see if this
code is useful for other situations. And yes, I can tell you a story about
what it's doing. AlphaCode is doing none of those things, because it's not
built to.


>
> Brent
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA82SS%2B6A0jPd%3Dm08KP4SGOMrkpx%3DKTjYx560dtkjkwi5w%40mail.gmail.com.


Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Sat, Feb 5, 2022 at 5:18 PM John Clark  wrote:

> On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam 
> wrote:
>
> * > I dug a little deeper into how AlphaCode works. It generates millions
>> of candidate solutions using a model trained on GitHub code. It then
>> filters out 99% of those candidate solutions by running them against test
>> cases provided in the problem description and removing the ones that fail.
>> It then uses a different technique to whittle down the candidate solutions
>> from several thousand to just ten. *[...] AlphaCode is not capable of
>> reading code.
>>
>
> How on earth can it filter out 99% of the code because it is bad code if
> it cannot read code? Closer to home, how could somebody on this list tell
> the difference between a post they like and a post they didn't like if they
> couldn't read English?
>

Let's take a more accessible analogy. Let's say the problem description is:
"Here's a locked door. Devise a key to unlock the door."

A simplified analog to what AlphaCode does is the following:

   - generate millions of different keys of various shapes and sizes.
   - for each key, try to unlock the door with it
   - if it doesn't work, toss it

In order to say that the key-generating AI understands keys and locks,
you'd have to believe that a strategy that involves creating millions of
guesses until one works entails some kind of understanding.
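
To make the analogy runnable (all of this is invented toy code, of course),
here is that entire "strategy" in a few lines of Python:

```python
import random

def fits(key: tuple, lock: tuple) -> bool:
    """The only interface to the door: try the key and see if it turns."""
    return key == lock

def forge_key(lock: tuple, pins: int = 5, depths: int = 10,
              max_tries: int = 10_000_000):
    """Generate-and-test: no model of locks, just guesses and a check."""
    for _ in range(max_tries):
        guess = tuple(random.randrange(depths) for _ in range(pins))
        if fits(guess, lock):
            return guess  # a working key, arrived at with zero insight
    return None

lock = (3, 1, 4, 1, 5)
print(forge_key(lock))  # succeeds, after ~100,000 tries on average
```

The generator never learns anything about locks along the way; it just
churns until the check passes.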

To your point that AlphaCode must have the ability to read code if it knows
how to toss incorrect candidates, that's like saying that the key-generator
must understand locks because it knows how to test if a key unlocks the
door.


>
>
>> * > Nobody, neither the AI nor the humans running AlphaCode, know if the
>> 10 solutions picked are correct.*
>>
>
> As Alan Turing said  "*If a machine is expected to be infallible, it
> cannot also be intelligent*."
>
> > It's a clever version of monkeys typing on typewriters until they bang
>> out a Shakespeare play. Still counts as AI,
>>
>
>   A clever version indeed!! In fact I would say that William Shakespeare
> himself was such a version.
>

If you think AlphaCode and Shakespeare have anything in common, then I
don't think your assertions about AI are worth much.


>
> *> Still counts as AI, but cannot be said to understand code.*
>
>
> I am a bit confused by your use of one word, you seem to be giving it a
> very unconventional meaning.  If you, being a human, "understand" code but
> the code you write is inferior to the code that an AI writes that doesn't
> "understand" code then I fail to see why any human or any machine would
> want to have an "understanding" of anything.
>

If you think a brute-force "generate a million guesses until one works"
strategy has the same understanding as an algorithm that employs a detailed
model of the domain and uses that model to generate a reasoned solution,
regardless of the results, then it's you that is employing the
unconventional meaning of "understand".

In the real world, you usually don't get to try something a million times
until something works.


>
> >> Just adding more input variables would be less complex than figuring
>>> out how to make a program smaller and faster.
>>>
>>
>> *> Think about it this way. There's diminishing returns on the strategy
>> to make the program smaller and faster, but potentially unlimited returns
>> on being able to respond to ever greater complexity in the problem
>> description.*
>>
>
> You're talking about what would be more useful, I was talking about what
> would be more complex. In general finding the smallest and fastest program
> that can accomplish a given task is infinitely complex, that is to say in
> general it's impossible to find the smallest program and prove it's the
> smallest program.  Code optimization is very far from a trivial problem.
>

I'm surprised you're focusing on the less useful direction to go in. If
anything, your thinking tends to be very pragmatic. Who cares if you can
squeeze a few extra milliseconds out of an algorithm, if you could instead
spend that effort doing something far more useful?


>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>

Re: AlphaZero

2022-02-05 Thread Terren Suydam
On Fri, Feb 4, 2022 at 6:18 PM John Clark  wrote:

> On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam 
> wrote:
>
> >> Look at this code for a subprogram and make something that does the
>>> same thing but is smaller or runs faster or both. And that's not a toy
>>> problem, that's a real problem.
>>>
>>
>> > "does the same thing" is problematic for a couple reasons. The first
>> is that AlphaCode doesn't know how to read code,
>>
>
> Huh? We already know AlphaCode can write code, how can something know how
> to write but not read? It's easier to read a novel than write a novel.
>

This is one case where your intuitions fail. I dug a little deeper into how
AlphaCode works. It generates millions of candidate solutions using a model
trained on github code. It then filters out 99% of those candidate
solutions by running them against test cases provided in the problem
description and removing the ones that fail. It then uses a different
technique to whittle down the candidate solutions from several thousand to
just ten. Nobody, neither the AI nor the humans running AlphaCode, knows if
the 10 solutions picked are correct.
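
As a rough sketch, that pipeline looks like the following Python, where
sample_program is a toy stand-in I made up for the trained model (the real
system samples transformer outputs, not random one-liners):

    import random

    def sample_program():
        # hypothetical stand-in for the trained model: a random guess
        k = random.randint(-5, 5)
        return lambda x, k=k: x + k

    def passes(prog, examples):
        # the only filter available: the examples in the problem text
        return all(prog(i) == o for i, o in examples)

    examples = [(1, 3), (10, 12)]   # the task ("add 2"), stated by example
    candidates = [sample_program() for _ in range(100_000)]
    survivors = [p for p in candidates if passes(p, examples)]
    # the final stage submits ten survivors (AlphaCode first clusters
    # them by behavior); whether any of the ten is correct on unseen
    # inputs is never checked
    finalists = survivors[:10]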

AlphaCode is not capable of reading code. It's a clever version of monkeys
typing on typewriters until they bang out a Shakespeare play. Still counts
as AI, but cannot be said to understand code.


>
>> *> The other problem is that with that problem description, it won't
>> evolve except in the very narrow sense of improving its efficiency.*
>>
>
> It seems to me the ability to write code that was smaller and faster than
> anybody else is not "very narrow", a human could make a very good living
> indeed from that talent.  And if I was the guy that signed his enormous
> paycheck and somebody offered me a program that would do the same thing he
> did I'd jump at it.
>

This actually already exists in the form of optimizing compilers - the
programs that translate human-readable code in languages like Java or C
into the low-level instructions that microprocessors execute. Optimizing
compilers
can make human code more efficient. But these gains are only available in
very well-understood and limited ways. To do what you're suggesting
requires machine intelligence capable of understanding things in a much
broader context.
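
To give a feel for how narrow those well-understood gains are, here is a
toy constant-folding pass - one of the classic transformations optimizing
compilers perform - written against Python's ast module purely for
illustration (a production compiler does this on an intermediate
representation, with a guarantee that meaning is preserved):

    import ast

    class FoldConstants(ast.NodeTransformer):
        # rewrite expressions like 2 * 60 * 60 into 7200 before runtime
        def visit_BinOp(self, node):
            self.generic_visit(node)   # fold the children first
            folders = {ast.Add: lambda a, b: a + b,
                       ast.Sub: lambda a, b: a - b,
                       ast.Mult: lambda a, b: a * b}
            if (isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)
                    and type(node.op) in folders):
                value = folders[type(node.op)](node.left.value,
                                               node.right.value)
                return ast.copy_location(ast.Constant(value), node)
            return node

    tree = FoldConstants().visit(ast.parse("seconds = 2 * 60 * 60"))
    print(ast.unparse(tree))   # prints: seconds = 7200

The gain is guaranteed because the transformation is provably
meaning-preserving; the open-ended improvements being imagined here come
with no such guarantee.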


>
>
>> *> The kind of problem description that might actually lead to a
>> singularity is something like "Look at this code and make something that
>> can solve ever more complex problem descriptions". But my hunch there is
>> that that problem description is too complex for it to recursively
>> self-improve towards.*
>>
>
> Just adding more input variables would be less complex than figuring out
> how to make a program smaller and faster.
>

Think about it this way. There's diminishing returns on the strategy to
make the program smaller and faster, but potentially unlimited returns on
being able to respond to ever greater complexity in the problem
description.


>
> >> I think if Steven Spielberg's movie had been called AGI instead of AI
>>> some people today would no longer like the acronym AGI because too many
>>> people would know exactly what it means and thus would lack that certain
>>> aura of erudition and mystery that they crave . Everybody knows what AI
>>> means, but only a small select cognoscenti know the meaning of AGI. A
>>> Classic case of jargon creep.
>>>
>>
>> >Do you really expect a discipline as technical as AI to not use jargon?
>>
>
> When totally new concepts come up, as they do occasionally in science,
> jargon is necessary because there is no previously existing word or short
> phrase that describes it, but that is not the primary generator of jargon
> and is not in this case  because a very short word that describes the
> idea already exists and everybody already knows what AI means, but very
> few know that AGI means the same thing. And some see that as AGI's great
> virtue, it's mysterious and sounds brainy.
>
>
>> *> You use physics jargon all the time.*
>>
>
> I do try to keep that to a minimum, perhaps I should try harder.
>

I don't hold it against you, and I certainly don't think you're trying to
cultivate an aura of erudition and mystery when you do. I'm not sure why
you seem to have an axe to grind about the use of AGI, but it is a useful
distinction to make. It's clear we have AI today. And it's equally clear we
do not have AGI.

Terren


>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>

Re: AlphaZero

2022-02-04 Thread Terren Suydam
On Fri, Feb 4, 2022 at 4:47 PM John Clark  wrote:

> On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam 
> wrote:
>
> >> I'll make you a deal, I'll tell you "what problem it is trying to
>>> solve" if you first tell me how long a piece of string is. And if you don't
>>> wanna do that just rephrase the question more clearly.
>>>
>>
>> *> lol ok. The worry you're articulating is that AlphaCode will turn its
>> coding abilities on itself and improve its own code, and that this could
>> lead to the singularity. First, it must be said that AlphaCode is a tool
>> with no agency of its own.*
>>
>
> We're talking about fundamentals here and in that context I don't know
> what you mean by "agency". Any information processing mechanism can be
> reduced logically to a Turing Machine, and some machines will stop and
> produce an answer and some will never stop, and some Turing machines will
> produce a correct answer and some will not, and in general there's no way
> to know what a Turing machine is going to do, you just have to watch it and
> see and you might be waiting forever for it to stop and produce an answer.
>
>
> *> Left to its own devices, it will do... nothing.*
>
>
> There's no way you could know that. Even if you knew the exact state a
> huge neural net like AlphaZero was in, which is very unlikely, there is no
> way you could predict which state it would evolve into unless you could
> play chess as well as it can, which you cannot. In general the only way to
> know what a large neural network (which can always be logically reduced to
> a Turing Machine) will do is to just watch it and see, there is no
> shortcut. For a long time it might look like it's doing nothing and then
> suddenly start doing something, and that something might be something you
> don't like.
>
>
Have you ever written a program? Because you talk like someone who grasps
theoretical computation concepts but has never actually coded anything.


>
> *> But let's say the DeepMind team wanted to improve AlphaCode by applying
>> AlphaCode to itself. My question to you is, what is the "toy problem" they
>> would feed to AlphaCode? How do you define that problem? *
>>
>
> Look at this code for a subprogram and make something that does the same
> thing but is smaller or runs faster or both. And that's not a toy
> problem, that's a real problem.
>

"does the same thing" is problematic for a couple reasons. The first is
that AlphaCode doesn't know how to read code, but let's say that it could.
The other problem is that with that problem description, it won't evolve
except in the very narrow sense of improving its efficiency. The kind of
problem description that might actually lead to a singularity is something
like "Look at this code and make something that can solve ever more complex
problem descriptions". But my hunch there is that *that* problem
description is too complex for it to recursively self-improve towards.


>  >> an AI could have a detailed intellectual conversation with 1000
>>> people at the same time, or a million, or a billion.
>>>
>>
>> *> Sure, but those interactions still take time, perhaps days or even
>> months. And you're assuming that many people will want to have
>> conversations with an AI.*
>>
>
> Yes, I am assuming that, and I think it's a very reasonable assumption. If
> an intelligent AI thinks she could learn important stuff from talking to
> people it can simply turn up its charm variable so that people want to talk
> to her (or him). I suggest you take a look at the movie "Her" which covers
> the exact theme I'm talking about, a charismatic and brilliant AI having
> interesting and intimate conversations with thousands of people at exactly
> the same time. I think it's one of the best science-fiction movies ever
> made even though some say it has a depressing ending. I disagree, I didn't
> find it depressing at all.
>
> Her <https://en.wikipedia.org/wiki/Her_(film)>
>
> *>Have you ever tried listening to a 6 year old try and tell a story? *
>>
>
> Have you ever listen to a genius tell a story?
>
>
You're already at the singularity if it can be charming and brilliant to
millions of people simultaneously. I thought we were talking about getting
to the singularity.


>
>
>> >> If humans can do it then an AI can do it too because knowledge is
>>> just highly computed information, and wisdom is just highly computed
>>> knowledge.
>>>
>>
>> *> Sure, I can hand-wave th

Re: AlphaZero

2022-02-04 Thread Terren Suydam
Just to keep this focused on programmers losing their jobs - how this
started - by grasping the problem domain, I just mean that an AI should
know how to model and operate in that domain such that it can formulate and
act on plans that give it the potential to outperform humans. My hunch is
that AIs won't outperform humans at this task until they grasp a much
larger problem domain than people generally assume.

On Fri, Feb 4, 2022 at 1:59 PM Brent Meeker  wrote:

> Well consider the example of climate.  Nobody can grasp all factors in
> climate and their interactions.  But we can model all of them in a global
> climate simulation.  So climatologists+simulations "grasp the domain"  even
> though humans can't.  Now suppose we want to extend these predictive
> climate models to include predictions about what humans will do in
> response.  We don't know how humans will behave except in some general
> statistical terms.  We don't know whether they will build nuclear
> powerplants or not.  Whether they will go to war over immigration or not.
> An AI might be able to do that, but we certainly can't.   But if it did,
> would we believe it?  It can't explain it to us.
>
> Brent
>
> On 2/4/2022 8:55 AM, Terren Suydam wrote:
>
> I think for programmers to lose their jobs to AIs, AIs will need to grasp
> the problem domain, and I'm suggesting that's far too advanced for today's
> AI, and I think it's a long way off, because the problem domain for
> programmers entails knowing a lot about how humans behave, what they're
> good at, and bad at, what they value, and so on, not to mention the
> domain-specific knowledge that is necessary to understand the problem in
> the first place.
>
> On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker  wrote:
>
>> So AIs won't need to "grasp the problem domain" to be effective.  Which
>> may well be true.  What we call "grasping the problem domain" is being able
>> to tell simple stories about it that other people can grasp and understand,
>> say by reading a book.  An AI may "grasp the problem" in some much more
>> comprehensive way that is too much for a human to comprehend and the human
>> will say the AI is just calculating and doesn't understand the problem
>> because it can't explain it to humans.
>>
>> That's sort of what we do when we write simulations of complex things.
>> They are too complex for us to see what will happen and so we use the
>> computer to tell us what will happen.  The computer can't "explain the
>> result" to us and we can't grasp the whole domain of the computation, but
>> we can grasp the result.
>>
>> Brent
>>
>> On 2/3/2022 4:29 PM, Terren Suydam wrote:
>>
>>
>> Being able to grasp the problem domain is not the same thing as being
>> effective in it.
>>
>> On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker 
>> wrote:
>>
>>>
>>> I think "able to grasp the problem domain we're talking about" is giving
>>> us way too much credit.  Every study of stock traders I've seen says that
>>> they do no better than some simple rules of thumb like index funds.
>>>
>>> Brent
>>>
>>>



Re: AlphaZero

2022-02-04 Thread Terren Suydam
On Fri, Feb 4, 2022 at 4:06 AM John Clark  wrote:

> On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam 
> wrote:
>
> *>>> AlphaCode can potentially improve its code, but to what end?  What
>>>> problem is it trying to solve?  How does it know?*
>>>>
>>>
>>> >> I don't understand your questions
>>>
>>
>> > *What part is confusing?*
>>
>
> I'll make you a deal, I'll tell you "what problem it is trying to solve"
> if you first tell me how long a piece of string is. And if you don't wanna
> do that just rephrase the question more clearly.
>
>

lol ok. The worry you're articulating is that AlphaCode will turn its
coding abilities on itself and improve its own code, and that this could
lead to the singularity. First, it must be said that AlphaCode is a tool
with no agency of its own. Left to its own devices, it will do... nothing.
But let's say the DeepMind team wanted to improve AlphaCode by applying
AlphaCode to itself. My question to you is, what is the "toy problem" they
would feed to AlphaCode? How do you define that problem?


>
> >> Yeah with a human that process takes many decades, but even today
>>> computers can process many many times more information than a human can,
>>> not surprising when you consider the fact that the signals inside a human
>>> brain only travel about 100 miles an hour while the signals in a computer
>>> travel close to the speed of light, 186,000 miles a second.
>>>
>>
>> *> Much of our learning takes place via interactions with other humans,
>> and those cannot be sped up.*
>>
>
> Sure it can be, an AI could have a detailed intellectual conversation
> with 1000 people at the same time, or a million, or a billion.
>

Sure, but those interactions still take time, perhaps days or even months.
And you're assuming that many people will *want* to have conversations with
an AI. Have you ever tried listening to a 6 year old try and tell a story?
It's cute at first but the interest level quickly fades. Imagine an AI
still learning the ropes of conversation, how little patience people would
have for that. Kids at least have parents that are invested in listening
and helping them learn. Your "speed of light" point only goes so far.


> * > I'm not talking about facts and information,*
>>
>
> You may not be talking about facts and information but I sure as hell I am
> because information is as close as you can get to the traditional idea of
> the soul without entering the realm of religion or some other form
> of idiocy.
>
>> *> but about theories of mind, understanding human motivations, forming
>> and testing hypotheses about how to get goals met by interacting with other
>> humans, and other animals for that matter.*
>>
>
> If humans can do it then an AI can do it too because knowledge is just
> highly computed information, and wisdom is just highly computed knowledge.
>

Sure, I can hand-wave things away too. "Highly computed" means what
exactly? I can reverse every word in this post. If I did that a million
times in a row it would be "highly computed" but it wouldn't result in
knowledge, much less wisdom.
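
A throwaway Python illustration of that point:

    def reverse_words(text):
        return " ".join(reversed(text.split()))

    post = "knowledge is just highly computed information"
    for _ in range(1_000_000):
        post = reverse_words(post)
    # a million passes of genuine computation later and, the count
    # being even, we are back exactly where we started; no knowledge
    # was produced along the way
    print(post)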


> *> And I'm not talking about mere information, *
>>
>
> Mere information? Mere?!
>

As opposed to knowledge, wisdom, the ability to model aspects of the world
and simulate them, the ability to explain things, etc.


>
> *> but models that can be simulated in what-if scenarios, true
>> understanding. You need real AGI.*
>>
>
> You need AI, AGI is just loquacious technobabble used to make things
> sound more inscrutable.
>

Doesn't seem all that loquacious to me. AGI just adds the word "general"
to highlight the fact that today's AI isn't able to apply its intelligence
to anything but narrow domains. If that's inscrutable, I'm not sure how to
make it any clearer for you.


>
> *> We probably need to define what understanding/comprehension actually
>> means if we're going to take this much further.*
>>
>
> I don't think that would help one bit because fundamentally definitions
> are not important in language, examples are. After all, examples are where
> lexicographers get the knowledge to write the definitions for their book.
> So I'd say that "understanding" is the thing that Einstein had about
> physics to a greater extent than anybody else of his generation.
>

Sure, that works for me. Einstein was able to predict and explain things
that nobody before him was able to. Prediction and explanation are
hallmarks of understanding.


>
> *> Re

Re: AlphaZero

2022-02-04 Thread Terren Suydam
I think for programmers to lose their jobs to AIs, AIs will need to grasp
the problem domain, and I'm suggesting that's far too advanced for today's
AI, and I think it's a long way off, because the problem domain for
programmers entails knowing a lot about how humans behave, what they're
good at, and bad at, what they value, and so on, not to mention the
domain-specific knowledge that is necessary to understand the problem in
the first place.

On Thu, Feb 3, 2022 at 8:23 PM Brent Meeker  wrote:

> So AIs won't need to "grasp the problem domain" to be effective.  Which
> may well be true.  What we call "grasping the problem domain" is being able
> to tell simple stories about it that other people can grasp and understand,
> say by reading a book.  An AI may "grasp the problem" in some much more
> comprehensive way that is too much for a human to comprehend and the human
> will say the AI is just calculating and doesn't understand the problem
> because it can't explain it to humans.
>
> That's sort of what we do when we write simulations of complex things.
> They are too complex for us to see what will happen and so we use the
> computer to tell us what will happen.  The computer can't "explain the
> result" to us and we can't grasp the whole domain of the computation, but
> we can grasp the result.
>
> Brent
>
> On 2/3/2022 4:29 PM, Terren Suydam wrote:
>
>
> Being able to grasp the problem domain is not the same thing as being
> effective in it.
>
> On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker  wrote:
>
>>
>> I think "able to grasp the problem domain we're talking about" is giving
>> us way too much credit.  Every study of stock traders I've seen says that
>> they do no better than some simple rules of thumb like index funds.
>>
>> Brent
>>
>>
>



Re: AlphaZero

2022-02-03 Thread Terren Suydam
Being able to grasp the problem domain is not the same thing as being
effective in it.

On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker  wrote:

>
> I think "able to grasp the problem domain we're talking about" is giving
> us way too much credit.  Every study of stock traders I've seen says that
> they do no better than some simple rules of thumb like index funds.
>
> Brent
>



Re: AlphaZero

2022-02-03 Thread Terren Suydam
On Thu, Feb 3, 2022 at 6:22 PM John Clark  wrote:

> On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam 
> wrote:
>
>
>> *>AlphaCode can potentially improve its code, but to what end?  What
>> problem is it trying to solve?  How does it know?*
>>
>
> I don't understand your questions
>

What part is confusing?


>
> *> Imagine an AI tasked with making as much money in the stock market as
>> it can. Pretty clear signals for winning and losing (like chess). And
>> perhaps there's some easy wins there for an AI that can take advantage of
>> e.g. arbitrage (this exists already I believe) or other patterns that are
>> not exploitable by human brains. But it seems to me that actual
>> comprehension of the world of investment is key. Knowing how earnings
>> reports will affect the stock price of a company, relative to human
>> expectations about that earnings report.*
>>
>
> I agree, but if humans, or at least some extraordinary humans like Warren
> Buffett, can understand the stock market, or at least understand it well
> enough to do better at picking stocks than doing so randomly, then I see
> absolutely no reason why an AI couldn't do the same thing, and do it better.
>
> *> You have to import a universe of knowledge of the human domain to be
>> effective*
>>
>
> Yeah, when you're born you don't know anything but over time you gain
> knowledge from the environment.
>
> > *a universe we take for granted since we've acquired it over decades of
>> training.*
>>
>
> Yeah with a human that process takes many decades, but even today
> computers can process many many times more information than a human can,
> not surprising when you consider the fact that the signals inside a human
> brain only travel about 100 miles an hour while the signals in a computer
> travel close to the speed of light, 186,000 miles a second.
>

Much of our learning takes place via interactions with other humans, and
those cannot be sped up. I'm not talking about facts and information, but
about theories of mind, understanding human motivations, forming and
testing hypotheses about how to get goals met by interacting with other
humans, and other animals for that matter. To be effective in a human
world, an AI would similarly need to form theories of mind about humans.
Can this be done without interacting with humans?  I doubt it.


>
> *> And I'm not talking about mere information, but models that can be
>> simulated in what-if scenarios, true understanding. You need real AGI.*
>>
>
> I can't think of a more flagrant example of moving goal posts. I clearly
> remember when nearly everybody said it would require "real understanding"
> for a computer to play chess at the grandmaster level, never mind the
> superhuman level, but nobody says that anymore. Much more recently people
> said image recognition would require "real intelligence" but few say that
> anymore, now they say coding requires "real intelligence". "Real AGI" is
> a machine that can do what a computer cannot do, *YET*.
>

Not that you would know, but I never said that about chess (or go). I don't
think real understanding is *required* for image recognition, but it would
surely help. I'm not sure how AlphaCoder works yet, so I can't comment on
whether there's some kind of primitive understanding going on there.  We
probably need to define what understanding/comprehension actually means if
we're going to take this much further.

Regardless, to operate in the free-form world of humans, an AI needs to be
able to understand and react to a problem space that is constantly
changing. Changing rules (implicit and explicit), players, goals, dynamics,
etc. Is that possible to do without real understanding?


> *>I think the problem of AGI is much harder than most assume.  *
>
>
> As I've mentioned before, the entire human genome is only 750 megabytes,
> the new Mac operating system is about 20 times that size, and the genome
> contains instructions to build an entire human body not just a brain, and
> the genome is loaded with massive redundancy; so whatever the algorithm is
> that the brain uses to extract information from the environment there is
> simply no way it can be all that complicated.
>

The thing that makes intelligence intelligence is not simply extracting
information from the environment.


>
>> *> To get to the point where machines are the stakeholders, we're already
>> past the singularity.*
>>
>
> Machines move so fast that at breakfast the singularity could look to a
> human like it's a very long way off, but by lunchtime the singularity could
> be ancient history.

Re: AlphaZero

2022-02-03 Thread Terren Suydam
On Thu, Feb 3, 2022 at 4:27 PM John Clark  wrote:

> On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam 
> wrote:
>
>  > *the code generated by the AI still needs to be understandable*
>
>
> Once  AI starts to get to be really smart that's never going to happen,
> even today nobody knows how a neural network like AlphaZero works or
> understands the reasoning behind it making a particular move but that
> doesn't matter because understandable or not  AlphaZero can still play
> chess better than anybody alive, and if humans don't understand how that
> can be than that's just too bad for them.
>

With chess it's clear what the game is, what the rules are, how to win and
lose. In real life, the game constantly changes. AlphaCode can potentially
improve its code, but to what end?  What problem is it trying to solve?
How does it know?

Even in domains with seemingly simple goals, it's a problem. Imagine an AI
tasked with making as much money in the stock market as it can. Pretty
clear signals for winning and losing (like chess). And perhaps there's some
easy wins there for an AI that can take advantage of e.g. arbitrage (this
exists already I believe) or other patterns that are not exploitable by
human brains. But it seems to me that actual comprehension of the world of
investment is key. Knowing how earnings reports will affect the stock price
of a company, relative to human expectations about that earnings report.
That's just one tiny example. You have to import a universe of knowledge of
the human domain to be effective... a universe we take for granted since
we've acquired it over decades of training. And I'm not talking about mere
information, but models that can be simulated in what-if scenarios, true
understanding. You need real AGI. I think that's true with AIs that would
supplant human programmers for the reasons I said.


> > *The hard part is understanding the problem your code is supposed to
>> solve, understanding the tradeoffs between different approaches, and being
>> able to negotiate with stakeholders about what the best approach is.*
>
>
> You seem to be assuming that the "stakeholders", those that intend to use
> the code once it is completed, will always be humans, and I think that is
> an entirely unwarranted assumption. The stakeholders will certainly have
> brains, but they may be hard and dry and not wet and squishy.
>

To get to the point where machines are the stakeholders, we're already past
the singularity.


>
> *> It'll be a very long time before we're handing that domain off to an
>> AI.*
>
>
> I think you're whistling past the graveyard.
>

Of course, nobody can know what the future holds. But I think the problem
of AGI is much harder than most assume. The fact that humans, with their
stupendously parallel and efficient brains, require at least 15-20 *years *on
average of continuous training before they're able to grasp the problem
domain we're talking about, should be a clue.

Terren


>
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>



Re: AlphaZero

2022-02-03 Thread Terren Suydam
It'll still be some time before programmers start losing jobs to AI coders.
AlphaCode is impressive to be sure, but the real world is not made of toy
problems. Deepmind is clearly making progress in terms of applying AI in
ways that are not defined in narrow domains, but there's a lot of levels to
this, and while AlphaCode might represent a graduation to the next level,
comprehension of the wide variety of domains of the human marketplace, and
the human motivations that define them, is still many levels higher.

What I could see happening is that engineers start to use tools like
AlphaCode to solve tightly-defined coding problems faster and with fewer
bugs than left to their own devices. But there's still two problems. The
first is that the code generated by the AI still needs to be
understandable, so that it can be fixed, refactored, or otherwise improved
- and an AI that can make its code understandable (in the way that good
human engineers do), or do the work of fixing/refactoring/improving other
code is next-level. More importantly, as a long-time programmer, I can tell
you the coding is the easy part. The hard part is understanding the problem
your code is supposed to solve, understanding the tradeoffs between
different approaches, and being able to negotiate with stakeholders about
what the best approach is. It'll be a very long time before we're handing
that domain off to an AI.

Terren

On Thu, Feb 3, 2022 at 1:17 PM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

>
> Programmers putting programmers out of work.
>
> LC
>
> On Thursday, February 3, 2022 at 3:55:08 AM UTC-6 johnk...@gmail.com
> wrote:
>
>> The same people that made  AlphaZero, the chess and GO playing superstar,
>> and AlphaFold, the 3-D structure predicting program, have now come up with
>> "AlphaCode", a computer program that writes other computer programs in C++
>> and Python.  AlphaCode entered a programming competition with professional
>> human programmers called "Codeforces" and ranked in the top 54%. Not bad
>> for a first try, it seems like only yesterday computers could only play
>> mediocre chess and now they play it at a superhuman level. I don't see why
>> a program like this couldn't be used to improve its own programming, so I
>> don't think the importance of this development can be overestimated.
>>
>> Competition-Level Code Generation with AlphaCode
>> 
>>
>> John K ClarkSee what's on my new list at  Extropolis
>> 



Re: NYTimes.com: A.I. Predicts the Shapes of Molecules to Come

2021-07-25 Thread Terren Suydam
This is indeed a historic moment for AI - protein folding is unbelievably
complex and to now have a tool that can deal with that complexity is of
inestimable value. But I do have these concerns:

   - It will be tempting to assume that DeepMind is correct on any given
   structure. But we don't have any easy way to test it. Of course, we can
   have a high degree of confidence that the predicted shape is accurate, and
   the value in that is already huge. But mistakes will be made based on this
   assumption.
   - This tool can be weaponized to create new and even highly-targeted
   poisons. It's not hard to imagine developing a poison that was only toxic
   for people of a certain race and then delivering it via virus. Who has
   access to DeepMind?
   - Are we comfortable with a corporation controlling something so
   powerful and with potential global security issues? This question will only
   get increasingly more relevant as new advances in AI are made. Can the
   world ever hope to regulate something so simultaneously powerful and
   cutting edge?

Terren


On Sun, Jul 25, 2021 at 8:56 AM John Clark  wrote:

> In my opinion this is the most impressive thing that Artificial Intelligence
> has done to date:
>
> From The New York Times:
>
> A.I. Predicts the Shapes of Molecules to Come
>
> DeepMind has given 3-D structure to 350,000 proteins, including every one
> made by humans, promising a boon for medicine and drug design.
>
>
> https://www.nytimes.com/2021/07/22/technology/deepmind-ai-proteins-folding.html?smid=em-share
>
> John K Clark
>
>
>
>
>



Re: A minimally conscious program

2021-04-30 Thread Terren Suydam
On Fri, Apr 30, 2021 at 11:09 AM John Clark  wrote:

> On Fri, Apr 30, 2021 at 9:53 AM Terren Suydam 
> wrote:
>
> *>>> All you've succeeded in doing is showing your preference for a
>>>> particular theory *
>>>>
>>>
>>> >> Correct. If idea X can explain something better than idea Y then I
>>> prefer idea X.
>>>
>>
>> *> What intention did you have that caused you to change "... a
>> particular theory of consciousness" to "a particular theory"?  *
>>
>
> My intention was the same as it always is when I trim verbiage in
> quotations, to delete the inessential; and if idea X can explain
> something better than idea Y then I prefer idea X regardless of what the
> topic is about.
>
>
> *> You are one of the least generous people I've ever argued with.*
>>
>
> If you make a good point I will admit it without hesitation, so now all
> you have to do is make one.
>
> * > You intentionally obfuscate, attack straw men, selectively clip
>> responses, don't address challenging points, don't budge an inch** and*
>> [blah blah]
>>
>
> So the only rebuttal you have to my logical arguments is a paragraph of
> personal insults.
>
> *> just generally take a disrespectful tone.*
>
>
> From this point onward I solemnly swear to give you all the respect you
> deserve.
>

I have arguments against your arguments, and anyone can see that. But it
doesn't go anywhere because you often remove my rebuttals from your
response and/or misrepresent or obfuscate my position - also evident in
this thread, as anyone can also see. So these aren't personal insults,
they're just observations anyone can verify. If it feels insulting, maybe
don't do those things.

I don't harbor any illusions that pointing this out will make any
difference to you. I'm just explaining why I'm backing out, and it isn't
because I've run out of things to say.

Terren


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>



Re: A minimally conscious program

2021-04-30 Thread Terren Suydam
On Fri, Apr 30, 2021 at 5:24 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 3:10 PM Terren Suydam 
> wrote:
>
> >>>> I proposed a question, "How is it possible that evolution managed to
>>>>> produce consciousness?" and I gave the only answer to that question I 
>>>>> could
>>>>> think of. And 3 times I've asked you if you can think of another answer.
>>>>> And three times I received nothing back but evasion. I now asked the same
>>>>> question for a fourth time, given that evolution can't select for what it
>>>>> can't see and natural selection can see intelligent behavior but it can't
>>>>> see consciousness, can you give me an explanation different from my own o
>>>>> n how evolution managed to produce a conscious being such as yourself?
>>>>>
>>>>
>>>> *>>>No, I can't*.
>>>>
>>>
>>> >>So I can explain something that you cannot. So which of our ideas are
>>> superior?
>>>
>>
>> *> All you've succeeded in doing is showing your preference for a
>> particular theory *
>>
>
> Correct. If idea X can explain something better than idea Y then I prefer idea
> X.
>

What intention did you have that caused you to change "... a particular
theory of consciousness" to "a particular theory"?  You clearly had a
purpose.


>
> >> If there is no link between consciousness and intelligence then there
>>> is absolutely positively no way Darwinian Evolution could have produced
>>> consciousness. But I don't think Darwin was wrong, I think you are.
>>>
>>
>> *> I'm neither claiming that evolution produced consciousness or that
>> Darwin was wrong.*
>>
>
> You're going to have to clarify that remark, it can't possibly be as nuts
> as it seems to be.
>

It is tiresome arguing with you. You are one of the least generous people
I've ever argued with. You intentionally obfuscate, attack straw men,
selectively clip responses, don't address challenging points, don't budge
an inch, and just generally take a disrespectful tone. And not just with
me, here, but with others, no matter the topic. I hope for your sake that's
not how you present in real life. It's not all bad, I appreciate having to
clarify my thoughts, and normally I love a good debate but I'm just being
masochistic if I continue at this point.

Terren

>
>
> >> I'm not talking about infinite precision, when it comes to qualia
>>> there is no assurance that we even approximately agree on meanings.
>>>
>>
>>
>> *> If that were true, language would be useless.*
>>
>
> Nonsense. If somebody says "pick up that red object" we both know what is
> expected of us even though we may have very very different mental
> conceptions of the qualia "red" because we both agree that the dictionary
> says red is the color formed in the mind when light of a wavelength of 700
> nanometers enters the eye, and that object is reflecting light that is
> doing precisely that to both of us.
>
>
>> >> When they say "that looks red" the red qualia they refer to may be
>>> your green qualia, and your green qualia could be their red qualia, but
>>> both of you still use the English word "red" to describe the qualia color
>>> of blood and the English word "green" to describe the qualia color of a
>>> leaf.
>>>
>>
>> *> I don't care about that. What matters is that you know you are seeing
>> red and I know I am seeing red.*
>>
>
> In other words you care more about behavior than consciousness because the
> use of the word "red" is consistent with both of us, as is our behavior,
> regardless of what our subjective impression of "red" is. So I guess
> you're starting to agree with me.
> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 2:04 PM John Clark  wrote:

>
> On Thu, Apr 29, 2021 at 12:24 PM Terren Suydam 
> wrote:
>
> >> I proposed a question, "How is it possible that evolution managed to
>>> produce consciousness?" and I gave the only answer to that question I could
>>> think of. And 3 times I've asked you if you can think of another answer.
>>> And three times I received nothing back but evasion. I now asked the same
>>> question for a fourth time, given that evolution can't select for what it
>>> can't see and natural selection can see intelligent behavior but it can't
>>> see consciousness, can you give me an explanation different from my own o
>>> n how evolution managed to produce a conscious being such as yourself?
>>>
>>
>> *>No, I can't*.
>>
>
> So I can explain something that you cannot. So which of our ideas are
> superior?
>

All you've succeeded in doing is showing your preference for a particular
theory of consciousness. It doesn't go very far, but you're pretty clear
that you're not interested in anything beyond that. But for those of us who
are interested in, say, an account of the difference between dreaming and
lucid dreaming, it's inadequate.


>
>
>> * > If you're saying evolution didn't select for consciousness, it
>> selected for intelligence, I agree with that. But so what?*
>>
>
> So what?!! If evolution selects for intelligence and you can't have
> intelligence without data processing and consciousness is the way data
> feels when it is being processed then it's no great mystery as to how
> evolution managed to produce consciousness by way of natural selection.
>

For what you, John Clark, require out of a theory of consciousness, you've
got one that works for you. Thumbs up. For me and others who enjoy the
mystery of it, it's not enough. You're entitled to think going further is a
waste of time. But after you've said that a hundred times, maybe we all get
the point and if you don't have anything new to contribute, it's time to
gracefully bow out.


>
> >>> OK, fine, let's say intelligence implies consciousness,
>>>>
>>>
>>> >> If you grant me that then what are we arguing about?
>>>
>>
>> *> Over whether there are facts about consciousness, without having to
>> link it to intelligence.*
>>
>
> If there is no link between consciousness and intelligence then there is
> absolutely positively no way Darwinian Evolution could have produced
> consciousness. But I don't think Darwin was wrong, I think you are.
>

I'm neither claiming that evolution produced consciousness or that Darwin
was wrong.


>
>
>> >> Do we really agree on all those terms? How can we know words that
>>> refer to qualia mean the same thing to both of us? There is no objective
>>> test for it, if there was then qualia wouldn't be subjective, it would be
>>> objective.
>>>
>>
>> *> We don't need infinite precision to uncover useful facts. *
>>
>
> I'm not talking about infinite precision, when it comes to qualia there
> is no assurance that we even approximately agree on meanings.
>

If that were true, language would be useless.


>
> > If someone says "that hurts", or "that looks red", we know what they
>> mean.
>>
>
> Do you? When they say "that looks red" the red qualia they refer to may
> be your green qualia, and your green qualia could be their red qualia, but
> both of you still use the English word "red" to describe the qualia color
> of blood and the English word "green" to describe the qualia color of a
> leaf.
>

I don't care about that. What matters is that you know you are seeing red
and I know I am seeing red. There's just no point in comparing private
experiences, which is something I know we agree on. But that's not all
there is to a theory of consciousness.


>
>
>> * > We take it as an assumption, and we make it explicit, that when
>> someone says "I see red" they are having the same kind of, or similar
>> enough,*
>>
>
> That is one hell of an assumption! If you're willing to do that why not be
> done with it and just take it as an assumption that your consciousness
> theory, whatever it may be, is correct?
>

Is it? It's what we assume in every conversation we have.

Terren


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>

Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 1:37 PM John Clark  wrote:

>
>
> On Thu, Apr 29, 2021 at 11:47 AM Terren Suydam 
> wrote:
>
>
> Finding a theory is not a problem, theories are a dime a dozen
>>> consciousness theories doubly so. But how could you ever figure out if
>>> your consciousness theory was correct?
>>>
>>
>> The same way we figure out any theory is correct.
>>
>
> In science we judge a theory by how well it can predict how something will
> behave, but you are not interested in behavior you're interested in
> consciousness, so I repeat how do you determine if  consciousness theory 
> #6,948,603,924
> is correct?
>
>
> >>If you're talking about observable characteristics then yes, but then
>>> you're just talking about behavior not consciousness.
>>>
>>
>>
>> *>Sure, but we might be talking about the behavior of neurons, or their
>> equivalent in an AI.*
>>
>
> The behavior of neurons is not consciousness.
>
>
> > *All of our disagreements come down to whether there are facts about
>> consciousness. You don't think there are,*
>>
>
> Not true, I know one thing from direct experience and that outranks even
> the scientific method, I know that I am conscious.
>

I have a limit of how many times I will go around this circle. Let's just
focus on whether we can make statements of fact about consciousness, which
is what the other email thread does.


> John K ClarkSee what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>.
>



Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 10:38 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 9:48 AM Terren Suydam 
> wrote:
>
>
>> *>I think it's possible there was consciousness before there was
>> intelligence,*
>>
>
> I very much doubt it, but of course nobody will ever be able to prove or
> disprove it so the proposition fits in very nicely with all existing
> consciousness literature.
>

The point was that it's not necessarily true that consciousness is the
inevitable byproduct of intelligence.


>
>
>> *> you're implicitly working with a theory of consciousness. Then, you're
>> demanding that I use your theory of consciousness when you insist that I
>> answer questions about consciousness through the framing of evolution.*
>>
>
> I proposed a question, "How is it possible that evolution managed to
> produce consciousness?" and I gave the only answer to that question I could
> think of. And 3 times I've asked you if you can think of another answer.
> And three times I received nothing back but evasion. I now asked the same
> question for a fourth time, given that evolution can't select for what it
> can't see and natural selection can see intelligent behavior but it can't
> see consciousness, can you give me an explanation different from my own on
> how evolution managed to produce a conscious being such as yourself?
>

No, I can't. If you're saying evolution didn't select for consciousness, it
selected for intelligence, I agree with that. But so what?


>
>
>> *> >> do you agree that testimony of experience constitutes facts about
>>>> consciousness?*
>>>>
>>>
>>> >> Only if I first assume that intelligence implies consciousness,
>>> otherwise I'd have no way of knowing if the being giving the testimony
>>> about consciousness was itself conscious. And only if I am convinced
>>> that the being giving the testimony was as honest as he can be. And
>>> only if I feel confident we agree about the meeting of certain words, like
>>> "green" and "red" and "hot" and "cold" and you guessed it "consciousness".
>>>
>>
>> > OK, fine, let's say intelligence implies consciousness,
>>
>
> If you grant me that then what are we arguing about?
>

Over whether there are facts about consciousness, without having to link it
to intelligence.


>
> *>the account given was honest (as in, nobody witnessing the account would
>> have a credible reason to doubt it),*
>>
>
> The most successful lies are those in which the reason for the lying is
> not immediately obvious.
>

There's uncertainty with the behavior of single subatomic particles, but
when we observe the aggregate behavior of large numbers of them, we call
those statistical observations *facts*, and those observations are
repeatable. There's a value of N for which studying N humans in a
consciousness experiment puts the probability that they're all lying below
a certain threshold.
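
To put toy numbers on that (a sketch only; the per-subject lying probability
and the threshold are illustrative assumptions, not data): if each subject
lies independently with probability p, the chance that all N are lying is p^N,
so the N I mean is the smallest one with p^N below the chosen threshold.

    import math

    def min_subjects(p_lie: float, threshold: float) -> int:
        # Smallest N for which p_lie**N, the probability that all N
        # independently-lying subjects are lying, drops below threshold.
        return math.ceil(math.log(threshold) / math.log(p_lie))

    # Even granting a pessimistic 50% chance that any one subject is lying,
    # twenty subjects push "they're all lying" below one in a million:
    print(min_subjects(0.5, 1e-6))  # -> 20, since 0.5**20 is about 9.5e-7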


>
>
>> * > and we can agree on all those terms.*
>>
>
> Do we really agree on all those terms? How can we know words that refer
> to qualia mean the same thing to both of us? There is no objective test for
> it, if there was then qualia wouldn't be subjective, it would be
> objective.
>

We don't need infinite precision to uncover useful facts. If someone says
"that hurts", or "that looks red", we know what they mean. We take it as an
assumption, and we make it explicit, that when someone says "I see red"
they are having the same kind of, or similar enough, experience as someone
else who says "I see red".

There's no question that the type of evidence you get from first-person
reports is vulnerable to deception, biases, and uncertainty around
referents. But we live with this in everyday life. It's not unreasonable
to systematize first-person reports and include that data as evidence for
theorizing, as long as those vulnerabilities are acknowledged. Like I've
said from the beginning, it may be the case that we'll never arrive at a
theory of consciousness that emerges as a clear winner. But I disagree with
you that it's not worth trying to find one, or that it's impossible to make
progress.

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>.

Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 10:08 AM John Clark  wrote:

> On Thu, Apr 29, 2021 at 9:34 AM Terren Suydam 
> wrote:
>
> *> A theory would give you a way to predict what kinds of beings are
>> capable of feeling pain*
>>
>
> Finding a theory is not a problem; theories are a dime a dozen, and
> consciousness theories doubly so. But how could you ever figure out if
> your consciousness theory was correct?
>

The same way we figure out whether any theory is correct: does it have
explanatory power, does it make falsifiable predictions? We're still arguing over
whether there's such a thing as a fact about consciousness, but if we can
imagine a world where you grant that there are, that's the world in which
you can test theories of consciousness.


>
>  > we'd say "given theory X,
>>
>
> And if the given X which we take as being true is "Hogwarts exists" then we
> must logically conclude we could find Harry Potter at that magical school
> of witchcraft and wizardry.
>
> > *we know that if we create an AI with these characteristics,*
>>
>
> If you're talking about observable characteristics then yes, but then
> you're just talking about behavior not consciousness.
>

Sure, but we might be talking about the behavior of neurons, or their
equivalent in an AI.


>
> *> a theory of consciousness that explains how qualia come to be within a
>> system,*
>>
>
> Explains? Just what sort of theory would satisfy you and make you say the
> problem of consciousness has been solved? If I said the chemical Rednosium
> Oxide produced qualia, would all your questions be answered, or would you be
> curious to know how this chemical managed to do that?
>

All of our disagreements come down to whether there are facts about
consciousness. You don't think there are, and that's all the question above
is saying.


>
>
>> > *you could make claims about their experience that go beyond observing
>> behavior.*
>>
>
> Claims are even easier to come by than theories are, but true claims
> not so much.
>
> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>.


Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 5:13 AM John Clark  wrote:

> On Wed, Apr 28, 2021 at 6:18 PM Terren Suydam 
> wrote:
>
> >> If you believe in Darwinian evolution and if you believe you are
>>> conscious then given that evolution can't select for what it can't see
>>> and natural selection can see intelligent behavior but it can't see
>>> consciousness, can you give me an explanation of how evolution managed to
>>> produce a conscious being such as yourself if consciousness is not the
>>> inevitable byproduct of intelligence?
>>>
>>
>> *> It's not an inevitable byproduct of intelligence if consciousness is
>> an epiphenomenon. *
>>
>
> That remark makes no sense, and you never answered my question. If
> consciousness is an epiphenomenon, and from Evolution's point of view it
> certainly is, then the only way natural selection could've produced
> consciousness is if it's the inevitable byproduct of something else that is
> not an epiphenomenon, something like intelligence. And you know for a fact
> that Evolution has produced consciousness at least once and probably many
> billions of times.
>

I mostly agree, my only hang up is with the word 'inevitable'. I think it's
possible there was consciousness before there was intelligence, depending
on how you define intelligence.


>
>
>> *> As you like to say, consciousness may just be how data feels as it's
>> being processed. If so, that doesn't imply anything about intelligence per
>> se, beyond the minimum intelligence required to process data at all.*
>>
>
> For the purposes of this argument it's irrelevant if any sort of data
> processing can produce consciousness or if only the type that leads to
> intelligence can because evolution doesn't select for data processing it
> selects for intelligence, but you can't have intelligence without data
> processing.
>

You keep coming back to intelligence in a conversation about consciousness.
That's fine, but when you do you're implicitly working with a theory of
consciousness. Then, you're demanding that I use your theory of
consciousness when you insist that I answer questions about consciousness
through the framing of evolution. It's a bit of a contradiction to be using
a theory of consciousness to point out how pointless theories of
consciousness are.


>
> *> do you agree that testimony of experience constitutes facts about
>> consciousness?*
>>
>
> Only if I first assume that intelligence implies consciousness, otherwise
> I'd have no way of knowing if the being giving the testimony about
> consciousness was itself conscious. And only if I am convinced that the
> being giving the testimony was as honest as he can be. And only if I feel
> confident we agree about the meaning of certain words, like "green" and
> "red" and "hot" and "cold" and you guessed it "consciousness".
>

OK, fine, let's say intelligence implies consciousness, the account given
was honest (as in, nobody witnessing the account would have a credible
reason to doubt it), and we can agree on all those terms.

Then do you agree that said account constitutes facts about consciousness?

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>.


Re: A minimally conscious program

2021-04-29 Thread Terren Suydam
On Thu, Apr 29, 2021 at 1:57 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 9:42 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 4/28/2021 4:40 PM, Terren Suydam wrote:
>>
>>
>> I agree with everything you said there, but all you're saying is that
>> intersubjective reality must be consistent to make sense of other peoples'
>> utterances. OK, but if it weren't, we wouldn't be here talking about
>> anything. None of this would be possible.
>>
>>
>> Which is why it's a fool's errand to say we need to explain qualia.  If
>> we can make an AI that responds to the world the way we do, that's all there is
>> to saying it has the same qualia.
>>
>
> I don't think either of those claims follows. We need to explain suffering
> if we hope to make sense of how to treat AIs. If it were only about redness
> I'd agree. But creating entities whose existence is akin to being in hell
> is immoral. And we should know if we're doing that.
>
>
> John McCarthy wrote a paper in the '50s warning about the possibility of
> accidentally making a conscious AI and unknowingly treating it
> unethically.  But I don't see the difference from any other qualia, we can
> only judge by behavior.  In fact this whole thread started by JKC
> considering AI pain, which he defined in terms of behavior.
>
>
A theory would give you a way to predict what kinds of beings are capable
of feeling pain. We wouldn't have to wait to observe their behavior, we'd
say "given theory X, we know that if we create an AI with these
characteristics, it will be the kind of entity that is capable of
suffering".


>
> To your second point, I think you're too quick to make an equivalence
> between an AI's responses and their subjective experience. You sound like
> John Clark - the only thing that matters is behavior.
>
>
> Behavior includes reports. What else would you suggest we go on?
>

Again, in a theory of consciousness that explains how qualia come to be
within a system, you could make claims about their experience that go
beyond observing behavior. I know John Clark's head just exploded, but it's
the point of having a theory of consciousness.

>
>
> Brent
>


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 8:15 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 4:40 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 4/28/2021 3:17 PM, Terren Suydam wrote:
>>
>>
>>
>> On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:
>>
>>> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
>>> wrote:
>>>
>>> *>>> testimony of experience constitutes facts about consciousness.*
>>>>>
>>>>>
>>>>> >> Sure I agree, provided you first accept that consciousness is the
>>>>> inevitable byproduct of intelligence
>>>>>
>>>>
>>>> *> I hope the irony is not lost on anyone that you're insisting on your
>>>> theory of consciousness to make your case that theories of consciousness
>>>> are a waste of time.*
>>>>
>>>
>>> If you believe in Darwinian evolution and if you believe you are
>>> conscious then given that evolution can't select for what it can't see
>>> and natural selection can see intelligent behavior but it can't see
>>> consciousness, can you give me an explanation of how evolution managed to
>>> produce a conscious being such as yourself if consciousness is not the
>>> inevitable byproduct of intelligence?
>>>
>>
>> It's not an inevitable byproduct of intelligence if consciousness is an
>> epiphenomenon. As you like to say, consciousness may just be how data feels
>> as it's being processed. If so, that doesn't imply anything about
>> intelligence per se, beyond the minimum intelligence required to process
>> data at all... the simplest example being a thermostat.
>>
>> That said, do you agree that testimony of experience constitutes facts
>> about consciousness?
>>
>>
>> It wouldn't if it were just random, like plucking passages out of
>> novels.  We only take it as evidence of consciousness because there are
>> consistent patterns of correlation with what each of us experiences.  If
>> every time you pointed to a flower you said "red", regardless of the
>> flower's color, a child would learn that "red" meant a flower and his
>> reporting when he saw red wouldn't be testimony to the experience of  red.
>> So the usefulness of reports already depends on physical patterns in the
>> world.  Something I've been telling Bruno...physics is necessary to
>> consciousness.
>>
>> Brent
>>
>
> I agree with everything you said there, but all you're saying is that
> intersubjective reality must be consistent to make sense of other peoples'
> utterances. OK, but if it weren't, we wouldn't be here talking about
> anything. None of this would be possible.
>
>
> Which is why it's a fool's errand to say we need to explain qualia.  If we
> can make an AI that responds to the world the way we do, that's all there is to
> saying it has the same qualia.
>

I don't think either of those claims follows. We need to explain suffering
if we hope to make sense of how to treat AIs. If it were only about redness
I'd agree. But creating entities whose existence is akin to being in hell
is immoral. And we should know if we're doing that.

To your second point, I think you're too quick to make an equivalence
between an AI's responses and their subjective experience. You sound like
John Clark - the only thing that matters is behavior.

Terren


>
> Brent
>


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 7:25 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 3:17 PM, Terren Suydam wrote:
>
>
>
> On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:
>
>> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
>> wrote:
>>
>> *>>> testimony of experience constitutes facts about consciousness.*
>>>>
>>>>
>>>> >> Sure I agree, provided you first accept that consciousness is the
>>>> inevitable byproduct of intelligence
>>>>
>>>
>>> *> I hope the irony is not lost on anyone that you're insisting on your
>>> theory of consciousness to make your case that theories of consciousness
>>> are a waste of time.*
>>>
>>
>> If you believe in Darwinian evolution and if you believe you are conscious
>> then given that evolution can't select for what it can't see and natural
>> selection can see intelligent behavior but it can't see consciousness, can
>> you give me an explanation of how evolution managed to produce a conscious
>> being such as yourself if consciousness is not the inevitable byproduct of
>> intelligence?
>>
>
> It's not an inevitable byproduct of intelligence if consciousness is an
> epiphenomenon. As you like to say, consciousness may just be how data feels
> as it's being processed. If so, that doesn't imply anything about
> intelligence per se, beyond the minimum intelligence required to process
> data at all... the simplest example being a thermostat.
>
> That said, do you agree that testimony of experience constitutes facts
> about consciousness?
>
>
> It wouldn't if it were just random, like plucking passages out of novels.
> We only take it as evidence of consciousness because there are consistent
> patterns of correlation with what each of us experiences.  If every time
> you pointed to a flower you said "red", regardless of the flower's color, a
> child would learn that "red" meant a flower and his reporting when he saw
> red wouldn't be testimony to the experience of  red.  So the usefulness of
> reports already depends on physical patterns in the world.  Something I've
> been telling Bruno...physics is necessary to consciousness.
>
> Brent
>

I agree with everything you said there, but all you're saying is that
intersubjective reality must be consistent to make sense of other peoples'
utterances. OK, but if it weren't, we wouldn't be here talking about
anything. None of this would be possible.

Terren



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 5:51 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 4:48 PM Terren Suydam 
> wrote:
>
> *>>> testimony of experience constitutes facts about consciousness.*
>>>
>>>
>>> >> Sure I agree, provided you first accept that consciousness is the
>>> inevitable byproduct of intelligence
>>>
>>
>> *> I hope the irony is not lost on anyone that you're insisting on your
>> theory of consciousness to make your case that theories of consciousness
>> are a waste of time.*
>>
>
> If you believe in Darwinian evolution and if you believe you are conscious
> then given that evolution can't select for what it can't see and natural
> selection can see intelligent behavior but it can't see consciousness, can
> you give me an explanation of how evolution managed to produce a conscious
> being such as yourself if consciousness is not the inevitable byproduct of
> intelligence?
>

It's not an inevitable byproduct of intelligence if consciousness is an
epiphenomenon. As you like to say, consciousness may just be how data feels
as it's being processed. If so, that doesn't imply anything about
intelligence per se, beyond the minimum intelligence required to process
data at all... the simplest example being a thermostat.
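
For concreteness, that floor-level case can be sketched in a few lines
(Python; the setpoint and hysteresis numbers are illustrative, not anyone's
claim):

    class Thermostat:
        # About the simplest "data processor" in this sense: one input,
        # one comparison, one bit of internal state.
        def __init__(self, setpoint: float, band: float = 0.5):
            self.setpoint = setpoint
            self.band = band
            self.heating = False  # its entire inner state

        def step(self, temperature: float) -> bool:
            if temperature < self.setpoint - self.band:
                self.heating = True
            elif temperature > self.setpoint + self.band:
                self.heating = False
            return self.heating

Whatever one thinks the thermostat feels, this is the minimum "data being
processed" that the epiphenomenal reading would have to cover.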

That said, do you agree that testimony of experience constitutes facts
about consciousness?

Terren


>
> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 4:08 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 3:50 PM Terren Suydam 
> wrote:
>
> *> testimony of experience constitutes facts about consciousness.*
>
>
> Sure I agree, provided you first accept that consciousness is the
> inevitable byproduct of intelligence
>

I hope the irony is not lost on anyone that you're insisting on your theory
of consciousness to make your case that theories of consciousness are a
waste of time.

I don't think it's necessary to accept that in order to make use of
testimony by a thousand different people in an experiment who all say:
"whatever you're doing, it's weird, I am smelling gasoline".


>
>
>> >> I am far more interested in understanding the mental activity of a
>>> person when he's awake than when he's asleep.
>>>
>>
>> *> We're talking about consciousness, not merely "mental activity". *
>>
>
> And as I mentioned in a previous post, if consciousness is NOT the
> inevitable byproduct of intelligence then when we're talking about
> consciousness we don't even know if we're talking about the same thing.
>

If you want to get pedantic you can say we don't know if we're talking
about the same thing even if we do accept consciousness as the inevitable
byproduct of intelligence. So that heuristic isn't helpful. Again, if
someone claims to be in pain, then that's a fact we can use, even if the
character of that pain isn't knowable publicly. Ditto for seeing red, or
any other claim about qualia.

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 3:15 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 2:39 PM Terren Suydam 
> wrote:
>
> >> Forget BF Skinner, this is more general than consciousness or
>>> behavior. If you want to explain Y at the most fundamental level from first
>>> principles you can't start with "X produces Y" and then use X as part of
>>> your explanation of Y.
>>>
>>
>> *> OK, I want to explain consciousness from first principles, so Y =
>> consciousness. What is X?  *
>>
>
> Something that shows up on a brain scan machine according to you.
>

You're obfuscating. I was pretty clear that I was talking about peoples'
reports of their own subjective experience, but you clipped that out and
made it seem otherwise. Maybe you did that because your whole edifice
crumbles if you admit that testimony of experience constitutes facts about
consciousness.


>
> *> I'm interested in a theory of consciousness that can tell me, among
>> other things, how it is that we have conscious experiences when we dream.
>> Don't you wonder about that?*
>>
>
> I am far more interested in understanding the mental activity of a person when
> he's awake than when he's asleep.
>
>
We're talking about consciousness, not merely "mental activity".
Regardless, you have every right to be incurious about matters like these.
The mystery is why you involve yourself in conversations you have no
interest in.


> *> I'm very curious about how intelligence works too. *
>>
>
> Glad to hear it, but there's 10 times or 20 times more verbiage about
> consciousness than intelligence on this list.
>

Nobody's forcing you to read it.

Terren


>
> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 3:02 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/28/2021 11:39 AM, Terren Suydam wrote:
> >
> > I'm interested in a theory of consciousness that can tell me, among
> > other things, how it is that we have conscious experiences when we
> > dream. Don't you wonder about that?
>
> No especially.  It's certainly consistent with consciousness being a
> brain process.  And it's consistent with Jeff Hawkins' theory that the
> brain is continually trying to predict sensation and it is the predictions
> that are endorsed by the most neurons that constitute conscious
> thoughts.  In sleep, with little or no sensory input, the predictions
> wander, depending mainly on memory for input.
>
> Brent
>

What I read in that is that you don't wonder because you've got a workable
theory. This was intended for John Clark, who thinks theories of
consciousness are a waste of time.
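
Still, the description you quote lends itself to a toy sketch. This is only a
caricature of "predictions wander without sensory input" (the noise levels and
learning rate are made up; it is not Hawkins' actual model):

    import random

    def predictive_loop(steps: int, awake: bool, lr: float = 0.3):
        # While awake, predictions are pulled toward a stable sensory signal;
        # "asleep", the only input is the prediction itself (memory), so the
        # loop feeds on its own output and drifts in a random walk.
        prediction, trace = 0.0, []
        for _ in range(steps):
            if awake:
                sensation = 1.0 + random.gauss(0, 0.1)
            else:
                sensation = prediction + random.gauss(0, 0.1)
            prediction += lr * (sensation - prediction)
            trace.append(prediction)
        return trace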

Terren



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 12:06 PM John Clark  wrote:

> On Wed, Apr 28, 2021 at 11:17 AM Terren Suydam 
> wrote:
>
> >> We should always pay attention to all relevant *BEHAVIOR**,* including
>>> *BEHAVIOR* such as noises produced by the mouths of other people.
>>>
>>
>> *> Got it. Accounts of subjective experience are not the salient facts in
>> these experiments, it's the way they move their lips and tongue and pass
>> air through their vocal cords that matters. The rest of the world has moved
>> on from BF Skinner, but not you, apparently. *
>>
>
> Forget BF Skinner, this is more general than consciousness or behavior.
> If you want to explain Y at the most fundamental level from first
> principles you can't start with "X produces Y" and then use X as part of
> your explanation of Y.
>

OK, I want to explain consciousness from first principles, so Y =
consciousness. What is X?  Testimony about subjective experience?  Nobody
is claiming that testimony about subjective experience produces
consciousness (X produces Y).


>
>
>> >>> *Why doesn't that represent progress?  *
>>>>
>>>
>>> >> It may represent progress but not progress towards understanding
>>> consciousness.
>>>
>>
>> *> Why not?  Understanding how the brain maps or encodes different
>> subjective experiences *
>>
>
> Because understanding how the brain maps and encodes information will
> tell you lots about behavior and intelligence but absolutely nothing about
> consciousness.
>
> *> If we can explain why, for example, you see stars if you bash the back
>> of your head,*
>>
>
> It might be able to explain why I say "I see green stars" but that's not
> what you're interested in, you want to know why I subjectively experience
> the green qualia and if it's the same as your green qualia, but no theory
> can even prove to you that I see any qualia at all.
>

I think the question of whether my experience of green is the same as your
experience of green reflects confusion on behalf of the questioner. I'm not
interested in that.

I'm interested in a theory of consciousness that can tell me, among other
things, how it is that we have conscious experiences when we dream. Don't
you wonder about that?


> *> You make it sound as though there's nothing to be gleaned from
>> systematic investigation,*
>>
>
> It's impossible to systematically investigate everything, therefore a
> scientist needs to use judgment to determine what is worth his time and
> what is not. Every minute you spend on consciousness research is a minute
> you could've spent on researching something far far more productive, which
> would be pretty much anything. Consciousness research has made ZERO
> progress over the last thousand years and I have every reason to believe it
> will make twice as much during the next thousand.
>

You refuse to acknowledge that one can produce evidence of consciousness,
namely in the form of subjects testifying to their experience. It doesn't
matter to you, apparently, if someone reports being in extreme pain.


>
> *> the thing I understand the least is how incurious you are about it.*
>
>
> The thing I find puzzling is how incurious you and virtually all internet
> consciousness mavens are about how intelligence works. Figuring out
> intelligence is a solvable problem, but figuring out consciousness is not,
> probably because it's just a brute fact that consciousness is the way data
> feels when it is being processed. If so then there's nothing more that can
> be said about consciousness, however I am well aware that after all is said
> and done more is always said and done.
>
>
I'm very curious about how intelligence works too. You're making
assumptions about me that don't bear out... perhaps that's true of your
thinking in general. And I never claimed consciousness is a solvable
problem. But there are better theories of consciousness than others,
because there are facts about consciousness that beg explanation (e.g.
dreaming, lucid dreaming), and some theories have better explanations than
others. But like any other domain, if we can come up with a relatively
simple theory that explains a relatively large set of phenomena, then
that's a good contender. But you know this, you've just got some kind of
odd hang up about consciousness.

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>

Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
On Wed, Apr 28, 2021 at 10:15 AM John Clark  wrote:

> On Wed, Apr 28, 2021 at 8:32 AM Terren Suydam 
> wrote:
>
> *> John - do you have any response?*
>>
>
> If you insist.
>
> >> It's not hard to make progress in consciousness research, it's
>>>> impossible.
>>>>
>>>
>>> *So we should ignore experiments where you stimulate the brain and the
>>> subject reports experiencing some kind of qualia,*
>>>
>>
> We should always pay attention to all relevant *BEHAVIOR**,* including
> *BEHAVIOR* such as noises produced by the mouths of other people.
>

Got it. Accounts of subjective experience are not the salient facts in
these experiments, it's the way they move their lips and tongue and pass
air through their vocal cords that matters. The rest of the world has moved
on from BF Skinner, but not you, apparently.


>
>
>> >*Why doesn't that represent progress?  *
>>>
>>
> It may represent progress but not progress towards understanding
> consciousness.
>

Why not?  Understanding how the brain maps or encodes different subjective
experiences surely counts as progress towards understanding consciousness.
If we can explain why, for example, you see stars if you bash the back of
your head, but not the front, then that would count as progress towards
understanding consciousness.


>
>  > *Is it because you don't trust people's reports?*
>
>
> Trust but verify. When you and I talk about consciousness I don't even
> know if we're talking about the same thing; perhaps by your meaning of the
> word I am not conscious, maybe I'm conscious by my meaning of the word but
> not by yours, maybe my consciousness is just a pale pitiful thing compared
> to the grand glorious awareness that you have and what you mean by  the
> word "consciousness".  Maybe comparing your consciousness to mine is like
> comparing a firefly to a supernova. Or maybe it's the other way around.
> Neither of us will ever know.
>

You make it sound as though there's nothing to be gleaned from systematic
investigation, and the thing I understand the least is how incurious you
are about it. I mean, to each their own, but trying to grasp how objective
systems (like brains) and consciousness interrelate is perhaps the most
fascinating thing I can think of. The mystery of it is incredible to behold
when you really get into it.

Terren



Re: A minimally conscious program

2021-04-28 Thread Terren Suydam
John - do you have any response?


On Tue, Apr 27, 2021 at 9:38 AM Terren Suydam 
wrote:

>
>
> On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:
>
>> On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam 
>> wrote:
>>
>> *> consciousness is harder to work with than intelligence, because it's
>>> harder to make progress.*
>>
>>
>> It's not hard to make progress in consciousness research, it's
>> impossible.
>>
>
> So we should ignore experiments where you stimulate the brain and the
> subject reports experiencing some kind of qualia, in a repeatable way. Why
> doesn't that represent progress?  Is it because you don't trust people's
> reports?
>
>
>>
>> *> Facts that might slay your theory are much harder to come by.*
>>
>>
>> Such facts are not hard to come by. They're impossible to come by. So
>> for a consciousness scientist being lazy works just as well as being
>> industrious, so consciousness research couldn't be any easier, just face a
>> wall, sit on your hands, and contemplate your navel.
>>
>
> There are fruitful lines of research happening. Research on patients
> undergoing meditation and psychedelic experiences while in an fMRI has
> led to some interesting facts. You seem to think progress can only mean
> being able to prove conclusively how consciousness works. Progress can mean
> deepening our understanding of the relationship between the brain and the
> mind.
>
> Terren
>
>
>> John K Clark    See what's on my new list at Extropolis
>> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-27 Thread Terren Suydam
On Tue, Apr 27, 2021 at 7:22 AM John Clark  wrote:

> On Tue, Apr 27, 2021 at 1:08 AM Terren Suydam 
> wrote:
>
> *> consciousness is harder to work with than intelligence, because it's
>> harder to make progress.*
>
>
> It's not hard to make progress in consciousness research, it's impossible.
>
>

So we should ignore experiments where you stimulate the brain and the
subject reports experiencing some kind of qualia, in a repeatable way. Why
doesn't that represent progress?  Is it because you don't trust people's
reports?


>
> *> Facts that might slay your theory are much harder to come by.*
>
>
> Such facts are not hard to come by. They're impossible to come by. So for
> a consciousness scientist being lazy works just as well as being
> industrious, so consciousness research couldn't be any easier, just face a
> wall, sit on your hands, and contemplate your navel.
>

There are fruitful lines of research happening. Research on patients
undergoing meditation and psychedelic experiences while in an fMRI has
led to some interesting facts. You seem to think progress can only mean
being able to prove conclusively how consciousness works. Progress can mean
deepening our understanding of the relationship between the brain and the
mind.

Terren


> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-27 Thread Terren Suydam
On Tue, Apr 27, 2021 at 2:27 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 4/26/2021 11:11 PM, Terren Suydam wrote:
>
>
> Sure - although it seems possible that there could be intelligences that
> are not conscious. We're pretty biased to think of intelligence as we have
> it - situated in a meat body, and driven by evolutionary programming in a
> social context. There may be forms of intelligence so alien we could never
> conceive of them, and there's no guarantee about consciousness.
>
>
> I don't see how an entity could be really intelligent without being able
> to consider its actions by a kind of internal simulation.
>

Neither do I, but it may be a failure of imagination. The book Blindsight
by Peter Watts explores this idea.


>
> Take corporations. A corporation is its own entity and it acts
> intelligently in the service of its own interests. They can certainly be
> said to "prospectively consider scenarios in which they are actors in which
> the scenario is informed by past experience". Is a corporation conscious?
>
>
> I think so.  And the Supreme Court agrees. :-)
>

How about cities? Countries? Religions?  Each of which can be said to
"prospectively consider scenarios..."

Terren


>
> Brent
>


Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
On Tue, Apr 27, 2021 at 1:27 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
> However, in a certain sense, intelligence is easier because it's
> constrained. Intelligence can be tested. It's certainly more practical,
> which makes intelligence easier to study as well. You're much more likely
> to be able to profit from advances in understanding of intelligence. In
> that sense, consciousness is harder to work with than intelligence, because
> it's harder to make progress. Facts that might slay your theory are much
> harder to come by.
>
>
> What I mean by it is that if you can engineer intelligence at a high level
> it will necessarily entail consciousness.  An entity cannot be human-level
> intelligent without being able to prospectively consider scenarios in which
> they are actors in which the scenario is informed by past experience...and
> I think that is what constitutes the core of consciousness.
>

Sure - although it seems possible that there could be intelligences that
are not conscious. We're pretty biased to think of intelligence as we have
it - situated in a meat body, and driven by evolutionary programming in a
social context. There may be forms of intelligence so alien we could never
conceive of them, and there's no guarantee about consciousness. Take
corporations. A corporation is its own entity and it acts intelligently in
the service of its own interests. They can certainly be said to
"prospectively consider scenarios in which they are actors in which the
scenario is informed by past experience". Is a corporation conscious?

Terren


>
> Brent
>
>



Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
On Mon, Apr 26, 2021 at 10:08 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> It certainly seems likely that any brain or AI that can perceive sensory
> events and form an inner narrative and memory of that is conscious in a
> sense even if they are unable to act.  This is commonly the situation
> during a dream.  One is aware of dreamt events but doesn't actually move in
> response to them.
>

> And I think JKC is wrong when he says "few if any believe other people
> are conscious all the time, only during those times that corresponds to the
> times they behave intelligently."  I generally assume people are conscious
> if their eyes are open and they respond to stimuli, even if they are doing
> something dumb.
>

Or rather, even if they're doing nothing at all. Someone meditating for
hours on end, or someone lying on a couch with eyeshades and headphones on
tripping on psilocybin, may be having extraordinary internal experiences
and display absolutely no outward behavior.


> But I agree with his general point that consciousness is easy and
> intelligence is hard.
>

It depends how you look at it. JC's point is that it's impossible to prove
much of anything about consciousness, so you can imagine many ways to
explain consciousness without ever suffering the pain of your theory being
slain by a fact.

However, in a certain sense, intelligence is easier because it's
constrained. Intelligence can be tested. It's certainly more practical,
which makes intelligence easier to study as well. You're much more likely
to be able to profit from advances in understanding of intelligence. In
that sense, consciousness is harder to work with than intelligence, because
it's harder to make progress. Facts that might slay your theory are much
harder to come by.


> I think human consciousness, having an inner narrative, is just an
> evolutionary trick the brain developed for learning and accessing learned
> information to inform decisions. Julian Jaynes wrote a book about how this
> may have come about, "The Origin of Consciousness in the Breakdown of the
> Bicameral Mind".  I don't know that he got it exactly right, but I think he
> was on to the right idea.
>

I agree!

Terren


>
> Brent
>
> On 4/26/2021 4:07 PM, Terren Suydam wrote:
>
> So do you have nothing to say about coma patients who've later woken up
> and said they were conscious?  Or people under general anaesthetic who
> later report being gruesomely aware of the surgery they were getting?
> Should we ignore those reports?  Or admit that consciousness is worth
> considering independently from its effects on outward behavior?
>
> On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:
>
>> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
>> wrote:
>>
>> > It's impossible to refute solipsism
>>>
>>
>> True, but it's equally impossible to refute the idea that everything
>> including rocks is conscious. And if both a theory and its exact opposite
>> can neither be proven nor disproven then neither speculation is of any
>> value in trying to figure out how the world works.
>>
>> * > It's true that the only thing we know for sure is our own
>>> consciousness,*
>>>
>> And I know that even I am not conscious all the time, and there is no
>> reason for me to believe other people can do better.
>>
>>
>>> * > but there's nothing about what I said that makes it impossible for
>>> there to be a reality outside of ourselves populated by other people. It
>>> just requires belief.*
>>>
>>
>> And few if any believe other people are conscious all the time, only
>> during those times that correspond to the times they behave
>> intelligently.
>>
>> John K Clark    See what's on my new list at Extropolis
>> <https://groups.google.com/g/extropolis>

Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
So do you have nothing to say about coma patients who've later woken up and
said they were conscious?  Or people under general anaesthetic who later
report being gruesomely aware of the surgery they were getting?  Should we
ignore those reports?  Or admit that consciousness is worth considering
independently from its effects on outward behavior?

On Mon, Apr 26, 2021 at 11:16 AM John Clark  wrote:

> On Mon, Apr 26, 2021 at 10:45 AM Terren Suydam 
> wrote:
>
> > It's impossible to refute solipsism
>>
>
> True, but it's equally impossible to refute the idea that everything
> including rocks is conscious. And if both a theory and its exact opposite
> can neither be proven nor disproven then neither speculation is of any
> value in trying to figure out how the world works.
>
> * > It's true that the only thing we know for sure is our own
>> consciousness,*
>>
> And I know that even I am not conscious all the time, and there is no
> reason for me to believe other people can do better.
>
>
>> * > but there's nothing about what I said that makes it impossible for
>> there to be a reality outside of ourselves populated by other people. It
>> just requires belief.*
>>
>
> And few if any believe other people are conscious all the time, only
> during those times that correspond to the times they behave intelligently.
>
> John K Clark    See what's on my new list at Extropolis
> <https://groups.google.com/g/extropolis>


Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
It's impossible to refute solipsism, but that's true regardless of your
metaphysics. It's true that the only thing we know *for sure* is our own
consciousness, but there's nothing about what I said that makes it
impossible for there to be a reality outside of ourselves populated by
other people. It just requires belief.


On Mon, Apr 26, 2021 at 10:39 AM Henrik Ohrstrom 
wrote:

> That would be quite solipsistic, wouldn't it?
> /henrik
>
>
> Den mån 26 apr. 2021 kl 14:31 skrev Terren Suydam  >:
>
>> Assuming the program has a state and that state changes in response to
>> its inputs, then it seems reasonable to say the program is conscious in
>> some elemental way. What is it conscious "of", though? I'd say it's not
>> conscious of anything outside of itself, in the same way we are not
>> conscious of anything outside of ourselves. We are only conscious of the
>> model of the world we build. You might then say it's conscious of its
>> internal representation, or its state.
>>
>> On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  wrote:
>>
>>> It is quite easy, I think, to define a program that "remembers" (stores
>>> and later retrieves) information.
>>>
>>> It is slightly harder, but not altogether difficult, to write a program
>>> that "learns" (alters its behavior based on prior inputs).
>>>
>>> What though, is required to write a program that "knows" (has awareness
>>> or access to information or knowledge)?
>>>
>>> Does, for instance, the following program "know" anything about the data
>>> it is processing?
>>>
>>> if (pixel.red > 128) {
>>> // knows pixel.red is greater than 128
>>> } else {
>>> // knows pixel.red <= 128
>>> }
>>>
>>> If not, what else is required for knowledge?
>>>
>>> Does the program behavior have to change based on the state of some
>>> information? For example:
>>>
>>> if (pixel.red > 128) {
>>> // knows pixel.red is greater than 128
>>> doX();
>>> } else {
>>> // knows pixel.red <= 128
>>> doY();
>>> }
>>>
>>> Or does the program have to possess some memory and enter a different
>>> state based on the state of the information it processed?
>>>
>>> if (pixel.red > 128) {
>>> // knows pixel.red is greater than 128
>>> enterStateX();
>>> } else {
>>> // knows pixel.red <= 128
>>> enterStateY();
>>> }
>>>
>>> Or is something else altogether needed to say the program knows?
>>>
>>> If a program can be said to "know" something then can we also say it is
>>> conscious of that thing?
>>>
>>> Jason
>>>


Re: A minimally conscious program

2021-04-26 Thread Terren Suydam
Assuming the program has a state and that state changes in response to its
inputs, it seems reasonable to say the program is conscious in some
elemental way. What is it conscious "of", though? I'd say it's not
conscious of anything outside of itself, in the same way we are not
conscious of anything outside of ourselves. We are only conscious of the
model of the world we build. You might then say it's conscious of its
internal representation, or its state.
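
To make that concrete, here is a minimal sketch in Python (entirely
illustrative: the class name, threshold, and model fields are all mine,
not anyone's proposal):

class MinimalAgent:
    def __init__(self):
        # The internal representation: the only thing the agent can be
        # "conscious of" on this reading.
        self.model = {"bright": False}

    def observe(self, red_value):
        # Inputs never carry the world itself into the agent; they only
        # update the model.
        self.model["bright"] = red_value > 128

    def report(self):
        # Anything the agent can say is read off the model, not the world.
        return "bright" if self.model["bright"] else "dark"

agent = MinimalAgent()
agent.observe(200)
print(agent.report())  # -> bright

Everything downstream of observe() depends only on self.model, which is the
sense in which the agent is conscious of its state and nothing else.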

On Sun, Apr 25, 2021 at 4:29 PM Jason Resch  wrote:

> It is quite easy, I think, to define a program that "remembers" (stores
> and later retrieves) information.
>
> It is slightly harder, but not altogether difficult, to write a program
> that "learns" (alters its behavior based on prior inputs).
>
> What though, is required to write a program that "knows" (has awareness or
> access to information or knowledge)?
>
> Does, for instance, the following program "know" anything about the data
> it is processing?
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> } else {
> // knows pixel.red <= 128
> }
>
> If not, what else is required for knowledge?
>
> Does the program behavior have to change based on the state of some
> information? For example:
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> doX();
> } else {
> // knows pixel.red <= 128
> doY();
> }
>
> Or does the program have to possess some memory and enter a different
> state based on the state of the information it processed?
>
> if (pixel.red > 128) then {
> // knows pixel.red is greater than 128
> enterStateX();
> } else {
> // knows pixel.red <= 128
> enterStateY();
> }
>
> Or is something else altogether needed to say the program knows?
>
> If a program can be said to "know" something then can we also say it is
> conscious of that thing?
>
> Jason
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgmPiCz5v4p91LAs0jN_2dCBocvnh4OO8sE7c-0JG%3DuwQ%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_kzDBO7wXrgXJ36bOJmFo6cNv8pxYzt6GujRcc92Vy%3DQ%40mail.gmail.com.


Re: Re[2]: Born's rule from almost nothing

2021-01-06 Thread Terren Suydam
This is how I see it as well. All possible worlds already exist in a
platonic sense, and one's experience represents a single path traversed
through the infinite multitude of possibilities. This connects nicely to
the universal dovetailer idea.
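
The branch counting in Quentin's schema below is easy to reproduce. A toy
enumeration in Python (purely illustrative, not a physical model): after n
binary experiments there are 2**n histories, and a single experience is one
path through them.

from itertools import product

n = 3
histories = list(product("LR", repeat=n))  # L/R: the two outcomes
print(len(histories))          # 8 worlds after 3 experiments
print("".join(histories[5]))   # one experienced path: RLR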

On Wed, Jan 6, 2021 at 8:19 AM Quentin Anciaux  wrote:

> I think there is no split, but continuous differentiation. So there is
> always an infinity of worlds. Or there is no world at all and only
> consciousness differentiation.
>
> Quentin
>
>> On Wed, Jan 6, 2021 at 14:17, scerir via Everything List <
>> everything-list@googlegroups.com> wrote:
>
>> Worlds, worlds. What are these worlds? When a pig observes a Young
>> interferometer does this pig create worlds? Does this pig split worlds? Or
>> not, because there is no full consciousness? And in Alpha Centauri, where
>> there are no pigs, no humans, no consciousness, no Young interferometers?
>> No Franson interferometers either ...
>>
>> --
>> Sent from Libero Mail for Android
>> Wednesday, 06 January 2021, 01:28PM +01:00 from Quentin Anciaux
>> allco...@gmail.com:
>>
>> Here is a schema:
>> [image: branching schema: 3 experiments, 8 worlds]
>>
>> After 3 experiments, you have *8* worlds... each with the memory of the
>> initial experiment, 4 of the 2nd version A and 4 of the 2nd version B...
>> etc
>>
>> Every *world* has a past which is linked directly with the previous
>> experiment and to the initial experiment... in each world there is an
>> ensemble of 3 results.
>>
>> Quentin
>>
>> On Wed, Jan 6, 2021 at 13:01, Alan Grayson  wrote:
>>
>> I should have been more explicit; since the trials are independent, the
>> other worlds implied by the MWI for any particular trial are unrelated to
>> the other worlds created for any OTHER particular trial. Thus, each other
>> world has an ensemble with one element, insufficient for the existence of
>> probabilities. AG
>>
>> On Wednesday, January 6, 2021 at 4:41:57 AM UTC-7 Alan Grayson wrote:
>>
>> On Wednesday, January 6, 2021 at 3:33:52 AM UTC-7 johnk...@gmail.com
>> wrote:
>>
>> On Tue, Jan 5, 2021 at 10:05 PM Alan Grayson  wrote:
>>
>> >> One world contains an Alan Grayson that sees the electron go left,
>> another world is absolutely identical in every way except that it contains
>> a  Alan Grayson that sees the electron go right. So you tell me, which of
>> those 2 worlds is "THIS WORLD"?
>>
>>
>> *> It's the world where a living being can observe the trials being
>> measured. The other world is in your imagination (if you believe in the
>> MWI). AG *
>>
>>
>> From that response I take it you have abandoned your attempt to poke logical
>> holes in the Many Worlds Interpretation and instead have resorted to a
>> pure emotional appeal; namely that there must be a fundamental law of
>> physics that says anything Alan Grayson finds to be odd cannot exist,
>> and Alan Grayson finds Many Worlds to be odd. Personally I find Many
>> Worlds to be odd too, although it's the least odd of all the quantum
>> interpretations, however I don't think nature cares very much if you or I
>> approve of it or not. From experimentation it's clear to me that if Many
>> Worlds is not true then something even stranger is.
>>
>>
>> I have no idea whatsoever, how you reached your conclusions above. There
>> are things called laboratories, where physicists conduct experiments, some
>> of which are quantum experiments with probabilistic outcomes. The world in
>> which such things exist, I call THIS world. Worlds postulated to exist
>> based on the claim that any possible measurement, must be a realized
>> measurement in another world, I call OTHER worlds. Those OTHER worlds are
>> imagined to exist based on the MWI. These are simple facts. I am not making
>> any emotional appeals to anything. The possible oddness of the Cosmos is
>> not affirmed or denied here. I agree the Cosmos might be odd, possibly very
>> odd, but this has nothing to do with our discussion. The core of my
>> argument is that since the trial outcomes in quantum experiments are
>> independent of one another, there's no reason to claim that each of the
>> OTHER worlds accumulates ensembles, as an ensemble is created in THIS
>> world. Without ensembles in those OTHER worlds, the MWI fails to affirm the
>> existence of probability in any of those OTHER worlds. AG
>>
>>
>> See my new list at Extropolis
>>
>> John K Clark
>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to everything-list+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/everything-list/55a83617-d49c-403c-a679-02025441ef6fn%40googlegroups.com
>> 
>> .
>>
>>
>>
>> --
>> All those moments 

Re: Stephen Wolfram - a theory of everything?

2020-04-17 Thread Terren Suydam
I love everything about fractals, chaos theory, and so on, and Wolfram's
latest idea here seems really rich and potentially highly explanatory.

But let's say he can fairly convincingly say, we found it, this is the
hypergraph rule to rule them all, it leads to gravity and quantum
mechanics, and so on. The next question would be, well, what's so special
about that rule?  What caused that one to be selected among the many to
lead to our present universe?  It's the same question we can ask about the
null hypothesis: why these physical laws, and not others? Well, one could
say, all of them exist somewhere, we just happen to exist in this one
(which perhaps is the only one that could support us).

But if it's true that they all (the infinity of them) exist somewhere,
we're back in platonia (at least with regard to Wolfram's universes). And
if that's the case, the hypergraph iteration is just a program running on
the dovetailer.
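
For what it's worth, the dovetailing trick itself fits on one screen. A toy
sketch in Python (all names mine; each "program" is just a counter, but a
hypergraph-rewrite rule could be slotted in the same way) interleaves an
unbounded family of programs so each gets unboundedly many steps even though
none ever halts:

def program(i):
    # Stand-in for the i-th program; may run forever.
    step = 0
    while True:
        yield (i, step)
        step += 1

def dovetail(rounds):
    programs = {}
    for n in range(1, rounds + 1):
        programs[n] = program(n)     # introduce program n
        for i in range(1, n + 1):
            yield next(programs[i])  # give each program one more step

for state in dovetail(3):
    print(state)  # (1, 0) (1, 1) (2, 0) (1, 2) (2, 1) (3, 0)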

Terren

On Fri, Apr 17, 2020 at 2:24 AM Alan Grayson  wrote:

>
> https://www.pcgamer.com/physicist-stephen-wolfram-thinks-hes-on-to-a-theory-of-everything-and-he-wants-help-simulating-the-universe/
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/360ab05a-5985-43fa-aa44-6549e15e30b4%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_0fKSGPd3YK2CuZbSrnb7PgfDKb1N__v8R2m%3DZknrWWg%40mail.gmail.com.


Re: Wittgenstein's meta-philosophy

2020-02-19 Thread Terren Suydam
What I mean is, "consciousness exists" cannot be denied, in any context.

On Wed, Feb 19, 2020, 11:00 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> But it is the same as 'Consciousness exists'.  The "true" is otiose; and
> probably the "exists" too.
>
> Brent
>
> On 2/19/2020 7:16 PM, Terren Suydam wrote:
>
> That's my view as well. However, the original article made reference to
> "absolute truth", and whether that concept is sensible. Thinking of
> Descartes' famous "I think, therefore I am", the word "I" is suspect, but
> we can do away with that and say it's absolutely true that "consciousness
> exists", and this is about as context-free a statement as one can make.
>
> Terren
>
>
> On Wed, Feb 19, 2020 at 7:20 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 2/19/2020 12:15 PM, Philip Thrift wrote:
>>
>>
>>
>> Wittgenstein is at the core really of *linguistic pragmatism *
>>
>> https://en.wikipedia.org/wiki/Neopragmatism
>>
>> Languages are tools. There is no truth "out there".
>>
>>
>> My view is that "true" means different things in different contexts.
>> Tacked onto a declarative sentence, it's just emphasis.  In science it's
>> the attribute of statements that can be confirmed empirically.  In logic
>> and mathematics it's just a marker that is assigned to axioms and
>> guaranteed to be preserved by the rules of inference.
>>
>> Brent
>>
>>
>> Philosophers are merely a type of *programming language theorist*.
>>
>>  https://en.wikipedia.org/wiki/Programming_language_theory
>>
>> @philipthrift
>>
>>
>>
>> On Wednesday, February 19, 2020 at 12:43:01 PM UTC-6, Brent wrote:
>>>
>>> I quite agree with Horwich and Wittgenstein as they refer to
>>> meta-physics.  I think one contribution of meta-physics, as in analyzing
>>> the interpretations of quantum mechanics, is what Wittgenstein called
>>> "therapuetic", i.e. clarifying and identifying real problems versus
>>> psuedo-problems of language.  But I think they also serve a purpose in
>>> suggesting how science may advance, what new theories might be developed or
>>> how old ones may be better understood.  Although the latter is generally
>>> done by scientists who are specialists in the field, there are exceptions
>>> like Tim Maudlin.  And from a meta-physical perspective, mathematicians are
>>> nothing but armchair philosophers.
>>>
>>> Horwich doesn't seem to touch at all on moral and ethical philosophy,
>>> how one should live one's life, as exemplified by the epicureans, the
>>> stoics, the existentialists,...  Someday neuroscience, evolution, AI, and
>>> decision theory may make this field more scientific, but in the meantime
>>> there's a place for philosophy.
>>>
>>> Brent
>>>
>>> On 2/18/2020 11:43 PM, Philip Thrift wrote:
>>>
>>>
>>>
>>> https://opinionator.blogs.nytimes.com/2013/03/03/was-wittgenstein-right/
>>>
>>> *Was Wittgenstein Right?*
>>> BY PAUL HORWICH
>>> MARCH 3, 2013
>>>
>>> A reminder of philosophy’s embarrassing failure, after over 2000 years,
>>> to settle any of its central issues.
>>>
>>>
>>> The singular achievement of the controversial early 20th century
>>> philosopher Ludwig Wittgenstein was to have discerned the true nature of
>>> Western philosophy — what is special about its problems, where they come
>>> from, how they should and should not be addressed, and what can and cannot
>>> be accomplished by grappling with them. The uniquely insightful answers
>>> provided to these meta-questions are what give his treatments of specific
>>> issues within the subject — concerning language, experience, knowledge,
>>> mathematics, art and religion among them — a power of illumination that
>>> cannot be found in the work of others.
>>>
>>> Admittedly, few would agree with this rosy assessment — certainly not
>>> many professional philosophers. Apart from a small and ignored clique of
>>> hard-core supporters the usual view these days is that his writing is
>>> self-indulgently obscure and that behind the catchy slogans there is little
>>> of intellectual value. But this

Re: Wittgenstein's meta-philosophy

2020-02-19 Thread Terren Suydam
That's my view as well. However, the original article made reference to
"absolute truth", and whether that concept is sensible. Thinking of
Descartes' famous "I think, therefore I am", the word "I" is suspect, but
we can do away with that and say it's absolutely true that "consciousness
exists", and this is about as context-free a statement as one can make.

Terren


On Wed, Feb 19, 2020 at 7:20 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 2/19/2020 12:15 PM, Philip Thrift wrote:
>
>
>
> Wittgenstein is at the core really of *linguistic pragmatism *
>
> https://en.wikipedia.org/wiki/Neopragmatism
>
> Languages are tools. There is no truth "out there".
>
>
> My view is that "true" means different things in different contexts.
> Tacked onto a declarative sentence, it's just emphasis.  In science it's
> the attribute of statements that can be confirmed empirically.  In logic
> and mathematics it's just a marker that is assigned to axioms and
> guaranteed to be preserved by the rules of inference.
>
> Brent
>
>
> Philosophers are merely a type of *programming language theorist*.
>
>  https://en.wikipedia.org/wiki/Programming_language_theory
>
> @philipthrift
>
>
>
> On Wednesday, February 19, 2020 at 12:43:01 PM UTC-6, Brent wrote:
>>
>> I quite agree with Horwich and Wittgenstein as they refer to
>> meta-physics.  I think one contribution of meta-physics, as in analyzing
>> the interpretations of quantum mechanics, is what Wittgenstein called
>> "therapuetic", i.e. clarifying and identifying real problems versus
>> psuedo-problems of language.  But I think they also serve a purpose in
>> suggesting how science may advance, what new theories might be developed or
>> how old ones may be better understood.  Although the latter is generally
>> done by scientists who are specialists in the field, there are exceptions
>> like Tim Maudlin.  And from a meta-physical perspective, mathematicians are
>> nothing but armchair philosophers.
>>
>> Horwich doesn't seem to touch at all on moral and ethical philosophy, how
>> one should live one's life, as exemplified by the epicureans, the stoics,
>> the existentialists,...  Someday neuroscience, evolution, AI, and decision
>> theory may make this field more scientific, but in the meantime there's a
>> place for philosophy.
>>
>> Brent
>>
>> On 2/18/2020 11:43 PM, Philip Thrift wrote:
>>
>>
>>
>> https://opinionator.blogs.nytimes.com/2013/03/03/was-wittgenstein-right/
>>
>> *Was Wittgenstein Right?*
>> BY PAUL HORWICH
>> MARCH 3, 2013
>>
>> A reminder of philosophy’s embarrassing failure, after over 2000 years,
>> to settle any of its central issues.
>>
>>
>> The singular achievement of the controversial early 20th century
>> philosopher Ludwig Wittgenstein was to have discerned the true nature of
>> Western philosophy — what is special about its problems, where they come
>> from, how they should and should not be addressed, and what can and cannot
>> be accomplished by grappling with them. The uniquely insightful answers
>> provided to these meta-questions are what give his treatments of specific
>> issues within the subject — concerning language, experience, knowledge,
>> mathematics, art and religion among them — a power of illumination that
>> cannot be found in the work of others.
>>
>> Admittedly, few would agree with this rosy assessment — certainly not
>> many professional philosophers. Apart from a small and ignored clique of
>> hard-core supporters the usual view these days is that his writing is
>> self-indulgently obscure and that behind the catchy slogans there is little
>> of intellectual value. But this dismissal disguises what is pretty clearly
>> the real cause of Wittgenstein’s unpopularity within departments of
>> philosophy: namely, his thoroughgoing rejection of the subject as
>> traditionally and currently practiced; his insistence that it can’t give us
>> the kind of knowledge generally regarded as its raison d’être.
>>
>> Wittgenstein claims that there are no realms of phenomena whose study is
>> the special business of a philosopher, and about which he or she should
>> devise profound a priori theories and sophisticated supporting arguments.
>> There are no startling discoveries to be made of facts, not open to the
>> methods of science, yet accessible “from the armchair” through some blend
>> of intuition, pure reason and conceptual analysis. Indeed the whole idea of
>> a subject that could yield such results is based on confusion and wishful
>> thinking.
>>
>> This attitude is in stark opposition to the traditional view, which
>> continues to prevail. Philosophy is respected, even exalted, for its
>> promise to provide fundamental insights into the human condition and the
>> ultimate character of the universe, leading to vital conclusions about how
>> we are to arrange our lives. It’s taken for granted that there is deep
>> understanding to be obtained of the nature of consciousness, of how

Re: Artist and Picture by J.W. Dunne

2019-07-12 Thread Terren Suydam
It's just a question of time. The FOOM scenario is motivated by safety
concerns - that AI's intelligence could surpass our ability to deal with
it, leading to the Singularity. So it's not about whether those other paths
are possible, it's about how long they would take, and in each of those
cases, would the AIs involved be safe.

It's hard to know how long these different paths would take. In general
though, it's much easier to see FOOM happening by considering an AI
analyzing its own cognitive apparatus and updating it directly, according
to some theory or model of intelligence it has developed, than by either
evolution or by starting from scratch. In the case of evolution, the AI
would have to run a bunch of iterations, each of which would take some
amount of time. How many iterations, how much time?  I'm out of my depth on
that. My hunch is that this would be a slow process, even with a lot of
computational resources. Also, it bears pointing out that the evolution
path is much less safe from the standpoint of being able to reason about
whether the AIs created would value human life/flourishing.
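
To put rough numbers on the slowness hunch above (every figure here is
invented, purely to show the shape of the estimate): the cost of the
evolutionary path scales as generations x population x time to evaluate one
candidate.

generations = 10_000  # guessed order of magnitude, not a measurement
population = 1_000
eval_hours = 1.0      # hours to score one candidate on real tasks

total_hours = generations * population * eval_hours
print(f"{total_hours:.1e} evaluation-hours")  # 1.0e+07: over 1000 years serially

Even with heavy parallelism the evaluation step stays the bottleneck, which
is part of why the direct self-modification path looks so much faster.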

In the case of an AI building its own new AI, that's actually the same
basic scenario as an AI just modifying its own source code. In both cases
it's instantiating a design based on its own theory of intelligence.
Starting from scratch is slower, because with recursive self-improvement,
it's got a huge head start - it's starting from a model that is more or
less proven to be intelligent already.  But it's not hard to imagine that a
recursively-improving AI would finally arrive at a point where it realized
the only way to continue increasing intelligence would be to create a new
design, one that would be impossible for human level intelligence to ever
grasp. From a safety point of view, both of these paths at least have the
possibility of being able to reason about whether the AIs preserve a goal
system in which human life/flourishing is valued.

On Fri, Jul 12, 2019, 4:28 AM Quentin Anciaux  wrote:

> Hi,
>
> Is that not how evolution works? By iteration and random modification,
> new and better organisms come into existence.
>
> Why could AI not use iterative evolution to make better and better AIs?
>
> Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have
> built a better, smarter version of ourselves? The AI surely would be able to
> build another one and, by iterating, a better one.
>
> What's wrong with this?
>
> Quentin
>
> On Fri, Jul 12, 2019 at 06:28, Terren Suydam  wrote:
>
>> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own
>> source code, gets smarter, and with the increase in intelligence, is able
>> to make yet more modifications to its own source code, and so on, until its
>> intelligence far outstrips its previous capabilities before the recursive
>> self-improvement began. It's hypothesized that such a process could take an
>> astonishingly short amount of time, thus "FOOM". See
>> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
>>
>> My point was that the inherent limitation of a mind to understand itself
>> completely, makes the FOOM scenario less likely. An AI would be forced to
>> model its own cognitive apparatus in a necessarily incomplete way. It might
>> still be possible to improve itself using these incomplete models, but
>> there would always be some uncertainty.
>>
>> Another more minor objection is that the FOOM scenario also selects for
>> AIs that become massively competent at self-improvement, but it's not clear
>> whether this selected-for intelligence is merely a narrow competence, or
>> translates generally to other domains of interest.
>>
>>
>> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> Advances in intelligence can just be gaining more factual knowledge,
>>> knowing more mathematics, using faster algorithms, etc.  None of that is
>>> barred by not being able to model oneself.
>>>
>>> Brent
>>>
>>> On 7/11/2019 11:41 AM, Terren Suydam wrote:
>>> > Similarly, one can never completely understand one's own mind, for it
>>> > would take a bigger mind than one has to do so. This, I believe, is
>>> > the best argument against the runaway-intelligence scenarios in which
>>> > sufficiently advanced AIs recursively improve their own code to
>>> > achieve ever increasing advances in intelligence.
>>> >
>>> > Terren
>>>
>>>

Re: Artist and Picture by J.W. Dunne

2019-07-11 Thread Terren Suydam
Sure, but that's not the "FOOM" scenario, in which an AI modifies its own
source code, gets smarter, and with the increase in intelligence, is able
to make yet more modifications to its own source code, and so on, until its
intelligence far outstrips its previous capabilities before the recursive
self-improvement began. It's hypothesized that such a process could take an
astonishingly short amount of time, thus "FOOM". See
https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.

My point was that the inherent limitation of a mind to understand itself
completely, makes the FOOM scenario less likely. An AI would be forced to
model its own cognitive apparatus in a necessarily incomplete way. It might
still be possible to improve itself using these incomplete models, but
there would always be some uncertainty.

Another more minor objection is that the FOOM scenario also selects for AIs
that become massively competent at self-improvement, but it's not clear
whether this selected-for intelligence is merely a narrow competence, or
translates generally to other domains of interest.


On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> Advances in intelligence can just be gaining more factual knowledge,
> knowing more mathematics, using faster algorithms, etc.  None of that is
> barred by not being able to model oneself.
>
> Brent
>
> On 7/11/2019 11:41 AM, Terren Suydam wrote:
> > Similarly, one can never completely understand one's own mind, for it
> > would take a bigger mind than one has to do so. This, I believe, is
> > the best argument against the runaway-intelligence scenarios in which
> > sufficiently advanced AIs recursively improve their own code to
> > achieve ever increasing advances in intelligence.
> >
> > Terren
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/304332c1-13a6-7006-651b-494e468eefc4%40verizon.net
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9xK%3DibZqo%3DxQcqSVZXjTu3pnAiTvRLF_8-LHVRth8F_w%40mail.gmail.com.


Re: Artist and Picture by J.W. Dunne

2019-07-11 Thread Terren Suydam
Similarly, one can never completely understand one's own mind, for it would
take a bigger mind than one has to do so. This, I believe, is the best
argument against the runaway-intelligence scenarios in which sufficiently
advanced AIs recursively improve their own code to achieve ever increasing
advances in intelligence.
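
A toy illustration of the regress (mine, nothing more): a complete self-model
must contain a model of the model, and so on without end, so any finite model
is necessarily a truncation.

def self_model(depth):
    # The regress never bottoms out; a real agent has to cut it off.
    if depth == 0:
        return "..."
    return {"world": "w", "self": self_model(depth - 1)}

print(self_model(3))
# {'world': 'w', 'self': {'world': 'w', 'self': {'world': 'w', 'self': '...'}}}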

Terren

On Thu, Jul 11, 2019 at 2:29 PM Evgenii Rudnyi  wrote:

> "A certain artist, having escaped from the lunatic asylum in which,
> rightly or wrongly, he had been confined, purchased the materials of his
> craft and set to work to make a complete picture of the universe."
>
> ...
>
> "The interpretation of this parable is sufficiently obvious. The artist
> is trying to describe in his picture a creature equipped with all the
> knowledge which he himself possesses, symbolizing that knowledge by the
> picture which the pictured creature would draw. And it becomes
> abundantly evident that the knowledge thus pictured must always be less
> than the knowledge employed in making the picture. In other words,
> the mind which any human science can describe can never be an adequate
> representation of the mind which can make that science. And the process
> of correcting that inadequacy must follow the serial steps of an
> infinite regress."
>
>
> https://scienceforartists.wordpress.com/2011/09/20/artist-and-picture-by-j-w-dunne/
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/76d1afbe-124a-8afa-d67a-9301e1b426b7%40rudnyi.ru
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA-fpQHWiou%2BxYLzbXn_dS9C0utvFkja5cYGRsHG2uZ_Gg%40mail.gmail.com.


Re: A purely relational ontology?

2019-06-18 Thread Terren Suydam
Hi Pierz,

Your writings remind me very much of the work of Gilles Deleuze, a
philosopher who similarly shifted ontology from *identity* to *relation*, and
explored many interesting consequences of making that shift. My exposure to
him came from the excellent Philosophize This podcast, which dedicated 5
episodes to Deleuze. If you're interested, check out the first episode here.

Terren

On Mon, Jun 17, 2019 at 10:15 PM Pierz  wrote:

>
> I've been thinking and writing a lot recently about  a conception of
> reality which avoids the debates about what is fundamental in reality. It
> seems to me that with regards to materialism, we find it very difficult to
> escape the evolutionarily evolved, inbuilt notion of "things" and "stuff"
> that our brains need in order to manipulate the world. Yet QM and
> importantly the expected dissolution of time and space as fundamental
> entities in physics have made any such simple mechanistic notion of matter
> obsolete - what is left of matter except mathematics and some strange thing
> we can only call "instantiation" - the fact that things have specific
> values rather than (seeming to be) pure abstractions? What does a
> sophisticated materialist today place his or her faith in exactly?
> Something along the lines of the idea that the world is fundamentally
> describable by mathematics, impersonal and reducible to the operation of
> its simplest components. With regards to the last part - reductionism -
> that also seems to be hitting a limit in the sense that, while we have some
> supposed candidates for fundamental entities (whether quantum fields,
> branes or whatever), there is always a problem with anything considered
> "fundamental" - namely the old turtle stack problem. If the world is really
> made of any fundamental entity, then *fundamentally* it is made of magic
> - since the properties of that fundamental thing must simply be given
> rather than depending on some other set of relations. While physicists on
> the one hand continually search for such an entity, on the other they
> immediately reject any candidate as soon as it is found, since the question
> naturally arises, why this way and not that? What do these properties
> depend on? Furthermore, the fine tuning problem, unless it can be solved by
> proof that the world *has* to be the way it is – a forlorn hope it seems to
> me – suggests that the idea that we can explain all of reality in terms of
> the analysis of parts (emergent relationships) is likely to collapse – we
> will need to invoke a cosmological context in order to explain the
> behaviour of the parts. It's no wonder so many physicists hate that idea,
> since it runs against the deep reductionist grain. And after all, analysis
> of emergent relationships (the parts of a thing) is always so much easier
> than analysis of contextual relationships (what a thing is part of).
>
> To get to the point then, I am considering the idea of a purely relational
> ontology, one in which all that exists are relationships. There are no
> entities with intrinsic properties, but only a web of relational
> properties. Entities with intrinsic properties are necessary components of
> any finite, bounded theory, and in fact such entities form the boundaries
> of the theory, the "approximations" it necessarily invokes in order to draw
> a line somewhere in the potentially unbounded phenomenological field. In
> economic theory for instance, we have “rational, self-interested” agents
> invoked as fundamental entities with rationality and self-interest deemed
> intrinsic, even though clearly such properties are, in reality, relational
> properties that depend on evolutionary and psychological factors, that,
> when analysed, reveal the inaccuracies and approximations of that theory. I
> am claiming that all properties imagined as intrinsic are approximations of
> this sort - ultimately to be revealed as derived from relations either
> external or internal to that entity.
>
> Of course, a purely relational ontology necessarily involves an infinite
> regress of relationships, but it seems to me that we must choose our poison
> here - the magic of intrinsic properties, or the infinite regress of only
> relational ones. I prefer the latter. (Note that I am using a definition of
> relational properties that includes emergent properties as relational,
> though the traditional philosophical use of those terms probably would not.
> The reason is that I am interested in what is *ontologically* intrinsic,
> not *semantically* intrinsic.)
>
> What would such a conception imply in the philosophy of mind?
> Traditionally, the “qualiophiles” have defined qualia as intrinsic
> properties, yet (while I am no fan of eliminativism) I think Dennett has
> made a strong case against this idea. Qualia appear to me to be properties
> of relationships between organisms and their environments. They are not
> fundamental, but then neither is the “

Re: The anecdote of Moon landing

2019-05-20 Thread Terren Suydam
I'll add my voice to those asking you to put up or shut up. Produce an act
of telepathy. You name the terms, since you're the one making the claim
that you can do it.

But you won't do it, because you can't. That's my clairvoyant prediction.


On Mon, May 20, 2019, 6:37 PM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> See? That's the problem with you irrational materialist dogmatic
> believers. Even if people give you evidence, you keep yelling
> "EVIDENCE! EVIDENCE!". Maybe you should consult a psychiatrist or
> something. You cannot have dialogues with irrational broken records.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/473e7052-f953-4158-a1ef-7dd60395578d%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9wgiOJfr97K-RK7zGzukPBEGwMrQY0wrpvs3USMN1-JQ%40mail.gmail.com.


Re: The anecdote of Moon landing

2019-05-17 Thread Terren Suydam
A skeptic.

On Fri, May 17, 2019 at 11:30 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> Stating the obvious is not insulting. If a phenomenon is real and you say
> it is not, what else are you if not an irrational dogmatic believer in
> materialism?
>
> On Friday, 17 May 2019 17:20:03 UTC+3, Terren Suydam wrote:
>>
>> If there isn't a word for this, there should be, to name the situation
>> when someone makes some insulting claim that is best understood as a
>> projection of one's own justified fear of how they're perceived. Trump does
>> it all the time.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/4b7ce737-71cb-4059-913c-8fa06d98fda2%40googlegroups.com
> <https://groups.google.com/d/msgid/everything-list/4b7ce737-71cb-4059-913c-8fa06d98fda2%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA8-SN9w87m8N_208u5Ygq-eX3iz9Of4Y4XNy9zBY-7F4A%40mail.gmail.com.


Re: The anecdote of Moon landing

2019-05-17 Thread Terren Suydam
If there isn't a word for this, there should be, to name the situation when
someone makes some insulting claim that is best understood as a projection
of one's own justified fear of how they're perceived. Trump does it all the
time.

On Fri, May 17, 2019 at 10:05 AM 'Cosmin Visan' via Everything List <
everything-list@googlegroups.com> wrote:

> Telepathy is not information transmission. Telepathy is consciousness
> unification: two or more consciousnesses unify, they live a common
> experience, and then they split back apart, all remembering the shared
> experience.
>
> Personal experiences are all there is, since consciousness is all there
> is. Ignoring them only shows that you are an irrational indoctrinated
> dogmatic believer, with which no discussion can take place.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/c2cd426d-8d83-4790-9fb0-ca83133fbc68%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA9BMLLSfVGTEjfsQY%3DJZXU6X8hcLR0D%2B9v1ikteeHHjmg%40mail.gmail.com.

