On Sunday, May 5, 2019 at 11:25:59 PM UTC+2, Brent wrote:
>
>
>
> On 5/5/2019 2:06 PM, [email protected] wrote:
>
>
> I of course think that "consciousness arises from the function of 
> matter in some configurations" (the conscious brain is nothing but the 
> cells and chemicals operating inside the skull), but it's doing more than 
> *information processing*. It's doing *experience processing*. People can 
> deliberate until the cows come home why information processing is 
> sufficient or is not sufficient. If one is already an "information 
> processing is sufficient for consciousness" fan, then nothing will probably 
> change their belief in that. 
>
> The brain is an experience processing engine. Experience cannot be reduced 
> to information.
>
>
> The question is whether it can be reduced to a physical process and if so 
> what processes produce experience?  Does information processing that 
> produces intelligence also produce experience?  If not, there can be 
> philosophical zombies.
>
> Brent
>

OK, but we're mostly surrounded by zombies 99% of the time anyway, 
including members/posts of this old list (with the occasional spring-chicken 
fresh meat), so it wouldn't make much of a difference in experience terms. 
lol

Nah, in this area I'm less intrigued by the list's 20-year preoccupation 
with UDA, which merely applies Star Trek (and older sci-fi such as: 
https://www.youtube.com/watch?v=xO9ppicjlFg [yes, have some fun once in a 
while], but already Frankenstein and even older ideas/fiction) to the old 
and dusty mathematical-philosophy debates. Just because it is on-topic 
doesn't mean that it isn't a time waster or an intractable infinite-oracle 
problem/solution.

In contrast, I'm always interested in AI's connection to language, 
analyzing discourse, and reading what's up with research on applying AI to 
improve and speed up theorem proving. Like this conference one month ago: 
http://aitp-conference.org/2019/

Or meta-learning being given some steroids: e.g., applying multiple AI 
algorithms to solve cognitive problems within some framework, with each 
algorithm solving a few steps of a problem, then switching (or running in 
parallel) once some intermediate result is obtained, from which another 
appropriate algorithm produces another intermediate result, and so on; then 
applying pattern mining with logical transformation rules to look at what 
was done. Like bridging the usual gap by applying operations of commonsense 
intuition to mathematical inference problems, and endowing commonsense 
reasoning problems with more mathematical precision. This is fascinating, 
as it's perhaps a step towards AI reasoning about its own code and the 
underlying algorithms, and being less zombie. As in "Yo AI: Are you 
experienced?"
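A toy sketch of that switching idea, just to make it concrete (all names and 
scoring here are hypothetical, not any published framework): each solver 
advances a problem state a few steps and reports a confidence, a dispatcher 
picks the most confident solver each round, and the trace is kept so a 
pattern-mining pass could later inspect what was done.

```python
# Toy sketch of algorithm-switching meta-learning (all names hypothetical).
# Each "solver" advances the problem state a little and reports how
# confident it is; the dispatcher picks the best next solver each round.

def arithmetic_solver(state):
    # Handles numeric states: advance by doubling.
    if isinstance(state, (int, float)):
        return state * 2, 0.9
    return state, 0.0  # not applicable

def string_solver(state):
    # Handles string states: advance by upper-casing.
    if isinstance(state, str):
        return state.upper(), 0.9
    return state, 0.0  # not applicable

SOLVERS = [arithmetic_solver, string_solver]

def dispatch(state, rounds=3):
    trace = []  # (solver name, intermediate result) for later pattern mining
    for _ in range(rounds):
        results = [(s.__name__, *s(state)) for s in SOLVERS]
        name, new_state, conf = max(results, key=lambda r: r[2])
        if conf == 0.0:
            break  # no solver applies; stop switching
        trace.append((name, new_state))
        state = new_state
    return state, trace

final, trace = dispatch(3)  # arithmetic_solver fires every round: 3 -> 6 -> 12 -> 24
```

The trace is the point: once the intermediate results are logged per solver, 
something like logical transformation rules can be mined over them after the 
fact.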

Now, if we could just formalize aesthetics: what makes a theorem 
interesting or sexy as fuck? If any of you know-it-alls have work on this, 
well you have my attention + we should hold another conference for that. 
Spring chicken edition in Europe. Hosted by the big bad wolf, killer of 
zombies. PGC

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.