On Monday, May 6, 2019 at 5:51:54 AM UTC-5, PGC wrote:

> On Sunday, May 5, 2019 at 11:25:59 PM UTC+2, Brent wrote:
>
>> On 5/5/2019 2:06 PM, [email protected] wrote:
>>
>>> I of course think that "consciousness arises from the function of matter
>>> in some configurations" (the conscious brain is nothing but the cells and
>>> chemicals operating inside the skull), but it's doing more than
>>> *information processing*. It's doing *experience processing*. People can
>>> deliberate until the cows come home about why information processing is
>>> or is not sufficient. If one is already an "information processing is
>>> sufficient for consciousness" fan, then probably nothing will change that
>>> belief.
>>>
>>> The brain is an experience processing engine. Experience cannot be
>>> reduced to information.
>>
>> The question is whether it can be reduced to a physical process, and if
>> so, what processes produce experience? Does information processing that
>> produces intelligence also produce experience? If not, there can be
>> philosophical zombies.
>>
>> Brent
>
> Ok, but we're mostly surrounded by zombies 99% of the time anyway,
> including members/posts of this old list, with occasional spring-chicken
> fresh meat, so it wouldn't make much of a difference in experience terms.
> lol
>
> Nah, in this area I'm less intrigued by the list's 20-year preoccupation
> with UDA, which merely applies Star Trek (and older sci-fi such as
> https://www.youtube.com/watch?v=xO9ppicjlFg [yes, have some fun once in a
> while], but already Frankenstein and even older ideas/fiction) to the old
> and dusty mathematical philosophy debates. Just because it is on-topic
> doesn't mean that it isn't a time waster or an intractable infinite oracle
> problem/solution.
>
> In contrast, I'm always interested in AI's connection to language,
> analyzing discourse, and reading what's up with research on applying AI to
> improve and speed up theorem proving.
> Like this conference one month ago: http://aitp-conference.org/2019/
>
> Or meta-learning being given some steroids, e.g. applying multiple AI
> algorithms to solve cognitive problems in some framework, with each
> algorithm solving a few steps of a problem, then switching (or running in
> parallel, whatever) after some intermediate result is obtained, with which
> another appropriate algorithm produces another intermediate result, etc.,
> then applying pattern mining with logical transformation rules to look at
> what was done. Like bridging the usual gap by applying operations of
> commonsense intuition to mathematical inference problems and endowing
> commonsense reasoning problems with more mathematical precision. This is
> fascinating, as it's perhaps a step towards AI reasoning about its own code
> and its underlying algorithms, and being less zombie. As in "Yo AI: Are
> you experienced?"
>
> Now, if we could just formalize aesthetics: what makes a theorem
> interesting or sexy as fuck? If any of you know-it-alls have work on this,
> well, you have my attention + we should hold another conference for that.
> Spring chicken edition in Europe. Hosted by the big bad wolf, killer of
> zombies. PGC
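The switching scheme quoted above — each algorithm advancing the problem a few steps, handing off once it produces an intermediate result, with the whole trace kept for later pattern mining — can be sketched as a small loop. Everything below (the toy `halve`/`decrement` "solvers", the integer state, the goal test) is invented purely for illustration; it is not the API of any existing meta-learning framework.

```python
from typing import Callable, Optional

# A "solver" maps a problem state to a new state, or None if it
# cannot make progress on the current state.
Solver = Callable[[int], Optional[int]]

def halve(n: int) -> Optional[int]:
    # Toy solver: only applicable to even states.
    return n // 2 if n % 2 == 0 else None

def decrement(n: int) -> Optional[int]:
    # Toy solver: applicable whenever the state is positive.
    return n - 1 if n > 0 else None

def solve(state: int, solvers: list[Solver], goal: int = 0, max_steps: int = 100):
    """Apply whichever solver makes progress, switching when one stalls.
    Each step is logged as (solver name, old state, new state) so the
    trace can later be mined for patterns."""
    trace = []
    for _ in range(max_steps):
        if state == goal:
            return state, trace
        for s in solvers:
            nxt = s(state)
            if nxt is not None and nxt != state:
                trace.append((s.__name__, state, nxt))
                state = nxt
                break
        else:
            break  # no solver made progress; give up
    return state, trace

final, trace = solve(12, [halve, decrement])
```

Running this reduces 12 to 0 in five steps, alternating solvers whenever `halve` hits an odd state; the recorded trace is the raw material the quoted post imagines mining with logical transformation rules.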
One way to spot a zombie: its declaration of adherence to the Church-Turing thesis.

On theorem-proving evolution, see:

Simon Schäfer and Stephan Schulz. *Breeding theorem proving heuristics with genetic algorithms.* In Georg Gottlob, Geoff Sutcliffe, and Andrei Voronkov, editors, Proc. of the Global Conference on Artificial Intelligence, Tbilisi, Georgia, volume 36 of EPiC, pages 263–274. EasyChair, 2015.

Cited in http://www.cs.man.ac.uk/~regerg/arcade/papers/paper_16.pdf

@philipthrift

-- 
You received this message because you are subscribed to the Google Groups "Everything List" group.
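To get a feel for what "breeding theorem proving heuristics with genetic algorithms" amounts to, here is a toy sketch: candidate heuristic weightings are selected by fitness, crossed over, and mutated across generations. The target vector and fitness function below are stand-ins invented for the example; the cited paper evolves real clause-evaluation heuristics scored by actual prover performance, not distance to a known answer.

```python
import random

random.seed(0)

# Stand-in for an "ideal" clause-selection weighting (hypothetical;
# in a real system fitness would be proofs found within a timeout).
TARGET = [0.5, 0.2, 0.3]

def fitness(w):
    # Higher is better: negative squared distance to the stand-in target.
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def crossover(p1, p2):
    # One-point crossover of two weight vectors.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(w, rate=0.2):
    # Perturb each weight with small Gaussian noise at the given rate.
    return [x + random.gauss(0, 0.1) if random.random() < rate else x
            for x in w]

def evolve(pop_size=20, generations=50):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Because the elite half is carried over unchanged each generation, the best fitness never decreases, and after a few dozen generations the winning weighting sits close to the target.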

