Excellent! Now we're getting somewhere. So on the problem of qualia, say, whether or not we could build a machine that *enjoys* playing the piano, you fall into the camp of the strong-AI people: we can definitely build a machine that thinks and feels just like a human. Is that right?
(Full disclosure: I'm a strong-AI person. But I'm also pretty practical in my understanding of AI: achieving it lies beyond at least one inflection point, and we'll probably all go extinct before it happens.)

On 5/1/20 2:50 PM, [email protected] wrote:
> Perhaps I misspoke. I certainly agree that working out an entity's point of
> view is a problem. I just don't see why it's a hard problem. In other words,
> when Chalmers asserts that there is a Hard Problem of consciousness, he
> implies that he is pointing to some problem unique in its hardness. I think
> I am only denying that there is such a uniquely hard problem, not that there
> is no problem of working out what is from different points of view, or of
> working out some entity's point of view from what is.

--
☣ uǝlƃ
.-. .- -. -.. --- -- -..-. -.. --- - ... -..-. .- -. -.. -..-. -.. .- ... .... . ...

FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/
