On Thursday, March 20, 2014 8:48:30 PM UTC-5, Craig Weinberg wrote:
>
> On Thursday, March 20, 2014 1:01:43 PM UTC-4, Gabriel Bodeen wrote:
>>
>> On Thursday, March 20, 2014 11:16:19 AM UTC-5, Craig Weinberg wrote:
>>>
>>> On Thursday, March 20, 2014 11:09:39 AM UTC-4, Gabriel Bodeen wrote:
>>>>
>>>> It formed increasingly high-level associations between bundles of
>>>> sensory data, eventually also combining sounds and vocal behavior into
>>>> those associations. There's nothing obviously intractable about
>>>> describing such data input and analysis in computational terms.
>>>
>>> If that were true, the oldest words would describe things like danger or
>>> food, but they don't. They are concepts like I, who, two, three and five (
>>> http://media.tumblr.com/8b5d411063f5291737c4a36681474205/tumblr_inline_mmrdbhECQY1qz4rgp.png
>>> )
>>
>> BTW, that chart is about the most-conserved words in the Indo-European
>> family of languages. It says nothing either way about what the earliest
>> words were.
>
> Most conserved = earliest words that are still in use.
Indeed, but that doesn't rescue the original point. The earliest words
still in use today don't tell us what the earliest words were.

> Computationalism need not have anything to do with the brain. It's about
> consciousness arising from computation, i.e., it supports strong AI,
> which would not be about brains.

Ah, that's an important comment. You are indeed talking about a specific
kind of CTM that wasn't clear to me. Thanks for clarifying. The usual
sense of CTM is that consciousness is literally computation, not that it
arises from computation.

> The brain doesn't figure into this at all. My point was that if
> consciousness is computation, and qualia are just complex computational
> labels, then we should expect languages to develop from simple,
> modal-independent forms to modal-dependent forms in which computations
> become so diversified that they lose any common vocabulary. Would you
> agree that this is precisely the opposite of what is seen in nature?

Yes, I still agree about how we observe language to form. It's just that
your characterization of CTM as making the predictions you mention is
wrong. It only makes those predictions when supplemented with additional
assumptions that are not generally part of CTM.

> They don't reduce to a binary code like we would expect them to in CTM.

That is not a prediction of CTM. Here's a relevant quote from the
Stanford Encyclopedia of Philosophy:

"Turing himself seems to have been of the opinion that a machine
operating in this way would literally be doing the same things that the
human performing computations is doing—that it would be 'duplicating'
what the human computer does.
But other writers have suggested that what the computer does is merely a
'simulation' of what the human computer does: a reproduction of
human-level performance, perhaps through a set of steps that is [at] some
level isomorphic to those the human undertakes, but not in such a fashion
as to constitute doing the same thing in all relevant respects."

-Gabe

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.