Ed Porter wrote:
Richard,
Please describe some of the counterexamples that you can easily come up
with that make a mockery of Tononi's conclusion.

Ed Porter

Alas, I will have to disappoint. I put a lot of effort into understanding his paper the first time around, but the sheer agony of reading (and listening to) his confused, shambling train of thought, the non sequiturs, and the pages of irrelevant math is not something I need to experience a second time. All of my original effort only resulted in the discovery that I had wasted my time, so I have no interest in wasting more of it.

With other papers, ones that contain more coherent substance but perhaps what looks like an error, I would make the effort. But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S. A hint. All I remember is that he started talking about multiple regions (columns?) of the brain exchanging information with one another in a particular way, and then asserted a conclusion which, on quick reflection, I knew would not be true of a system resembling the distributed one that I described in my consciousness paper (the molecular model). Since his conclusion was flat-out untrue for that one case, and for a whole class of similar systems, his argument was toast.









-----Original Message-----
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Monday, December 22, 2008 8:54 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

Ed Porter wrote:
I don't think this AGI list should be so quick to dismiss a $4.9 million grant to create an AGI. It will not necessarily be "vaporware." I think we should view it as a good sign.

Even if it is a project that runs the risk, like many DARPA projects (and like most scientific funding in general), of not placing its money where it might do the most good, it is likely to produce at least some interesting results, and it just might make some very important advances in our field.

The article from http://www.physorg.com/news148754667.html said:

".a $4.9 million grant.for the first phase of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

Tononi and scientists from Columbia University and IBM will work on the "software" for the thinking computer, while nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create the "hardware." Dharmendra Modha of IBM is the principal investigator.

The idea is to create a computer capable of sorting through multiple streams of changing data, to look for patterns and make logical decisions.

There's another requirement: The finished cognitive computer should be as small as the brain of a small mammal and use as little power as a 100-watt light bulb. It's a major challenge. But it's what our brains do every day.

I have just spent several hours reading a Tononi paper, "An information integration theory of consciousness," and skimmed several parts of the book "A Universe of Consciousness" that he wrote with Edelman, whom Ben has referred to often in his writings. (I have attached my markup of the article; if you read just the yellow-highlighted text, or (for more detail) the red, you can get a quick understanding of it. You can also view it in MS Word outline mode if you like.)

This paper largely agrees with my notion, stated multiple times on this list, that consciousness is an incredibly complex computation that interacts with itself in a very rich manner that makes it aware of itself.

For the record, this looks like the paper that I listened to Tononi talk about a couple of years ago -- the one I mentioned in my last message.

It is, for want of a better word, nonsense. And since people take me to task for being so dismissive, let me add that it is the central thesis of the paper that is "nonsense": if you ask yourself very carefully what it is he is claiming, you can easily come up with counterexamples that make a mockery of his conclusion.



Richard Loosemore

