On Thursday, October 24, 2013 4:47:15 PM UTC-4, Liz R wrote:
>
> I think what Deep Blue does is similar to what *parts* of the brain do, 
> and it probably does *that* better (some "human computers" seem to use 
> this facility in a more direct way than most of us can). However, obviously 
> something is missing - possibly the system that integrates all these little 
> "engines" into a whole. (Or possibly not...)
>

I agree that what Deep Blue does is similar to what parts of the brain do *for 
us*, just as our own actions contribute to what a city or country does. 
That doesn't mean that this is all the parts of the brain are doing, or 
even that what they are doing would be comprehensible to us. Neurons have 
their own lives to worry about... perhaps not in the same way that we have 
ours to worry about - indeed, they may have shed much of their independence 
long ago in evolutionary terms - but I imagine that they at least have 
more local conditions to attend to than we would probably assume. If the 
relation between what our bodies do and what we experience is any guide, 'what 
it is like to be a neuron' is probably mostly inconceivable from our 
perspective.

As far as the binding or integration goes, I see that as only half of the 
picture. All neurons are replications of a single stem cell, so they are 
already one thing. Mitosis is a diffraction or divergence from singularity. 
What is emerging is insensitivity and space... synapse, convolution, 
hemisphere separation.


>
> On 25 October 2013 08:55, Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> Hi John,
>>
>> On Thu, Oct 24, 2013 at 9:08 PM, John Mikes <jam...@gmail.com> wrote:
>> > Craig and Telmo:
>> > Is "anticipation" involved at all? Deep Blue anticipated hundreds of steps
>> > in advance (and evaluated a potential outcome before accepting, or
>> > rejecting).
>>
>> Sure. The issue, though, is that Deep Blue does this by brute force. It
>> computes billions of possible scenarios to arrive at a decision. It's
>> clear that human beings don't do that. They are more intelligent in
>> the sense that they can play competitively while only considering a
>> small fraction of the scenarios. How do we do this? There is almost no
>> real AI research nowadays because people gave up on answering this
>> question. It's related to many other interesting questions: how do we
>> read and understand the meaning of a text? Google is like something
>> with the intelligence of an ant (probably still way less) but with vast
>> amounts of computational power. Again, this is brute-forcing the
>> problem and it doesn't come close to the level of understanding that a
>> smart 9-year-old can have when reading.
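
For concreteness, a minimal sketch (in Python) of the kind of brute-force
game-tree search being described: look a fixed number of plies ahead, score
the leaf positions with an evaluation function, and return the move with the
best backed-up score. This is not Deep Blue's actual implementation (which
added alpha-beta pruning, specialized hardware and much else); the helpers
legal_moves, apply_move and evaluate, and the toy demo game, are made-up
placeholders.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Back up a score for `position` by searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score at the search horizon
    scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)

def best_move(position, depth, legal_moves, apply_move, evaluate):
    """Pick the move whose resulting position backs up the best score."""
    return max(legal_moves(position),
               key=lambda m: minimax(apply_move(position, m), depth - 1, False,
                                     legal_moves, apply_move, evaluate))

if __name__ == "__main__":
    # Toy demo "game": a position is an integer, each move adds 1, 2 or 3,
    # and the (made-up) evaluation rewards being close to 10.
    moves_fn = lambda pos: [1, 2, 3] if pos < 10 else []
    apply_fn = lambda pos, m: pos + m
    eval_fn = lambda pos: -abs(10 - pos)
    print(best_move(0, 4, moves_fn, apply_fn, eval_fn))

Plug in a real move generator and a material-count evaluator and this is the
shape of what the article quoted further down describes, minus the enormous
engineering that let Deep Blue push it to hundreds of millions of positions
per second.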
>>
>> On the linguistic side, Chomsky is also outspoken against the
>> statistical "dumb" approaches.
>>
>> > What else is involved in "thinking"? I would like to know, because I have no
>> > idea.
>>
>> Hofstadter's ideas are very deep and I don't claim to fully understand
>> them. I do think that his concept of the "strange loop" is important. Every
>> time there's something we can't define (intelligence, life,
>> consciousness), strange loops seem to be involved. Strange loops
>> feed back across abstraction layers: Goals->feelings->cognition->Goals,
>> Environment->DNA->Organism->Environment, and so on -- in a very
>> informal way; please pay no attention to the lack of rigour here.
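
Read very loosely, those arrows can be caricatured as a closed chain of update
rules in which the top layer feeds back into the bottom one. The Python toy
below only illustrates that loop structure; the three layers and their update
rules are invented for the example and model nothing that Hofstadter actually
proposed.

def strange_loop(goals, steps=5):
    """Toy cycle: goals -> feelings -> cognition -> back into goals."""
    feelings, cognition = 0.0, 0.0
    for _ in range(steps):
        feelings = 0.5 * goals - 0.1 * cognition    # goals colour feelings
        cognition = feelings + 0.2                  # feelings drive cognition
        goals = 0.9 * goals + 0.1 * cognition       # cognition revises the goals
        print(f"goals={goals:.3f}  feelings={feelings:.3f}  cognition={cognition:.3f}")

strange_loop(goals=1.0)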
>>
>> I think this is compatible with comp and several things that Bruno
>> alludes to. The insight also seems to come from similar sources --
>> notably Gödel's theorems.
>>
>> On the AI engineering side, I believe we are still in the Middle
>> Ages when it comes to computation environments and languages. One of
>> my intuitions is that languages that facilitate the creation of
>> self-modifying computer code are an important step.
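
As a toy illustration of what self-modifying code can mean even in an ordinary
language, the Python sketch below has a program rewrite the source text of one
of its own functions at run time and rebind the name to the recompiled version.
The function name step and the trivial "mutation" are invented for the example;
Lisp-family languages, where code is data, make this sort of thing far more
natural.

import random

# Source text of a function this program will later rewrite.
source = "def step(x):\n    return x + 1\n"

def rebuild(src):
    """Compile `src` in a fresh namespace and return the `step` it defines."""
    namespace = {}
    exec(src, namespace)
    return namespace["step"]

step = rebuild(source)
print(step(10))                          # 11

# Self-modification: edit the function's own source, then reload it.
increment = random.randint(2, 9)
source = source.replace("x + 1", f"x + {increment}")
step = rebuild(source)
print(step(10))                          # 10 + increment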
>>
>> Telmo.
>>
>> > John Mikes
>> >
>> >
>> > On Thu, Oct 24, 2013 at 1:02 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> >>
>> >> On Thursday, October 24, 2013 12:43:49 PM UTC-4, telmo_menezes wrote:
>> >>>
>> >>> On Thu, Oct 24, 2013 at 6:39 PM, Craig Weinberg <whats...@gmail.com> wrote:
>> >>> >
>> >>> > http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>> >>> >
>> >>> > The Man Who Would Teach Machines to Think
>> >>> >
>> >>> > "...Take Deep Blue, the IBM supercomputer that bested the chess
>> >>> > grandmaster Garry Kasparov. Deep Blue won by brute force. For each
>> >>> > legal move it could make at a given point in the game, it would
>> >>> > consider its opponent’s responses, its own responses to those
>> >>> > responses, and so on for six or more steps down the line. With a
>> >>> > fast evaluation function, it would calculate a score for each
>> >>> > possible position, and then make the move that led to the best
>> >>> > score. What allowed Deep Blue to beat the world’s best humans was
>> >>> > raw computational power. It could evaluate up to 330 million
>> >>> > positions a second, while Kasparov could evaluate only a few dozen
>> >>> > before having to make a decision.
>> >>> >
>> >>> > Hofstadter wanted to ask: Why conquer a task if there’s no insight
>> >>> > to be had from the victory? “Okay,” he says, “Deep Blue plays very
>> >>> > good chess—so what? Does that tell you something about how we play
>> >>> > chess? No. Does it tell you about how Kasparov envisions,
>> >>> > understands a chessboard?” A brand of AI that didn’t try to answer
>> >>> > such questions—however impressive it might have been—was, in
>> >>> > Hofstadter’s mind, a diversion. He distanced himself from the field
>> >>> > almost as soon as he became a part of it. “To me, as a fledgling AI
>> >>> > person,” he says, “it was self-evident that I did not want to get
>> >>> > involved in that trickery. It was obvious: I don’t want to be
>> >>> > involved in passing off some fancy program’s behavior for
>> >>> > intelligence when I know that it has nothing to do with
>> >>> > intelligence. And I don’t know why more people aren’t that way..."
>> >>>
>> >>> I was just reading this too. I agree.
>> >>>
>> >>> > This is precisely my argument against John Clark's position.
>> >>> >
>> >>> > Another quote I will be stealing:
>> >>> >
>> >>> > "Airplanes don’t flap their wings; why should computers think?"
>> >>>
>> >>> I think the intended meaning is closer to: "airplanes don't fly by
>> >>> flapping their wings, why should computers be intelligent by
>> >>> thinking?".
>> >>
>> >>
>> >> It depends whether you want 'thinking' to imply awareness or not. I think
>> >> the point is that we should not assume that computation is in any way
>> >> 'thinking' (or intelligence for that matter). I think that 'thinking' is not
>> >> passive enough to describe computation. It is like saying that a net is
>> >> 'fishing'. Computation is many nets within nets, devoid of intention or
>> >> perspective. It does the opposite of thinking; it is a method for petrifying
>> >> the measurable residue or reflection of thought.