(Sorry for the bad formatting, Gmail is getting worse at this. They
must have sent all the smart people to the AlphaGo team...)

> OK, let's put aside the question of consciousness and talk about
> intelligence. AI has made a lot of progress in just the last few
> years; the most spectacular example is Google's AlphaGo, which
> defeated the world's best human player at GO, the most difficult
> board game around, 3 games to zero.

Yes, this was a spectacular achievement.

> The most impressive thing is it didn't win by brute force as IBM did
> with chess 20 years ago when its computer became the world chess
> champion

This is true to a degree. At its core AlphaGo still relies on tree
search, like DeepBlue did, though it uses Monte Carlo tree search
rather than DeepBlue's alpha-beta. The difference is that neural
networks guide the search: a policy network prunes the tree in breadth
by concentrating on promising moves, and a value network (together
with Monte Carlo rollouts) cuts it in depth by evaluating positions
without playing them out to the end. Brute force still plays a role,
but combining brute force with the learning power of deep
convolutional neural networks is what led to this breakthrough.
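To make the "network guiding the search" idea concrete, here is a toy
sketch (my own illustration, not DeepMind's code, and the numbers are
invented) of the PUCT selection rule that AlphaGo-style searches use: a
learned prior steers exploration toward moves the network likes, while
visit counts pull it back toward moves that have proven themselves.

```python
import math

# PUCT: Q exploits known value; the second term is an exploration bonus
# proportional to the network's prior and shrinking with visit count.
def puct_score(q, prior, parent_visits, visits, c_puct=1.5):
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

# Hypothetical statistics for three candidate moves at one node.
moves = {
    "A": dict(q=0.52, prior=0.60, visits=30),  # network's favorite, well explored
    "B": dict(q=0.55, prior=0.05, visits=40),  # good value, low prior
    "C": dict(q=0.10, prior=0.35, visits=2),   # high prior, barely explored
}
parent = sum(m["visits"] for m in moves.values())
best = max(moves, key=lambda k: puct_score(moves[k]["q"], moves[k]["prior"],
                                           parent, moves[k]["visits"]))
print(best)  # "C": the bonus sends the search to the under-explored, high-prior move
```

Moves the network assigns negligible prior probability are effectively
never visited, which is the "pruning in breadth" mentioned above.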

I am not surprised that this sort of hybrid approach is the most
fruitful. I would love to see more complex hybrid systems, possibly
also including stochastic algorithms that are good at creativity.

> Instead of human experts telling the computer how to win, AlphaGo
> played millions of games against itself, and its electronic neural
> network learned the best strategies in much the same way as a novice
> human GO player practices and learns ways to win and becomes a
> grandmaster. AlphaGo

It learned the best strategies to guide the search, just to be clear.
Training systems to play games by playing against themselves is also a
rather old idea, but it was very well implemented here. I don't mean
to downplay the achievement of AlphaGo, just to point out that it is
sitting on top of decades of research. I think it is important to
point this out, so that perhaps we don't overlook ideas that have not
yet produced such amazing results but might, if given a chance.

> 's learning algorithm may be better than humans at GO, but it is very
> specialized and can't compare to the skillful way a 3-year-old learns
> all sorts of things about how the world works. So is the human
> brain's general learning algorithm so astronomically complex that it
> will never be incorporated into a computer, or at least not for many
> millennia? I would argue that we know for a fact it can't be all that
> complex, and thus human-level and above general AI can't be very far
> away.
>
> We don't yet know what the brain's master learning algorithm is, but
> we can put upper limits on how complex that algorithm can be. In the
> entire human genome there are only 3 billion base pairs. There are 4
> bases, so each base can represent 2 bits; there are 8 bits per byte,
> so that comes out to 750 meg. Just 750 meg: about the same amount of
> information as an old CD could hold when they first came out 35 years
> ago! And all that 750 meg certainly can NOT be used just for the
> master learning algorithm; you've got to leave room for instructions
> on how to build a human body as well as the brain hardware. So the
> information MUST contain wiring directions such as "wire up a neuron
> this way and then repeat that procedure exactly the same way 917
> billion times". And the 750 meg isn't even efficiently coded; there
> is a ridiculous amount of redundancy in the human genome. So there is
> no way, absolutely no way, the master algorithm can be very complex.
> I'll bet it's less than a meg in size, possibly a lot less. If random
> mutation and natural selection can find it, then it's just a matter
> of time before we do too. And it won't take 500 million years to find
> it either.
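For what it's worth, your back-of-the-envelope arithmetic checks out:

```python
# 3 billion base pairs, 4 possible bases -> 2 bits each, 8 bits per byte.
base_pairs = 3_000_000_000
bits = base_pairs * 2
megabytes = bits / 8 / 1_000_000
print(megabytes)  # 750.0 -> roughly one early-1980s CD worth of data
```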

I am also interested in the question of why the master algorithm
hasn't been found yet, and I agree with your reasoning.

A few thoughts on what we might be doing wrong, or why it might be
harder than it looks:

1) 1 MB of algorithm might not look like much, especially if we
consider the size of boring programs such as Microsoft Office.
However, humans write algorithms in a highly modular fashion, and part
of what goes into taming complexity is making sure that the problem is
divided into chunks small enough for an average engineer to grok. The
problem with this is that it severely limits complexity by restricting
inter-dependencies. Modern software engineering practice might be the
very thing that prevents us from developing the master algorithm;

2) It could also be that a complex 1 MB kludge is beyond human
cognitive power. In that case, I would say that our best hope is to
evolve it somehow;

3) There could be a hardware problem. Modern computers are mostly
based on the von Neumann model. This is slowly starting to change,
notably with GPUs, but applied computer science is still mostly done
under von Neumann assumptions. The building blocks of the brain are
very slow compared to silicon, but their level of parallelism and
sheer complexity is astounding;

4) More generally, we could be using the wrong tools. We have many
intuitions about the master algorithm that come from diverse fields
such as neuroscience, psychology, and cognitive science. One
interesting fact is this: we know that if a child doesn't learn a
language before about age 12 (I think), then the child loses the
ability to learn languages forever. So the master algorithm is highly
dependent on external stimuli, especially at an early stage. This is
not surprising, but for me it really highlights that the master
algorithm knows how to build a thing that can modify itself as it
grows and learns more. Self-modification is more or less anathema in
modern programming languages. Lisp comes the closest, mainly because
it has the nice property of homoiconicity, meaning that a program in
the language is a valid data structure in the same language and can be
manipulated directly as data. By the Church-Turing thesis, of course,
anything that can be done with homoiconicity can be done in any other
Turing-complete language, but that doesn't mean it will be easy. Try
to write a natural language processing system directly in C and you
will go insane, because string manipulation in C is hard and
unnatural. The same could be true of self-modification and the tools
we currently have.
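To illustrate what I mean by treating code as data: here is a small
sketch in Python (a pale imitation of Lisp homoiconicity, but it shows
the idea) that parses a program into a syntax tree, rewrites the tree,
and runs the modified program. The function `f` and the rewrite rule
are just made-up examples.

```python
import ast

# Parse source code into a data structure we can inspect and transform.
tree = ast.parse("def f(x):\n    return x + 1")

class AddBecomesMul(ast.NodeTransformer):
    """Rewrite every '+' in the tree into '*'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(AddBecomesMul().visit(tree))

# Compile and execute the modified tree: f now computes x * 1.
ns = {}
exec(compile(new_tree, "<modified>", "exec"), ns)
print(ns["f"](10))  # 10
```

This is clunky compared to Lisp, where the program *is* the data
structure, but it demonstrates the self-modification loop: read code as
data, transform it, run it.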

Telmo.

>  John K Clark
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
