On Thu, Jun 8, 2017 at 2:09 AM, John Clark <johnkcl...@gmail.com> wrote:
> On Wed, Jun 7, 2017  Telmo Menezes <te...@telmomenezes.com> wrote:
>
> Thanks Telmo, very interesting post.
>
>> It could also be that a complex kludge of 1 MB is beyond human
>> cognitive power. In this case, I would say that our hope is to evolve
>> it somehow;
>
>
> I agree that, being a product of random mutation and natural selection,
> biology almost certainly uses a kludgey mess of spaghetti code that would
> be hard to tease out and understand, but we should be able to find the
> underlying principle, or something better, on our own. Perhaps we could
> modify AlphaGo so that instead of looking for better strategies to win at
> Go it looks for better learning algorithms.

An idea I have for something like this is to combine neural networks
with genetic programming, such that genetic programs direct network
growth and weight updates (replacing both the learning algorithm and
the fixed topology). A simple approach would be to evolve populations
of such neural networks, but I imagine the computational costs would
be astronomical for anything remotely interesting. Instead, I imagine
something inspired by biological chimeras: there is a single big
network *within which* evolution takes place, and errors introduced by
unviable mutations are somewhat tolerated. The biological analogy here
is that the genetic programs play the role of the genetic instructions
contained in DNA.

>> > There could be a hardware problem.
>>
>> Modern computers are mostly
>> based on the Von Neumann model. This is starting to change slowly,
>> notably with GPUs, but applied computer science is mostly done on Von
>> Neumann assumptions.
>
>
> Yes, a lot of AI involves matrix multiplication and GPUs are
> especially good at that, and that's why Nvidia's stock has tripled in
> just the last year. In addition, both Google and Apple are coming out
> with dedicated AI chips. A little further down the technology pipeline
> are memristors that inherently act a lot like neurons. And then of
> course there are quantum computers.

Yes.

>> The building blocks of the brain are very slow
>> when compared to silicon, but its level of parallelization and sheer
>> complexity is astounding;
>
>
> I agree the brain is massively parallel and very slow, but as for complexity
> I think once you know how one neuron in a newborn infant's brain is wired up
> you'd have a pretty good idea how all of them are.

I agree up to a point. I think there are epigenetic factors to take
into account. I'm sure there is genetic code that is only expressed
once the brain reaches a certain size, when certain structures are
already in place and so on. I suspect that the wiring algorithm can
change in this way, and I think this is relevant to the master
algorithm.

>> One interesting
>> fact is this: we know that if a child doesn't learn a language before
>> about 12 (I think), then the child loses the ability to learn
>> languages forever.
>
>
> I've always wondered why we generally wait till high school to teach a
> second language; shouldn't we start in kindergarten?

Starting to learn a second language in kindergarten is becoming
relatively popular in Europe. Perhaps it is a harder sell in the US
because you already speak the dominant language, and do not have a lot
of linguistic diversity around you.

But yes, for people who can appreciate what a tremendous gift it is to
provide your kid with an extra language almost "for free", it seems
obvious.

>
>> Try to
>> write a natural language processing system directly in C and you will
>> go insane, because string manipulation in C is hard and unnatural.
>
>
> I wonder how long it will be before somebody writes a program that
> optimizes slow buggy C programs the same way AlphaGo optimized slow
> buggy game strategies. My guess is not long.

I would say that we already know how to do that. The limiting factor
at the moment is likely to be computational power. It looks like a
highly parallelizable problem, so your guess might very well be right.

Telmo.

>
>  John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
