A couple of things worth keeping in mind regarding issues in this
conversation:
Speed of machine learning: currently known ML techniques take time
exponential in the complexity of the problem, so that simple problems are
learned very quickly (in some cases faster than biological speed, as
befits a seven-order-of-magnitude advantage in clock speed), but more
complex problems would take longer than the age of the universe. ("Simple"
here doesn't just refer to the number of variables; current ML algorithms
can sometimes handle many variables if they are mostly independent, i.e. if
the search space is fairly smooth/non-deceptive.)
Of course the human brain, like any physically possible learning system,
must eventually hit an exponential curve, but in practice it stays
polynomial much further than any known algorithm. Thus when we talk about
the need to accelerate machine learning, it's not a case of needing to
speed it up by a fixed number of orders of magnitude, but of extending the
sort of problems it can handle before the cost becomes steeply exponential.
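To make the smooth-versus-deceptive distinction concrete, here is a toy
sketch of my own (not taken from any system discussed here): a naive hill
climber solves a separable 40-bit problem easily, because each variable
can be improved independently, but stalls on a "deceptive" variant whose
local gradient points away from the global optimum. The trap construction
and all names are my illustrative choices.

```python
# Toy illustration: the same hill climber on a smooth vs. a deceptive landscape.
import random

N = 40  # number of binary variables

def onemax(bits):
    # Smooth/separable: every bit contributes independently to fitness.
    return sum(bits)

def trap(bits):
    # Deceptive: fitness rewards fewer ones everywhere EXCEPT at the
    # all-ones global optimum, so local improvement leads away from it.
    ones = sum(bits)
    return N + 1 if ones == N else N - 1 - ones

def hill_climb(fitness, steps=20000, seed=0):
    # Repeatedly flip one random bit, keeping the flip if fitness
    # does not decrease.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        i = rng.randrange(N)
        y = x[:]
        y[i] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

print(hill_climb(onemax))  # reaches the global optimum, N
print(hill_climb(trap))    # stuck at the deceptive attractor (all zeros)
```

The point is not the particular algorithm but the shape of the landscape:
the identical search procedure is fast or hopeless depending on whether
local improvements correlate with the global solution.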
Learned versus hardwired: the boundary between these in the human brain is
not sharp. For example, conservation of mass, the fact that rearranging
stuff does not change its quantity, is something children learn at an early
age, but I will conjecture that our brains are hardwired to be biased in
the direction of learning this regularity of our universe as opposed to any
number of mathematically conceivable regularities of alternative universes.
I will also conjecture that the part of our DNA which we would construe as
built-in software, as opposed to recipes for hardware, works primarily by
supplying this sort of inductive bias rather than the straight hardwired
knowledge typical of insects and current artificial software. (I will
further conjecture that conscious observers tend to find themselves in
universes anthropically selected for rates of evolution that put pressure
on genetic information to limit itself to that kind of inductive bias,
though this digresses from engineering into philosophy.)
I will add the suggestion that the current state of cognitive science tells
us only that an intelligent system must have a lot of built-in knowledge in
some form, and must learn a lot more; exactly what gets supplied as
built-in knowledge, and in what form, is still a set of free parameters in
the design.
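As a toy sketch of the inductive-bias point (my own construction, not a
result from cognitive science): a learner restricted to a small,
bias-matching hypothesis family — here, monotone threshold rules —
identifies a target regularity from far fewer random examples than an
"unbiased" learner that entertains every boolean function on the same
inputs.

```python
# Toy illustration: inductive bias trades built-in assumptions for
# sample efficiency.
import itertools
import random

INPUTS = list(itertools.product([0, 1], repeat=4))  # 16 possible inputs

def target(x):
    # The regularity to be learned: "at least two ones".
    return sum(x) >= 2

# Biased space: six monotone threshold rules ("at least t ones").
BIASED = [(lambda x, t=t: sum(x) >= t) for t in range(6)]

# Unbiased space: all 2^16 boolean functions on 4 bits, each encoded
# as a 16-bit truth table.
UNBIASED = list(range(2 ** 16))

def tt_apply(h, x):
    # Evaluate a truth-table hypothesis h on input x.
    idx = int("".join(map(str, x)), 2)
    return bool((h >> idx) & 1)

def examples_to_identify(space, apply_h, seed=0):
    # Count random labeled examples needed until exactly one
    # consistent hypothesis remains.
    rng = random.Random(seed)
    remaining = list(space)
    n = 0
    while len(remaining) > 1:
        x = rng.choice(INPUTS)
        remaining = [h for h in remaining if apply_h(h, x) == target(x)]
        n += 1
    return n

print(examples_to_identify(BIASED, lambda h, x: h(x)))
print(examples_to_identify(UNBIASED, tt_apply))
```

The biased learner needs only a handful of examples, while the unbiased
one must effectively observe the entire input space; that gap is what
built-in bias buys, at the price of being unable to represent universes
that violate the bias.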
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription:
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com