On 17/01/2015 17:14, Matt Mahoney via AGI wrote:
> On Sat, Jan 17, 2015 at 8:30 AM, Tim Tyler via AGI <[email protected]> wrote:
>> - http://edge.org/response-detail/26066
> ...
>> The idea that the growth of intelligent machines will be
>> inherently self-limiting, due to the lack of any new information
>> to learn once the machines become as smart as humanity, seems
>> stupid to me. There's a whole universe out there, brimming with
>> information. Machines can learn by trial and error - not just
>> via instructional learning from human mentors. Chess computers
>> didn't stop improving when they reached human-level competence.
>> Nor is it likely that other types of intelligent machine will do so.
>
> de Grey is arguing against the scenario where a recursively self
> improving AI in a box goes FOOM!

It seems like a straw man scenario. Has anyone seriously proposed it? Many
of today's most intelligent machines live on server farms, connected to a
distributed network of cameras, microphones and keyboards. They are not
isolated from the rest of the world: they are hyper-connected to it. That's
the reality. The machine in an isolated box does not seem very relevant.

> Most of what AI in general already knows comes from humans. AI cannot
> learn human knowledge faster than humans can communicate, about 5-10
> bits per second, but that is faster than anything else.

Machines can easily learn from all the images and videos freely available
on the internet. They can slurp that information up over as many T1 lines
as you have going into your data center. There's no 5-10 bits per second
limit.

> Once all human knowledge has been acquired, the rate will slow to the
> speed at which it can do experiments. For example, if the question is
> what interventions will help humans live longer, it will take
> decades to do experiments that yield one bit of information. It is
> why we know so little about this subject. No matter how smart AI
> is, it can't do any better.

Of course, that isn't really how knowledge acquisition works.
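Returning to the bandwidth point for a moment: the gap is easy to quantify. A
minimal back-of-the-envelope sketch, assuming Matt's upper figure of ~10 bits
per second for human communication and the standard T1 line rate of 1.544
Mbit/s:

```python
# Back-of-the-envelope: human knowledge transfer vs. one T1 line.
# Assumptions: ~10 bits/s human communication (upper end of the 5-10
# bits/s estimate quoted above); T1 line rate of 1.544 Mbit/s.
human_rate_bps = 10
t1_rate_bps = 1_544_000

ratio = t1_rate_bps / human_rate_bps
print(f"One T1 line moves bits ~{ratio:,.0f}x faster than a human mentor")
# prints: One T1 line moves bits ~154,400x faster than a human mentor
```

And that is a single T1 line; a data center has many, so the five-orders-of-
magnitude gap is if anything an underestimate.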
Those interested in making humans live longer do all kinds of experiments
on cell cultures, rodents and monkeys - as well as looking at humans. It is
not obvious that knowledge acquisition will slow down once the biosphere
has been slurped up by machines. The rate of data acquisition will clearly
continue to rise - as cameras and other sensors continue to proliferate. I
think any support for the idea of a slow-down would have to be based on
some conception of adaptively-useful knowledge - so that most of the
future's massive data streams would be disqualified.

Of course, this is far from de Grey's "once these machines become as smart
as humanity they won't have any new information to learn." That's silly and
indefensible.

> After surpassing human level, improvement will depend on experiments
> that can be done quickly. If the question is how to acquire the atoms
> and energy needed for computation, the learning rate will be one bit
> per generation, which will favor small, fast replicators with short
> life spans. [...]

IMO, we'll continue to see a range of organism lifespans. Different niches
favour different lifespans. A bacterium has one lifespan, a whale has a
longer one and a solar farmer has a much longer one.

Slow experiments will continue to be time-consuming to perform. Our
descendants will still have to build particle accelerators, radio
telescopes and the like. However, there's a lot to learn that can take
place inside a cubic centimeter. The world of protein folding fits inside
such a cube - for example. Our descendants will be able to explore such
realms using rapid experiments.
-- 
 __________
 |im |yler  http://timtyler.org/  [email protected]  Remove lock to reply.
-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
