@ Matt

I read about that test. An exciting result!

I read another one on the three-body problem. Also very exciting news.

Ethics is an interesting subject. So is one's philosophical orientation. I 
regard myself as a version of an existential humanist. I want mankind to evolve 
and survive, with the help of machines. When we consider the recent European 
report on the dismal state of our holosphere, we may come to realize that we 
have run out of time for natural evolution to take its course. Perhaps 
evolution of quantum-leap proportions is what we are going to need. I suppose 
most human bodies would not be able to survive the shock of such an 
evolutionary step, but then again, would they be able to survive tsunamis, 
37-year droughts, or unthinkable solar radiation? 

Would it be ethical to ask for volunteers, as we already do, or to pay people 
to take the chance, as we already do? At least it would hold a promise of 
survival, which is more than we can say for the futures trade and an economy 
built on reselling resold debt. The pragmatist in me agrees with your concrete 
view, but he is a most agreeable fellow at the worst of times.

Intelligence, yes. It seems mankind's intelligence was not enough to preserve 
our essential habitat for our children's children. Being pragmatic, it follows 
that we will probably need some help from a special, intelligent machine that 
would not make the same mistakes we persist in. The first non-invasive 
machine-to-monkey-to-machine interface was demonstrated in a TED Talk 
recently. This experiment opens the door to machine learning like never 
before. 

A learning machine could learn rapidly from humans, even about the 
environment, from other machines, and, it seems, even from animals. It would 
not require years of experimentation to adapt, for it is not going to try to 
be human like us. After all, it has its own identity: it is a machine. Still, 
such an intelligent machine would have the advantage of treating explicit and 
tacit knowledge alike as information, and would be able to apply its 
processing power to the fullest. Why would intelligence stand in the way of 
such evolution, if it would ultimately save and spare many human lives?

Some of my humanist axioms are: First, do no harm. Preserve life. Exploit all 
survival tools to the fullest. What value is intelligence, if it cannot 
safeguard and promote the survival of our own species? I have my imperative 
for contributing to the realization of such a machine in any way possible, and 
the sooner the better for us. Now I have made my imperative and inclination 
clear, lest I be misunderstood in any way.

> Date: Tue, 17 Feb 2015 18:10:07 -0500
> Subject: Re: [agi] Couple thoughts
> From: [email protected]
> To: [email protected]
> 
> On Tue, Feb 17, 2015 at 4:18 PM, Aaron Hosford <[email protected]> wrote:
> >> We can prove that good solutions must have high algorithmic complexity.
> >
> > Could you give a sketch for such a proof?
> 
> Suppose you have a simple predictor with Kolmogorov complexity N. Then
> I can create a sequence with about the same complexity that your
> predictor can't predict. My program simulates your program and outputs
> the opposite of whatever you predict.
> 
> The long version by Legg: http://arxiv.org/abs/cs/0606070
> 
> > Why does the complexity have to
> > initially reside in the algorithm, rather than in the environment?
> 
> Because the environment can lie. I could give your predictor some code
> that would make it a better predictor, or worse. It has no way to
> know.
> 
> And once it has collected all human knowledge, learning will slow down
> because it will have to learn by experiment just like we do. Some
> experiments take a long time no matter how smart you are. For example,
> we know very little about what therapies will slow down aging. Any
> experiment takes decades to get an answer. This is the reason that the
> rate of increase of worldwide life expectancy peaked at 0.2 years per
> year in the 1990's (1970's in developed countries) and is declining.
> 
> And yes, we have machines that can go to the moon. They are stronger
> and faster. They are also more intelligent, depending on what test you
> use for intelligence. If the test is arithmetic speed and accuracy,
> then machines surpassed human level intelligence 100 years ago.
> 
> But again, intelligence is not the goal.
> 
> -- 
> -- Matt Mahoney, [email protected]
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/26941503-0abb15dc
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
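
To make the diagonalization argument quoted above concrete, here is a small 
sketch of my own (the majority predictor and all names are my illustration, 
not from Legg's paper): for any fixed, deterministic predictor, an adversary 
that simulates it and emits the opposite bit produces a sequence of about the 
same complexity on which the predictor is wrong every single time.

```python
def majority_predictor(history):
    """A simple fixed predictor: guess the majority bit seen so far."""
    return 1 if sum(history) * 2 >= len(history) else 0

def adversarial_sequence(predictor, length):
    """Simulate the predictor and output the opposite of each prediction."""
    seq = []
    for _ in range(length):
        seq.append(1 - predictor(seq))  # next bit negates the prediction
    return seq

seq = adversarial_sequence(majority_predictor, 20)
correct = sum(majority_predictor(seq[:i]) == seq[i] for i in range(len(seq)))
print(correct)  # prints 0: by construction, every prediction is wrong
```

The same construction works against any predictor you substitute in, which is 
the point of the argument: the adversary needs only slightly more complexity 
than the predictor it simulates.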