Hi,

Regarding a)

- Intelligence and rationality do not seem to be synonyms to me, though they
could be equivalent in practice under some assumptions. Indeed, we should
state clearly what we mean by each of these concepts and then derive
conclusions. For instance, "rationality" in the game-theoretic sense is not
a very smart strategy in general, as there are many games in which the Nash
equilibrium is not optimal in the Pareto sense.
For me, intelligence is very much related to learning representations and
control of the environment in order to optimize our happiness function. If
by rationality you mean that we do whatever makes us happier, then it would
be the same as behaving intelligently, and I agree that they could be
equivalent.
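The Prisoner's Dilemma is the standard illustration of that gap between Nash
equilibrium and Pareto optimality. A minimal sketch in Python (the payoff
values are the conventional textbook ones, not anything specific to this
discussion):

```python
# Prisoner's Dilemma payoffs (years in prison as negative utility; higher
# is better), indexed as payoffs[(row_action, col_action)] = (row, col).
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (-1, -1),
    (C, D): (-3, 0),
    (D, C): (0, -3),
    (D, D): (-2, -2),
}

def best_response(opponent_action, player):
    """Best action for `player` (0 = row, 1 = column) given the opponent's move."""
    if player == 0:
        return max([C, D], key=lambda a: payoffs[(a, opponent_action)][0])
    return max([C, D], key=lambda a: payoffs[(opponent_action, a)][1])

# (D, D) is the Nash equilibrium: each action is a best response to the other...
assert best_response(D, 0) == D and best_response(D, 1) == D
# ...yet (C, C) Pareto-dominates it: both players do strictly better there.
assert payoffs[(C, C)][0] > payoffs[(D, D)][0]
assert payoffs[(C, C)][1] > payoffs[(D, D)][1]
```

So the "rational" (mutually best-responding) outcome leaves both players worse
off than the cooperative one, which is why game-theoretic rationality and
intelligence can come apart.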

 - I like very much the theory of reincarnation and karma, and my
understanding of it is that we incarnate on the planet which helps us
learn the most. From that point of view, our beautiful Mother Earth is kind
of pursuing graduate studies in the Ivy League :-). Although the ecological
pressure on mankind on Earth is not that strong (at least not as strong as
in Dune <http://en.wikipedia.org/wiki/Dune_(novel)> by Frank Herbert),
there is still a lot of human stupidity and egoistic behavior that leads to
great sorrow.
So I suppose that there should be 'softer' planets, with very smart and
highly evolved people, as they do not have to learn from these sorrows, and
also other planets with 'undergrad' people who are not ready for these
top-grad studies.
Does it make sense to you?


Regarding b)

I think we do not understand our brains at all. Much more worthwhile than
all the millions of the Obama plan would be to sit down and meditate for
two hours every day, and for a few days every few weeks. Meditation makes
space in our mind, removes the patterns in which we waste our energy, and
helps us focus, be creative, and so on. In other words, it teaches us to
mold our mind, and I believe that the mind molds the brain.
I believe the problem is not in the hardware, but in whether the firmware
we run every day to make decisions is based on intelligent,
network-embracing awareness full of love or, in short, on stupid,
repetitive, egocentric ideas.


Finally, I do not think we will create AGI, properly speaking; rather, it
will be the evolutionary force of this Universe that creates AGI, though of
course it could use us as a tool.


Best!
Sergio


On Fri, Apr 12, 2013 at 1:45 PM, Mike Archbold <[email protected]> wrote:

> On 4/5/13, just camel <[email protected]> wrote:
> > Matt (et al),
> >
> > Do you think it is likely that stronger/different evolutionary pressures
> > on life on a distant planet (let's ignore different universes with
> > different laws of nature for now) could result in a species which
> >
> > a) had to be way more intelligent/rational than Homo sapiens in order to
> > become the/a dominant/technological species and thus allowing them to
> > arrive at their AGI equivalent way earlier/easier because of their
> > higher a priori intelligence?
> >
> > b) features a less complicated or more flexible "brain" that would be
> > more straight forward to augment/upgrade for them? Like hair that just
> > grows as you consume food ... more processing power in good times, less
> > processing power in bad times. Of course this would not work with our
> > own brain design as processing power and memory are somewhat interwoven.
> >
> > So this question really is about the universality of evolution and
> > whether some life forms might have huge advantages over Homo sapiens
> > when it comes to creating AGI/BCI/WEB technology (and I will not even
> > write about the implications on Fermi's paradox this time).
> >
> > -- jc
> >
>
> It's an interesting question and ties in with the whole idea that
> "human-level intelligence" is really anything like some kind of
> gold-standard of intelligence.  We kind of imply that is so, but only
> because we need to interact with it -- but that is conflating two
> separate issues.   One issue is the idea that there is an intelligence
> that pervades all reality, how do we define that? .... and the other
> issue is if automating some form of intelligence as close to being
> like ours is possible.
>
> >
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> > Modify Your Subscription:
> > https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
>
>


