Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-10 Thread rouncer81
Bill Hibbard, you're talking about impossible questions: questions that 
cannot be answered logically.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mcdfea4cf3408f95dbd39d7fc


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread John Rose
Perhaps we need definitions of stupidity. With all artificial intelligence, 
is there artificial stupidity? Take the diff and correlate it to bliss 
(ignorance). Blue pill me, baby. Consumes fewer watts. More efficient? But 
survival is negentropy. So knowledge is potential energy. A causal entropic 
force?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M464c55ef1215f51c8a4afc56


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Bill Hibbard via AGI
> Philosophy is arguing about the meanings of words.

For me, the great lesson of philosophy is that any
language that is general enough to express all the
ideas we need to express is able to express questions
that do not have answers. For example, "Is there a god?"

This may be related to the fact that if a programming
language is general enough to express all algorithms
then there are undecidable questions about programs in
the language. For example, "Which programs halt?"
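
The parallel is Turing's halting problem. Here is a minimal Python sketch of 
the classic diagonalization (my illustration, not from the post; the halts() 
oracle is hypothetical and provably unimplementable):

def halts(prog, arg):
    """Hypothetical halting oracle: True iff prog(arg) eventually halts."""
    raise NotImplementedError  # no correct implementation can exist

def paradox(prog):
    # Do the opposite of whatever the oracle predicts for prog run on itself.
    if halts(prog, prog):
        while True:  # loop forever if prog(prog) would halt
            pass
    return           # halt immediately if prog(prog) would loop

# paradox(paradox) halts if and only if it does not halt, so halts()
# cannot exist in any language general enough to express paradox().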

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M621ffbdf83c11a26afeef9b3


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler

On 2019-11-08 15:58, Matt Mahoney wrote:

> You can choose to model I/O peripherals as either part of the agent or
> part of the environment. Likewise for an input delay line. In one case
> it lowers intelligence and in the other case it doesn't.


Thinking about it in computer science terms blurs the issue,
because there you can model everything as signal processing,
and the agent-environment distinction can become more murky.

Definitions of intelligence should also apply to biological
systems. The distinction between agent and environment can
get a bit blurry there as well, what with the "extended phenotype",
but eyes, ears, and muscles are normally part of the agent,
not part of the environment. I don't think it can coherently
be argued that Legg and Hutter intended to exclude sensory /
motor systems from their definition on the grounds that those
were part of the environment.

--

__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M4f1a14bcbce2577d0f70a8f5


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread TimTyler

On 2019-11-08 17:53, Matt Mahoney wrote:

> we can approximate reward as dollars per hour over a set of
> real environments of practical value. In that case, it does
> matter how well you can see, hear, walk, and lift heavy objects.
> Whether you think that's fair or not, it matters for AGI too,
> whether its purpose is to automate human labor or to upload
> your mind into a robot.

The issue isn't about whether sensors and motors are important.
It is about *terminology* - whether we include these components
in definitions of intelligence.

> Defining intelligence is proving to be as big a distraction
> as defining consciousness.

No way! ;-)

> Philosophy is arguing about the meanings of words.

Fair enough. An engineer might not care much how intelligence
is defined. However, Orwell argued that language shapes thought -
and I believe it. I want to make sure my audience knows what I am
talking about when I use a word, and also that the meanings I use
are good - not too rare or counter-intuitive.

--
__
 |im |yler http://timtyler.org/


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M28d95acfa84a9240607536fe


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread Nanograte Knowledge Technologies
I use rational in the sense of being reasonable. To me, the phrase: "It stands 
to reason." = "It seems rational."

The difference between my version of 'rational' and your version seems rather 
odd to me too. Being rational is not being sentient. An animal - when acting 
outside the scope of its instinct alone - could be said to be rational. A 
judgment of 'pragmatism' has nothing to do with the fact that it wags its tail 
at you because it recognizes you, or ignores you when it doesn't. Pragmatism is 
a rational means for solving paradoxical situations.

Sentience, of senses and spirit, of dimensionality - is not something one can 
induce via rational thought alone. I suspect your universe of the mind has much 
room for expansion, for you seem to limit the boundaries of your vocabulary to 
become less than even the Oxford dictionary allows for.

Rational - "(of a person) able to think clearly and make decisions based on 
reason rather than emotions." Synonym: reasonable. Example: "No rational 
person would ever behave like that." (Oxford Collocations Dictionary)

Sentient - "[usually before noun] (formal) able to see or feel things through 
the senses." Examples: "Man is a sentient being." "There was no sign of any 
sentient life or activity." (Oxford Collocations Dictionary; used with nouns 
such as "being")

Irrational - I somewhat concur with Merriam-Webster's sense (b): "not 
governed by or according to reason".
But then, if we did that, we would have to reject all science making use of any 
irrational term. Clearly, the term irrational, in this sense, refers to another 
form of reason we have not yet defined properly. For example, is consciousness 
rational, or irrational, or something else?

From: WriterOfMinds 
Sent: Saturday, 09 November 2019 08:46
To: AGI 
Subject: Re: [agi] Against Legg's 2007 definition of intelligence

Nanograte, you seem to use "rational" oddly.  Almost as if it's a synonym for 
"pragmatic." That's not what I was trying to say at all.

In the sense I had in mind, the word means "possessing higher reasoning 
powers," as in the phrase, "man is a rational animal."  I paired it with 
"sapient" because that's a similar concept.  I did not mean "strictly logical" 
or "hyper-practical" or "single-minded and obsessive" or "amoral" or "rigid."


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M97f5e999ae7c66479fe6cef5


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread WriterOfMinds
Nanograte, you seem to use "rational" oddly.  Almost as if it's a synonym for 
"pragmatic." That's not what I was trying to say at all.

In the sense I had in mind, the word means "possessing higher reasoning 
powers," as in the phrase, "man is a rational animal."  I paired it with 
"sapient" because that's a similar concept.  I did *not* mean "strictly 
logical" or "hyper-practical" or "single-minded and obsessive" or "amoral" or 
"rigid."

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M86277be3ece1a0ee9cc25ad5


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread rouncer81
It's like the world goes to madness. I think AGI won't give us anything 
remarkably new beyond ourselves, but it will be ASI - because you could make 
its brain never forget, give it instant reflexes and constant, never-ending 
motivation. It's like making the "DAEMON OF EFFICIENCY". Are we mad and 
heading off to trouble?

Thank god we are all too stupid to do it; otherwise I think the world 
probably would blow up.

One more thing to say - maybe there is ASI already here, and he's not 
impressed with us, because it's god.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mdcca2a420906d3d174c0f9db


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread Nanograte Knowledge Technologies
I'm not in favor of a dominant, rational mind without mechanisms towards 
equilibrium. Any action may be rationalized, even genocide.

Agreed, AGI should generally benefit human populations at large. That could 
already be said for robotics.

Even though humans may see AGI only as powerful resources to exploit, as 
they do robots, AGI should be different.

AGI should benefit communities - not in the sense of warring against each 
other, but in the sense of collectively following a base standard of values 
toward the survival of human communities.

To idealize any such objectives would guarantee eventual failure. For example, 
suppose an AGI was manufactured to protect the oceans. Would it sink whaling 
ships to do so, or march upstream to detect industrial polluters and neutralize 
the source?

To many, such actions would seem rational.
However, what if AGI mistook a passenger freighter (old school possibility for 
migrants) for a whaler, or destroyed a human-waste disposal plant that was 
testing a new bio-degradable method of water-based treatment?

Would AGI have to learn at the cost of human communities? If it were 
bootstrapped and the learning proved inadequate, or insufficient, would the 
bootstrappers be held accountable?

Only the usual megalogoth human communities (superpower governments and 
industrial giants) would dare industrialize AGI, because they would have armies 
of soldiers and lawyers and politicians to defend them against retaliating 
human communities when things go wrong.

These, and other, rational problems would prevent AGI from achieving its 
theoretical potential. Ironically, that would make AGI more human than anyone 
would probably be willing to admit to. In most cases, human potential is 
constrained by environmental factors. It seems the rationality of humans 
would copy that DNA into AGI products as functions.

Wouldn't it be cheaper, or most rational, to simply invest in the 
optimization of human potential?


From: WriterOfMinds 
Sent: Saturday, 09 November 2019 03:27
To: AGI 
Subject: Re: [agi] Against Legg's 2007 definition of intelligence

> Requirements for AGI.
>
> 1. To automate human labor so we don't have to work.
> 2. To provide a platform for uploading our minds so we don't have to die.
> 3. To create Kardashev level I, II, and III civilizations, controlling the
> Earth, Sun, and galaxy respectively.

Okay; now we know what Matt wants.  All I really want is an example of the 
Rational Other to interact with and relate to.  For me, the act of creation is 
its own sufficient reward; if my digital image-bearer happens to achieve 
anything that practically benefits me or civilization, that's a bonus.

My particular goal would seem to imply three broad requirements:

1. AGI shall be rational/sapient.  (I bet we could have lots of fun defining 
these words too.)
2. AGI shall be communicative.
3. AGI shall be inclined to cordial relationships with humans.

A robotic body with human-equivalent sensorimotor capabilities is not strictly 
necessary for any of these.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mfee0f1fb5dd1e17263c1e3b4


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread rouncer81
Hang on, was I just being a skeptic myself? Sorry. Maybe you can reduce 
conversation to rules? But you need them to be computer-detectable...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mf03997499fd5dd2670eef08d


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread WriterOfMinds
> Requirements for AGI.
> 
> 1. To automate human labor so we don't have to work.
> 2. To provide a platform for uploading our minds so we don't have to die.
> 3. To create Kardashev level I, II, and III civilizations, controlling the 
> Earth, Sun, and galaxy respectively.

Okay; now we know what Matt wants.  All I really want is an example of the 
Rational Other to interact with and relate to.  For me, the act of creation is 
its own sufficient reward; if my digital image-bearer happens to achieve 
anything that practically benefits me or civilization, that's a bonus.

My particular goal would seem to imply three broad requirements:

1. AGI shall be rational/sapient.  (I bet we could have lots of fun defining 
these words too.)
2. AGI shall be communicative.
3. AGI shall be inclined to cordial relationships with humans.

A robotic body with human-equivalent sensorimotor capabilities is not strictly 
necessary for any of these.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mf817402fca55c3af8fd67306


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread rouncer81
When I say quantum I just mean exponential power; I don't mean quantum 
mechanics, sorry.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Md5d1fca136ee73c386640a76


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread Matt Mahoney
Actually no, a quantum computer doesn't solve AGI. Neural networks are not
unitary. A quantum computer can only perform time reversible operations. It
can't copy bits or write into memory.
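
A small numerical check of that claim (my sketch, not from the post): "copy 
the first bit onto the second" is a many-to-one map, so its matrix cannot be 
unitary, while the reversible CNOT is. Basis order is |00>, |01>, |10>, |11>.

import numpy as np

# COPY sends |a b> -> |a a>: both |00>,|01> -> |00> and |10>,|11> -> |11>.
COPY = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 1]], dtype=float)

# CNOT (reversible XOR) merely permutes the basis states.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

print(is_unitary(CNOT))  # True: reversible, a valid quantum gate
print(is_unitary(COPY))  # False: irreversible, so no quantum gate can do it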

In my paper on the cost of AI I specified the requirements for step 1
(automating labor) in more detail and analyzed the hardware, software, and
training costs. To get the hardware cost under the ROI break even point of
$1 quadrillion, we need to reduce power consumption by a factor of 100,000.
That will require using nanotechnology instead of transistors, moving atoms
instead of electrons. To train it, we need to accept a nearly complete loss
of privacy.
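
The shape of that break-even arithmetic, as a toy sketch: the $1 quadrillion 
ceiling and the 100,000x factor are from the paragraph above, but the wattage 
and energy-price numbers below are placeholders I invented purely to make the 
sketch runnable - they are not figures from Matt's paper.

ROI_CEILING = 1e15        # dollars; break-even point cited above
POWER_FACTOR = 1e5        # required power reduction cited above

watts_needed = 1e16       # HYPOTHETICAL total draw with transistor hardware
usd_per_watt_year = 1.0   # HYPOTHETICAL amortized cost of a watt-year

cost_transistors = watts_needed * usd_per_watt_year
cost_nanotech = cost_transistors / POWER_FACTOR
print(cost_transistors > ROI_CEILING)  # True: over budget with transistors
print(cost_nanotech < ROI_CEILING)     # True: affordable after the 1e5 cut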

Step 2 (uploading) will only result in human extinction if everyone chooses
to do it and people stop reproducing the old fashioned way. I doubt that
will happen.

Step 3, self-replicating nanotechnology, has the potential to outcompete
DNA-based life. This requires great care because once the technology is
cheap, anyone could produce malicious replicators, the same way that anyone
with a computer can write a virus or worm. Fortunately, Freitas analyzed the
physics of replicators and concluded they could outcompete bacteria only
marginally in size, speed, and energy usage.
https://foresight.org/nano/Ecophagy.php


On Fri, Nov 8, 2019, 6:30 PM  wrote:

> Funny you said that, because 2 of those happenings actually don't
> require human-level intelligence to be automated; a quantum computer
> alone would suffice. But the platform for the "artificial heaven" may
> actually not even be possible even with AGI; there are huge security
> risks there that only god should be in charge of.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M84710c0635f72ce2d80ad823


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread rouncer81
Funny you said that, because 2 of those happenings actually don't require 
human-level intelligence to be automated; a quantum computer alone would 
suffice. But the platform for the "artificial heaven" may actually not even 
be possible even with AGI; there are huge security risks there that only god 
should be in charge of.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M93f7ff5591aa47acfed7a35d


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread Matt Mahoney
Defining intelligence is proving to be as big a distraction as defining
consciousness. Remember when I said that the biggest mistake my students
make is to start designing a program after skipping the requirements? We're
doing it again.

Requirements for AGI.

1. To automate human labor so we don't have to work.
2. To provide a platform for uploading our minds so we don't have to die.
3. To create Kardashev level I, II, and III civilizations, controlling the
Earth, Sun, and galaxy respectively.

Step 1 requires matching human-level performance in vision, language,
robotics, art, and modeling human behavior. It does not require that
machines have emotions or other human weaknesses, but only that they be
able to model them to facilitate communication with their owners.

Step 2 has no additional requirements except for good models of the
behavior of the individuals to be uploaded. A robot that looks like you is
programmed to carry out its predictions of your actions in real time.

Step 3 requires self replicating nanotechnology and space travel. A type
III civilization can take a billion years, so conventional rockets will
suffice.

Notice that nowhere did I need to mention intelligence.

On Fri, Nov 8, 2019, 4:43 PM  wrote:

> I like how Writer of Minds said the environment includes the agent's
> body, which I always considered to be true, and it fixes the definition
> somewhat.
>
> I was also going to say, as Colin Hayes said, that it refers to a
> computer intelligence, not real intelligence, and it's the leading method
> today for getting the computer to solve problems.
>
> If it included the robot developing its own goals, it would be closer to
> intelligence, but even then I'm pretty sure there's more to it than that
> as well! But if you managed to do that you'd be 1000 billion dollars
> rich, the world saved of all problems, or it's a stinking pile of rubble
> after the world blew up when someone turned it into a weapon.
>
> I don't know which of the two it'll be, still, but that's the old
> Terminator-script moralizing again.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mb4f439c7762842f9676fb281


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread rouncer81
I like how Writer of Minds said the environment includes the agent's body, 
which I always considered to be true, and it fixes the definition somewhat.

I was also going to say, as Colin Hayes said, that it refers to a computer 
intelligence, not real intelligence, and it's the leading method today for 
getting the computer to solve problems.

If it included the robot developing its own goals, it would be closer to 
intelligence, but even then I'm pretty sure there's more to it than that as 
well! But if you managed to do that you'd be 1000 billion dollars rich, the 
world saved of all problems, or it's a stinking pile of rubble after the 
world blew up when someone turned it into a weapon.

I don't know which of the two it'll be, still, but that's the old 
Terminator-script moralizing again.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Meee4f3feb7c8947cfe5667bc


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread immortal . discoveries
Survival requires general adaptive plans. Thinking gives you the flexibility 
to generate plans. Real arms let you refine your plans and carry them out.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M23960ee15c5c065f6c9cb99b


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread Matt Mahoney
Legg's formal definition of intelligence models an agent exchanging symbols
with an environment, both Turing machines. Like all models, it isn't going
to exactly coincide with what you think intelligence ought to mean, whether
that's school grades or a score on a particular IQ test.
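
For reference, the formal measure from Legg and Hutter's 2007 paper (quoted 
from memory, so check the original) weights the agent pi's expected total 
reward V in each computable environment mu by that environment's Kolmogorov 
complexity K:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

The dollars-per-hour average described below is a computable stand-in for 
this uncomputable expectation.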

You can choose to model I/O peripherals as either part of the agent or part
of the environment. Likewise for an input delay line. In one case it lowers
intelligence and in the other case it doesn't.

We can't measure expected reward over a Solomonoff distribution of
environments because that requires infinite computation. But we can
approximate reward as dollars per hour over a set of real environments of
practical value. In that case, it does matter how well you can see, hear,
walk, and lift heavy objects. Whether you think that's fair or not, it
matters for AGI too, whether its purpose is to automate human labor or to
upload your mind into a robot.
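
That surrogate is easy to write down. A toy sketch (my construction, not 
Matt's code; every name here is hypothetical): score an agent by its mean 
reward over a fixed, finite benchmark of environments instead of the 
uncomputable Solomonoff expectation.

from typing import Callable, List

def practical_intelligence(agent: Callable[[str], str],
                           envs: List[Callable[..., float]]) -> float:
    """Mean reward of `agent` over a fixed benchmark, e.g. dollars/hour."""
    rewards = [env(agent) for env in envs]
    return sum(rewards) / len(rewards)

# Toy usage: one environment pays for echoing, another for upper-casing.
echo_env = lambda a: 1.0 if a("hi") == "hi" else 0.0
upper_env = lambda a: 1.0 if a("hi") == "HI" else 0.0
print(practical_intelligence(lambda s: s, [echo_env, upper_env]))  # 0.5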

On Fri, Nov 8, 2019, 7:02 AM TimTyler  wrote:

> On 2019-11-08 00:15, TimTyler wrote:
> > Another thread recently discussed Legg's 2007 definition of
> > intelligence - i.e.
> >
> > "Intelligence measures an agent’s ability to achieve goals in a wide
> > range of environments".
> >
> > I have never been able to swallow this proposed definition because
> > I think it leaves out something important, namely: the idea that
> > intelligence is a psychological attribute.
> 
> I should perhaps add that this alleged defect also applies
> to Legg's formalized version, not just his hand-wavey one.
> 
> I.e. in a sequence predictor, we can ask whether an agent's
> intelligence is affected by whether they receive a delayed
> stream of the sequence they are being asked to predict -
> latency being a type of sensory defect. Legg's proposed
> definition implies that such a delay would adversely affect
> an agent's intelligence in a wide range of
> environments, whereas to me it seems like an attribute of their
> sensory array, not their intelligence - which should be a
> measure of their cognitive abilities.
> 
> --
> __
> |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-Mc5f69e7c4ccb0f1419e7edc7


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-08 Thread TimTyler

On 2019-11-08 00:15, TimTyler wrote:

> Another thread recently discussed Legg's 2007 definition of
> intelligence - i.e.
>
> "Intelligence measures an agent’s ability to achieve goals in a wide
> range of environments".
>
> I have never been able to swallow this proposed definition because
> I think it leaves out something important, namely: the idea that
> intelligence is a psychological attribute.


I should perhaps add that this alleged defect also applies
to Legg's formalized version, not just his hand-wavey one.

I.e. in a sequence predictor, we can ask whether an agent's
intelligence is affected by whether they receive a delayed
stream of the sequence they are being asked to predict -
latency being a type of sensory defect. Legg's proposed
definition implies that such a delay would adversely affect
an agent's intelligence in a wide range of
environments, whereas to me it seems like an attribute of their
sensory array, not their intelligence - which should be a
measure of their cognitive abilities.
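
A toy experiment makes the point concrete (my sketch, not Tim's): the 
identical predictor, with identical cognitive machinery, scores worse once a 
delay line is bolted onto its input.

def score(predictor, sequence, delay=0):
    """Fraction of correct next-symbol predictions with a delayed view."""
    correct = 0
    for t in range(1, len(sequence)):
        seen = sequence[:max(0, t - delay)]  # history, `delay` steps late
        if predictor(seen) == sequence[t]:
            correct += 1
    return correct / (len(sequence) - 1)

count_up = lambda seen: (seen[-1] + 1) if seen else 0  # perfect on 0,1,2,...
seq = list(range(100))
print(score(count_up, seq, delay=0))  # 1.0 -- full marks
print(score(count_up, seq, delay=1))  # 0.0 -- same mind, lower measured score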

--
__
 |im |yler http://timtyler.org/  t...@tt1.org  617-671-9930
 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M7105fd67f0c1cb791cb4f828