Bill Hibbard, you're talking about impossible questions.
Questions that cannot be answered logically.
Perhaps we need definitions of stupidity. With all artificial intelligence,
is there artificial stupidity? Take the diff and correlate it to bliss
(ignorance). Blue-pill me, baby. It consumes fewer watts. More efficient? But
survival is negentropy, so knowledge is potential energy. A causal entropic
force?
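(If that's a nod to Wissner-Gross and Freer's "causal entropic forces," the
formula they propose - stated here just for reference, up to notation - is

    F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X_0}

a force pushing a system up the gradient of the entropy of its accessible
future paths over horizon \tau; a toy formalization of intelligence as
keeping options open.)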
> Philosophy is arguing about the meanings of words.
For me, the great lesson of philosophy is that any
language that is general enough to express all the
ideas we need to express is able to express questions
that do not have answers. For example, "Is there a god?"
This may be related to the fact
On 2019-11-08 15:58, Matt Mahoney wrote:
You can choose to model I/O peripherals as either part of the agent or
part of the environment. Likewise for an input delay line. In one case
it lowers intelligence and in the other case it doesn't.
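A toy sketch of that boundary choice (illustrative only - the policy, the
2-step delay, and all names below are made up, not anything from Legg's
model): the interaction transcript is identical whether the delay line is
drawn inside the agent or inside the environment; only the bookkeeping of
who owns the lag changes.

    from collections import deque

    def delay(stream, k):
        """Yield each percept k steps late, padding the front with None."""
        buf = deque([None] * k)
        for x in stream:
            buf.append(x)
            yield buf.popleft()

    def policy(percept):
        """Trivial agent: echo the percept, act 0 while still blind."""
        return 0 if percept is None else percept

    percepts = [3, 1, 4, 1, 5]

    # Delay drawn inside the agent: the "slow agent" is policy composed
    # with delay, facing the raw environment.
    slow_agent = [policy(p) for p in delay(iter(percepts), 2)]

    # Delay drawn inside the environment: the same fast agent, facing a
    # "laggy environment" that emits percepts two steps late.
    laggy_env = delay(iter(percepts), 2)
    fast_agent = [policy(p) for p in laggy_env]

    assert slow_agent == fast_agent  # same history, different attribution
    print(slow_agent)  # [0, 0, 3, 1, 4]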
Thinking about it in computer science terms blurs t
On 2019-11-08 17:53, Matt Mahoney wrote:
> we can approximate reward as dollars per hour over a set of
> real environments of practical value. In that case, it does
> matter how well you can see, hear, walk, and lift heavy objects.
> Whether you think that's fair or not, it matters for AGI too
something else?
From: WriterOfMinds
Sent: Saturday, 09 November 2019 08:46
To: AGI
Subject: Re: [agi] Against Legg's 2007 definition of intelligence
Nanograte, you seem to use "rational" oddly. Almost as if it's a synonym for
"pragmatic." That's not what I was trying to say at all.
In the sense I had in mind, the word means "possessing higher reasoning
powers," as in the phrase, "man is a rational animal." I paired it with
"sapient" becau
It's like the world goes to madness. I think AGI won't give us anything
remarkably newer than ourselves, but ASI will - because you could make its
brain never forget, give it instant reflexes, give it constant, never-ending
motivation. It's like making the "DAEMON OF EFFICIENCY". Are we mad and headi
Subject: Re: [agi] Against Legg's 2007 definition of intelligence
Requirements for AGI.
1. To automate human labor so we don't have to work.
2. To provide a platform for uploading our minds so we don't have to die.
3. To create Kardashev level I, II, and III civilizations, controlling the
Earth, Sun, and galaxy respectively.
Hang on, was I just being a skeptic myself? Sorry. Maybe you can reduce
conversation to rules? But you need them to be computer-detectable...
WriterOfMinds, you are going for the hardest possible AI to make. If you just
want to play soccer or tennis against a robot, I wouldn't call it easy, but
it's at least possible.
The best chatbots are all irrational; you have to accept some form of
irrationality or it's impossible to do
> Requirements for AGI.
>
> 1. To automate human labor so we don't have to work.
> 2. To provide a platform for uploading our minds so we don't have to die.
> 3. To create Kardashev level I, II, and III civilizations, controlling the
> Earth, Sun, and galaxy respectively.
Okay; now we know what
When I say quantum I just mean exponential power; I don't mean quantum
mechanics, sorry.
Actually no, a quantum computer doesn't solve AGI. Neural networks are not
unitary. A quantum computer can only perform time reversible operations. It
can't copy bits or write into memory.
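For reference, that's the standard no-cloning argument. Suppose a unitary U
copied arbitrary states:

    U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle

For |\psi\rangle = a|0\rangle + b|1\rangle, linearity of U forces

    U((a|0\rangle + b|1\rangle) \otimes |0\rangle) = a|00\rangle + b|11\rangle

while copying would require

    (a|0\rangle + b|1\rangle)^{\otimes 2}
        = a^2|00\rangle + ab|01\rangle + ab|10\rangle + b^2|11\rangle

These agree only when ab = 0, i.e. only for basis states, so no unitary
copier of arbitrary quantum states exists.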
In my paper on the cost of AI I specified the requirements for step 1
(automating labor) in more detail and a
Matt's got a good point. It's all about what it does, what we need. Survival.
'Advances/progress' are just survival steps. Sure, the brains/AGI will be
needed to do the big stunts - the right data, attention, etc. - but you can
look at it more sanely, like Matt said.
Funny you said that, because two of those actually don't require human-level
intelligence to be automated; a quantum computer alone would suffice. But the
platform for the "artificial heaven" may actually not even be possible even
with AGI; there are huge security risks there only go
Defining intelligence is proving to be as big a distraction as defining
consciousness. Remember when I said that the biggest mistake my students
make is to start designing a program after skipping the requirements? We're
doing it again.
Requirements for AGI.
1. To automate human labor so we don't
I like how WriterOfMinds said the environment includes the agent's body,
which I always considered to be true, and it fixes the definition somewhat.
I was also going to say, as Colin Hayes said, that it refers to a computer
intelligence, not real intelligence, and it's the leading method today
Survival requires general adaptive plans. Thinking gives you the flexibility
to generate plans. Real arms let you refine your plans and carry them out.
Legg's formal definition of intelligence models an agent exchanging symbols
with an environment, both Turing machines. Like all models, it isn't going
to exactly coincide with what you think intelligence ought to mean, whether
that's school grades or a score on a particular IQ test.
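For reference, the formal measure in the 2007 Legg-Hutter paper is (up to
notation)

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu (so simpler environments get more weight), and
V_\mu^\pi is the expected total reward agent \pi earns in \mu. The "wide
range of environments" in the slogan is the sum; the "ability to achieve
goals" is V.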
You can choose
On 2019-11-08 00:15, Tim Tyler wrote:
Another thread recently discussed Legg's 2007 definition of
intelligence - i.e.
"Intelligence measures an agent’s ability to achieve goals in a wide
range of environments".
I have never been able to swallow this proposed definition because
I think it leaves out something important, namely: the
>
> "Intelligence measures an agent’s ability to achieve goals in a wide
> range of environments"
>
> No. This is a definition of automation. Zero intelligence.
Intelligence is a measure of an ability to achieve goals in environments
never before encountered.
Until the discourse gets this, th
Another thread recently discussed Legg's 2007 definition of intelligence
- i.e.
"Intelligence measures an agent’s ability to achieve goals in a wide
range of environments".
I have never been able to swallow this proposed definition because
I think it leaves out something important, namely: the