In the theory of Universal Algorithmic Intelligence
<http://www.hutter1.net/ai/aixigentle.pdf> (aka Universal Artificial
Intelligence), there are two aspects:

Prediction and Decision

Decision is driven by valuation.

Prediction is driven by experience.

Decision theory is routinely taught in elite business schools like Harvard
as "Decision trees".
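
For concreteness (my own sketch, not something from the thread): a decision
tree is evaluated by backward induction, where chance nodes take the
probability-weighted average of their branches and decision nodes take the
best branch. The payoffs and probabilities below are invented for
illustration:

```python
def expected_value(node):
    """Recursively evaluate a decision tree by backward induction.

    node is one of:
      ("leaf", payoff)                        -- terminal payoff
      ("chance", [(probability, subtree)])    -- probability-weighted average
      ("decision", [(label, subtree)])        -- pick the best branch
    """
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "chance":
        return sum(p * expected_value(sub) for p, sub in node[1])
    if kind == "decision":
        return max(expected_value(sub) for _, sub in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical choice: launch a product (60% chance of +100, 40% chance
# of -50) or do nothing (payoff 0).
tree = ("decision", [
    ("launch", ("chance", [(0.6, ("leaf", 100)), (0.4, ("leaf", -50))])),
    ("pass",   ("leaf", 0)),
])
print(expected_value(tree))  # expected value is about 40, so launch beats passing
```

The same valuation machinery works whatever the payoffs represent, which is
the point made below about defining value however you like.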

Prediction is another matter, but one thing is certain:

Experience is limited by one's senses, and one's senses require an investment
of resources in order to operate.  If particular investments do not serve
one's successful reproduction, those investments will tend to die out
over evolutionary time.  This places a value system on what one _can_
experience, as well as providing a value system within which to act on
those experiences.

Where Ockham's Razor comes in is in prediction:  The shortest program that
can represent a recording of all of one's experiences is also the program
that can best predict future experiences conditioned on various actions.
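
To sketch that idea (a toy of my own, not Hutter's actual formalism, which
mixes over all programs): give each hypothesis-program a prior weight of
2^-length, discard the programs inconsistent with the experience so far, and
predict with the weighted mixture, so the shortest consistent program
dominates. The hypothesis class and bit-lengths below are invented for
illustration:

```python
def predict_next(history, hypotheses):
    """Predict the next bit by mixing hypotheses consistent with history.

    hypotheses: list of (length_in_bits, generator_fn), where
    generator_fn(n) returns the n-th bit that program would emit.
    Assumes at least one hypothesis is consistent with history.
    """
    weights = {}
    for length, gen in hypotheses:
        # Keep only programs whose output matches everything seen so far.
        if all(gen(i) == b for i, b in enumerate(history)):
            weights[gen] = 2.0 ** -length
    total = sum(weights.values())
    # Posterior probability that the next bit is 1.
    return sum(w for gen, w in weights.items()
               if gen(len(history)) == 1) / total

# Two made-up hypotheses: "all zeros" (short program) versus
# "zeros, except a one at step 4" (longer program).
all_zeros = (3, lambda n: 0)
late_one  = (8, lambda n: 1 if n == 4 else 0)
print(predict_next([0, 0, 0, 0], [all_zeros, late_one]))
# The shorter "all zeros" program dominates, so the probability of a 1 is small.
```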

So define value however you like -- be it Sudoku puzzle-solving or
shamanistic healing of psychic wounds or any weighted combination thereof.
The framework is the same.



On Thu, Jul 4, 2013 at 3:03 PM, Alain Sepeda <[email protected]> wrote:

> maybe I could add that intelligence can solve really new, unprepared
> problems, as life does.
>
> I agree that it looks narrow, but surely creativity is part of operational
> intelligence; creativity is used for problem solving (PS).
> Empathy (and EQ generally) is surely also a PS ability for a social animal: a
> key capacity for intelligence that is missing in some people who claim to be
> intelligent yet make astronomical mistakes.
> The ability to communicate is a key to PS in the real world. The ability to
> rally and inspire enthusiasm is key to PS, at both the individual and the
> community level.
> Playing is also a key ability for developing intelligence, building
> competences, and discovering the environment, and thus opportunities.
>
>
> Artistic talent is a symptom of creativity, and thus of intelligence and of
> empathy too; it is a game, and for some a way to earn a living...
> Useless things might be useful at the community level... games that are
> dangerous at the individual level might be worthwhile at the community level...
>
> Modern psychology and game theory show how important what we don't call
> cold intelligence is for solving hard, unexpected problems in a real,
> unexpected social world.
>
> Sudokus are simple to solve (branch and bound; just use a constraint
> programming system). They are closed games, like chess.
> Go is more open, but not by much... poker is more realistic, but still not
> like life...
>
> Seeing robots cooperate at real-world mining using a free-market logic
> of energy exchange with charity constraints would be the beginning of
> intelligence... It is an old experiment, but that is the beginning.
>
>
>
>
> 2013/7/4 Mark Gibbs <[email protected]>
>
>> Everything you're talking about equates intelligence with problem solving,
>> which is essentially a very narrow view of what intelligence involves, and
>> that's fine if problem solving is the only measure of intelligence you care
>> about. The problem with this perspective is that it excludes other aspects
>> that many people consider to be part of intelligence, such as artistic
>> creativity, musical ability, poetic and story-telling abilities, empathic
>> ability, and so forth.
>>
>> As "weak" as the Turing test is, it goes some of the way toward evaluating
>> something that formal problem-solving tests of intelligence don't address:
>> the quality of consciousness and understanding of hard-to-define things
>> such as emotions and attitudes, because it is based on human brains making
>> judgements about the qualities of what may or may not be a human brain.
>> Your example of testing a super-intelligent alien would tell us nothing
>> about its broader intelligence that we couldn't discern through dialog with
>> it (which would be, in reality, a Turing test) ... indeed, what if the alien
>> were horrible at problem solving but a genius at understanding how human
>> emotions work?
>>
>> Imagine an alien who couldn't solve a Sudoku puzzle or get a double-digit
>> score playing Tetris but in a single therapy session could deduce the
>> source of your emotional problems, explain them to you in such a way that
>> you could address them, and "cure" your depression, PTSD, or whatever your
>> issues are ... would that alien be intelligent?
>>
>> [mg]
>>
>>
>> On Thu, Jul 4, 2013 at 9:45 AM, Eric Walker <[email protected]> wrote:
>>
>>> On Thu, Jul 4, 2013 at 7:21 AM, James Bowery <[email protected]> wrote:
>>>
>>>> Well, since we're talking measurement and theory in the natural sciences,
>>>> one is operating on nature and one does have a model of nature which is
>>>> formal in the sense that any theory is formal.
>>>>
>>>
>>> I think we are largely in agreement here.  There are perhaps two or
>>> three different "formal" approaches that are possible -- there's the
>>> formality of a formal definition, i.e., "intelligence is A and B," where
>>> you can rigorously show that A and B are satisfied or not, in a
>>> mathematical sense.  And then there's the formality of a procedure -- "it's
>>> not clear exactly what intelligence is and whether computers can have it,
>>> but we think we can rigorously detect some examples of intelligence being
>>> used that could potentially overlap with what computers can do now or in
>>> the future.  For our experiment, we'll try to place bounds on the question
>>> by doing C and D, and whatever we find, it will be interesting
>>> and statistically sound."  And then there's the formality of a model -- "we
>>> don't know exactly what intelligence is or whether computers can have it,
>>> but we need to approach the problem systematically and relate the results
>>> to other experiments, so here are our general assumptions:  E and F."
>>>
>>> It would probably be difficult to keep these three dimensions apart in
>>> actual experiments.  But it seems to me that the first kind of formality
>>> could lead people into assuming the answer implicitly in the question;
>>> for example, "intelligence is the ability to solve a certain class of
>>> NP-hard problems together with <fill in three other abilities>."
>>>
>>> Eric
>>>
>>>
>>
>
