Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mike Tintner
A tangential comment here. Looking at this and other related threads I can't 
help thinking: jeez, here are you guys still endlessly arguing about the 
simplest of syllogisms, seemingly unable to progress beyond them. (Don't you 
ever have that feeling?) My impression is that the fault lies with logic 
itself - as soon as you start to apply logic to the real world, even only 
tangentially with talk of "forward" and "backward" or "temporal" 
considerations, you fall into a quagmire of ambiguity, and no one is really 
sure what they are talking about. Even the simplest if p then q logical 
proposition is actually infinitely ambiguous. No? (Is there a Gödel's 
Theorem of logic?) 







Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mike Tintner
I'm not questioning logic's elegance, merely its relevance - the intention 
is at some point to apply it to the real world in your various systems, no? 
Yet there seems to be such a lot of argument and confusion about the most 
basic of terms, when you begin to do that. That elegance seems to come at a 
big price.


RL: Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the fault 
lies with logic itself - as soon as you start to apply logic to the real 
world, even only tangentially with talk of "forward" and "backward" or 
"temporal" considerations, you fall into a quagmire of ambiguity, and no 
one is really sure what they are talking about. Even the simplest if p 
then q logical proposition is actually infinitely ambiguous. No?  (Is 
there a Gödel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it works, and it seems a shame 
for someone to trample on the concept of forward and backward chaining 
when these are really quite clear and simple processes (at least 
conceptually).
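
For anyone who hasn't met the terms, here is a minimal, purely illustrative 
sketch of forward and backward chaining over a toy rule base - just the 
textbook idea, with invented facts and rules, not any particular AGI system's 
machinery:

# Toy rules of the form (set_of_premises, conclusion)
RULES = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]

def forward_chain(facts, rules):
    # Forward: keep firing rules whose premises are all known until nothing new appears.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    # Backward: a goal holds if it is a known fact, or if some rule concludes it
    # and all of that rule's premises can themselves be proven.
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain({"rain"}, RULES))               # {'rain', 'wet_ground', 'slippery'}
print(backward_chain("slippery", {"rain"}, RULES))  # True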


You are right that logic is as clear as mud outside the pristine 
conceptual palace within which it was conceived, but if you're gonna hang 
out inside the palace it is a bit of a shame to question its elegance...









Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Mike Tintner
Brad: By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer.  I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

In which case - (thanks BTW for a v. helpful post) - are we talking entirely 
here about narrow AI? Sorry if I've missed this, but has anyone been 
discussing how to provide a flexible, evolving set of rules for behaviour? 
That's the crux of AGI, isn't it? Something at least as flexible as a 
country's Constitution and  Body of Laws. What ideas are on offer here? 
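
To make the contrast concrete, here is a hypothetical sketch (invented rules 
and names, not anyone's actual system) of the difference between a frozen 
expert-system rule base and one that can grow at runtime:

class RuleBase:
    def __init__(self, rules, frozen=True):
        self.rules = list(rules)      # each rule: (set_of_conditions, action)
        self.frozen = frozen

    def add_rule(self, conditions, action):
        if self.frozen:
            raise RuntimeError("classic expert system: talk to the knowledge engineer")
        self.rules.append((set(conditions), action))   # "becomes more expert" at runtime

    def advise(self, facts):
        # Return the action of every rule whose conditions are all satisfied.
        return [action for conds, action in self.rules if conds <= set(facts)]

expert = RuleBase([({"engine_wont_start", "lights_dim"}, "check battery")], frozen=True)
learner = RuleBase([], frozen=False)
learner.add_rule({"engine_wont_start", "fuel_gauge_empty"}, "add fuel")   # learned on the job
print(expert.advise({"engine_wont_start", "lights_dim"}))
print(learner.advise({"engine_wont_start", "fuel_gauge_empty"}))

A real answer to the question would, of course, need the system to generate 
such new rules itself rather than have them typed in, which is the hard part.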







Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Mike Tintner
Perhaps like Bob, I'm not sure whether this isn't a leg-pull. But, to take it 
seriously, how do you propose to give your robot free will - especially 
considering that the vast majority of AI/AGI-ers & roboticists are still 
committed to an algorithmic paradigm which both excludes free will and denies 
its possibility?
  John LaMuth: Announcing the recently issued U.S. patent concerning ethical 
artificial intelligence titled: Inductive Inference Affective Language Analyzer 
Simulating AI. This innovative patent (# 6,587,846) introduces the newly 
proposed concept of the Ten Ethical Laws of Robotics: a system that radically 
expands upon previous ethical-robotic systems. As implied in its title, this 
patent represents the first AI system incorporating ethical/motivational terms: 
enabling a computer to reason and speak ethically, serving in roles specifying 
sound human judgement. These Ten Ethical Laws directly expand upon Isaac 
Asimov's Three Laws of Robotics, an earlier Science Fiction construct (from I, 
Robot) that aimed to rein in the potential conduct of the futuristic AI 
robot. Indeed, Asimov's first two laws state that (1) a robot must not harm a 
human (or through inaction allow a human to come to harm), and (2) a robot must 
obey human orders (unless conflicting with rule #1). Although this cursory 
system of safeguards proves intriguing in a Sci-Fi sense, it nevertheless 
remains simplistic in its dictates, leaving open the specific details for 
implementing such a system. The newly patented Ten Ethical Laws fortunately 
remedy such a shortcoming, representing a general overview of the enduring 
conflict pitting virtue against vice: the virtues of which are partially listed 
below: 

  Glory/Prudence   Honor/Justice 
  Providence/Faith Liberty/Hope
  Grace/Beauty  Free-will/Truth
  Tranquility/Ecstasy  Equality/Bliss

  Dignity/Temperance Integrity/Fortitude
  Civility/Charity  Austerity/Decency 
  Magnanim./Goodness  Equanimity/Wisdom
  Love/Joy   Peace/Harmony

  The Ten Ethical Laws are written in a positive style of formal mandate, 
focusing on the virtues to the necessary exclusion of the corresponding vices, 
as formally listed at:
  www.angelfire.com/rnb/fairhaven/ethical-laws.html

  The purely virtuous mode (by definition) is fully cognizant of the 
contrasting realm of the vices, without necessarily responding in kind. 
Furthermore, the corresponding hierarchy of the vices listed below contrasts 
point-for-point with the respective virtuous mode (the overall patent is 
actually composed of 320 individual terms).

  Infamy/Insurgency   Dishonor/Vengeance 
  Prodigal/Betrayal Slavery/Despair
  Wrath/UglinessTyranny/Hypocrisy
  Anger/Abomination  Prejudice/Perdition

  Foolishness/Gluttony   Caprice/Cowardice 
  Vulgarity/Avarice Cruelty/Antagonism 
  Oppression/Evil   Persecution/Cunning 
  Hatred/Iniquity Belligerence/Turpitude

  With such ethical safeguards firmly in place, the AI computer is formally 
prohibited from expressing the corresponding vices, allowing for a truly 
flawless simulation of virtue. Indeed, these Ten Ethical Robotic Laws hold the 
potential for parallel applications to a human sphere of influence. Although 
only a cursory outline of applications is possible at this juncture, a more 
detailed treatment is posted at:

   www.ethicalvalues.com 

  John E.  LaMuth  -  M. S.
  fax: 586-314-5960
  P.O. Box 105  Lucerne Valley, CA   92356
  www.emotionchip.net 
  http://www.ethicalvalues.com

  The Ten Ethical Laws of Robotics 

  (A brief excerpt from the patent specification)

  A further pressing issue necessarily remains; namely, in addition to the 
virtues and values, the vices are similarly represented in the matching 
procedure (for completeness sake). These vices are appropriate in a diagnostic 
sense, but are maladaptive should they ever be acted upon. Response 
restrictions are necessarily incorporated into both the hardware and 
programming, along the lines of Isaac Asimov's Laws of Robotics. Asimov's first 
two laws state that (1) a robot must not harm a human (or through inaction 
allow a human to come to harm), and (2) a robot must obey human orders (unless 
they conflict with rule #1). Fortunately, through the aid of the power pyramid 
definitions, a more systematic set of ethical guidelines is constructed; as 
represented in the
  Ten Ethical Laws of Robotics 

  ( I ) As personal authority, I will express my individualism within the 
guidelines of the four basic ego states (guilt, worry, nostalgia, and desire) 
to the exclusion of the corresponding vices (laziness, negligence, apathy, and 
indifference). 

  ( II ) As personal follower, I will behave pragmatically in accordance with 
the alter ego states (hero worship, blame, approval, and concern) at the 
expense of the corresponding vices (treachery, vindictiveness, spi

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Mike Tintner


BillK: I prefer Warren Ellis's angry, profane Three Laws of Robotics.

(linked from BoingBoing)





Actually, while I take Ellis' point as in

"1...what are you thinking? "Ooh, I must protect the bag of meat at all 
costs because I couldn't possibly plug in the charger all on my own."   Shut 
the  up...


the issue of how an agent, robotic or living, is to secure its energy 
supply is a huge, complicated and primary one, both for an individual and a 
society - and it does seem to be ignored in most theorising about AGIs and 
their implementations. Think of this little spot of bother called Iraq.







Re: "What does it do?" useful in AGI? Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-23 Thread Mike Tintner



Will:
Mike Archbold <[EMAIL PROTECTED]>:
It looks to me to be borrowed from Aristotle's ethics.  Back in my college
days, I was trying to explain my project and the professor kept
interrupting me to ask:  What does it do?  Tell me what it does.  I don't
understand what your system does.  What he wanted was input-function-output.
He didn't care about my fancy data structure or architecture goals, he
just wanted to know what it DID.



I have come across this a lot. And while it is a very useful heuristic
for sniffing out bad ideas that don't do anything I also think it is
harmful to certain other endeavours. Imagine this hypothetical
conversation between Turing  and someone else (please ignore all
historical inaccuracies).

Sceptic: Hey Turing, how is it going. Hmm, what are you working on at
the moment?
Turing: A general purpose computing machine.
Sceptic: I'm not really sure what you mean by computing. Can you give
me an example of something it does?
Turing: Well, you can use it to calculate differential equations.
Sceptic: So it is a calculator, we already have machines that can do that.
Turing: Well it can also be a chess player.
Sceptic: Wait, what? How can something be a chess player and a calculator?
Turing: Well it isn't both at the same time, but you can reconfigure
it to do one then the other.
Sceptic: If you can reconfigure something, that means it doesn't
intrinsically do one or the other. So what does the machine do itself?
Turing: Well, err, nothing.

I think the quest for general intelligence (if we are to keep any
meaning in the word general) will be hindered by trying to pin
down what candidate systems do, in the same way general computing
would be.

I think the requisite question in AGI to fill the gap formed by not
allowing this question, is, "How does it change?"


Will,

You're actually almost answering the [correct and proper] question: what 
does it do? But you basically end up, as with that sub problem, evading it.


What a General Intelligence does is basically simple. It generalizes 
creatively  - it connects different domains - it learns skills and ideas in 
one domain, and then uses them to learn skills and ideas in other domains. 
It learns how to play checkers, and then chess, and then war games, and then 
geometry.


A computer is in principle a general intelligence - a machine that can do 
all these things - like the brain. But in practice it has to be  programmed 
separately for each specialised skill and can only learn within a 
specialised domain. It has so far been unable to be truly general purpose - 
and think and learn across domains.


The core problem - what a general intelligence must DO therefore - is to 
generalize creatively - to connect different domains - chalk and cheese, 
storms and teacups, chess pieces and horses and tanks.  [I presume that is 
what you are getting at with: "How does it change?"]


That's your sub problem - the sub can't move. All the standard domain checks 
for non-movement -   battery failure, loose wire etc. - show nothing. The 
sub, if it's an AGI, must find the altogether new kind of reason in a new 
domain, that is preventing it moving. (Perhaps it was some mistyped but 
reasonable, or otherwise ambiguous, command. Perhaps it's some peculiar kind 
of external suction.)


What makes creative generalization so difficult (and 'creative') is that no 
domain follows rationally (i.e. logico-mathematically or strictly 
linguistically) from another. You cannot deduce chalk from cheese, or chess 
from checkers. And you cannot in fact deduce almost any branch of rational 
systems themselves from any other - Riemannian geometry, for example, does 
not follow logically or geometrically or statistically or via Bayes from 
Euclidean, any more than topology or fractals.


The FIRST thing AGI'ers should be discussing is how they propose to solve 
the what-does-it-do problem of creative generalization - or, at any rate, 
what are their thoughts and ideas so far.


You think they're being wise by universally avoiding this problem - *the* 
problem. I think they're just chicken.










Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Mike Tintner
The eternal flaw in all this, it seems to me, is that you are still doing 
logic which assumes that agents know what the premises refer to, and those 
premises can be taken for granted.


Real world thinking, which is vastly more important and extensive than the 
logical variety, is interested in what the premises refer to, and how to 
establish the truth of those premises, (as distinct from conclusions that 
can be drawn from them).


For example:

Mary says Clinton had sex with her.
Clinton says he did not have sex with her.

Who, and how, is an AGI to believe?

Or, based on your internet research:

10,000 economists say the US economy is in recession.
9,000 economists say the US economy is not in recession.

Who, and how, is your superAGI to believe?

Real world thinking could also be called scientific thinking. Don't you 
think that's somewhat more important than logic for an AGI?



YKY:  Here is an example of a problematic inference:


1.  Mary has cybersex with many different partners
2.  Cybersex is a kind of sex
3.  Therefore, Mary has many sex partners
4.  Having many sex partners -> high chance of getting STDs
5.  Therefore, Mary has a high chance of STDs

What's wrong with this argument?  It seems that a general rule is
involved in step 4, and that rule can be "refined" with some
qualifications (ie, it does not apply to all kinds of sex).  But the
question is, how can an AGI detect that an exception to a general rule
has occurred?

Or, do we need to explicitly state the exceptions to every rule?

Thanks for any comments!
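
One standard answer, for what it's worth, is to treat a rule like step 4 as a 
default that more specific rules can override, as in nonmonotonic/default 
reasoning. A minimal sketch with invented predicate names, deliberately silent 
on the harder question of how the system would learn the exception in the 
first place:

def std_risk(facts):
    # Exception (more specific) rule is checked first and defeats the default:
    if facts.get("many_sex_partners") and facts.get("kind_of_sex") == "cybersex":
        return "low"          # non-physical contact: the general rule does not apply
    # Default rule fires only if no exception matched:
    if facts.get("many_sex_partners"):
        return "high"
    return "unknown"

print(std_risk({"many_sex_partners": True, "kind_of_sex": "cybersex"}))  # low
print(std_risk({"many_sex_partners": True}))                             # high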







Re: [agi] a fuzzy reasoning problem.. P.S.

2008-07-28 Thread Mike Tintner
I didn't emphasize the first flaw in logic, (which is more relevant to your 
question, and why such questions will keep recurring and can never be 
*methodologically* sorted out) - the assumption that we know what the terms 
*refer to*. Example:


Mary says Clinton had sex with her.
Clinton says he wouldn't call that sex.

Who, and how, is an AGI to believe or agree with?

Economist A says the US economy is in recession
Economist B says it depends what you mean by recession.


YKY: Here is an example of a problematic inference:


1.  Mary has cybersex with many different partners
2.  Cybersex is a kind of sex
3.  Therefore, Mary has many sex partners
4.  Having many sex partners -> high chance of getting STDs
5.  Therefore, Mary has a high chance of STDs

What's wrong with this argument?  It seems that a general rule is
involved in step 4, and that rule can be "refined" with some
qualifications (ie, it does not apply to all kinds of sex).  But the
question is, how can an AGI detect that an exception to a general rule
has occurred?

Or, do we need to explicitly state the exceptions to every rule?

Thanks for any comments!
YKY




Re: [agi] a fuzzy reasoning problem.. P.S.

2008-07-28 Thread Mike Tintner

YKY/MT>>

Mary says Clinton had sex with her.
Clinton says he wouldn't call that sex.


LOL...

But your examples are still symbolic in nature.  I don't see why they
can't be reasoned via logic.

In the above example the concept "sex" may be a fuzzy concept.  So
certain forms of sex may be construed as "0.75 sex" or something like
that.
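
For what it's worth, a minimal sketch of what that graded-membership 
suggestion amounts to, with invented numbers and the degree carried through a 
rule by a fuzzy AND (min), so the conclusion is only as strong as its weakest 
premise:

sex_degree = {"intercourse": 1.0, "cybersex": 0.75, "phone_sex": 0.2}

def fuzzy_and(*degrees):
    return min(degrees)                      # one common choice of fuzzy AND

def std_risk_degree(kind, many_partners=1.0):
    # "Mary has many sex partners" inherits the degree of the kind of sex involved.
    return fuzzy_and(sex_degree.get(kind, 0.0), many_partners)

print(std_risk_degree("cybersex"))     # 0.75 - a weakened conclusion, not a firm one
print(std_risk_degree("intercourse"))  # 1.0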



YKY,

That you and others "don't see why they can't be reasoned via logic" is one 
reason why my interjections are [contrary to Jim] valid even here.


Why isn't science done via logic? Why don't physicists, chemists, 
biologists, psychologists and sociologists just use logic to find out about 
the world?  Do you see why? And bear in mind that scientists are only formal 
representatives of every human being - IOW we all reason like scientists as 
individuals, however crudely, if we want to find out the truth about events 
in the world - & what happened with Mary & Bill, or why the car broke down.


Try suggesting that any scientist just use logic - or follow the reasoning 
principles of your AGI. It would be laughable.


The reason is: all the symbols you use refer to real-world objects, and the 
only definitive way to find out their truth is by looking at the real 
objects, not just the symbols - "the evidence" - as science does.


There are then various secondhand ways - getting other people's 
opinions/reports, looking at scientific data etc. - but the only way to 
assess the reliability of those is by comparing their success with respect 
to other real-world objects.  You can't, as you guys seem to - (correct me) - 
quite arbitrarily - "programmer ex machina" - assign degrees of 
confidence/certainty to information - "0.75 sex".


What is needed here - for any true General Intelligence - is a whole new 
branch of metacognition to supplement logic, one that will set out the main 
principles by which we actually reason about the world most of the time. 
Logic is a v. limited form of reasoning and metacognition. It alone cannot 
and never will refer to reality. What Russell said of maths applies equally 
to logic (and he was even better than you guys at both):


Mathematics may be defined as the subject in which we never know what we are 
talking about, nor whether what we are saying is true.

Bertrand Russell

The new branch of metacognition will explain how we know that Russell's 
statement is, broadly, true. Logic certainly can't explain to us its own 
imperfections.








Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Mike Tintner


Pei:
Charles Hixson wrote:


There's nothing wrong with the "logical" argument.  What's wrong is that you
are presuming a purely declarative logic approach can work... which it can in
extremely simple situations, where you can specify all necessary facts.

My belief about this is that the proper solution is to have a model of the
world, and how interactions happen in it, separate from the logical
statements.  The logical statements are then seen as focusing techniques.
[ ... ]



Pei: The key word here is "model".  If you can reason with mental models,

then of course you can resolve a lot of paradoxes in logic.  This
boils down to:  how can you represent mental models?  And they seem to
boil down further to logical statements themselves.  In other words,
we can use logic to represent "rich" mental models.


Pei,

Can you identify a single metalogical dispute - about how to resolve 
paradoxes in logic, or, say, which form of logic to use for a given type of 
problem - that has been resolved by formally LOGICAL means? Can you give 
one actual example of what you have just asserted above - one such 
paradox-resolving mental model that really was logical?


My contention would be that metalogical reasoning depends on a totally 
different kind of reasoning to that of logic itself. And you cannot 
*logically* derive any new kind of logic - nonmonotonic, fuzzy etc. - from any 
previous kind. Nor can you derive any new branch of mathematics 
*mathematically* from any previous kind. The foundations of logic, maths and 
rational systems generally do not lie in themselves - which, if true, is 
rather important for General Intelligence.


But by all means disprove me.











Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Mike Tintner


Charles: Sensory data would be more
significant, but there's considerable evidence that even sensory data has a
hard time in overruling a strong model belief.

That's a really good point. Both individuals' and social groups' willingness 
to change their models in the light of the evidence has a great deal to do 
with factors like the availability of alternative models and the labour 
involved in revising models - and not just with the evidence. People 
accordingly cling to paradigms that defy huge amounts of evidence, or have 
little to no evidence - like the behaviourist black-box paradigm, or religious 
paradigms - rather than rethink them or be faced with mystery. They do this 
not just formally, intellectually, but also with their models of individuals - 
like a spouse refusing to accept considerable evidence that their partner 
has been unfaithful. 







[agi] And now...human categorisation

2008-07-30 Thread Mike Tintner
We've been discussing how humans recognize that they don't recognize 
objects/info. - "don't know" something.


How about how humans categorise in the first place? How do we decide - to 
use another recent thread [see below] - whether *cybersex* classifies as 
*sex* or not, or whether *foreplay* and *Clintonian sex* classify as *sex*?


Or whether *anal sex gets you pregnant*? And does that last example belong 
to *categorisation* or merely *reasoning*? And whether "The relationship 
between cybersex and sex is of a completely different character to the 
relationship between penguins and birds"?


How do we resolve/argue our disagreements about categorisation?

What I'm particularly interested in is whether anyone thinks these matters 
can be resolved by purely symbolic, logical reasoning and semantic networks 
[with symbolic/verbal definitions of what concepts like *sex* involve] or 
any other current computational methods.


My hypothesis is that categorisation depends heavily on the use of 
imagination - we have to imagine (albeit often unconsciously) the real 
things denoted by the concepts in order to "draw the line" as to what they 
do or do not include - visualise sex, for example, (and audiovisualise 
"reasoning") -  to settle what we think does or does not classify as sex, 
and whether *anal sex gets you pregnant*. A very great deal of the time 
there are no suitable symbolic/verbal definitions available.


["Dear Alice, Last night, my girlfriend and I had anal sex without a condom. 
She is a virgin. Is there a probability for her to get pregnant?.."]


And, of course, if anyone wants to give us what they consider the latest cog 
sci/AI positions here, please do.




***

Benjamin Johnston <[EMAIL PROTECTED]> wrote:



The relationship between cybersex and sex is of a completely different
character to the relationship between penguins and birds.


Can you define that difference in an abstract, general way?  I mean,
what is the *qualitative* difference that makes:
   "cybersex is a kind of sex"
different from:
   "penguin is a kind of bird"?

You may say:  cybersex and phone sex lacks property X that is common
to all other forms of sex.  But then, anal sex or sex with a condom do
not get a female pregnant, right?  So by a similar reasoning you may
also exclude anal sex or sex with a condom as sex.

It seems that you (perhaps subjectively) require "having physical
contact" as a defining characteristic of sex.  But I can imagine
someone not using that criterion in the definition of sex.

Also relevant here is Wittgenstein's idea of "family resemblance":
sometimes you may not be able to list all the defining properties of a
concept.

YKY




Re: [agi] How do we know we don't know?

2008-07-30 Thread Mike Tintner

Brad,

Ah, perhaps there has been a failure of communication - it sounded 
(rightly or wrongly) from your original post like your "things I don't 
know" list was being used DURING the process of perception/categorization, 
and so was key to producing the "I don't know this" feeling. That was hard 
to understand and accept. If you're just saying that AFTER our brain has 
failed to recognize something, it effectively (as you discuss) stores 
those failures on an "I don't know" list, that is unobjectionable.



James,

Someone ventured the *opinion* that keeping such a list of "things I don't 
know" was "nonsensical," but I have yet to see any evidence or 
well-reasoned argument backing that opinion.  So, it's just an opinion. 
One with which I, obviously, do not agree.


There are two "stages" of "not knowing."  The first is when the agent 
doesn't know it doesn't know something.  It's clueless.  This can be such 
a dangerous stage to be in that one can imagine the agent might be 
equipped with a "knee-jerk" type reaction, which evinces itself in a 
variety of ways.  One of those ways could be to promote this thing it 
didn't know it didn't know to the next stage of "not knowing"  by storing 
it (subconsciously, most likely) in a list of "things I know I don't 
know."  I use the term "list" generically.  I don't argue that the human 
brain maintains knowledge in list structures or that this would 
necessarily be the way this information is stored in an AGI agent).


I fail to see how saving this type of information in memory is any 
different from saving any other type of information.  It's a positive fact 
about the world as that world relates to the individual human (or AGI 
agent).  The first way having such a list might help is in optimizing 
memory search.  The next time the agent encounters a thing not known on 
this list, it won't have to perform an exhaustive search of things it 
knows to come to the "feeling of not knowing." It's right there on the 
(comparatively short) list of things it doesn't know (which would be 
searched first, of course).  In addition, if the agent's experience in the 
world results in repeated hits on a particular item in this list, this 
could be a factor in producing the desire to learn that is such a 
characteristic behavior of our species.  Once the thing is known, it is, 
of course, removed from the "not known" list.  If a thing on the list is 
not encountered again for a long period of time, it might just fall off 
the list. Both of these characteristics of such a list would work, 
subconsciously, to keep the list both small and relevant.
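
A rough sketch of the sort of structure described above - a cheap "known 
unknowns" cache consulted before any exhaustive memory search, with repeated 
hits flagging things worth learning and stale entries falling off. All names 
and thresholds here are invented for illustration:

import time

class KnownUnknowns:
    def __init__(self, max_age=3600):
        self.entries = {}                 # item -> (hit_count, last_seen)
        self.max_age = max_age

    def note_unknown(self, item):
        hits, _ = self.entries.get(item, (0, None))
        self.entries[item] = (hits + 1, time.time())

    def feels_unknown(self, item):
        # Cheap check that can short-circuit an exhaustive search of what we know.
        return item in self.entries

    def wants_to_learn(self, threshold=3):
        # Repeated encounters with the same unknown make it a learning candidate.
        self.prune()
        return [i for i, (hits, _) in self.entries.items() if hits >= threshold]

    def prune(self):
        # Items not encountered for a long time simply fall off the list.
        now = time.time()
        self.entries = {i: (h, t) for i, (h, t) in self.entries.items()
                        if now - t < self.max_age}

ku = KnownUnknowns()
for _ in range(3):
    ku.note_unknown("fomlepung")
print(ku.feels_unknown("fomlepung"), ku.wants_to_learn())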


Cheers,

Brad


James Ratcliff wrote:

Sure,
search is at the root of all processing, be it human or AI.

How we each go about the search, and how efficient we are at the task are 
different, and what exactly we are searching for, and exponential 
explosion.


But some type of search is done, whether we are consciously aware of our 
brains doing the search or not.


Given a bit of context information about the question, we should be able to 
use some heuristics to look at a smaller area of the knowledge bases in our 
brains, or in a computer's memory.


Having a list of "things we don't know" is nonsensical, as has been pointed 
out, when it comes to individual terms, but something like an aggregate 
estimate of knowledge known could be computed.


I myself know a little about baseball, say 10%, but baseball history 
and World Series statistics would be more like 0.1%.


James Ratcliff

--- On Tue, 7/29/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:

James,

So, you agree that some sort of search must take place before the "feeling 
of not knowing" presents itself?  Of course, "realizing we don't have a lot 
of information" results from some type of a search and not a separate 
process (at least you didn't posit any).

Thanks for your comments!
Cheers
Brad

James Ratcliff wrote:
> It is fairly simple at that point, we have enough context to have a very
> limited domain
>
> world series - baseball
> 1924
> answer is a team,
>
> so we can do a lookup in our database easily enough, or realize that we
> really don't have a lot of information about baseball in our mindset.
>
> And for the other one, it would just be a straight term match.
>
> James Ratcliff
>
> ___
> James Ratcliff - http://falazar.com
> Looking for something...
>
> --- On Mon, 7/28/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
> From: Brad Paulsen <[EMAIL PROTECTED]>
> Subject: Re: [agi] How do we know we don't know?
> To: agi@v2.listbox.com
> Date: Monday, July 28, 2008, 4:12 PM
>
> Jim Bromer wrote:
>> On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>>> All,
>>> What does fomlepung mean?
>>>
>>> If your immediate (m

Re: [agi] How do we know we don't know?

2008-07-31 Thread Mike Tintner


Vlad:

I think Hofstadter's exploration of jumbles (
http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
just recognize the word, you work on trying to connect it to what you
know, and if set of letters didn't correspond to any word, you give
up.


There's still more to word recognition though than this. How do we decide 
what is and isn't, may or may not be a word?  A neologism? What may or may 
not be words from:


cogrough
dirksilt
thangthing
artcop
coggourd
cowstock

or "fomlepaung" or whatever?







Re: [agi] How do we know we don't know?

2008-07-31 Thread Mike Tintner
Er no, I don't believe in killing people :)

I'm not quite sure what you're getting at. I was just trying to add 
another layer of complexity to the brain's immensely multilayered processing.  
Our processing of new words/word combinations shows that there is a creative 
aspect to this processing - it isn't just matching.  Some of this might be done 
by standard verbal associations/semantic networks - e.g. yes, IMO "artcop" 
could be a word for, say, an art critic - a cop "polices", and art can be seen as 
being policed - I may even have that last expression in memory.  But in other 
cases, the processing may have to be done by imaginative association/drawing - 
"dirksilt" could just conceivably be a word, if I imagine some dirk/dagger-like 
tool being used on silt (doesn't make much sense, but it's conceivable for my brain) 
- I doubt that such reasoning could be purely verbal.
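
To separate the two mechanisms: the purely verbal-association part is easy to 
sketch - split a candidate neologism into known words and check a lexicon (a 
toy lexicon is assumed below). The imaginative step that decides whether the 
combination actually makes sense is exactly what such a sketch leaves out:

# Toy lexicon, invented for illustration only.
LEXICON = {"art", "cop", "cog", "rough", "dirk", "silt", "thang", "thing",
           "gourd", "cow", "stock"}

def plausible_compound(candidate):
    # Try every split point; accept if both halves are known words.
    for i in range(1, len(candidate)):
        left, right = candidate[:i], candidate[i:]
        if left in LEXICON and right in LEXICON:
            return (left, right)
    return None

for w in ["cogrough", "dirksilt", "thangthing", "artcop", "coggourd",
          "cowstock", "fomlepaung"]:
    print(w, plausible_compound(w))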


Valentina: This is how I explain it: when we perceive a stimulus, a word in this 
case, it doesn't reach our brain as a single neuron firing or synapse, but as a 
set of already processed neuronal groups or sets of synapses, that each recall 
various other memories, concepts and neuronal groups. Let me clarify this. In 
the example you give, the word artcop might reach us as a set of stimuli: art, 
cop, medium-sized word, word that begins with a, and so on. All these activate 
various maps in our memory, and if something substantial is monitored at some 
point (going with Richard's theory of the monitor; I don't have other 
references for this, actually), we form a response.

  This is more obvious in the case of sight - where an image is first broken 
into various components that are separately elaborated: colours, motion, 
edges, shapes, etc. - and then further sent to the upper parts of the memory 
where they can be associated with higher-level concepts.

  If any of this is not clear let me know, instead of adding me to your 
kill-lists ;-P

  On 7/31/08, Mike Tintner <[EMAIL PROTECTED]> wrote: 

Vlad:

  I think Hofstadter's exploration of jumbles (
  http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
  just recognize the word, you work on trying to connect it to what you
  know, and if set of letters didn't correspond to any word, you give
  up.


There's still more to word recognition though than this. How do we decide 
what is and isn't, may or may not be a word?  A neologism? What may or may not 
be words from:

cogrough
dirksilt
thangthing
artcop
coggourd
cowstock

or "fomlepaung" or whatever? 







[agi] Search vs "Recall"

2008-08-01 Thread Mike Tintner
I doubt that the brain does much searching in the computational sense at all. 
Computer searches - i.e. laying out a set of options and systematically going 
through them - strike me as a brilliant artificial evolution from what the 
brain actually does.

My pretty uninformed guess is that the brain works primarily by "recall" 
(there doesn't seem to be an agreed term) - i.e. the brain is primarily an 
image-processing device; it processes incoming images as whole figures (even 
words & other symbols, as Damasio points out, have actually to be processed as 
images first). And the brain has special powers, which we have yet to emulate 
mechanically, to recall similar whole figures - a similar apple image, say, 
or a similar "a-p-p-l-e" word - with extremely limited search, or 
consideration of alternatives, almost immediately. (Perhaps neurons, or neural 
networks, have special powers to rapidly recognize previous reconfigurations of 
themselves.)

It strikes me that the prime example of this is movement. The brain doesn't, I 
suggest, go through searches in producing movements. When we want to play a 
"backhand" or a "forehand" or "throw a punch" or "kick", we more or less 
immediately recall a rough, holistic figure of that movement, (mainly 
kinaesthetic, partly visual), which is fluidly adaptable to the precise 
physical situation and relevant objects - "along these lines" so to speak. We 
don't search through lists of alternatives. Motor memories are important 
because they are probably, evolutionarily, (no?) about the first form of memory.

Who, if anyone, is arguing for anything like this idea of the brain having 
special powers of figurative recall?




Re: [agi] Search vs "Recall"

2008-08-01 Thread Mike Tintner
It's not boring - it's the absolute centre of AGI. Everything AGI wants to do, 
I would argue - produce analogies, metaphors, concepts, creative ideas, 
creative generalizations, along with visual object recognition - depends on 
"figurative thinking" - on thinking with whole, fluid figures (or maps/body 
maps) and not just with fragmented, "crisp", symbolic parts - on imagination 
and not just rationality. But it's a long argument, so I'll save it for now. 

Valentina:That's a really interesting point you just made.

Movement works in an inherently different way from concept elaboration, or 
recalling.

Most movements, particularly those that are part of the sympathetic system, do not even 
reach our conscious level of our brain - and are not elaborated by higher 
functions. If they are - as when you try to center a basket with a ball, you 
will be aware of that - therefore it seems that they are done automatically. 
That is partly true, in the sense that the lower parts of the brain elaborate 
them.

As for the figurative recall it's interesting that you suggested the brain 
works that way, because most of our stimuli are elaborated in terms of images, 
and so it is very tempting to think that the brain works on images. In a way it 
is true I think. But I prefer to call them maps.. in that sets of neurons.. 
maps of neurons.. interpret what we 'see'. There are tons of different types of 
stimuli and concepts, and their difference is not as obvious as one would 
think.. that is why there are ppl who 'see' sounds or 'hear' images... 
particularly deaf and blind ppl. Also keep in mind that of the incredible 
amount of info that reaches us through our senses, we only elaborate a small 
percentage. I bet that if you close your eyes now, you won't be able to repeat 
word by word, this email. But you can surely repeat its 'meaning' because that 
is what the brain extracts. Same goes for words, pictures, sounds.. 

Have I bored anyone enough yet? ;-)





It strikes me that the prime example of this is movement. The brain 
doesn't, I suggest, go through searches in producing movements. When we want to 
play a "backhand" or a "forehand" or "throw a punch" or "kick", we more or less 
immediately recall a rough, holistic figure of that movement, (mainly 
kinaesthetic, partly visual), which is fluidly adaptable to the precise 
physical situation and relevant objects - "along these lines" so to speak. We 
don't search through lists of alternatives. Motor memories are important 
because they are probably, evolutionarily, (no?) about the first form of memory.

Who, if anyone, is arguing for anything like this idea of the brain having 
special powers of figurative recall?






  -- 
  A true friend stabs you in the front. - O. Wilde

  Einstein once thought he was wrong; then he discovered he was wrong.

  For every complex problem, there is an answer which is short, simple and 
wrong. - H.L. Mencken







Re: [agi] Search vs "Recall".. P.S.

2008-08-01 Thread Mike Tintner
Oh.. it's just possible that Eliezer's thinking in this blog post linked by 
Vlad, is loosely compatible with my suggestions re the importance of figurative 
recall and figurative thinking - and how they are still beyond current 
computers. (I'd be interested if anyone can comment):
http://www.overcomingbias.com/2008/07/detached-lever.html

"to this day, it is still quite popular to try to program an AI with "semantic 
networks" that look something like this:
  (apple is-a fruit)
  (fruit is-a food)
  (fruit is-a plant)

You've seen apples, touched apples, picked them up and held them, bought them 
for money, cut them into slices, eaten the slices and tasted them.  Though we 
know a good deal about the first stages of visual processing, last time I 
checked, it wasn't precisely known how the temporal cortex stores and 
associates the generalized image of an apple - so that we can recognize a new 
apple from a different angle, or with many slight variations of shape and color 
and texture.  Your motor cortex and cerebellum store programs for using the 
apple.

You can pull the lever on another human's strongly similar version of all that 
complex machinery, by writing out "apple", five ASCII characters on a webpage.

But if that machinery isn't there - if you're writing "apple" inside a 
so-called AI's so-called knowledge base - then the text is just a lever.

This isn't to say that no mere machine of silicon can ever have the same 
internal machinery that humans do, for handling apples and a hundred thousand 
other concepts.  If mere machinery of carbon can do it, then I am reasonably 
confident that mere machinery of silicon can do it too. "
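
For concreteness, here is roughly what such an ungrounded semantic network 
amounts to in code - a minimal sketch, nothing more; note that nothing in it 
knows what an apple looks or tastes like, only which strings point at which 
other strings:

# Toy is-a network over bare symbols (invented for illustration).
IS_A = {"apple": {"fruit"}, "fruit": {"food", "plant"}}

def is_a(x, y):
    # Transitive is-a lookup: follow parent links until y is found or we run out.
    if y in IS_A.get(x, set()):
        return True
    return any(is_a(parent, y) for parent in IS_A.get(x, set()))

print(is_a("apple", "food"))   # True - but "apple" here is just five ASCII characters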





Re: [agi] Search vs "Recall".. P.S.

2008-08-01 Thread Mike Tintner

andi: If we can get the behavior we want from exhaustive
search, but it doesn't have some quick aspect of a feeling of
"recognition" before the content comes on line, I personally wouldn't
worry about it.

Maybe you can explain what kind of search could possibly enable you to 
recognize the body "shapes" in


http://www.youtube.com/watch?v=VRxI22zuLFs

Your capacity to recognize them v. strongly suggests that the brain does 
use, as I suggest, figurative thinking, relying on fluid, holistic figures. 
(And endless further examples can be produced.) I don't think search of a 
pre-existing or pre-definable set of options *will* work here or for any AGI 
(vs narrow AI) problem (somewhat as Valentina indicated) - where do you 
think search will work?







Re: [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Mike Tintner
Ed: I think human collective intelligence is going to be one of the most 
important technologies if mankind is to fare well in its transition through 
the worm-hole of the singularity to the trans-human future.  Although a 
life-long Republican, I have been very impressed by Barack Obama's very 
forward-looking policy statements on technology, the internet, and how the 
internet can be used to make government more efficient and more democratic.

The collective intelligence point is interesting & important (& yes a good part 
of Obama's defeat of Hillary is down to his understanding how to use the Net's 
collective power, and her not doing so).  Obviously the internet is massively 
advancing collective intelligence. But do you have or admire any proposals for 
the collective future?

My one comment about this & other intellectual net groups generally is that 
they haven't yet acquired a true creative culture. That, for me, would entail 
having special creative threads, which start with one of the major creative 
problems of the group, and where every contributor has to put forward an idea 
towards a solution and/or an improvement of other people's ideas. Proper 
brainstorming threads.




Re: [agi] Groundless reasoning

2008-08-04 Thread Mike Tintner


Harry: >> I have never bought this line of reasoning. It seems to me that 
meaning is a layered thing, and that you can do perfectly good reasoning at 
one (or two or three) levels in the layering, without having to go "all the 
way down." And if that layering turns out to be circular (as it is in a 
dictionary in the pure sense), that in no way invalidates the reasoning 
done. >>
My own AI work makes no attempt at grounding,


Vlad: It's too fuzzy an argument. ... These are all bad bugaboo philosophical 
words, and they have many different, often misguided, interpretations. 
You need to resolve this ambiguity in order for your argument to 
obtain specific meaning. For example, from your side of the argument, 
what is the "meaning" thing and why do you need it? What is your 
concept of the "grounding" thing that others are talking about and 
that you think is unnecessary?


This exchange (if you step outside it) provides a pretty good test of 
Harry's proposition. If you think you can do without grounding, Harry, you 
must explain how your system could reason as Vlad did in rejecting your 
argument as fuzzy.


1.Your proposition essentially: my AI system can reason successfully (or to 
a great extent) about meaning without being grounded in reality.


2.Vlad's rebuttal essentially : That's too fuzzy - what do you mean by 
"meaning" and "grounded."?


How would your system be able to say of any argument or proposition that 
"it's too fuzzy"? Bear in mind that Vlad, like all of us here, has read 
hundreds or thousands of statements about "meaning," "grounding" and this 
whole subject area, and probably has various interpretations of those words, 
but presumably won't find every statement he's read fuzzy/ambiguous.


Yes, this is a highly philosophical subject area - and hard to analyse - but 
bear in mind that any AGI that would hope to navigate the internet as so 
many AGI-ers dream of, would regularly have to deal with concepts such as 
"meaning", "argument", "grounds for" [an argument], "fuzzy", "ambiguous," 
"interpretation," "evidence,"  "define", "make sense of" etc.









Re: [agi] Groundless reasoning

2008-08-04 Thread Mike Tintner
Terren: I can't see how an AGI can be truly general if meaning is 
externally defined.
[*] Note that the embodiment does not have to be physical, it can be 
virtual as in Novamente's use of Second Life.


How will the virtual AGI distinguish between what is virtual and real, and 
whether any information in any medium presents a "realistic picture," "good 
likeness", is "true to life" or "a gross distortion", and whether any 
proposal will "really work" or whether it itself is "grounded" or a 
"fictional character"?







Re: [agi] Groundless reasoning

2008-08-05 Thread Mike Tintner

Brad: We don't need no stinkin' grounding.

Your intention, I take it, is partly humorous. You are self-consciously 
assuming the persona of an angry child/adolescent. How do I know that 
without being grounded in real world conversations? How can you understand 
the prosody of language generally without being grounded in conversations? 
How can I otherwise know that you do not literally mean that grounding 
"stinks" like a carcase? What language does not have prosody?


How am I able to proceed to the following analysis of your sentence without 
grounding? .."Brad's sentence reveals a fascinating example of the workings 
of the unconscious mind. He has assumed in one sentence the persona of a 
wilful child. In effect, his unconscious mind is commenting on his conscious 
position  : "I know that I am being wilful in demanding that AGI be 
conducted purely in language/symbols  - demanding like a child that the 
world conform to my wishes and convenience (because, frankly, I only know 
how to do AI that is language- and symbol-based, and having to learn new 
sign systems would be jolly inconvenient, and I'm too lazy, so there)".


There's a lot more to language than meets the eye - or could ever meet the 
eye of a non-grounded AGI.


P.S. I would suggest as a matter of practice here that anyone who wants to 
argue a position should ALWAYS PROVIDE AN EXAMPLE OR TWO - of say a sentence 
or even a single word that they think can be understood with or without 
grounding. (Sorry Bob M., I think that's worth shouting about). Argument 
without examples here should be regarded as shoddy, inferior intellectual 
practice.


P.P.S. A possible further example of the workings of the unconscious mind: 
is it possible that your sentence has an echo - *in your mind* - of Pink 
Floyd's Another Brick in the Wall, with "We don't need no education"? 







Re: [agi] a fuzzy reasoning problem

2008-08-05 Thread Mike Tintner
Jeez, there is NO concept that is not dependent on context. There is NO concept 
that is not infinitely fuzzy and open-ended in itself, period - which is the 
principal reason why language is and has to be grounded (although that needs 
demonstration).

1. "My response to your post is that you are playing chess with me, YKY"

2. "Make a treehouse in your soul, YKY"

3. "Chair can be a v. sensuous word in some languages." (Geddit? French?)
  YKY:> Categorization depends upon context.  This was pretty much decided by 
the late 1980s (look up Fuzzy Concepts).


  This is an important point so I don't want to miss it.  But I can't think of 
a very good example of context-dependence of concepts.

  Some books have these examples:

  1.  Chess is a sport that is a game (the book claims that people make this 
judgement).  But chess is not a sport.

  2.  Tree houses are in the category of dwellings that are not buildings.  But 
people also think tree houses are buildings.  (Again, this example seems 
somewhat awkward to me).

  3.  All chairs are furniture.  A seat in a car is a chair but people would 
not call a car seat furniture.  So, it seems to be a violation of transitivity.

  Can anyone give better examples of context-dependence?

  YKY






[agi] Groundless (AND fuzzy) reasoning - in one

2008-08-05 Thread Mike Tintner

  YKY:  

  MT:> Jeez, there is NO concept that is not dependent on context. There is NO 
concept that is not infinitely fuzzy and open-ended in itself, period - which 
is the principal reason why language is and has to be grounded (although that 
needs demonstration).

  YKY:My current approach is to use fuzzy rules to model these concepts.  In 
some cases it seems to work but in other cases it seems problematic...

  For example I can give a definition of the concept "chair":

  chair(X) :-
  X has leg #1,
  X has leg #2,
  X has leg #3,
  X has leg #4,
  X has a horizontal seat area,
  X has a vertical back area,
  leg #1 is connected to seat at position #1,
  etc,
  etc

  But what if a chair has one leg missing?  Using fuzzy logic (fuzzy AND), the 
missing leg will result in a fuzzy value close to 0, which is not quite right.

  Probabilistic logic is also inappropriate.  I know *every* time that a chair 
missing a leg is "somewhat" a chair -- there is no probability involved here.
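
  A minimal sketch of the fuzzy-AND complaint above, and of one common 
workaround - a weighted feature score - which keeps a chair with a missing leg 
"somewhat" a chair. The feature names and weights are invented for 
illustration, not a proposal for how categorisation actually works:

# Invented features and weights for the toy "chair" concept.
FEATURES = {"leg1": 1, "leg2": 1, "leg3": 1, "leg4": 1, "seat": 3, "back": 2}

def fuzzy_and_chair(observed):
    # min over all required features: one missing leg drives the whole value to 0
    return min(1.0 if f in observed else 0.0 for f in FEATURES)

def weighted_chair(observed):
    # graded membership: present features contribute in proportion to their weight
    total = sum(FEATURES.values())
    return sum(w for f, w in FEATURES.items() if f in observed) / total

three_legged = {"leg1", "leg2", "leg3", "seat", "back"}
print(fuzzy_and_chair(three_legged))   # 0.0 - "not a chair", which seems wrong
print(weighted_chair(three_legged))    # ~0.89 - "somewhat" a chair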

  REPLY:

  YKY,

  We can kill a whole flock with one stone here - both the infinite 
open-endedness of concepts AND, as a consequence, why any 
General-Intelligence-level reasoning *must* be grounded in sensory images and 
imagination. 

  You couldn't have picked a more archetypal concept. "Chair", from my indirect 
reading, is the concept Plato picked to illustrate his idea that eternal forms 
must underlie concepts and words and  the objects  to which they refer.  He 
suffered from the illusion that all AI/AGI-ers suffer from - & that literate 
culture -  which dates from the alphabetic Greeks to approx. 2008, now that 
multimediate culture is replacing it - also suffers from. Namely that words and 
other symbols refer to real structures or forms of objects.  We can look at 
millions of different chairs, and yet instantly recognize that they all fit the 
word "chair"  Therefore there is a) an "essence of chair" and b) the word 
"c-ha-i-r" somehow captures that essence. and c) that essence can be defined 
with more words.  ( I doubt that this illusion would have been possible in 
pre-alphabetic culture, when words were rendered in more or less *pictographic* 
rather than alphabetic form and therefore did not have a standardised, uniform 
form).

  AGI-ers certainly believe that words - and those essences of objects - can be 
successfully defined with other words, and therefore that purely verbal/ 
symbolic, or ungrounded reasoning is possible. Of course, they are all aware, 
YKY and esp. Ben included, that this may be a very complex business. In Ben's 
case, that it may take a massive CYC-scale operation to fully define any word. 
But it can be done. Oh yes, it can be done. He has no doubt of that.

  Well, let's see. ("See" being the operative word.) Let's look at some chairs. 
You will note in the following two chair picture sets - which BTW I consider (& 
please disagree) an awesome a) set of pictures, b) set of examples of human 
creativity, and c) set of examples of the mind's powers of categorization - that 
you can recognize almost every example as a *chair* immediately, although you 
will probably question a few.

  But I defy you to define any single attribute of "chair" that they all share, 
(they certainly don't all have "legs").

  1. CHAIR SLIDES
  http://www.mediafire.com/?mwm5ivjmmcd

  2. SET OF CHAIRS
  http://www.mediafire.com/imageview.php?quickkey=vmj2jkptlcn&thumb=4

  I further defy you to define any attribute of chair, period, including "seat" 
or "something to sit on" that an inventor has not already, or could not, 
circumvent -  and still produce a recognizable "chair."

  By extension, you will not be able to definitively define any concept, period 
- "table," "cow," "human" - let alone prepositions like "in" (Ben's word in 
his essay), "through", or "over" - let alone mildly to massively complex 
processes like "push," "handle," "conversation," "sex", "evolution." Nor will 
you be able to definitively define any *individual* - "Ben Goertzel," "Pei 
Wang," "Madonna."

  All these concepts can be defined in infinitely open-ended ways because the 
classes of object, both artificial and natural, that they refer to are 
themselves infinitely *open-form* and, usually, evolving - constrained by some 
parameters, perhaps, but not limited.

  How then did you come to form the concept of "chair", "table," "cat", "dog" 
etc. or "Ben G" from these open sets of forms? Not verbally or symbolically. 
You did it the same way your evolutionary parents did - the way all those apes, 
bears, snakes, birds etc recognized each other - who despite having no words 
have plenty of general intelligence. You did it by visual and other image 
processing. Processes which do not and cannot result in a single coherent image 
or template, but result rather in a *flexible set* of images. What is your 
imagistic concept of "amoeba"? There is not and cannot be a single one for such 
an open-form, continually changing creature.

Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-05 Thread Mike Tintner

Abram: There is one common feature to all chairs: they are for the purpose of 
sitting on. I think it is important that this is *not* a visual characteristic. 
There are several objections that you could raise, but I think that all of them 
will follow from the fuzziness of language, not the fuzziness of the actual 
concepts.

Your bottom is "for the purpose of sitting on". How will your set of verbal 
definitions be able to tell the difference between a "bottom" and a "chair"? 
How will it know that if "Abram sits on a table", it isn't also a chair? 
(And how will it know that, actually, it *could* be a chair?)


And if "John hit Jack with a chair", will your set of verbal definitions 
not exclude this as truthful, if it has nothing about a chair being "for the 
purpose of hitting people"?


Not only can a chair, like any other concept of an object , take an infinity 
of forms, but it can be used for an infinity of functions and purposes. 
Here's S. Kauffman on the purposes of screwdrivers [or chairs] -
"Do we think we can prestate all possible tasks in all possible environments 
and problem situations such that we can construct a bounded frame for 
screwdrivers? Do we think we could write an algorithm, an effective 
procedure, to generate a possibly infinite list of all possible uses ... 
some of which do not yet exist? I don't think we could get started."


Out of interest, is there one single domain, one area however small and 
bounded, like, say, understanding sentences about "boxes" or "geometrical 
objects", where ungrounded, purely symbolic reasoning has ever worked/ "got 
started" at general intelligence level - i.e. been able to understand all 
the permutations of  a limited set of words?  Just one. 



Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mike Tintner

Charles: When you understand all the consequences of an act, then
you don't have free will.

Just so. And the number of decisions/actions that you take where you 
understand all the consequences - all the rewards, risks, costs, and 
opportunity costs of not just the actions, but of how and how long you think 
and decide about them - is...? Just one? (It's OK. Don't hurry. I can wait.)







Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-08 Thread Mike Tintner
Valentina: My point is that our brain *combines* visual or other stimuli with a 
bank of *non-*visual data in order to extract relevant information. 

This is a v. important point. There is no such thing as *single sense 
cognition*. Cognition is actually *common sense*/*multisensory*. Michael Tye 
is v. big on this. You cannot just look at/see something. You are 
simultaneously hearing/ smelling/ kinaesthetically aware of its distance from 
you and relation to you, etc. You cannot separate one sense from the rest - 
even though, intellectually, we have the illusion that we can.

That illusion is partly the price of using language, which fragments into 
pieces what is actually a continuous common sense, integrated response to the 
world.




[agi] The Necessity of Embodiment

2008-08-08 Thread Mike Tintner

Bob: As a roboticist I can say that a physical body resembling that of a
human isn't really all that important.  You can build the most
sophisticated humanoid possible, but the problems still boil down to
how such a machine should be intelligently directed by its software.

What embodiment does provide are *instruments of causation* and closed
loop control.  The muscles or actuators cause events to occur, and
sensors then observe the results.  Both actuation and sensing are
subject to a good deal of uncertainty, so an embodied system needs to
be able to cope with this adequately, at least maintaining some kind
of homeostatic regime.  Note that "actuator" and "sensor" could be
broadly interpreted, and might not necessarily operate within a
physical domain.

The main problem with non-embodied systems from the past is that they
tended to be open loop (non reflective) and often assumed crisp logic.

Certainly from a marketing perspective - if you're trying to promote a
particular line of research - humanoid-like embodiment certainly helps
people to identify with what's going on.  Also if you're trying to
understand human cognition by attempting to reproduce results from
developmental psychology a humanoid form may also be highly desirable.



Bob,

I think you are v. seriously wrong - and what's more, I suspect, robotically 
as well as humanly wrong. You are, in a sense, missing  literally "the whole 
point."


What mirror neurons are showing is that our ability to understand humans - 
as say portrayed in The Dancers :


http://www.csudh.edu/dearhabermas/matisse_dance_moma.jpg

comes from our capacity to simulate them with our whole body-and-brain 
all-at-once. Note that our brain does not just simulate their particular 
movement at the given point in time on that canvas - it simulates and 
understands their *manner* of movement - and you can get up and dance like 
them, and continue their dance, and produce/predict *further* movements that 
will be a reasonable likeness of how those dancers might dance - all from 
that one captured pose.


Our ability to understand animals and how they will move and emote and 
generally respond similarly comes from our ability to simulate them with our 
whole body-and-brain all at once - hence it is that we can go still further 
and liken humans to almost every animal under the sun - "he's a 
snake/lizard/angry bear/slug/busy bee" etc. etc.


Not only do we understand animals but also inanimate matter and its 
movements or non-movements with our whole body. Hence we see a book as 
"lying" on the table, and a wardrobe as "standing" in a room. This capacity 
is often valuable for inventors, who use it to imagine, for example, how 
liquids will flow through a machine, or scientists like Einstein who 
imagined himself riding a beam of light, or Kekule who imagined the atoms of 
a benzene molecule coiling like a snake.


We can only understand the entire world and how it behaves by embodying it 
within ourselves... or embodying ourselves within it.


This capacity shows that our self is a whole-brain-and-body unit. If I ask 
you to "change your self" - and please try this mentally - to simulate/ 
imagine yourself walking as, say, a flaming diva... John Wayne... John 
Travolta... Madonna... you should find that you will immediately/instinctively 
start to do this with your whole body and brain at once. As one integral unit.


Now my v. garbled understanding (& please comment) is that those Carnegie 
Mellon starfish robots show that such an integrated whole self is both 
possible - and perhaps vital - for robots too.  You need a whole-body-self 
not just to understand/embody the outside world and predict its movements, 
but to understand your inner body/world and how it's "holding up" and "how 
together" or "falling apart" it is - and whether you will/won't be able to 
execute different movements and think thoughts. You see, I hope, why I say 
you are missing the "whole" point.







[agi] The Necessity of Embodiment..P.S.

2008-08-08 Thread Mike Tintner
The other thing about an embodied perspective is that I have a hunch it may 
be necessary to understand even inanimate matter from a scientific and not 
just a personal perspective. (I mention this in case there are any 
physicists/chemists around who care to comment). IOW we won't understand how 
matter coheres in its myriad, diverse forms if we don't understand water, 
gases etc as *integral wholes* and not just assemblies of pieces.


The great deficiency of our mechanistic worldview, in its current simplistic 
form, is that machines are artificial assemblies of parts, and so science 
tends to view all things similarly. But actually natural things and 
organisms are integral wholes, "naturally assembled", and in that way 
fundamentally different.







Re: [agi] The Necessity of Embodiment

2008-08-08 Thread Mike Tintner

  Ben: I agree w/ Motters ... mirroring is an abstract dynamical process, not 
a specific biological mechanism ... and is not even specifically tied to 
embodiment, although the two do work naturally together...

  Abstracted... from what? And by what? (Think about it).

  Or do you think there really are pure abstract entities and processes of 
abstraction? Wholly Ghosts? Who/what/where?






Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Mike Tintner

Brad: Sigh.  Your point of view is heavily biased by the unspoken assumption 
that AGI must be Turing-indistinguishable from humans.  That it must be AGHI.

Brad,

Literally: "what on earth are you talking about?" What other than human 
intelligence - symbol & sign-using intelligence - is there? (& a few Washoe 
chimps - & computer extensions of the brain).


I & you can *vaguely* understand the *concept* of an alternative 
*non-human* or *alien intelligence*, but it's a totally ungrounded concept, 
like *flying pigs.* You can't point to a single operational example of what 
you mean, living or mechanical. Nor, I suggest, do you or anyone else have 
an example of what you mean - nobody's thought it through. It comes down to 
the same as "we don't need no stinking grounding." It's not an affirmation 
of something solid, merely a rejection of the hard work of having to 
understand how human intelligence - and especially general intelligence - 
works.


In general, I suggest - in fact, I'm quite sure - human intelligence should 
be taken as *ideal*. Not perfect, not unimprovable-on. But v.g. and usually 
the best available, however flawed it may at first appear. The product of 
the odd billion years of dealing with certain problems - and v. 
sophisticated. Before you can do better, you have to understand how it 
works. Especially you have to understand how GI works - how the human brain 
creates concepts, metaphors, creative ideas, creative generalizations. And 
at the moment, you - the AGI community - don't understand a single aspect of 
GI, or have a single idea/proposal that even *addresses* any aspect. (Ben's 
promised a paper with an idea, at last, but it's in the post).


[I'm v. happy & interested to consider animal/robotic intelligence & forms 
of culture - but I take it that your concepts of non-human intelligence all 
involve symbol use]. 







Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-09 Thread Mike Tintner
Brad: Language is how we "record analog human experience in digitized 
format."


It's not (natural/direct) recording. It's literally fragmenting a continuous 
integrated set of images into *artificial* pieces. Artificial conceptual 
bricks.


And so are all the languages of rationality - logic, arithmetic/algebra.

"2 + 2= 4"
"If a then b"
"The dog bit the man on the face."

"Dog," "man", "face" are all artificial concepts. Like "2" and "4".

These are piecemeal ways of looking at, and thinking about the world. And 
only one level of intelligence.


Contrast that with imaginative, holistic, "embodied"  intelligence:

http://www.youtube.com/watch?v=nk9yHKQRE94&feature=related

Kinda different?

P.S. Re music - it's fairly obvious that visual images are holistic, but 
have you thought about how music's sound images are? The paradox is that we 
hear music moment by moment, and yet we also hear it as sequences - and I 
haven't thought much about this. For example, when I'm listening to my mp3 
player on automatic play, I usually know aurally the notes of what tune's 
coming next, even though I couldn't immediately verbally tell you its title.







Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Mike Tintner
Ben,

I clearly understood/understand this. My point is: are you guys' notions of 
non-human intelligence anything more than sci-fi fantasy as opposed to serious 
invention? To be the latter, you must have some half-concrete ideas - however 
skimpy - of what such intelligence might entail and how it would be different 
from ours. For example, would-be flying machine inventors had some ideas of 
what such a machine might entail - wings or wheels or propeller - and how it 
would function (even if mistaken). And they obviously knew the difference 
between land and air-based travel. I'm questioning whether y'all have any 
remotely serious idea at all of what a non-human intelligence entails - as 
distinct from purely negative ideas about what parts of human intelligence you 
*don't* want to use/copy, and ethereal ideas about how an AGI will be "much 
more" intelligent. AFAICT every AGI project is actually very, very humanoid.
  Brad:

Sigh.  Your point of view is heavily biased by the unspoken assumption that 
AGI must be Turing-indistinguishable from humans.  That it must be AGHI.


Brad,

Literally: "what on earth are you talking about?" What other than human 
intelligence - symbol & sign-using intelligence - is there? (& a few Washoe 
chimps - & computer extensions of the brain).



  He is talking about making something that does not yet exist, just as 
airplanes and computer chips did not exist before they were first  made.

  -- Ben G









Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Mike Tintner
Ben,

I expressed myself badly. Clearly AGI-ers have ideas & systems, like you,  for 
AGI. But, I suggest, if you examine them, these are all actually humanoid - 
clear adaptations of human intelligence. Nothing wrong with that. It's just 
that AGI-ers often *talk* as if they are developing, or could develop, a truly 
non-human intelligence - a brain that could think in *fundamentally* different 
ways from humans. That's certainly an interesting concept, worth consideration. 
But I am not aware that AGI-ers actually have any ideas at all about what such 
"fundamentally different ways" might entail - what a brain revolutionarily 
different from ours might involve. Do you have/know any such ideas?
  Ben/MT:

I clearly understood/understand this. My point is:  are you guys' notions 
of non-human intelligence anything more than sci-fi fantasy as opposed to 
serious invention? To be the latter, you must have some half-concrete ideas - 
however skimpy - of what such intelligence might entail and be different from 
ours.


  See

  http://opencog.org/wiki/OpenCogPrime

  thx
  Ben












Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Mike Tintner
The real big deal is: how would a non-human brain/robot *think* differently? 
It's easy to envisage radically different substrates.


Here,for example, is a pure sci-fi form of radically different thinking. 
Imagine a creation that could not just entertain images of things, but could 
physically become them - morph into them, like, as I understand, The Thing 
in the movie - something that could really put itself into other people's 
skins. OK that's totally wild, but maybe it'll open the way to more 
realistic possibilities.


Eric/MT:
> But, I suggest, if you examine them, these are all actually humanoid - clear 
adaptations of human intelligence. Nothing wrong with that. It's just that 
AGI-ers often *talk* as if they are developing, or could develop, a truly 
non-human intelligence - a brain that could think in *fundamentally* different 
ways from humans.


I think it's an issue of substrate. An AGI built on human-like
cognitive principles -- even a total procedurally-accurate
reimplementation of a human mind -- running on an electronic rather
than organic platform would be a very different kind of intelligence
indeed.








Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner


Eric:



Yes. An electronic mind need never forget important facts. It'd enjoy
instant recall and on-demand instantaneous binary-precision arithmetic
and all the other upshots of the substrate. On the other hand it
couldn't take, say, morphine!



It would though, presumably, have major problems handling the amounts of 
information if it had perfect recall - & would have a problem deciding what 
were important facts.


These wild non-human speculations *are* in fact useful, I think - because 
they help you realise the advantages of the human system, imperfect (in this 
case forgetful) as it seems.


We obviously *can* give AGI's supersenses - X-ray vision, for example - but 
then comes the problem of how you would integrate this with its other senses 
(how would you alternate it with normal vision?), and process its info so as 
to be generally available.







Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner
Ben: but, from a practical perspective, it seems more useful to think about 
minds that are roughly similar to human minds, yet better adapted to existing 
computer hardware, and lacking humans' most severe ethical and motivational 
flaws

Well a) I think that we now agree that you are engaged in a basically, however 
loosely, humanoid endeavour (and thanks for setting out your thinking). But b) 
I disagree about those "flaws". My general philosophy, which I keep stressing 
(& is perhaps v. v. loosely in parts in line with Richard's), is: yes, 
everywhere you look at the human system, you see what look like flaws. But, as 
a general principle, those "flaws" are actually great design when you 
understand the problems they are meant to deal with. The human mind, torn 
between sociocentric and egocentric urges, active and passive urges, behaving 
in crazy, contradictory ways, now altruistically, now egotistically, now 
industriously, now idly, now ascetically, now gluttonously, and absolutely 
riddled with guilt all the time, looks quite mad to a rational, standard, 
mechanistic (and soon-to-be-out-of-date) POV.

But when you're dealing with a whole psychoeconomy of problematic, creative 
problems and activities, just as with a social economy of problems and 
activities, that design is ideal - it helps us survive and adapt, unlike 
standard machines and computers which (you may have heard), single-minded and 
rational as they are, can't deal with such problems or adapt at all.

That kind of "flawed", divided mind - still totally alien to the thinking of 
both AGI and cog. sci and rational philosophy - is cool - just what you should 
be aiming for.

Don't knock the human system until you've understood it - & you guys certainly 
don't understand either its emotions or its conscience, or the open-ended and 
conflicted nature of its drives. (Can you think of any major rational, 
logicomathematical thinker ever who has been noted for his psychological 
sensibility & sensitivity?)




Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner
Ben,

Obviously an argument too massive to be worth pursuing in detail. But just one 
point - your arguments are essentially specialist, focussing on isolated 
anatomical rather than cognitive features (and presumably we (science) don't 
yet have the general, systemic overview necessary to appreciate what would be 
the practical consequences to the rest of the body of, say, altering those 
isolated features like the clitoris - which, ahem, can, like everything else, 
no doubt, ideally, be improved). I am asserting a general, systemic philosophy 
that I applied to the whole of the human mind - and you have to stand back 
and look at its apparently crazy contradictions as a whole.

Just as you are in a rational, specialist way picking off isolated features, 
so, similarly, rational, totalitarian thinkers used to object to the crazy, 
contradictory complications of the democratic, "conflict" system of 
decisionmaking by contrast with their pure ideals. And hey, there *are* crazy 
and inefficient features - it's a real, messy system. But, as a whole, it works 
better than any rational, totalitarian, non-conflict system. Cog sci can't yet 
explain why, though, can it? (You guys, without realising it, are all rational, 
totalitarian systembuilders).
  Ben/MT:

Ben:but, from a practical perspective, it seems more useful to think about 
minds that are rougly similar to human minds, yet better adapted to existing 
computer hardware, and lacking humans' most severe ethical and motivational 
flaws

Well a) I think that we now agree that you are engaged in a basically, 
however loosely, humanoid endeavour (and thanks for setting out your thinking). 
But b) I disagree about those "flaws". My general philosophy which I keep 
stressing (& is perhaps v. v. loosely in parts in line with Richard's) is: yes, 
everywhere you look at the human system, you see what look like flaws. But, as 
a general principle, those "flaws" are actually great design when you 
understand the problems they are meant to deal with. 


  This is one of those misleading half-truths...

  Evolution sometimes winds up solving optimization problems effectively, but 
it solves each one given constraints that are posed by its prior solutions to 
other problems ...

  For instance, it seems one of the reasons we're not smarter than we are is 
that evolution couldn't figure out how to make our heads bigger without having 
too many of us get stuck coming out the vaginal canal during birth.   Heads got 
bigger, hips got wider ... up to a point ... but then the process stopped so 
we're the dipshits that we are.  Evolution was solving an optimization problem 
(balancing maximization of intelligence and minimization of infant and mother 
mortality during birth) but within a context set up by its previous choices ... 
it's not as though it achieved the maximum possible intelligence for any 
humanoid, let alone for any being.

  Similarly, it's hard for me to believe that human teeth are optimal in any 
strong sense.  No, no, no.  They may have resulted as the solution to some 
optimization problem based on the materials and energy supply and food supply 
at hand at some period of evolutionary history ... but I refuse to believe 
that in any useful sense they are an ideal chewing implement, or that they 
embody some amazingly wise evolutionary insight into the nature of chewing.

  Is the clitoris optimal?  There is a huge and silly literature on this, but 
(as much of the literature agrees) it seems obvious that it's not.  

  The human immune system is an intelligent pattern recognition system, but if 
it were a little cleverer, we wouldn't need vaccines and we wouldn't have 
AIDS...

  We don't understand every part of the human brain/body, but those parts we do 
understand do NOT convey the message that you suggest.  They reflect a reality 
that the human brain/body is a mess combining loads of elegant solutions with 
loads of semi-random hacks.   Not surprisingly, this is also what we see in the 
problem-solutions produced by evolutionary algorithms in computer science 
simulations.

  -- Ben G












Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner

Will,

Maybe I should have explained the distinction more fully. A totalitarian 
system is one with an integrated system of decisionmaking and unified goals. 
A "democratic", "conflict" system is one that takes decisions with opposed, 
conflicting philosophies and goals (a la Democratic vs Republican parties) 
fighting it out. Cog sci treats humans as if we are rational, consistent 
thinkers/ computers. AGI-ers AFAIK try to build rational, consistent (& 
therefore "totalitarian") computer systems. Actually, humans are very much 
conflict systems, and to behave consistently for any extended period in any 
area of your life is a supreme and possibly heroic achievement. A conflicted, 
non-rational system is paradoxically better psychologically as well as 
socially - and, I would argue, absolutely essential for dealing with AGI 
decisions/problems, as (most of us will agree) it is for social problems. But 
it requires a whole new paradigm. Two minds (and two hearts) (and two cores?) 
are better than one. (And it's the American way).



Will/MT: >> Just as you are in a rational, specialist way picking off isolated 
features, so, similarly, rational, totalitarian thinkers used to object to the 
crazy, contradictory complications of the democratic, "conflict" system of 
decisionmaking by contrast with their pure ideals. And hey, there *are* crazy 
and inefficient features - it's a real, messy system. But, as a whole, it 
works better than any rational, totalitarian, non-conflict system. Cog sci 
can't yet explain why, though, can it? (You guys, without realising it, are 
all rational, totalitarian systembuilders).




All?  I'm a rational, economically minded system builder, thank you
very much. I can't answer questions you want answered, like how will
my system reason with imagination, precisely because I am not a
totalitarian. If you wish to be non-totalitarian you have to set up a
system in a certain way and let the dynamics you set up potentially
transform the system into something that can reason as you want.

Theoretically the system could be set up to reason as you want
straight away. But setting up a baby-level system seems orders of
magnitude easier than expecting it to solve problems straight away. In
which case exact knowledge of the inner workings of mature imagination is
not required.

The more you ask for early results of systems, the more likely you are
to get totalitarians building your machines. Because they can get
results quick.

 Will Pearson










Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner
Ben: By true rationality I simply mean making judgments in accordance with 
probability theory based on one's goals and the knowledge at one's disposal.

Which is not applicable to AGI problems, which are wicked and ill-structured, 
and where you cannot calculate probabilities, and are not sure of your goals, 
or what knowledge domains to consult or apply. Problems such as those of 
creative discovery that you mention.

Or even simpler problems, like: how were you to handle the angry Richard 
recently? Your response, and I quote: "Aaargh!" (as in "how on earth do I 
calculate my probabilities and Bayes?" and "which school of psychological 
thought is relevant here?"). Now you're talking AGI. There is no rational or 
logical or mathematical way to handle another person, nor to make a scientific 
discovery, nor to do anything that is AGI as opposed to narrow AI. I urgently 
advise reading Kauffman's Reinventing the Sacred, where he says compatible 
things (& I may do a more extensive post on the book).

Nor strictly are there any errors - no right or wrong. Just more or less highly 
effective or ineffective approaches to a problem. It's a different paradigm.




Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Mike Tintner

Will: I thought you meant rational as applied to the system builder :P
Consistency of systems is overrated, as far as I am concerned.
Consistency is only important if the lack of it ever becomes exploited. A
system that alters itself to be consistent after the fact is
sufficient.

Do you remember when I wrote this?

http://www.mail-archive.com/agi@v2.listbox.com/msg07233.html

What parts of it suggest a fixed and totalitarian system to you?


Will,

I didn't & still don't quite understand your ideas there. You need to give 
some examples of how they might apply to particular problems. The fact that a 
program/set of programs can change v. radically - and even engage opposite 
POV's - doesn't necessarily mean it isn't still a totalitarian system.


What you call "addiction" is a central example of how humans are a 
conflicted system - all our lives we are torn between urges to consume 
gluttonously and urges to consume abstemiously/moderately, right across 
multiple appetites. At a basic level, this conflict never changes. Ditto 
your conflicts between activity and passivity.  You aren't designed to 
finally resolve these conflicts in any particular way, like falling into 
permanent addiction. You are designed to be permanently conflicted - just 
like a democratic political system. 







Re: [agi] The Necessity of Embodiment

2008-08-11 Thread Mike Tintner
  Ben/MT: Cog sci treats humans as if we are rational, consistent thinkers/ 
computers. 

No, it just doesn't.  This is an egregious oversimplification and mis-analysis 
of the cognitive science community and its research and ideas.  Look at the 
heuristics and biases literature, for one thing... and the literature on 
analogical reasoning ... on the cognitive psychology of emotion ... etc. etc. 
etc

Ben,

I suspect this is a similar misunderstanding to Richard's response to the 
above, long ago - and it's an important subject. Cog sci is obsessed with the 
many irrationalities of human thinking, yes. That doesn't mean it doesn't see 
humans as basically rational, consistent, computer-like thinkers dealing mainly 
with rational problems. The irrationalities are seen as so many bugs that can, 
ideally, be fixed. Classic example of this attitude:

"More puzzling is myopic discounting: the tendency in all of us to prefer a 
large late reward to a small early one, but then to flip our preferences as 
time passes and both rewards draw nearer. A familiar example is deciding before 
dinner to skip dessert (a small early reward) in order to lose weight (a large 
late one), but succumbing to temptation when the waiter takes the dessert 
orders. Myopic discounting is easy to produce in the lab: give people (or 
pigeons, for that matter) two buttons, one delivering a small reward now, the 
other delivering a large reward later, and the subject will flip from choosing 
the large reward to choosing the small reward as the small one becomes 
imminent. The weakness of the will is an unsolved problem in economics and 
psychology alike."
Pinker - How The Mind Works
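
(To make the flip concrete: a small numeric sketch assuming the standard hyperbolic discounting formula v = A/(1 + kD) - one common way the literature models myopic discounting; the amounts, delays and k below are made up for illustration:)

# Sketch of preference reversal under hyperbolic discounting v = A / (1 + k*D).
def value(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

small, large = 5.0, 10.0          # small early reward vs large late reward

# Far in advance: small reward in 5 days, large in 10 days -> prefer the large one.
print(value(small, 5), value(large, 10))   # 0.83 vs 0.91

# At the last moment: small reward now, large in 5 days -> the preference flips.
print(value(small, 0), value(large, 5))    # 5.0 vs 1.67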

Cog sci. sees all this as puzzling and can't solve the problem of "the weakness 
of the will", because none of it makes sense within a rational thinker 
paradigm, and "myopic discounting" clearly can't be "fixed".

I am advancing an alternative paradigm in which it all does make sense. We 
aren't rational (or irrational) at all for the most part; rational (eg 
logicomathematical) problems are only half at most of the problems we have to 
deal with. Actually, we are creative thinkers, dealing mainly with creative 
problems, (like what to eat tonight as well as how to write a post or design an 
AGI ), and we are designed to be fundamentally and permanently conflicted, (and 
therefore erratically "strong"/"weak"-willed and "unfixable" like democratic 
systems), in order to deal with those problems. (And AGI too is about creative 
not rational problems).





[agi] AGI's Philosophy of Learning

2008-08-13 Thread Mike Tintner
THE POINT OF PHILOSOPHY:  There seemed to be some confusion re this - the 
main point of philosophy is that it makes us aware of the frameworks that 
are brought to bear on any subject, from sci to tech to business to arts - 
and therefore the limitations of those frameworks. Crudely, it says: hey, 
you're looking in 2D, you could be looking in 3D or nD.


Classic example: Kuhn. Hey, he said, we've thought science discovers bodies 
feature-by-feature, with a steady accumulation of facts. Actually those 
studies are largely governed by paradigms [or frameworks] of bodies, which 
heavily determine what features we even look for in the first place. A 
beautiful piece of philosophical analysis.


AGI: PROBLEM-SOLVING VS LEARNING.

I have difficulties with AGI-ers, because my philosophical approach to AGI 
is -  start with the end-problems that an AGI must solve, and how they 
differ from AI. No one though is interested in discussing them - to a great 
extent, perhaps, because the general discussion of such problem distinctions 
throughout AI's history (and through psychology's and philosophy's history) 
has been pretty poor.


AGI-ers, it seems to me, focus on learning - on how AGI's must *learn* to 
solve problems. The attitude is : if we can just develop a good way for 
AGI's to learn here, then they can learn to solve any problem, and gradually 
their intelligence will just take off, (hence superAGI). And there is a 
great deal of learning theory in AI, and detailed analysis of different 
modes of learning, that is logic- and maths-based. So AGI-ers are more 
comfortable with this approach.


PHILOSOPHY OF LEARNING

However there is relatively little broad-based philosophy of learning. Let's 
do some.


V. broadly, the basic framework, it seems to me, that AGI imposes on 
learning to solve problems is:


1) define a *set of options* for solving a problem, and attach, if you can, 
certain probabilities to them


2) test those options,  and carry the best, if any, forward

3) find a further set of options from the problem environment, and test 
those, updating your probabilities and also perhaps your basic rules for 
applying them, as you go


And, basically, just keep going like that, grinding your way to a solution, 
and adapting your program.
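
(A bare-bones sketch of that grind - the option set, scoring function and expansion rule below are placeholders invented for illustration, not anyone's actual AGI design:)

import random

# Sketch of the generate -> test -> carry-forward -> expand loop described above.
def solve(initial_options, score, expand, rounds=10):
    options = {opt: 0.5 for opt in initial_options}    # option -> rough prior
    best = None
    for _ in range(rounds):
        # 2) test the current options and keep the best so far
        tested = {opt: score(opt) for opt in options}
        best = max(tested, key=tested.get)
        # 3) pull further options from the "problem environment" and
        #    update the probabilities/weights as you go
        for new_opt in expand(best):
            options[new_opt] = 0.5
        for opt in options:
            options[opt] = 0.9 * options[opt] + 0.1 * tested.get(opt, 0.0)
    return best

# Toy usage: "options" are numbers, the goal is to grind towards 42.
result = solve(initial_options=[0, 10, 50],
               score=lambda x: -abs(42 - x),
               expand=lambda x: [x - 1, x + 1, random.randint(0, 100)])
print(result)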


What separates AI from AGI is that in the former:

* the set of options [or problem space] is well-defined [as, say, for how a 
program can play chess] and the environment is highly accessible. AGI-ers 
recognize their world is much more complicated and not so clearly defined, 
and full of *uncertainty*.


But the common philosophy of both AI and AGI and programming, period, it 
seems to me, is : test a set of options.


THE $1M QUESTION with both approaches is: *how do you define your set of 
options*? That's the question I'd like you to try and answer. Let's make it 
more concrete.


a) Defining A Set of Actions?   Take AGI agents, like Ben's, in virtual 
worlds. Such agents must learn to perform physical actions and move about 
their world. Ben's had to learn how to move to a ball and pick it up.


So how do you define the set of options here - the set of 
actions/trajectories-from-A-to-B that an agent must test? For, say, moving 
to, or picking up/hitting, a ball. Ben's tried a load - how were they 
defined? And by whom? The AGI programmer or the agent?


b) Defining A Set of Associations?  Essentially, a great deal of formal 
problem-solving comes down to working out that A is associated with B (if 
C, D, E, and however many conditions apply) - whether A "means," "causes," 
or "contains" B etc.


So basically you go out and test a set of associations, involving A and B 
etc, to solve the problem. If you're translating or defining language, you 
go and test a whole set of statements involving the relevant words, say "He 
jumped over the limit" to know what it means.


So, again, how do you define the set of options here - the set of 
associations to be tested, e.g. the set of texts to be used on Google, say, 
for reference for your translation?


c) What's The Total Possible Set of Options [Actions/Associations]? - how can 
you work out the *total* possible set of options to be tested (as opposed to 
the set you initially choose)? Is there one with any AGI problem?


Can the set of options be definitively defined at all? Is it infinite, say, 
for that set of trajectories, or somehow limited? (Is there a definitive 
or guaranteed way to learn language?)


d) How Can You Ensure the Set of Options is not arbitrary?  That you won't 
entirely miss out the crucial options no matter how many more you add? Is 
defining a set of options an art not a science - the art of programming, 
pace Matt?


POST HOC VS AD HOC APPROACHES TO LEARNING:  It seems to me there should be a 
further condition to how you define your set of options.


Basically, IMO, AGI learns to solve problems, and AI solves them, *post 
hoc.* AFTER the problem has already been solved/learned.


The pe

Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-13 Thread Mike Tintner
Thanks, Ed. My casual impression is that the scientist here, Kevin Warwick, 
is a bit of a nut - although skilled at self-publicising. Some years ago, he 
had a chip sewn into his arm. He was going to open doors with it and other 
stuff. Anyone know what happened with that?
  Ed: "A 'Frankenrobot' with a biological brain  
  Meet Gordon, probably the world's first robot controlled exclusively by 
living brain tissue."  

  Article at 

  http://www.breitbart.com/article.php?id=080813192458.ud84hj9h&show_article=1 









Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Mike Tintner

Jim: I know that there are no solid reasons to believe that some kind of 
embodiment is absolutely necessary for the advancement of agi.

I want to concentrate on one dimension of this: precisely the "solid" 
dimension. My guess would be that this is a dimension of AGI that has been 
barely thought through at all - rather like the claim that AGI projects are 
nonhuman.


What it comes down to is: what can you learn about any object[s] from flat 
drawings of them? Cardboard cutouts? It's a fascinating question, because it 
forces you to ask what you do/don't learn from the flat/solid object. [Bear 
in mind that almost ALL our culture's media are flat - there aren't too many 
statues and solid models around].


What can you learn from a flat representation of a building as opposed to 
the real thing that can be entered and walked around at will? Or a flat 
representation of a rock as opposed to the real to-be-handled object?


You can of course, make your building walkable through in the AGI world, on 
that flat screen, but every POV will be, presumably, programmer-defined 
beforehand. So what the system as a whole can truly *learn* is extremely 
limited.


Also, presumably, movement in this world is simply movement of a flat shape 
through a flat world - with few dimensions of real object movement - weight, 
friction, heat, forces resisting you, balance and balance maintenance, 
centeredness, kinaesthetic awareness, inner emotions, feelings of energy, 
tiredness.


Ben and other similar AGI-ers, (Voss?), ought to have some papers on 
flatlands vs real, solidlands... do they? I'd doubt it.


But this question forces us to think about our culture's limitations as 
well. 







Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Mike Tintner

Jim: This is also a problem in animal vision.  Each eye is 2-D.  (That is
not entirely true, but from a practical point of view it is true.)
As far as flat land or hollywood land goes, we only live on the earth, so
that means that you can't understand anything about space, right?

Logic running wild, Jim. We animals and humans are not limited by flattish 
retinas in learning about the world, because we have a body and senses, and 
can walk round and explore objects from multiple POV's & with multiple 
faculties, and embody them with our own bodies - put ourselves in their 
place. A virtual world AGI *IS* so limited. Have you explored the price of 
those unquestionable limitations - which is an important question and needs 
to be answered? Or are you just bent on defending "flatness" and the lack of 
a "rounded POV"?


Bob: This is essentially the same problem as in computer vision.  The objects 
that you're looking at are three dimensional, but a camera image is only a 
two dimensional shadow of them.  The problem then becomes one of trying to 
reverse engineer the 3D shape from a set of lower dimensional shadows
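
(The textbook baby case of that reverse-engineering, sketched under the assumption of a calibrated, rectified stereo pair - focal length f, baseline B, disparity d - none of which Bob specifies here:)

# Sketch: recovering one 3D point from two 2D "shadows" (a stereo pair).
# Assumes rectified cameras: depth Z = f * B / d, where d is the disparity
# between the point's x-coordinates in the left and right images.
def triangulate(x_left, x_right, y, f=800.0, baseline=0.1):
    d = x_left - x_right                 # disparity in pixels
    if d <= 0:
        raise ValueError("point must be in front of both cameras")
    Z = f * baseline / d                 # depth
    X = Z * x_left / f                   # back-project to 3D
    Y = Z * y / f
    return X, Y, Z

print(triangulate(x_left=120.0, x_right=100.0, y=40.0))
# -> roughly (0.6, 0.2, 4.0) metres for these made-up pixel coordinates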


But - correct me - when you engineer the 3D shape, you are merely applying 
previous, existing knowledge about other objects to do so - which is a useful 
but narrow AI function. You are not actually discovering anything new about 
this particular object?








Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Mike Tintner
Ben: as discussed already ad nauseam, I do not think that robust 
perception/action is necessarily the best place to start in making an AGI. 
However, our current work on embodying Novamente and OpenCog does involve 3D 
virtual worlds ... and, of course, my planned work with Xiamen University using 
OpenCog to help control a Nao robot also involves our 3D visual world...

In terms of P.R. - communicating what you're doing: through which channels 
your AGI's or other AGI's will try to learn about which dimensions of which 
worlds - this is all a little confusing. And I suspect everybody, including 
you, is a little confused here - especially about the different 
limitations/advantages of the different approaches. Hey, you seem to be in a 
highly experimental/exploratory stage, which is fine. But the "ad nauseam" 
comment seems to be aimed at stifling rather than encouraging further analysis, 
and covering over the confusion. There's actually an awful lot to discuss here 
- it hasn't all been covered.






Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-15 Thread Mike Tintner

http://www.wired.com/wired/archive/8.02/warwick.html





Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Mike Tintner

Abram: I am worried -- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your 
thinking). AGI's have a roughly 50 year record of total failure. They have 
never shown the slightest sign of general intelligence - of being able to 
cross domains. How do you think they will or could fool anyone? 







Re: [agi] AGI's Philosophy of Learning

2008-08-18 Thread Mike Tintner

Abram,

The key distinction here is probably that some approach to AGI may be widely 
accepted as having great *promise*. That has certainly been the case, 
although I doubt actually that it could happen again. There were also no 
robots of note in the past. Personally, I can't see any approach being 
accepted  now - and the general responses of this forum, I think, support 
this - until it actually delivers on some form of GI.


Mike,

There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be "complete", for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets with multiple hidden layers). There is a
completeness result associated with these. I quote from
http://www.learnartificialneuralnetworks.com/backpropagation.html :

"Although backpropagation can be applied to networks with any number
of layers, just as for networks with binary units it has been shown
(Hornik, Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989;
Hartman, Keeler, & Kowalski, 1990) that only one layer of hidden units
suces to approximate any function with finitely many discontinuities
to arbitrary precision, provided the activation functions of the
hidden units are non-linear (the universal approximation theorem). In
most applications a feed-forward network with a single layer of hidden
units is used with a sigmoid activation function for the units. "

This sort of thing could have contributed to the 50 years of
less-than-success you mentioned.
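
(For the flavour of that result: a minimal sketch of a one-hidden-layer sigmoid net fitted by backpropagation to a toy 1D function. Nothing here is specific to the papers cited - it just shows the kind of network the theorem is about:)

import numpy as np

# One hidden layer of sigmoid units, trained by backprop to approximate sin(x).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

H = 20                                      # hidden units
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(x @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backward pass (mean squared error gradients)
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                        # gradient step

print(np.abs(pred - y).max())               # approximation error shrinks with more hidden units / training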

The second way this phenomenon could manifest is more a personal fear
than anything else. I am worried that there really might be partial
principles of mind that could seem to be able to do everything for a
time. The possibility is made concrete for me by analogies to several
smaller domains. In linguistics, the grammar that we are taught in
high school does almost everything. In logic, 1st-order systems do
almost everything. In sequence learning, hidden markov models do
almost everything. So, it is conceivable that some AGI method will be
missing something fundamental, yet seem for a time to be
all-encompassing.

On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner <[EMAIL PROTECTED]> 
wrote:
Abram: I am worried -- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your
thinking). AGI's have a roughly 50 year record of total failure. They have
never shown the slightest sign of general intelligence - of being able to
cross domains. How do you think they will or could fool anyone?













[agi] How We Look At Faces

2008-08-20 Thread Mike Tintner
[I wonder whether the difference below *is* biological - due to narrower 
eyes taking that little bit longer to process?]


Culture Shapes How We Look at Faces
Caroline Blais (1,2), Rachael E. Jack (1), Christoph Scheepers (1), Daniel 
Fiset (1,2), Roberto Caldara (1)

1 Department of Psychology, University of Glasgow, Glasgow, United Kingdom
2 Département de Psychologie, Université de Montréal, Montréal, Canada

Abstract
Background
Face processing, amongst many basic visual skills, is thought to be 
invariant across all humans. From as early as 1965, studies of eye movements 
have consistently revealed a systematic triangular sequence of fixations 
over the eyes and the mouth, suggesting that faces elicit a universal, 
biologically-determined information extraction pattern.


Methodology/Principal Findings
Here we monitored the eye movements of Western Caucasian and East Asian 
observers while they learned, recognized, and categorized by race Western 
Caucasian and East Asian faces. Western Caucasian observers reproduced a 
scattered triangular pattern of fixations for faces of both races and across 
tasks. Contrary to intuition, East Asian observers focused more on the 
central region of the face.


Conclusions/Significance
These results demonstrate that face processing can no longer be considered 
as arising from a universal series of perceptual events. The strategy 
employed to extract visual information from faces differs across cultures.


Source: PLoS One [Open Access]
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0003022






Re: [agi] The Necessity of Embodiment

2008-08-23 Thread Mike Tintner
Terren:> Just wanted to add something, to bring it back to feasibility of 
embodied/unembodied approaches. Using the definition of embodiment I 
described, it needs to be said that it is impossible to specify the goals of 
the agent, because in so doing, you'd be passing it information in an 
unembodied way. In other words, a fully-embodied agent must completely 
structure internally (self-organize) its model of the world, such as it is. 
Goals must be structured as well. Evolutionary approaches are the only means 
at our disposal for shaping the goal systems of fully-embodied agents, by 
providing in-built biases towards modeling the world in a way that is in 
alignment with our goals. That said, Friendly AI is impossible to guarantee 
for fully-embodied agents.


The question then becomes, is it necessary to implement full embodiment, 
in the sense I have described, to arrive at AGI? I think most in this 
forum will say that it's not. Most here say that embodiment (at least 
partial embodiment) would be useful but not necessary.


OpenCog involves a partially embodied approach, for example, which I 
suppose is an attempt to get the best of both worlds - the experiential 
aspect of embodied senses combined with the precise specification of goals 
and knowledge, not to mention additional components that aim to provide 
things like natural language processing.


The part I have difficulty understanding is how a system like OpenCog 
could hope to marry the information from each domain - the self-organized, 
emergent domain of embodied knowledge, and the externally-organized, given 
domain of specified knowledge. These two domains must necessarily involve 
different knowledge representations, since one emerges (self-organizes) at 
runtime. How does the cognitive architecture that processes the specified 
goals and knowledge dovetail with the constructions that emerge from the 
embodied senses?  Ben, any thoughts on that?




Terren,

You're struggling a bit for definitions - but I don't mean that in the least 
critically, because so is everyone who seems to interest you - struggling 
to form a new worldview.


The outgoing worldview, to which AGI is still wedded, sees the world as 
rationally structured - structured physically, behaviourally and 
intelligently.


The new worldview sees living organisms as creatively 
self-structuring - again, physically, behaviourally and intelligently - aka 
autopoiesis and Kauffman's self-organizing organisms.  And as Kauffman 
points out, rationally structured algorithms/programs are demonstrably 
incapable of producing the kind of creative thinking that is essential for 
General Intelligence.


Isn't it clear that if you look at a General Intelligence that works, like 
the human kind, the process of learning and becoming intelligent is the same 
in every field - from reaching out and grasping, to babbling and talking, to 
reading, writing and drawing, and mastering every activity up to and 
including, ironically, learning to program - first you creatively flail, and 
only then do you (and the unconscious mind) impose structure and 
routines/algorithms on the messy results? (Therein lies the General Method 
of General Intelligence). And then those routines/algorithms can only ever 
deal with the routine parts of intelligent activities.  AI here, as 
elsewhere, gets things completely back to front, and assumes that structure 
and order come first. The whole of evolution, including the 
evolution/development of intelligent behaviour, contradicts that (as I 
think you're pointing out). 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] How Would You Design a Play Machine?

2008-08-24 Thread Mike Tintner
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b) plasticine
c) handkerchiefs/shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should soon be obvious is that a robot will be vastly more 
flexible than a computer, but if you want to do it all on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?
How do infants, IOW, play? 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale 
project. And play is fundamental to the human mind-and-body - it 
characterises our more mental as well as more physical activities - 
drawing, designing, scripting, humming and singing scat in the bath, 
dreaming/daydreaming & much more. It is generally acknowledged by 
psychologists to be an essential dimension of creativity - which is the goal 
of AGI. It is also an essential dimension of animal behaviour and animal 
evolution.  Many of the smartest companies have their play areas.


But I'm not aware of any program or computer design for play - as distinct 
from elaborating systematically and methodically or "genetically" on 
themes - are you? In which case it would be good to think about one - it'll 
open your mind & give you new perspectives.


This should be a group where people are not too frightened to play around 
with ideas.


Brad:> Mike Tintner wrote: "...how would you design a play machine - a 
machine

that can play around as a child does?"

I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building an 
AGI any time in this century cannot afford.  I firmly believe if we had 
not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
We *do not need embodiment* to be able to build a powerful AGI that can be 
of immense utility to humanity while also surpassing human intelligence in 
many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., "just like 
us").


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another 
twenty years of "two steps forward, one step back" effort.  So, if we are 
going to develop a non-human-like, non-embodied AGI within the first 
quarter of this century, we are going to have to "just say no" to Turing 
and start to use human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly more 
flexible than a computer, but if you want to do it all on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&; Powered by 
Listbox: http://www.listbox.com
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner


Matt:> Kittens play with small moving objects because it teaches them to be 
better hunters. Play is not a goal in itself, but a subgoal that may or may 
not be a useful part of a successful AGI design.


Certainly, crude imitation of, and preparation for, adult activities is one 
aspect of play. But pure exploration - experimentation -and embroidery also 
are important. An infant dropping & throwing things & handling things every 
which way. Doodling - creating lines that go off and twist and turn in every 
direction. Babbling - playing around with sounds. Sputtering - playing 
around with silly noises - kids love that, no? (Even some of us adults too). 
Playing with stories and events - and alternative endings, beginnings and 
middles.  Make believe. Playing around with the rules of invented games.


Human development allots a great deal of time for such play. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Terren,

Your broad distinctions are fine, but I feel you are not emphasizing the 
area of most interest for AGI, which is *how* we adapt rather than why. 
Interestingly, your blog uses the example of a screwdriver - Kauffman uses 
the same in Chap 12 of Reinventing the Sacred as an example of human 
creativity/divergence - i.e. our capacity to find infinite uses for a 
screwdriver.


"Do we think we could write an algorithm, an effective procedure, to 
generate a possibly infinite list of all possible uses of screwdrivers in 
all possible circumstances, some of which do not yet exist? I don't think we 
could get started."


What "emerges" here, v. usefully, is that the capacity for play overlaps 
with classically-defined, and a shade more rigorous and targeted,  divergent 
thinking, e.g. "find as many uses as you can for a screwdriver, rubber teat, 
needle etc".


...How would you design a divergent (as well as play) machine that can deal 
with the above open-ended problems? (Again surely essential for an AGI)


With full general intelligence, the problem more typically starts with the 
function-to-be-fulfilled - e.g. how do you open this paint can? - and only 
then do you search for a novel tool, like a screwdriver or another can lid.




Terren:> Actually, kittens play because it's fun. Evolution has equipped 
them with the rewarding sense of fun because it optimizes their fitness as 
hunters. But kittens are adaptation executors, evolution is the fitness 
optimizer. It's a subtle but important distinction.


See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

Kittens play with small moving objects because it teaches
them to be better hunters. Play is not a goal in itself, but
a subgoal that may or may not be a useful part of a
successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise,
not a full-scale
project. And play is fundamental to the human mind-and-body
- it
characterises our more mental as well as more physical
activities -
drawing, designing, scripting, humming and singing scat in
the bath,
dreaming/daydreaming & much more. It is generally
acknowledged by
psychologists to be an essential dimension of creativity -
which is the goal
of AGI. It is also an essential dimension of animal
behaviour and animal
evolution.  Many of the smartest companies have their play
areas.

But I'm not aware of any program or computer design for
play - as distinct
from elaborating systematically and methodically or
"genetically" on
themes - are you? In which case it would be good to think
about one - it'll
open your mind & give you new perspectives.

This should be a group where people are not too frightened
to play around
with ideas.

Brad:> Mike Tintner wrote: "...how would you design
a play machine - a
machine
> that can play around as a child does?"
>
> I wouldn't.  IMHO that's just another waste of
time and effort (unless
> it's being done purely for research purposes).
It's a diversion of
> intellectual and financial resources that those
serious about building an
> AGI any time in this century cannot afford.  I firmly
believe if we had
> not set ourselves the goal of developing human-style
intelligence
> (embodied or not) fifty years ago, we would already
have a working,
> non-embodied AGI.
>
> Turing was wrong (or at least he was wrongly
interpreted).  Those who
> extended his imitation test to humanoid, embodied AI
were even more wrong.
> We *do not need embodiment* to be able to build a
powerful AGI that can be
> of immense utility to humanity while also surpassing
human intelligence in
> many ways.  To be sure, we want that AGI to be
empathetic with human
> intelligence, but we do not need to make it equivalent
(i.e., "just like
> us").
>
> I don't want to give the impression that a
non-Turing intelligence will be
> easy to design and build.  It will probably require at
least another
> twenty years of "two steps forward, one step
back" effort.  So, if we are
> going to develop a non-human-like, non-embodied AGI
within the first
> quarter of this century, we are going to have to
"just say no" to Turing
> and start to use human intelligence as an inspiration,
not a destination.
>
> Cheers,
>
> Brad
>
>
>
> Mike Tintner wrote:
>> Just a v. rough, first thought. An essential
requirement of  an AGI is
>> surely that it must be able to play - so how would

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges. But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.


Two questions: 1)  how do you propose that your simulations will avoid the 
kind of criticisms you've been making of other systems of being too guided 
by programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?


2) Have you thought about the evolution of play in animals?

(We "play" BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.." ).






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play..


Well, the key thing for me (although it was, technically, a play-ful 
question :) )  is the distinction between programmed/planned exploration of 
a basically known environment and ad hoc exploration of a deeply unknown 
environment. In many ways, it follows on from my previous thread on 
Philosophy of Learning in  AGI, which asked - how do you learn an unfamiliar 
subject/skill/ activity - could any definite set of principles guide you? 
(This, I presume, is what Ben is somehow dealing with).


If you're an infant, or even often an adult, you don't know what this 
strange object is for or how to manipulate it - so how do you go about 
moving it and testing its properties? How do you go about moving your hand 
(or manipulator if you're a robot)? [I'd be interested in Bob M's input 
here] - exploring its properties and capacities for movement too? What are 
the principles if any that should constrain you?


Equally, if you're exploring an environment - a new kind of room, or a new 
kind of territory like a garden, wood, forest, how do you go about moving 
through it, deciding on paths, orienting yourself, mapping etc.?  Remember 
that these are initially alien environments, so the adult or AGI equivalent 
is exploring a strange planet, or  videogame world with alien kinds of laws.


Play - divergent thinking - exploration - these are all overlapping 
dimensions of a general intelligence developing its intelligence, and 
central to AGI.


And for the more abstractly inclined, I should point out that these 
questions easily translate into the most abstract forms - like how do you 
explore a new area of, or for, logic, or maths? How do you go about 
exploring, or developing, a maths of, say, abstract art?






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner


Bob M:> Play may be about characterising the state space.  As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not.  Armed with this knowledge when
planning an action in future you can make a reasonable estimate of the
possible range of outcomes or affordances, which may be very useful in
practical situations.

> You'll notice that play tends to be directed towards activities with
high novelty.  With enough experience through play an unfamiliar or
novel situation can be decomposed into a set of more predictable
outcomes.


What I was particularly interested in asking you is the following: part of 
the condition of being human is that you have to explore not just the 
outside world, but your own body and brain. And in fact it's potentially 
endless, because the degrees of freedom and range of possibilities for both 
are vast. So there is room to never stop exploring and developing your golf 
swing, say, or working out new ways to dredge out well-buried memories and 
integrate them into new structures - for example, we can all develop a 
memory for dialogue, say, or for physical structures (incl. from the past). 
Clearly, play, along with development generally, is a part of 
self-(one's-own-system)-exploration.


Now robots too have vast, if not quite so vast, possibilities of 
movement and thought. So in principle it sounds like a good, if not 
long-term essential, idea to have them play and explore themselves as humans 
do. In principle, it would be a good idea for a pure AGI computer to explore 
its own vast possibilities/ways-of-thinking. Is anyone trying to design a 
self-exploring robot or computer? Does this principle have a name?
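
(A minimal sketch of Bob's "characterising the state space" idea - purely 
illustrative, with the toy actions and toy world invented here, not anyone's 
actual system: the agent directs its play toward whichever action it currently 
predicts worst.)

import random
from collections import defaultdict

class PlayfulAgent:
    """Toy play-learner: prefers the action whose outcome it predicts worst."""
    def __init__(self, actions, lr=0.3, eps=0.2):
        self.actions = actions
        self.prediction = defaultdict(float)    # predicted outcome per action
        self.error = {a: 1.0 for a in actions}  # running prediction error = "novelty"
        self.lr, self.eps = lr, eps

    def choose(self):
        if random.random() < self.eps:          # occasional idle fiddling
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.error[a])

    def observe(self, action, outcome):
        err = abs(outcome - self.prediction[action])
        self.prediction[action] += self.lr * (outcome - self.prediction[action])
        self.error[action] += self.lr * (err - self.error[action])

def toy_world(action):                           # "shake" is noisy, the rest are stable
    return random.gauss(0.0, 2.0) if action == "shake" else 1.0

agent = PlayfulAgent(["shake", "stack", "drop"])
for _ in range(200):
    a = agent.choose()
    agent.observe(a, toy_world(a))
print({a: round(e, 2) for a, e in agent.error.items()})  # "shake" stays unpredictable

The point is only that "play" here is not goal-directed in the usual sense: 
the agent's single policy is to keep poking at the parts of its state space 
it cannot yet predict.
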





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner

Terren: I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation?


Again, I think you're missing out the most important aspect of having a body 
(& is there a good definition of this? I think roboticists make some kind 
of deal of it). A body IS play, in a broad sense. It's first of all 
continuously *roving* - continuously moving, continuously thinking, 
*whether something is called for or not* (unlike machines which only act to 
order). Frankly, the idea that a human or animal body and brain are 
programmed in an *extended* way - for a minute of continuous action, say, as 
opposed to short routines/habits tossed together - can't be taken seriously: 
we have a major problem concentrating, following a train of thought or 
sticking to a train of movement, for that long. Our mind is continuously 
going off at tangents. The plus side of that is that we are highly adaptable 
and flexible - very ready to get a new handle on things.


The second, still more important advantage of a body (the part, I think, 
that roboticists stress) is that it "incorporates" a vast range of 
possibilities which surely *do not have to be laboriously pre-specified* - 
vast ranges of possible movement and thought that can be playfully explored 
as required, rather than explicitly coded for beforehand. Start moving your 
hand around, twiddling your fingers independently & together, and twisting 
the whole unit, every which way. It's never-ending. And a good deal of it 
will be novel. So the basic general principle of learning any new movement, 
presumably, is "have a stab" at it - stick your hand out at the object in a 
loosely appropriate shape, and then play around with your grip/handling - 
explore your body's range of possibilities. There's no "beforehand."


Ditto the brain has a vast capacity for ranges of "free *non-pre-specified* 
association" - start thinking of - visualising - your screwdriver. Now think 
of similar *shapes*. You should find you can keep going for a good while - a 
stream of new, divergent - not convergent, algorithmically pre-arranged - 
associations (as Kauffman insists). The brain is designed for free, 
unprogrammed association in a way that computers clearly haven't been - or 
haven't been to date. It can freely handle and play with ideas as the hand 
can objects.


God/Evolution clearly looked at Matt's bill for an army of programmers to 
develop an AGI, and decided He couldn't afford it - he'd try something 
simpler and more ingenious. Play around first, program routines second, 
develop culture and AI third.


P.S. The whole concept of an "unembodied intelligence" is a nonsense. There 
is *no such thing*.  The real distinction, presumably, is between embodied 
intelligences that can control their bodies, like humans, and those, like 
computers to date, that can't (or barely). Unembodied intelligences don't 
and *can't* exist.


*Self-control* - being able to control your body - is perhaps the most vital 
dimension of having a body in the sense of the standard debate. Without 
that, you can't understand the distinction between inert matter and life - 
one of the most fundamental early distinctions in understanding the world. 
Without that, I doubt that you can really understand anything.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner
Ben: If an intelligent system has a goal G which is time-consuming or difficult 
to achieve ...
it may then synthesize another goal G1 which is easier to achieve
We then have the uncertain syllogism

Achieving G implies reward
G1 is similar to G

Ben,

The be-all and end-all here though, I presume, is "similarity". Is it a logic-al 
concept?  Finding similarities - rough likenesses as opposed to rational, 
precise, logicomathematical commonalities - is actually, I would argue, a 
process of imagination and (though I can't find a ready term) physical/embodied 
improvisation. Hence rational, logical, computing approaches have failed to 
produce any new (in the normal sense of "surprising") metaphors or analogies, 
or to be creative.

Maybe you could give an example of what you mean by similarity?
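
(For what it's worth, Ben's syllogism - with the implied conclusion that 
achieving G1 therefore brings some discounted reward - can be caricatured in a 
few lines. A sketch only: the similarity, reward and difficulty functions are 
stand-in assumptions, and the similarity measure is exactly the contested part.)

def play_value(goal, candidate, similarity, reward, difficulty):
    """Uncertain syllogism, caricatured: achieving G implies reward,
    G1 is similar to G, hence achieving G1 implies (discounted) reward."""
    return similarity(goal, candidate) * reward(goal) / difficulty(candidate)

def pick_play_goal(goal, candidates, similarity, reward, difficulty):
    # Choose the easier subgoal G1 whose similarity to G best preserves the reward.
    return max(candidates,
               key=lambda g1: play_value(goal, g1, similarity, reward, difficulty))

# Hypothetical toy numbers: a kitten whose long-term goal is hunting.
sim = {("hunt prey", "chase ball"): 0.7, ("hunt prey", "sleep"): 0.1}
print(pick_play_goal("hunt prey", ["chase ball", "sleep"],
                     similarity=lambda g, c: sim.get((g, c), 0.0),
                     reward=lambda g: 10.0,
                     difficulty=lambda c: 1.0))  # -> chase ball
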




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner
Valentina: In other words I'm looking for a way to mathematically define how the 
AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever been 
logically or mathematically (axiomatically) derivable from any old one?  e.g. 
topology,  Riemannian geometry, complexity theory, fractals,  free-form 
deformation  etc etc


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner

Abram,

Thanks for the reply. This is presumably after the fact - can set theory 
predict new branches? Which branch of maths was set theory derivable from? I 
suspect that's rather like trying to derive any numeral system from a 
previous one. Or like trying to derive any programming language from a 
previous one - or any system of logical notation from a previous one.



Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner <[EMAIL PROTECTED]> 
wrote:
Valentina:In other words I'm looking for a way to mathematically define 
how

the AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
been logically or mathematically (axiomatically) derivable from any old
one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
free-form deformation  etc etc

agi | Archives | Modify Your Subscription



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] Re: Information t..PS

2008-08-26 Thread Mike Tintner

Abram,

I suspect what it comes down to - I'm tossing this out off-the-cuff - is 
that each new branch of maths involves new rules, new operations on numbers 
and figures, and new ways of relating the numbers and figures to real 
objects and sometimes new signs, period. And they aren't predictable or 
derivable from previous ones. Set theory is ultimately a v. useful 
convention, not an absolute necessity?


Perhaps this overlaps with our previous discussion, which could perhaps be 
reduced to - is there a universal learning program - an AGI that can learn 
any skill? That perhaps can be formalised as - is there a program that can 
learn any program - a set of rules for learning any set of rules? I doubt 
it. Especially  if as we see with the relatively simple logic discussions on 
this forum, people can't agree on which rules/conventions/systems to apply, 
i.e. there are no definitive rules.


All this can perhaps be formalised neatly, near geometrically. (I'm still 
groping, you understand). If we think of a screen of pixels - can all the 
visual games or branches of maths or art that can be expressed on that 
screen - mazes/maze-running/2d geometry/3d geometry/Riemannian/abstract 
art/chess/go etc - be united under - or derived from - a common set of 
metarules?


It should be fairly easy :) for an up-and-coming maths star like you to 
prove the obvious - that it isn't possible. Kauffman was looking for 
something like this. It's equivalent, it seems to me, to proving that you 
cannot derive any stage of evolution of matter or life from the previous 
one - that the world is fundamentally creative - that there are always new 
ways and new rules to join up the dots.



Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner <[EMAIL PROTECTED]> 
wrote:
Valentina:In other words I'm looking for a way to mathematically define 
how

the AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
been logically or mathematically (axiomatically) derivable from any old
one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
free-form deformation  etc etc

agi | Archives | Modify Your Subscription



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Mike Tintner
Actually, exploring this further - human thinking is v. fundamentally different 
from the computational kind or most AGI conceptions - because it is massively 
and structurally metacognitive, self-examining (which comes under being a 
machine that works by "self-control").

Interestingly, Minsky's model of mind in The Emotion Machine includes this with 
three levels above "Deliberative Thinking":

Reflective Thinking
Self-Reflective Thinking
Self-Conscious Reflection

We don't just think about a problem, we simultaneously think about how we think 
about it, and consciously manage and take decisions about that thinking. We ask 
ourselves questions like:

-How long should we think about it?
-Should we follow our intuitions?
-Do we need examples?
-Should we visualise?
-Should we follow our feelings of confusion?
-Should we articulate our thoughts clearly and slowly or just let them whizz 
along, half-articulated?
-How would so-and-so handle it?
-Should we examine that part of the problem, or will it take too long?
-Should we check the evidence?
-Should we give up, or compromise?
-Should we read a book for ideas? Or consult a dictionary/thesaurus?

Such questions are all parts of our inner thinking dialogue.

As Minsky says, we have many ways to think, & we consciously choose from among 
them - & as a result different people devote very different amounts of time and 
resources to thinking at different times. But Minsky wants to make all this 
into an automatic process - and it can't be - how you think about problematic 
problems is fundamentally problematic in itself - which is why thinking is such 
a hesitant business.
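
(One toy way to put those questions in computational terms - a sketch only, 
with the "ways to think" and their cost estimates invented for illustration - 
is a meta-level that chooses how to think under a time budget, rather than 
just thinking.)

import time

# Invented stand-ins for "ways to think", each with a rough cost estimate in seconds.
def follow_intuition(problem): return "hunch about " + problem
def work_an_example(problem):  return "worked example of " + problem
def visualise(problem):        return "diagram of " + problem

STRATEGIES = [(follow_intuition, 0.5), (work_an_example, 2.0), (visualise, 5.0)]

def deliberate(problem, budget_seconds):
    """Meta-level control: keep choosing ways to think while the budget lasts."""
    deadline = time.monotonic() + budget_seconds
    thoughts = []
    for strategy, estimated_cost in STRATEGIES:
        if time.monotonic() + estimated_cost > deadline:
            break                      # "should we give up, or compromise?"
        thoughts.append(strategy(problem))
    return thoughts

print(deliberate("tidying a room", budget_seconds=3.0))
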
  David Hart: / MT: Is anyone trying to design a self-exploring robot or 
computer? Does this principle have a name?

  Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.

  I believe however that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.

  -dave


--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Mike Tintner
ial case, 
oriented toward ultimate goals G involving physical manipulation

And the knack in gaining anything from play is in appropriate 
similarity-assessment ... i.e. in measuring similarity between G and G1 in such 
a way that achieving G1 actually teaches things useful for achieving G

So for any goal-achieving system that has long-term goals which it can't 
currently effectively work directly toward, play may be an effective strategy...

In this view, we don't really need to design an AI system with play in 
mind.  Rather, if it can explicitly or implicitly carry out the above 
inference, concept-creation and subgoaling processes, play should emerge from 
its interaction w/ the world...

ben g




On Tue, Aug 26, 2008 at 8:20 AM, David Hart <[EMAIL PROTECTED]> wrote:

  On 8/26/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
Is anyone trying to design a self-exploring robot or computer? Does 
this principle have a name?

  Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.

  I believe however that most approaches to designing AGI (those that do 
not specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.

  -dave


--
agi | Archives  | Modify Your Subscription  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first 
overcome " - Dr Samuel Johnson






  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome " - Dr Samuel Johnson




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine? PS

2008-08-27 Thread Mike Tintner
Ben cont,

My formulation that general intelligence depends on creative, playful 
goal-definition and -setting may well sound confusing.

How can or does that work?

Well, it can't work by using logic - too specific, even if it allows specific 
definitions of terms to be changed.

The way it works is clearly exemplified by language - as humans use it, and it 
is meant to be used.

Every word in the language is, in the final analysis, open-ended and capable of 
endless redefinition, and continuously being redefined culturally.

No word is *meant* to be precisely, "definitively" defined - it's only 
misguided philosophers (and logicians) who try to do that. What enables us to 
adapt and survive in our activities is that we use words/goals which are 
open-ended, like "food", "love", "companionship", "success"... or the goal of 
"general intelligence" - meant to be adaptively, continuously redefinable, 
and never to be precisely, logically pinned down, and even to be used, like 
"general intelligence", in a confused, still-searching-for-the-meaning way, as 
AGI-ers do. (Got a way of handling "general intelligence" logically?)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Mike Tintner
Just in case there is any confusion, "ill-defined" in this particular 
context is in no way pejorative. The crux of a General Intelligence for me 
is that it is necessarily a machine that works with more or less ill-defined 
goals to solve ill-structured problems. Bob's self-description is to a 
greater or lesser extent true of how most of us conduct most of our 
activities and lives. The test of a GI, artificial or natural, is how well 
it *creates* goal definitions and structures for solving problems, and the 
actual solutions, ad hoc.


(I still think, of course, that current AGI should have a not-so-ill-structured 
definition of its problem-solving goals).



Bob: >>You on your side insist that you don't have to have such precisely 
defined goals
- your intuitive (and by definition, ill-defined) sense of intelligence 
will

do.



As a child I don't believe that I set out with the goal of "becoming a
software developer".  Indeed, such jobs barely even existed at the
time.  However, through play and experience I may have noticed that I
had certain skills, and later noticed that these might be useful in
particular kinds of situations.  This doesn't seem to be a situation
in which there was a well defined goal tree in advance, which I was
simply moving incrementally towards - although many people might like
to give such a whiggish impression in biographies or CVs.  Rather
there were various ideas and technologies developing at the time, some
of which were transmitted to me and were able to use me as a P2P host
for further propagation.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mike Tintner
Matt: If RSI is possible, then there is the additional threat of a fast 
takeoff of the kind described by Good and Vinge


Can we have an example of just one or two subject areas or domains where a 
takeoff has been considered (by anyone)  as possibly occurring, and what 
form such a takeoff might take? I hope the discussion of RSI is not entirely 
one of airy generalities, without any grounding in reality. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] Re: Goedel machines ..PS

2008-08-28 Thread Mike Tintner
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in 
any specific areas has been considered? 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Mike Tintner

Thanks. But like I said, airy generalities.

That machines can become faster and faster at computations and accumulating 
knowledge is certain. But that's narrow AI.


For general intelligence, you have to be able first to integrate as well as 
accumulate knowledge.  We have learned vast amounts about the brain in the 
last few years, for example - perhaps more than in previous history. But 
this hasn't led to any kind of comparably fast advances in integrating that 
knowledge.


You also have to be able second to discover knowledge  - be creative - fill 
in some of the many gaping holes in every domain of knowledge. That again 
doesn't march to a mathematical formula.


Hence, I suggest, you don't see any glimmers of RSI in any actual domain of 
human knowledge. If it were possible at all, you should see some signs, 
however small.


The whole idea of RSI strikes me as high-school naive - completely lacking 
in any awareness of the creative, systemic structure of how knowledge and 
technology actually advance in different domains.


Another example: try to recursively improve the car - like every part of 
technology it's not a solitary thing, but bound up in vast technological 
ecosystems (here - roads, oil, gas stations etc etc) that cannot be improved 
in simple, linear fashion.


Similarly, I suspect each individual's mind/intelligence depends on complex 
interdependent systems and paradigms of knowledge. And so of necessity would 
any AGI's mind. (Not that mind is possible without a body).





Matt:> Here is Vernor Vinge's original essay on the singularity.

http://mindstalk.net/vinge/vinge-sing.html


The premise is that if humans can create agents with above human 
intelligence, then so can they. What I am questioning is whether agents at 
any intelligence level can do this. I don't believe that agents at any 
level can recognize higher intelligence, and therefore cannot test their 
creations. We rely on competition in an external environment to make 
fitness decisions. The parent isn't intelligent enough to make the correct 
choice.


-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))


Matt:If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge

Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone)  as possibly occurring, and what
form such a takeoff might take? I hope the discussion of RSI is not 
entirely

one of airy generalities, without any grounding in reality.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner

  Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - what 
form of RSI in any specific areas has been considered? 

  To quote Charles Babbage, I am not able rightly to apprehend the kind of 
confusion of ideas that could provoke such a question.

  The best we can hope for is that we participate in the construction and 
guidance of future AGIs such that they are able to, eventually, invent, perform 
and carefully guide RSI (and, of course, do so safely every single step of the 
way without exception).

  Dave,

  On the contrary, it's an important question. If an agent is to self-improve 
and keep self-improving, it has to start somewhere - in some domain of 
knowledge, or some technique/technology of problem-solving... or something. 
Maths perhaps, or maths theorems? Have you or anyone else ever thought about 
where, and how? (It sounds like the answer is, no.)  RSI is for AGI a 
v. important concept - I'm just asking whether the concept has ever been 
examined with the slightest grounding in reality, or merely pursued as a 
logical conceit.

  The question is extremely important because as soon as you actually examine 
it, something v. important emerges - the systemic interconnectedness of the 
whole of culture, and the whole of technology, and the whole of an individual's 
various bodies of knowledge, and you start to see why evolution of any kind in 
any area of biology or society, technology or culture is such a difficult and 
complicated business. RSI strikes me as a last-century, local-minded concept, 
not one of this century, where we are becoming aware of the global 
interconnectedness and interdependence of all systems.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] Frame Semantics

2008-08-29 Thread Mike Tintner
Advances in Frame Semantics:
Corpus and Computational Approaches and Insights

Theme Session to be held at ICLC 11, Berkeley, CA
Date: July 28 - August 3, 2009
Organizer: Miriam R. L. Petruck

Theme Session Description:

Fillmore (1975) introduced the notion of a frame into linguistics over 
thirty years ago.  As a cognitive structuring device used in the service 
of understanding, the semantic frame, parts of which are indexed by words 
(Fillmore 1985), is at the heart of Frame Semantics.  While researchers 
have appealed to Frame Semantics to provide accounts for various lexical, 
syntactic, and semantic phenomena in a range of languages (e.g. Ostman 
2000, Petruck 1995, Lambrecht 1984), its most highly developed 
instantiation is found in FrameNet (http://framenet.icsi.berkeley.edu). An 
ongoing research project in computational lexicography, the FrameNet 
database provides for a substantial portion of the vocabulary of 
contemporary English, a body of semantically and syntactically annotated 
sentences from which reliable information can be reported on the valences 
or combinatorial possibilities of each lexical item.
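
[A minimal sketch, assuming NLTK with its framenet_v17 data downloaded, of 
pulling frames and the lexical units that index them from the FrameNet 
database:]

import nltk
nltk.download("framenet_v17")            # one-time fetch of the FrameNet 1.7 data
from nltk.corpus import framenet as fn

# Frames whose names mention "motion", with their core frame elements.
for frame in fn.frames(r"(?i)motion")[:3]:
    core = [name for name, fe in frame.FE.items() if fe.coreType == "Core"]
    print(frame.name, "->", core)

# Lexical units for the verb "look", i.e. the frames this word indexes.
for lu in fn.lus(r"look\.v")[:5]:
    print(lu.name, "evokes", lu.frame.name)
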

FrameNet has generated great interest in the Natural Language Processing 
community, resulting in new efforts for lexicon building and computational 
semantics. Advances in technology and the availability of large corpora 
have facilitated developing FrameNet lexical resources for languages other 
than English (with Spanish, Japanese, and German the most advanced, and 
Hebrew, Italian, Slovenian and Swedish at early stages). These projects 
(necessarily) also test FrameNet's implicit claim about representing 
conceptual structure, rather than building an application-driven 
structured organization of the lexicon of contemporary English. At the 
same time, FrameNet has inspired research on automatically induced 
semantic lexicons (Green and Dorr 2004, Pado and Lapata 2005) and 
automatic semantic role labeling (ASRL), or "semantic parsing" (Gildea 
and Jurafsky 2002, Thompson et al. 2003, Fleischman and Hovy 2003, 
Litkowski 2004, Baldewein et al. 2004).  Frame Semantics has proven to be 
among the most useful techniques for deep semantic analysis of texts, thus 
contributing to research on information extraction (Mohit and Narayanan 
2003), question answering (Narayanan and Harabagiu 2004, Narayanan and 
Sinha 2005), and automatic reasoning (Scheffczyk et al. 2006, Scheffczyk 
et al., 2007).

In 1999 (at ICLC 6 in Stockholm), researchers began to address cognitive 
aspects of Frame Semantics explicitly in a public forum during a theme 
session on Construction Grammar, the sister theory of Frame Semantics. The 
goal of the 2009 theme session is to bring together researchers in 
cognitive, corpus and computational linguistics to (1) present their work 
using corpus approaches for the development of FrameNet-style lexical 
resources and FrameNet-derived representations for computational 
approaches to semantic processing and (2) share their insights about 
advances in Frame Semantics.  We are particularly interested in work that 
attends to the cognitive linguistic dimension in Frame Semantics.

Submission Procedure

Abstracts must be:
* a maximum of 500 words
* submitted in .pdf format
* received no later than the Sept 30, 2008 deadline
* sent with the title of the paper, name(s) of author(s), affiliation and
   a contact e-mail address
* sent to [EMAIL PROTECTED]

IMPORTANT: Both the theme session proposal itself and the individual 
contributions will undergo independent reviewing by the ICLC program committee.

--



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
nd we haven't dealt with 
the foreign-function interface stuff needed to plug in LISP MindAgents (but 
that's probably not extremely hard).   We have done some experiments before 
expressing, for instance, a simplistic PLN deduction MindAgent in Combo.

  In short the OpenCogPrime architecture explicitly supports a tractable path 
to recursive self-modification.

  But, notably, one would have to specifically "switch this feature on" -- it's 
not going to start doing RSI unbeknownst to us programmers.
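
(Purely as an illustration of that "switch it on" point - this is not OpenCog 
code, and the proposer and scorer below are invented stand-ins - a 
self-modification loop might be gated and verified like this:)

import random

ALLOW_SELF_MODIFICATION = False      # the explicit switch: off by default

def score(params):
    """Stand-in fitness test; a real system would run a battery of tasks."""
    return -sum((p - 3.0) ** 2 for p in params)

def propose(params):
    """Stand-in 'self-modification': perturb the system's own parameters."""
    return [p + random.gauss(0.0, 0.5) for p in params]

def self_improve(params, steps=200):
    if not ALLOW_SELF_MODIFICATION:
        return params                # feature switched off: nothing happens
    best, best_score = params, score(params)
    for _ in range(steps):
        candidate = propose(best)
        if score(candidate) > best_score:     # keep only verified improvements
            best, best_score = candidate, score(candidate)
    return best

print(self_improve([0.0, 0.0]))      # returns the input unchanged unless the switch is on
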

  And the problem of predicting where the trajectory of RSI will end up is a 
different one ... I've been working on some theory in that regard (and will 
post something on the topic w/ in the next couple weeks) but it's still fairly 
speculative...

  -- Ben G


  On Fri, Aug 29, 2008 at 6:59 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:


  Dave Hart: MT:Sorry, I forgot to ask for what I most wanted to know - 
what form of RSI in any specific areas has been considered? 

  To quote Charles Babbage, I am not able rightly to apprehend the kind of 
confusion of ideas that could provoke such a question.

  The best we can hope for is that we participate in the construction and 
guidance of future AGIs such they they are able to, eventually, invent, perform 
and carefully guide RSI (and, of course, do so safely every single step of the 
way without exception).

  Dave,

  On the contrary, it's an important question. If an agent is to 
self-improve and keep self-improving, it has to start somewhere - in some 
domain of knowledge, or some technique/technology of problem-solving...or 
something. Maths perhaps or maths theorems.?Have you or anyone else ever 
thought about where, and how? (It sounds like the answer is, no).  RSI is for 
AGI a v.important concept - I'm just asking whether the concept has ever been 
examined with the slightest grounding in reality, or merely pursued as a 
logical conceit..

  The question is extremely important because as soon as you actually 
examine it, something v. important emerges - the systemic interconectedness of 
the whole of culture, and the whole of technology, and the whole of an 
individual's various bodies of knowledge, and you start to see why evolution of 
any kind in any area of biology or society, technology or culture is such a 
difficult and complicated business. RSI strikes me as a last-century, 
local-minded concept, not one of this century where we are becoming aware of 
the global interconnectedness and interdependence of all systems.


  agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome " - Dr Samuel Johnson




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner


Matt: AGI spans just about every field of science, from ethics to quantum 
mechanics, child development to algorithmic information theory, genetics to 
economics.


Just so. And every field of the arts. And history. And philosophy. And 
technology. Including social technology. And organizational technology. And 
personal technology. And the physical technologies of sport, dance, sex 
etc. The whole of culture and the world.


No, nobody can be a super-Da Vinci knowing everything and solving every 
problem. But actually every AGI-er will have personal experience of solving 
problems in many different domains as well as their professional ones. And 
they should, I suggest, be able to use and integrate that experience into 
AGI. They should be able to metacognitively relate, say, the problem of 
tidying and organizing a room, to the problem of organizing an argument in 
an essay, to the problem of creating an AGI organization, to the problem of 
organizing an investment portfolio, to the problem of organizing a soccer 
team  - because that is the business and problem of AGI. Crossing and 
integrating domains. Any and all domains. There should be a truly general 
culture. What I see is actually a narrow culture (even if AGI-ers are much 
more broadly educated than most), that only discusses a very limited set of 
problems, which are, in the final analysis, hard to distinguish from those 
of narrow AI - and a culture which refuses to consider any problems outside 
its intellectual/professional comfort zone.





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Mike Tintner
Charles,

It's a good example. What it also brings out is the naive totalitarian premises 
of RSI - the implicit premise that you can comprehensively standardise your 
ways to represent and solve problems about the world (as well as the domains 
of the world itself). [This BTW has been the implicit premise of literate, 
rational culture since Plato].

The reason we encourage and foster competition in society - and competing, 
diverse companies and approaches - is that we realise that 
competition/diversity is a fundamental part of evolution, at every level, and 
necessary to keep developing better solutions to the problems of life.

What cog sci and AI haven't realised is that humans are also individually 
designed "competitively" with conflicting emotions and ideas and ways of 
thinking inside themselves -  a necessary structure for an AGI. And such 
conflict inevitably stands in the way of any RSI.

It'd be interesting to have Minsky's input here, because one thing he stands 
for is the principle that human/general minds have to be built kludge-ily with 
many different ways to think - different knowledge systems. We clearly aren't 
meant to - and simply can't - think, for example, just logically and 
mathematically. Evolution and human evolution/history have relentlessly built 
up these GI's with ever more complex repertoires of knowledge representation 
and sensors, because it's a good and necessary principle  - the more complex 
you want your interactions with the world to be. 





Charles/MT:> If RSI were possible, then you should see some signs of it within
> human society, of humans recursively self-improving - at however small a
> scale. You don't because of this problem of crossing and integrating
> domains. It can all be done, but laboriously and stumblingly, not in some
> simple, formulaic way. That is culturally a very naive idea.

I hope nobody minds if I interject with a brief narrative concerning a 
recent experience. Obviously I don't speak for Ben Goertzel, or anyone else who 
thinks RSI or recognizing superior intelligence is possible.

As it happened, I was looking for a new job a while back, and landed an 
interview with a major corporate entity. When I spoke to the HR representative, 
she bemoaned the lack of hiring standards, especially for her own department. 
"It's impossible," she said, "As a consultant explained it to us a few years 
ago, the corporation changes with each person we hire or fire, changes into a 
related but different entity. If we measure the intelligence of a corporation 
in terms of how well suited it is to profit from its environment, my job is to 
make sure that people we hire (on average) result in the corporation becoming 
more intelligent." She looked at me for sympathy. "As if all our resources were 
enough to recognize (much less plan) an entity more intelligent than 
ourselves!" She had a point. "What's worse, we're expected to hire new HR staff 
and provide training that will make our department more effective at hiring new 
people." I nodded. That would lead to recursive self improvement (RSI), which 
is clearly impossible. Finally she said I seemed like the sympathetic sort, and 
even though that had nothing to do with her worthless hiring criteria, I could 
have the job and start right away.

I thought about the problem later, and eventually concluded that one 
good HR strategy would be to form hundreds or thousands (millions?) of 
corporations with stochastic methods for hiring, firing, training, merging and 
creating spinoffs, perhaps using GP or MOSES or some such. Eventually, 
corporations would emerge with superior intelligence.

The alternative would be a massive cross-disciplinary effort, only 
imaginable by a super-neo-da Vinci character who's a master of psychology, 
mathematics, economics, manufacturing, politics -- essentially every field of 
human knowledge, including medical sciences, history and the arts.

I guess it doesn't look too hopeful, so we're probably going to be 
stuck with hiring, firing and training practices that mean absolutely nothing, 
forever.

Charles Griffiths



   
   
   








Re: [agi] How Would You Design a Play Machine?

2008-08-30 Thread Mike Tintner
David: I know that some systems (specifically systems without models or a lot
of human interaction) have had grounding problems, but your statement below
seems to be stating something that is far from proven fact.

Your conclusions about "concept of self" and "unembodied agent means
ungrounded symbols" are also not shared by me and not explained or proven by
you.

Your saying something is so doesn't necessarily make it true.

Terren: To an unembodied agent, the concept of self is indistinguishable from
any other "concept" it works with. I use concept in quotes because to the
unembodied agent, it is not a concept at all, but merely a symbol with no
semantic context attached. All such an agent can do is perform operations on
ungrounded symbols - at best, the result of which can appear to be
intelligent within some domain (e.g., a chess program).


David,

MAN: But enough of talking about me, darling. Let's talk about you... What 
do you think about me?


And how is the computer going to get the joke, without having a self that's 
been in a conversation, and has had physical, emotional urges to talk about 
itself, and has had to wait impatiently while others talked about themselves, 
and without a gut that can laugh?


MAN: You're not a human being, David. You're just a machine. You talk 
robotically, you walk robotically, you think robotically. You don't have any 
feelings.


And how's it going to understand any of that? How's it going to know that 
the man is exaggerating?


MAN: I have terrible problems of self-control whenever I see a doughnut.

And that, esp self-control?

Or:

"Suppose Bob's goal is to create a human-level AI; and he thinks he knows 
how to do it, but the completion of his
approach is likely to take him an indeterminate number of years of work, 
during which he will

have trouble feeding himself.
Consider two options Bob has:
A) Spend 10 years hacking in his basement, based on his AI ideas
B) Spend those 10 years working as a financial trader, and donate 50% of his 
profits to others

creating AI"

How's a computer going to understand the pressures on Bob, and why they 
reflect pressures on Ben?


One can go on in this vein covering all of human and animal affairs and 
life. That doesn't leave a lot.







Re: [agi] draft for comment

2008-09-03 Thread Mike Tintner

Pei:"it is important to understand
that both linguistic experience and non-linguistic experience are both 
special
cases of experience, and the latter is not more "real" than the former. In 
the previous
discussions, many people implicitly suppose that linguistic experience is 
nothing but
"Dictionary-Go-Round" [Harnad, 1990], and only non-linguistic experience can 
give
symbols meaning. This is a misconception coming from traditional semantics, 
which
determines meaning by referred object, so that an image of the object seems 
to be closer

to the "real thing" than a verbal description [Wang, 2007]."

1. Of course the image is more real than the symbol or word.

Simple test of what should be obvious: a) use any amount of symbols you 
like, incl. Narsese, to describe "Pei Wang." Give your description to any 
intelligence, human or AI, and see if it can pick out Pei in a lineup of 
similar men.


b) give the same intelligence a photo of Pei - & apply the same test.

Guess which method will win.

Only images can represent *INDIVIDUAL objects* - incl Pei/Ben or this 
keyboard on my desk. And in the final analysis, only individual objects *are* 
real. There are no "chairs" or "oranges" for example - those general 
concepts are, in the final analysis, useful fictions. There is only this 
chair here and that chair over there. And if you want to refer to them, 
individually, - so that you communicate successfully with another 
person/intelligence - you have no choice but to use images, (flat or solid).


2. Symbols are abstract - they can't refer to anything unless you already 
know, via images, what they refer to. If you think not, please draw a 
"cheggnut"Again, if I give you an image of a cheggnut, you will have no 
problem.


3. You talk of a misconception of semantics, but give no reason why it is 
such, merely state it is.


4. You leave out the most important thing of all - you argue that experience 
is composed of symbols and images. And...?  Hey, there's also the real 
thing(s). The real objects that they refer to. You certainly can't do 
science without looking at the real objects. And science is only a 
systematic version of all intelligence. That's how every functioning 
general intelligence is able to be intelligent about the world - by being 
"grounded" in the real world, composed of real objects, which it can go out 
and touch, walk round, look at and interact with. A box like Nars can't do 
that, can it?


"Do you realise what you're saying, Pei?" To understand statements is to 
*realise* what they mean - what they refer to - to know that they refer to 
real objects, which you can really go and interact with and test - and to 
try (or have your brain try automatically) to connect those statements to 
real objects.


When you or I are given words or images, "find this man [Pei]", or "cook a 
Chinese meal tonight", we know that those signs must be tested in the real 
world and are only valid if so tested. We know that it's possible that that 
man over there who looks v. like the photo may not actually be Pei, or that 
Pei may have left the country and be impossible to find. We know that it may 
be impossible to cook such a meal, because there's no such food around. - 
And all such tests can only be conducted in the real world (and not say by 
going and looking at other texts or photos - living in a Web world).


Your concept of AI is not so much "un-grounded" as "unreal."

5. Why on earth do you think that evolution shows us general intelligences 
very successfully dealing with the problems of the world for over a billion 
years *without* any formal symbols? Why do infants take time to acquire 
language and are therefore able to survive without it?


The conception of AI that you are advancing is the equivalent of 
Creationism - it both lacks and denies an evolutionary perspective on 
intelligence - a (correctly) cardinal sin in modern science.









Re: [agi] Recursive self-change: some definitions

2008-09-03 Thread Mike Tintner
Terren:My own feeling is that computation is just the latest in a series of 
technical metaphors that we apply in service of understanding how the universe 
works. Like the others before it, it captures some valuable aspects and leaves 
out others. It leaves me wondering: what future metaphors will we apply to the 
universe, ourselves, etc., that will make computation-as-metaphor seem as 
quaint as the old clockworks analogies?

I think this is a good important point. I've been groping confusedly here. It 
seems to me computation necessarily involves the idea of using a code (?). But 
the nervous system seems to me something capable of functioning without a code 
- directly being imprinted on by the world, and directly forming movements, 
(even if also involving complex hierarchical processes), without any code. I've 
been wondering whether computers couldn't also be designed to function without 
a code in somewhat similar fashion.  Any thoughts or ideas of your own?




Re: [agi] draft for comment.. P.S.

2008-09-03 Thread Mike Tintner
I think I have an appropriate term for what I was trying to conceptualise. 
It is that intelligence has not only to be embodied, but it has to be 
EMBEDDED in the real world -  that's the only way it can test whether 
information about the world and real objects is really true. If you want to 
know whether Jane Doe is great at sex, you can't take anyone's word for it, 
you have to go to bed with her. [Comments on the term esp. welcome.] 







[agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner
Terren's request for new metaphors/paradigms for intelligence threw me 
temporarily off course. Why a new one - why not the old one? The computer. 
But the whole computer.


You see, AI-ers simply don't understand computers, or understand only half 
of them.


What I'm doing here is what I said philosophers do - outline existing 
paradigms and point out how they lack certain essential dimensions.


When AI-ers look at a computer, the paradigm that they impose on it is that 
of a Turing machine - a programmed machine, a device for following programs.


But that is obviously only the half of it. Computers are obviously much more 
than that - and than Turing machines. You just have to look at them. It's 
staring you in the face. There's something they have that Turing machines 
don't. See it? Terren?


They have -   a keyboard.

And as a matter of scientific, historical fact, computers are first and 
foremost keyboards - i.e. devices for CREATING programs on keyboards - and 
only then following them. [Remember how AI gets almost everything about 
intelligence back to front?] There is not and never has been a program that 
wasn't first created on a keyboard. Indisputable fact. Almost everything 
that happens in computers happens via the keyboard.


So what exactly is a keyboard? Well, like all keyboards whether of 
computers, musical instruments or typewriters, it is a creative instrument. 
And what makes it creative is that it is - you could say - an "organiser."


A device with certain "organs" (in this case keys) that are designed to be 
creatively organised - arranged in creative, improvised (rather than 
programmed) sequences of action/association/"organ play".


And an extension of the body. Of the organism. All organisms are 
"organisers" - devices for creatively sequencing actions/ 
associations./organs/ nervous systems first and developing fixed, orderly 
sequences/ routines/ "programs" second.


All organisers are manifestly capable of an infinity of creative, novel 
sequences, both rational and organized, and crazy and disorganized.  The 
idea that organisers (including computers) are only meant to follow 
programs - to be straitjacketed in movement and thought -  is obviously 
untrue. Touch the keyboard. Which key comes first? What's the program for 
creating any program? And there lies the secret of AGI.








Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner

Terren,

If you think it's all been said, please point me to the philosophy of AI 
that includes it.


A programmed machine is an organized structure. A keyboard (and indeed a 
computer with keyboard) is something very different - there is no 
organization to those 26 letters etc. They can be freely combined and 
sequenced to create an infinity of texts. That is the very essence and, 
manifestly, the whole point, of a keyboard.


Yes, the keyboard is only an instrument. But your body - and your brain - 
which use it,  are themselves keyboards. They consist of parts which also 
have no fundamental behavioural organization - that can be freely combined 
and sequenced to create an infinity of sequences of movements and thought - 
dances, texts, speeches, daydreams, postures etc.


In abstract logical principle, it could all be preprogrammed. But I doubt 
that it's possible mathematically - a program for selecting from an infinity 
of possibilities? And it would be engineering madness - like trying to 
preprogram a particular way of playing music, when an infinite repertoire is 
possible and the environment, (in this case musical culture), is changing 
and evolving with bewildering and unpredictable speed.


To look at computers as what they are (are you disputing this?) - machines 
for creating programs first, and following them second,  is a radically 
different way of looking at computers. It also fits with radically different 
approaches to DNA - moving away from the idea of DNA as coded program, to 
something that can be, as it obviously can be, played like a keyboard  - see 
Denis Noble, The Music of Life. It fits with the fact (otherwise 
inexplicable) that all intelligences have both deliberate (creative) and 
automatic (routine) levels - and are not just automatic, like purely 
programmed computers. And it fits with the way computers are actually used 
and programmed, rather than the essentially fictional notion of them as pure 
Turing machines.


And how to produce creativity is the central problem of AGI - completely 
unsolved.  So maybe a new approach/paradigm is worth at least considering 
rather than more of the same? I'm not aware of a single idea from any AGI-er 
past or present that directly addresses that problem - are you?





Mike,

There's nothing particularly creative about keyboards. The creativity 
comes from what uses the keyboard. Maybe that was your point, but if so 
the digression about a keyboard is just confusing.


In terms of a metaphor, I'm not sure I understand your point about 
"organizers". It seems to me to refer simply to that which we humans do, 
which in essence says "general intelligence is what we humans do." 
Unfortunately, I found this last email to be quite muddled. Actually, I am 
sympathetic to a lot of your ideas, Mike, but I also have to say that your 
tone is quite condescending. There are a lot of smart people on this list, 
as one would expect, and a little humility and respect on your part would 
go a long way. Saying things like "You see, AI-ers simply don't understand 
computers, or understand only half of them."  More often than not you 
position yourself as the sole source of enlightened wisdom on AI and other 
subjects, and that does not make me want to get to know your ideas any 
better.  Sorry to veer off topic here, but I say these things because I 
think some of your ideas are valid and could really benefit from an 
adjustment in your
presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.


Terren


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Will: You can't create a program out of thin air. So you have to have some
sort of program to start with.

Not out of thin air. Out of a general instruction and desire[s]/emotion[s]. 
"Write me a program that will contradict every statement made to it." "Write 
me a single program that will allow me to write video/multimedia 
articles/journalism fast and simply." That's what you actually DO. You start 
with v. general briefs rather than any detailed list of instructions, and 
fill them  in as you go along, in an ad hoc, improvisational way - 
manifestly *creating* rather than *following* organized structures of 
behaviour in an initially disorganized way.


Do you honestly think that you write programs in a programmed way? That it's 
not an *art* pace Matt, full of hesitation, halts, meandering, twists and 
turns, dead ends, detours etc?  If "you have to have some sort of program to 
start with", how come there is no sign  of that being true, in the creative 
process of programmers actually writing programs?


Do you think that there's a program for improvising on a piano [or other 
form of keyboard]?  That's what AGI's are supposed to do - improvise. So 
create one that can. Like you. And every other living creature. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks for reply. But I don't understand what you see as the connection. An 
interaction machine from my brief googling is one which has physical organs.


Any factory machine can be thought of as having organs. What I am trying to 
forge is a new paradigm of a creative, free  machine as opposed to that 
exemplified by most actual machines, which are rational, deterministic 
machines. The latter can only engage in any task in set ways - and therefore 
engage and combine their organs in set combinations and sequences. Creative 
machines have a more or less infinite range of possible ways of going about 
things, and can combine their organs in a virtually infinite range of 
combinations, (which gives them a slight advantage, adaptively :) ). 
Organisms *are* creative machines; computers and robots *could* be (and are, 
when combined with humans), AGI's will *have* to be.


(To talk of creative machines, more specifically, as I did, as 
keyboards/"organisers" is to focus on the mechanics of this infinite 
combinativity of organs).


Interaction machines do not seem in any way then to entail what I'm talking 
about - "creative machines" - keyboards/ organisers - infinite 
combinativity - or the *creation,* as quite distinct from *following*  of 
programs/algorithms and routines..




Abram/MT:>> If you think it's all been said, please point me to the 
philosophy of AI

that includes it.


I believe what you are suggesting is best understood as an interaction 
machine.




General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by "inductive turing machines", but I cannot find a good
single reference for that. (I am not knowledgable on this subject, I
just have heard the idea before.)

--Abram




Re: [agi] open models, closed models, priors

2008-09-04 Thread Mike Tintner
Matt: You absolutely must have a means of guessing probabilities to do 
anything at all in the real world.


Do you mean mathematically?  Estimating chances as roughly, even if 
provisionally,  0.70? If so, manifestly, that is untrue. What are your 
chances that you will get lucky tonight?  Will an inability to guess the 
probability stop you trying?  Most of the time, arguably, we have to and do, 
act on the basis of truly vague magnitudes - a mathematically horrendously 
rough sense of probability. Or just: "what the heck - what's the worst that 
can happen? Let's do it. And let's just pray it works out."  How precise a 
sense of the probabilities attending his current decisions does even a 
professionally mathematical man like Bernanke have?


Only AGI's in a virtual world can live with cosy, mathematically calculable 
"uncertainty." Living in the real world is as Kauffman points out to a great 
extent living with *mystery*. What are the maths of mystery? Do you think 
Ben has the least realistic idea of the probabilities affecting his AGI 
projects? That's not how most creative projects get done, or life gets 
lived.  Quadrillions, Matt, schmazillions.







Re: [agi] open models, closed models, priors

2008-09-04 Thread Mike Tintner

Matt,

I'm confused here. What I mean is that in real life, the probabilities are 
mathematically incalculable, period, a good deal of the time - you cannot 
go, as you v. helpfully point out, much beyond saying this is "fairly 
probable", "may happen", "there's some chance.." And those words are fairly 
good reflections of how we actually reason and "anti-calculate" 
probabilities -*without* numbers or any maths... And such non-mathematical 
vagueness seems foundational for AGI.  You can't, for example, calculate 
mathematically the likeness or the truthfulness of metaphorical terms - of 
storms and swirling milk in a teacup. Not even provisionally.


My understanding is that AGI-ers still persist in trying to use numbers, and 
you seem, in your first sentence, to be advocating the same.



Matt: I mean that you have to assign likelihoods to beliefs, even if the 
numbers are wrong. Logic systems where every statement is true or false 
simply are too brittle to scale beyond toy problems. Everything in life is 
uncertain, including the degree of uncertainty. That's why we use terms like 
"probably", "maybe", etc. instead of numbers.








Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks. V. helpful and interesting. Yes, on further examination, these 
interactionist guys seem, as you say, to be trying to take into account  the 
embeddedness of the computer.


But no, there's still a huge divide between them and me. I would liken them, 
in the context of this discussion, to Pei, who tries to argue that NARS is 
"non-algorithmic" because the program is continuously changing - and 
therefore satisfies the objections of classical objectors to AI/AGI.


Well, both these guys and Pei are still v. much algorithmic in any 
reasonable sense of the word - still following *structures,* if v. 
sophisticated (and continuously changing) structures, of thought.


And what I am asserting is a  paradigm of a creative machine, which starts 
as, and is, NON-algorithmic and UNstructured  in all its activities, albeit 
that it acquires and creates a multitude of algorithms, or 
routines/structures, for *parts* of those  activities. For example, when you 
write a post,  nearly every word and a great many phrases and even odd 
sentences, will be automatically, algorithmically produced. But the whole 
post, and most paras will *not* be - and *could not* be.


A creative machine has infinite combinative potential. An algorithmic, 
programmed machine has strictly limited combinativity.


And a keyboard is surely the near perfect symbol of infinite, unstructured 
combinativity. It is being, and has been, used in endlessly creative ways - 
and is, along with the blank page and pencil, the central tool of our 
civilisation's creativity. Those randomly arranged letters - clearly 
designed to be infinitely recombined - are the antithesis of a programmed 
machine.


So however those guys account for that keyboard, I don't see them as in any 
way accounting for it in my sense, or in its true, full usage. But thanks 
for your comments. (Oh and I did understand re Bayes - I was and am still 
arguing he isn't valid in many cases, period).




Mike,

The reason I decided that what you are arguing for is essentially an
interactive model is this quote:

"But that is obviously only the half of it.Computers are obviously
much more than that - and  Turing machines. You just have to look at
them. It's staring you in the face. There's something they have that
Turing machines don't. See it? Terren?

They have -   a keyboard."

A keyboard is precisely what the interaction theorists are trying to
account for! Plus the mouse, the ethernet port, et cetera.

Moreover, your general comments fit into the model if interpreted
judiciously. You make a distinction between rule-based and creative
behavior; rule-based behavior could be thought of as isolated
processing of input (receive input, process without interference,
output result) while creative behavior is behavior resulting from
continual interaction with and exploration of the external world. Your
concept of organisms as "organizers" only makes sense when I see it in
this light: a human organizes the environment by interaction with it,
while a Turing machine is unable to do this because it cannot
explore/experiment/discover.

-Abram


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Terren:  > I agree in spirit with your basic criticisms regarding current AI 
and creativity. However, it must be pointed out that if you abandon 
determinism, you find yourself in the world of dualism, or worse.


Nah. One word (though it would take too long here to explain): 
nondeterministic programming.
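
[A minimal sketch of what "nondeterministic programming" can mean in the
loose sense used here - a program whose next step is chosen at run time
rather than fixed in advance. The strategies and the give-up clause below are
invented purely for illustration, not anyone's actual design:]

import random

def improvise(items):
    # The admissible moves are only roughly known in advance; which one is
    # tried, and whether the task is pursued at all, is decided at run time.
    strategies = [
        lambda xs: sorted(xs),                  # do it tidily
        lambda xs: list(reversed(xs)),          # do it backwards
        lambda xs: random.sample(xs, len(xs)),  # shuffle and see what happens
    ]
    if random.random() < 0.1:
        return None                             # abandon the task entirely
    return random.choice(strategies)(items)

# Two runs on the same input need not follow the same path or give the same result.
print(improvise([3, 1, 2]))
print(improvise([3, 1, 2]))

[random.choice is of course only a stand-in; a less toy-like version would
weigh the options against the situation rather than picking blindly.]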


Terren: you still need to have an explanation for how creativity emerges in 
either case, but in contrast to what you said before, some AI folks have 
indeed worked on this issue.


Oh, they've done loads of work, often fine work, i.e. produced impressive 
but 'hack' variations on themes, musical, artistic, scripting etc. But the 
people actually producing those "creative"/hack variations will agree, when 
pressed, that they are not truly creative. And actual AGI-ers, to repeat, 
AFAIK have not produced a single idea about how machines can be creative. 
Not even a proposal, however wrong. Please point to one.


P.S. Glad to see your evolutionary perspective includes the natural kind - I 
had begun to think, obviously wrongly, that it didn't. 







[agi] How to Guarantee Creativity...

2008-09-04 Thread Mike Tintner


Mike Tintner wrote:

And how to produce creativity is the central problem of AGI -
completely unsolved. So maybe a new approach/paradigm is worth at
least considering rather than more of the same? I'm not aware of a
single idea from any AGI-er past or present that directly addresses
that problem - are you?


Bryan: Mike, one of the big problems in computer science is the prediction of
genotypes from phenotypes in general problem spaces. So far, from what
I've learned, we haven't a way to "guarantee" that a resulting process
is going to be creative. So it's not going to be "solved" per se in the
traditional sense of "hey look, here's a foolproof equivalency of
creativity." I truly hope I am wrong. This is a good way to be wrong
about the whole thing, I must admit.

Bryan,

Thanks for comments. First, you definitely sound like you will enjoy and 
benefit from Kauffman's Reinventing the Sacred - v. much extending your 1st 
sentence.


Second, you have posed a fascinating challenge. How can one guarantee 
creativity? I was going to say: but of course not - you can only guarantee 
non-creativity, by using programs and rational systems. True creativity can 
be extremely laborious and involve literally "far-fetched" associations.


But actually, yes, I think you may be able to guarantee creativity with a 
high degree of probability. That is, low-level creativity. Not social 
creativity - creative associations that no one in society has thought of 
before. But personal creativity. Novel personal associations that, if not 
striking, fit the definition. Let's see. Prepare to conduct an experiment. I 
will show you a series of associations - you will quickly grasp the 
underlying principle - you must, *thinking visually*, continue freely 
associating with the last one (or, actually, any one). See what your mind 
comes up with - and let's judge the results. (Everyone else is encouraged to 
try this too -  in the interests of scientific investigation).


http://www.bearskinrug.co.uk/_articles/2005/09/16/doodle/hero.jpg

[Alternatively, simply start with an image of a snake, and freely, visually 
associate with that.]


P.S. You will notice, Bryan, that this test - these metamorphoses - are 
related to the nature of the evolution of new species from old.










Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Bryan,

You start v. constructively thinking how to test the non-programmed nature 
of  - or simply record - the actual writing of programs, and then IMO fail 
to keep going.


There have to be endless more precise ways than trying to look at their 
brain.


Verbal protocols.

Ask them to use the keyboard for everything - (how much do you guys use the 
keyboard vs say paper or other things?) - and you can automatically record 
key-presses.


If they use paper, find a surface that records the pen strokes.

Combine with a camera recording them.

Come on, you must be able to give me still more ways - there are multiple 
possible recording technologies, no?


Hasn't anyone done this in any shape or form? It might sound as if it would 
produce terribly complicated results, but my guess is that they would be 
fascinating just to look at (and compare technique) as well as analyse.



Bryan/MT:> Do you honestly think that you write programs in a programmed 
way?

That it's not an *art* pace Matt, full of hesitation, halts,
meandering, twists and turns, dead ends, detours etc? If "you have
to have some sort of program to start with", how come there is no
sign of that being true, in the creative process of programmers
actually writing programs?


Two notes on this one.

I'd like to see fMRI studies of programmers having at it. I've seen this
of authors, but not of programmers per-se. It would be interesting. But
this isn't going to work because it'll just show you lots of active
regions of the brain and what good does that do you?

Another thing I would be interested in showing to people is all of those
dead ends and turns that one makes when traveling down those paths.
I've sometimes been able to go fully into a recording session where I
could write about a few minutes of decisions for hours on end
afterwards, but it's just not efficient to getting the point across.
I've sometimes wanted to do this for web crawling, when I do my
browsing and reading, and at least somewhat track my jumps from page to
page and so on, or even in my own grammar and writing so that I can
make sure I optimize it :-) and so that I can see where I was going or
not going :-) but any solution that requires me to type even /more/
will be a sort of contradiction, since then I will have to type even
more, and more.

Bah, unused data in the brain should help work with this stuff. Tabletop
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Mike Tintner

Bryan,

How do you know the brain has a code? Why can't it be entirely 
"impression-istic" - a system for literally forming, storing and associating 
sensory impressions (including abstracted, simplified, hierarchical 
impressions of other impressions)?


1). FWIW some comments from a cortically knowledgeable robotics friend:

"The issue mentioned below is a major factor for die-hard card-carrying 
Turing-istas, and to me is also their greatest stumbling-block.


You called it a "code", but I see computation basically involves setting up 
a "model" or "description" of something, but many people think this is 
actually "synonomous" with the real-thing. It's not, but many people are in 
denial about this. All models involves tons of simplifying assumptions.


EG, XXX is adamant that the visual cortex performs sparse-coded [whatever 
that means] wavelet transforms, and not edge-detection. To me, a wavelet 
transform is just "one" possible - and extremely simplistic (meaning subject 
to myriad assumptions) - mathematical description of how some cells in the 
VC appear to operate.


Real biological systems are immensely more complex than our simple models. 
Eg, every single cell in the body contains the entire genome, and genes are 
being turned on+off continually during normal operation, and based upon an 
immense number of feedback loops in the cells, and not just during reproduction. On 
and on."


2) I vaguely recall de Bono having a model of an imprintable surface that 
was non-coded:


http://en.wikipedia.org/wiki/The_Mechanism_of_the_Mind

(But I think you may have to read the book. Forgive me if I'm wrong).

3) Do you know anyone who has thought of using or designing some kind of 
computer as an imprintable rather than just a codable medium? Perhaps that 
is somehow possible.


PS Go to bed. :)


Bryan/MT
:

I think this is a good important point. I've been groping confusedly
here. It seems to me computation necessarily involves the idea of
using a code (?). But the nervous system seems to me something
capable of functioning without a code - directly being imprinted on
by the world, and directly forming movements, (even if also involving
complex hierarchical processes), without any code. I've been
wondering whether computers couldn't also be designed to function
without a code in somewhat similar fashion. Any thoughts or ideas of
your own?


Hold on there -- the brain most certainly has "a code", if you will
remember the gene expression and the general neurophysical nature of it
all. I think partly the difference you might be seeing here is how much
more complex and grown the brain is in comparison to somewhat fragile
circuits and the ecological differences between the WWW and the
combined evolutionary history keeping your neurons healthy each day.

Anyway, because of the quantified nature of energy in general, the brain
must be doing something physical and "operating on a code", or i.e.
have an actual nature to it. I would like to see alternatives to this
line of reasoning, of course.

As for computers that don't have to be executing code all of the time.
I've been wondering about machines that could also imitate the
biological ability to recover from "errors" and not spontaneously burst
into flames when something goes wrong in the Source. Clearly there's
something of interest here.

- 







Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Mike Tintner
Thanks, Brad. My question is: all we know as a result of this is that the 
same cells that were somehow part of registering a sensory impression, are 
also part of recalling it? We don't know what exact part they play, do we?








Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Mike Tintner
Er sorry - my question is answered in the interesting Slashdot thread 
(thanks again):


"Past studies have shown how many neurons are involved in a single, simple 
memory. Researchers might be able to isolate a few single neurons "in the 
process of summoning a memory", but that is like saying that they have 
isolated a few water molecules in the runoff of a giant hydroelectric dam. 
The practical utility of this is highly questionable."  (and much more.. 
good thread) 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

OK, I'll bite: what's nondeterministic programming if not a contradiction?

Again - v. briefly - it's a reality - nondeterministic programming is a 
reality, so there's no material, mechanistic, software problem in getting a 
machine to decide either way. The only problem is a logical one of doing it 
for sensible reasons. And that's the long part - there is a continuous 
stream of sensible reasons, as there are for current nondeterministic 
computer choices.


Yes, strictly, a nondeterministic *program* can be regarded as a 
contradiction - i.e. a structured *series* of instructions to decide freely. 
The way the human mind is "programmed" is that we are not only free, and 
have to, *decide* either way about certain decisions, but we are also free 
to *think* about it - i.e. to decide metacognitively whether and how we 
decide at all - we continually "decide", for example, to put off the 
decision till later.


So the simple reality of being as free to decide and think as you are, is 
that when you sit down to engage in any task, like write a post, essay, or 
have a conversation, or almost literally anything, there is no guarantee 
that you will start, or continue to the 2nd, 3rd, 4th step, let alone 
complete it. You may jack in your post more or less immediately.  This is at 
once the bane and the blessing of your life, and why you have such 
extraordinary problems finishing so many things. Procrastination.


By contrast, all deterministic/programmed machines and computers are 
guaranteed to complete any task they begin. (Zero procrastination or 
deviation). Very different kinds of machines to us. Very different paradigm. 
(No?)


I would say then that the human mind is strictly not so much 
nondeterministically "programmed" as "briefed". And that's how an AGI will 
have to function. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

Abram:> In that case I do not see how your view differs from simplistic

dualism, as Terren cautioned. If your goal is to make a creativity
machine, in what sense would the machine be non-algorithmic? Physical
random processes?



Abram,

You're operating within a philosophical paradigm that says all actions and 
problem-solving must be preprogrammed. Nothing else is possible. That ignores 
the majority of real life problems where no program is possible, period.


"Sometimes the best plan is no plan"  If you're confronted with the task of 
finding something in a foreign territory, you simply don't (and couldn't) 
have the luxury of a program.


All you have is a rough idea, as opposed to an algorithm, of the sort of 
things you can do. You know roughly what you're looking for - an object 
somewhere in that territory. You know roughly how to "travel" and put one 
foot in front of the other and avoid obstacles and pick things up etc.


(Let's say - you have to find a key that has been lost somewhere in a 
house).


Well you certainly don't have an algorithm for finding a lost key in a 
house. In fact, if you or anyone would care to spend 5 mins on this problem, 
you would start to realise that no algorithm is possible. Check out 
Kauffman's interview on edge.com for similar problems & arguments.
So what do/can you do? Make it up as you go along. Start somewhere and keep 
going, and after a while if that doesn't work, try somewhere and something 
else...


But there's no algorithm for this. Just as there is, or was,  no algorithm 
for your putting the pieces of a jigsaw puzzle together (a much simpler, 
more tightly defined problem).  You just got stuck in. Somewhere. Anywhere 
reasonable.


Algorithms, from a human POV, are for literal people who have to "do things 
by the book" - people with a "compulsive obsessional disorder" - who can't 
bear to confront a blank page. :) V. useful *after* you've solved a problem, 
but not in the beginning.


There are no physical, computational, mechanical reasons why machines can't 
be designed on these principles - to proceed with rough ideas of what to do, 
freely consulting and combining options and looking around for fresh ones, 
as they go along, rather than following a preprogrammed list.


P.S. Nothing in this is strictly "random" - as in a narrow AI, randomly, 
blindly, working its way through a preprogrammed list. You only try options 
that are appropriate -  routes that appear likely to lead to your goal. I 
would call this "unstructured" but not (blindly) random thinking. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

MT:By contrast, all deterministic/programmed machines and computers are

guaranteed to complete any task they begin.


Will:If only such could be guaranteed! We would never have system hangs,
dead locks. Even if it could be made so, computer systems would not
always want to do so.

Will,

That's a legalistic, not a valid objection (although heartfelt!). In the 
above case, the computer is guaranteed to hang - and it does, strictly, 
complete its task.


What's happened is that you have had imperfect knowledge of the program's 
operations. Had you known more, you would have known that it would hang.


Were your computer like a human mind, it would have been able to say (as 
you/we all do) - "well if that part of the problem is going to be difficult, 
I'll ignore it"  or.. "I'll just make up an answer..". or "by God I'll keep 
trying other ways until I do solve this.." or... ".."  or ... 
Computers, currently, aren't free thinkers. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner


Abram,

I don't understand why/how I need to argue an alternative - please explain. 
If it helps, a deterministic, programmed machine can, at any given point, 
only follow one route through a given territory or problem space or maze - 
even if surprising & *appearing* to halt/deviate from the plan -   to the 
original, less-than-omniscient-of-what-he-hath-wrought programmer. (A 
fundamental programming problem, right?) A creative free machine, like a 
human, really can follow any of what may be a vast range of routes - and you 
really can't predict what it will do or, at a basic level, be surprised by 
it.



Mike,

Will's objection is not quite so easily dismissed. You need to argue
that there is an alternative, not just that Will's is more of the
same.

--Abram



Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Will,

Yes, humans are manifestly a RADICALLY different machine paradigm - if you 
care to stand back and look at the big picture.


Employ a machine of any kind and in general, you know what you're getting - 
some glitches (esp. with complex programs) etc sure - but basically, in 
general,  it will do its job.


Humans are "only human, not a machine." Employ one of those, incl. yourself, 
and, by comparison, you have only a v. limited idea of what you're getting - 
whether they'll do the job at all, to what extent, how well. Employ a 
programmer, a plumber etc etc.. "Can you get a good one these days?..." 
VAST difference.


And that's the negative side of our positive side - the fact that we're 1) 
supremely adaptable, and 2) can tackle those problems that no machine or 
current "AGI"  - (actually of course, there is no such thing at the mo, only 
pretenders) - can even *begin* to tackle.


Our unreliability ..

That, I suggest, only comes from having no set structure - no computer 
program - no program of action in the first place. ("Hey, good  idea, who 
needs a program?")


Here's a simple, extreme example.

"Will,  I want you to take up to an hour, and come up with a dance, called 
the "Keyboard Shuffle." (A very "ill-structured" problem.)


Hey, you can do that. You can tackle a seriously ill-structured problem. You 
can embark on an activity you've never done before, presumably had no 
training for, have no structure for, & yet you will, if cooperative, come up 
with something - cobble together a session of that activity, and an 
end-product, an actual dance. May be shit, but it'll be a dance.


And that's only an extreme example of how you approach EVERY activity. You 
similarly don't have a structure for your next hour[s], if you're writing an 
essay, or a program, or spending time watching TV, flipping channels. You may 
quickly *adopt* or *form* certain structures/ routines. But they only go 
part way, and you do have to adopt and/or create them.


Now, I assert,  that's what an AGI is - a machine that has no programs, (no 
preset, complete structures for any activities), designed to tackle 
ill-structured problems by creating and adopting structures, not 
automatically following ones that have been laboured over for ridiculous 
amounts of time by human programmers offstage.


And that in parallel, though in an obviously more constrained way, is what 
every living organism is - an extraordinary machine that builds itself 
adaptively and flexibly, as it goes along  -  Dawkins' famous plane that 
builds itself in mid-air. Just as we construct our activities in mid-air. 
Also a very different machine paradigm to any we have at the mo  (although 
obviously lots of people are currently trying to design/understand such 
self-building machines).


P.S. The irony is that scientists and rational philosophers, faced with the 
extreme nature of human imperfection - our extreme fallibility (in the sense 
described above - i.e. liable to "fail"/give up/procrastinate at any given 
activity at any point in a myriad of ways) - have dismissed it as, 
essentially, down to bugs in the system. Things that can be fixed.


AGI-ers have the capacity like no one else to see and truly appreciate that 
such fallibility = highly desirable adaptability and that humans/animals 
really are fundamentally different machines.


P.P.S.  BTW that's the proper analogy for constructing an AGI - not 
inventing the plane (easy-peasy), but inventing the plane that builds itself 
in mid-air, (whole new paradigm of machine- and mind- invention).


Will:>> MT: By contrast, all deterministic/programmed machines and computers 
are guaranteed to complete any task they begin.


Will:If only such could be guaranteed! We would never have system hangs,
dead locks. Even if it could be made so, computer systems would not
always want to do so.

Will,

That's a legalistic, not a valid, objection (although heartfelt!). In the
above case, the computer is guaranteed to hang - and it does, strictly speaking,
complete its task.


Not necessarily: the task could be interrupted and that process stopped
or paused indefinitely.


What's happened is that you have had imperfect knowledge of the program's
operations. Had you known more, you would have known that it would hang.


If it hung because of multi-process issues, you would need perfect
knowledge of the environment to know the possible timing issues as
well.
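
(For concreteness, a minimal Python sketch of the kind of timing hazard Will
means - two workers taking the same two locks in opposite order. Whether it
deadlocks depends on how the threads happen to be scheduled, so the program
text alone doesn't tell you; the names and sleeps here are invented for
illustration.)

import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:
        time.sleep(0.1)   # widen the window in which worker_2 grabs lock_b
        with lock_b:      # blocks forever if worker_2 already holds lock_b
            print("worker_1 finished")

def worker_2():
    with lock_b:
        time.sleep(0.1)
        with lock_a:      # blocks forever if worker_1 already holds lock_a
            print("worker_2 finished")

t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked?", t1.is_alive() and t2.is_alive())   # almost always True here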


Were your computer like a human mind, it would have been able to say (as
you/we all do) - "well, if that part of the problem is going to be difficult,
I'll ignore it" or "I'll just make up an answer..." or "by God, I'll keep
trying other ways until I do solve this..." or "..." or ...
Computers, currently, aren't free thinkers.



Computers aren't free thinkers, but that does not follow from any
inability to switch, cancel, pause, restart or modify tasks - all
of which they can do admirably. They just don't tend to do so, because
they aren't smart 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Sorry - para "Our unreliability .."  should have continued..

"Our unreliabilty is the negative flip-side of our positive ability to stop 
an activity at any point, incl. the beginning and completely change tack/ 
course or whole approach, incl. the task itself, and even completely 
contradict ourself." 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com

