Re: [agi] just a thought

2009-01-14 Thread Valentina Poletti
Cool,

this idea has already been applied successfully to some areas of AI, such as
ant-colony algorithms and swarm intelligence algorithms. But I was thinking
that it would be interesting to apply it at a high level. For example,
consider that you create the best AGI agent you can come up with and,
instead of running just one, you create several copies of it (perhaps with
slight variations), and you initiate each in a different part of your
reality or environment for such agents, after letting them have the ability
to communicate. In this way whenever one such agents learns anything
meaningful he passes the information to all other agents as well, that is,
it not only modifies its own policy but it also affects the other's to some
extent (determined by some constant or/and by how much the other agent likes
this one, that is how useful learning from it has been in the past and so
on). This way not only each agent would learn much faster, but also the
agents could learn to use this communication ability to their advantage to
ameliorate. I just think it would be interesting to implement this, not that
I am capable of right now.


On Wed, Jan 14, 2009 at 2:34 PM, Bob Mottram fuzz...@gmail.com wrote:

 2009/1/14 Valentina Poletti jamwa...@gmail.com:
  Anyways my point is, the reason why we have achieved so much technology,
 so
  much knowledge in this time is precisely the we, it's the union of
 several
  individuals together with their ability to communicate with one-other
 that
  has made us advance so much. In a sense we are a single being with
 millions
  of eyes, ears, hands, brains, which alltogether can create amazing
 things.
  But take any human being alone, isolate him/her from any contact with any
  other human being and rest assured he/she will not achieve a single
 artifact
  of technology. In fact he/she might not survive long.


 Yes.  I think Ben made a similar point in The Hidden Pattern.  People
 studying human intelligence - psychologists, psychiatrists, cognitive
 scientists, etc - tend to focus narrowly on the individual brain, but
 human intelligence is more of an emergent networked phenomena
 populated by strange meta-entities such as archetypes and memes.  Even
 the greatest individuals from the world of science or art didn't make
 their achievements in a vacuum, and were influenced by earlier works.

 Years ago I was chatting with someone who was about to patent some
 piece of machinery.  He had his name on the patent, but was pointing
 out that it's very difficult to be able to say exactly who made the
 invention - who was the guiding mind.  In this case many individuals
 within his company had some creative input, and there was really no
 one inventor as such.  I think many human-made artifacts are like
 this.


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


[agi] just a thought

2009-01-13 Thread Valentina Poletti
Not in reference to any specific current discussion,

I find it interesting that when people talk of human like intelligence in
the realm of AGI, they refer to the ability of a human individual, or human
brain if you like. It just occurred to me that human beings are not that
intelligent. Well, of course we are super intelligent compared to a frog (as
some would say) but then again a frog is super intelligent compared to an
aunt.


Anyways my point is, the reason why we have achieved so much technology, so
much knowledge in this time is precisely the we, it's the union of several
individuals together with their ability to communicate with one-other that
has made us advance so much. In a sense we are a single being with millions
of eyes, ears, hands, brains, which alltogether can create amazing things.
But take any human being alone, isolate him/her from any contact with any
other human being and rest assured he/she will not achieve a single artifact
of technology. In fact he/she might not survive long.


So that's why I think it is important to put emphasis on this when talking
about super-human intelligence.


That was my 2-in-the-morning thought. I guess I should sleep now.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


[agi] a mathematical explanation of AI algorithms?

2008-10-08 Thread Valentina Poletti
And here is your first question on AGI.. actually rather on AI. It's not so
trivial though.
Some researchers are telling me that no-one has actually figured out how AI
algorithms, such as ANNs and genetic algorithms work.. in other words there
is no mathematical explanation to prove their behavior. I am simply not
happy with this answer. I always look for mathematical explanations.
Particularly, I am not talking about something as complex as AGI, but
something as simple as a multi-layer perceptron. Anybody knows anything that
contradicts this?

Thanks



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] Free AI Courses at Stanford

2008-09-20 Thread Valentina Poletti
The lectures are pretty good in quality, compared with other major
university on-line lectures (such as MIT and so forth) I followed a couple
of them and definitely recommend. You learn almost as much as in a real
course.

On Thu, Sep 18, 2008 at 2:19 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Hi List,

 Also interesting to some of you may be VideoLectures.net, which offers
 lots of interesting lectures. Although not all are of Stanford
 quality, still I found many interesting lectures by respected
 lecturers. And there are LOTS (625 at the moment) of lectures about
 Machine Learning... :)

 http://videolectures.net/Top/Computer_Science/

 Algorithmic Information Theory (2)
 Algorithms and Data Structures (4)
 Artificial Intelligence (6)
 Bioinformatics (45)
 Chemoinformatics (1)
 Complexity Science (24)
 Computer Graphics (2)
 Computer Vision (41)
 Cryptography and Security (4)
 Databases (1)
 Data Mining (56)
 Data Visualisation (18)
 Decision Support (3)
 Evolutionary Computation (3)
 Fuzzy Logic (4)
 Grid Computing (1)
 Human Computer Interaction (10)
 Image Analysis (47)
 Information Extraction (30)
 Information Retrieval (40)
 Intelligent Agents (4)
 Interviews (54)
 Machine Learning (625)
 Natural Language Processing  (9)
 Network Analysis (27)
 Robotics (23)
 Search Engines (5)
 Semantic Web (175)
 Software and Tools (12)
 Spatial Data Structures (1)
 Speech Analysis (9)
 Text Mining (37)
 Web Mining (19)
 Web Search (2)



 On Thu, Sep 18, 2008 at 8:52 AM, Brad Paulsen [EMAIL PROTECTED]
 wrote:
  Hey everyone!
 
 ...
 
  Links to all the courses being offered are here:
  http://www.deviceguru.com/2008/09/17/stanford-frees-cs-robotics-courses/
 
  Cheers,
  Brad
 


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] Artificial humor

2008-09-11 Thread Valentina Poletti
I think it's the surprize that makes you laugh actually, not physical
pain in other people. I find myself laughing at my own mistakes often
- not because they hurt (in fact if they did hurt they wouldn't be
funny) but because I get surprized by them.

Valentina

On 9/10/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 On Wed, Sep 10, 2008 at 2:39 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:
Without a body, you couldn't understand the joke.

 False. Would you also say that without a body, you couldn't understand
 3D space ?

 BTW it's kind of sad that people find it funny when others get hurt. I
 wonder what are the mirror neurons doing at the time. Why so many kids
 like to watch the Tom  Jerry-like crap?

 Jiri




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] any advice

2008-09-09 Thread Valentina Poletti
I am applying for a research program and I have to chose between these two
schools:

Dalle Molle Institute of Artificial Intelligence
University of Verona (Artificial Intelligence dept)



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment.. P.S.

2008-09-04 Thread Valentina Poletti
That's if you aim at getting an AGI that is intelligent in the real world. I
think some people on this list (incl Ben perhaps) might argue that for now -
for safety purposes but also due to costs - it might be better to build an
AGI that is intelligent in a simulated environment.

Ppl like Ben argue that the concept/engineering aspect of intelligence
is *independent
of the type of environment*. That is, given you understand how to make it in
a virtual environment you can then tarnspose that concept into a real
environment more safely.

Some other ppl on the other hand believe intelligence is a property of
humans only. So you have to simulate every detail about humans to get that
intelligence. I'd say that among the two approaches the first one (Ben's) is
safer and more realistic.

I am more concerned with the physics aspect of the whole issue I guess.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-04 Thread Valentina Poletti
That sounds like a useful purpose. Yeh, I don't believe in fast and quick
methods either.. but also humans tend to overestimate their own
capabilities, so it will probably take more time than predicted.

On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
  Got ya, thanks for the clarification. That brings up another question.
 Why
  do we want to make an AGI?
 
 

 To understand ourselves as intelligent agents better? It might enable
 us to have decent education policy, rehabilitation of criminals.

 Even if we don't make human like AGIs the principles should help us
 understand ourselves, just as optics of the lens helped us understand
 the eye and aerodynamics of wings helps us understand bird flight.

 It could also gives us more leverage, more brain power on the planet
 to help solve the planets problems.

 This is all predicated on the idea that fast take off is pretty much
 impossible. It is possible then all bets are off.

 Will


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] What is Friendly AI?

2008-09-04 Thread Valentina Poletti
On 8/31/08, Steve Richfield [EMAIL PROTECTED] wrote:


  Protective mechanisms to restrict their thinking and action will only
 make things WORSE.



Vlad, this was my point in the control e-mail, I didn't express it quite as
clearly, partly because coming from a different background I use a slightly
different language.

Also, Steve made another good point here: loads of people at any moment do
whatever they can to block the advancement and progress of human beings as
it is now. How will *those* people react to a progress as advanced as AGI?
That's why I keep stressing the social factor in intelligence as very
important part to consider.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
I agree with Pei in that a robot's experience is not necessarily more real
than that of a, say, web-embedded agent - if anything it is closer to the *
human* experience of the world. But who knows how limited our own sensory
experience is anyhow. Perhaps a better intelligence would comprehend the
world better through a different emboyment.

However, could you guys be more specific regarding the statistical
differences of different types of data? What kind of differences are you
talking about specifically (mathematically)? And what about the differences
at the various levels of the dual-hierarchy? Has any of your work or
research suggested this hypothesis, if so which?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] What Time Is It? No. What clock is it?

2008-09-04 Thread Valentina Poletti
Great articles!

On 9/4/08, Brad Paulsen [EMAIL PROTECTED] wrote:

 Hey gang...

 It's Likely That Times Are Changing

 http://www.sciencenews.org/view/feature/id/35992/title/It%E2%80%99s_Likely_That_Times_Are_Changing
 A century ago, mathematician Hermann Minkowski famously merged space with
 time, establishing a new foundation for physics;  today physicists are
 rethinking how the two should fit together.

 A PDF of a paper presented in March of this year, and upon which the
 article is based, can be found at http://arxiv.org/abs/0805.4452.  It's a
 free download.  Lots of equations, graphs, oh my!

 Cheers,
 Brad




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Valentina Poletti
Programming definitely feels like an art to me - I get the same feelings as
when I am painting. I always wondered why.

On the phylosophical side in general technology is the ability of humans to
adapt the environment to themselves instead of the opposite - adapting to
the environment. The environment acts on us and we act on it - we absorb
information from it and we change it while it changes us.

When we want to step further and create an AGI I think we want to
externalize the very ability to create technology - we want the environment
to start adapting to us by itself, spontaneously by gaining our goals.

Vale



On 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Will:You can't create a program out of thin air. So you have to have some
 sort of program to start with

 Not out of thin air.Out of a general instruction and desire[s]/emotion[s].
 Write me a program that will contradict every statement made to it. Write
 me a single program that will allow me to write video/multimedia
 articles/journalism fast and simply. That's what you actually DO. You start
 with v. general briefs rather than any detailed list of instructions, and
 fill them  in as you go along, in an ad hoc, improvisational way -
 manifestly *creating* rather than *following* organized structures of
 behaviour in an initially disorganized way.

 Do you honestly think that you write programs in a programmed way? That
 it's not an *art* pace Matt, full of hesitation, halts, meandering, twists
 and turns, dead ends, detours etc?  If you have to have some sort of
 program to start with, how come there is no sign  of that being true, in
 the creative process of programmers actually writing programs?

 Do you think that there's a program for improvising on a piano [or other
 form of keyboard]?  That's what AGI's are supposed to do - improvise. So
 create one that can. Like you. And every other living creature.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
On 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:



 However, could you guys be more specific regarding the statistical
 differences of different types of data? What kind of differences are you
 talking about specifically (mathematically)? And what about the differences
 at the various levels of the dual-hierarchy? Has any of your work or
 research suggested this hypothesis, if so which?



 Sorry I've been fuzzy on this ... I'm engaging in this email conversation
 in odd moments while at a conference (Virtual Worlds 2008, in Los
 Angeles...)

 Specifically I think that patterns interrelating the I/O stream of system S
 with the relation between the system S's embodiment and its environment, are
 important.  It is these patterns that let S build a self-model of its
 physical embodiment, which then leads S to a more abstract self-model (aka
 Metzinger's phenomenal self)

 So in short you are saying that the main difference between I/O data by
a motor embodyed system (such as robot or human) and a laptop is the ability
to interact with the data: make changes in its environment to systematically
change the input?

  Considering patterns in the above category, it seems critical to have a
 rich variety of patterns at varying levels of complexity... so that the
 patterns at complexity level L are largely approximable as compositions of
 patterns at complexity less than L.  This way a mind can incrementally build
 up its self-model via recognizing slightly complex self-related patterns,
 then acting based on these patterns, then recognizing somewhat more complex
 self-related patterns involving its recent actions, and so forth.


Definitely.

  It seems that a human body's sensors and actuators are suited to create
 and recognize patterns of the above sort whereas the sensors and actuators
 of a

 laptop w/o network cables or odd peripherals are not...


Agree.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread Valentina Poletti
So it's about money then.. now THAT makes me feel less worried!! :)

That explains a lot though.

On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  Got ya, thanks for the clarification. That brings up another question.
 Why do we want to make an AGI?

 I'm glad somebody is finally asking the right question, instead of skipping
 over the specification to the design phase. It would avoid a lot of
 philosophical discussions that result from people having different ideas of
 what AGI should do.

 AGI could replace all human labor, worth about US $2 to $5 quadrillion over
 the next 30 years. We should expect the cost to be of this magnitude, given
 that having it sooner is better than waiting.

 I think AGI will be immensely complex, on the order of 10^18 bits,
 decentralized, competitive, with distributed ownership, like today's
 internet but smarter. It will converse with you fluently but know too much
 to pass the Turing test. We will be totally dependent on it.

 -- Matt Mahoney, [EMAIL PROTECTED]






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-09-01 Thread Valentina Poletti
Define crazy, and I'll define control :)

---
This is crazy. What do you mean by breaking the laws of information
theory? Superintelligence is a completely lawful phenomenon, that can
exist entirely within the laws of physics as we know them and
bootrapped by technology as we know it. It might discover some
surprising things, like a way to compute more efficiently than we
currently think physics allows, but that would be extra and certainly
that won't do impossible-by-definition things like breaking
mathematics.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Valentina Poletti
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?



On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  An AGI will not design its goals. It is up to humans to define the goals
 of an AGI, so that it will do what we want it to do.

 Unfortunately, this is a problem. We may or may not be successful in
 programming the goals of AGI to satisfy human goals. If we are not
 successful, then AGI will be useless at best and dangerous at worst. If we
 are successful, then we are doomed because human goals evolved in a
 primitive environment to maximize reproductive success and not in an
 environment where advanced technology can give us whatever we want. AGI will
 allow us to connect our brains to simulated worlds with magic genies, or
 worse, allow us to directly reprogram our brains to alter our memories,
 goals, and thought processes. All rational goal-seeking agents must have a
 mental state of maximum utility where any thought or perception would be
 unpleasant because it would result in a different state.

 -- Matt Mahoney, [EMAIL PROTECTED]




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-28 Thread Valentina Poletti
All these points you made are good points, and I agree with you. However,
what I was trying to say - and I realized I did not express myself too well,
is that, from what I understand I see a paradox in what Eliezer is trying to
do. Assuming that we agree on the definition of AGI - a being far more
intelligent than human beings, and we agree on the definition of
intelligence - ability to achieve goals. He would like to build an AGI, but
he would also like to ensure human safety. Although I don't think this will
be a problem for limited forms of AI, this does imply that some control is
necessarily given to its paramenters, specifically to its goal system. We
*are* controlled in that sense, contrary to what you say, by our genetic
code. That is why you will never voluntarily place your hand in the fire, as
long as your pain scheme is genetically embedded correctly. As I mentioned,
exceptions to this control scheme are often imprisoned, sometimes killed -
in order not to endanger the human species. However, just because genetic
limitations are not enforced visibly, that does not exclude them from being
a kind of control on our behavior and actions. Genetic limitations on their
part are 'controlled' by the scope of our species, that to evolve and to
preserve itself. And that in turn is controlled by laws of thermodynamics.
Now the problem is, we often overestimate the amount of control we have on
our environemnt, and *that* is a human bias, embedded in us and necessary
for our success.

If you can break the laws of thermodynamics and information theory (which I
assume is what Eliezer is trying to do), then yes, perhaps you can create a
real AGI that will not try to preserve itself or to ameliorate, and
therefore its only goals will be those of preserving and ameliorating the
human species. But until we can do that, to me, is an illusion.

Let me know if I missed something or am misunderstanding anything.


On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Mon, Aug 25, 2008 at 6:23 PM, Valentina Poletti [EMAIL PROTECTED]
 wrote:
 
  On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
  Why would anyone suggest creating a disaster, as you pose the question?
 
 
  Also agree. As far as you know, has anyone, including Eliezer, suggested
 any
  method or approach (as theoretical or complicated as it may be) to solve
  this problem? I'm asking this because the Singularity has confidence in
  creating a self-improving AGI in the next few decades, and, assuming they
  have no intention to create the above mentioned disaster.. I figure
 someone
  must have figured some way to approach this problem.

 I see no realistic alternative (as in with high probability of
 occurring in actual future) to creating a Friendly AI. If we don't, we
 are likely doomed one way or another, most thoroughly through
 Unfriendly AI. As I mentioned, one way to see Friendly AI is as a
 second chance substrate, which is a first thing to do to ensure any
 kind of safety from fatal or just vanilla bad mistakes in the future.
 Of course, establishing a dynamics that know a mistake and when to
 recover or prevent or guide is a tricky part.

 --
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-28 Thread Valentina Poletti
Lol..it's not that impossible actually.


On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner [EMAIL PROTECTED]wrote:

  Valentina:In other words I'm looking for a way to mathematically define
 how the AGI will mathematically define its goals.
 Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
 free-form deformation  etc etc
  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Valentina Poletti
Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a a theory behind the actual R itself. These approaches
(correnct me if I'm wrong) give an r-function for granted and work from
that. In real life that is not the case though. What I'm looking for is how
the AGI will create that function. Because the AGI is created by humans,
some sort of direction will be given by the humans creating them. What kind
of direction, in mathematical terms, is my question. In other words I'm
looking for a way to mathematically define how the AGI will mathematically
define its goals.

Valentina


On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  I was wondering why no-one had brought up the information-theoretic
 aspect of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy
 of a rational goal seeking agent in an unknown computable environment is
 AIXI: to guess that the environment is simulated by the shortest program
 consistent with observation so far [1]. Legg and Hutter also propose as a
 measure of universal intelligence the expected reward over a Solomonoff
 distribution of environments [2].

 These have profound impacts on AGI design. First, AIXI is (provably) not
 computable, which means there is no easy shortcut to AGI. Second, universal
 intelligence is not computable because it requires testing in an infinite
 number of environments. Since there is no other well accepted test of
 intelligence above human level, it casts doubt on the main premise of the
 singularity: that if humans can create agents with greater than human
 intelligence, then so can they.

 Prediction is central to intelligence, as I argue in [3]. Legg proved in
 [4] that there is no elegant theory of prediction. Predicting all
 environments up to a given level of Kolmogorov complexity requires a
 predictor with at least the same level of complexity. Furthermore, above a
 small level of complexity, such predictors cannot be proven because of Godel
 incompleteness. Prediction must therefore be an experimental science.

 There is currently no software or mathematical model of non-evolutionary
 recursive self improvement, even for very restricted or simple definitions
 of intelligence. Without a model you don't have friendly AI; you have
 accelerated evolution with AIs competing for resources.

 References

 1. Hutter, Marcus (2003), A Gentle Introduction to The Universal
 Algorithmic Agent {AIXI},
 in Artificial General Intelligence, B. Goertzel and C. Pennachin eds.,
 Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

 2. Legg, Shane, and Marcus Hutter (2006),
 A Formal Measure of Machine Intelligence, Proc. Annual machine
 learning conference of Belgium and The Netherlands (Benelearn-2006).
 Ghent, 2006.  http://www.vetta.org/documents/ui_benelearn.pdf

 3. http://cs.fit.edu/~mmahoney/compression/rationale.html

 4. Legg, Shane, (2006), Is There an Elegant Universal Theory of
 Prediction?,
 Technical Report IDSIA-12-06, IDSIA / USI-SUPSI,
 Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno,
 Switzerland.
 http://www.vetta.org/documents/IDSIA-12-06-1.pdf

 -- Matt Mahoney, [EMAIL PROTECTED]


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Valentina Poletti
Vlad, Terren and all,

by reading your interesting discussion, this saying popped in my mind..
admittedly it has little to do with AGI but you might get the point anyhow:

An old lady used to walk down a street everyday, and on a tree by that
street a bird sang beautifully, the sound made her happy and cheerful and
she was very thankful for that. One day she decided to catch the bird and
place it into a cage, so she could always have it singing for her.
Unfortunately for her, the bird got sad in the cage and stopped singing...
thus taking away her cheer as well.

Well, the story has a different purpose, but one can see a moral that
connects to this argument. Control is an illusion. It takes away the very
nature of what we are trying to control.

My point is that by following Eliezer's approach we might never get an AGI.
Intelligence, as I defined, is the ability to reach goals and those goals
(as Terren pinted out) must somehow have to do with self-preservation if the
system itself is not at equilibrium.
Not long ago, I was implementing the model for a bio-physics research in New
York based on non-linear systems far from equilibrium and this became pretty
evident to me. If any system is kept far from equilibrium, it has the
tendency to form self-preserving entities, given enough time.

I also have a nice definition of friendliness: firstly note that we have
goals both as individuals and as a species (these goals are inherent in the
species and come from self-preservation -see above). The goals of the
individuals and species must somehow match, i.e. not be in significant
conflict, least the individual be considered a criminal, harmful. By
friendliness is meant creating an AGI which will follow the goals of the
human species rather than its own goals.

Valentina



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: I Made a Mistake

2008-08-25 Thread Valentina Poletti
Chill down Jim, he took it back.

On 8/24/08, Jim Bromer [EMAIL PROTECTED] wrote:

 Intolerance of another person's ideas through intimidation or ridicule
 is intellectual repression.  You won't elevate a discussion by
 promoting a program anti-intellectual repression.  Intolerance of a
 person for his religious beliefs is a form of intellectual
 intolerance



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
In other words, Vladimir, you are suggesting that an AGI must be at some
level controlled from humans, therefore not 'fully-embodied' in order to
prevent non-friendly AGI as the outcome.

Therefore humans must somehow be able to control its goals, correct?

Now, what if controlling those goals would entail not being able to create
an AGI, would you suggest we should not create one, in order to avoid the
disastrous consequences you mentioned?

Valentina



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti [EMAIL PROTECTED]
 wrote:
  In other words, Vladimir, you are suggesting that an AGI must be at some
  level controlled from humans, therefore not 'fully-embodied' in order to
  prevent non-friendly AGI as the outcome.

 Controlled in Friendliness sense of the word. (I still have no idea
 what embodied refers to, now that you, me and Terren used it in
 different senses, and I recall reading a paper about 6 different
 meanings of this word in academic literature, none of them very
 useful).


Agree

 Therefore humans must somehow be able to control its goals, correct?
 
  Now, what if controlling those goals would entail not being able to
 create
  an AGI, would you suggest we should not create one, in order to avoid the
  disastrous consequences you mentioned?
 

 Why would anyone suggest creating a disaster, as you pose the question


Also agree. As far as you know, has anyone, including Eliezer, suggested any
method or approach (as theoretical or complicated as it may be) to solve
this problem? I'm asking this because the Singularity has confidence in
creating a self-improving AGI in the next few decades, and, assuming they
have no intention to create the above mentioned disaster.. I figure someone
must have figured some way to approach this problem.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Question, career related

2008-08-22 Thread Valentina Poletti
Thanks Rui

On 8/21/08, Rui Costa [EMAIL PROTECTED] wrote:

 Hi,

 Here: http://delicious.com/h0pe/phds
 You can find a list of Doctoral programs related with computational
 neurosciences, cognitive sciences and artificial (general) intelligence that
 I have been saving since some time ago.

 Hope that you can find this useful.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-22 Thread Valentina Poletti
Thanks Vlad, I read all that stuff plus other Eliezer papers. They don't
answer my question: I am asking what is the use of a non-embodied AGI, given
it would necessarily have a different goal system from that of humans, I'm
not asking how to make any AGI friendly - that is extremely difficult.


On 8/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Thu, Aug 21, 2008 at 5:33 PM, Valentina Poletti [EMAIL PROTECTED]
 wrote:
  Sorry if I'm commenting a little late to this: just read the thread. Here
 is
  a question. I assume we all agree that intelligence can be defined as the
  ability to achieve goals. My question concerns the establishment of those
  goals. As human beings we move in a world of limitations (life span,
 ethical
  laws, etc.) and have inherent goals (pleasure vs pain) given by
 evolution.
  An AGI in a different embodyment might not have any of that, just a pure
  meta system of obtaining goals, which I assume, we partly give the AGI
 and
  partly it establishes. Now, as I understand, the point of Singularity is
  that of building an AGI more intelligent than humans so it could solve
  problems for us that we cannot solve. That entails that the goal system
 of
  the AGI and ours must be interconnected somehow. I find it difficult
  to understand how that can be achieved with an AGI with a different type
 of
  embodyment. I.e. planes are great in achieving flights, but are quite
  useless to birds as their goal system is quite different. Can anyone
  clarify?
 

 This is the question of Friendly AI: how to construct AGI that are
 good to have around, that are a right thing to launch Singularity
 with, what do we mean by goals, what do we want AGI to do and how to
 communicate this in implementation of AGI. Read CFAI (
 http://www.singinst.org/upload/CFAI/index.html ) and the last arc of
 Eliezer's posts on Overcoming Bias to understand what the problem is
 about. This is a tricky question, not in the least because everyone
 seems to have a deep-down intuitive confidence that they understand
 what the problem is and how to solve it, out of hand, without
 seriously thinking about it. It takes much reading to even get what
 the question is and why it won't be answered along the way, as AGI
 itself gets understood better, or by piling lots of shallow rules,
 hoping that AGI will construct what we want from these rules by the
 magical power of its superior intelligence.

 For example, inherent goals (pleasure vs pain) given by evolution
 doesn't even begin cut it, leading the investigation in the wrong
 direction. Hedonistic goal is answered by the universe filled with
 doped humans, and it's certainly not what is right, no more than a
 universe filled with paperclips.

 --
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-22 Thread Valentina Poletti
 Jim,

I was wondering why no-one had brought up the information-theoretic aspect
of this yet. Are you familiar at all with the mathematics behind such a
description of AGI? I think it is key so I'm glad someone else is studying
that as well.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-22 Thread Valentina Poletti
Ben,

Being one of those big-headed children myself.. I have just a peculiar
comment. You probably know this but human intelligence is not limited to the
size of the human skull. That is why communication and social skills are
such important keys to intelligence. An individual by himself can do very
little in society or in this world. Even the most brilliant people wouldn't
have achieved what they have achieved without great teachers, great
disciples, followers and loved ones. Groups of people can achieve great
things. That is why talking about intelligence in terms of a single human
brain makes little sense to me.

Valentina



 This is one of those misleading half-truths...

 Evolution sometimes winds up solving optimization problems effectively, but
 it solves each one given constraints that are posed by its prior solutions
 to other problems ...

 For instance, it seems one of the reasons we're not smarter than we are is
 that evolution couldn't figure out how to make our heads bigger without
 having too many of us get stuck coming out the vaginal canal during birth.
 Heads got bigger, hips got wider ... up to a point ... but then the process
 stopped so we're the dipshits that we are.  Evolution was solving an
 optimization problem (balancing maximization of intelligence and
 minimization of infant and mother mortality during birth) but within a
 context set up by its previous choices ... it's not as though it achieved
 the maximum possible intelligence for any humanoid, let alone for any being.

 Similarly, it's hard for me to believe that human teeth are optimal in any
 strong sense.  No, no, no.  They may have resulted as the solution to some
 optimization problem based on the materials and energy supply and food
 supply at hand at some period of evolutionary history ... but I refused to
 believe that in any useful sense they are an ideal chewing implement, or
 that they embody some amazingly wise evolutionary insight into the nature of
 chewing.

 Is the clitoris optimal?  There is a huge and silly literature on this, but
 (as much of the literature agrees) it seems obvious that it's not.

 The human immune system is an intelligent pattern recognition system, but
 if it were a little cleverer, we wouldn't need vaccines and we wouldn't have
 AIDS...

 We don't understand every part of the human brain/body, but those parts we
 do understand do NOT convey the message that you suggest.  They reflect a
 reality that the human brain/body is a mess combining loads of elegant
 solutions with loads of semi-random hacks.   Not surprisingly, this is also
 what we see in the problem-solutions produced by evolutionary algorithms in
 computer science simulations.

 -- Ben G


  http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How We Look At Faces

2008-08-21 Thread Valentina Poletti
Don't you think it might be more closely related to education and culture,
rather than morphologic differences? Especially when reading the rest of the
article - the part on how asians focus on background rather than subjects as
westerners, or how they tend to analyze things relative to context rather
than in absolute terms, to me reflects very much the education of such
cultures (more environment and socially centered) compared to the education
of westerners (more individual-centered).

By the way, I'm back from a break - will comment on posts I missed in the
next few days.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-21 Thread Valentina Poletti
On 8/8/08, Mark Waser [EMAIL PROTECTED] wrote:

   The person believes his decision are now guided by free will, but
 truly they are still guided by the book: if the book gives him the wrong
 meaning of a word, he will make a mistake when answering a Chinese speaker

 The translations are guided by the book but his answers certainly are not.
 He can make a mistranslation but that is a mechanical/non-understanding act
 performed on the original act of deciding upon his answer.


The person can make mistakes in the first chinese room as well, so that
doesn't change my point.


  The main difference in this second context is that the contents of the
 book were transferred to the brain of the person

 No.  The main difference is that the person can choose what to answer (as
 opposed to the Chinese Room where responses are dictated by the input and no
 choice is involved).


I was assuming that the person has no reason to give the wrong answer
spontaneously, just as in the first room, sorry if I didn't make that clear.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] The Necessity of Embodiment

2008-08-21 Thread Valentina Poletti
Sorry if I'm commenting a little late to this: just read the thread. Here is
a question. I assume we all agree that intelligence can be defined as the
ability to achieve goals. My question concerns the establishment of those
goals. As human beings we move in a world of limitations (life span, ethical
laws, etc.) and have inherent goals (pleasure vs pain) given by evolution.
An AGI in a different embodyment might not have any of that, just a pure
meta system of obtaining goals, which I assume, we partly give the AGI and
partly it establishes. Now, as I understand, the point of Singularity is
that of building an AGI more intelligent than humans so it could solve
problems for us that we cannot solve. That entails that the goal system of
the AGI and ours must be interconnected somehow. I find it difficult
to understand how that can be achieved with an AGI with a different type of
embodyment. I.e. planes are great in achieving flights, but are quite
useless to birds as their goal system is quite different. Can anyone
clarify?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] Question, career related

2008-08-21 Thread Valentina Poletti
Dear AGIers,

I am looking for a research opportunity in AGI or related neurophysiology. I
won prizes in maths, physics, computer science and general science when I
was younger and have a keen interest in those fields. I'm a pretty good
programmer, and have taught myself neurophysiology and some cognitive
science. I have an inclination towards math and logic. I was wondering if
anyone knows of any open such positions, or could give me advice, references
to whom I may speak.

Thanks.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-08 Thread Valentina Poletti
That goes back to my previous point on the amount and type of information
our brain is able to extract from a visual input. It would be truly
difficult I say, even using advanced types of neural nets, to give a set of
examples of chairs, such as the ones Mike linked to, and let the machine
recognize any subsequent object as chiar, given *only* the visual stimuli.

That's why I think it's amazing what kind of info we extract from visual
input, in fact it is anything but visual. Suppose for example that you
wanted to take pictures of the concept 'force'. When we see a force we can
recognize one, and the concept 'force' is very clearly defined.. i.e.
something is either a force or is not, there is no fuzzyiness in the
concept. But teaching a machine to recognize it visually.. well that's a
different story.

A practical example: before I learned rock-climbing I saw not only rocks,
but the space around me in a different way. Now just by looking at a room or
space I see all sorts of hooks, places to hang on that I would never have
thought of before.. I learned to 'read' the image differently, that is, to
extract different types of information from before.

In the same manner, my mother who is a Sommelier, by smelling wine can
extract all sorts of information about its provenience, the way it was made,
the kind of ingredients, that I could never think of (thus now I am shifting
from visual input to odor input). To me it just smells like wine :)

My point is that our brain *combines* visual or other stimuli with a bank of
*non-*visual data in order to extract relevant information. That's why
talking about AGI from a purely visual perspective (or purely verbal) takes
it out of context from the way we experience the world.

But you could prove me wrong by building a machine that using *only* visual
input can recognize forces :)



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-08 Thread Valentina Poletti
Let me ask about a special case of this argument.

Suppose now the book that the guy in the room holds is a chinese-teaching
book for english speakers. The guy can read it for as long as he wishes, and
can consult it in order to give the answers to the chinese speakers
interacting with him.

In this situation, although the setting has not changed much physically
speaking, the guy can be said to use his free will rather than a controlled
approach to answer questions. But is that true? The amount of information
exchanged is the same. The energy used is the same. The person believes his
decision are now guided by free will, but truly they are still guided by the
book: if the book gives him the wrong meaning of a word, he will make a
mistake when answering a chinese speaker. So his free will is just an
illusion.

The main difference in this second context is that the contents of the book
were transferred to the brain of the person, and these contents were
compressed (rather than consulting each case for what to do, he has been
taught general rules on what to do). The person has acquired understanding
of chinese from the book? No, he has acquired *information* from the book.
Information alone is not enough for understanding to exist. There must be
energy processing it.

By this definition a machine *can* understand.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Valentina Poletti
Terren: Substituting an actual human invalidates the experiment, because
then you are bringing something in that can actually do semantics. The point
of the argument is to show how merely manipulating symbols (i.e. the
syntactical domain) is not a demonstration of understanding, no matter what
the global result of the manipulation is (e.g., a chess move).
Got ya! We're on the same line then.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-07 Thread Valentina Poletti
yep.. isn't it amazing how long a thread is becoming based on an experiment
that has no significance?

On 8/6/08, Steve Richfield [EMAIL PROTECTED] wrote:

 Back to reason,

 This entire thread is yet another example that once you accept a bad
 assumption, you can then prove ANY absurd proposition. I see no reason to
 believe that a Chinese Room is possible, and therefore I have no problem
 rejecting all arguments regarding the absurd conclusions that could be
 reached regarding such an apparently impossible room. The absurdity of the
 arguments is just further proof of the impossibility of the room.

 Steve Richfield

  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
My view is that the problem with the Chinese Room argument is precisely the
manner in which it uses the word 'understanding'. It is implied that in this
context this word refers to mutual human experience. Understanding has
another meaning, namely the emergent process some of you described, which
can happen in a computer in a different way from the way it happens in a
human being. In fact notice that the experiment says that the computer will
not understand chinese the way humans do. Therefore it implies the first
meaning, not the second.

Regarding grounding, I think that any intelligence has to collect data from
somewhere in order to lear. Where it collects it from will determine the
type of intelligence it is. Collecting stories is still a way of collecting
information, but such an intelligence will never be able to move in the real
world, as it has no clue regarding it. On the other hand an intelligence who
learns by moving in the real world, yet has never read anything, will gather
no information from a book.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] I didn't see the KILLTHREAD

2008-08-06 Thread Valentina Poletti
Yeh, don't bother writing to him, he stopped reading the AGI posts anyways
:(

On 8/5/08, David Clark [EMAIL PROTECTED] wrote:

 I apologize for breaking the killthread on my last 2 posts.

 Since I have never seen one before on the AGI list, I didn't skim my emails
 before commenting.

 David Clark




 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
Then again, I was just thinking.. wouldn't it be wonderful if instead of
learning everything from scratch since the day we are born, we were born
with all the knowledge all human beings had acquired until that moment? If
somehow that was inplanted in our DNA? Of course that is not feasable.. but
suppose that AGI could somehow do that.. be born with some sort of
encyclopedia inplanted into them, plus imagine they also had several more
interaction/communication channels such as radiowaves, ultrasounds, etc.
enabling them to learn much faster and using far more information from the
environment..

ok I am letting my imagination run too far right now :)



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
Ok, I really don't see how it proves that then. In my view, the book could
be replaced with a chinese-english translator and the same exact outcome
will be given. Both are using their static knowledge for this process, not
experience.

On 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote:


 Hi Valentina,

 I think the distinction you draw between the two kinds of understanding is
 illusory. Mutual human experience is also an emergent phenomenon. Anyway,
 that's not the point of the Chinese Room argument, which doesn't say that a
 computer understands symbols in a different way than humans, it says that a
 computer has no understanding, period.

 Terren

 --- On *Wed, 8/6/08, Valentina Poletti [EMAIL PROTECTED]* wrote:

  My view is that the problem with the Chinese Room argument is precisely
 the manner in which it uses the word 'understanding'. It is implied that in
 this context this word refers to mutual human experience. Understanding has
 another meaning, namely the emergent process some of you described, which
 can happen in a computer in a different way from the way it happens in a
 human being. In fact notice that the experiment says that the computer will
 not understand chinese the way humans do. Therefore it implies the first
 meaning, not the second.

 Regarding grounding, I think that any intelligence has to collect data from
 somewhere in order to lear. Where it collects it from will determine the
 type of intelligence it is. Collecting stories is still a way of collecting
 information, but such an intelligence will never be able to move in the real
 world, as it has no clue regarding it. On the other hand an intelligence who
 learns by moving in the real world, yet has never read anything, will gather
 no information from a book.
  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/


  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Groundless reasoning -- Chinese Room

2008-08-06 Thread Valentina Poletti
by translator i meant human translator btw. what this experiment does
suggest is that linguistic abilities require energy (the book alone would do
nothing). and that they are independent of humanness (the machine could do
it), whether they involve 'understanding' or not.

On 8/6/08, Valentina Poletti [EMAIL PROTECTED] wrote:

 Ok, I really don't see how it proves that then. In my view, the book could
 be replaced with a chinese-english translator and the same exact outcome
 will be given. Both are using their static knowledge for this process, not
 experience.

 On 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote:


 Hi Valentina,

 I think the distinction you draw between the two kinds of understanding is
 illusory. Mutual human experience is also an emergent phenomenon. Anyway,
 that's not the point of the Chinese Room argument, which doesn't say that a
 computer understands symbols in a different way than humans, it says that a
 computer has no understanding, period.

 Terren

 --- On *Wed, 8/6/08, Valentina Poletti [EMAIL PROTECTED]* wrote:

  My view is that the problem with the Chinese Room argument is precisely
 the manner in which it uses the word 'understanding'. It is implied that in
 this context this word refers to mutual human experience. Understanding has
 another meaning, namely the emergent process some of you described, which
 can happen in a computer in a different way from the way it happens in a
 human being. In fact notice that the experiment says that the computer will
 not understand chinese the way humans do. Therefore it implies the first
 meaning, not the second.

 Regarding grounding, I think that any intelligence has to collect data
 from somewhere in order to lear. Where it collects it from will determine
 the type of intelligence it is. Collecting stories is still a way of
 collecting information, but such an intelligence will never be able to move
 in the real world, as it has no clue regarding it. On the other hand an
 intelligence who learns by moving in the real world, yet has never read
 anything, will gather no information from a book.
  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/


  --
   *agi* | Archives https://www.listbox.com/member/archive/303/=now
 https://www.listbox.com/member/archive/rss/303/ | 
 Modifyhttps://www.listbox.com/member/?;Your Subscription
 http://www.listbox.com/




 --
 A true friend stabs you in the front. - O. Wilde

 Einstein once thought he was wrong; then he discovered he was wrong.

 For every complex problem, there is an answer which is short, simple and
 wrong. - H.L. Mencken




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Any further comments from lurkers??? [WAS do we need a stronger politeness code on this list?]

2008-08-04 Thread Valentina Poletti
AGI list,

What I see in most of these e-mail list discussions is people with very
diversified backgrounds, cultures, ideas, failing to understand each other.
What people should remember is that e-mail is not even close to a complete
communication medium. By its definition, you are going to miss out a lot of
information the other person wants to convey. So when something seems
offensive or unclear to you, ask for clarification, rather than starting a
bullfight. Really, you are wasting your time, but also the time of the
people who have to go through your e-mails in order to get to the useful
points. I know it is much easier to call someone names rather than trying to
at least have an idea of what they are saying, but really, I don't need to
sign onto an AGI list to see those sorts of things.

Valentina



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] How do we know we don't know?

2008-08-01 Thread Valentina Poletti
Mike:

 I wrote my last email in a rush. Basically what I was trying to explain is
precisely the basis of what you call creative process in understanding
words. I simplified the whole thing a lot because I did not even consider
the various layers of mappings - mappings of mappings and so on.

What you say is correct, the word art-cop will invoke various ideas, amongst
which art - which in turn will evoke art-exhibit, painting, art-criticism,
and whatever else you want to add. The word cop in analogy will evoke a
series of concepts, and those concepts themselves will evoke more concepts
and so on.

Now obviously if there were no 'measuring' system to how much concepts
are evoked among each other, this process would go no-where. But fortunately
there is such a measuring process and simplifying things a lot, it consists
of excitatory and inhibitory synapses, as well as overall disturbance or
'noise' which after so and so many transitions will make the signal lose its
significance (i.e. become random for practical purposes).

Hope this is not too confusing. I'm not that great at explaining my ideas
with words :)

Jim  Vlad:

that is a difficult question because it depends a lot on your
database. Actually Marko Rodriguez has attempted this in a program that
uses a database of related words from the University of South Florida. This
program is able to understand very simple analogies such as
Which word of the second list best fits in the first list?
bear, cow, dog, tiger: turtle, carp, parrot, lion

Obviously this program is very limited. If you just need to just search
words correspondence, I'd go with Vlad's suggestion. Otherwise there is a
lot to be implemented, in terms of layers, inhibitory vs excitatory
connections, concept from stimuli and so on..What strikes me in AGI is that
so many researchers try to build an AGI with the presupposition that
everything should be built in already, the machine should be able to resolve
tasks from day 1 - just like in AI. That's like expecting a new born baby to
talk about political issues! It's easy to forget that the database we have
in our brains, upon which we make decisions, selections, creations, and so
on.. is incredibly large.. in fact it took a life-time to assembe.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] How do we know we don't know?

2008-07-31 Thread Valentina Poletti
This is how I explain it: when we perceive a stimulus, word in this case, it
doesn't reach our brain as a single neuron firing or synapse, but as a set
of already processed neuronal groups or sets of synapses, that each recall
various other memories, concepts and neuronal group. Let me clarify this. In
the example you give, the wod artcop might reach us as a set of stimuli:
art, cop, mediu-sized word, word that begins with a, and so on. All these
connect activate various maps in our memory, and if something substantial is
monitored at some point (going with Richard's theory of the monitor, I don't
have other references of this actually), we form a response.

This is more obvious in the case of sight - where an image is first broken
into various compontents that are separately elaborated: colours, motion,
edges, shapes, etc. - and then further sent to the upper parts of the memory
where they can be associated to higher level concepts.

If any of this is not clear let me know, instead of adding me to your
kill-lists ;-P

Valentina



On 7/31/08, Mike Tintner [EMAIL PROTECTED] wrote:


 Vlad:

 I think Hofstadter's exploration of jumbles (
 http://en.wikipedia.org/wiki/Jumble ) covers this ground. You don't
 just recognize the word, you work on trying to connect it to what you
 know, and if set of letters didn't correspond to any word, you give
 up.


 There's still more to word recognition though than this. How do we decide
 what is and isn't, may or may not be a word?  A neologism? What may or may
 not be words from:

 cogrough
 dirksilt
 thangthing
 artcop
 coggourd
 cowstock

 or fomlepaung or whatever?





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] How do we know we don't know?

2008-07-29 Thread Valentina Poletti
lol.. well said richard.
the stimuli simply invokes no signiticant response and thus our brain
concludes that we 'don't know'. that's why it takes no effort to realize it.
agi algorithms should be built in a similar way, rather than searching.


 Isn't this a bit of a no-brainer?  Why would the human brain need to keep
 lists of things it did not know, when it can simply break the word down into
 components, then have mechanisms that watch for the rate at which candidate
 lexical items become activated  when  this mechanism notices that the
 rate of activation is well below the usual threshold, it is a fairly simple
 thing for it to announce that the item is not known.

 Keeping lists of things not known is wildly, outrageously impossible, for
 any system!  Would we really expect that the word
 ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
 owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
 hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
 dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as a
 word that I do not know? :-)

 I note that even in the simplest word-recognition neural nets that I built
 and studied in the 1990s, activation of a nonword proceeded in a very
 different way than activation of a word:  it would have been easy to build
 something to trigger a this is a nonword neuron.

 Is there some type of AI formalism where nonword recognition would be
 problematic?



 Richard Loosemore





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Can We Start P.S.

2008-07-10 Thread Valentina Poletti
Hey Steve,

thanks for the clarifications!


  My point was that the operation of most interesting phenomena is NOT
 fully understood, but consists of various parts, many of which ARE
 understood, or are at least easily understandable. Given the typical figure
 6 shape of most problematical cause-and-effect chains, many MAJOR problems
 can be solved WITHOUT being fully understood, by simply interrupting the
 process at two points, one in the lead-in from the root cause, and one in
 the self-sustaining loop at the end. This usually provides considerable
 choice in designing a cure. Of course this can't cure everything, but it
 WOULD cure ~90% of the illnesses that now kill people, fix most (though
 certainly not all) social and political conflicts, etc.


Yep, totally agree. But according to what you state below, there exist some
methods that would produce exact resulsts - given you understand the system
completely. That is what I was arguing against. In many fields there are
problems that are understood completely and yet are still unsolvable. We
know exactly the formula for say, the Lorenz curves. Yet it is impossible to
determine with any certainly a point a million iterations from now. That is
because even a variation at the atomic level would change the result
considerably. And if we observe such variation, we change it. It seems to be
nature's nature that we can never know it with exacness. Unless we are
talking mathematics of course.. but as someone already pointed out on this
list, mathematics has little to do with the real world.


  As I explained above, many/most complex problems and conflicts can be
 fixed WITHOUT a full understanding of them, so your argument above is really
 irrelevant to my assertion.


Yeh.. but i wasn't talking about such problems here. I was talking about
problems you do have a full understanding of. For example see your
statement: Random investment beats nearly all other methods.

Not at all! There is some broadly-applicable logical principles that NEVER
EVER fail, like Reductio ad Absurdum. Some of these are advanced and not
generally known, even to people here on this forum, like Reverse Reductio ad
Absurdum. Some conflicts require this advanced level of understanding for
the participants to participate in a process that leads to a mutually
satisfactory conclusion.

 Why do you assume most people on this forum would not know/understand them?
And how would you relate this to culture anyways?

Yes, and THIS TOO is also one of those advanced concepts. If you ask a
Palestinian about what the problem is in the Middle East, he will say that
it is the Israelis. If you ask an Israeli, he will say that it is the
Palestinians. If you ask a Kanamet (from the Twilight Zone show To Serve
Man, the title of a cook book), he will say that the problem is that they
are wasting good food. However, Reverse Reductio ad Absurdum methods can
point the way to a solution that satisfies all parties.

Hmm.. I guess I just don't see how. Could you be a lil more specific? :)


 In short, you appear to be laboring under the most dangerous assumption of
 all - that man's efforts to improve his lot and address his problems is at
 all logical. It is rarely so, as advanced methods often suggest radically
 different and better approaches. Throwing AGIs into the present social mess
 would be an unmitigated disaster, of the very sorts that you suggest.


When you say 'man' do you include yourself as well? ;) I hope not.. I don't
assume that: Yet you seem to assume that the methods you have are better
than anybody else's for any field.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] The Advance of Neuroscience

2008-07-09 Thread Valentina Poletti
Could you specify what do you mean by synaptic response curve? If it is what
I think it is it is far from linear, at least from the textbooks I read, so
I am probably not following you.

On 7/9/08, Steve Richfield [EMAIL PROTECTED] wrote:

 Mike, et al,

 When you look at the actual experiments upon which what we think we know is
 based, the information is SO thin that it is hard to come to any other
 rational conclusion. I could describe some of these, where for example a
 group of people spent a year putting electrodes into every one of the
 neurons in a lobster's stomatogastric ganglion so that they could
 electrically quiet all but two of them. This so that they could plot the
 synaptic response curve between those two cells. Why did it take a year?
 Because the neurons would die before they could make a recording. It took a
 year or trying and failing before blind luck finally worked in their favor.

 OK, so what does a synaptic response curve look like in our brain? Is it
 linear like many people presume? NO ONE KNOWS. Everything written on this
 subject is pure speculation.

 I see only one possibly viable way through this problem. It will take two
 parallel research efforts:
 1.  One effort is purely theoretical, where the optimal solutions to
 various processing problems is first exhibited, then the best solutions that
 can be achieved in a cellular architecture are exhibited, then potentially
 identifiable features are documented to guide wet-science efforts to
 confirm/deny these theories.
 2.  The other effort is a wet science effort armed with a scanning UV
 fluorescence microscope (or something better if something better comes
 along) that is charged with both confirming/denying identifiable details
 predicted by various theories, and with producing physical diagrams of
 brains to guide theoretical efforts.

 At present, there is not one dollar of funding for either of these efforts,
 so I expect to stay at the 2 micron point for the foreseeable future.

 Steve Richfield
 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com