Re: [agi] Philosophy of General Intelligence

2008-09-08 Thread Jiri Jelinek
Matt,

Suppose you write a program that inputs jokes or cartoons and outputs whether 
or not they are funny. Then there is an iterative process by which you can 
create funny jokes or cartoons. Write a program that inputs a movie and 
outputs a rating of 1 to 5 stars. Then you have an iterative process for 
creating good movies.

The system first needs to parse the input and translate it into its
KR. For movies there is no way to do this at this point because of
technology limitations (even if we had a KR format that could express
them well). Jokes in NL are still a problem (decades of trouble with
NL, as you know - there is a good reason for that). Jokes in a formal
language - that could work IF we get the KR right. There are many
types of jokes. Each type has its algorithm, and the algorithms can
be combined. A simple algorithm example: compare two objects that
share some identical (or very similar) characteristics. Emphasize the
similarity (= optional part 1). Then apply a non-identical
characteristic of object 1 in an action taken by object 2 (pretending
that object 2 also has that characteristic) and derive a result that
contrasts with the result we would get if it were real (= part 2). If
you have lots of data and a decent KR, then you can query it for data
to fill joke templates (+ use various modifiers for uniqueness), and
detect and rate jokes. Funny stuff is often based on contrast &
unexpected turns. Also, certain creatures (like ducks) often have
better potential than others. And of course, there are also certain
things in particular societies you need to avoid. If the system gets
feedback & joke samples, it can tweak/generate its joke templates
(always considering info about the audience) and get better. A decent
KR - that's the first thing.
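
For what it's worth, here is a minimal Python sketch of that
two-object template (the toy KR and every entry in it are made up for
illustration; a real system would query a large KR instead):

    # Toy KR: object -> set of characteristics (made-up entries).
    KR = {
        "duck":   {"have a bill", "float on water"},
        "lawyer": {"have a bill", "charge by the hour"},
    }

    def joke_candidates(kr):
        objs = sorted(kr)
        for i, a in enumerate(objs):
            for b in objs[i + 1:]:
                for s in kr[a] & kr[b]:      # part 1: shared trait
                    for d in kr[b] - kr[a]:  # part 2: trait only b has
                        # Pretend 'a' should also have d; the contrast
                        # with reality is what the template banks on.
                        yield (f"{a.capitalize()}s and {b}s both {s}. "
                               f"So why doesn't a {a} {d}?")

    for joke in joke_candidates(KR):
        print(joke)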

Regards,
Jiri Jelinek




Re: [agi] Re: AI isn't cheap

2008-09-08 Thread Steve Richfield
Matt,

On 9/7/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Sun, 9/7/08, Steve Richfield [EMAIL PROTECTED] wrote:

 1.  I believe that there is some VERY fertile but untilled ground, which
 if it is half as good as it looks, could yield AGI a LOT cheaper than
 other higher estimates. Of course if I am wrong, I would probably accept
 your numbers.
 
 2.  I believe that AGI will take VERY different (cheaper and more
 valuable) forms than do other members on this forum.
 
 Each of the above effects is worth several orders of magnitude in effort.

 You are just speculating.


Of course. Aren't we all here on this forum?

The fact is that thousands of very intelligent people have been trying to
 solve AI for the last 50 years, and most of them shared your optimism.


Unfortunately, their positions as students and professors at various
universities have forced almost all of them into politically correct paths,
substantially all of which lead nowhere, for otherwise they would have
succeeded long ago. The few mavericks who aren't stuck in a university (like
those on this forum) all lack funding.

Perhaps it would be more fruitful to estimate the cost of automating the
 global economy. I explained my estimate of 10^25 bits of memory, 10^26 OPS,
 10^17 bits of software and 10^15 dollars.


I don't understand the goal or value here. Perhaps you could explain?

You really should see my Dr. Eliza demo.

 Perhaps you missed my comments in April.

 http://www.listbox.com/member/archive/303/2008/04/search/ZWxpemE/sort/time_rev/page/2/entry/5:53/20080414221142:407C652C-0A91-11DD-B3D2-6D4E66D9244B/


Apparently I did. Sorry about that. Here I have pasted in the posting with
embedded contemporary comments.

--- Steve Richfield [EMAIL PROTECTED] wrote:

 Why go to all that work?! I have attached the *populated* Knowledge.mdb
file
 that contains the knowledge that powers the chronic illness demo of Dr.
 Eliza. To easily view it, just make sure that any version of MS Access is
 installed on your computer (it is in Access 97 format) and double-click on
 the file. From there, select the Tables tab, and click on whatever table
 interests you.

I looked at your file. Would I be correct that if I described a random
health problem to Dr. Eliza, it would suggest that my problem is due to
one of:

- Low body temperature
- Fluorescent lights
- Consuming fructose in the winter
- Mercury poisoning from amalgam fillings and vaccines
- Aluminum cookware
- Hydrogenated vegetable oil
- Working a night shift
- Aspirin (causes macular degeneration)
- Or failure to accept divine intervention?

= First, my compliments on your careful reading of the knowledge base.

= Yes, there is a pretty good chance that you would be asked about some of
these things, as several of them seem to underlie most chronic
illnesses.

Is that it, or is there a complete medical database somewhere,

= WYSIWYG, though this is only maybe 1% of a fully populated
health database. This stuff is just there for the demo. For a 1% demo, it
works amazingly well. Further, I presume that people would embed generous
hyperlinks into the explanations, so that PubMed and other medical
databases would be just a mouse click away.

or the capability of acquiring this knowledge?

= Only machine knowledge that has been carefully crafted by humans. As I
have explained in a number of postings, certain key things, like how people
commonly express symptoms and the carefully crafted questions needed to
drill down, are NOT on any web site or in any medical text, so the services
of an experienced expert are absolutely required. Plans of others to mine
the Internet (or Wikipedia) are absolutely doomed to failure because this
information is so completely lacking. No AGI would be able to compose this
knowledge unless it had the real-world experience with real-world people
to know how they express things. In short, many AI and AGI plans are quite
obviously hopeless because they lack access to this information.

Do you have a medical background,

= Yes.

or have you consulted with doctors in building the database?

= Yes.

BTW, regarding processes that use 100% of the CPU in Windows: did you try
Ctrl-Alt-Del to bring up the Task Manager, then right-clicking on the
process and changing its priority?

= Not that specifically, though I did try the Windows API to do the same,
and got back an error code that indicated that the most problematical task
(NaturallySpeaking) had set a bit to keep other tasks from adjusting its
priority. I presume that the Task Manager would have simply called the same
API but probably failed to provide the return code. I expect to abandon
speech I/O in the future even though it works pretty well, because no one
seems to want to bet their success in overcoming their problems on the
random screwups of a speech recognition program. Without speech I/O, there
is no speed problem. This is apparently one of those great ideas that
just can't make it in the real world.
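
For reference, the relevant calls look roughly like this from Python
via ctypes (a Windows-only sketch; the PID would come from Task
Manager, and a process that blocks priority changes should make
SetPriorityClass fail with an access-denied error, much as described
above):

    import ctypes

    PROCESS_SET_INFORMATION = 0x0200
    BELOW_NORMAL_PRIORITY_CLASS = 0x4000
    kernel32 = ctypes.windll.kernel32

    def lower_priority(pid):
        # Open the target with just enough rights to change priority.
        handle = kernel32.OpenProcess(PROCESS_SET_INFORMATION, False, pid)
        if not handle:
            raise OSError("OpenProcess failed: %d" % kernel32.GetLastError())
        try:
            # Fails if the target protects its priority setting.
            if not kernel32.SetPriorityClass(handle,
                                             BELOW_NORMAL_PRIORITY_CLASS):
                raise OSError("SetPriorityClass failed: %d"
                              % kernel32.GetLastError())
        finally:
            kernel32.CloseHandle(handle)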


 In any case, 

Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-08 Thread Abram Demski
Hi,

I am curious about the result you mention. You say that the genetic
algorithm stopped searching very quickly. Why? It sounds like they want
the search to go longer, but can't they just tell it to go longer if
they want it to? And to reduce convergence, can't they just increase
the level of mutation? Do you know if they tried this, and if so, why
it wasn't sufficient?

Other than that, I think there are several things to try. First, it
seems more natural to me to put the textbook solutions in the initial
population, rather than coding them as genetic operations. Second, if
they are used as operations, I'd try splitting them up further (just
to reduce the bias).

Disclaimer: I do not consider myself an expert, as I am still an undergraduate.

--Abram

On Sun, Sep 7, 2008 at 8:55 PM, Benjamin Johnston
[EMAIL PROTECTED] wrote:


 Hi,



 I have a general question for those (such as Novamente) working on AGI
 systems that use genetic algorithms as part of their search strategy.



 A GA researcher recently explained to me some of his experiments in
 embedding prior knowledge into systems. For example, when attempting to
 automate the discovery of models of a mechanical system, they tried adding
 some textbook models to the set of genetic operators. The results weren't
 good – the prior knowledge worked too well, causing the GA to converge too
 fast onto the prior knowledge… so fast that there wasn't time for the GA to
 build up sufficient diversity and quality in other solutions that might have
 helped get out of the local maxima. The message seemed to be that prior
 knowledge is too powerful – it can 'blind' a search – and that if you must
 use it, you'd have to very aggressively (and artificially) deflate the
 fitness of instances that use prior knowledge (and this is tricky to get
 right).
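
 One concrete reading of that deflation idea, as a minimal sketch (the
 uses_prior_knowledge flag and the 0.5 penalty are made up for
 illustration):

     from dataclasses import dataclass

     @dataclass
     class Individual:
         genome: list
         uses_prior_knowledge: bool  # set by the operator that made it

     def adjusted_fitness(ind, raw_fitness, penalty=0.5):
         # Deflate prior-knowledge individuals so they cannot dominate
         # the population before diversity has built up elsewhere.
         if ind.uses_prior_knowledge:
             return raw_fitness * penalty
         return raw_fitness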



 This struck me as relevant to GA-based AGIs that continually build on and
 improve a knowledge-base. Once an AGI learns very simple initial models of
 the world, if it then tries to evolve deeper knowledge about more difficult
 problems (but, in the context of its prior learning), then its initial
 models may prove to be too good, forcing the GA to converge on poor local
 maxima that represent only minor variations on the initial models it learnt
 in its earliest days.



 Does this issue actually crop up in GA-based AGI work? If so, how did you
 get around it? If not, would you have any comments about what makes AGI
 special so that this doesn't happen?



 -Ben



 




RE: Language modeling (was Re: [agi] draft for comment)

2008-09-08 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  From: John G. Rose [EMAIL PROTECTED]
  Subject: RE: Language modeling (was Re: [agi] draft for comment)
  To: agi@v2.listbox.com
  Date: Sunday, September 7, 2008, 9:15 AM
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
   --- On Sat, 9/6/08, John G. Rose
  [EMAIL PROTECTED] wrote:
  
 Compression in itself has the overriding goal of reducing
 storage bits.

  Not the way I use it. The goal is to predict what the
  environment will do next. Lossless compression is a way of
  measuring how well we are doing.
  
 
  Predicting the environment in order to determine which data to pack
  where, thus achieving a higher compression ratio. Or compression as
  an integral part of prediction? Some types of prediction are
  inherently compressed, I suppose.
 
 Predicting the environment to maximize reward. Hutter proved that
 universal intelligence is a compression problem. The optimal behavior of
 an AIXI agent is to guess the shortest program consistent with
 observation so far. That's algorithmic compression.
 

Oh, I see. Guessing the shortest program = compression. OK, right. But
yeah, as Pei said, the word "compression" is misleading. It implies a
reduction where you are actually increasing understanding :)
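
For reference, the "shortest program" reading has a standard
formalization (textbook Kolmogorov-complexity notation, not anything
specific to Matt's setup):

    K(x) = min { l(p) : U(p) = x }

i.e., the length l(p) of the shortest program p that makes a universal
machine U output x. An agent that predicts with the shortest program
consistent with its observations is, in exactly that sense, losslessly
compressing those observations.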

John






[agi] Will AGI Be Stillborn?

2008-09-08 Thread Brad Paulsen


From the article:

"A team of biologists and chemists [lab led by Jack Szostak, a molecular
biologist at Harvard Medical School] is closing in on bringing non-living
matter to life."

"It's not as Frankensteinian as it sounds. Instead, a lab led by Jack
Szostak, a molecular biologist at Harvard Medical School, is building
simple cell models that can almost be called life."


http://blog.wired.com/wiredscience/2008/09/biologists-on-t.html

There's a video entitled "A Protocell Forming from Fatty Acids."  It's
fascinating and, at the same time, a bit scary.


Paper co-authored by Szostak published this month:

Thermostability of model protocell membranes
http://www.pnas.org/content/early/2008/09/02/0805086105.full.pdf+html




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
A somewhat revised version of my paper is at:
http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
(sorry, it is now a book chapter and the bookmarks were lost when extracting)

On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang [EMAIL PROTECTED] wrote:

   I intend to use NARS confidence in a way compatible with
 probability...

 I'm pretty sure it won't, as I argued in several publications, such as
 http://nars.wang.googlepages.com/wang.confidence.pdf and the book.

I understood your argument about defining the confidence c, and agree
with it.  But I don't see why c cannot be used together with f (as
*traditional* probability).
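
For readers following along, these are the NARS measures under
discussion, as defined in Wang's papers (w+ and w- are the amounts of
positive and negative evidence, and k is a constant, typically 1):

    w = w+ + w-
    f = w+ / w          (frequency)
    c = w / (w + k)     (confidence)

The open question above is whether f can simultaneously be read as a
traditional probability while c is kept as a separate measure of
evidential support.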

 In summary, I don't think it is a good idea to mix B, P, and Z. As Ben
 said, the key is semantics, that is, what is measured by your truth
 values. I prefer a unified treatment to a hybrid, because the former
 is semantically consistent, while the latter isn't.

My logic actually does *not* mix B, P, and Z.  They are kept
orthogonal, and so the semantics can be very simple.  Your approach
mixes fuzziness with probability, which can result in ambiguity in some
everyday examples: e.g., "John tries to find a 0.9 pretty girl" (degree)
vs. "Mary is 0.9 likely to be pretty" (probability). The difference is
real, but subtle, and I agree that you can mix them, but you must
always acknowledge that the measure is mixed.

Maybe you've mistaken what I'm trying to do, 'cause my theory should
not be semantically confusing...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
On Tue, Sep 2, 2008 at 12:05 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 but in a PLN approach this could be avoided by looking at

 IntensionalInheritance B A

 rather than extensional inheritance..

The question is how you know when to apply intensional inheritance
instead of extensional inheritance.

It seems to me that using the probabilistic interpretation of
fuzziness would force you to use sum-product calculus...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread Pei Wang
Sorry I don't have the time to type a detailed reply, but for your
second point, see the example in
http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
paragraph:

If these two types of uncertainty [randomness and fuzziness] are
different, why bother to treat them in a uniform way? The basic reason
is: in many practical problems, they are involved with each other.
Smets stressed the importance of this issue, and provided some examples
in which randomness and fuzziness are encountered in the same sentence
([20]). It is also true for inferences. Let's take medical diagnosis as
an example. When a doctor wants to determine whether a patient A is
suffering from disease D, (at least) two types of information need to
be taken into account: (1) whether A has D's symptoms, and (2) whether
D is a common illness. Here (1) is evaluated by comparing A's symptoms
with D's typical symptoms, so the result is usually fuzzy, and (2) is
determined by previous statistics. After the total certainty of "A is
suffering from D" is evaluated, it should be combined with the
certainty of "T is a proper treatment for D" (which is usually a
statistical statement, too) to get the doctor's degree of belief for "T
should be applied to A". In such a situation (which is the usual case,
rather than an exception), even if randomness and fuzziness can be
distinguished in the premises, they are mixed in the intermediate and
final conclusions.

Pei

On Mon, Sep 8, 2008 at 3:55 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 A somewhat revised version of my paper is at:
 http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
 (sorry, it is now a book chapter and the bookmarks were lost when extracting)

 On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang [EMAIL PROTECTED] wrote:

   I intend to use NARS confidence in a way compatible with
 probability...

 I'm pretty sure it won't, as I argued in several publications, such as
 http://nars.wang.googlepages.com/wang.confidence.pdf and the book.

 I understood your argument about defining the confidence c, and agree
 with it.  But I don't see why c cannot be used together with f (as
 *traditional* probability).

 In summary, I don't think it is a good idea to mix B, P, and Z. As Ben
 said, the key is semantics, that is, what is measured by your truth
 values. I prefer a unified treatment to a hybrid, because the former
 is semantically consistent, while the latter isn't.

 My logic actually does *not* mix B, P, and Z.  They are kept
 orthogonal, and so the semantics can be very simple.  Your approach
 mixes fuzziness with probability, which can result in ambiguity in some
 everyday examples: e.g., "John tries to find a 0.9 pretty girl" (degree)
 vs. "Mary is 0.9 likely to be pretty" (probability). The difference is
 real, but subtle, and I agree that you can mix them, but you must
 always acknowledge that the measure is mixed.

 Maybe you've mistaken what I'm trying to do, 'cause my theory should
 not be semantically confusing...

 YKY







Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
On Tue, Sep 9, 2008 at 4:27 AM, Pei Wang [EMAIL PROTECTED] wrote:
 Sorry I don't have the time to type a detailed reply, but for your
 second point, see the example in
 http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
 paragraph:

 If these two types of uncertainty [randomness and fuzziness] are
 different, why bother to treat them in a uniform way? The basic reason
 is: in many practical problems, they are involved with each other.
 Smets stressed the importance of this issue, and provided some examples
 in which randomness and fuzziness are encountered in the same sentence
 ([20]). It is also true for inferences. Let's take medical diagnosis as
 an example. When a doctor wants to determine whether a patient A is
 suffering from disease D, (at least) two types of information need to
 be taken into account: (1) whether A has D's symptoms, and (2) whether
 D is a common illness. Here (1) is evaluated by comparing A's symptoms
 with D's typical symptoms, so the result is usually fuzzy, and (2) is
 determined by previous statistics. After the total certainty of "A is
 suffering from D" is evaluated, it should be combined with the
 certainty of "T is a proper treatment for D" (which is usually a
 statistical statement, too) to get the doctor's degree of belief for "T
 should be applied to A". In such a situation (which is the usual case,
 rather than an exception), even if randomness and fuzziness can be
 distinguished in the premises, they are mixed in the intermediate and
 final conclusions.


Thanks, that's a good point that I haven't thought of.

For example:
"I have a _slight_ knee pain"  (fuzzy, z = 0.6)
"knee pain -> rheumatoid arthritis"  (p = 0.3)  (excuse me for
making up numbers)
Then my system would convert
"knee pain" (z = 0.6)  to  "knee pain = true" (binary)
and conclude
"rheumatoid arthritis" (p = 0.3)

So there is some loss of information, but I feel this is OK.  Many
commonsense reasoning steps are lossy.  We're not trying to build
doctors here.  A commonsense AGI can control a medical expert system
to achieve professional levels.

The point is, I can always keep P and Z orthogonal.
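
In code, the lossy step is just a threshold followed by ordinary
probabilistic inference (a toy sketch; the 0.5 threshold and the
numbers are made up, as above):

    def fuzzy_to_binary(z, threshold=0.5):
        # Lossy step: collapse a fuzzy degree to a binary truth value.
        return z >= threshold

    knee_pain = fuzzy_to_binary(0.6)         # z = 0.6 -> True
    p_arthritis = 0.3 if knee_pain else 0.0  # rule fires with p = 0.3
    print(p_arthritis)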

YKY




Re: [agi] Will AGI Be Stillborn?

2008-09-08 Thread Eric Burton
I've reflected that superintelligence could emerge through genetic or
pharmaceutical options before cybernetic ones, maybe by necessity. I
am really rooting for cybernetic enlightenment to guide our use of the
other two, though.

On 9/8/08, Brad Paulsen [EMAIL PROTECTED] wrote:

  From the article:

 "A team of biologists and chemists [lab led by Jack Szostak, a molecular
 biologist at Harvard Medical School] is closing in on bringing non-living
 matter to life."

 "It's not as Frankensteinian as it sounds. Instead, a lab led by Jack
 Szostak, a molecular biologist at Harvard Medical School, is building
 simple cell models that can almost be called life."

 http://blog.wired.com/wiredscience/2008/09/biologists-on-t.html

 There's a video entitled "A Protocell Forming from Fatty Acids."  It's
 fascinating and, at the same time, a bit scary.

 Paper co-authored by Szostak published this month:

 Thermostability of model protocell membranes
 http://www.pnas.org/content/early/2008/09/02/0805086105.full.pdf+html







Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-08 Thread Eric Burton
You can implement a new workaround to bootstrap your organisms past
each local maximum, like catalyzing the transition from water to land
over and over. I find this leads to cheats that narrow the search in
unpredictable ways, though. This problem comes up again and again.

Maybe some kind of drift in the parameters or fitness function would
destabilize deeply-converged positions. I've thought before how useful
it would be to have an AI tuning my GA.


On 9/8/08, Benjamin Johnston [EMAIL PROTECTED] wrote:

 I am curious about the result you mention. You say that the
 genetic algorithm stopped searching very quickly. Why? It sounds
 like they want the search to go longer, but can't they just
 tell it to go longer if they want it to?

 They found that the system converged too quickly. The initial knowledge
 quickly dominated the population, and then successive generations showed
 little improvement.

 And to reduce convergence, can't they just increase the
 level of mutation? Do you know if they tried this, and if
 so, why it wasn't sufficient?

 The quality of the solutions found using prior knowledge was such that any
 random mutation was almost always inferior. As I understood it, to get out
 of the local maxima that prior knowledge gets a GA stuck in, you really need
 some reasonable quality solutions so that larger structures of a good
 solution can be introduced via cross-over. Any given random mutation was
 usually detrimental - real progress depended on a child being able to
 combine complex substructures from two different parents.

 Other than that, I think there are several things to try. First,
 it seems more natural to me to put the textbook solutions in the
 initial population, rather than coding them as genetic
 operations. Second, if they are used as operations, I'd try
 splitting them up further (just to reduce the bias).

 Yes, those are good points - I have been wondering about that, but I didn't
 have the chance to ask those questions. Presumably one problem is that if
 you just put prior knowledge in the initial population, unmatched to the
 system parameters, then the textbook models would be unreasonably bad; they
 would quickly be eliminated and there would be little chance for them to be
 reintroduced later into the population. One solution to this might then be
 to have a fixed 'immortal' population of textbook models that can be crossed
 with the rest of the population at any time.

 Another possibility could be to use island-GA, with prior knowledge 'banned'
 from some of the islands.
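
 A minimal sketch of the "immortal" textbook-seed idea from the previous
 paragraph (the names and the 10% seed rate are made up; fitness,
 crossover, and mutate stand for whatever the underlying GA already uses):

     import random

     def next_generation(population, textbook_seeds, fitness,
                         crossover, mutate, seed_rate=0.1):
         # Textbook models live outside the evolving population, so
         # they can never be eliminated, and their building blocks
         # get crossed in at a fixed rate without ever dominating.
         parents = sorted(population, key=fitness, reverse=True)
         parents = parents[:max(2, len(parents) // 2)]
         children = []
         while len(children) < len(population):
             a = random.choice(parents)
             b = (random.choice(textbook_seeds)
                  if random.random() < seed_rate
                  else random.choice(parents))
             children.append(mutate(crossover(a, b)))
         return children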

 Anyway, I'm sure there must be lots of different ways that sound like they
 might solve the problem. But, which (or whether any) ones actually work in
 practice is another matter. And that's why I'm curious to know whether AGI
 researchers have encountered this problem, and what they have done about
 it...

 -Ben






