A breakthrough in AGI will be immediately obvious because it will not need
to be "tweaked" for 60 years - or even 10 years - to get it to work.
Once it was figured out, someone would be able to implement simple models
that would confirm the viability of the methods within a few weeks or a
few months.  We have computers that are powerful enough to run intense
simulations or do extensive searches; we are just lacking some fundamental
programming or hardware design that would make visible improvements in AGI
viable.

I can't point to specific examples of where AGI algorithms fail, because
even good programs fail, or would fail, whenever an effort is made to take
them beyond a fairly low level of achievement.  (I can't tell
if Watson is a viable model for general intelligence or not because I don't
know how it works, but right now it seems like it is unable to learn to
work with combinatorial uncertainty and multiple path integration issues.)

As long as the problem is kept simple, feasible AGI programs can learn by
using a method of validation through some kind of acknowledgement.
What I am saying is that I could write a simple effective AGI program that
could learn a few hundred "ideas" (or idea-like knowledge objects) and use
them in ways that correspond to the ways that the program had seen them
used.  However, this program could not continue to learn new ideas and use
them in ways that human beings would find familiar, because it would be so
severely limited.  So we have programs that can learn to understand a
great deal of speech or to translate from one language to another or to
detect some handwritten characters but it is done without much flexibility
or the capability to explore novel paths of insight in any but the simplest
or most structured ways.  No one can deny that these programs are viable
examples of artificial intelligence but they always seem to lack the
spark that only novel thinking can provide.
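A minimal sketch of the kind of small program described above: it "learns" idea-like knowledge objects only as patterns of observed use, and can reuse them only in contexts it has already seen.  Everything here (the idea store, the contexts) is invented for illustration, not a real system.

```python
# A deliberately limited "idea learner": it records each idea together with
# the contexts in which it has seen that idea used, and can only reuse an
# idea in a context it has already observed.  Its inability to extend ideas
# to unfamiliar contexts is exactly the limitation discussed above.

class IdeaLearner:
    def __init__(self):
        self.uses = {}  # idea -> set of contexts where it was seen used

    def observe(self, idea, context):
        self.uses.setdefault(idea, set()).add(context)

    def can_apply(self, idea, context):
        # No generalization: only exact, previously seen contexts count.
        return context in self.uses.get(idea, set())

learner = IdeaLearner()
learner.observe("lever", "lifting a rock")
learner.observe("lever", "prying a lid")

print(learner.can_apply("lever", "prying a lid"))    # True: seen before
print(learner.can_apply("lever", "opening a door"))  # False: novel context
```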

So the examples of problems that haven't been solved are best specified
as kinds of problems.  One thing that AI programs haven't been able to do
is to effectively use sustained exploration of ideas or concepts in order
to solve novel kinds of problems (that is, problems that have not been
specified using a well-formed, narrow method, like a highly specified
mathematical formula, and that are not simple enough to be solved by a
contemporary neural network).

A multiple path problem is one in which different paths of reasoning can be
used to arrive at a conclusion.  Multiple path problems should be easy for
actual AGI programs because they usually have common nodes where divergent
paths toward a solution can be taken.  This would give an AGI program the
advantage of being able to try another path if it got stuck, and it should
give the AGI program ample opportunities to learn about different
strategies.
However, I don't know of any AGI program that is able to solve problems
like this (except for actual path taking when the paths are reasonably easy
to traverse) and I think it is specifically because the multiple path
problems also present combinatorial complexity to genuine learning
algorithms.  Some board games, like chess, are multiple path games, and
here AI is able to do well by using an artificial position evaluation
method.  So we have a good chess model that works by choosing only the
best path that deep searches can provide, using a position evaluation
algorithm; and even today most chess programs do not go through much
learning beyond the most mundane kinds of record keeping.  The problem is
that it is almost impossible (or at least currently impossible) to find
the different ways to efficiently represent the
characterizations of an event so that it could be grouped with other events
that share some similarity with it.  Because we are able to use reasoning
that goes beyond superficial similarities we are able to find hundreds or
thousands of possible associations from one concept to another.  This
richness in potential comes at a cost of overwhelming complexity.
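The search-plus-evaluation approach mentioned above can be sketched in a few lines: a minimax search over an abstract game tree, where a hand-written "position evaluation" function stands in for learned judgment.  The tree, the scores, and the depth here are all invented for the example; this is not how any particular chess engine is implemented.

```python
# Toy minimax: pick the path whose worst-case evaluation is best.
# `children` maps a position to its successor positions; `evaluate`
# gives a static score at the horizon (the "position evaluation").

def minimax(position, depth, maximizing, children, evaluate):
    """Return the best achievable evaluation from `position`,
    searching `depth` plies deep."""
    moves = children(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        return max(minimax(m, depth - 1, False, children, evaluate)
                   for m in moves)
    return min(minimax(m, depth - 1, True, children, evaluate)
               for m in moves)

# A tiny hand-built tree of positions (purely illustrative).
tree = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
# Static "position evaluation": a fixed score per leaf.
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = max(
    tree["root"],
    key=lambda m: minimax(m, 1, False, lambda p: tree.get(p, []), scores.get),
)
print(best)  # "a": its worst-case leaf (3) beats b's worst case (2)
```

Note that nothing here learns: the evaluation function is fixed in advance, which is the point being made about chess programs.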

You talked about coordination using the attributes of a concept, but when
you offer some examples they are predictably artificial and insipid.  (That
isn't an insult, typical examples are insipid because they are so
concise.)  Part of this might be due to the time it would take to represent
all the attributes of a concept but if you were to start to list all the
attributes of a concept that you could think of, the potential to find
how that concept could be related to other concepts would make the
complications and complexity of that method plain.  And that doesn't even
take into account the added complications of exploring tentative
hypotheses.
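The combinatorial blow-up can be made concrete with a toy count: if each concept is represented by a set of attributes, then every shared attribute between every pair of concepts is already a candidate association, and the count grows with the product of the attribute lists.  The concepts and attributes below are invented for illustration.

```python
from itertools import combinations

# Invented toy "concepts," each with a handful of attributes.  A real
# listing would run to hundreds of attributes per concept, which is the
# complexity being described.
concepts = {
    "bird":  {"flies", "has_wings", "lays_eggs", "animal"},
    "plane": {"flies", "has_wings", "machine", "carries_people"},
    "fish":  {"swims", "lays_eggs", "animal"},
    "boat":  {"swims", "machine", "carries_people"},
}

# Each shared attribute between each pair of concepts is one candidate
# association: (pairs of concepts) x (attributes shared by that pair).
associations = [
    (a, b, attr)
    for a, b in combinations(concepts, 2)
    for attr in concepts[a] & concepts[b]
]

for a, b, attr in sorted(associations):
    print(f"{a} ~ {b} via {attr}")
print(len(associations), "candidate associations among just 4 tiny concepts")
```

Even four concepts with four attributes each yield seven candidate associations before any tentative hypotheses are explored; with superficial similarity replaced by deeper reasoning, the fan-out only gets worse.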

Jim Bromer

On Thu, Dec 6, 2012 at 4:38 PM, Piaget Modeler <[email protected]> wrote:

>  Jim,
>
> Everything hinges on what problems you are attempting to solve and how you
> frame those problems.
>
> I know specifically what problems I'm addressing.  But it sounds to me
> that you have not defined the
> larger problems well enough for yourself to tackle them with any known
> methods.  You have to be
> more specific.
>
> What do you mean by "in all significant cases"?  Examples please.
>
> What do you mean by "solve the kinds of problems that you would need to
> solve"?  Examples please.
>
> "It seems obvious to me that a memetic algorithm is not a breakthrough
> method that would make an
> AGI program feasible".  Is it the case that you'll know the breakthrough 
> algorithm
> when you see it?
> It'll be obvious to everyone?  What are the characteristics of such a
> breakthrough algorithm that
> will enable you to recognize it?   And will it be a single algorithm or a
> combination of algorithms
> that will enable AGI?  Please be specific.
>
> Everything hinges on how you frame the "AGI problem" and the methods
> you employ to address it.
>
> I know which problems I'm trying to solve.
>
> ~PM.
>
>
> ------------------------------
> Date: Thu, 6 Dec 2012 15:50:46 -0500
> Subject: Re: [agi] Memetic algorithms
> From: [email protected]
> To: [email protected]
>
>
> Memetic algorithms sound like more of the same.  Maybe I am not getting it
> but it doesn't sound like it is going to lead to anything that less
> formalized methods haven't been able to do.  It seems obvious to me that a
> memetic algorithm is not a breakthrough method that would make an AGI
> program feasible.
>
> You say that a meme is a strategy?  Before I read the thing on Memetic
> Algorithms I thought that that remark made perfect sense, but now that I
> have read it I am wondering what are you talking about?  I mean really.
>
> A genetic algorithm is a neat thing, ok and I can understand that a
> variation on it is very interesting.  But to believe that it will solve the
> kinds of problems that you would need it to solve is inexplicable.  This is
> not a solution to np-complexity it is a generator of it.  Isn't it
> obvious?  Have I missed some great efficacy that lurks in the method that
> was hidden in my superficial reading of the description in Wikipedia?  If I
> had I am pretty sure I would have sensed it.
>
> A concept or a meme cannot (reliably or always) be decomposed into a set
> of elemental parts.  Because the parts of the concept are concepts
> themselves they can be studied, further explored, expanded and grouped with
> other related concepts.  This is a property that I call relativistic.  Of
> course you can use recombinations of concepts and memes and that method is
> necessary for imaginative projection and analysis and so on.  But to
> believe that a method like memetic algorithms would lead to greater
> comprehension - in all significant cases - does not seem like a reasonable
> presumption to me.
>
> Jim Bromer
>
>
>
>
>
> On Thu, Dec 6, 2012 at 1:43 PM, Piaget Modeler
> <[email protected]> wrote:
>
> Jim:  First, a meme cannot be modelled in the same way a superficial data
> string can be.
> http://en.wikipedia.org/wiki/Memetic_algorithm
>
> In the lingua of MA a meme is a strategy; individuals within populations
> are recombined.
>
> In PAM-P2 a solution is an individual, and solutions do undergo
> recombination and mutation
> during regulation and compensation.
>
> ~PM.
>
> -------------
>
> The wikipedia definition of memetics was interesting.  Assuming that I can
> make a pretty good guess about how your idea of memetic recombination might
> work, I would say that your imagined usage of the method has some serious
> problems.  First, a meme cannot be modelled in the same way a superficial
> data string can be so the comparison of memetic algorithms to recombination
> in genetic algorithms seems fanciful.  Secondly the idea that the
> attributes of a concept might be clearly differentiated in an automated
> system that is able to learn and then used to clearly integrate different
> ideas seems unlikely.  I do not think the concept is impossible, I think
> that it is complicated.  It is a problem of complexity.  You mentioned that
> you thought you can avoid complexity by using many small search problems.
> Although I cannot point to this or that study which can drive this point
> home, I do feel that there is ample evidence that domain restricted
> learning has not worked in AI just because we need to use concepts outside
> of the domain in order to understand those concepts which are strongly
> within the domain.  (By the way, here is where an imagined efficiency of
> using weighted evaluations can really turn to nonsense. You can't eliminate
> the need to look outside the domain to determine meaning or relevance
> just by putting a numerical value on how much a meme belongs to a
> particular domain.)
>
>
> Jim Bromer
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
