Classification: given a set of inputs, return a distinct output that compresses 
the information in the input into a smaller set of values. 
Classification tasks can be handled by neural networks, fuzzy logic, case-based 
reasoning, specialized compression, etc.
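As a toy illustration of the classification idea above (the data and the nearest-centroid rule are invented for this sketch, not taken from the thread), here is a minimal classifier that compresses numeric input vectors into a small set of labels:

```python
# Minimal sketch of a classification task: compress feature vectors
# into a small label set. Nearest-centroid rule; the training data
# below are made-up illustrative values.

def nearest_centroid(train, x):
    """Return the label whose class centroid is closest to x."""
    centroids = {}
    for label, points in train.items():
        dim = len(points[0])
        centroids[label] = [sum(p[i] for p in points) / len(points)
                            for i in range(dim)]

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

train = {
    "small": [(1.0, 1.2), (0.8, 1.0)],
    "large": [(5.0, 5.5), (6.0, 5.0)],
}
print(nearest_centroid(train, (0.9, 1.1)))  # -> small
```

The point is only the shape of the task: many possible inputs map to a handful of output values.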
Construction: given an initial state, a set of operations, and a goal state, 
return a sequence of operations that transforms the initial state into the goal 
state.
Construction tasks can be handled by planning algorithms (state-space search, 
plan-space search, hierarchical search, etc.).
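The construction idea can likewise be sketched as a breadth-first state-space search. The domain here (integer states with two invented operations) is a made-up toy, not an example from the discussion; the shape of the task is what matters: a sequence of operations from initial state to goal state.

```python
from collections import deque

# Hedged sketch of a construction task solved by state-space search:
# given an initial state, a set of named operations, and a goal state,
# breadth-first search returns a shortest operation sequence.

def plan(initial, goal, operations, limit=10000):
    """BFS over states; returns a list of operation names, or None."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier and limit > 0:
        limit -= 1
        state, ops = frontier.popleft()
        if state == goal:
            return ops
        for name, fn in operations:
            nxt = fn(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, ops + [name]))
    return None

# Toy domain: reach 9 from 2 using "add 1" and "double".
ops = [("add1", lambda s: s + 1), ("double", lambda s: s * 2)]
print(plan(2, 9, ops))  # -> ['double', 'double', 'add1']
```

Plan-space or hierarchical planners structure the search differently, but the input/output contract is the same.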

Both approaches ARE used in complex AI applications. 
~PM

Date: Tue, 17 Feb 2015 09:03:40 +0100
Subject: Re: [agi] Couple thoughts
From: [email protected]
To: [email protected]

On Tue, Feb 17, 2015 at 8:42 AM, Piaget Modeler via AGI <[email protected]> 
wrote:

I was taught that in AI there are two primary tasks, Classification and 
Construction.
Please correct me where I'm wrong, anyone. I like to learn.
There has always been a lot of debate about what AI is. We don't even have 
anything close to a consensus on a good definition of "intelligence". This 
leads me to suspect that the main problem with AI is that we don't have a 
well-defined problem to tackle, but that's a broader issue.
Sure "Classification and Construction" is not so bad. It's not a matter of 
being right or wrong. There are thousands of plausible alternatives to this. 
You pick a model and run with it, but let's not pretend we are dealing with 
some super-objective definition. 
Deep Learning (and many other methods) is good at classification tasks.
We also need methods good at construction tasks (i.e., plan generation). 
This "also need" mentality could be the problem. Maybe what we need is 
something that can holistically perform both types of tasks.
Suppose you take Deep Blue. It can play chess really well, a skill that was 
until then associated with humans. But then someone says: wait, humans are also 
usually good at driving cars. Then you merge Google's self-driving cars with 
Deep Blue and claim to be closer to AGI? Does this make any sense? Do you see the problem?
Best,
Telmo.
~PM

> Date: Mon, 16 Feb 2015 16:09:00 -0800
> Subject: [agi] Couple thoughts
> From: [email protected]
> To: [email protected]
> 
> I had a couple of things running through my mind --
> 
> 1) "Deep learning algorithms are very good at one thing today:
> learning input and mapping it to an output. X to Y. Learning concepts
> is going to be hard."    Andrew Ng.
> 
> I guess I take that to be an acid test of where the big guys are with 
> concepts.
> 
> 2) "brain inspired", "physics inspired", "math inspired," X-inspired,
> etc-inspired, hybird-inspired...
> 
> It seems all AGI approaches take the "inspired by" approach. The only
> approach that is not deliberately inspired by some discipline, but
> aspires to the actual thing, is Colin Hayes' approach.
> 
> There is nothing wrong with the "inspired by" approach, of course.
> 
> Mike
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com