>
> To me, AGI is about thinking machines. In more technical terms, this would
> imply self-recursive, computerized logic, or machines who can learn. But
> learning is only one part of thinking. Reasoning, as applied learning, is a
> further dimension of thinking.


I would consider thought and reasoning to be the computation of
simulations, and learning to be the construction of the model used for
simulation. In fact, the outcomes of simulations are then used to further
improve the model, so in an indirect way, thought and reasoning are also
part of learning. They are not entirely separable.

On Tue, Feb 17, 2015 at 2:29 PM, Nanograte Knowledge Technologies via AGI <
[email protected]> wrote:

> @ Aaron
>
> To me, AGI is about thinking machines. In more technical terms, this would
> imply self-recursive, computerized logic, or machines who can learn. But
> learning is only one part of thinking. Reasoning, as applied learning, is a
> further dimension of thinking. Adaptation is a symbol of being able to
> change to better fit into one's environment, to survive. This should
> probably also be included in this domain. At such a state of AGI
> development, we would probably have a machine that would be a version of
> sustainable, autonomous adaptiveness.
>
> I can't speak for AGI, but to my mind the ability to evolve (as a form of
> adaptive learning that could be reproduced without the direct influence of
> the environment the stimuli originated from, to assume a new form, so to
> speak) would also be included here.
>
> Yet another form of evolution, transmutation (computer-based reproduction,
> even to humans) should be yet another level of AGI. Last, to ascend into
> being could well be the ultimate: the ability for a machine to be
> spiritual in a sense, to exist in a quantum state of zero energy, as
> nothing, as a void, open to truly random energy. This is theoretically
> possible in mutative machines too.
>
> Schrödinger is believed to have considered only two instances of the same
> cat. I think he had a few more instances in mind. As Hawking recently
> stated, we need to move into borderless dimensions, implying the void, or
> possibly the universal vortex.
>
> I hope my comment was helpful.
>
> Rob
>
> ------------------------------
> Date: Tue, 17 Feb 2015 14:03:41 -0600
>
> Subject: Re: [agi] Couple thoughts
> From: [email protected]
> To: [email protected]
>
> My problem is with the definitions of Intelligence, AI, and AGI. Can you
> define that?
>
>
> Here are my definitions:
>
> *Observational Intelligence*: Construction, in the limit, of a predictive
> model of the environment, through observation, which can be used to
> generate simulations that are homomorphic to the environment under the
> transformation of sensory projection. (This constitutes *understanding*,
> without *decision making*.)
>
> *Behavioral Intelligence*: Construction of a mapping from model or
> environment states to behaviors to maximize the value of a reward signal or
> the probability of accomplishing a goal or set of goals. (This constitutes
> *decision making*, without *understanding*.)
>
> *General Intelligence*: The entrainment of observational intelligence
> towards maximization of behavioral intelligence. (This is *decision making
> based on understanding*.)
>
> The distinction between observational and behavioral intelligence, at a
> high level, roughly parallels that of classification versus construction,
> and of generative versus discriminative models. The reason compression
> comes into the picture, as mentioned by PM regarding classification, is
> two-fold: A compressed model reduces utilization of scarce computational
> resources, and reduces the dimensionality of the parameter space the
> behavioral system must contend with while learning to make effective
> decisions.
>
>
> On Tue, Feb 17, 2015 at 7:48 AM, Telmo Menezes via AGI <[email protected]>
> wrote:
>
>
>
> On Tue, Feb 17, 2015 at 10:24 AM, Piaget Modeler via AGI <[email protected]>
> wrote:
>
> *Classification:* given a set of inputs, return a distinct output that
> compresses the information of the input into a smaller set of values.
>
> Classification tasks can be done with neural networks, fuzzy logic, case
> based reasoning, specialized compression, etc.
>
> *Construction:* given an initial state, a set of operations, and a goal
> state, return a sequence of operations that transforms the initial state
> into the goal state.
>
>
> Right, I have no problem with the definitions of Classification and
> Construction. My problem is with the definitions of Intelligence, AI, and
> AGI. Can you define that? We need those definitions to be able to judge if
> Classification and Construction are necessary and sufficient for AGI.
>
>
>
> Construction tasks can be done with planning algorithms (state space
> search, plan space search, hierarchical search, etc.).
>
>
>
> Both approaches *are* used in complex AI applications.
>
>
> Yes, and if you look at what we know about how the human brain works, you
> can easily argue that the brain does Classification and Construction. What
> we don't know is if this will turn out to be a useful distinction. For
> example, I can conceive of an ANN being trained to do classification and
> construction at the same time, and without any well defined borders (as I
> suspect happens in the brain).
>
> Or you could argue that Construction is all that's happening and that
> classification is just a detail to help construction (along with learning,
> random exploration, whatever).
>
> Or...
>
> My point is, this is just a model. Models aren't really right/wrong as
> much as they are useful or not.
>
> Telmo.
>
>
>
> ~PM
>
>
> ------------------------------
> Date: Tue, 17 Feb 2015 09:03:40 +0100
> Subject: Re: [agi] Couple thoughts
>
> From: [email protected]
> To: [email protected]
>
>
>
> On Tue, Feb 17, 2015 at 8:42 AM, Piaget Modeler via AGI <[email protected]>
> wrote:
>
>  I was taught that in AI there are two primary tasks, Classification and
> Construction.
>
> Please correct me where I'm wrong, anyone.  I like to learn.
>
>
> There has always been a lot of debate about what AI is. We don't even have
> anything close to a consensus on a good definition of "intelligence". This
> leads me to suspect that the main problem with AI is that we don't have a
> well-defined problem to tackle, but that's a broader issue.
>
> Sure, "Classification and Construction" is not so bad. It's not a matter of
> being right or wrong. There are thousands of plausible alternatives to
> this. You pick a model and run with it, but let's not pretend we are
> dealing with some super-objective definition.
>
>
>
> Deep Learning and (many other methods) are good at classification tasks.
>
> We also need methods good at construction tasks (i.e. plan generation).
>
>
> This "also need" mentality could be the problem. Maybe what we need is
> something that can holistically perform both types of tasks.
>
> Suppose you take Deep Blue. It can play chess really well, a skill that
> was up to then associated with humans. But then someone says: wait, humans
> are also usually good at driving cars. Then you merge Google cars and Deep
> Blue and claim to be closer to AGI? Does this make any sense? Do you see
> the problem?
>
> Best,
> Telmo.
>
>
>
> ~PM
>
> > Date: Mon, 16 Feb 2015 16:09:00 -0800
> > Subject: [agi] Couple thoughts
> > From: [email protected]
> > To: [email protected]
> >
> > I had a couple of things running through my mind --
> >
> > 1) "Deep learning algorithms are very good at one thing today:
> > learning input and mapping it to an output. X to Y. Learning concepts
> > is going to be hard." Andrew Ng.
> >
> > I guess I take that to be an acid test of where the big guys are with
> concepts.
> >
> > 2) "brain inspired", "physics inspired", "math inspired," X-inspired,
> > etc-inspired, hybrid-inspired...
> >
> > It seems all AGI approaches take the "inspired by" approach. The only
> > approach that is not deliberately inspired by some discipline, but
> > aspires to the actual thing, is Colin Hayes' approach.
> >
> > There is nothing wrong with the "inspired by" approach, of course.
> >
> > Mike
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> > Modify Your Subscription: https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com


