Re: [agi] Growing Knowledge

2018-09-13 Thread Basile Starynkevitch
On Wed, 12 Sep 2018 at 15:26, Nanograte Knowledge Technologies via AGI
<agi@agi.topicbox.com> wrote:


Jim

Bootstrapping a computational platform with domain knowledge
(seeding with insights), was already done a few years ago by the
ex head of AI research in France. I need to find his blogs again,
but apparently he had amazing results with regard to re-solving
classical mathematical problems.

Even with all its bureaucracy, France does not (officially) have, and has never 
had, a single head of AI research.



However, regarding bootstrapping AI, I guess you are referring to Jacques 
Pitrat. He is a pioneer of French AI and a retired academic (he was 
probably born just before WW2); he was a top-level Directeur de 
recherches at CNRS and is still working on bootstrapping his CAIA 
system. On his blog, 
http://bootstrappingartificialintelligence.fr/WordPress3/ he describes his 
views on AI and his system.



Cheers.


--


Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T032c6a46f393dbd9-M0568289a62ed5c9f5f79a2f2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Jim Bromer via AGI
https://www.nytimes.com/2018/02/02/science/plants-consciousness-anesthesia.html?module=Promotron=Body=click=article
Jim Bromer
On Thu, Sep 13, 2018 at 8:01 PM Jim Bromer  wrote:
>
> Conscious experience - the soul or whatever it is - is not relevant to
> contemporary computer science. I do not agree with the dismissal of
> that feeling of experience either. As I told Marvin Minsky, I do agree
> that whatever conscious experience is, it probably has the potential to
> be explained by science one day, but right now it is a mystery.
>
> There are some complications of the experience of our existence, and
> those complications may be explained by the complex processes of mind.
> Since we can think, we can think about the experience of life and
> interweave the strands of the experience of our existence. But that
> does not mean that the essential experience can be explained by
> complicated thinking or some other dismissive denial. The processes of
> higher intelligence may shed light on the complexity problem but the
> experience of consciousness is irrelevant to AI because it is not
> strictly a computational thing. It cannot be reduced by our theories
> of mind or life which are currently available and which are certainly
> not part of computer science.
> Jim Bromer
>
> On Thu, Sep 13, 2018 at 4:15 PM  wrote:
> >
> > On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
> >
> > I don't even think that stuff is relevant.
> >
> >
> > Jim,
> >
> > It's relevant if consciousness is the secret sauce, and if it applies to 
> > the complexity problem.
> >
> > Would a non-conscious entity have a reason to develop AGI?
> >
> > John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-Md9052ffefe7a514746abaa67
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Jim Bromer via AGI
Conscious experience - the soul or whatever it is - is not relevant to
contemporary computer science. I do not agree with the dismissal of
that feeling of experience either. As I told Marvin Minsky, I do agree
that whatever conscious experience is, it probably has the potential to
be explained by science one day, but right now it is a mystery.

There are some complications of the experience of our existence, and
those complications may be explained by the complex processes of mind.
Since we can think, we can think about the experience of life and
interweave the strands of the experience of our existence. But that
does not mean that the essential experience can be explained by
complicated thinking or some other dismissive denial. The processes of
higher intelligence may shed light on the complexity problem but the
experience of consciousness is irrelevant to AI because it is not
strictly a computational thing. It cannot be reduced by our theories
of mind or life which are currently available and which are certainly
not part of computer science.
Jim Bromer

On Thu, Sep 13, 2018 at 4:15 PM  wrote:
>
> On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
>
> I don't even think that stuff is relevant.
>
>
> Jim,
>
> It's relevant if consciousness is the secret sauce, and if it applies to the 
> complexity problem.
>
> Would a non-conscious entity have a reason to develop AGI?
>
> John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-Mdf21c1f70ad4448e8dad71ae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
On Thu, Sep 13, 2018, 12:12 PM John Rose  wrote:

> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > We could say that everything is conscious. That has the same meaning as
> > nothing is conscious. But all we are doing is avoiding defining
> something that is
> > really hard to define. Likewise with free will.
>
>
> I disagree. Some things are more conscious. A thermostat might be
> negligibly conscious unless there are thresholds.
>

When we say that X is more conscious than Y, we really mean that X is more
like a human than Y.

> The problem is still there: how to distinguish between a p-zombie and a
> conscious being.
>

The definition of a p-zombie makes this impossible. This should tell you
something.

Qualia are what perception feels like. Your belief in qualia (correcting
"quality" in my previous email) is motivated mostly by positive reinforcement
of your perceptions.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M21610f6a969341d82c6edf49
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
On Thu, Sep 13, 2018, 4:15 PM  wrote:

> On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
>
> I don't even think that stuff is relevant.
>
>
> Jim,
>
> It's relevant if consciousness is the secret sauce, and if it applies to
> the complexity problem.
>

Jim is right. I don't believe in magic.

-- Matt Mahoney

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-Md968338373f4bdf5c563dbc2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Judea Pearl on AGI

2018-09-13 Thread EdFromNH via AGI
If Demis Hassabis, the current leader of Google's DeepMind AI subsidiary,
was able several years ago to create an artificially intelligent program
that could learn to play each of many different video games much better
than human players -- just from feedback from playing each such game --
then his program obviously had to be able to model the causal inference
inherent in whatever video game it was learning. So obviously there already
has been a lot of success in AI being able to do a good job at
automatically learning causal inference.
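For a concrete picture of what "learning just from feedback" means mechanically, here is a minimal, hypothetical tabular Q-learning sketch in Python. It is only an illustration of the reinforcement-learning idea: the DeepMind Atari agent used a deep network over raw pixels rather than a table, and the toy corridor environment, states, and rewards below are invented for the example.

```python
import random
from collections import defaultdict

# Tabular Q-learning on a toy 5-state corridor: the agent only ever sees a
# reward signal, never an explicit model of the "game".  All states, actions,
# and rewards here are invented for illustration.

ACTIONS = ["left", "right", "stay"]
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy: mostly exploit current estimates, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One feedback step: move Q toward reward + discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    """Toy environment: states 0..4, reward 1.0 only for reaching state 4."""
    if action == "right":
        next_state = min(4, state + 1)
    elif action == "left":
        next_state = max(0, state - 1)
    else:
        next_state = state
    return next_state, (1.0 if next_state == 4 else 0.0)

for episode in range(500):
    state = 0
    for _ in range(20):
        action = choose_action(state)
        next_state, reward = step(state, action)
        update(state, action, reward, next_state)
        state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)})  # learned policy
```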

On Thu, Sep 13, 2018 at 3:45 PM Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> Most interesting. Thanks for sharing. From the little I understand about
> this large body of work, this makes sense to me. However, I would contend
> that adopting - what is called by some - a network structure (closing
> loops in a 3-entity structure) would lead to confusing results.
>
> For example, one cannot reliably infer a vertex from that, which may then
> skew the rest of the structural results. I think it's a classical
> "copout" in systems design: when in doubt, close the loop to open the
> associative option, i.e., A => B, A => C, and B => C. Result: A indirectly
> causes C, but it was already inferred that A directly caused C. Did it, or
> didn't it?
>
> This would present as a self-made paradox, not so?
>
>
> --
> *From:* Robert Levy via AGI 
> *Sent:* Thursday, 13 September 2018 10:08 PM
> *To:* AGI
> *Subject:* [agi] Judea Pearl on AGI
>
> I don't think I've seen a discussion on this mailing list yet about
> Pearl's hypothesis that causal inference is the key to AGI.  His
> breakthroughs on causation have been in use for almost 2 decades.  The new
> Book of Why, other than being the most accessible presentation of these
> ideas to a broader audience, is interesting in that it expressly goes into
> applying causal calculus to AGI.
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-Mf8d761b549558b23eeb9b432
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Judea Pearl on AGI

2018-09-13 Thread Nanograte Knowledge Technologies via AGI
Most interesting. Thanks for sharing. From the little I understand about this 
large body of work, this makes sense to me. However, I would contend that 
adopting - what is called by some - a network structure (closing loops in a 
3-entity structure) would lead to confusing results.

For example, one cannot reliably infer a vertex from that, which may then skew 
the rest of the structural results. I think it's a classical "copout" in 
systems design: when in doubt, close the loop to open the associative option, 
i.e., A => B, A => C, and B => C. Result: A indirectly causes C, but it was 
already inferred that A directly caused C. Did it, or didn't it?

This would present as a self-made paradox, not so?
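To make the A => B, A => C, B => C question concrete, here is a minimal, hypothetical simulation of a linear structural model with exactly that triangle shape. Everything in it (the coefficients, the noise terms, the small `ols` helper) is invented for illustration; it only shows that, in the linear case, the direct A -> C effect and the indirect A -> B -> C path can be estimated separately, so "A directly causes C" and "A indirectly causes C through B" can both be read off the same structure.

```python
import numpy as np

# Toy linear structural model with the triangle shape discussed above:
# A -> B, A -> C (direct), and B -> C.  All coefficients and noise terms
# are invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
A = rng.normal(size=n)
B = 2.0 * A + rng.normal(size=n)             # A -> B
C = 3.0 * A + 1.5 * B + rng.normal(size=n)   # A -> C (direct) and B -> C

def ols(y, *covariates):
    """Least-squares coefficients of y on the covariates (intercept dropped)."""
    X = np.column_stack([np.ones(n), *covariates])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]

total_effect = ols(C, A)[0]      # ~ 3 + 2.0 * 1.5 = 6.0  (direct + indirect path)
direct_effect = ols(C, A, B)[0]  # ~ 3.0                  (B held fixed)

print(f"total effect of A on C:  {total_effect:.2f}")
print(f"direct effect of A on C: {direct_effect:.2f}")
# Both statements hold at once: A directly causes C *and* indirectly causes C
# through B; in this linear toy model, total = direct + indirect (3.0 + 3.0).
```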



From: Robert Levy via AGI 
Sent: Thursday, 13 September 2018 10:08 PM
To: AGI
Subject: [agi] Judea Pearl on AGI

I don't think I've seen a discussion on this mailing list yet about Pearl's 
hypothesis that causal inference is the key to AGI.  His breakthroughs on 
causation have been in use for almost 2 decades.  The new Book of Why, other 
than being the most accessible presentation of these ideas to a broader 
audience, is interesting in that it expressly goes into applying causal 
calculus to AGI.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-M4e059f68d10346e680f74b75
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread johnrose
On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
> I don't even think that stuff is relevant.

Jim,

It's relevant if consciousness is the secret sauce, and if it applies to the 
complexity problem.

Would a non-conscious entity have a reason to develop AGI?

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-M1cea9ea3e894df9dde086333
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Judea Pearl on AGI

2018-09-13 Thread Robert Levy via AGI
I don't think I've seen a discussion on this mailing list yet about Pearl's
hypothesis that causal inference is the key to AGI.  His breakthroughs on
causation have been in use for almost 2 decades.  The new Book of Why,
other than being the most accessible presentation of these ideas to a
broader audience, is interesting in that it expressly goes into applying
causal calculus to AGI.
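For anyone who has not seen Pearl's machinery in action, below is a minimal, self-contained Python sketch of the backdoor adjustment on a toy three-variable model (confounder Z, treatment X, outcome Y). The structure, the probability tables, and the function names are invented for illustration; this is not code from the Book of Why, just a small example of the gap between conditioning on X and intervening on X that the causal calculus formalizes.

```python
# Toy discrete model: confounder Z -> X, Z -> Y, plus X -> Y.
# All probabilities below are invented for illustration only.

P_z = {0: 0.6, 1: 0.4}                       # P(Z = z)
P_x_given_z = {0: {0: 0.7, 1: 0.3},          # P(X = x | Z = z), keyed as [z][x]
               1: {0: 0.2, 1: 0.8}}
P_y1_given_xz = {(0, 0): 0.1, (1, 0): 0.5,   # P(Y = 1 | X = x, Z = z), keyed as (x, z)
                 (0, 1): 0.4, (1, 1): 0.9}

def p_y1_do_x(x):
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) * P(z)."""
    return sum(P_y1_given_xz[(x, z)] * P_z[z] for z in P_z)

def p_y1_given_x(x):
    """Ordinary conditioning, which also picks up the confounding path via Z."""
    joint = {z: P_z[z] * P_x_given_z[z][x] for z in P_z}     # P(Z = z, X = x)
    p_x = sum(joint.values())
    return sum(P_y1_given_xz[(x, z)] * joint[z] for z in P_z) / p_x

for x in (0, 1):
    print(f"P(Y=1 | X={x})     = {p_y1_given_x(x):.3f}   (observational)")
    print(f"P(Y=1 | do(X={x})) = {p_y1_do_x(x):.3f}   (interventional)")
```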

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-M9e6c354c9f8ac56c414a651f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Jim Bromer via AGI
The problem has always been complexity. If that hadn't been a problem, the
paths to achieve AI - even a general AI - would be so numerous that it
would just be a normal programming project. It might take 10 or 20 years to
fully develop the first good models. As for Artificial Soul or
Artificial Consciousness or Artificial Essence of Life, I don't even think
that stuff is relevant.
Jim Bromer


On Wed, Sep 12, 2018 at 5:18 PM Nanograte Knowledge Technologies via AGI <
agi@agi.topicbox.com> wrote:

> Mike
>
> Sooner or later, someone will stumble upon the activating algorithms. From
> it will be born an artificial version of a prior intelligence and it would
> exist on a computational platform. I'm not sure how long it would take, but
> that is not relevant to my thinking. It will happen when it is meant to. I'd
> like to help the dream by inserting my part of research into an
> architectural blueprint. If we do not start, we'll never get there. I think
> I'm ready to depart on a conceptual and logical spec of version 0.1. Not
> all my own. Just doing the translation work.
>
> Rob
>
>
> *From:* Archbold via AGI 
> *Sent:* Wednesday, 12 September 2018 10:27 PM
> *To:* AGI
> *Subject:* Re: [agi] E=mc^2 Morphism Musings...
> (Intelligence=math*consciousness^2 ?)
>
> "Can you see the consciousness at work? Can you sense and immerse in
> it? Can you hear the myriad of messages clipping by? If so, you'll
> realize it is pervasive, endless, and not locality driven, not
> discrete. Like the waves of the ocean, the outcome of a whole universe
> conspiring to tell its tale."
>
> The above is pretty good. Actually that was basically where I was at
> last night when I decided AGI was impossible. A conclusion which I
> don't care about though... The problem is made worse with all the hype
> that leads people to believe the above is just this close to
> being automated. The enemy of the people here being the combinatorial
> explosion and the curse of dimensionality.
>
> On 9/12/18, John Rose  wrote:
> >> -Original Message-
> >> From: Nanograte Knowledge Technologies via AGI 
> >>
> >> Challenging a la Haramein? No doubt. But that is what the adventure is
> >> all
> >> about. Have we managed to wrap our minds fully round the implications of
> >> Mandelbrot's contribution? And then, there is so much else of science to
> >> revisit once the context of an AGI has been adequately "boundaried".
> >
> > Cheers to Mandelbrot not only for the math and science but for the great
> > related art and culture, and music even! Fractal music 
> >
> >> Imagine if "we" could engineer that (to develop an ingenious
> >> consciousness-
> >> based engine), which the vast majority of researchers claim cannot be
> >> done?
> >> Except for lack of specific knowledge and knowhow and an inadequate
> >> resource base (for now), I see no sound reason why such a feat would not
> >> be
> >> possible.
> >
> > Big project 
> >
> > IMO successful AGI will use consciousness functionally but won't call it
> > that since it causes so much hyperventilation. Researchers want
> > non-conscious AGI so it doesn't go rogue LOL. Hmmm wonder about that.
> Could
> > non-conscious go rogue anyway... and is non-conscious even possible.
> >
> > John
> >

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-M0e2bcb708f964bf532c4c045
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My thoughts on the stages of research

2018-09-13 Thread Jim Bromer via AGI
Yes it is. This is what I believe is a basis of human learning. Of
course we get a lot of outside help, but the value of education (or
instruction) is based on the ability of the human being to be able to
integrate what is being taught (or pointed out). While that seems to
be a little beyond current AI, I think it is clear that AI is already
able to learn so it is just a question of being able to integrate
certain kinds of abstractions like those that must be used in
language.
For example, I should be able to create a synthetic language that
could be used like a programming language. Then it should be possible
to create a synthetic language that does not specify all the details
of a program but which can point to ideas (idea objects or subject
objects), and then the program can relate this new idea to the subject
matter being pointed to (a rough sketch follows below). This was done
in early AI but they
quickly ran into a complexity barrier. Those barriers have expanded in
the last 40 years but little research is being done with these methods
because most researchers have to play it safe so they become followers
of whatever is currently working. The point of view that I just
expressed is that we have to use the results of advancements but they
do not always lead directly to other revolutionary advances because
incremental advancements are a necessary part of revolutionary
science. But these advancements have to be based on the intelligent
use of directed imagination and actual experimentation. This
experimentation may be directed at sub-goals. But, then the results of
the experiments and the nature of the sub-goals have to be analyzed.
Was the sub-goal an actual prerequisite of the project goal? Or was it
just a feasibility test, where the sub-goal may lack some features of
the goal (like scale), so that while it may be a prerequisite of
understanding or advancement it is more a step in the development
of the research project than a substantial step in the
production of a successful stage of development?
This is a rough map of how learning might take place in a feasible
concrete AI program. Notice how outside guidance would be so useful in
this process that it is almost a design necessity. And yet a human being
would not be able to provide every detail to the program even if he
wanted to try. To some extent, a great extent, the program would have
to be capable of some true learning.
Jim Bromer
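Below is a very rough, hypothetical sketch of the "synthetic language that points to idea objects" mentioned above. The three-token statement form, the class names, and the example facts are all invented for illustration; the only point is that a statement can omit the details of a program and merely point at idea objects, leaving the program to relate the new idea to the subject matter being pointed to.

```python
# Toy "synthetic language" whose statements merely point at idea objects.
# The three-token statement form, class names, and example facts are all
# invented for illustration.

class Idea:
    """An 'idea object': a named node plus links to related subject matter."""
    def __init__(self, name):
        self.name = name
        self.relations = {}                  # relation label -> set of Idea objects

    def relate(self, label, other):
        self.relations.setdefault(label, set()).add(other)

class IdeaSpace:
    """Registry that the synthetic-language statements point into."""
    def __init__(self):
        self.ideas = {}

    def get(self, name):
        if name not in self.ideas:
            self.ideas[name] = Idea(name)
        return self.ideas[name]

    def run(self, statement):
        # Statement form (toy): "<idea> <relation> <idea>", e.g. "dog is-a animal".
        # The statement specifies no program details; it only points at ideas and
        # leaves the program to integrate the new relation.
        subject, relation, target = statement.split()
        self.get(subject).relate(relation, self.get(target))

space = IdeaSpace()
for line in ["dog is-a animal", "animal has-part heart", "dog chases ball"]:
    space.run(line)

dog = space.get("dog")
print({label: {idea.name for idea in targets}          # what the program now knows
       for label, targets in dog.relations.items()})   # about the pointed-at ideas
```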

On Thu, Sep 13, 2018 at 11:47 AM Stefan Reich via AGI
 wrote:
>
> Is this relating to anything concrete? I'm having a hard time processing 
> abstract essays like that...
>
> Cheers
>
> On Thu, 13 Sep 2018 at 17:42, Jim Bromer via AGI  wrote:
>> 
>> The first stage of learning something new is mostly trial and error.
>> Of course you have to understand some prerequisites before you are
>> capable of learning something new. Simplification is useful at this
>> stage even though it might get in the way. Idealization is a method
>> which you can use to initially create some rough metrics (or something
>> that can be used in ways similar to metrics.) Exaggeration and
>> simplification have some similarities to idealization and so they are
>> useful in this process. The next stage requires that you look at your
>> results and begin to analyze them. Although idealization and
>> simplification are important tools, if they are used inappropriately
>> they can create some interference in the process. The process of
>> analysis is used to find core concepts (or core abstractions) which
>> might be useful in discovering what went wrong or developing new
>> ideas. Adaptation is a necessary component of new learning. This is
>> the stage when stubborn adherence to some initial idealization or
>> simplification may really interfere in the process of new learning.
>> While you need to continue using simplifications and idealizations, if
>> your simplifications are stuck in the primitive mode they were in
>> during the initial stage of research they will probably interfere in
>> finding an effective adaptation. The next step is to examine some
>> sub-goals which might be useful to discover what seem like necessary
>> pre-requisites for the ultimate goal. Again, you may find that the
>> abstractions and core features of a problem or a hypothetical solution
>> that you thought you understood may be inaccurate. So you may need to
>> refine your ideas about the core features of the problem just as you
>> have to rethink the solutions that you thought might work. I have
>> found that at a later stage of work you may find that you may make
>> advances on sub-goals that go way past what you did at an earlier
>> stage. This recognition may also serve as a kind of metric. Even
>> though you may not have made any substantial progress toward the
>> project goal, the fact that you have made an unexpected advancement in
>> a sub-goal may indicate that it is something worth looking into. Over
>> a period of time, the work which has been done to idealize and
>> 

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> We could say that everything is conscious. That has the same meaning as
> nothing is conscious. But all we are doing is avoiding defining something 
> that is
> really hard to define. Likewise with free will.


I disagree. Some things are more conscious. A thermostat might be negligibly 
conscious unless there are thresholds.


> We will know we have properly modeled human minds in AGI if it claims to be
> conscious and have free will but is unable to tell you what that means. You 
> can
> train it as follows:
> 
> Positive reinforcement of perception trains belief in quality.
> Positive reinforcement of episodic memory recall trains belief in
> consciousness.
> Positive reinforcement of actions trains belief in free will.


I agree. This will ultimately make a p-zombie which is fine for many situations.

The problem is still there: how to distinguish between a p-zombie and a 
conscious being. 

Solution: Protocolize qualia. A reason for Universal Communication Protocol 
(UCP) is that it scales up.

Then you might say that p-zombies can use machine learning to mimic 
protocolized qualia in order to deceive. And they can, from past communications.

But what they cannot do is generally predict qualia. And you should agree with 
that, a la Legg's proof.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mc02d54a4317de005468e466e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My thoughts on the stages of research

2018-09-13 Thread Stefan Reich via AGI
Is this relating to anything concrete? I'm having a hard time processing
abstract essays like that...

Cheers

On Thu, 13 Sep 2018 at 17:42, Jim Bromer via AGI 
wrote:

> The first stage of learning something new is mostly trial and error.
> Of course you have to understand some prerequisites before you are
> capable of learning something new. Simplification is useful at this
> stage even though it might get in the way. Idealization is a method
> which you can use to initially create some rough metrics (or something
> that can be used in ways similar to metrics.) Exaggeration and
> simplification have some similarities to idealization and so they are
> useful in this process. The next stage requires that you look at your
> results and begin to analyze them. Although idealization and
> simplification are important tools, if they are used inappropriately
> they can create some interference in the process. The process of
> analysis is used to find core concepts (or core abstractions) which
> might to be useful in discovering what went wrong or developing new
> ideas. Adaptation is a necessary component of new learning. This is
> the stage when stubborn adherence to some initial idealization or
> simplification may really interfere in the process of new learning.
> While you need to continue using simplifications and idealizations, if
> your simplifications are stuck in the primitive mode they were in
> during the initial stage of research they will probably interfere in
> finding an effective adaptation. The next step is to examine some
> sub-goals which might be useful to discover what seem like necessary
> pre-requisites for the ultimate goal. Again, you may find that the
> abstractions and core features of a problem or a hypothetical solution
> that you thought you understood may be inaccurate. So you may need to
> refine your ideas about the core features of the problem just as you
> have to rethink the solutions that you thought might work. I have
> found that at a later stage of work you may find that you may make
> advances on sub-goals that go way past what you did at an earlier
> stage. This recognition may also serve as a kind of metric. Even
> though you may not have made any substantial progress toward the
> project goal, the fact that you have made an unexpected advancement in
> a sub-goal may indicate that it is something worth looking into. Over
> a period of time, the work which has been done to idealize and
> simplify, test and experiment, analyze and adapt, and refine the
> idealizations and abstractions about both the problem and possible
> solutions should help you to understand the nature of the problem
> and the nature of what a solution may look like. I believe that
> incremental advances are necessary for revolutionary advances in
> science because they are the basis for revolutionary advancements. But
> you have to have some experience focusing your imagination on actual
> experiments to appreciate the significance of the adaptation of
> simplification, ideals, and abstraction.
> Jim Bromer


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td2f16e9693de44aa-M10021cd5cc289388367c3693
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] My thoughts on the stages of research

2018-09-13 Thread Jim Bromer via AGI
The first stage of learning something new is mostly trial and error.
Of course you have to understand some prerequisites before you are
capable of learning something new. Simplification is useful at this
stage even though it might get in the way. Idealization is a method
which you can use to initially create some rough metrics (or something
that can be used in ways similar to metrics.) Exaggeration and
simplification have some similarities to idealization and so they are
useful in this process. The next stage requires that you look at your
results and begin to analyze them. Although idealization and
simplification are important tools, if they are used inappropriately
they can create some interference in the process. The process of
analysis is used to find core concepts (or core abstractions) which
might be useful in discovering what went wrong or developing new
ideas. Adaptation is a necessary component of new learning. This is
the stage when stubborn adherence to some initial idealization or
simplification may really interfere in the process of new learning.
While you need to continue using simplifications and idealizations, if
your simplifications are stuck in the primitive mode they were in
during the initial stage of research they will probably interfere in
finding an effective adaptation. The next step is to examine some
sub-goals which might be useful to discover what seem like necessary
pre-requisites for the ultimate goal. Again, you may find that the
abstractions and core features of a problem or a hypothetical solution
that you thought you understood may be inaccurate. So you may need to
refine your ideas about the core features of the problem just as you
have to rethink the solutions that you thought might work. I have
found that at a later stage of work you may find that you may make
advances on sub-goals that go way past what you did at an earlier
stage. This recognition may also serve as a kind of metric. Even
though you may not have made any substantial progress toward the
project goal, the fact that you have made an unexpected advancement in
a sub-goal may indicate that it is something worth looking into. Over
a period of time, the work which has been done to idealize and
simplify, test and experiment, analyze and adapt, and refine the
idealizations and abstractions about both the problem and possible
solutions should help you to understand the nature of the problem
and the nature of what a solution may look like. I believe that
incremental advances are necessary for revolutionary advances in
science because they are the basis for revolutionary advancements. But
you have to have some experience focusing your imagination on actual
experiments to appreciate the significance of the adaptation of
simplification, ideals, and abstraction.
Jim Bromer

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td2f16e9693de44aa-M4302b77798b74fb8f396212f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
We could say that everything is conscious. That has the same meaning as
nothing is conscious. But all we are doing is avoiding defining something
that is really hard to define. Likewise with free will.

We will know we have properly modeled human minds in AGI if it claims to be
conscious and have free will but is unable to tell you what that means. You
can train it as follows:

Positive reinforcement of perception trains belief in quality.
Positive reinforcement of episodic memory recall trains belief in
consciousness.
Positive reinforcement of actions trains belief in free will.

These are the things that make life better than death, which is good for
reproductive fitness.
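Read literally, the three-line training scheme above can be sketched as a toy program: three scalar "belief" strengths, each nudged whenever the corresponding kind of event receives positive reinforcement (reading "quality" as "qualia", per the correction elsewhere in the thread). The event stream, the learning rate, and the update rule are invented for illustration; the sketch only shows that such beliefs can be trained up without ever being defined.

```python
# Three scalar "belief" strengths, each nudged when the matching kind of event
# is positively reinforced.  The event stream, learning rate, and update rule
# are invented for illustration.

LEARNING_RATE = 0.05

beliefs = {"qualia": 0.0, "consciousness": 0.0, "free_will": 0.0}

EVENT_TO_BELIEF = {          # which kind of reinforced event trains which belief
    "perception": "qualia",
    "episodic_recall": "consciousness",
    "action": "free_will",
}

def reinforce(event_kind, reward):
    """Nudge the associated belief toward 1.0 in proportion to positive reward."""
    if reward > 0:
        b = EVENT_TO_BELIEF[event_kind]
        beliefs[b] += LEARNING_RATE * reward * (1.0 - beliefs[b])

# Hypothetical experience stream of (event kind, reward signal) pairs.
experience = [("perception", 1.0), ("action", 0.5),
              ("episodic_recall", 1.0), ("perception", 1.0)] * 50

for event_kind, reward in experience:
    reinforce(event_kind, reward)

print(beliefs)   # all three drift toward strong "beliefs" with no definition attached
```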

On Wed, Sep 12, 2018, 9:21 AM John Rose  wrote:

> > -Original Message-
> > From: Matt Mahoney via AGI 
> >
> > I don't believe that my thermostat is conscious. Or let me taboo words
> like
> > "believe" and 'conscious". I assign a low probability to the possibility
> that my
> > thermostat has a homunculus or an immortal soul or a little person
> inside it
> > that feels hot or cold. I assign a low probability that human brains
> have these
> > either. When we look inside, all we see are neurons.
>
> The thermostat in a tiny binary way I would say is conscious. And I
> speculate has a tiny bit of free will.
>
> It's hard to imagine but there are situations where the thermostat would
> choose to save itself versus getting destroyed. How? Causal feedback into
> its own negentropic complexity. It has a slight preference to exist. This
> probably could be calculated...
>
> Note: I'm more thinking about thermostats that control heat and cold,
> furnace and AC. Not sure about AC-only thermostats in warm areas 
>
> >
> > Your argument that I am conscious is to poke me in the eye and ask
> whether I
> > felt pain or just neural signals. My reaction to pain must either be
> real or it
> > must be involuntary and I lack the free will to ignore it. Well guess
> what. Free
> > will is an illusion too. If you don't believe me, then define it.
> Something you
> > can apply as a test to humans, thermostats, dogs, AI, etc. I'll wait...
> >
> 
> Part of free will is choosing to be responsible for your actions. Meaning?
> Our actions are a discourse with the environment and other agents (i.e.,
> people, animals, etc.). In conscious existence there are choices since we
> believe other agents might be similarly conscious like ourselves even
> though they could be zombies, we err on the positive side. For example, my
> neighbor might actually feel pain so I don't maim him and take all his
> consciousness enhancing feel-good things including food, money, women,
> drugs 
> 
> I don't know if this answers your questions but perhaps is in the
> direction of...
> 
> John
> 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M2bfd453d99bc0c114e424986
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Massive Bacteriological Consciousness - Gut Homunculi

2018-09-13 Thread Stefan Reich via AGI
On Thu, 13 Sep 2018 at 04:02, Logan Streondj via AGI 
wrote:

> Personally, I'm a monist; dualism has too many problems.
>

What's a monist?

>
> Everything is consciousness,
> all the things we experience (i.e. photons and fermions),
> are just conscious entities communicating to each other.
>

That's REALLY stretching the word "conscious" though, isn't it?

Sometimes I unconsciously do things and later they enter my consciousness.
Consciousness consists of thoughts. What thoughts does a photon have in
your opinion?

Consciousness has clearly defined domains too. I would say Stockfish is
somewhat conscious about chess moves as it deliberately chooses some of
them. However, it is not at all conscious about the fact that it runs
inside a box with a keyboard.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te2ef084a86d2a11e-Me9c2c0ebe594d62fcaeb6c21
Delivery options: https://agi.topicbox.com/groups/agi/subscription