Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread Nanograte Knowledge Technologies
Matt

Suppose we're not living in a simulation, but in a future reality we created 
for ourselves by individually and collectively applying our thoughts, will, 
and actions in a superstate of the spacetime continuum, one which may well exist 
between the past and the future? The superstate could be at zero-point energy. 
Possibility would exist there.



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M3482aad723dcf1c15bb3b3f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread Nanograte Knowledge Technologies
Matt

The irony to me is that whatever a human believes in is conditional. For 
example: "We believe in science, but we don't believe in possibility." 
Strangely though, it is exactly that inherent, relativistic effect in every 
human which should give rise to possibility. We do not all have to agree on 
the assumption that all the laws of how the universe works are known to science. We 
already know - because of science - that not all laws are known; for example, 
the laws governing dark matter. This fact opens up possibilities for re-examining and 
challenging conventional science.

Bear in mind that we cannot simply discard what is known as "irrelevant". For 
example, we cannot simply conclude that all light spectra do not travel through 
a vacuum at a constant speed.

The process of science allows for the re-examination of published facts. 
However, there is an acceptable process for doing so. This is t








Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread johnrose
Qualia flow, the dots are qualia :)
https://www.youtube.com/watch?v=vw9vjEB1S2Y

Transform into text:
https://www.youtube.com/watch?v=myFR8FTXOM4

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me732f7b91cd5f1781446e973
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread johnrose
Our minds simulate almost everything. We can imagine a model where a 
spaceship goes from Earth to Pluto in one second, virtually breaking the speed of 
light (we know it doesn't really). My thought was that consciousness is the one 
piece of the mind that isn't a model or a simulation. And for a machine-based 
being, self-awareness could be the act of occupying a representation of itself, 
of itself, of itself, of itself...
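One toy reading of "occupying a representation of itself of itself..." is an agent that can unfold its own self-model to any finite depth. A minimal Python sketch, purely illustrative; the `Agent` class and its attributes are hypothetical, not anything proposed in this thread:

```python
class Agent:
    """Toy agent whose self-model can be unfolded to any finite depth.

    Each level of the returned structure is a representation of the
    level below it: a model of a model of ... of the agent itself.
    Purely illustrative; no claim that this captures self-awareness."""

    def __init__(self, name):
        self.name = name

    def self_model(self, depth):
        if depth == 0:
            return self.name            # the 'base' being represented
        # A representation that contains a representation of itself...
        return {"models": self.self_model(depth - 1)}

a = Agent("me")
print(a.self_model(3))  # {'models': {'models': {'models': 'me'}}}
```

The regress is only ever finite here; whether an infinite or fixed-point version is what self-awareness amounts to is exactly the open question in the thread.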

Holistically, sure, we could be in a video game that some teenager in an 
advanced technological future left running on his computer overnight. Or this 
really could be it, it is what it is: base reality. Who knows. I have my suspicions, like:

1) As our consciousness expands, we are creating the universe, or something is 
creating it.
2) We are beings somehow injected into this existence from some other reality.
3) We are just part of a larger, information-based, trans-dimensional creature 
structured on DNA, where we are instance nodes of that informational being.

Or each of the above is true. And they are, in some way.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M8a5fba12af1e12a3d9f417e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-02 Thread immortal . discoveries
Typo - *it is true.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M1b1996cd3bd777d6b2af45f0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-02 Thread immortal . discoveries
Thought isn't faster than light; brain waves don't move that fast. And humans 
are much slower than computers at doing tasks. This is part of our society, it 
is true.

Of the 4 statements Matt gave, either we are in a sim or in base physics.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M25f8079cbf1081691369cb0a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-02 Thread Matt Mahoney
If we are living in a simulation then of course anything is possible. It
isn't a law that nothing is faster than light. It's an observation. Here
are at least 4 possibilities, listed in decreasing order of complexity, and
therefore increasing likelihood if Occam's Razor holds outside the
simulation.

1. Only your brain exists. All of your sensory inputs are simulated by a
model of a non-existent outside world. The universe running this simulation
is completely different and we can know nothing about it. It might be that
space, time, matter, and life are abstract concepts that exist only in the
model.

2. Your mind, memories, and existence are also simulated. You didn't exist
one second ago.

3. The observable universe is modeled in a few hundred bits of code tuned
to allow intelligent life to evolve. In this case the speed of light is in
the code.

4. All possible universes with all possible laws of physics exist, and we
necessarily observe one that allows intelligent life to evolve.

There may be other possibilities that a simulation wouldn't allow us to
imagine.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M35540f43ca4f7b08a2d379b1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-02 Thread Nanograte Knowledge Technologies
It's easier to break a mold than to glue it back together again.

Let's not make science our god, nor our demon. It's still up to us to procure 
meaning from chaos, or at least to describe such an observation via an 
appropriate qualitative and quantitative language.

I'm in favor of experimental science, but there has to be discipline 
involved, else it's just hacking away till you've deforested the forest.

Your point on the speed of light? Most interesting. Perhaps a more practical 
example of "going faster than the speed of light" would be useful. I suppose 
it'll all end in a static spot of white light, wouldn't it? Just as it began.

Ask yourself this: would an AGI entity be having this kind of discussion, and 
if so, how would it flow?





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M9b29a1bfee52310754e0969b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-01 Thread johnrose
On Friday, August 30, 2019, at 2:31 AM, Nanograte Knowledge Technologies wrote:
> But, I strongly disagree with the following statement, for it contains an 
> inherent contradiction. 
>  
>  "It is allowed to break physics or invent new ones in a virtual world." 
> 
> No, they should not be allowed. The definition of engineering, as putting 
> method to science, denounces such anarchism. Engineers have to take method 
> and use it in context of science. If no science exists yet, they seemingly 
> have the obligation to try equally hard to develop and formalize it.

What I meant was, for example, that old saying: what goes faster than the speed of 
light? Thought. I always considered that stupid, but it actually isn't. If you 
have models in a software virtual world, they can break all kinds of physics 
(and mathematics) in an attempt to shortcut to solutions and/or model more 
accurately with existing resources.

A few wise Yogi Berra quotes:
"In theory there is no difference between theory and practice. In practice 
there is."
"We made too many wrong mistakes."
"If the world was perfect, it wouldn’t be."

What is one way to bypass combinatorial explosions? Break rules. Shhh, it's a 
secret :) and it's OK. That's how things work.

John
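John's rule-breaking point has at least one well-established concrete form in search: a model that deliberately breaks the world's physics (here, pretending walls don't exist) gives A* an admissible heuristic, which is exactly what lets it sidestep a combinatorial explosion. A minimal Python sketch, purely illustrative; the grid and names are mine, not anything from this thread:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid of 0 (open) / 1 (wall) cells.

    The heuristic comes from a deliberately rule-breaking 'virtual
    physics': a relaxed model in which walls do not exist (plain
    Manhattan distance). Because the relaxed model can only be
    optimistic, the heuristic is admissible, so the search skips
    most of the space yet still returns the true shortest path."""
    def h(p):
        # Relaxed model: pretend movement through walls is legal.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable under the real rules

# A wall (column of 1s) forces a detour the relaxed model ignores.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))  # -> 6: the real detour length
```

The relaxed model is "wrong" on purpose; its only job is to be cheaply optimistic, which is one precise sense in which breaking physics inside a virtual world shortcuts the way to a solution.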

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Ma37955495624271bf462819d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-30 Thread Nanograte Knowledge Technologies
John

For once, I fully agree with you. This is perhaps not just a part of the 
problem, but probably one of the biggest problems facing general AGI progress:
"Part of the issue here is that engineers particularly software cannot wait 
for science in many cases."

But, I strongly disagree with the following statement, for it contains an 
inherent contradiction.

"It is allowed to break physics or invent new ones in a virtual world."

No, they should not be allowed. The definition of engineering, as putting 
method to science, denounces such anarchism. Engineers have to take method and 
use it in context of science. If no science exists yet, they seemingly have the 
obligation to try equally hard to develop and formalize it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mbe81c82d9877b83423840c4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread immortal . discoveries
@Matt We think the same
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mca07bd6601dd56a5ab108f57
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread Matt Mahoney
On Thu, Aug 29, 2019, 7:39 AM  wrote:

> On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
>
> Like I said when I first posted on this thread, phenomenal consciousness
> is neither necessary nor sufficient for an intelligent system.
>
>
> This is the premise that you are misguided by. Who is building the
> intelligent systems? Grunts who happen to have phenomenal
> consciousness, not the opposite.
>

Phenomenal consciousness is what thinking feels like. This feeling evolved
because it motivates you to not die and let those feelings stop. It doesn't
require any new physics. It can be explained entirely by neural
computation. We used to call it a soul, but those of us who understand
computers know better.

Likewise, qualia is what perception feels like, and free will is what
(deterministic) action feels like. These feelings also evolved so that you
would fear dying.

I realize it is disturbing to conclude that your mind is just a
computation. But if AGI is possible, then it has to be. There is no
objective evidence other than your own feelings that phenomenal
consciousness, qualia, or free will is anything else. And feelings are just
neural signals that modify your behavior.

Remember that the first objective of AGI is to reproduce human capabilities
needed to automate work. One of those capabilities is modeling human
behavior so that AGI can communicate more effectively with its masters.
Phenomenal consciousness, qualia, and free will don't affect human
behavior, but our opinions about them do have observable effects that need
to be modeled. An AGI should not have emotions, but should understand how
they work in humans in order to accurately predict behavior.

The second objective of AGI is to extend life. Brains wear out and will
eventually need to be replaced with functionally equivalent devices, just
like all our other organs. The simplest way to do this is to develop a
model of your mind through years of observation and program a robot that
looks like you to carry out the model's predictions of your actions in real
time. As far as anyone can tell, the robot is you, but in a substrate where
your mind can be backed up to the internet.

FAQ:

Q. If I shoot myself, will I wake up as a robot?
A. Yes, because you will be programmed to believe that.

Q. Won't I just be pretending to have feelings?
A. No, you will have no memory of pretending to act out feelings you don't
have, so the feelings will seem real. That is how your brain works now.

Q. Won't I lose free will?
A. No. Free will is an illusion. Your model, like your brain, will still be
programmed to express this illusion while behaving deterministically.

Q. Won't I lose qualia?
A. No. Qualia is an illusion. You will still respond to input the same way
and you will still express a sensation of qualia.

Q. Won't I be a philosophical zombie?
A. No. There is no such thing as a zombie. You will still be able to
imagine such a thing and honestly claim not to be one.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mb8f87ae225ca8091e99a22ee
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
Clarified:

AGI={I,C,M,PSI}={I,UCP+OR,M,BB}; BB=Black Box

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M3849c56767c291ea6a534cf9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 6:32 AM, Nanograte Knowledge Technologies 
wrote:
> Qualia are communicable.
> As such, I propose a new research methodology, which pertains to one-off
> valid and reliable experimentation when dealing with the "unseen". The
> "public" and "repeat" tests for vetting it as science could be replaced by a
> suitably-representative body of reviewing scientists who are accredited in
> the limitations of subjective, scientific observation.

Originally I did not like the word "qualia", but it's actually quite good. When 
Chalmers, or whoever named it, put his or her stake on that location in the 
language, on that combination of letters, it was a good choice.

Part of the issue here is that engineers particularly software cannot wait for 
science in many cases. It is allowed to break physics or invent new ones in a 
virtual world. And engineers need words to put into code. Also, there are many 
symbol issues in contemporary language that have not been addressed generally. 
So two conscious entities need better communications channels to convey 
structure more efficiently, and this is easier to do among software agents 
versus humans, by expanding the symbol complexity and bandwidth. In a perfect 
world, full qualia would be instantly transmittable. But this is facilitated 
contemporarily by transmitting multimedia versus just natural language, hence 
the addition of mechanisms like MMS, video conferencing, realtime document 
sharing, etc.
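The point about software agents needing higher-bandwidth, more structured channels than natural language can be shown with a trivial contrast. A Python sketch; the state and its fields ("hue", "intensity", "valence") are hypothetical stand-ins, not a real model of qualia:

```python
import json

# Toy contrast between two channels for conveying an internal state.
# The state fields ('hue', 'intensity', 'valence') are hypothetical.
state = {"hue": [0.9, 0.1, 0.1], "intensity": 0.7, "valence": "alarming"}

# Structured channel: the chosen fields survive transmission exactly.
wire = json.dumps(state)
received = json.loads(wire)
assert received == state

# Natural-language channel: lossy; the numeric structure underneath
# the words cannot be recovered from the sentence alone.
sentence = "I see something red and alarming."
```

Neither channel transmits qualia; the structured one merely loses nothing of whatever representation the sender chose to encode, which is one reading of the "expanded symbol complexity and bandwidth" above.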

Some researchers say qualia cannot be transmitted. I would change that to say 
full qualia are not transmittable yet.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M75616ccb2a402d5bdda20964
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
> Like I said when I first posted on this thread, phenomenal consciousness is 
> neither necessary nor sufficient for an intelligent system.

This is the premise that you are misguided by. Who is building the intelligent 
systems? Grunts who happen to have phenomenal consciousness, not the 
opposite.

Well, I was thinking of calling all this Gloobledeglockedicnicty, or individually 
using 15 other terms every time it's mentioned. But my qualia on it fit better 
into the term "consciousness", and other grunts can relate better.  (Well some 
of them :) )

...

I also want to build an artificial heart. Oh nnooo, can't call it a heart. It 
doesn't feel love. Note: IMO the heart is an integral part of human intelligence.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mce6ed6677364c02685c2d5cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread Nanograte Knowledge Technologies
"Qualia are personal and incommunicable *by definition,*..."

I tend to disagree with the assertion that qualia are "incommunicable". Shall 
we revisit the definition for absolute proof?

Qualia are communicable. I have proven that using a scientific method. I'm 
referring to qualia here in the context of "tacit knowledge". It does not matter 
if the subject does not know what it knows, or even that it knows. If explicit, 
verifiable evidence of subjective experience could be expressed in a valid and 
reliable manner, as objective fact in the context of the holistic experience, it 
should pass as science.

However, the problem for science is: once subjectivity has been made objective, 
how could it be returned to a pure state of subjectivity for the experiment to 
be reliably replicated by others? That thought encapsulates many of the 
ambiguous problems bio-information science is seemingly struggling with, e.g., 
NP-hardness, ambiguity, and quantum-spin observations.

As such, I propose a new research methodology, which pertains to one-off valid 
and reliable experimentation when dealing with the "unseen". The "public" and 
"repeat" tests for vetting it as science could be replaced by a 
suitably-representative body of reviewing scientists who are accredited in the 
limitations of subjective, scientific observation.



From: WriterOfMinds 
Sent: Thursday, 29 August 2019 00:49
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

"You don’t know my qualia on red ... We may never know that your green is my 
red."
Great, seems like we've reached agreement on something.
When we communicate with words like "red," we're really communicating about the 
frequency of light. I would argue that we are not communicating our qualia to 
each other. If we could communicate qualia, we would not have this issue of 
being unable to know whether your green is my red. Qualia are personal and 
incommunicable *by definition,* and it's good to have that specific word and 
not pollute it with broader meanings.

In the mouse example, I was assuming that I had fully modeled the 
electro-mechanical phenomena in *this specific* mouse. I still don't think that 
would give me its qualia.

I would be happy to refer to a machine with an incommunicable first-person 
subjective experience stream as "conscious." But you've admitted that you're 
not trying to talk about incommunicable first-person subjective experiences, 
you're trying to talk about communication. I'm not concerned with whether the 
"consciousness" is mechanical or biological, natural or artificial; I'm 
concerned with whether it's actually "consciousness."

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M6c0065d6583e018c990255af
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
Cool, but ... I maintain that none of this is about consciousness.  Knowledge 
representation, abstraction and compression via symbolism, communication of 
structure, common protocols and standards for describing physical phenomena ... 
these are all intelligence tasks. Specifically, the communication-related stuff 
would be part of social and linguistic intelligence. If you want some labels 
that convey the "inter-agent" aspect without confusing everyone, I think those 
would do.

The thing you call "occupying representation" ... a  conscious agent can do it, 
but an unconscious agent can too.  The ability to decompress information and 
construct models from symbolic communication does not require or imply that the 
agent has its own qualia or first-person experiences.

And I do agree that, for the practical/utilitarian purpose of Getting Things 
Done, this is useful and is all you need for cooperative agents. Like I said 
when I first posted on this thread, phenomenal consciousness is neither 
necessary nor sufficient for an intelligent system.

I think your comment about "Gloobledeglock" actually illustrates my point. 
Communication breaks down here because you haven't tied Gloobledeglock to a 
causative external event. If you said something like, "I feel Gloobledeglock 
whenever I get rained on," then I could surmise (with no guarantee of 
correctness) that you feel whatever it is I feel when I get rained on. 
Observable events, in the world external to both our minds, are things we can 
hold in common and use as the basis of communication protocols. We can't hold 
qualia in common, or transfer them (even partially).

> AGI researchers are so occluded by first-person.

Umm since when? I certainly don't think an AGI system has to be an isolated 
singleton that only deals with first-person information. I think the kerfuffle 
in this thread is about you appearing to claim that Universal Communication 
Protocols and the ability to "occupy representation" are something they are 
not. We're not trying to give an exaggerated importance to phenomenal 
consciousness ... quite the opposite, in fact. We're just saying that the 
systems you describe don't have it.

Signing off now. Good luck with your work.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Nanograte Knowledge Technologies
John

I know you know this, so perhaps you're clickbaiting me. :-)

Digital devices do not pass the qualia test. Therefore, the example is invalid. 
However, it has relevance for this debate.

As a thought experiment, perhaps try an example of the interaction between 
yourself, your PC's processor, the resident operating system, and a peripheral 
device. It should be interesting.




From: johnr...@polyplexic.com 
Sent: Wednesday, 28 August 2019 16:06
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

On Wednesday, August 28, 2019, at 9:30 AM, Nanograte Knowledge Technologies 
wrote:
Any generalized system relying on the random, subjective input value of qualia 
would give rise to the systems constraint of ambiguity. Therefore, as a policy, 
all subjectively-derived data would introduce a semantic anomaly into any 
system - by design. This has significance for the assertion that qualia input 
would be useful for symbolic systems.


Simple example, my PC and mouse communicate. They're separated. Assume they 
have simple digital qualia. Is the mouse able to compute the k-complexity of 
the PC? No. It's estimated. Does the PC use the qualia of the mouse that are 
communicated? Click click. The mouse is compressing my finger action into 
simple  digital symbols for communication. Can I compute the exact electron 
flow and mechanical action, IOW feel the mouse’s qualia? No, it's estimated but 
estimated very reliably with almost zero errors.
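The lossy symbolization in this example can be sketched in a few lines. This is a toy illustration only; the function names and threshold are invented, not from any real mouse driver:

```python
# Hypothetical sketch of the mouse example: a continuous finger action is
# lossily compressed into a one-bit symbol, and the PC can only reconstruct
# a canonical estimate, never the original waveform.

def mouse_encode(pressure_trace):
    # Lossy compression: the whole pressure waveform collapses to one symbol.
    return "click" if max(pressure_trace) > 0.5 else "no-click"

def pc_decode(symbol):
    # The PC cannot recover the original trace, only a reliable estimate.
    return [1.0] if symbol == "click" else [0.0]

trace = [0.1, 0.4, 0.9, 0.3]   # the fine-grained "qualia"-level detail
symbol = mouse_encode(trace)   # the only thing that crosses the wire
estimate = pc_decode(symbol)   # almost zero errors, but not the original
```

Many distinct traces map to the same symbol, which is the sense in which the PC "estimates" rather than feels the mouse's state.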

Take more complicated distributed systems with more symbol complexity and it's 
still the same principle except that more consciousness is generally required 
among the communicating agents.

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> Great, seems like we've reached agreement on something.
> When we communicate with words like "red," we're really communicating about 
> the frequency of light. I would argue that we are not communicating our 
> qualia to each other. If we could communicate qualia, we would not have this 
> issue of being unable to know whether your green is my red. Qualia are 
> personal and incommunicable *by definition,* and it's good to have that 
> specific word and not pollute it with broader meanings.

We can't fully communicate our qualia, only a representation whose exact reconstruction we ourselves lose. That's the inter-agent part of it. How do you know any qualia ever existed? They are communicated. They are fitted into words/symbols. IMO like a pointer in the programming sense. This is all utilitarian, not philosophical.

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> In the mouse example, I was assuming that I had fully modeled the 
> electro-mechanical phenomena in *this specific* mouse. I still don't think 
> that would give me its qualia.

There is only a best guess within the context of the observer...

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> I would be happy to refer to a machine with an incommunicable first-person 
> subjective experience stream as "conscious." But you've admitted that you're 
> not trying to talk about incommunicable first-person subjective experiences, 
> you're trying to talk about communication. I'm not concerned with whether the 
> "consciousness" is mechanical or biological, natural or artificial; I'm 
> concerned with whether it's actually "consciousness."

A sample, lossily compressed internally, symbolized. We basically lose the original. You can't transmit the whole quale; it's gone. Yes, the utilitarian aspect of it is that it is all about communication in a system of agents. Everything is not first-person. AGI researchers are so occluded by first-person. Human general intelligence is not one person but a system of people... a baby dies in isolation.

Another piece of this is occupying representation. A phenomenal conscious observer may assume the structure that is transmitted in its symbolic form and attempt to reconstruct the original lossy representation based on its own experience.

Not really aiming for human phenomenal consciousness now but more panpsychist. 
Objects inherently contain structure that can be extracted into discrete 
representation that can be fitted systematically with similar structure of 
other objects.

...

I want to tell you a secret but it's incommunicable. Guess what. It's already 
been communicated.

Can I ask you a question? Thanks, no need to answer.

I felt a unique incommunicable sensation. I call it Gloobledeglock.  Have you 
ever felt Gloobledeglocked?

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> People can only communicate their conscious experiences by analogy. When you 
> say "I'm in pain," you're not actually describing your experience; you're 
> encouraging me to remember how I felt the last time *I* was in pain, and to 
> assume you feel the same way. We have no way of really knowing whether the 
> assumption is correct.
> 

That’s protocol. They sync up. We are using an established language, but it changes over time. The word "pain" is a transmitted compression symbol, already understood not to mean exactly the same thing for everyone, but the majority of others besides oneself have a similar experience. Some people get pleasure from pain due to different wiring or neurochemicals or whatever. There might be a societal tendency for them not to breed.
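One way to picture this protocol view in code (a toy sketch; all names and state values here are invented for illustration): two agents share the symbol "pain", but each binds it to a private internal state that never crosses the channel.

```python
# Each agent maps the shared protocol symbol to a private state; the state
# strings stand in for incommunicable experience and are purely illustrative.
alice_private = {"pain": "alice-internal-state-7781"}
bob_private = {"pain": "bob-internal-state-0042"}

def transmit(sender_vocab, word):
    # Only protocol symbols can be sent; the private referent stays home.
    if word not in sender_vocab:
        raise ValueError("not a shared protocol symbol")
    return word

received = transmit(alice_private, "pain")
bobs_reading = bob_private[received]   # Bob resolves it to *his* referent
```

The protocol syncs on the symbol; nothing ever checks that the two referents match, which is exactly the "your green may be my red" situation.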


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> We can both name a certain frequency of light "red" and agree on which 
> objects are "red." But I can't tell you what my visual experience of red is 
> like, and you can't tell me what yours is like. Maybe my red looks like your 
> green -- the visual experience of red doesn't seem to inhere in the 
> frequency's numerical value, in fact color is nothing like number at all, so 
> nothing says my red isn't your green. "Qualia" refers to that indescribable 
> aspect of the experience. If your "qualia" can be communicated with symbols, 
> or described in terms of other things, then we're not talking about the same 
> concept -- and using the same word for it is just confusing.

Think multi-agent. Say my red is your green and your green is my red. We are members of a species sampling the environment. If we all saw it the same way, would it impact evolution? You don’t know my qualia on red. But you do understand me communicating the experience using words and symbols generally understood, and that is what matters from the multi-agent computational standpoint. We are multi-sensors emitting compressed samples via symbol transmission, hoping the external world understands, but the initial sample is lossily compressed and fitted into a symbol to traverse a distance. We may never know that your green is my red.


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> Going back to your computer-and-mouse example: if I admit your panpsychist 
> perspective and assume that a computer mouse has qualia, those qualia are not 
> identified with the electro-mechanical events inside the mouse.  I could have 
> full knowledge of those (fully compute or model them) without sharing the 
> mouse's experience.

You can compute the mouse's electro-mechanics at a functional level, but between two mice there are vast differences in actual electron flow and microscopic mechanical differences. You are still only estimating what is actually going on, or the K-complexity, or the qualia. There could be self-correcting errors in one, but the signal of clicks to external entities is the same...

Please note that terminology gets usurped by technology when implemented. Should we not call intelligence intelligence? Usually it is prepended with "artificial," but IMO that's the wrong move; it is intelligence, or better, machine intelligence. Should we not call an artificial eye an eye? What's so special about the word consciousness that everyone gets all squirmy about it?

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread WriterOfMinds
People can only communicate their conscious experiences by analogy. When you 
say "I'm in pain," you're not actually describing your experience; you're 
encouraging me to remember how I felt the last time *I* was in pain, and to 
assume you feel the same way. We have no way of really knowing whether the 
assumption is correct.

We can both name a certain frequency of light "red" and agree on which objects 
are "red." But I can't tell you what my visual experience of red is like, and 
you can't tell me what yours is like. Maybe my red looks like your green -- the 
visual experience of red doesn't seem to inhere in the frequency's numerical 
value, in fact color is nothing like number at all, so nothing says my red 
isn't your green. "Qualia" refers to that indescribable aspect of the 
experience. If your "qualia" can be communicated with symbols, or described in 
terms of other things, then we're not talking about the same concept -- and 
using the same word for it is just confusing.

Going back to your computer-and-mouse example: if I admit your panpsychist 
perspective and assume that a computer mouse has qualia, those qualia are not 
identified with the electro-mechanical events inside the mouse.  I could have 
full knowledge of those (fully compute or model them) without sharing the 
mouse's experience.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> Are you sure you wouldn't be better served by calling your ideas some other 
> names than "consciousness" and "qualia," then?  We're all getting "hung-up 
> on" the concepts that those terms actually refer to. 

Good question.

That's what's been going on already. But in this age of intelligence, it's time to take back what is ours and also preserve human consciousness. Also, human-machine communications are better served by calling it thus, IMO. And why let narrow-minded visionaries control the labeling? That's a control strategy. Shoot for the stars. Consciousness is the full package, not little bits and pieces to tiptoe around.

This might be premature but at some point it'll be trendy to call it as it is 
IMO.

On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> I do not see how communication protocols have anything to do with 
> consciousness as it is usually understood.

People communicate their conscious experiences no? Machines do that too :) 
Machines use communication protocols.

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 3:35 PM, Secretary of Trades wrote:
> https://philpapers.org/archive/CHATMO-32.pdf#page=50

Blah blah blah.

From the AGI perspective we are interested in the multi-agent computational advantages in distributed systems that consciousness (or by other names) facilitates. Thus I look at the communication aspects like communication complexity, protocol, structure, etc., which are an external view, not a first-person narrative of phenomenal consciousness that many people are so obstinately hung up on. Thus the utilitarian Qualia = Compressed impressed samples symbolized for communication. Though I think the first-person narrative is addressed by this also, it's not my goal.

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Secretary of Trades



https://philpapers.org/archive/CHATMO-32.pdf#page=50


On 28.08.2019 22:19, Secretary of Trades wrote:

clrscr();


On 28.08.2019 16:03, johnr...@polyplexic.com wrote:

On Wednesday, August 28, 2019, at 8:44 AM, WriterOfMinds wrote:

That is not what qualia are.  Qualia are incommunicable and private.


As Matt would say:

printf("Ouch!\n");

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Secretary of Trades

clrscr();


On 28.08.2019 16:03, johnr...@polyplexic.com wrote:

On Wednesday, August 28, 2019, at 8:44 AM, WriterOfMinds wrote:

That is not what qualia are.  Qualia are incommunicable and private.


As Matt would say:

printf("Ouch!\n");

John



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread Nanograte Knowledge Technologies
Does a quale have to always pass the "qualitative character of sensation" test?

 Any generalized system relying on the random, subjective input value of qualia 
would give rise to the systems constraint of ambiguity. Therefore, as a policy, 
all subjectively-derived data would introduce a semantic anomaly into any 
system - by design. This has significance for the assertion that qualia input 
would be useful for symbolic systems.

With regard to effective complexity, in the sense of generalized correctness as it pertains to generalized intelligence, an AGI design would have to empirically resolve the 'ambiguity' problem first. Else, it would result in (or take form as) a consciousness-challenged dumb device, like most computers still are today.


From: WriterOfMinds 
Sent: Wednesday, 28 August 2019 14:44
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

That is not what qualia are.  Qualia are incommunicable and private.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Monday, August 26, 2019, at 5:25 PM, WriterOfMinds wrote:
> "What it feels like to think" or "the sum of all a being's qualia" can be 
> called phenomenal consciousness. I don't think this type of consciousness is 
> either necessary or sufficient for AGI. If you have an explicit goal of 
> creating an Artificial Phenomenal Consciousness ... well, good luck. 
> Phenomenal consciousness is inherently first-person, and measuring or 
> detecting it in anyone but yourself is seemingly impossible. Nothing about an 
> AGI's structure or behavior will tell you what its first-person experiences 
> *feel* like, or if it feels anything at all.


Qualia = Compressed impressed samples symbolized for communication. From the perspective of other agents, attempting to Occupy Representation of another agent's phenomenal consciousness would be akin to computing its K-complexity. Some being computable, some being estimable.
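Since true Kolmogorov complexity is uncomputable, "estimable" in practice means something like a compressor-based upper bound. A minimal sketch, using zlib as a stand-in compressor (the function name and example data are illustrative):

```python
import random
import zlib

def k_estimate(data: bytes) -> int:
    # Compressed length is an upper-bound estimate of K-complexity;
    # the true value is uncomputable in general.
    return len(zlib.compress(data, 9))

structured = b"ab" * 500                 # a highly regular "state"
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # near-incompressible

# The regular state gets a far smaller estimate than the noisy one.
print(k_estimate(structured), k_estimate(noisy))
```

Any real compressor only ever bounds the complexity from above, which matches the point that one agent can estimate, but never exactly compute, the complexity of another's state.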

Why does this help AGI? This universe has inherent 
separateness/distributedness. It's the same reason why there is no single 
general compression algorithm.

John





Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread immortal . discoveries
This attention system is also present in cells, in DNA translation encoding/decoding, in the repair of errors, and in signaling the growth specialization of new types of cell differentiation. It works on all levels, cities to cells, even wound healing, because it is information repair and generative emergence. It self-organizes and comes to equilibrium. It emerges and repairs, just like cities and missing knowledge information do. Same for Glove relations, not just entailments. This is regenerative technology; even dead machines can eventually be restored, as if the said worker machine were back, as if nothing happened and no one noticed the difference. I know a lot more but will share it later when proper.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread immortal . discoveries
If you shake a bucket of rocks, they will fall and take up less volume; you let the matter do the calculations on its own. This is indeed similar to the Self-Attention system in the Transformer architecture. Let me get a picture:


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
When I say "synchronization makes you propagate signals farther, faster", take for example having learnt the piano or the English alphabet: phrases are made from word parts, made from letter parts, and Glove relations are formed using past Glove relations. These bridges are built on each other, and satisfying new knowledge will require replacing them in, for example, a Christian's brain. Anyhow, the signals can propagate faster. And you can build the next layer of connections and go deeper into mental or friend or city relationships/entailment.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
Dreaming is simulating, like GPT-2 does. And it can learn while doing that, in Attention. But in long-term memory, in the midnight subconscious time if not all the time, learning also happens without 'you' thinking, seeing the discovery occur, and laughing at the rewarding relation. The difference between these two learnings is that the Attention one is guided; the other, global one is not local, but it is a big helper when stumped, though not precision-guided per se.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
(The short term Attention comes to equilibrium with your knowledgebase.)


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
By "short-term memory" I mean the working train of thoughts, looping, like GPT-2's Attention system in the Transformer architecture. Paying attention to context updates the attention weights; then it iterates, having a new context. So while Glove equalizes, Attention equalizes too. Your thought-train comes to a decision that is satisfactory in standing with your knowledgebase, which governs your Attention.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
Dropping the 'consciousness' word, that video I linked above is actually onto something. Let me explain. In the middle of the video, he mentioned wave synchronization: the brain has signals propagating around. Please see the video below to see what the man meant. In the brain, the connections strengthen/weaken and learn the model of the world, like Glove/W2V does (relationships, aka entailments), and build bridges so you know quicker next time exactly what something is and what the answer is. The flow can happen faster, to think faster and farther, and the 'clustering' has learnt (like Glove) the synchronized paths, trading memory for speed. And if you think for a long enough period, like how the metronomes below needed time to become synchronized, you can synchronize more; but if you give your work only little bursts of time, you'll never invent AGI. You need constant thoughts in 'short-term memory' and must cycle many times.

https://www.youtube.com/watch?v=Ov3aeqjeih0


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
:-)
"Is Your Consciousness Just A Bunch Of Vibrations? | Answers With Joe"
https://www.youtube.com/watch?v=0GE5M6F8I18

For us caveman laymen, that answer is quite accurate.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread Matt Mahoney
On Tue, Aug 27, 2019, 9:30 AM Stefan Reich via AGI 
wrote:

Please point me to the code being written as a result of this talk then :-)

http://mattmahoney.net/autobliss.txt

Actually I wrote it in response to this conversation in 2007 because
consciousness is a topic that just won't die. The program is the opposite
of AGI.  It is a simple reinforcement learner that passes all the tests we
use to determine that animals feel pain. It says "ouch" and it modifies its
behavior to avoid negative stimuli. Too much torture will kill it.
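The actual program is at the URL above; as a sketch of the same idea (not Matt's code, and the class and action names are invented), a tabular reinforcement learner that says "ouch" and shifts away from punished actions can be this small:

```python
import random

class TinyLearner:
    """Toy reinforcement learner in the spirit of autobliss (illustrative only)."""

    def __init__(self):
        self.weights = {"press": 0.0, "avoid": 0.0}
        self.health = 10  # too much torture will kill it

    def act(self):
        # Greedy choice of the higher-weighted action, random tie-break.
        return max(self.weights, key=lambda a: (self.weights[a], random.random()))

    def reward(self, action, value):
        if value < 0:
            print("Ouch!")       # the behavioral pain response
            self.health -= 1
        self.weights[action] += value  # simple tabular update

learner = TinyLearner()
for _ in range(10):
    a = learner.act()
    learner.reward(a, -1.0 if a == "press" else 0.1)  # "press" is punished
```

After at most one punished trial it reliably chooses "avoid": it passes the behavioral test for feeling pain while plainly having nothing anyone would call phenomenal consciousness.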

For any simple behavioral test or criterion for consciousness that you can come up with, I can write a simple program that passes it. What do you expect to happen if you think that intelligence depends on a property that by definition has no behavioral effect? That AGI must be magic?

Oh yeah, it's vibrations. That's it.

Meanwhile Google, Siri, and Alexa fail the Turing test mostly by being too
smart and too helpful.





Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread keghnfeem


Network of Networks — A Neural-Symbolic Approach to Inverse-Graphics:

https://towardsdatascience.com/network-of-networks-a-neural-symbolic-approach-to-inverse-graphics-acf3998ab3d

 Here is some.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread keghnfeem
Thanks John.

Could consciousness all come down to the way things vibrate?:
https://theconversation.com/could-consciousness-all-come-down-to-the-way-things-vibrate-103070




The visual alphabet 2.0:

https://www.youtube.com/watch?v=Z6MB-ZgPcNg

Is Your Consciousness Just A Bunch Of Vibrations? | Answers With Joe:

https://www.youtube.com/watch?time_continue=212&v=0GE5M6F8I18

 The code will not be ready for around six years and is in the early stages.
 Yes, the AGI develops its own unique internal language.

  And the code will only be for the believers in the cause of machine 
consciousness.







Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread Stefan Reich via AGI
Please point me to the code being written as a result of this talk then :-)

On Mon, 26 Aug 2019 at 22:21, Matt Mahoney  wrote:

>
>
> On Mon, Aug 26, 2019, 8:05 AM Stefan Reich via AGI 
> wrote:
>
> Is all this discussion leading anywhere?
>
> No. Consciousness is an irrelevant distraction to anyone doing serious
> work in AGI.
>


-- 
Stefan Reich
BotCompany.de // Java-based operating systems



Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
Yes, as I said, consciousness does mean other real things; the uses are there, just not any magic, only misunderstandings.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread johnrose
On Tuesday, August 27, 2019, at 7:51 AM, immortal.discoveries wrote:
> I believe consciousness doesn't exist for many, many, reasons, ex. physics, 
> our brain being a meta ball from the womb, learned cues, etc. I am purely a 
> robot from evolution, with no life. The moment you hear that you fear that 
> and want to feel more special, it's like buying bible movies as a food sold 
> only to fill the spirit, a hobby, marketed. Thinking god or free will exists 
> gives you fake highs, and people sell it to you, it makes cash. There is only 
> things that we do that we are really talking about, like having knowledge 
> about objects, paying attention to the correct context or stimuli, and 
> entailment learning and generation.

Keep in mind that before the electronic communications era, people sought forms of communication and forms of super/omni-intelligence, and developed these concepts for many reasons. These were/are far from perfect, since transmissions lacked sufficient lossless mechanisms.

I could argue that electronic communications are making individuals less intelligent in some ways, short-circuiting many high-level processes, but I won't bother. Over-classification is another issue: too much labeling/symbolizing can paralyze efficient cognitive thought, where one must obey what one was taught to regurgitate, fearing to misapply a label... like a Skitt's Law of scientific terminology.

John


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread immortal . discoveries
I believe consciousness doesn't exist, for many, many reasons: physics, our brain being a meta ball from the womb, learned cues, etc. I am purely a robot from evolution, with no life. The moment you hear that, you fear it and want to feel more special; it's like buying bible movies, a food sold only to fill the spirit, a hobby, marketed. Thinking god or free will exists gives you fake highs, and people sell it to you; it makes cash. There are only the things that we do that we are really talking about, like having knowledge about objects, paying attention to the correct context or stimuli, and entailment learning and generation.


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread johnrose
I was expressing panpsychist mathematical modeling with consciousness as 
Universal Communication Protocol and Occupying Representation in case you 
didn't notice. This has much overlap on other AI fields...

keghnfeem, we may have some similar ideas. I see you have something called a 
Visual Alphabet; this may be related to my thinking on a universal panpsychist 
language of everything, where everything "speaks" based on the structure 
observed/occupied. So I will look at what you have there.

But... I don't know, maybe Matt is right and consciousness has absolutely 
nothing to do with AGI. Then he falls into the C = null camp in AGI = {I,C,M}, 
so that statement is still true.

BTW I was thinking AGI = {I,C,M,PSI}, but I'm not sure. Matt, what do you think 
about that? 0 or null? LOL

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4900ff559dbd716ed4713625
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread keghnfeem
 There are many types of consciousness: insect, lizard, mouse, primate, and 
human.
  And there are different forms of consciousness for every species of lizard, 
and so on.

 Within machine consciousness, for my AGI, there will be different types of 
machine consciousness
 for each of the other types and models. I call it the Keghn consciousness 
system. If any other AGI scientist
 makes a machine consciousness, then it is their consciousness system.

 My AGI will have emotion and will feel the wind on its skin. And remember that 
first feeling of joy.
 My way will work just as well as nature's. It can scale from lizard level all 
the way up to ASI, when hardware
improves.

 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Md3185282e17394e505c8bf6a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread WriterOfMinds
Attention/focus is not the type of consciousness that Matt is talking about. 
Maybe some more specific terms will help clear things up.

"What it feels like to think" or "the sum of all a being's qualia" can be 
called phenomenal consciousness. I don't think this type of consciousness is 
either necessary or sufficient for AGI. If you have an explicit goal of 
creating an Artificial Phenomenal Consciousness ... well, good luck. Phenomenal 
consciousness is inherently first-person, and measuring or detecting it in 
anyone but yourself is seemingly impossible. Nothing about an AGI's structure 
or behavior will tell you what its first-person experiences *feel* like, or if 
it feels anything at all.

The availability of information to your high-level thought processes can be 
called access consciousness. As in, "I became conscious of a rooster crowing, 
and then realized I'd been hearing him for the past five minutes." This is 
potentially relevant to AGI if we interpret it as a form of 
information-filtering or focus.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M5ced584771c1d0281ce71985
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread keghnfeem
AGI will not work unless there is some form of consciousness system in place. 
In deep learning it is 
called an attention network, and they are coming on strong in the latest 
supervised NNs.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4f9db979da33f2f55a4b8b1b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread Matt Mahoney
On Mon, Aug 26, 2019, 8:05 AM Stefan Reich via AGI 
wrote:

Is all this discussion leading anywhere?

No. Consciousness is an irrelevant distraction to anyone doing serious work
in AGI.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M486dbc92ae9a992b6cf624de
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread Stefan Reich via AGI
Is all this discussion leading anywhere?

On Mon, 26 Aug 2019 at 14:00,  wrote:

> On Monday, August 26, 2019, at 7:44 AM, immortal.discoveries wrote:
>
> Encoding information, remembering information, decoding information,
> paying attention to context, prediction forecast, loop back to step 1, is
> the main gist of it. This has generation, feedback, and adapting temporal
> patterns.
>
>
> These would all fit into {I,C,M}.
>
> Some researchers say C=0 or null, but C is very convenient for throwing
> extra stuff into :)
>
> I'd say as C increases I goes to zero. What if M increases?
>
> But they all borrow from each other.
>
> John
>


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Md28910d9e17dc43e080284b5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread johnrose
On Monday, August 26, 2019, at 7:44 AM, immortal.discoveries wrote:
> Encoding information, remembering information, decoding information, paying 
> attention to context, prediction forecast, loop back to step 1, is the main 
> gist of it. This has generation, feedback, and adapting temporal patterns.

These would all fit into {I,C,M}.

Some researchers say C=0 or null, but C is very convenient for throwing extra 
stuff into :)

I'd say as C increases I goes to zero. What if M increases? 

But they all borrow from each other.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M46093dea6896f817cfc22060
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread immortal . discoveries
Encoding information, remembering information, decoding information, paying 
attention to context, prediction forecast, loop back to step 1, is the main 
gist of it. This has generation, feedback, and adapting temporal patterns.
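
That gist can be made concrete as a toy loop. Everything below is a hypothetical 
stand-in (the byte-sized "codes" and the prediction rule are invented for 
illustration, not anything from the post), but it shows the shape: encode, 
remember, attend to recent context, predict, and feed the prediction back in as 
step 1.

```python
def encode(s):
    """Stand-in encoder: crush a string down to one byte (hypothetical)."""
    return sum(map(ord, s)) % 256

def cognitive_loop(stimulus, memory, steps=3):
    """Toy version of the gist: encode -> remember -> attend to recent
    context -> predict -> decode the prediction back into new input."""
    for _ in range(steps):
        code = encode(stimulus)          # encoding information
        memory.append(code)              # remembering information
        context = memory[-4:]            # paying attention to context
        prediction = sum(context) % 256  # prediction forecast (stand-in rule)
        stimulus = str(prediction)       # decode / loop back to step 1
    return memory

mem = cognitive_loop("rooster", [])
print(mem)  # [14, 101, 151]
```

The feedback edge (the prediction becoming the next stimulus) is what gives the 
loop its generative, temporally adaptive character.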
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M40acf873c74a45d022e607a4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread johnrose
Intelligence, Consciousness, and Memory make a very nice 3-tuple for AGI:

AGI = {I,C,M}

Any missing elements?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4a0bc51d34f8bb88282cda4c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread Brett N Martensen
*Intelligence* has been defined in many ways, including: the capacity for 
logic, understanding, self-awareness, learning, emotional knowledge, reasoning, 
planning, creativity, critical thinking, and problem solving. More generally, 
it can be described as the ability to perceive or infer information, and to 
retain it as knowledge to be applied towards adaptive behaviors within an 
environment or context.

*Memory* is the faculty of the brain by which data or information is encoded, 
stored, and retrieved when needed. It is the retention of information over time 
for the purpose of influencing future action. If past events could not be 
remembered, it would be impossible for language, relationships, or personal 
identity to develop. Memory loss is usually described as forgetfulness or 
amnesia.

*Consciousness* is the state or quality of sentience or awareness of internal 
or external existence. It has been defined variously in terms of qualia, 
subjectivity, the ability to experience or to feel, wakefulness, having a sense 
of selfhood or soul, the fact that there is something 'that it is like' to 
'have' or 'be' it, and the executive control system of the mind. Despite the 
difficulty in definition, many philosophers believe that there is a broadly 
shared underlying intuition about what consciousness is. According to Max 
Velmans and Susan Schneider, "Anything that we are aware of at a given moment 
forms part of our consciousness, making conscious experience at once the most 
familiar and most mysterious aspect of our lives."

Thank you Wikipedia.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mea304060b23258b2c986a78e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread johnrose
On Friday, August 23, 2019, at 9:57 PM, keghnfeem wrote:
> Consciousness is Memory:
> https://vimeo.com/98785998

Uhm, I was thinking that intelligence is memory. Consciousness is now.  
Intelligence is what comes before and after now.

Could be wrong though I guess... life is a recording that can be replayed.

Consciousness is the act of occupying representation. Intelligence is a memory 
of and a synthesis of that occupation.  Then general intelligence is applying 
new occupation on representations that have morphisms back to previous 
occupations guided by the intelligent synthesis of memory.

Or something like that...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me8e73779b8c4ba73bdf070b0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread keghnfeem
I have a complete AGI model that is a conscious machine and can generate 
symbols using neural 
networks. Here are two high-end scientists using my work: 
https://www.youtube.com/watch?v=Z6MB-ZgPcNg

https://www.youtube.com/watch?v=kBKEaJtc8dU

  And the greatest paper of our time:

https://www.academia.edu/37275998/A_Nice_Artificial_General_Intelligence_How_To_Make_A_Nice_Artificial_General_Intelligence

 A low-detail view of the overall model:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/0rHVcqNoFG8

 


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M58ba45863a3727c6925a9df8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread immortal . discoveries
@NT

Attention Is All You Need
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4cc29af1f6399bd26e529172/consciointelligent-thinkings
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4a48a690513d0cec1d2a0548
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread johnrose
On Saturday, August 24, 2019, at 11:15 AM, keghnfeem wrote:
> The human mind builds many temporal patterns and picks the best one, since
> wet neurons are so slow. The human brain also builds many temporal patterns
> that will occur, or could occur if predicted patterns fail. Also, the brain
> records everything, so when we sleep complex algorithms are brought into
> being to make sense of the more complex temporal patterns.
> A lot of these altered realities, possible temporal patterns, are garbage,
> and an algorithm deletes them or retrains them. BUT some are on the
> borderline and are passed to the conscious mind as we sleep, and it is up to
> the dreaming person to salvage any that are worth saving before they are
> deleted.


Interesting. Sleeping pattern machines sifting with altered realities.

Going further, sleepers, synchronized to sinusoidal day night cycle, go back a 
few million cycles, in the jungle, each sleeper retaining slightly different 
realities then sharing on the day half, then sleeping, mixing, reprocessing, 
eating some seasonal herbs, sharing some multi-agent consciousness... the sound 
of daily cycles zung zung zung, speed it up to like 100 Hz, buz, agents 
only last a couple minutes, faster hum, meta-patterns emerge, are hosted 
across agent lifetimes in a society shared with other societies, faster, high 
pitched whine, societies fail meta-patterns collapse, shatter, vibrated into 
the cycles reconstituted wheee industrial revolution, internet, STOP. Into 
the future, start zung zung zung buz whee high pitched whine 
dissipates, we left the planet... on other planets now multi-cycles 
zwerherringzwerherringzwerherring... heheh

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M8942072bd1fffa68070fef6f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread keghnfeem
 The human mind builds many temporal patterns and picks the best one, since wet 
neurons are so 
slow. The human brain also builds many temporal patterns that will occur, or 
could occur if predicted 
patterns fail. Also, the brain records everything, so when we sleep complex 
algorithms are brought 
into being to make sense of the more complex temporal patterns.
 A lot of these altered realities, possible temporal patterns, are garbage, and 
an algorithm 
deletes them or retrains them. BUT some are on the borderline and are passed to 
the conscious 
mind as we sleep, and it is up to the dreaming person to salvage any that are 
worth saving before they
are deleted.



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M7cecb93ec05c26511e7e8521
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> Matt,
> 
> Not sure about the hard problem here but a rat would have far less
> consciousness when sleeping that is for sure 
> 
> Why? Think about the communication model with other objects/agents.
> 
> John

Although... I have to say that sometimes when I'm sleeping, lucid dreaming or 
whatever, somehow wandering the world in astounding mental clarity (probably 
due to drinking too much coffee), I could argue that there is more consciousness 
there. Occupying more representation but less communication, so... that darn 
sleeping rat could be doing the same.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M449578cc2d85d2931b155c62
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread Nanograte Knowledge Technologies
I don't mean to be rude, but this is just nonsense. Your system uses a 
textual-input sensor to activate a response from the reasoning engine. In this 
case, the sensor is a data field, which requires a human, or bot to populate 
it. It's seemingly not even aware of that sensor, nor of the constraints of 
that sensor.

What you're describing is not neurophysiologically accepted as consciousness at 
all. For consciousness to begin, your machine needs to at least recognize that 
it has a sensor and try to show an "understanding" of how it relates to the 
rest of its field of reality. It needs orientation.


From: immortal.discover...@gmail.com 
Sent: Saturday, 24 August 2019 09:39
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

Sentiment detection, human-written detection, danger detection; there are 
infinite detections. It must recognize the input and say what entails, using its 
knowledge. The "concept" of who it speaks to is based on feeding input in and 
entailment out. That is "awareness" and being "conscious".

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4cc29af1f6399bd26e529172
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> -Original Message-
> From: Matt Mahoney 
> 
> So the hard problem of consciousness is solved. Rats have a thalamus which
> controls whether they are in a conscious state or asleep.
> 
> John, is that what you meant by consciousness?

Matt,

Not sure about the hard problem here but a rat would have far less 
consciousness when sleeping that is for sure 

Why? Think about the communication model with other objects/agents.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M359fb419a5de8fa101e264b0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread johnrose
A possible correction here: this is modeling consciousness assuming everything 
is conscious; "panpsychism", is it?

I mentioned pondering pure randomness. That might not be right; it might be 
pondering pure nothingness. Would pure nothingness have a consciousness of 
everything, or of pure randomness? Maximal K-complexity versus zero. Structural 
distance from the pondering agent via comm. protocol.

BTW, we know from our Venn diagrams there is overlap with ML. As with 
everything there is overlap. I'm not trying to draw a strict borderline, but 
consciousness could actually be described as encompassing ML.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M48ec2367d9035c15155ebe07
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread immortal . discoveries
Sentiment detection, human-written detection, danger detection; there are 
infinite detections. It must recognize the input and say what entails, using its 
knowledge. The "concept" of who it speaks to is based on feeding input in and 
entailment out. That is "awareness" and being "conscious".
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M5d2f8713a586364483f62770
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread Basile Starynkevitch


On 8/24/19 8:44 AM, Nanograte Knowledge Technologies wrote:
This machine is not even aware that it is a human being typing. It has 
zero concept of being communicated with via the Web. It reasons it 
could be a bear, or a girl that prompted it in the English language.


FAIL



But neither can we be sure that we are not part of some simulated 
universe running on some planet-sized computer operated by aliens in 
some other multiverse.




--
Basile STARYNKEVITCH   == http://starynkevitch.net/Basile
opinions are mine only - les opinions sont seulement miennes
Bourg La Reine, France; 
(mobile phone: cf my web page / voir ma page web...)


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M85bba82c98a69df2b2722bfb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread Nanograte Knowledge Technologies
This machine is not even aware that it is a human being typing. It has zero 
concept of being communicated with via the Web. It reasons it could be a bear, 
or a girl that prompted it in the English language.

FAIL


From: immortal.discover...@gmail.com 
Sent: Saturday, 24 August 2019 06:18
To: AGI 
Subject: Re: [agi] Re: ConscioIntelligent Thinkings

To clear up the understanding of these unsettling words like simulating/ 
dreaming, consciousness, etc., I have found related meanings for many of these 
words in AGI. GPT-2 uses a Transformer that uses Attention. This considers 
context, and the more context, the more consciousness. Besides that, knowing 
knowledge about humankind, etc., could make GPT-2 sound conscious, because it's 
got knowledge about itself, others, etc., everything. 
https://talktotransformer.com/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4f9baa85df7041282cce669b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread immortal . discoveries
To clear up the understanding of these unsettling words like simulating/ 
dreaming, consciousness, etc., I have found related meanings for many of these 
words in AGI. GPT-2 uses a Transformer that uses Attention. This considers 
context, and the more context, the more consciousness. Besides that, knowing 
knowledge about humankind, etc., could make GPT-2 sound conscious, because it's 
got knowledge about itself, others, etc., everything. 
https://talktotransformer.com/
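
For what it's worth, the "considers context" part of a Transformer is scaled 
dot-product attention. The following is a minimal NumPy sketch of that 
mechanism (a toy illustration with made-up random inputs, not GPT-2's actual 
code): each position weighs every other position in the context and mixes their 
values accordingly.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along an axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query position mixes the
    value vectors V according to how well its query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (seq_q, seq_k) similarity matrix
    weights = softmax(scores)         # each row sums to 1: attention over context
    return weights @ V                # context-weighted summary per position

# Toy example: 3 context tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The more tokens in the context, the larger the attention matrix, which is the 
literal sense in which more context means more to "attend" to.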
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mc4b7dbd53890bad45cef37fd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread Matt Mahoney
So the hard problem of consciousness is solved. Rats have a thalamus which
controls whether they are in a conscious state or asleep.

John, is that what you meant by consciousness?

On Fri, Aug 23, 2019, 9:13 PM  wrote:

>  Consciousness has to do with observing temporal patterns. Intelligence is
> about how well an AGI understands these temporal patterns, how well it can
> control these temporal patterns, and how well it can predict these temporal
> patterns.
>
>  Yes, a rat is conscious but not very intelligent.
>
> The Structure and Physiology of the Human Brain:
> https://www.youtube.com/watch?v=N6VanlUF6f4
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mec1008815d9958d7f3d35b60
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread keghnfeem
It is about a program that can learn what a clock is. This is what machine 
learning is all about.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M2139e482264f8202d4fa8eb7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread johnrose
How about:  Write an expression for or compute the consciousness of a clock.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M77f3de8ef0fd657b53de65f3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread keghnfeem
https://www.sciencedaily.com/releases/2015/12/151217151716.htm

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me0f711ecf4c52ac5f0e72703
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread Matt Mahoney
How do you test whether a rat is conscious or not?

On Fri, Aug 23, 2019, 9:24 PM  wrote:

> "Consciousness has to do with observing temporal patterns."
> The term pattern is ... obscure I'm afraid I try to avoid it but...
>
> It's more than observe, I would say occupy representation. A pattern is a
> representation. Only terminology?
>
> Two patterns from different domains - the key is how do they relate. A rat
> cannot relate them (well some yes) but an AGI can.
>
> John
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M1bba0d55c86fdb20ba882946
Delivery options: https://agi.topicbox.com/groups/agi/subscription