Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 8:59 AM, korrelan wrote:
> If the sensory streams from your sensory organs were disconnected, what
> would your experience of reality be? No sight, sound, tactile or sensory
> input of any description; how would you play a part/interact with this wider
> network you describe… you are a closed box/system and only experience 'your
> individual' reality through your own senses, you would be in a very quiet,
> dark place indeed.

Open system then closed?

Jump into the isolation tank for a few hours. What happens? You go from full 
duplex to no duplex transmission. Transceiver buffers fill. Symbol negentropy 
builds. Memetic pressure builds. Receptivity builds. Hallucinations occur.

I'm not trying to convince anyone, just to further my own thoughts. I guess the 
question is: to what extent are we individuals? Are we just parroting patterns 
and memes, changing them a little, and then acting as switches/routers/
reflectors? Some people are more reflectors, and some add more K-complexity to 
the distributed intelligence, I suppose. But I think that we have less 
individuality than most people assume.
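
To make the K-complexity bit concrete, a toy sketch (my own illustration, 
nothing from the thread: compressed length as the usual computable stand-in 
for Kolmogorov complexity, all names hypothetical):

import zlib

def c(s: bytes) -> int:
    # approximate K(s) by compressed length, the standard computable proxy
    return len(zlib.compress(s, 9))

def added_complexity(context: bytes, output: bytes) -> int:
    # approximate K(output | context): compressed bytes needed beyond context
    return c(context + output) - c(context)

meme = b"the quick brown fox jumps over the lazy dog " * 20
parrot = meme                            # a pure reflector repeats verbatim
mutant = meme.replace(b"fox", b"vixen")  # a router that mutates a little

print(added_complexity(meme, parrot))    # near zero: almost nothing added
print(added_complexity(meme, mutant))    # positive: some K-complexity added

The parrot adds almost nothing to the stream; even a small mutation registers.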

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M737a43e4e9b79a6e8d63fa7b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 10:57 AM, immortal.discoveries wrote:
> We could say our molecules make the decision korrelan :)

And the microbiome bacteria, etc., transmitting through the gut-brain axis 
could have massively more complexity than the brain.

"The gut-brain axis, a bidirectional neurohumoral communication system, is 
important for maintaining homeostasis and is regulated through the central and 
enteric nervous systems and the neural, endocrine, immune, and metabolic 
pathways, and especially including the hypothalamic-pituitary-adrenal axis (HPA 
axis)."

https://endpoints.elysiumhealth.com/microbiome-explainer-e345658db2c

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Me991df962c2c00a3725d92e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread immortal.discoveries
Well, we duplicate and build all our technology just to persist. So all you 
need is an intelligence that finds ways to persist efficiently.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M87b438bd17bfb0185f8a73b4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
Persist as what?

Unpersist the sun rising; break the 99.99…% probability that it rises 
tomorrow. What happens? We burn.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M1ebf99508486d21d6e0f55ae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 1:44 PM, immortal.discoveries wrote:
> Describing intelligence is easier when we ignore the low-level molecules.

What if it loops? 

I remember reading a book as a kid in which a scientist invented a powerful 
new microscope, looked into it, and saw himself looking into the microscope.

Our view of reality may be all out of whack with reality. 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M5a403b7f8ce8fcdeea94fbe0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread immortal.discoveries
Describing intelligence is easier when we ignore the low-level molecules. What 
if we ignore the brain and focus on the hive? It may lead to something like 
what OpenAI did with the Hide & Seek bots.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M9bd2526ee7acc2730d6e6554
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-27 Thread John Rose
On Wednesday, September 25, 2019, at 7:01 PM, James Bowery wrote:
> Yes, what is missing is the parsimony of your measure, since the Perplexity 
> and Logical Expectation measures have open parameters that if filled properly 
> reduce to K-Complexity.

James, interesting, thanks for making us aware of that... you know what you 
are talking about. You are making me think!
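
If I follow the reduction, here is a toy way I picture it (my own sketch, a 
Laplace-smoothed unigram model, everything illustrative): the per-symbol code 
length a model assigns to data is its cross-entropy, perplexity is 2 raised to 
that, and the total code length plus the bits needed to describe the model's 
open parameters upper-bounds K-complexity, so filling the parameters 
parsimoniously tightens the bound.

import math
from collections import Counter

def bits_per_symbol(train: str, test: str) -> float:
    # cross-entropy (bits/symbol) of a Laplace-smoothed unigram model
    counts = Counter(train)
    vocab = set(train) | set(test)
    def p(ch: str) -> float:
        return (counts.get(ch, 0) + 1) / (len(train) + len(vocab))
    return -sum(math.log2(p(ch)) for ch in test) / len(test)

text = "abababababababab"
h = bits_per_symbol(text, text)
print(f"cross-entropy: {h:.3f} bits/symbol")  # 1.000 for this toy text
print(f"perplexity:    {2 ** h:.3f}")         # perplexity = 2^H = 2.000
# H * len(text) bits, plus a description of the model itself, upper-bounds
# K(text); a model with fewer open parameters gives the tighter bound.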

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M51d22f0cfe1bf128fd093ff0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The world on the eve of the singularity.

2019-09-27 Thread John Rose
We must first accept and understand that there are intelligence structures 
bigger than ourselves, and that some of these structures cannot be fully 
modeled by one puny human brain.

And some structures are vastly inter-generational... and some may be designed, 
or may have emerged, that way across generations to positively affect future 
generations.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3085ad834d97ffb2-M972909f81e8a45b3099e5b68
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 2:05 PM, korrelan wrote:
> The realisation/understanding that the human brain is a closed system, to
> me… is a first order/obvious/primary concept when designing an AGI or, in my
> case, a neuromorphic brain simulation.

A human brain is merely an instance node on the graph of brains. All 
biological brains are connected, at the very least by DNA at a very base 
level. Not a closed system. It's easy to focus in and ignore the larger 
networks of information flow that the human brain, or an emulated brain, is 
part of.

For example, when modeling vehicular traffic, do you only study or emulate one 
car? Or, when studying the intelligence of elephants, do you only model one 
elephant? If that's all you do, you miss the larger complex systems and how 
they relate to the structure and behavior of the individual components. You 
also ignore the intelligence hosted by the differences between brains in a 
larger web of pattern networks.
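
To make the traffic example concrete, a toy sketch (my own, the textbook 
Nagel-Schreckenberg cellular automaton, parameters arbitrary): a single car 
just cruises at v_max, but thirty cars on a ring road produce phantom jams 
that no single-car model would ever show.

import random

ROAD, CARS, VMAX, P_SLOW, STEPS = 100, 30, 5, 0.3, 200

pos = sorted(random.sample(range(ROAD), CARS))  # distinct cells on a ring road
vel = [0] * CARS

for _ in range(STEPS):
    for i in range(CARS):
        gap = (pos[(i + 1) % CARS] - pos[i] - 1) % ROAD  # empty cells ahead
        vel[i] = min(vel[i] + 1, VMAX, gap)              # accelerate, never crash
        if vel[i] > 0 and random.random() < P_SLOW:      # random human dawdling
            vel[i] -= 1
    pos = [(pos[i] + vel[i]) % ROAD for i in range(CARS)]

print(f"mean speed: {sum(vel) / CARS:.2f} (a lone car would average near {VMAX})")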

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mac461188ab36fa130f653384
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 3:34 PM, korrelan wrote:
> Reading back up the thread I do seem rather stern or harsh in my opinions, if 
> I came across this way I apologise. 

I didn't think that of you. We shouldn't be overly sensitive and afraid to 
offend; there is no right to not be offended, at least in this country :)

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M73ea70ffba040f4d01d92ba0
Delivery options: https://agi.topicbox.com/groups/agi/subscription