Re: [agi] Simulation

2019-09-28 Thread John Rose
On Saturday, September 28, 2019, at 4:59 AM, immortal.discoveries wrote:
> Nodes have been dying ever since they were given life. But the mass is STILL 
> here. Persistence is futile. We will leave Earth and avoid the sun.

You're right. It's a sad state of affairs with the environment... the 
destruction has been going on for centuries actually. It's almost like man 
subsumed nature's complexity in his rise to intelligence. Or electronic 
intelligence did.

Oh well, can't cry over spilt milk :) Onward.


Re: [agi] Simulation

2019-09-28 Thread immortal . discoveries
You can hear the ratchet's hypertrophy screeching.


Re: [agi] Simulation

2019-09-28 Thread immortal . discoveries
Nodes have been dying ever since they were given life. But the mass is STILL 
here. Persistence is futile. We will leave Earth and avoid the sun.


Re: [agi] Simulation

2019-09-27 Thread John Rose
Persist as what?

Unpersist the sun rising, break the 99.99... % probability that it rises 
tomorrow. What happens? We burn.



Re: [agi] Simulation

2019-09-27 Thread immortal . discoveries
Well, we duplicate and build all our technology just to persist. So all you 
need is an intelligence that finds ways efficiently.


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 1:44 PM, immortal.discoveries wrote:
> Describing intelligence is easier when we ignore the low-level molecules.

What if it loops? 

I remember reading a book as a kid where a scientist invented a new powerful 
microscope, looked into it, and saw himself looking into the microscope.

Our view of reality may be all out of whack with reality.



Re: [agi] Simulation

2019-09-27 Thread immortal . discoveries
Describing intelligence is easier when we ignore the low-level molecules. What 
if we ignore the brain and focus on the hive? That may lead to what OpenAI did 
with the Hide & Seek bots.


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 10:57 AM, immortal.discoveries wrote:
> We could say our molecules make the decision korrelan :)

And the microbiome bacteria, etc., transmitting through the gut-brain axis 
could have massively more complexity than the brain.

"The gut-brain axis, a bidirectional neurohumoral communication system, is 
important for maintaining homeostasis and is regulated through the central and 
enteric nervous systems and the neural, endocrine, immune, and metabolic 
pathways, and especially including the hypothalamic-pituitary-adrenal axis (HPA 
axis)."

https://endpoints.elysiumhealth.com/microbiome-explainer-e345658db2c

John


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 8:59 AM, korrelan wrote:
> If the sensory streams from your sensory organs were disconnected what would
> your experience of reality be? No sight, sound, tactile or sensory input of
> any description, how would you play a part/ interact with this wider network
> you describe… you are a closed box/ system and only experience 'your
> individual' reality through your own senses, you would be in a very quiet,
> dark place indeed.



Open system then closed?

Jump into the isolation tank for a few hours. What happens? You go from full 
duplex to no duplex transmission. Transceiver buffers fill. Symbol negentropy 
builds. Memetic pressure builds. Receptivity builds. Hallucinations occur.

I'm not trying to convince anyone, just furthering my own thoughts. I guess 
the question is: to what extent are we individuals? Are we just parroting 
patterns and memes, changing them a little, then acting as 
switches/routers/reflectors? Some people are more reflectors, and some add 
more K-complexity to the distributed intelligence, I suppose. But I think that 
we have less individuality than most people assume.

John


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 3:34 PM, korrelan wrote:
> Reading back up the thread I do seem rather stern or harsh in my opinions, if 
> I came across this way I apologise. 

I didn't think that of you; we shouldn't be overly sensitive and afraid to 
offend. There is no right to not be offended, at least in this country :)

John



Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 2:05 PM, korrelan wrote:
> The realisation/ understanding that the human brain is a closed system, to
> me… is a first order/ obvious/ primary concept when designing an AGI or in
> my case a neuromorphic brain simulation.



A human brain is merely an instance node on the graph of brains. All 
biological brains are connected, at the very least by DNA at a base level. Not 
a closed system. It's easy to focus in but ignore the larger networks of 
information flow that the human brain or an emulated brain is part of.

For example, when modeling vehicular traffic do you only study or emulate one 
car? Or say when studying the intelligence of elephants do you only model one 
elephant? If that's all you do you miss the larger complex systems and how they 
relate to the structure and behavior of individual components. Also, you ignore 
the intelligence hosted by differences in brains in a larger web of pattern 
networks.

John


Re: [agi] Simulation

2019-09-24 Thread korrelan
Reading back up the thread I do seem rather stern or harsh in my opinions, if I 
came across this way I apologise. 

Believe it or not I'm quite an amicable chap, I just lack/ forget the social 
graces on occasion.

:)


Re: [agi] Simulation

2019-09-24 Thread korrelan
The realisation/ understanding that the human brain is a closed system, to me… 
is a first order/ obvious/ primary concept when designing an AGI or in my case 
a neuromorphic brain simulation.


> Your model looks like it has a complexity barrier.

On the contrary, I’m negating a complexity barrier by representing processing 
at the basest level, akin to binary on a von Neumann machine. In my opinion 
the only way to create a human-level AGI is to start at the bottom with a 
human-derived connectome model and build up hierarchically.


> I'm not pursuing a complex-system model; perhaps that's our disconnect here?


Yes... perhaps that's our disconnect, cheers for the protocol exchange.

:)


Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote:

"the brain is presented with external patterns"
"When you talk to someone"
"Take this post as an example; I’m trying to explain a concept"
"Does any of the actual visual information you gather"

Re-read the phrases above; they relate more to a consciousness/communication 
layer, IMO an open system: "presented", "talk", "explain", visually "gather", 
etc. I totally agree that this "layer" surrounds something much deeper, which 
you are referring to and which I wasn't yet describing except in a weakly 
represented form. Your model looks like it has a complexity barrier, or your 
particular emulation of the human brain does. Still not closed.

On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote:
> Does any of the actual visual information you gather from viewing ever
> leave your closed system? You can convert it into a common protocol and
> describe it to another brain, but the actual visual information stays with
> you.



Yes! But its representation is far removed from the input... How far away? It 
is very tedious to describe in mathematical detail; we could spend much time 
discussing that description. I'm not pursuing a complex-system model; perhaps 
that's our disconnect here?

John



Re: [agi] Simulation

2019-09-24 Thread korrelan
> How does ANY brain acting as a pattern reservoir get filled?


No one fills/ places/ forces/ feeds/ writes information directly into a 
brain; the brain is presented with external patterns that represent 
information/ learning by other brains, and these trigger a similar scenario in 
the receiver’s brain.


It’s the singular closed intelligence of the learning brain that makes sense 
of the information and creates its own internalised version for its own usage, 
based on its own knowledge and experiences.


When you talk to someone you are not imparting any kind of information; 
language is a common protocol designed to trigger the same/ similar mental 
model in the receiver, but it’s the receiver’s version of reality that 
comprises the information, not the sender’s.
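
A toy sketch of what I mean (purely illustrative; the two dictionaries stand 
in for two brains’ private models, and all the names are mine):

    # Language as a trigger protocol: the symbol itself carries no meaning;
    # each brain reconstructs meaning from its own private model.
    sender_model   = {"apple": "green orchard apple from my childhood"}
    receiver_model = {"apple": "red apple from my own garden"}

    message = "apple"                        # the common protocol: a bare symbol
    meaning = receiver_model[message]        # the receiver's version of reality
    assert meaning != sender_model[message]  # the sender's version never travelled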


Take this post as an example; I’m trying to explain a concept in a manner 
that will enable your personal closed internal simulation of reality to 
recognise/ simulate the same/ similar concept, but you will be using your own 
knowledge and intelligence to grasp it… not mine.


> Uhm... a "closed system" that views. Not closed then?


Does any of the actual visual information you gather from viewing ever leave 
your closed system? You can convert it into a common protocol and describe it 
to another brain, but the actual visual information stays with you.


A brain can’t learn anything without the innate intelligence to do so.


:)



Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:07 AM, immortal.discoveries wrote:
> The brain is a closed system when viewing others

Uhm... a "closed system" that views. Not closed then?

John


Re: [agi] Simulation

2019-09-24 Thread immortal . discoveries
We are machines; your home is a machine. The brain is a closed system when 
viewing others, but you're not "you", it's all machinery actually. There may 
be some magical consciousness but it has nothing to do with the physics really.


Re: [agi] Simulation

2019-09-24 Thread John Rose
On Monday, September 23, 2019, at 7:43 AM, korrelan wrote:
> From the reference/ perspective point of a single intelligence/ brain there
> are no other brains; we are each a closed system, and a different version of
> you exists in every other brain.



How does ANY brain acting as a pattern reservoir get filled? There is an 
interaction point, or receiver/transmitter component(s). There are no closed 
systems for brains; look closer and you will find the graph edges. Some 
patterns auto-generate, but the original pre-populated structure comes from 
another brain, even if it is a geeky programmer.

On Monday, September 23, 2019, at 7:43 AM, korrelan wrote:
> We don’t receive any information from other brains; we receive patterns
> that our own brain interprets based solely on our own learning and
> experience. There is no actual information encoded in any type of language
> or communication protocol; without the interpretation/ intelligence of the
> receiver the data stream is meaningless.



Of course, it's a given that the receiver needs to be able to interpret... 
The transceiver piece also feeds data from the environment to add to new 
pattern formation. Even an insect brain can eventually discern between an "A" 
and a "B".

Another thing I'm looking at is conscioIntelligent flow, or to put it another 
way, patternistic flow. It could potentially even be modeled somewhat by 
Bernoulli equations, but perhaps this is covered by memetics.

John


Re: [agi] Simulation

2019-09-23 Thread korrelan
From the reference/ perspective point of a single intelligence/ brain there 
are no other brains; we are each a closed system, and a different version of 
you exists in every other brain.


We don’t receive any information from other brains; we receive patterns that 
our own brain interprets based solely on our own learning and experience. 
There is no actual information encoded in any type of language or 
communication protocol; without the interpretation/ intelligence of the 
receiver the data stream is meaningless.


:)


Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 6:48 PM, rouncer81 wrote:
> actually no!  it is the power of time.    doing it over time steps is an 
> exponent worse.

Are you thinking along the lines of Konrad Zuse's Rechnender Raum?  I just had 
to go read some again after you mentioned this :)

John


Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 8:42 AM, korrelan wrote:
> Our consciousness is like… just the surface froth, reading between the
> lines, or the summation of interacting logical pattern recognition processes.



That's a very good, clear, single-brain description of it. Thanks for that.

I don't think a complete understanding of consciousness is possible from a 
single brain. Picture this: the rise of general intelligence in the human 
species, a collection of brains spread over time and space, communicating. 
Each brain is a node in a graph. The consciousness piece is a component in 
each brain's transceiver, transmitting on graph edges to other brains and to 
other non-brain, environment-related structure. In this model there is 
naturally much superfluous material that can be eliminated compared to a 
single-brain model, since a single brain has to survive independently in the 
real environment. And the graph model can be telescoped down into a single 
structure, I believe.

To be more concise, consciousness can be viewed as a functional component in 
the brain’s transceiver. That’s essentially the main crux of my perspective. 
Could it be wrong? Oh ya, totally… But that functionality, whether or not it 
is consciousness itself, is still integral to general intelligence IMO. And 
there are other related reasons…

It would be interesting to analyze single-brain consciousness connectome 
structure based on the multi-brain intelligence model: why things happen as 
they do in the electrochemical patterns, firing up the single-brain model and 
getting it transceiving with other emulations.
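
As a toy sketch of that framing (all names and structure here are my own 
invention, nothing anatomical):

    from collections import defaultdict

    edges = defaultdict(list)    # brain -> neighbouring brains in the graph
    inbox = defaultdict(list)    # brain -> patterns received on its edges

    def connect(a, b):
        edges[a].append(b)
        edges[b].append(a)

    def transmit(sender, pattern):
        # The "consciousness" component: push a symbol onto every edge.
        for neighbour in edges[sender]:
            inbox[neighbour].append((sender, pattern))

    connect("brain_A", "brain_B")
    transmit("brain_A", "sunrise-pattern")
    print(inbox["brain_B"])      # [('brain_A', 'sunrise-pattern')]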

John


Re: [agi] Simulation

2019-09-23 Thread rouncer81
Yeah, that's what I'm talking about, thanks: all permutations of space and 
time, and I'm getting a bit confused...


Re: [agi] Simulation

2019-09-23 Thread immortal . discoveries
The butterfly effect is like a tree: there are more and more paths as time 
goes on. In that sense, time is the number of steps. Time powers the search 
space (with branching factor b, t steps give b^t distinct paths). Time = 
Search Space/Tree. So even if you have a static tree, it is really time, 
frozen; unlike real life, you can go forward/backward wherever you want. Not 
sure time powers outer space though; outer space is outer space.


Re: [agi] Simulation

2019-09-22 Thread rouncer81
Actually no! It is the power of time. Doing it over time steps is an exponent 
worse.


Re: [agi] Simulation

2019-09-22 Thread rouncer81
Oops, that thing about time not being the 4th dimension was wrong in a way, 
and I'm just being confusing on purpose, sorry about that.
2^(x*y*z*time) is all permutations of the 3d space (blipping on and off) over 
that many time steps... actually I'm not sure if I'm right or wrong about it.
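
A quick sanity check on the counting, treating each cell of the grid as one 
on/off bit (a toy helper, my own naming):

    def histories(x, y, z, t):
        cells = x * y * z               # binary cells in the 3d grid
        states_per_step = 2 ** cells    # configurations at any one instant
        return states_per_step ** t     # == 2 ** (cells * t) possible histories

    # a 2x2x2 grid over 3 time steps:
    assert histories(2, 2, 2, 3) == 2 ** (8 * 3)

So 2^(x*y*z*time) does count every on/off history of the space over that many 
time steps.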


Re: [agi] Simulation

2019-09-22 Thread korrelan
> Or network on top of network on top of network... turtles.


Close... though more... networks within networks...

Consciousness seems so elusive because it is not an ‘intended’ product of the 
connectome directly recognising sensory patterns; consciousness is an extra 
layer. The interacting synaptic networks produce harmonics because each is 
using a specific frequency to communicate with its logical/ connected 
neighbours. The harmonics/ interference patterns travel through the synaptic 
network just like normal internal/ sensory patterns. Consciousness uses the 
same internal/ external networks that are the product of learning through 
external experiences but… it’s disconnected/ out of phase from the normal deep 
pattern recognition processes… it’s an interference by-product that 
piggy-backs on/ influences the global thought pattern.


It’s similar to hypnotism or deep meditation… cortical regions learn the 
harmonics… our sub-conscious is just out of phase, or to be more precise, our 
consciousness is out of phase with the ‘logical’ intelligence of our connectome.


Our consciousness is like… just the surface froth, reading between the lines, 
or the summation of interacting logical pattern recognition processes.


Consciousness is just the sound of all the gears grinding.


https://www.youtube.com/user/korrelan


:)


Re: [agi] Simulation

2019-09-22 Thread John Rose
On Saturday, September 21, 2019, at 7:24 PM, rouncer81 wrote:
> Time is not the 4th dimension, time is actually powering space.   
> (x*y*z)^time.

And what's the layer on top of (x*y*z)^time that allows intelligent 
interaction and efficiency to be expressed and executed in this physical 
universe? Symbol representation creation/transmission, a.k.a. consciousness. 
It is the fabric on which intelligence operates. You cannot pull it out of the 
equation no matter how hard you try. You can pretend it doesn't exist but it 
will always come back to bite you in the end.

Unless there is some sort of zero energy, zero latency, infinite bandwidth 
network floating this whole boat... which there might be, or I should say 
probably is... Or network on top of network on top of network... turtles.

John


Re: [agi] Simulation

2019-09-21 Thread rouncer81
Agreed with everyone here on this subject. Yes, simulation - what have I got 
to say?
Time is not the 4th dimension, time is actually powering space. (x*y*z)^time.


Re: [agi] Simulation

2019-09-21 Thread John Rose
On Saturday, September 21, 2019, at 11:01 AM, Stefan Reich wrote:
> Interesting thought. In all fairness, we just can't really interact with a 
> number which doesn't have a finite description. As soon as we do, we pull it 
> into our finiteness and it stops being infinite.

IMO there are only finite-length descriptions. When something more accurate 
is needed in this thermodynamic universe, a better description is attempted, 
and we create pointers to yet-to-be-computed computations, a.k.a. symbols.

Coincidentally related: did anyone see this quite interesting recent proof 
utilizing graph theory?
https://www.scientificamerican.com/article/new-proof-solves-80-year-old-irrational-number-problem/

paper here:
https://arxiv.org/abs/1907.04593

John


Re: [agi] Simulation

2019-09-21 Thread Stefan Reich via AGI
> It is like saying that the vast majority of real numbers don't have a
finite length description. I can't give you an example of one of those
either.

Interesting thought. In all fairness, we just can't really interact with
a number which doesn't have a finite description. As soon as we do, we pull
it into our finiteness and it stops being infinite.

Actually, the property of having "a finite description" is very
anthropocentric, isn't it? It all depends on our language. e is quite infinite
in a way, yet we can describe it exactly.
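
For instance, e's never-ending decimal expansion follows from a finite
description, its series sum. A minimal sketch (the helper is my own naming,
and the last digit can be off when truncation lands near a boundary):

    from fractions import Fraction

    def e_digits(n):
        # Sum 1/k! until the next term cannot affect the n-th decimal digit.
        total, term, k = Fraction(0), Fraction(1), 0
        while term > Fraction(1, 10 ** (n + 2)):
            total += term
            k += 1
            term /= k
        scaled = total * 10 ** n        # shift n decimals into the integer part
        digits = str(scaled.numerator // scaled.denominator)
        return digits[0] + "." + digits[1:]

    print(e_digits(20))   # 2.71828182845904523536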

How did the topic move to infinite math again? Nevermind, it's always fun.



Re: [agi] Simulation

2019-09-21 Thread Matt Mahoney
These aren't the only 4 possible simulation scenarios. There are probably
others we cannot imagine because the simulation doesn't allow us to.

And no, I can't give you an example, for that reason. It is like saying that
the vast majority of real numbers don't have a finite length description. I
can't give you an example of one of those either. (Finite descriptions are
finite strings over a finite alphabet, so there are only countably many of
them, while the reals are uncountable.)

On Fri, Sep 20, 2019, 5:39 PM TimTyler  wrote:

> On 2019-09-02 16:39 PM, Matt Mahoney wrote:
> > Here are at least 4 possibilities, listed in decreasing order of
> > complexity, and therefore increasing likelihood if Occam's Razor holds
> > outside the simulation.
> >
> > 1. Only your brain exists. All of your sensory inputs are simulated by
> > a model of a non-existent outside world. The universe running this
> > simulation is completely different and we can know nothing about it.
> > It might be that space, time, matter, and life are abstract concepts
> > that exist only in the model.
> >
> > 2. Your mind, memories, and existence are also simulated. You didn't
> > exist one second ago.
> >
> > 3. The observable universe is modeled in a few hundred bits of code
> > tuned to allow intelligent life to evolve. In this case the speed of
> > light is in the code.
> >
> > 4. All possible universes with all possible laws of physics exist and
> > we necessarily observe one that allows intelligent life to evolve.
> 
> This makes 3 and 4 sound more likely. However, there's a problem with that
> argument - which is that the four cases are not mutually exclusive. 3 and 4
> include 1 and 2 as possibilities.
> 
> If you try and fix that problem by defining 3 or 4 so that they exclude 1
> and 2, then the argument about 3 and 4 being favored by Occam's razor (if
> that is still a thing in the enclosing world) no longer holds - because they
> are no longer simpler options.
> 
> If this argument about Occam's razor did hold, I think simulism would be a
> bit of a less interesting possibility. However, it isn't a valid argument.
> 
> --
> __
> |im |yler http://timtyler.org/
> 



Re: [agi] Simulation

2019-09-21 Thread immortal . discoveries
When I dream, I can see anything. I was in a huge desert full of buildings, 
no end in sight, creatures walking up a grassy oompa-loompa land. Sometimes I 
have a new body and can even feel tentacles around a sewer wall while my new 
body is not even touching that wall (I have a skin around the wall, linked to 
me...). Clearly, all we see can be dreamed for sure.


Re: [agi] Simulation

2019-09-21 Thread John Rose
All four are partially correct. 

It is a simulation. And you're it. When you die your own private Idaho ends 
*poof*.

This can all be modeled within the framework of conscioIntelligence, CI = UCP + 
OR.

When you are that tabula rasa simuloid in your mother's womb, you begin to 
occupy a representation of the world. Since there is little structure yet, the 
communication protocol is simple. After you are born, that occupation expands 
and protocols are built up. Structure is shared amongst other simulators. 
Symbol negentropy is maintained, and you could also say a memetic pressure 
exists among those agents.

Communication, particularly, is done through your eyes, ears, mouth, etc. 
When you chow down on that big cheese burrito, what are you doing? You are 
occupying a representation of the glorious nature of it all.

And the universe observes itself through your senses, or occupies a 
representation of itself by observing through everyone’s senses… informational 
structure gets instantiated and transmitted amongst people nodes.

Whether or not a big alien is running it all is another topic. Intelligent 
design? A deity? Those topics used to be taboo, like UFOs, even though we have 
public military footage of UFOs now.

John



Re: [agi] Simulation

2019-09-21 Thread Nanograte Knowledge Technologies
Hi Tim

I'm enjoying your website. Thanks for sharing.

Robert


From: TimTyler 
Sent: Saturday, 21 September 2019 01:35
To: agi@agi.topicbox.com 
Subject: Re: [agi] Simulation

On 2019-09-02 16:39 PM, Matt Mahoney wrote:
> Here are at least 4 possibilities, listed in decreasing order of
> complexity, and therefore increasing likelihood if Occam's Razor holds
> outside the simulation.
>
> 1. Only your brain exists. All of your sensory inputs are simulated by
> a model of a non-existent outside world. The universe running this
> simulation is completely different and we can know nothing about it.
> It might be that space, time, matter, and life are abstract concepts
> that exist only in the model.
>
> 2. Your mind, memories, and existence are also simulated. You didn't
> exist one second ago.
>
> 3. The observable universe is modeled in a few hundred bits of code
> tuned to allow intelligent life to evolve. In this case the speed of
> light is in the code.
>
> 4. All possible universes with all possible laws of physics exist and
> we necessarily observe one that allows intelligent life to evolve.

This makes 3 and 4 sound more likely. However, there's a problem with that
argument - which is that the four cases are not mutually exclusive. 3 and 4
include 1 and 2 as possibilities.

If you try and fix that problem by defining 3 or 4 so that they exclude 1 and
2, then the argument about 3 and 4 being favored by Occam's razor (if that is
still a thing in the enclosing world) no longer holds - because they are no
longer simpler options.

If this argument about Occam's razor did hold, I think simulism would be a bit
of a less interesting possibility. However, it isn't a valid argument.

--
__
  |im |yler http://timtyler.org/





Re: [agi] Simulation

2019-09-20 Thread TimTyler

On 2019-09-02 16:39 PM, Matt Mahoney wrote:
> Here are at least 4 possibilities, listed in decreasing order of
> complexity, and therefore increasing likelihood if Occam's Razor holds
> outside the simulation.
> 
> 1. Only your brain exists. All of your sensory inputs are simulated by
> a model of a non-existent outside world. The universe running this
> simulation is completely different and we can know nothing about it.
> It might be that space, time, matter, and life are abstract concepts
> that exist only in the model.
> 
> 2. Your mind, memories, and existence are also simulated. You didn't
> exist one second ago.
> 
> 3. The observable universe is modeled in a few hundred bits of code
> tuned to allow intelligent life to evolve. In this case the speed of
> light is in the code.
> 
> 4. All possible universes with all possible laws of physics exist and
> we necessarily observe one that allows intelligent life to evolve.


This makes 3 and 4 sound more likely. However, there's a problem with that
argument - which is that the four cases are not mutually exclusive. 3 and 4
include 1 and 2 as possibilities.

If you try and fix that problem by defining 3 or 4 so that they exclude 1 and
2, then the argument about 3 and 4 being favored by Occam's razor (if that is
still a thing in the enclosing world) no longer holds - because they are no
longer simpler options.

If this argument about Occam's razor did hold, I think simulism would be a bit
of a less interesting possibility. However, it isn't a valid argument.

--
__
 |im |yler http://timtyler.org/
 


