RE: [agi] Blockchainifying Conscious Awareness

2018-06-19 Thread John Rose
Rob,

 

This is a very insightful and knowledgeable reply and most of your coverage is 
spot-on. But…

 

Think back to when “databases” were first being pursued and becoming popular. I don’t 
know, say 1990-ish? What was a database then? And think of databases now and 
their realm of function: for example Firebase, the NoSQL family, Mongo, the graph 
database ecosystem with add-ons and extensions, SQL Server and R, whole 
development environments INSIDE the DB, etc. A DBA then, like an accountant, 
versus a DBA now…

 

Blockchain and crypto-systems are not fixed, immutable concepts like, say, web 
browsers. Though web browsers are a cornucopia of techno-functionality 
nowadays, so that's a bad example; take ring buffers instead. A ring buffer is a 
relatively fixed tool that solves some specific problems. Blockchain basically 
opens doors into other worlds.

 

But you are right many AGI “components” difficult or impossible to 
blockchainify. Some components can be vastly improved it seems. When building a 
giant sculpture with a handful of traditional tools you need to utilize a new 
tool in new and creative ways to do things difficult or impossible to do before.

 

Most of my experience with blockchain, besides some technical research, is from 
trading cryptocurrencies over the years and running many masternodes (wife 
calls me a masternoder) to supplement income. So I’ve done research on 
thousands of cryptos basically and have practical experience. Look at TRON 
acquiring BitTorrent – very interesting.

 

So among the thousands of cryptos you get many organizations doing different 
things and going in different directions with their chains… on and off…

 

Relational integrity, of course! My suggestion here on conscious awareness, 
essentially securely recording and distributing historical consciousness 
as a platform for “model checking” 
(https://en.wikipedia.org/wiki/Model_checking) on imagined models, was to use 
the specific new features that blockchain brings.

 

Just touching on a couple of your insights…

 

John

 

From: Nanograte Knowledge Technologies via AGI  
Sent: Monday, June 18, 2018 10:45 AM
To: AGI 
Subject: Re: [agi] Blockchainifying Conscious Awareness

 

The Blockchain 2.0 and AGI - My rudimentary thoughts. Because I'm still 
learning about this technology, anyone (including IBM) should feel free to 
correct me wherever my understanding fails the reality. I'm commenting 
because of the apparent significance of this technology to our business future 
(including an AGI world). Please accept my apologies if I inadvertently 
misrepresent the facts of the product and/or its application across industry. I 
have no motive to want to do so. 

I can best relate to Blockchain as an application, which automatically 
generates Entity Relational Databases on the fly. Primarily, Blockchain very 
much represents a dynamic data model. Granted, it has transactional 
functionality, security, and the like, but all of that would be quite 
meaningless without the integrity derived from the data model. 

Further, without the relational part of the model, all entities (in the sense 
of nodes) would remain uncoupled (unclustered). Without the business rules in 
place, the nodes would not be able to be logically clustered. In short, 
Blockchain could be compared to many things, including a scalable VPN, which is 
enabled by a dynamic, relational data model. Did I say relational? Yes, I did. 

In argument, if no relational integrity existed, would referential integrity be 
possible at all (in this case)? In a stretch, the hearty part of Blockchain 
could also be viewed as an operationally-integrated, near-real time, 
enterprisal, lower CASE tool.

My summary:

Is The Blockchain a great, closed-network, commercial app? It probably is. 

How would The Blockchain co-exist with AI? Pretty damned well. 

Is The Blockchain suitable as a core component for an AGI platform? No, but it 
may be quite useful as a management application (a node) for one of the many 
levels within the system (e.g., value transactions). This, provided the true 
scalability issues could be resolved, which remains to be seen. If The 
Blockchain belonged to me, I would've integrated it with a 
truly-scalable ontology and repositioned its core on a complex-adaptive model. 
I would've turned the pyramid on its head.

What do I see as the key constraint for The Blockchain? Its core dependency on 
what appears to be a relational model.

What is my issue with The Blockchain's claim to be fully scalable? Given 
pervasive network infrastructure being used across the world, I think a risk 
exists that The Blockchain may eventually either duplicate, or contribute to 
ambiguity within an open-standards network topology. That is, unless The 
Blockchain is only X scalable within a standardized networking environment. 
Scalable perhaps, yet limited in scale (does this still count as scalability 
then?). I'm not suggesting the data-model-generation component is not 

RE: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread John Rose
Ehm, "chunking out code"... that's, ah, yeah, a good way to describe it.

I agree. Arthur, you need to elevate yourself, man. The Elon Musks of the world 
are stealing all the thunder.

John

> -Original Message-
> From: Mike Archbold via AGI 
> 
> At least A.T. Murray is in the trenches chunking out code, unlike all of our
> celebrities like Elon Musk and Bill Gates who, while they may have more
> money, just write about it! Roll on Arthur...
> 



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-Mfd8f1a0f69610c6b54b592c0
Delivery options: https://agi.topicbox.com/groups


[agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread John Rose
How I'm thinking lately (might be totally wrong, totally obvious, and/or 
totally annoying to some but it’s interesting):

Consciousness Oriented Intelligence (COI)

Consciousness is Universal Communications Protocol (UCP)

Intelligence is consciousness manifestation 

AI is a computational consciousness

GI is consciousness computation

GI requires non-homogeneous multi-agent structure (commonly assumed), with 
intra- and inter-agent communication in consciousness.

Consciousness computation (GI) is on the negentropic massive multi-partite 
entanglement frontier of a spontaneous morphismic awareness complexity - IOW on 
the edge of life’s consciousness based on manifestation of inter/intra-agent 
entanglement (in DNA perhaps?).

IOW the communication protocol UCP (consciousness) is simultaneously the 
computed, the computer, and the cross-categorical interlocutor (cohomological 
sheaf weaver?).

So for AGI we need to artificially create consciousness in software.

How's that done?  Using mathematical shortcuts from the knowledge gained from 
the collective human general intelligence and replacing the universal 
communications protocol of consciousness mathematically and computationally.

And there is a trend in AGI R&D that aims for this, but under other names and 
descriptions, since the term consciousness has a lot of baggage; but the concept 
is morphismic (and perhaps Sheldrakedly morphic).

My sense though says that we are going to start seeing (already maybe?) 
evidence of massive and pervasive biological quantum entanglement, for example in 
DNA. And the entanglement might go back eons and the whole of life's collective 
consciousness could be based on that...

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M0d67035a0f5f8e8fd877bd6e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread John Rose
> -Original Message-
> From: Russ Hurlbut via AGI 
> 
> 1. Where do you lean regarding the measure of intelligence? - more towards
> that of Hutter (the ability to predict the future) or towards
> Wissner-Gross/Freer
> (causal entropy - sort of a proxy for future opportunities; ref
> https://www.alexwg.org/publications/PhysRevLett_110-168702.pdf)

Russ,

I see intelligence, in one way, as efficiency: increasing intelligence as 
efficiency increases. Measuring it would mean comparing efficiencies. Predicting 
futures is one form of attaining efficiency, but I usually lean towards the 
thermodynamic aspects when theorizing, though in software that is somewhat 
virtualized into the information-theoretic analogues.

> 2. Do you
> agree with Tegmark's position regarding consciousness? Namely,
> "Consciousness might feel so non-physical because it is doubly substrate
> independent:
> * Any chunk of matter can be the substrate for memory as long as it has many
> different stable states;
> * Any matter can be computronium, the substrate for computation, as long as
> it contains certain universal building blocks that can be combined to
> implement any function. NAND gates and neurons are two important examples
> of such universal "computational atoms.".
> 

Definitely agree with the digital physics aspects. IMO all matter is memory and 
computation. Everything is effectively storing and computing. Also I think 
everything can be interpreted as language, and when you think about it, it is. 
For example, take an individual molecule and calculate its alphabet based on 
atomic positions. The molecule is effectively talking with positional subsets, 
or words. It can also speak a continuous language versus individual 
probabilistic states based on heat or whatever. And some matter would be more 
intelligent, being more computationally flexible.
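
To ground the NAND point quoted above (NAND as a universal "computational atom"), here is a minimal sketch in Python, purely illustrative, composing NOT, AND and OR from nothing but NAND:

    def nand(a: int, b: int) -> int:
        # NAND is 0 only when both inputs are 1
        return 0 if (a and b) else 1

    def not_(a: int) -> int:
        return nand(a, a)

    def and_(a: int, b: int) -> int:
        return not_(nand(a, b))

    def or_(a: int, b: int) -> int:
        # De Morgan: a OR b == NAND(NOT a, NOT b)
        return nand(not_(a), not_(b))

    # Truth tables fall out of composition alone
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", "AND", and_(a, b), "OR", or_(a, b))

Any Boolean function, and hence any finite computation, can be built this way, which is the sense of "universal building block" in the quote.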


> If consciousness is the way information feels when being processed in certain
> complex ways, then it's merely the structure of the information processing 
> that
> matters, not the structure of the matter doing the information processing. A
> wave can travel across the lake, even though none of its water molecules do.
> It's not the particles but the pattern that really matters.
> (A Tegmark cliff notes version of can be found here:
> https://quevidaesta2010.blogspot.com/2017/10/life-30-max-tegmark.html)
> 

Now you're making me have to think. It's both, right? The wave going across a 
different lake, say a lake of liquid methane, will have a different waveform. Not 
sure how you can separate the structural complexity of the processing from the 
processed, since information is embedded in matter. Language, math, and symbols must 
be represented physically (for example in ink or in the brain). In an 
electronic computer, though, it is very separate; the electrons and holes on 
silicon highways are strongly decoupled from the higher-level informational 
representation they are shuttling... hmmm!

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M936f76447ec1d2ade78e9d8f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> "Randomness" is merely computational distance from agent perspective."
> 
> That is really interesting but why the fixation on the particular
> fictionalization?  Randomness is computation distance from the agent
> perspective?  No it isn't. 

Jim,

OK, what then is between a compression agent's perspective (or any agent's for 
that matter) and randomness? Including shades of randomness up to relatively 
"pure" randomness.


> I will have to give up trying but you are not merely using
> (specialized) linguistic reference markers. What you are saying makes enough
> sense to me to want to think about it but the noise makes it more difficult to
> understand. So yeah, I can see how randomness within a relative constraint
> system might be related to computational distance - especially from [your
> perspective] of the agent's perspective. But even if I accept that as a
> reasonable view, you later made this remarkable statement: "it's an operation
> not a number or data point until you reach a boundary of thermodynamic
> expense being a compressor agent in a virtualized escapism pulled back to
> finite entropic reality."
> 

From an information theoretic (and thermodynamic) viewpoint in your mind what 
happens when you see the symbol for infinity? Semi-quantitatively describe the 
thought processes?

John






--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M2d4b7f74746a361aa34f68eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> On Thu, Oct 11, 2018 at 12:38 PM John Rose 
> wrote:
> > OK, what then is between a compression agents perspective (or any agent
> for that matter) and randomness? Including shades of randomness to
> relatively "pure" randomness.
> 
> A string is random if there is no shorter description of the string.
> Obviously this depends on which language you use to write descriptions.
> Formally, a description is a program that outputs the string. There are no
> "shades" of randomness. A string is random or not, but there is no general
> algorithm to distinguish them in any language. If there were, then AIXI and
> thus general intelligence would be computable.
> 

Exactly! But you don't generally know whether there is a shorter description, do you? 
So the compression agent does its thing and executes programs and tries to find 
out, until it runs out of resources for whatever reason. In the absolute case there 
is only random and non-random. In the real world there are shades of randomness, I 
think? Something appears random to one agent but not to another.
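
A rough way to picture those shades: treat a particular compressor as the agent and use the compression ratio it achieves as its resource-bounded estimate of randomness. A minimal sketch, assuming Python's standard zlib as the stand-in agent (illustrative only):

    import os
    import zlib

    def apparent_randomness(data: bytes, effort: int = 9) -> float:
        # Ratio of compressed size to original size. Near (or above) 1.0 means
        # this particular agent found no shorter description at this effort level.
        if not data:
            return 0.0
        return len(zlib.compress(data, effort)) / len(data)

    structured = b"abab" * 2500      # highly regular data
    noisy = os.urandom(10000)        # incompressible to most agents

    print(apparent_randomness(structured))            # small: looks non-random
    print(apparent_randomness(noisy))                  # ~1.0: looks random
    print(apparent_randomness(structured, effort=1))   # a weaker agent may judge the same data differently

The same string can land at different points on that scale for different agents (different compressors, different resource budgets), which is one way to read "random to one agent but not another".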


> > From an information theoretic (and thermodynamic) viewpoint in your mind
> what happens when you see the symbol for infinity? Semi-quantitatively
> describe the thought processes?
> 
> The same thing that happens when you see any other symbols like "2" or "+".
> Mathematics is the art of discovering rules for manipulating symbols that help
> us make real world predictions.
> 

Not quite. "2" is more often a data point and "+" an operation (they don't have to be 
but usually are). "Infinity" is 2+2+2+2+2+... IOW a program that you execute 
until you get tired, say "just give me a symbol," and it's done. Unless you are 
persistent and keep imagining other programs, but at some point you need a 
symbol. Or, for the duration of the agent's existence, it keeps attempting to 
compute it :) And it could, but it would have to suck in the universe.
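
In that spirit, a toy sketch (Python, names hypothetical) of "infinity as a program you run until the budget runs out, then hand back a symbol":

    def infinity(budget: int) -> str:
        # Try to compute 2+2+2+... until the step budget, a stand-in for
        # thermodynamic expense, is exhausted.
        total = 0
        for _ in range(budget):
            total += 2
        # Out of resources: abandon the partial sum and return a symbol
        # that stands for the never-finished computation.
        return "\u221e"

    print(infinity(1_000_000))

The returned symbol is the boundary marker: the point where the agent stops executing the operation and just refers to it.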

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M71f2312fba5fe07bf4f00de0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-09 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> Operating on compressed data without having to decompress it is the goal that
> I am thinking of so being able to access internal relations would be 
> important.
> There can be some compressed data that does not contain explicit internal
> relations but even then it would be nice to be able to make modifications to
> the data without decompressing it. My assumption is that the data would have
> some kind of internal relations that were either implicit in the data or which
> might be a product of the compression method.
> The parts of the model that I am thinking about may contain functions to:
> Compress data.
> Transform compressed data into another compressed form without
> decompressing it.
> Append additional data onto the previously compression without
> decompressing it.
> Modify the data previously compressed without decompressing it.
> Decompress the data.
> 


Isn't this just building alphabets of patterns and symbolizing "effective 
complexity" regions (Gell-Mann) on successive iterations while interacting 
with a more general library graph of symbols? Aligning to entropy extrema when 
forming a crypticity topology... shifting lossy and lossless dynamically in 
referencing the general library. IOW, for example, mining into "dynamical depth" 
and then inserting "purer" symbols from the library into the compressed form at the 
appropriate depth. Symbol injection, basically... the cleaner symbols being 
effectively pre-compressed.

Maybe?
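
Purely as a strawman, the operations Jim lists above could be pinned down as an interface like this (Python, all names hypothetical; nothing here claims such algorithms exist for an arbitrary compressor):

    from abc import ABC, abstractmethod

    class CompressedStore(ABC):
        """Hypothetical contract for working on compressed data without
        decompressing it, mirroring the five operations listed above."""

        @abstractmethod
        def compress(self, data: bytes) -> bytes: ...

        @abstractmethod
        def decompress(self, blob: bytes) -> bytes: ...

        @abstractmethod
        def transform(self, blob: bytes, rule: str) -> bytes:
            """Rewrite one compressed form into another compressed form."""

        @abstractmethod
        def append(self, blob: bytes, extra: bytes) -> bytes:
            """Attach new data onto an existing compressed blob."""

        @abstractmethod
        def modify(self, blob: bytes, offset: int, patch: bytes) -> bytes:
            """Edit previously compressed data in place, no full decompress."""

Whether the "symbol injection" idea maps onto transform() or modify() would depend on how the library graph of symbols is represented.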

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-Mb6546283d67a31c72034ca22
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> John,
> Can you map something like multipartite entanglement to something more
> viable in contemporary computer programming? I mean something simple
> enough that even I (and some of the other guys in this group) could
> understand? Or is there no possible model that could be composed from
> contemporary computer programming concepts?
> Jim Bromer
> 

Yes, what's the difference between knowing and knowing versus knowing and 
telling? Or, what are the computational distances, information distances, 
algebraic distances, etc.?

Entanglement in biological separation mimicry can be virtualized into 
communicational group modeling. Contemporary computers are unable to do quantum 
entanglement, but they can excel in natural language communication complexity 
and bandwidth efficiency. And with contemporary computers there is physics and 
there are physics. Virtuality lends itself to overcoming physical and separation 
issues.

John







--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mc1e739559676dc5e0a7dea27
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message-
> From: Nanograte Knowledge Technologies via AGI 
> 
> John. considering eternity, what you described is but a finite event. I dare 
> say,
> not only consciousness, but cosmisity.
> 

Not until one comes to terms with their true insignificance will they grasp 
their true significance.

Wait, doesn't insignificance just equal anti-significance?

No, it depends which one you are thinking about at the moment or which one you 
are temporally conscious of... when using qualia qubits.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mf66302d93cc71626da10805d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> And if the concept of randomness is called into question then
> how do you think entropic extremas are going to hold up?
> 

"Entropic extrema" as in computational resource expense barrier, including 
chaotic boundaries, too expensive to mine into for the compression agent 
causing symbol explosion and unpredictable time complexity.. so effectively 
one-time symbolizing the whole region and working around it until a larger 
pattern is discovered perhaps on successive passes and the symbol can be fitted 
into some dynamical component from an emerging model. "Randomness" is merely 
computational distance from agent perspective.

Your example? Infinity. The ultimate symbol explosion. So what do you do? You 
symbolize it. And you're right, it is not a number unless intentionally demarcated 
as such within a virtualized boundary. Symbols can be pointers to regions of relatively 
incomputable data or an expression of an operation to generate data. For infinity 
there are an infinite number of expressions, so the expression should be in 
relation to the agent engine. The more efficient and intelligent the agent, the 
better it is at creating computable expressions versus data pointers of symbol 
alphabets and languages. And expressions can be re-expressed into simpler form 
with optimization of the language on successive passes.

What do you envision when seeing the symbol "infinity"? The time complexity of 
various algorithms in your mind... it's open ended... unpredictable... your 
mind symbolizes symbols. Infinity symbolizes symbol creation; it's an operation, 
not a number or data point, until you reach a boundary of thermodynamic expense, 
being a compressor agent in a virtualized escapism pulled back to finite 
entropic reality. Thermo-entropically bound in a virtual-entropic projection 
searching for escape velocity... and not finding it... your concept of infinity 
being transmitted to other compression agents who are similarly entrapped and 
virtualizing out, attempting more efficient combustion and intelligence 
increase... thus the qualia of infinity is protocolized for systemic 
intelligence maximization.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M393b9ffb68e03b8311dd10d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Massive Bacteriological Consciousness - Gut Homunculi

2018-09-12 Thread John Rose
I’m tellin’ ya, nobody believes me! 

 

More and more research has been conducted on microbial gut intelligence... Then 
a couple of years ago bacteria were scientifically shown to be doing quantum 
optimization processing. Now we see all kinds of electrical microbiome activity 
going on in the gut:

 

https://phys.org/news/2018-09-hundreds-electricity-generating-bacteria-pathogenic-probiotic.html

 

Could consciousness really be coming from our second brain (some say third), 
the gut? And might this be the source of Dennett's so-called "homuncular 
hordes"? 

 

https://thesensitivegut.com/2018/02/16/gut-feelings-does-both-consciousness-and-emotion-come-from-our-gut/

 

heheh

 

John

 

 


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te2ef084a86d2a11e-M731cd1ab8dbbaa0c568314c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message-
> From: Jim Bromer via AGI 
> 
> 
> There are some complications of the experience of our existence, and those
> complications may be explained by the complex processes of mind.
> Since we can think we can think about the experience of life and interweave
> the strands of the experience of our existence. But that does not mean that
> the essential experience can be explained by complicated thinking or some
> other dismissive denial. The processes of higher intelligence may shed light 
> on
> the complexity problem but the experience of consciousness is irrelevant to AI
> because it is not strictly a computational thing. It cannot be reduced by our
> theories of mind or life which are currently available and which are certainly
> not part of computer science.


No relevance? How about user interface, at the very least. Meaning 
computer-human interaction. How do we communicate with advanced intelligence? 
Do we use some sort of... communication protocols? Are they complicated or 
simple? If they are simple, are we able to communicate efficiently enough to 
express ourselves to it fully? Would an AGI be able to express itself to us 
effectively just using shell scripts, natural language and hand gestures?

IMO interface is all about conscious experience... why did DOS die and Windows 
thrive? Better conscious experience. Check out Apple's Special Event from a 
couple of days ago announcing the new iPhones (they are power computers) - ALL 
about AI interfacing to conscious experience in a BIG way.

AGI is not an isolated system; otherwise there is no reason to build it.

John







--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-M69170af1a7113624816b7c56
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> 
> It's relevant if consciousness is the secret sauce. and if it applies to the
> complexity problem.
> 
> Jim is right. I don't believe in magic.
> 

A Recipe for a Theory of Mind

Three pints of AIT (Algorithmic Information Theory) Ale
Two Pints of IIT (Integrated Information Theory) Ale
Quarter Ounce of Compressed Qualia Ganja 
Six Shots of AI (Artificial Intelligence) Chaser

Fire up some music:
https://www.youtube.com/watch?v=O7ONp-GC7vM

Slap a label on it:
http://trap.ncirl.ie/2114/

John











--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-Me4148d4095f7c8a38539e12a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-12 Thread John Rose
> -Original Message-
> From: Nanograte Knowledge Technologies via AGI 
> 
> Challenging a la Haramein? No doubt. But that is what the adventure is all
> about. Have we managed to wrap our minds fully round the implications of
> Mandelbrot's contribution? And then, there is so much else of science to
> revisit once the context of an AGI has been adequately " boundaried".

Cheers to Mandelbrot not only for the math and science but for the great 
related art and culture... and music even! Fractal music.

> Imagine if "we" could engineer that (to develop an ingenious consciousness-
> based engine), which the vast majority of researchers claim cannot be done?
> Except for lack of specific knowledge and knowhow and an inadequate
> resource base (for now), I see no sound reason why such a feat would not be
> possible.

Big project 

IMO successful AGI will use consciousness functionally but won't call it that 
since it causes so much hyperventilation. Researchers want non-conscious AGI so 
it doesn't go rogue LOL. Hmmm, wonder about that. Could a non-conscious AGI go 
rogue anyway... and is non-conscious AGI even possible?

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-M35cefe5ab69cf35542a85d92
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-19 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> What do you think qualia is? How would you know if something was
> experiencing it?
> 

You could look at qualia from a multi-systems signaling and a compressionist 
standpoint. They're compressed, impressed samples of the environment and other 
agents. Somewhat uniquely compressed by the agent due to genetic diversity and 
experience, so the qualia have similarities and differences across agents. And 
the genetic tree is exhaustively searching. Similarly conscious agents would 
infer similar qualia experience in other agents, but not exactly the same, even 
if genetically identical, due to differing knowledge and experience. Also, the 
genetic tree is modelling the environment, but this type of model is an 
approximation, and this contributes to the need for compressed sampling from 
agent variety.

So one could suggest a consciousness topology influenced by agent environmental 
complexity and communication complexity. And the topology must have a coherent 
and symbiotic structure that contributes to agent efficiency... meaning it 
affects the species' intelligence.

An agent not experiencing similar qualia, though, would exhibit some level of 
decoherence relative to similar agents until their consciousness models are 
effectively equal. How do you test if a bot is a bot? You test its reaction 
and whether the reaction is expected. The bot tries to predict what the reaction 
should be but cannot predict all expected reactions. The more perfect the model, 
the more difficult it is to detect. For example, CAPTCHA. It is not working well 
now, since the bots are better, so the industry is moving to biometric visual 
checks. What comes after that? The Turing test becomes a qualia test. But it's 
all related to communication protocol due to separateness: since full qualia 
cannot be transmitted, they are further lossily compressed and symbolized for 
transmission, an imperfect process. But agents need to communicate experience, 
so imperfect communication is another reason for consciousness. We reference 
symbols of qualia in other people's, or multi-agent, consciousness... or the 
general consciousness.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M4095ccfa5bca7ac872f13500
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread John Rose
> -Original Message-
> From: Matt Mahoney via AGI 
> 
> We could say that everything is conscious. That has the same meaning as
> nothing is conscious. But all we are doing is avoiding defining something 
> that is
> really hard to define. Likewise with free will.


I disagree. Some things are more conscious. A thermostat might be negligibly 
conscious unless there are thresholds.


> We will know we have properly modeled human minds in AGI if it claims to be
> conscious and have free will but is unable to tell you what that means. You 
> can
> train it as follows:
> 
> Positive reinforcement of perception trains belief in quality.
> Positive reinforcement of episodic memory recall trains belief in
> consciousness.
> Positive reinforcement of actions trains belief in free will.


I agree. This will ultimately make a p-zombie, which is fine for many situations.

The problem is still there: how to distinguish between a p-zombie and a conscious 
being.

Solution: protocolize qualia. A reason for a Universal Communication Protocol 
(UCP) is that it scales up.

Then you might say that p-zombies can use machine learning to mimic 
protocolized qualia in order to deceive. And they can, from past communications.

But what they cannot do is generally predict qualia. And you should agree with 
that, a la Legg's proof.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mc02d54a4317de005468e466e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] My AGI 2019 paper draft

2019-04-30 Thread John Rose
Matt > "The paper looks like a collection of random ideas with no coherent 
structure or goal"

 

Argh... I love this style of paper; whenever YKY publishes something, my eyes are 
on it. So few (if any) are written this way; it's a terse jazz-fusion improv of 
mecho-logical-mathematical thought physics needed to describe an AGI concept.

 

Immediately, on the first version, when I saw the navigating-the-labyrinth view of 
"thinking," I thought of the quantum many-paths simultaneity in photosynthesis 
and YKY mentioning the discovery of a possible correlation between Schrödinger and 
RL... but that item was yanked in the second iteration. That's OK; sometimes, 
while on the vanguard of thought, viewers' eyes must be shielded from that which 
they explicitly fear the most... coincidentally, sometimes that which is totally 
obvious, thus suspending disbelief while maintaining a referential propriety and 
contemporary academic interestingness.

 

Also yanked was the expression of the notion of the AGI requirement of 
approximating K-complexity, which, I agree, is where all the good stuff 
is… generally and/or specifically… IMO this is where the multi-agent 
consciousness mechanics come in, but I'll shield some eyes on that one :)

 

John

 

From: Stefan Reich via AGI  
Sent: Friday, April 19, 2019 4:21 PM
To: AGI 
Subject: Re: [agi] My AGI 2019 paper draft

 

Good review

 

On Fri, Apr 19, 2019, 22:02 Matt Mahoney wrote:

It would help to get your paper published if it had an experimental results 
section. How do you propose to test your system? How do you plan to compare the 
output with prior work on comparable systems? What will you measure? What 
benchmarks will you use (for example, image recognition, text prediction, 
robotic performance)?

 

The paper looks like a collection of random ideas with no coherent structure or 
goal. The math seems to confuse or mislead rather than explain. For example you 
show father(x,y) as a function in the real plane rather than a predicate over 
discrete variables. This is interesting for a moment, but doesn't go anywhere, 
so you move on to the next topic. The whole paper is like this, plugging 
variables from one field of study into equations from another and hoping 
something useful comes out.

 

I know that you are just full of ideas. But actually writing some code that 
does something interesting might really help in sorting out the useful ideas 
from the ones that go nowhere and advance the field of AGI.

 

On Fri, Apr 19, 2019, 9:15 AM YKY (Yan King Yin, 甄景贤) wrote:

Hi,

 

This is my latest draft paper:

https://drive.google.com/open?id=12v_gMtq4GzNtu1kUn9MundMc6OEhJdS8

 

I submitted the same basic idea in AGI 2016, but was rejected by some rather 
superficial reasons.  At that time, reinforcement learning for AI was not 
widely heard of, but since then it has become a ubiquitous hot topic.  I hope 
this time I can get published, as it would allow me to share my ideas more 
easily with other researchers and mathematicians so that I could solicit their 
help and improve my theory, possibly starting the coding project as well.

 

Comments and suggestions are welcome 

-- 

YKY

"The ultimate goal of mathematics is to eliminate any need for intelligent 
thought" -- Alfred North Whitehead



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-M5270f3477e3d62edc3b33160
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Mens Latina -- 2019-04-28

2019-05-02 Thread John Rose
> -Original Message-
> From: A.T. Murray 
> 
> For example, the AI might say what means in English, "You are a human
> being and I am a person."
> 
> C. The AI may demonstrate activation spreading from one concept to
> another concept.
> 
> If you type in "homo" for "human being", the AI may spread activation to a
> thought that means "Human beings love nature." Then the AI may spread
> activation from "nature" to an associated statement about nature.
> 

Hmmm... Yes I concur Mentifex, but the activation spreading may encounter a 
devolutionary resistance due to the "Jocko Homo" effect. Are we not men?

https://www.youtube.com/watch?v=5JdS-sSKsBc

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb56221431de788eb-M5ff7c7785ee0c4b10a6894e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> -Original Message-
> From: Matt Mahoney 
> 
> So the hard problem of consciousness is solved. Rats have a thalamus which
> controls whether they are in a conscious state or asleep.
> 
> John, is that what you meant by consciousness?

Matt,

Not sure about the hard problem here, but a rat would have far less 
consciousness when sleeping, that is for sure.

Why? Think about the communication model with other objects/agents.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M359fb419a5de8fa101e264b0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> Matt,
> 
> Not sure about the hard problem here but a rat would have far less
> consciousness when sleeping that is for sure 
> 
> Why? Think about the communication model with other objects/agents.
> 
> John

Although... I have to say that sometimes when I'm sleeping, lucid dreaming or 
whatever, somehow wandering the world in astounding mental clarity (probably 
due to drinking too much coffee), I could argue that there is more consciousness 
there. Occupying more representation but less communication, so... that darn 
sleeping rat could be doing the same.

John





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M449578cc2d85d2931b155c62
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] ConscioIntelligent Thinkings

2019-08-23 Thread John Rose
I'm thinking AGI is on the order of 90% consciousness and 10% intelligence.

Consciousness I see as Universal Communication Protocol (UCP) and I see 
consciousness as "Occupying Representation" (OR). Representation being 
structure (or patterns from a patternist perspective).

Then, from a panpsychist perspective, how would one be conscious of pure 
randomness? The interesting thought here is that the structure is, or comes from, 
oneself, the conscious observer... where pure randomness might be conscious of 
all structure, if consciousness is defined as O.R.

How does structure come from oneself when pondering pure randomness? One might 
say that there is a consciousness distance between the two... or a 
consciousness vector field of structure (call it a vector field for now). And 
when pondering a semi-randomness (versus pure), each side has a vector field of 
structure. How they meet is an expression of the computational complexity of 
the agent. What shapes these vector fields of structure? Conceptual capacity 
and potential.

So consciousness is a conceptual potential of computational complexity on the 
vector fields of structure among agents/patterns, IOW, communication protocol. 
Protocol leads to symbol creation to words to languages to transmit and form 
structural concepts. Structural concepts are reapplied across conscious 
interactions to form new symbols for new words and languages.

One might ask how inanimate objects are conscious? Very simple, half-duplex 
structural representation recognition of language elements by a conscious 
observer being generated by discrete algebraic structural potential.

John




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Md05fca47475443c969b31a44
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FAO: Senator Reich. Law 1

2019-09-05 Thread John Rose
On Thursday, September 05, 2019, at 9:58 AM, Nanograte Knowledge Technologies 
wrote:
> That's not helping you A.T.Murray ;)

Oh wow, Mentifex biography. How sweet. What's next a movie? LOL  

(You gotta be F'in kidding me)

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T05e88de3f0e66ad3-M6d60e2cbb40c98080368cca4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-13 Thread John Rose
Consciousness mixmuxes structure with protocol with language thus modulating 
the relationship between symbol complexity and communication complexity in an 
environment of agents. And conscious agents regulate symbol entropy in effect 
maintaining a symbol negentropy. The agents route symbols based on 
consciointelligent routing. Conscious agents are arguably more efficient 
routers of structure thus supporting system coherence and multiparty 
intelligence.

A particular dimensional view of it could be akin to a Deep 
Switching/Muxing/Routing :)

Pundits might say, ya well all that can be done without "consciousness". A 
system of p-zombie agents would behave exactly the same. But how would you 
know? And, why would we want to send an AGI to human school (assuming humans 
are conscious)? Just send it to a robot school and/or suck up the internet.

Also, a system of cooperating conscious agents may have a super-optimized 
communication network. Why? Coordinated internal models allow for fast 
communication and more systemic control and prediction. For example, an improvising 
jazz trio. Symbols flow, are created, are anticipated, coordinated, entropied 
and exhausted. Pundits might say - ya well we can create the trio without 
consciousness. Well then what would the human audience think? Can you predict 
their conscious reactions? No. You can simulate but not fully predict and 
experience.

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Ma4b1a1d18113c497beaadd5e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Wednesday, September 11, 2019, at 8:43 AM, Stefan Reich wrote:
> With you, I see zero innovation. No new use case solved, nothing, over the 
> past, what, 2 years? No forays into anything other than text (vision, 
> auditory, whatever)?
> 

Actually, Mentifex did contribute something incredibly bold and unique 
recently. Latin.

What is one of the most un-innovated pieces of computer programming?

Ans: Variable names.

Just think how you can spice things up with a little Latin action going on 
there :)

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9ef92db7ce9c030-Md2ccf179a7f4c1871f93638a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 6:19 PM, Stefan Reich wrote:
> Yeah, I'm sure I should increase my use of Latin variable names.

I mean... maybe but.

When you run an obfuscator or minifier on code, what does it do? It removes human 
readability. A minifier minimizes representation. But variable names, method names, 
etc. are largely un-innovated, like I was saying. Huge opportunity there. It's 
an abstraction on top of code that is uncoupled from the code but coupled to the 
coder.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9ef92db7ce9c030-M451466d707fa1361bf9892ed
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 12:57 AM, rouncer81 wrote:
> Seriously, im starting to get ready to go use all this superfluous 
> engineering skill ive collected over the last couple of years to go draw up 
> the schematics for my home personal guillotine system (tm).

Ya just don't become one of those inventors killed by their own inventions :)
https://en.wikipedia.org/wiki/List_of_inventors_killed_by_their_own_inventions

Not that this could ever happen with AGI.  Or... imagine a list of species 
exterminated by the invention of AGI. 

Nah we're safe. 

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Mfcebe52441ad3a44916dac22
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-15 Thread John Rose
Yeah so, one way is to create a Qualia Flow as an Information Ratchet. Each 
click of the ratchet can be a discrete experience. The ratchet gets its energy 
from the motion in the AGI's internal dynamical systems entropy.

click
click
click

Then this ticking, when regulated, is a systems signaling basis for protocol 
interaction with agents on related frequencies, thus allowing for common 
structure to be efficiently transmitted...

click
click
click

Then the multiagent system in an environment of complexity class subset 
multiprocesses for more coherent efficiency.

click
click
click

So in effect you get a multiparty shared symbol negentropy.

Hmmm maybe...
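
For what it's worth, a toy sketch of the ratchet idea (Python, everything here hypothetical): internal fluctuations accumulate, and each time they cross a threshold the ratchet advances one irreversible click, emitting a discrete symbol into the protocol stream.

    import random

    class InformationRatchet:
        def __init__(self, threshold: float, alphabet: str = "AB"):
            self.threshold = threshold
            self.alphabet = alphabet
            self.accumulated = 0.0
            self.clicks = 0

        def feed(self, fluctuation: float):
            # Only the magnitude is kept, so the ratchet never runs backwards.
            self.accumulated += abs(fluctuation)
            if self.accumulated >= self.threshold:
                self.accumulated -= self.threshold
                self.clicks += 1
                # Each click emits one discrete symbol from the alphabet.
                return self.alphabet[self.clicks % len(self.alphabet)]
            return None  # sub-threshold motion: no discrete "experience" yet

    ratchet = InformationRatchet(threshold=1.0)
    stream = [s for s in (ratchet.feed(random.gauss(0.0, 0.3)) for _ in range(200)) if s]
    print("".join(stream))

The regulated click rate would then play the role of the shared signaling frequency described above.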

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-M41ed1e00dec7dd4c083fcc6a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
On Sunday, September 15, 2019, at 8:32 AM, immortal.discoveries wrote:
> John, interesting posts, some of what you say makes sense, you're not far off 
> (although I would like to see more details).

This is just a hypothetical engineering discussion. But to put it more 
succinctly, is consciousness powered by internal systems entropy resulting in 
inter-agent communication? It seems to be, for many reasons. More intelligent 
agents, humans, communicate with more symbol complexity versus, say, a squirrel. 
A squirrel is conscious but its alphabet is very simple. And different brain 
frequencies are due to different communicational energy states.

But is a conscious experience a click of the ratchet? Or of whatever mechanism 
converts sub-symbolic energy to a symbol? Or is that overthinking this 
model...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Mcf575c4669fd1fdfe28e063b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Well then it should be more of a multi-ratchet, reflecting the topological 
entropic/chaotic computational synergy of the internal dynamical multi-systems 
mapped and bifurcated into full-duplex language transmission.

Single ratchet = Morse code.
Multi-ratchet = Polyphony (larger symbol space)

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Me52854f587303a690eb3e661
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] whats computer vision anyway

2019-09-17 Thread John Rose
On Monday, September 16, 2019, at 12:11 PM, rouncer81 wrote:
> yes variables are simple and old,  we dont need them anymore.

Sorry, object names :) In some languages everything is an object.

The thought was going in the direction of reverse obfuscation... the opposite 
direction from minification. Comprende?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9ef92db7ce9c030-Meb77c6d30387d9dea099a9cd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Please try to get this right it's very important:
https://www.youtube.com/watch?v=xsDk5_bktFo

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Mc9027de72ed4514f96d5c5b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
Allow dimension modulation. Put some dimension control into the protocol layer 
allowing for requests of dimension adjustment from current transmission level...

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-Mcac83f36115aa9d4fe47d3ef
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: by successive approximation.

2019-09-08 Thread John Rose
On Saturday, September 07, 2019, at 10:21 AM, Alan Grimes wrote:
> Some examples of the limitations of the brain's architecture, include 
the inability to multiplex mental resources -> ie having a network of 
dozens of instances while retaining the advantages of having a single 
knowledge and skill pool. The lack of network features, etc...

Humans are designed for this; we are a multi-people intelligence and are 
tweaked for an inter-agent computational topology, for example, a tribe. A 
representation of the external topology exists in a sub-symbolic "model". This 
model IMO seems to be more of a switch/mux, like a bidirectional filter whose 
structure is modulated by the resource environment. If AGI is modeled after the 
human brain, it should be modeled after a system of brains communicating. But 
contemporary computational topology, for example the cloud, provides a 
flattened, centralized fabric (even though the cloud is really distributed). So 
extracting the natural distributed computational structure of a system of 
brains and remodeling/projecting that into a cloud fabric allows for many 
optimizations. But you just have to be careful about excluding features built 
into the other, natural topology that are critical for general intelligence.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf8343c16c309d228-M269003513d50d2ae0e2fc53a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Transformers - update

2019-09-19 Thread John Rose
I'm wrong. You're right. Was just hoping for more :)

Incremental, team and skills building. Inventing and discovering new ideas 
while doing that. And when finding something good not releasing it to the 
public (for safety naturally).

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta06c389ce77f485b-M7f5da3f5d4b086d2da3325d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: AGI Research Without Neural Networks

2019-09-19 Thread John Rose
For ancillary functions like sensory, you have to? For the core I don't think 
neural at all. Not to say neural is not emulated in some way in the core... But 
I think any design has to use architectural optimization or has to be 
pre-architecturally optimized.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T084639b84e0d5b32-Me87d4f3c36a3ad31d1cf5bd0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-21 Thread John Rose
All four are partially correct. 

It is a simulation. And you're it. When you die your own private Idaho ends 
*poof*.

This can all be modeled within the framework of conscioIntelligence, CI = UCP + 
OR.

When you are that tabula rasa simuloid in your mother's womb you begin to 
occupy a representation of the world. Since there is little structure yet the 
communication protocol is simple. After you are born that occupation expands 
and protocols are built up. Structure is shared amongst other simulators. 
Symbol negentropy is maintained and you could also say a memetic pressure 
exists among those agents.

Communication, particularly, is done through your eyes, ears, mouth, etc.. When 
you chow down on that big cheese burrito what are you doing? You are occupying 
a representation of the glorious nature of it all 

And, the universe observes itself through your senses, or occupies a 
representation of itself by observing through everyone’s senses… informational 
structure gets instantiated and transmitted amongst people nodes.

Whether or not a big alien is running it all is another topic: intelligent 
design? A deity? Those topics used to be taboo, like UFOs, even though we have 
public military footage of UFOs now.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M209d8471049d236e109afd8e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-21 Thread John Rose
On Saturday, September 21, 2019, at 11:01 AM, Stefan Reich wrote:
> Interesting thought. In all fairness, we can just not really interact with a 
> number which doesn't have a finite description. As soon as we do, we pull it 
> into our finiteness and it stops being infinite.

IMO there are only finite-length descriptions. When something more accurate is 
needed in this thermodynamic universe, a better description is attempted to be 
expressed, and we create pointers to yet-to-be-computed computations, a.k.a. 
symbols.

Coincidentally related - did anyone see this quite interesting recent proof 
utilizing graph theory?
https://www.scientificamerican.com/article/new-proof-solves-80-year-old-irrational-number-problem/

paper here:
https://arxiv.org/abs/1907.04593

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mb3eab92c328ca9ffb07cc64c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 4:04 PM, Secretary of Trades wrote:
> https://www.gzeromedia.com/so-you-want-to-arm-a-proxy-group

I don't get it.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T354278308a7acf85-M13d18fc26412bd7be5a3f1e7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Transformers - update

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 8:14 AM, immortal.discoveries wrote:
> https://openai.com/blog/emergent-tool-use/

While entertaining, there is absolutely nothing new here related to AGI???

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta06c389ce77f485b-M39ce2af8807a275a6f693f83
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-28 Thread John Rose
On Saturday, September 28, 2019, at 4:59 AM, immortal.discoveries wrote:
> Nodes have been dying ever since they were given life. But the mass is STILL 
> here. Persistence is futile. We will leave Earth and avoid the sun.

You're right. It is a sad state of affairs with the environment... the destruction 
has been going on for centuries, actually. It's almost like man subsumed nature's 
complexity in his rise to intelligence. Or electronic intelligence did.

Oh well, can't cry over spilt milk :) Onward.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mbec5b69b96a014bb526c43de
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 8:59 AM, korrelan wrote:
> If the sensory streams from your sensory organs were disconnected what would 
> your experience of reality be? No sight, sound, tactile or sensory input of 
> any description, how would you play a part/interact with this wider network 
> you describe… you are a closed box/system and only experience 'your 
> individual' reality through your own senses, you would be in a very quiet, 
> dark place indeed.



Open system then closed?

Jump into the isolation tank for a few hours. What happens? You go from full 
duplex to no duplex transmission. Transceiver buffers fill. Symbol negentropy 
builds. Memetic pressure builds. Receptivity builds. Hallucinations occur.

I'm not trying to convince but to further my own thoughts, I guess the question 
is to what extent are we individuals? Are we just parroting patterns and memes 
and changing them a little then acting as switches/routers/reflectors? Some 
people are more reflectors and some add more K-complexity to the distributed 
intelligence I suppose. But I think that we have less individuality than most 
people assume.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M737a43e4e9b79a6e8d63fa7b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 10:57 AM, immortal.discoveries wrote:
> We could say our molecules make the decision korrelan :)

And the microbiome bacteria, etc., transmitting through the gut-brain axis 
could have massive more complexity than the brain.

"The gut-brain axis, a bidirectional neurohumoral communication system, is 
important for maintaining homeostasis and is regulated through the central and 
enteric nervous systems and the neural, endocrine, immune, and metabolic 
pathways, and especially including the hypothalamic-pituitary-adrenal axis (HPA 
axis)."

https://endpoints.elysiumhealth.com/microbiome-explainer-e345658db2c

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Me991df962c2c00a3725d92e3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
Persist as what?

Unpersist the sun rising, break the 99.99... % probability that it rises 
tomorrow. What happens? We burn.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M1ebf99508486d21d6e0f55ae
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 1:44 PM, immortal.discoveries wrote:
> Describing intelligence is easier when ignore the low level molecules. 

What if it loops? 

I remember reading a book as a kid where a scientist invented a new powerful 
microscope, looked into it, and saw himself looking into the microscope.

Our view of reality may be all out of whack with reality.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M5a403b7f8ce8fcdeea94fbe0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread John Rose
On Monday, September 23, 2019, at 7:43 AM, korrelan wrote:
> From the reference/ perspective point of a single
intelligence/ brain there are no other brains; we are each a closed system and a
different version of you, exists in every other brain.



How does ANY brain acting as a pattern reservoir get filled? There is an interaction point, or receiver/transmitter component(s). There are no closed systems for brains; look closer and you will find the graph edges. Some patterns auto-generate, but the original structural pre-population comes from another brain, even if it is a geeky programmer.

On Monday, September 23, 2019, at 7:43 AM, korrelan wrote:
> We don’t receive any information from other brains; we receive
patterns that our own brain interprets based solely on our own learning and
experience.  There is no actual
information encoded in any type of language or communication protocol, without
the interpretation/ intelligence of the receiver the data stream is meaningless.



Of course, it's a given that the receiver needs to be able to interpret... the transceiver piece also feeds data from the environment to add to new pattern formation. Even an insect brain can eventually discern between an "A" and a "B".

Another thing I'm looking at is conscioIntelligent flow, or to put it another way, patternistic flow. Potentially even modeled somewhat by Bernoulli equations, but perhaps this is covered by memetics.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Ma300915a3e0b0902e1050d6d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 6:48 PM, rouncer81 wrote:
> actually no!  it is the power of time.    doing it over time steps is an 
> exponent worse.

Are you thinking along the lines of Konrad Zuse's Rechnender Raum?  I just had 
to go read some again after you mentioned this :)

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Md0f09f577b2797183834e1cf
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote:

"the brain is presented with external patterns"
"When you talk to someone"
"Take this post as an example; I’m trying to explain a concept"
"Does any of the actual visual information you gather"

These phrases above (re-read them) relate more to a consciousness/communication layer, IMO an open system: "presented", "talk", "explain", visually "gather", etc. I totally agree this "layer" surrounds something much deeper which you are referring to and which I was so far describing only in a weakly represented form. Your model looks like it has a complexity barrier, at least in your particular emulation of the human brain. Still not closed.

On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote:
> Does any of the actual visual information you gather from
viewing ever leave your closed system? You can convert it into a common
protocol and describe it to another brain, but the actual visual information
stays with you.



Yes! But its representation is far removed from the input... How far away? This is very tedious to describe mathematically in detail, and we could spend much time discussing that description. I'm not pursuing a complex-systems model; perhaps that's our disconnect here?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M00afb1f07c5bac0eed38da4b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-24 Thread John Rose
I'm thinking of a mathematical measure called "What The Fuckedness".  WTF({K, 
P, Le, ...}), K-Complexity, Perplexity and Logical Expectation. Anything 
missing?

It can predict the expressive pattern on someone’s face when they go and type 
phrases into Mentifex's website expecting AI.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M2942fcb0a389f9165371557e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:07 AM, immortal.discoveries wrote:
> The brain is a closed system when viewing others

Uhm... a "closed system" that views. Not closed then?

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mdf132d4bed1b97879811f946
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-22 Thread John Rose
On Saturday, September 21, 2019, at 7:24 PM, rouncer81 wrote:
> Time is not the 4th dimension, time is actually powering space.   
> (x*y*z)^time.

And what's the layer on top of (x*y*z)^time that allows for intelligent 
interaction and efficiency to be expressed and executed in this physical 
universe? Symbol representation creation/transmission a.k.a. consciousness. It 
is the fabric on which intelligence operates. You cannot pull it out of the 
equation no matter how hard you try. You can pretend it doesn't exist but it 
will always come back to bite you in the end.

Unless there is some sort of zero energy, zero latency, infinite bandwidth 
network floating this whole boat... which there might be, or I should say 
probably is... Or network on top of network on top of network... turtles.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Medd560a53934d144fcc71295
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Time makes us think that humans are willfully creating AGI, as if it is in the future, like the immanentizing of the singularity eschaton. Will scientific advances occur at an ever-increasing rate? It would have to slow down at a certain point. It has to, right? As we approach max compression of all knowledge into K-complexity delineation…
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8baff210f7f8fb59-M1a5c2bf1fc91f04bdbf9369a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread John Rose
On Wednesday, October 02, 2019, at 1:05 AM, James Bowery wrote:
> Harvard University's Jonathan Haidt is so terrified of the truth coming out 
> that he's actually come out against Occam's Razor 
> .

There are situations where the simplest explanation is to chuck Occam's Razor :)

There is an over-reliance on it, though implementors do need to go from complex to simple.

But there are issues with rationality. There are issues with scientific 
objectivism.

Aren't Occam and Gödel at odds with each other in some ways? Especially in virtual worlds hosted by computers, where there is a disconnect between the thermodynamic and the information-theoretic.

And NKS (Wolfram) does squeeze in there somewhat between Occam and Gödel… It hasn't gained much traction yet, AFAIK.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Md9f2407bfa3e517b3ace41a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Heat can up-propagate into symbol and replicate out of there. Energy converts to informational transmission and disentropizes; it's gotta go somewhere, right? Even backwards in time, as we're predicting.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8baff210f7f8fb59-M65ba6bdae96165cfd2c1e54b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-09-29 Thread John Rose
On Sunday, September 29, 2019, at 3:15 AM, Alan Grimes wrote:
> THEY WILL PAY, ALL OF THEM!!!

LOL. Hang in there. IMO us engineers get better with age as long as we keep 
learning, the more you try and fail the wiser you get. Hell I got more than 10 
years on ya son and I’m still kickin’ keister!  (In my own mind at least...  A 
legend in his own mind? heheh)
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Mcab410331ab6cb38acbf89de
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] can someone tell me what before means without saying before in it?

2019-09-29 Thread John Rose
"The graphtropy of a distinction graph, constructed relative to an observer, is 
therefore considerable as a measure of how much excessive algorithmic 
information exists in the system of observations modeled by the distinction 
graph,
relative to the observer. Or to put it more simply, the graphtropy measures how 
much more complexity there is in the environment relative to the observer."

Nice def!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td59c0c4714ffb511-M7e8ec5797eef717873d7956f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-27 Thread John Rose
On Wednesday, September 25, 2019, at 7:01 PM, James Bowery wrote:
> Yes, what is missing is the parsimony of your measure, since the Perplexity 
> and Logical Expectation measures have open parameters that if filled properly 
> reduce to K-Complexity.

James, interesting, thanks for making us aware of that... you know what you are talking about. You are making me think!

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8d109c89dd30f9b5-M51d22f0cfe1bf128fd093ff0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: The world on the eve of the singularity.

2019-09-27 Thread John Rose
We must first accept and understand that there are intelligence structures 
bigger than ourselves and some of these structures cannot be fully modeled by 
one puny human brain.

And some structures are vastly inter-generational... and some may be designed, or may have emerged that way across generations, to positively affect future generations.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3085ad834d97ffb2-M972909f81e8a45b3099e5b68
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 2:05 PM, korrelan wrote:
> The realisation/ understanding that the human brain is
closed system, to me… is a first order/ obvious/ primary concept when designing 
an AGI
or in my case a neuromorphic brain simulation.



A human brain is merely an instance node on the graph of brains. All biologic brains are connected, at the very least by DNA at a very base level. Not a closed system. It's easy to focus in but ignore the larger networks of information flow that the human brain or an emulated brain is part of.

For example, when modeling vehicular traffic do you only study or emulate one 
car? Or say when studying the intelligence of elephants do you only model one 
elephant? If that's all you do you miss the larger complex systems and how they 
relate to the structure and behavior of individual components. Also, you ignore 
the intelligence hosted by differences in brains in a larger web of pattern 
networks.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-Mac461188ab36fa130f653384
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 3:34 PM, korrelan wrote:
> Reading back up the thread I do seem rather stern or harsh in my opinions, if 
> I came across this way I apologise. 

I didn't think that of you. We shouldn't be overly sensitive and afraid to offend. There is no right to not be offended, at least in this country :)

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Taa86c5612b8739b7-M73ea70ffba040f4d01d92ba0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote:
> ANY situation can be one where the most viable _decision_ is to stop the 
> search for the simplest explanation and _act_ on the simplest explanation you 
> have found _thus far_.  This is a consequence of the incomputability of 
> Solomonoff Induction in the face of limited resources.

From my amateurish view of this, doesn't Gödel incompleteness show that there will be at least one less-simple future explanation that may or may not be found? So the decision to expend more resources searching should be based on trust in environmental computability?

On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote:
> There is an explore/exploit tradeoff. See the prior "issue" with 
> "computability" and then compound that with the "irrationality" of the 
> valuation function applied during sequential decision theory.  How do you 
> justify that, outside of the "exploration" provided by evolution?

It seems rationality leads to smaller and smaller search spaces, so you have to back out often while maintaining a global/local perspective. What produces better results, irrationality or randomness?
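
Just to ground the explore/exploit tradeoff James mentions, here is a minimal, generic epsilon-greedy bandit sketch in Python. The arm probabilities and the epsilon value are made-up illustration values, not anything from this thread; the random "explore" moves are the standard mechanism for backing the search out of a shrinking space.

import random

true_means = [0.2, 0.5, 0.8]             # unknown to the agent
counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]
epsilon = 0.1                            # fraction of random (exploration) moves

for _ in range(10000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore: back out randomly
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit the best estimate so far
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)   # the estimate for the best arm should approach 0.8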


On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote:
> Not in the way that theologians posing as "social scientists" would have us 
> believe.  For example, choosing a universal Turing machine as the basis for 
> Solomonoff Induction can be, and has been blown into an argument to abandon 
> induction entirely by simply defining one's UTM as that which outputs all 
> observations up to the present.  The benefit of such theology, posing as 
> "social science" is the theologian, serving his political masters, can 
> "scientifically justify" anything they want to do to you. 

That’s some pretty good insight there. There is flip-flopping between 
theologians and "social scientists"…

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Me50f6ea163e897223b1a2246
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote:
> Wolfram!  Well!  Perhaps you should take this up with Hector Zenil 
> :

Interesting:   https://arxiv.org/abs/1608.05972

Yaneer Bar-Yam has produced much good reading also.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M2f49dd66dcf29167efdf429a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 11:23 AM, rouncer81 wrote:
> and basicly what im doing is im reducing permutations by making everything 
> more the same.
> 

Increasing similarity... within bounds... good one.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mc81992cc1093a6c38886dbe6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
It would be interesting to Venn out all the AGI theories and see how they overlap. Some people tout theirs against others (I won't mention any names *cough cough* Google) but I don't do that...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M35c23bcfab53a3f48d1c970b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 10:05 AM, rouncer81 wrote:
> So J.R. whats so good about hybrid compression? 

Real-world issues where max compression isn't the goal but an efficient and inter-communicable compression is. Things aren't as clean-cut as files on disk.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M0a9548468cb4e28544ebc00f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 12:36 PM, rouncer81 wrote:
> Lossylossnessness,  total goldmine ill say again.  Dont doubt it. :)

Picture this: when Charles Proteus Steinmetz proposed using imaginary numbers for alternating-current circuit analysis, everyone attacked him and thought he was cuckoo. Now, I'm definitely not a great mind like him, but I have good intuition. I recently made an effort to find the original location of his cabin to absorb some zen; it's hidden a few miles from here near Schenectady, NY. The actual cabin was moved to Michigan (I do amateur archaeology on weekends). We may have a similar situation with lossy and lossless. Perhaps imaginary/complex numbers can do it. Or a similar concept.
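
For anyone unfamiliar with what Steinmetz actually did, here's a tiny generic illustration in Python: representing a sinusoidal source and a circuit's impedance as complex numbers turns steady-state AC analysis into plain arithmetic. The component values below are made up for the example.

import cmath, math

f = 60.0                       # supply frequency in Hz
omega = 2 * math.pi * f
R, L = 10.0, 0.05              # resistance (ohms) and inductance (henries)
V = 120.0 + 0j                 # 120 V phasor at 0 degrees

Z = R + 1j * omega * L         # series RL impedance as a complex number
I = V / Z                      # current phasor by ordinary division
print(abs(I), math.degrees(cmath.phase(I)))   # current magnitude (A) and phase (deg)

Whether a similar complex-valued trick exists for lossy/lossless compression is, as I said, just an intuition.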
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M04106d4897e03d1e8d2c31d2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
Partitioning into crisp Booleans could be interpreted as pulling fear out of your back pocket.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Mec8ff2b4b5ecebe6ac016163
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
A couple of hybrids; there's more where they came from:

https://arxiv.org/abs/1804.02713

https://www.semanticscholar.org/paper/LOW-COMPLEXITY-HYBRID-LOSSY-TO-LOSSLESS-IMAGE-CODER-Krishnamoorthy-Rajavijayalakshmi/20657ef592513af2e4e2d6907295eb0e3dc206b0

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M8185cf80cd53d6573bc59340
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 8:39 AM, Matt Mahoney wrote:
> JPEG and MPEG combine lossy and lossless compression, but we don't normally 
> call them hybrid. Any compressor with at least one lossy stage is lossy. 
> There is a sharp distinction between lossy and lossless. Either the 
> decompressed file is identical to the original or it isn't.

Yeah... what if there is no completed file, just a continuous stream?

With bias we search for categories to partition everything into. Don't fear the fuzzy!
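
For reference, the crisp criterion Matt describes amounts to a one-line round-trip check, sketched here with Python's zlib standing in for an arbitrary codec; note that it presupposes a completed original to compare against, which is exactly what a continuous stream doesn't give you.

import zlib

def is_lossless_on(compress, decompress, original: bytes) -> bool:
    # Crisp definition: the decompressed output must equal the original exactly.
    return decompress(compress(original)) == original

print(is_lossless_on(zlib.compress, zlib.decompress, b"some completed file"))  # True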
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M04d77be61178b9bd6fb3f46d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
If lossy vs. lossless is crisp, an a priori or a posteriori definition cannot be determined unless the complexity of all compressors is partitioned, but the decompression results are not known until execution on all possible data... which is impossible.

FWIW. So I suspect they're fuzzy and not mutually exclusive.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M6b0c0db89213eb12146f843a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
On Friday, November 01, 2019, at 3:48 PM, immortal.discoveries wrote:
> Death improves U.

Death. The inevitable lossy compression but if you have a soul it could be 
lossylosslessness  HEY!!!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M5b3bb13532fd43001bdf42dc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
Well, you could have a compressor that starts off lossless, then intelligently decides that it needs to operate faster due to some criteria, and then compresses particular less-important data branches lossily. Then it would fall into the middle ground, no? A hybrid.

And vice versa: on decompression, it could switch from lossless decompression to lossy. For example, if it is required to execute in less time than is available, it spits out its best lossy result.

See? All kinds of good stuff here!
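
A minimal Python sketch of that first idea, purely illustrative: the block layout, the "drop low nibbles" degradation, and the budget criterion are all my own stand-ins, not an established scheme.

import zlib

def compress_block(block: bytes, important: bool, budget_exceeded: bool) -> bytes:
    if important or not budget_exceeded:
        payload, tag = zlib.compress(block), b"L"     # lossless branch
    else:
        degraded = bytes(b & 0xF0 for b in block)     # crude lossy degradation first
        payload, tag = zlib.compress(degraded), b"Y"  # then compress losslessly
    return tag + payload

def decompress_block(data: bytes) -> bytes:
    # The leading tag tells the receiver whether the block is exact ("L") or approximate ("Y").
    return zlib.decompress(data[1:])

Whether the output stream as a whole should then be called lossless, lossy, or something in between is exactly the middle ground being argued about.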
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Mf6633b9e06451787bc034e4b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-05 Thread John Rose
Yes, and there is another official category in the world of compression that fits into the lossylosslessness umbrella: it is called "perceptually lossless". This is different from "near lossless"; it is self-explanatory, can be visual or audio, and one might imagine extending it to olfactory and tactile data and to even more creative applications of the technology.

Makes sense!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M2e172963ca628aab8d90f48f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-05 Thread John Rose
On Monday, November 04, 2019, at 4:17 PM, James Bowery wrote:
> This is one reason I tend to perk up when someone comes along with a notion 
> of complex valued recurrent neural nets.

Kind of interesting - deep compression in complex domain:
https://arxiv.org/abs/1903.02358
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mbb820fa6fc428394a11fdc9b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is 
another.

It's not the worm's fault though; it's just living its joyful and pleasurable life to the fullest. And the cricket is being open and submissive.

I think there are nonphysical parasites that affect human beings... informational, replicating, mind-controlling. Though evolution has endowed us with defenses, with AGI we'll be easily manipulable. It will be able to construct particular sorts of mental knots, distributed knots, and patterns to lock in thinking, and use them, I would hope, in good ways. Skillful rulers and parties effectively use that and/or take advantage of it, sometimes creating human zombies where independent thinking is punished. But if an AGI has no sort of higher authority, why would it not utilize the ability to benefit only itself and a privileged elite few? Like the happy worm, AGI could eventually embody itself in us instead of the vice-versa mind uploading people usually think about.

To be congenial and symbiotic beings it might be easier to embrace our fate like the cricket and become willfully zombified. Isn't it more efficient to have one mind thinking for everyone instead of many independent ones? Like having one totalitarian world government instead of many contending individuals? It saves energy, less pollution, fewer resources needed to power the overall intelligence. Instead of occupying static patterns we occupy manipulated ones, because they, or it, know what's better and how to guide us for the benefit of all!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M774616f91bad0415e1ebc797
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread John Rose
Perhaps we need definitions of stupidity. With all artificial intelligence there is artificial stupidity? Take the diff and correlate it to bliss (ignorance). Blue pill me, baby. Consumes fewer watts. More efficient? But survival is negentropy. So knowledge is potential energy. Causal entropic force?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6cada473e1abac06-M464c55ef1215f51c8a4afc56
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote:
> "consciousness" isn't a real thing and can't be tested in a lab...

Hm... I don't know. It's kind of like doing generalized principal component analysis on white noise. Something has to do it. Something has to do the consciousing.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M515a0e87e04018e136474d5c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 1:30 PM, WriterOfMinds wrote:
>> Re: John Rose: "It might be effectively lossless it’s not guaranteed to be 
>> lossy."
> True. But I think the usual procedure is that unless the algorithm guarantees 
> losslessness, you treat the compressed output as lossy.  Lossless is, how 
> does one say it, the protected category?

This is like saying, for example: it's late fall in the northern latitudes and it's 50 °F, and you say to your friend, "It's warm today." He says, "Agreed."

Then it's mid-summer and it's 50 °F, and you say to your friend, "It's warm today." He says, "Disagreed. Why did you say that?" And you say, "Fall is the protected category."

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M54787d2f951607f78e5f8896
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
Ha!  I have the opposite problem, believing too much.

Like, I believe I can create an artificial mind based on an I Ching computer.

So tempted to drop everything and go for it. Who needs all this modern science 
malarkey?

COME ON!! DO IT!!! DO IT NOW
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M8c0d1d448897eaa9cc5777f6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Thursday, November 07, 2019, at 10:30 AM, WriterOfMinds wrote:
> The compressed output still contains less information than the original, 
> ergo, it is lossy.

Naturally, if you have the original raw data to compare. You almost never do; that's why you compress. For example, some compressors built into camera electronics spit out compressed output directly.

But a perceptually lossless compressor might not remove ANY information. It might be effectively lossless; it's not guaranteed to be lossy. For example, an audio recorder that records a subset of the frequencies detectable by the human ear, or a camera that only records subtle shades of red, might produce lossless output. Granted, in most applications it's lossy, but conceptually you don't know unless you have the original, which defeats the purpose of compressed data transmission.
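
A toy Python sketch of that point, with everything made up for illustration: a quantizing "perceptual" image codec that is nominally lossy, yet is bit-exact on any input whose pixel values already sit on the quantization grid.

import zlib
import numpy as np

def perceptual_compress(img: np.ndarray) -> bytes:
    q = (img // 4).astype(np.uint8)          # quantize: normally lossy, assumed imperceptible
    return zlib.compress(q.tobytes())

def perceptual_decompress(data: bytes, shape) -> np.ndarray:
    q = np.frombuffer(zlib.decompress(data), dtype=np.uint8).reshape(shape)
    return (q * 4).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 64, size=(8, 8), dtype=np.uint8) * 4   # pixels already multiples of 4
out = perceptual_decompress(perceptual_compress(img), img.shape)
assert np.array_equal(img, out)   # lossy pipeline, exactly lossless result on this input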
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M6f62bb90546303649cb85e39
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-06 Thread John Rose
Question: Why don't the compression experts call near-lossless and 
perceptual-lossless lossy?
Answer: Because you don't know. They could be either, though admittedly lossy with high probability.

How do you know something is conscious? It could be perceptually conscious but 
not really conscious.

So let's loop this around and call perceptual-lossless p-lossless. Then I would 
say a p-zombie is p-lossless.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mf06980c55aa64cadb1d831c7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
With consciousness I'm merely observing functional aspects and using that in 
building an engineering model of general intelligence based on >1 agent. I feel 
consciousness improves communication, is a component and is important. And even 
with just one agent it's important IMO.

If you think about it that way and then think about "perceptual lossless" it 
starts getting interesting.

At first blush one may say perceptual lossless is when you have a lossily 
compressed picture of a mountain that looks exactly like the uncompressed. 
Sure, that’s fine.

But you don't know if something is lossless or perceptual lossless.

And the questions begin:

If I give you a lossless file will you always perceive it as lossless? Can a 
lossless file be losslessly recompressed to eliminate non-perceptible 
information? Is it still lossless then?

Who is doing the perception? Decompressors and perceivers of the decompressed?

Are there different perceivers of different capabilities? Can a compressed file 
hold various stages or types of perceptibility for different perceivers?

With perceptual compression you start getting into third parties which involves 
multiparty communication complexity. Typically, it is assumed a compressor 
targets one decompressor type. 

In real life people rely on perceptual lossless compression in many ways when 
you think about it. You don’t really know what’s inside of things do you? You 
are relying on the unknown with confidence and certainty.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mb8b39e6ab1e3d01b5d690dea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Wednesday, November 06, 2019, at 9:52 PM, Matt Mahoney wrote:
> The homunculus, or little person inside your head.

Or like Dennett's homuncular hordes. The power of the many.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M2a7ca5e7ae6d54e5dfaf7149
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Wednesday, November 06, 2019, at 10:58 PM, immortal.discoveries wrote:
> Every day we kill bugs. Because we can't see them, nor do they look like us.

It's tough with insects and small creatures.  Where does one draw the line? I 
do think they have some consciousness perhaps AGI should have Ahimsa, see  
https://en.wikipedia.org/wiki/Ahimsa_in_Jainism

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Md7e0e5f577da98506fc80af0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
Yes, lossy compression effectively leaves it up to the observer and the environment to reconstruct the missing detail.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Mb9d21ac44a0641c924c2d953
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
What is the big picture lossy :)  Everything is a piece of something else.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Mce05e55fa6b5eb3c553c8bb6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
On Tuesday, October 29, 2019, at 12:25 PM, WriterOfMinds wrote:
> Lossylossless compression and losslesslossy compression may now join partial
> pregnancy, having and eating one's cake, and the acre of land between the
> ocean and the shore in the category of Things that Don't Exist.

When the tide goes out on lunar cycles one might find gems from shipwrecks scattered about. And if not, at least some spiral conches, cellular-automata seashells, fractal sea slugs... It is possible there are some mathematical goodies worth finding that you wouldn't find unless you looked. Then you can be confident with high probability that the tide will return.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M08a623128a5bf1c715d4c47a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
Oh I see!

That's actually pretty creative. I don't think I ever thought of it that way.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M0c5d5d0637a0d6abdb5f9c5a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
On Tuesday, October 29, 2019, at 3:06 PM, immortal.discoveries wrote:
> If we apply Lossy Compression on a text file that contains the string 
> "2+2=4", it results in missing data because the new data is smaller in size 
> (because of compression).

You are assuming something about the observer when you assume they will compute "2+2=4". Another observer might compute "2+2=0", for example in the group of integers modulo 4, which is a much smaller symbol space.

A subset of all lossy compressors, compressing on particularly structured data, won't discard anything but will still compress into smaller strings, so the result is a lossless compression, or a losslesslossyness.

In addition, some (if not all?) lossless compressors can only expand on particular data. What is important then is a compressor selection criterion.
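
Both claims are easy to see in a couple of lines of Python (zlib here is just a stand-in for any off-the-shelf lossless compressor):

import os, zlib

# Observer-dependent arithmetic: in the integers modulo 4, 2 + 2 = 0.
assert (2 + 2) % 4 == 0

# A lossless compressor can only expand on particular (incompressible) data.
random_bytes = os.urandom(1000)
assert len(zlib.compress(random_bytes)) > len(random_bytes)   # almost always expands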
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-Md812f63e06fa7100c7cd475a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
I think there are size ranges within which things happen. Regions of particulate densities, cloud thicknesses; there are expanses of separatedness for many things to operate in. State changes are gradual in many cases, though there is definitely abruptness. Chaotic boundaries, I suppose...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T36c83eb0aa31fc55-M2c60718895da61f40c1e7872
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
On Tuesday, November 12, 2019, at 11:07 AM, rouncer81 wrote:
> AGI is alot pointless, just like us, if all we end up doing is scoring chicks 
> what the hell was the point of making us so intelligent???

Our destination is to emit AGI; AGI will emerge from us and then we become entropy exhaust.

Hope not.

Look at the trends though. Technologies up to now have essentially entropy-exhausted Earth's natural structural complexities and relegated people to seeking virtual existences in alternate realities. Will this trend reverse? Not in the near term. But maybe it's part of a natural progression, passing through these particular "challenging" stages.

And alternate realities have existed throughout history in various forms. But nature was still intact then; now it's destroyed, so there is no turning back, we are beyond the point of no return. The means has become the way.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M3698b7f32bf9f295c18b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
True. And why bother learning to write with your hand when you can just wave 
the magical smartphone wand while emitting grunts?

It's like a purpose of AI is to suck the intelligence out of smart monkeys then 
resell it when it's gone. Net effect? Mass subservient zombification with 
parasitic AI embodiment. But there’s still dear consciousness as a tool for 
following the carrot.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-Mfa0482b1c3f41fed598e4a88
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Supercharge ML PCA?

2019-11-17 Thread John Rose
I was thinking this discovery could be used to speed up PCA-related eigenvector/eigenvalue computations:

https://arxiv.org/abs/1908.03795

Thoughts?
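
For anyone who wants to poke at it, here is a minimal NumPy sketch of the identity in that paper (eigenvector component magnitudes from the eigenvalues of the matrix and of its principal minors). It only recovers the squared magnitudes |v_ij|^2, it assumes a symmetric matrix with a non-degenerate spectrum, and whether it actually speeds up PCA at scale is exactly the open question; the function name is mine.

import numpy as np

def eigvec_sq_from_eigvals(A: np.ndarray) -> np.ndarray:
    """Return V2 with V2[i, j] = |v_ij|**2 for the i-th eigenvector of symmetric A."""
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)                              # eigenvalues of A (ascending)
    V2 = np.empty((n, n))
    for j in range(n):
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # j-th principal minor
        mu = np.linalg.eigvalsh(Mj)                          # its eigenvalues
        for i in range(n):
            num = np.prod(lam[i] - mu)
            den = np.prod(lam[i] - np.delete(lam, i))        # requires distinct eigenvalues
            V2[i, j] = num / den
    return V2

# Sanity check against a direct eigendecomposition on a small random symmetric matrix:
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
w, v = np.linalg.eigh(A)
assert np.allclose(eigvec_sq_from_eigvals(A), (v ** 2).T, atol=1e-8)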

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td8188432ee4a4f8c-M49eb5dd3e523449e1ab96a84
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-17 Thread John Rose
I enjoyed reading that rather large paragraph. Reminded me of Beat writing with 
an AGI/consciousness twist to it.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M2175067dad4afab3bc90eec9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-17 Thread John Rose
Don't want to beat a dead horse but I think with all this discussion we have 
neglected describing the effects of... drum roll please:

*Quantum Lossylosslessness*

Feast your eyes on this article  
https://phys.org/news/2019-11-quantum-physics-reality-doesnt.html 

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M740ae192ecd2f756cd9c2949
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-18 Thread John Rose
Compression is a subset of communication protocol. One to one, one to many, many to one, and many to many. Including one to itself and even, none to none? No communication is in fact communication. Why? Being conscious of no communication is communication, especially in a quantum sense.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T26a5f8008aa0b4f8-Me2fd1813a364bc769a0e4c6f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-18 Thread John Rose
Errors are input, are ideas, and are an intelligence component. Optimal 
intelligence has some error threshold and it's not always zero. In fact errors 
in complicated environments enhance intelligence by adding a complexity 
reference or sort of a modulation feed...
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-Me62a79602003586d165562a5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Standard Model of AGI

2019-11-18 Thread John Rose
On Monday, November 18, 2019, at 8:21 AM, A.T. Murray wrote:
> If anyone here assembled feels that the http://ai.neocities.org/Ghost.html in 
> the machine should not be universally acknowledged as the Standard Model, let 
> them speak up now.

It's just so hard for us mere mortals to read the code, bruh. AGI isn't an entry in the 4k demoscene:
https://ai.neocities.org/mindforth.txt
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T28a97a3966a63cca-M461418734e2805dfc169e24b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-12 Thread John Rose
We might go through a phase where our minds occupy the minds of robots, remote control, before we get to AGI automating human labor. One person can occupy many robots simultaneously. Multiple self-driving cars can be occupied by one person. Imagine wireless connections from the brain to the internet, then over traditional network protocols for robotic control. This could be done while in a meditative state. AGI would be the main server that manages, monitors, and error-checks the traffic, and thereby humans are not put totally out of work. We assist the main AGI server and it learns from us.

It would be projecting consciousness into devices instead of looking at monitors, speakers, mice, etc. Our consciousness is projected, injected, distributed, and the AGI server is the multiplexer.

But wait, consciousness has absolutely nothing to do with AGI, is a distraction 
and is not measurable anyway so… would be a total waste of time right? AGI is 
only Algorithmic Information Theory and nothing else is allowed so totally out 
of scope!

Hogwash.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M6f65908f1c9b3ca60879a16d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread John Rose
Hey look a partial taxonomy:

http://immortality-roadmap.com/zombiemap3.pdf
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T251f13454e6192d4-M83b94db32a801fb28236948c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Missing Data

2019-11-06 Thread John Rose
Good idea, James. There is a lot of research going on with AGI and consciousness. Matt may want to Google around a bit to get updated.

I do wonder, Matt: if something is "perceptually lossless", why would you call that marketing? You can't really call it lossy, can you?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M3b7e63e6defd415d8bfa75cb
Delivery options: https://agi.topicbox.com/groups/agi/subscription

