RE: [agi] Blockchainifying Conscious Awareness

2018-06-19 Thread John Rose
Rob, This is a very insightful and knowledgeable reply and most of your coverage is spot-on. But… Think back to when “databases” were first being pursued and becoming popular. I don’t know, say 1990-ish? What was a database then? And think of databases now, their realm of function, for

RE: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread John Rose
Ehm, "chunking out code"...that's ah, yeah good way to describe it  I agree. Arthur, you need to elevate yourself man. The Elon Musk's of the world are stealing all the thunder. John > -Original Message- > From: Mike Archbold via AGI > > At least A.T. Murray is in the trenches

[agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread John Rose
How I'm thinking lately (might be totally wrong, totally obvious, and/or totally annoying to some but it’s interesting): Consciousness Oriented Intelligence (COI) Consciousness is Universal Communications Protocol (UCP) Intelligence is consciousness manifestation AI is a computational

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread John Rose
> -Original Message- > From: Russ Hurlbut via AGI > > 1. Where do you lean regarding the measure of intelligence? - more towards > that of Hutter (the ability to predict the future) or towards > Wissner-Gross/Freer > (causal entropy - sort of a proxy for future opportunities; ref >
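
One way to make Hutter's "ability to predict the future" concrete (a minimal sketch of my own, not Hutter's formal AIXI measure): score a predictor by cumulative log-loss, which is also the number of bits an arithmetic coder built on that predictor would spend compressing the sequence. Prediction and compression are the same currency.

    import math

    def log_loss_bits(predict, seq):
        # Cumulative surprisal of a predictor over a sequence: better
        # prediction of the future <=> fewer bits <=> better compression.
        bits = 0.0
        for i, sym in enumerate(seq):
            p = predict(seq[:i]).get(sym, 1e-9)  # P(next symbol | history)
            bits += -math.log2(p)
        return bits

    uniform = lambda history: {"0": 0.5, "1": 0.5}
    print(log_loss_bits(uniform, "0101010101"))  # 10.0 bits: no structure learned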

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > "Randomness" is merely computational distance from agent perspective." > > That is really interesting but why the fixation on the particular > fictionalization? Randomness is computational distance from the agent > perspective? No it

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > On Thu, Oct 11, 2018 at 12:38 PM John Rose > wrote: > > OK, what then is between a compression agent's perspective (or any agent > for that matter) and randomness? Including shades of randomness to > r

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-09 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > Operating on compressed data without having to decompress it is the goal that > I am thinking of so being able to access internal relations would be > important. > There can be some compressed data that does not contain explicit

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > John, > Can you map something like multipartite entanglement to something more > viable in contemporary computer programming? I mean something simple > enough that even I (and some of the other guys in this group) could > understand? Or

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message- > From: Nanograte Knowledge Technologies via AGI > > John. considering eternity, what you described is but a finite event. I dare > say, > not only consciousness, but cosmisity. > Until one comes to terms with their true insignificance will they not grasp their

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > And if the concept of randomness is called into question then > how do you think entropic extrema are going to hold up? > "Entropic extrema" as in computational resource expense barrier, including chaotic boundaries, too expensive

[agi] Massive Bacteriological Consciousness - Gut Homunculi

2018-09-12 Thread John Rose
I’m tellin’ ya, nobody believes me! More and more research has been conducted on microbial gut intelligence... Then a couple years ago bacteria were scientifically shown to be doing quantum optimization processing. Now we see all kinds of electrical microbiome activity going on in the gut:

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > > There are some complications of the experience of our existence, and those > complications may be explained by the complex processes of mind. > Since we can think, we can think about the experience of life and interweave > the strands

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > > It's relevant if consciousness is the secret sauce. and if it applies to the > complexity problem. > > Jim is right. I don't believe in magic. > A Recipe for a Theory of Mind Three pints of AIT (Algorithmic Information Theory) Ale

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-12 Thread John Rose
> -Original Message- > From: Nanograte Knowledge Technologies via AGI > > Challenging a la Haramein? No doubt. But that is what the adventure is all > about. Have we managed to wrap our minds fully round the implications of > Mandelbrot's contribution? And then, there is so much else of

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-19 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > What do you think qualia is? How would you know if something was > experiencing it? > You could look at qualia from a multi-systems signaling and a compressionist standpoint. They're compressed impressed samples of the environment

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > We could say that everything is conscious. That has the same meaning as > nothing is conscious. But all we are doing is avoiding defining something > that is > really hard to define. Likewise with free will. I disagree. Some things

RE: [agi] My AGI 2019 paper draft

2019-04-30 Thread John Rose
Matt > "The paper looks like a collection of random ideas with no coherent structure or goal" Argh... I love this style of paper whenever YKY publishes something my eyes are on it. So few (if any) are written this way, it's a terse jazz fusion improv of mecho-logical-mathematical

RE: [agi] Mens Latina -- 2019-04-28

2019-05-02 Thread John Rose
> -Original Message- > From: A.T. Murray > > For example, the AI might say what means in English, "You are a human > being and I am a person." > > C. The AI may demonstrate activation spreading from one concept to > another concept. > > If you type in "homo" for "human being", the AI

RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> -Original Message- > From: Matt Mahoney > > So the hard problem of consciousness is solved. Rats have a thalamus which > controls whether they are in a conscious state or asleep. > > John, is that what you meant by consciousness? Matt, Not sure about the hard problem here but a rat

RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> Matt, > > Not sure about the hard problem here but a rat would have far less > consciousness when sleeping, that is for sure > > Why? Think about the communication model with other objects/agents. > > John Although... I have to say that sometimes when I'm sleeping, lucid dreaming or

[agi] ConscioIntelligent Thinkings

2019-08-23 Thread John Rose
I'm thinking AGI is on the order of 90% consciousness and 10% intelligence. Consciousness I see as Universal Communication Protocol (UCP) and I see consciousness as "Occupying Representation" (OR). Representation being structure (or patterns from a patternist perspective). Then, from

Re: [agi] FAO: Senator Reich. Law 1

2019-09-05 Thread John Rose
On Thursday, September 05, 2019, at 9:58 AM, Nanograte Knowledge Technologies wrote: > That's not helping you A.T.Murray ;) Oh wow, Mentifex biography. How sweet. What's next, a movie? LOL (You gotta be F'in kidding me) John

[agi] ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-13 Thread John Rose
Consciousness mixmuxes structure with protocol with language thus modulating the relationship between symbol complexity and communication complexity in an environment of agents. And conscious agents regulate symbol entropy in effect maintaining a symbol negentropy. The agents route symbols

Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Wednesday, September 11, 2019, at 8:43 AM, Stefan Reich wrote: > With you, I see zero innovation. No new use case solved, nothing, over the > past, what, 2 years? No forays into anything other than text (vision, > auditory, whatever)? > Actually, Mentifex did contribute something incredibly

Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 6:19 PM, Stefan Reich wrote: > Yeah, I'm sure I should increase my use of Latin variable names. I mean... maybe, but. When you run an obfuscator or minifier on code, what does it do? It removes human readability. A minifier minimizes representation. But variable names,

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 12:57 AM, rouncer81 wrote: > Seriously, im starting to get ready to go use all this superfluous > engineering skill ive collected over the last couple of years to go draw up > the schematics for my home personal guillotine system (tm). Ya just don't become

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-15 Thread John Rose
Yeah so, one way is to create a Qualia Flow as an Information Ratchet. Each click of the ratchet can be a discrete experience. The ratchet gets its energy from the motion in the AGI's internal dynamical systems entropy. click click click Then this ticking, when regulated, is a systems
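
A toy sketch of that ratchet, under my own reading of the metaphor (the surprisal-threshold scheme and all names are illustrative): accumulate surprisal from the internal dynamics and emit one discrete click per threshold crossing.

    import math
    import random

    def qualia_ratchet(signal, threshold=1.0):
        # Hypothetical "information ratchet": accumulate surprisal from the
        # internal dynamics and yield one discrete click (one unit of
        # "experience") each time the accumulator crosses the threshold.
        acc = 0.0
        for p in signal:             # p = probability of each internal event
            acc += -math.log2(p)     # surprisal in bits drives the ratchet
            while acc >= threshold:
                acc -= threshold
                yield "click"

    # Rarer internal events advance the ratchet faster.
    clicks = list(qualia_ratchet(random.uniform(0.01, 1.0) for _ in range(100)))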

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
On Sunday, September 15, 2019, at 8:32 AM, immortal.discoveries wrote: > John, interesting posts, some of what you say makes sense, you're not far off > (although I would like to see more details). This is just a hypothetical engineering discussion. But to put it more succinctly, is

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Well then it should be more of a multi-ratchet reflecting the topological entropic/chaotic computational synergy of the internal dynamical multi-systems mapped and bifurcated into full-duplex language transmission. Single ratchet = Morse code. Multi-ratchet = Polyphony (larger symbol space)
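
The quantitative point behind the Morse vs. polyphony contrast: one click of a k-symbol ratchet carries log2(k) bits, so widening the symbol space buys per-click information only logarithmically.

    import math

    # Bits conveyed per click as the symbol space grows:
    for k in (2, 16, 64, 256):   # from a Morse-like binary click to 256 "voices"
        print(f"{k}-symbol click: {math.log2(k):.0f} bits")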

Re: [agi] whats computer vision anyway

2019-09-17 Thread John Rose
On Monday, September 16, 2019, at 12:11 PM, rouncer81 wrote: > yes variables are simple and old,  we dont need them anymore. Sorry, object names :) In some languages everything is an object. The thought was going in the direction of reverse obfuscation...opposite direction of minification.   

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Please try to get this right it's very important: https://www.youtube.com/watch?v=xsDk5_bktFo John

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
Allow dimension modulation. Put some dimension control into the protocol layer allowing for requests of dimension adjustment from current transmission level... John

[agi] Re: by successive approximation.

2019-09-08 Thread John Rose
On Saturday, September 07, 2019, at 10:21 AM, Alan Grimes wrote: > Some examples of the limitations of the brain's architecture, include the inability to multiplex mental resources -> ie having a network of dozens of instances while retaining the advantages of having a single knowledge and

[agi] Re: Transformers - update

2019-09-19 Thread John Rose
I'm wrong. You're right. Was just hoping for more :) Incremental, team and skills building. Inventing and discovering new ideas while doing that. And when finding something good not releasing it to the public (for safety naturally). John

[agi] Re: AGI Research Without Neural Networks

2019-09-19 Thread John Rose
For ancillary like sensory you have to?  For core I don't think neural at all. Not to say neural is not emulated in some way in core... But I think any design has to use architectural optimization or has to be pre-architecturally optimized. John

Re: [agi] Simulation

2019-09-21 Thread John Rose
All four are partially correct. It is a simulation. And you're it. When you die your own private Idaho ends *poof*. This can all be modeled within the framework of conscioIntelligence, CI = UCP + OR. When you are that tabula rasa simuloid in your mother's womb you begin to occupy a

Re: [agi] Simulation

2019-09-21 Thread John Rose
On Saturday, September 21, 2019, at 11:01 AM, Stefan Reich wrote: > Interesting thought. In all fairness, we can just not really interact with a > number which doesn't have a finite description. As soon as we do, we pull it > into our finiteness and it stops being infinite. IMO there are only

Re: [agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 4:04 PM, Secretary of Trades wrote: > https://www.gzeromedia.com/so-you-want-to-arm-a-proxy-group I don't get it.

[agi] Re: Transformers - update

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 8:14 AM, immortal.discoveries wrote: > https://openai.com/blog/emergent-tool-use/ While entertaining, there is absolutely nothing new here related to AGI??? John

Re: [agi] Simulation

2019-09-28 Thread John Rose
On Saturday, September 28, 2019, at 4:59 AM, immortal.discoveries wrote: > Nodes have been dying ever since they were given life. But the mass is STILL > here. Persistence is futile. We will leave Earth and avoid the sun. You're right. It is a sad state of affairs with the environment...the

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 8:59 AM, korrelan wrote: > If the sensory streams from your sensory organs were disconnected what would your experience of reality be?  No sight, sound, tactile or sensory input of any description, how would you play a part/ interact with this wider network you

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 10:57 AM, immortal.discoveries wrote: > We could say our molecules make the decision korrelan :) And the microbiome bacteria, etc., transmitting through the gut-brain axis could have massively more complexity than the brain. "The gut-brain axis, a bidirectional

Re: [agi] Simulation

2019-09-27 Thread John Rose
Persist as what? Unpersist the sun rising, break the 99.99... % probability that it rises tomorrow. What happens? We burn.

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 1:44 PM, immortal.discoveries wrote: > Describing intelligence is easier when ignore the low level molecules. What if it loops? I remember reading a book as a kid where a scientist invented a new powerful microscope, looked into it, and saw himself looking

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Monday, September 23, 2019, at 7:43 AM, korrelan wrote: > From the reference/ perspective point of a single intelligence/ brain there are no other brains; we are each a closed system and a different version of you, exists in every other brain. How does ANY brain acting as a pattern reservoir

Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 6:48 PM, rouncer81 wrote: > actually no!  it is the power of time.    doing it over time steps is an > exponent worse. Are you thinking along the lines of Konrad Zuse's Rechnender Raum?  I just had to go read some again after you mentioned this :) John

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote: "the brain is presented with external patterns" "When you talk to someone" "Take this post as an example; I’m trying to explain a concept" "Does any of the actual visual information you gather" These phrases above, re-read them, are

Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-24 Thread John Rose
I'm thinking of a mathematical measure called "What The Fuckedness".  WTF({K, P, Le, ...}), K-Complexity, Perplexity and Logical Expectation. Anything missing? It can predict the expressive pattern on someone’s face when they go and type phrases into Mentifex's website expecting AI. John

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:07 AM, immortal.discoveries wrote: > The brain is a closed system when viewing others Uhm... a "closed system" that views. Not closed then? John

Re: [agi] Simulation

2019-09-22 Thread John Rose
On Saturday, September 21, 2019, at 7:24 PM, rouncer81 wrote: > Time is not the 4th dimension, time is actually powering space.    > (x*y*z)^time. And what's the layer on top of (x*y*z)^time that allows for intelligent interaction and efficiency to be expressed and executed in this physical

[agi] Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Time makes us think that humans are willfully creating AGI, as if it is in the future, like immanentizing the singularity eschaton. Will scientific advances occur at an ever-increasing rate? It would have to slow down at a certain point. Has to, right? As we approach max compression of

Re: [agi] The Job market.

2019-10-02 Thread John Rose
On Wednesday, October 02, 2019, at 1:05 AM, James Bowery wrote: > Harvard University's Jonathan Haidt is so terrified of the truth coming out > that he's actually come out against Occam's Razor. There are situations where the simplest explanation

[agi] Re: Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Heat can up-propagate into symbol and replicate out of there. Energy converts to informational transmission and disentropizes; it's gotta go somewhere, right? Even backwards in time as we're predicting.

Re: [agi] The Job market.

2019-09-29 Thread John Rose
On Sunday, September 29, 2019, at 3:15 AM, Alan Grimes wrote: > THEY WILL PAY, ALL OF THEM!!! LOL. Hang in there. IMO us engineers get better with age as long as we keep learning, the more you try and fail the wiser you get. Hell I got more than 10 years on ya son and I’m still kickin’ keister!

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-29 Thread John Rose
"The graphtropy of a distinction graph, constructed relative to an observer, is therefore considerable as a measure of how much excessive algorithmic information exists in the system of observations modeled by the distinction graph, relative to the observer. Or to put it more simply, the

Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-27 Thread John Rose
On Wednesday, September 25, 2019, at 7:01 PM, James Bowery wrote: > Yes, what is missing is the parsimony of your measure, since the Perplexity > and Logical Expectation measures have open parameters that if filled properly > reduce to K-Complexity. James, interesting, thanks for making us

Re: [agi] Re: The world on the eve of the singularity.

2019-09-27 Thread John Rose
We must first accept and understand that there are intelligence structures bigger than ourselves and some of these structures cannot be fully modeled by one puny human brain. And some structures are vastly inter-generational... and some may be designed or emerged that way across generations to

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 2:05 PM, korrelan wrote: > The realisation/ understanding that the human brain is a closed system, to me… is a first order/ obvious/ primary concept when designing an AGI or in my case a neuromorphic brain simulation. A human brain is merely an instance node on

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 3:34 PM, korrelan wrote: > Reading back up the thread I do seem rather stern or harsh in my opinions, if > I came across this way I apologise.  I didn't think that of you. We shouldn't be overly sensitive and afraid to offend. There is no right to not be

Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote: > ANY situation can be one where the most viable _decision_ is to stop the > search for the simplest explanation and _act_ on the simplest explanation you > have found _thus far_.  This is a consequence of the incomputability of >

Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote: > Wolfram!  Well!  Perhaps you should take this up with Hector Zenil > : Interesting:   https://arxiv.org/abs/1608.05972 Yaneer Bar-Yam has produced much good reading also.

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 11:23 AM, rouncer81 wrote: > and basicly what im doing is im reducing permutations by making everything > more the same. > Increasing similarity... within bounds... good one.

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
It would be interesting to Venn out all the AGI theories and see how they overlap.  Some people tout theirs against others (I won't mention any names *cough cough* Google) but I don't do that...

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 10:05 AM, rouncer81 wrote: > So J.R. whats so good about hybrid compression?  Real-world issues where max compression isn't the goal but an efficient and inter-communicable compression is. Things aren't as clean-cut as files on disk.

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 12:36 PM, rouncer81 wrote: > Lossylossnessness,  total goldmine ill say again.  Dont doubt it. :) Picture this - when Charles Proteus Steinmetz proposed using imaginary numbers for alternating current circuit analysis everyone attacked him and thought he was

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
Partitioning into crisp boolean could be interpreted as pulling fear out of your backpocket.

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
Couple hybrids, there's more where they came from: https://arxiv.org/abs/1804.02713 https://www.semanticscholar.org/paper/LOW-COMPLEXITY-HYBRID-LOSSY-TO-LOSSLESS-IMAGE-CODER-Krishnamoorthy-Rajavijayalakshmi/20657ef592513af2e4e2d6907295eb0e3dc206b0

Re: [agi] Re: Missing Data

2019-11-04 Thread John Rose
On Monday, November 04, 2019, at 8:39 AM, Matt Mahoney wrote: > JPEG and MPEG combine lossy and lossless compression, but we don't normally > call them hybrid. Any compressor with at least one lossy stage is lossy. > There is a sharp distinction between lossy and lossless. Either the >
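
Matt's sharp distinction is mechanically checkable per input: a scheme is lossless on x exactly when decompressing its output reproduces x bit for bit. A minimal sketch (zlib is just my stand-in codec):

    import zlib

    def lossless_on(x: bytes) -> bool:
        # The round trip either reproduces x exactly or it doesn't;
        # there is no middle ground for a given input.
        return zlib.decompress(zlib.compress(x)) == x

    assert lossless_on(b"2+2=4")  # zlib round-trips every input exactly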

Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
If lossy vs. lossless were crisp, an a priori or a posteriori classification could not be determined unless the complexity of all compressors were partitioned, but the decompression results are not known until execution on all possible data... which is impossible.  FWIW. So I suspect they're fuzzy and not

Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
On Friday, November 01, 2019, at 3:48 PM, immortal.discoveries wrote: > Death improves U. Death. The inevitable lossy compression, but if you have a soul it could be lossylosslessness. HEY!!!

Re: [agi] Re: Missing Data

2019-11-01 Thread John Rose
Well you could have a compressor that starts off lossless then intelligently decides that it needs to operate faster due to some criteria and then compresses particular less-important data branches lossily. Then it would fall into the middle ground, no?  A hybrid. And vice versa, on
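
A minimal sketch of that hybrid, under assumptions of my own (the "L"/"Y" block tags, the per-block importance flags, and byte decimation as the lossy stage are illustrative, not any standard format):

    import zlib

    def hybrid_compress(blocks, important):
        # Lossless zlib on blocks flagged important; a crude lossy stage
        # (drop every other byte) before zlib on the less-important rest.
        out = []
        for block, keep_exact in zip(blocks, important):
            if keep_exact:
                out.append(b"L" + zlib.compress(block))       # exact branch
            else:
                out.append(b"Y" + zlib.compress(block[::2]))  # decimated branch
        return out

Whether the stream as a whole counts as lossy then depends on which branches carried the flag, which is exactly the middle-ground question.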

Re: [agi] Re: Missing Data

2019-11-05 Thread John Rose
Yes, and another official category in the world of compression that fits into the lossylosslessness umbrella is called "Perceptual Lossless". This is different from "Near Lossless"; it is self-explanatory and can be visual, audio, and one might imagine extending it to olfactory and

Re: [agi] Re: Missing Data

2019-11-05 Thread John Rose
On Monday, November 04, 2019, at 4:17 PM, James Bowery wrote: > This is one reason I tend to perk up when someone comes along with a notion > of complex valued recurrent neural nets. Kind of interesting - deep compression in complex domain: https://arxiv.org/abs/1903.02358

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
That worm coming out of the cricket was cringeworthy. Cymothoa exigua is another. It’s not the worm’s fault though, it’s just living its joyful and pleasurable life to the fullest. And the cricket is being open and submissive. I think there are nonphysical parasites that affect human beings...

Re: [agi] Against Legg's 2007 definition of intelligence

2019-11-09 Thread John Rose
Perhaps we need definitions of stupidity. With all artificial intelligence there is artificial stupidity? Take the diff and correlate to bliss (ignorance). Blue pill me baby. Consumes less watts. More efficient? But survival is negentropy. So knowledge is potential energy. Causal entropic

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 11:34 PM, immortal.discoveries wrote: > "consciousness" isn't a real thing and can't be tested in a lab... hm... I don't know. It's kind of like doing generalized principal component analysis on white noise. Something has to do it. Something has to do the

Re: [agi] Re: Missing Data

2019-11-09 Thread John Rose
On Thursday, November 07, 2019, at 1:30 PM, WriterOfMinds wrote: >> Re: John Rose: "It might be effectively lossless; it’s not guaranteed to be >> lossy." > True. But I think the usual procedure is that unless the algorithm guarantees > losslessness, you treat th

Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
Ha!  I have the opposite problem, believing too much. Like, I believe I can create an artificial mind based on an I Ching computer. So tempted to drop everything and go for it. Who needs all this modern science malarkey? COME ON!! DO IT!!! DO IT NOW

Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Thursday, November 07, 2019, at 10:30 AM, WriterOfMinds wrote: > The compressed output still contains less information than the original, > ergo, it is lossy. Naturally, that's if you have the original raw data to compare against. You almost never do; that's why you compress. For example, some compressors

Re: [agi] Re: Missing Data

2019-11-06 Thread John Rose
Question: Why don't the compression experts call near-lossless and perceptual-lossless lossy? Answer: Because you don't know. They could be either though admittedly high probability lossy. How do you know something is conscious? It could be perceptually conscious but not really conscious. So

Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
With consciousness I'm merely observing functional aspects and using that in building an engineering model of general intelligence based on >1 agent. I feel consciousness improves communication, is a component and is important. And even with just one agent it's important IMO. If you think

Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Wednesday, November 06, 2019, at 9:52 PM, Matt Mahoney wrote: > The homunculus, or little person inside your head. Or like Dennett's homuncular hordes. The power of the many.

Re: [agi] Re: Missing Data

2019-11-07 Thread John Rose
On Wednesday, November 06, 2019, at 10:58 PM, immortal.discoveries wrote: > Every day we kill bugs. Because we can't see them, nor do they look like us. It's tough with insects and small creatures.  Where does one draw the line? I do think they have some consciousness; perhaps AGI should have

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
Yes lossy effectively leaves it up to the observer and environment to reconstruct missing detail.

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
What is the big picture lossy :)  Everything is a piece of something else.

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
On Tuesday, October 29, 2019, at 12:25 PM, WriterOfMinds wrote: > Lossylossless compression and losslesslossy compression may now join partial > pregnancy, having and eating > one's cake, and the acre of land between the ocean and the shore in the > category of Things that Don't Exist. >

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
Oh I see! That's actually pretty creative. I don't think I ever thought of it that way.

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
On Tuesday, October 29, 2019, at 3:06 PM, immortal.discoveries wrote: > If we apply Lossy Compression on a text file that contains the string > "2+2=4", it results in missing data because the new data is smaller in size > (because of compression). You are assuming something about the observer

Re: [agi] Re: Missing Data

2019-10-31 Thread John Rose
I think that there are size ranges for things to happen. Regions of particulate densities, cloud thicknesses, there are separatedness expanses to operate in for many things. State changes are gradual in many cases though there is definitely abruptness. Chaotic boundaries I suppose...

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
On Tuesday, November 12, 2019, at 11:07 AM, rouncer81 wrote: > AGI is a lot pointless, just like us, if all we end up doing is scoring chicks > what the hell was the point of making us so intelligent??? Our destination is to emit AGI and AGI will emerge from us and then we become entropy

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-13 Thread John Rose
True. And why bother learning to write with your hand when you can just wave the magical smartphone wand while emitting grunts? It's like a purpose of AI is to suck the intelligence out of smart monkeys then resell it when it's gone. Net effect? Mass subservient zombification with parasitic AI

[agi] Supercharge ML PCA?

2019-11-17 Thread John Rose
I was thinking this discovery could be used to speed up PCA-related eigenvector/eigenvalue computations: https://arxiv.org/abs/1908.03795 Thoughts?
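
That paper is the eigenvector-from-eigenvalue identity of Denton, Parke, Tao and Zhang: for a Hermitian A with eigenvalues lambda_i(A), |v_{i,j}|^2 * prod_{k!=i} (lambda_i(A) - lambda_k(A)) = prod_{k=1..n-1} (lambda_i(A) - lambda_k(M_j)), where M_j is A with row and column j deleted. A NumPy sketch of my own (assumes distinct eigenvalues; note it recovers only squared magnitudes, not signs, and the n minor decompositions make this naive form no faster than a direct eigh):

    import numpy as np

    def eigvec_sq_from_eigvals(A):
        # Squared eigenvector components |v_i[j]|^2 of a real symmetric A,
        # from eigenvalues of A and of its principal minors alone.
        n = A.shape[0]
        lam = np.linalg.eigvalsh(A)
        V2 = np.empty((n, n))        # V2[i, j] = squared j-th component of v_i
        for j in range(n):
            Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
            mu = np.linalg.eigvalsh(Mj)          # eigenvalues of the minor
            for i in range(n):
                V2[i, j] = np.prod(lam[i] - mu) / np.prod(lam[i] - np.delete(lam, i))
        return V2

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    w, v = np.linalg.eigh(A)         # columns of v are eigenvectors
    assert np.allclose(eigvec_sq_from_eigvals(A), (v.T) ** 2)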

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-17 Thread John Rose
I enjoyed reading that rather large paragraph. Reminded me of Beat writing with an AGI/consciousness twist to it.

Re: [agi] Re: Missing Data

2019-11-17 Thread John Rose
Don't want to beat a dead horse but I think with all this discussion we have neglected describing the effects of... drum roll please: *Quantum Lossylosslessness* Feast your eyes on this article   https://phys.org/news/2019-11-quantum-physics-reality-doesnt.html 

Re: [agi] Who wants free cash for the support of AGI creation?

2019-11-18 Thread John Rose
Compression is a subset of communication protocol. One to one, one to many, many to one, and many to many.  Including one to itself and even, none to none?  No communication is in fact communication. Why? Being conscious of no communication is communication especially in a quantum sense.

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-18 Thread John Rose
Errors are input, are ideas, and are an intelligence component. Optimal intelligence has some error threshold and it's not always zero. In fact errors in complicated environments enhance intelligence by adding a complexity reference or sort of a modulation feed...

Re: [agi] Standard Model of AGI

2019-11-18 Thread John Rose
On Monday, November 18, 2019, at 8:21 AM, A.T. Murray wrote: > If anyone here assembled feels that the http://ai.neocities.org/Ghost.html in > the machine should not be universally acknowledged as the Standard Model, let > them speak up now. It's just so hard for us mere mortals to read the

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-12 Thread John Rose
We might go through a phase where our minds occupy the minds of robots, remote control, before we get to AGI automating human labor. One person can occupy many robots simultaneously. Multiple self-driving cars can be occupied by one person. Imagine wireless connections to the brain to the

Re: [agi] Leggifying "Friendly Intelligence" and "Zombies"

2019-11-15 Thread John Rose
Hey look a partial taxonomy: http://immortality-roadmap.com/zombiemap3.pdf

Re: [agi] Re: Missing Data

2019-11-06 Thread John Rose
Good idea James. A lot of research going on with AGI and consciousness. Matt may want to Google around a bit to get updated. I do wonder Matt, if something is "perceptually lossless" why would you call that marketing? You can't really call it lossy can you?
