Re: [agi] The four things needed to solve AGI.

2018-06-08 Thread johnrose
>Alan
>
>We really really don't have any more time to waste on...
>

Agreed. Compadres, we should not do FUD, gaslighting, trolling, etc.

>
>MP
>  BUT AT LEAST HE HAS SOMETHING.

As Google knows, searching is SOMETHING, but it is better to understand what 
one is searching for. Often what we are searching for isn't what we are really 
after, but just pieces in a complex-systems mosaic of identity. IOW, 
searching/researching is a dimension of creating/re-creating... us.

"We tirelessly and ceaselessly search for Something, we know not what, which 
will appear in the end to those who have penetrated to the very heart of 
reality."

--- Pierre Teilhard de Chardin

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T507c404b4595c71c-M21ddd2ee03417c1934e23ee5
Delivery options: https://agi.topicbox.com/groups


Re: [agi] The Singularity Forum

2018-06-15 Thread johnrose
Kimera - I just looked at this a little and, translating: they have a working 
"AGI" that can currently do AI; real AGI is on the roadmap, but they need more 
funding for marketing, partnerships, and development.

Apparently about 80% of their "team/advisers" are non-engineers.

Page 18 of the ICO whitepaper talks high-level architecture, which is kind of 
interesting. There appears to be some high-level substance, and I'm drilling 
down looking for details...

They claim via their app that Nigel learned to dim smartphones in a movie 
theater. This is not AGI, and it is not AI either... really... unless it 
somehow pumped through their "AI" system and learned that way, and if it did, 
there should be more to show for Nigel, but I'm not finding much yet. I suppose 
we could ask some friends over at RIT who are "using" it?

Hype isn't bad, but claiming to have the first AGI when only achieving a 
partially finished, unproven proto-AGI concept is insulting to serious 
researchers. It's like a cool breeze portending a future AGI winter (I doubt 
another winter, though). There'll be a lot more of these types of companies, 
and the hype might actually help pave the way... and there may be some real 
substance here after peeling away layer after layer.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ada390c367596a4-Me76b71266bc608a57fe8251a
Delivery options: https://agi.topicbox.com/groups


Re: [agi] The Singularity Forum

2018-06-15 Thread johnrose
The patent affirms what I was saying - the app/server sees that others in the 
same movie theater have dimmed their screens, so it dims it for that user. Not 
AGI... just a db query add-on to a location service.

"As another example, a Service node may reference an application that controls 
user device settings. When a user enters a movie theater, for example, the 
geolocation information may be transmitted to the personal cloud. The personal 
cloud obtains the movie theater anchor cloud ID from the directory and 
connects to the movie theater cloud. The graph engine traverses the abstracted 
Subnets and determines that global behavior stored in the graph engine, based 
on the actions of other users connected to the movie theater anchor cloud ID, 
includes a dimming of a user device display and a muting of the user device 
volume. The graph engine may also encounter the Service node connected to the 
user device settings application. The graph engine may return this 
intelligence update to the personal cloud and/or the user device. The user 
device settings application may then reduce the brightness of the user device 
display and mute the user device volume. In this way, the graph engine may 
calculate effective probabilities for nodes in a subnet that only contains 
abstracted information in order to enable specific, functional results on a 
user device."
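Stripped of the patent language, that behavior is roughly a location lookup plus a majority vote over the settings of other devices at the same anchor location. A minimal sketch of that logic (all names are mine, not Kimera's):

```python
from collections import Counter

def settings_consensus(peer_settings, threshold=0.5):
    """Return the majority value of each boolean setting reported by
    other devices at the same location (the "anchor cloud")."""
    true_votes = Counter()
    totals = Counter()
    for settings in peer_settings:
        for key, value in settings.items():
            totals[key] += 1
            if value:
                true_votes[key] += 1
    return {key: true_votes[key] / totals[key] > threshold for key in totals}

# Devices already in the theater report dimmed and muted states.
peers = [{"dim": True, "mute": True},
         {"dim": True, "mute": True},
         {"dim": True, "mute": False}]
actions = settings_consensus(peers)  # {'dim': True, 'mute': True}
```

Which is to say: a group-by query over a location's rows, not learning.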

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5ada390c367596a4-M77799a4f127d4b07db70374d
Delivery options: https://agi.topicbox.com/groups


[agi] Blockchainifying Conscious Awareness

2018-06-17 Thread johnrose
Why would anyone want to do that?

Ans:  For model checking on a distributed imagination.

Just figured I’d throw that out there 

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-M02a06959f6519fca4344b72a
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Blockchainifying Conscious Awareness

2018-06-17 Thread johnrose
If the stream of consciousness (sampled, securely stored in blocks, and 
distributed in a decentralized autonomous multi-agent system) is inaccurate, 
aka hacked, the imagined models could be distorted. The distributed, 
decentralized AGI's imagination could be intentionally "influenced" in 
deleterious ways.

How would you do the same without, say, using hashgraph? Hundreds of thousands 
of TPS. You still need to manage gossip issues, so why not go with the 
contemporary flow of where the money is? Use the benefits being built with 
those resources. We know there are other ways... and the constraints you refer 
to bring benefits from real-world-tested, inter-networked use cases.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-Md8a5c842511f95894ce23c3e
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Blockchainifying Conscious Awareness

2018-06-18 Thread johnrose
Walking this further:

Nuzz:  Facebook is centralized. They own your data. You are the product. They 
get hacked. 

Mahoney: This is about consensus not competition.

So... fullnodes, masternodes, multi-componented. One component set for 
rendering models, one for checking. Consensus is n confirmations on models; 
nodes do both, either, or neither of this functional subset (meaning they do 
other work). Multichain. Need to optimize computational topology due to the 
gossip problem. Nodes can be clusters…
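The "n confirmations on models" part can be sketched as a tiny check: a rendered model is accepted once at least n distinct checker nodes have signed off on it (node and model names here are hypothetical):

```python
def is_confirmed(model_id, confirmations, n=3):
    """A rendered model is accepted once at least n distinct
    checker nodes have independently confirmed it."""
    return len(set(confirmations.get(model_id, ()))) >= n

# nodeA confirmed model-42 twice, but only distinct checkers count.
confirmations = {"model-42": ["nodeA", "nodeB", "nodeA", "nodeC"],
                 "model-99": ["nodeA"]}
accepted = is_confirmed("model-42", confirmations)   # True  (3 distinct)
pending = is_confirmed("model-99", confirmations)    # False (only 1)
```

The real work is of course in the gossip and topology, not this predicate.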

Blockchain topology is perfect for this. 

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-Mbeebb6fe4148b46fd7f07333
Delivery options: https://agi.topicbox.com/groups


Re: [agi] AI Mind spares Indicative() and improves SpreadAct() mind-module.

2018-06-05 Thread johnrose
Arthur,

Every time you start posting about your "AI Mind" app I briefly go and look at 
the JS source ("View page source" in the web browser). Here are a few thoughts, 
after working with thousands of source codes over the years, and instead of 
just saying "If there were an example of how not to write an AI app, this 
would be it":

1. Ancient source code, started when variable names were required to be short 
due to memory constraints, programmer laziness, and/or unprofessional 
selfishness.

2. The app code has never been truly refined out of its small-memory 
constraints.

3. Code is intentionally obscure to hide non-understanding, while providing a 
sense of security to the author and others by representing "something" 
abstractly.

4. Code obscured to deceive readers - or - honestly and unintentionally hiding 
the misunderstood complexity of the subject by making a first-person, 
reasonable effort at understanding but unprovably failing.

5. The code probably cannot be clearly rewritten, since there are obscured, 
forgotten memories of misunderstood concepts, though somewhat indexed by dates 
in the comments.

6. All these things encrusted over time... layer after layer... often hosted as 
a talking point, a reference point for similar related limitations.

7. - OR - with very low probability, there is real genius hidden in said code, 
loops and loops of abstract recursive representations, the most advanced 
chat-bot ever created... but I have neither the time nor energy to investigate 
further, as I assume few have. Perhaps another intention of said app is to 
wear out the seeker of such truths? I cannot rule out that this app is 
actually working toward some really great AI, but unfortunately it looks like 
the opposite: childishly underpowered and frivolously incomplete.


But there is some sort of novelty to this I suppose.

If there were a museum of coding oddities this would definitely be top 10.

IMO the code one writes is a reflection of oneself, a projection of sorts. "AI 
Mind" is more about you, Arthur, your mind over time, and much is revealed.

So, you can imagine, if an AGI were to attempt to kludgily hack out some 
representation of a mind in similar circumstances, what would it "hide", 
limit, and represent at the same time? What would it look like?

Note that JavaScript and JavaScript AI are becoming increasingly advanced. For 
example, see the FAQ auto-creators, bot builders, etc. that use JS. And 
TypeScript is a very powerful abstraction of JS that is, surprisingly, 
becoming widely adopted...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T7d4ef049c1079ece-M6a7c94400bd97137369b5f87
Delivery options: https://agi.topicbox.com/groups


Re: [agi] Blockchainifying Conscious Awareness

2018-06-21 Thread johnrose
Here are a few more blockchain distributed computing videos. Applicable? 
Maybe. Entertaining? Yes.

The networks are probably laggy, since some just use spare machine resources 
like BOINC but allow buying and selling via coins or tokens. But not every AGI 
component needs hyper-low-latency distributed computing. 

Iagon
https://youtu.be/FdmCfSBkUyI

Elastic
https://www.youtube.com/watch?v=hejEY9HEFO0

Definity
https://youtu.be/kyCfGRZaDnw

Zilliqa
https://www.youtube.com/watch?v=gQiG_ilPGG0

iExec
https://www.youtube.com/watch?v=07ojusto6s4

AION
https://www.youtube.com/watch?v=pFkPiL-dtDY

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9353b0b8fd3894d8-M135dcb22254c417988565a53
Delivery options: https://agi.topicbox.com/groups


[agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread johnrose
Oh OK everybody, you can throw away your keyboards, Mentifex created the first 
AGI...

Problem is, only he can read the code!  LOL

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0eb019c4c2b48817-Md4b0bb7b18473f7a90c179c3
Delivery options: https://agi.topicbox.com/groups


Re: [agi] The reality onion...

2018-07-22 Thread johnrose
Watched this Kafkaesque movie last eve called "Enemy", and for some bizarre 
reason this message thread reminds me of the opening script:

"It's all about control.
Every dictatorship has
one obsession,
and that's it.
So, in ancient Rome,
they gave the people bread
and circuses.
They kept the populace busy
with entertainment.
But other dictatorships use
other... other strategies
to control ideas,
the knowledge.
How do they do that?
They lower education,
they limit culture,
censor information.
They censor any means
of individual expression.
And it's important
to remember this,
that this is a pattern
that repeats itself
throughout history."

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2c15e9dba869b3f0-Mc50bacf78e92108150f98363
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread johnrose
Basically, if you look at all of life (Earth only, for this example) over the 
past 4.5 billion years, including all the consciousness and all that 
“presumed” entanglement, and say that's the first general intelligence (GI), 
then the algebraic structural dynamics on the computational edge... is 
computing consciousness and is correlated directly with general intelligence. 
They are two versions of the same thing.

So why basic AI is only computational consciousness, not really consciousness 
computation, is left to the reader as an exercise :)

To clarify, my poor grammatical skills –
AI = computational consciousness = consciousness performing computation
GI = consciousness computation = consciousness being created by computation

The original key idea here though is consciousness as Universal Communications 
Protocol. Took me years to tie those two together. That's a very practical 
idea, the stuff above I'm not sure of just toying with...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mcf324d011886fce24bc9a48c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
Nanograte,

> In particular, the notion of a universal communication protocol. To me it 
> seems to have a definite ring of truth to it.

It does, doesn't it?!

For years I've worked with signaling and protocols, lending some time to 
imagining a universal protocol. And for years I've thought about and 
researched consciousness. Totally independent of one another. Then, very 
recently, this line in my mind just appeared, joining one to the other. It 
was... weird. But it all makes sense! Consciousness is a communication 
protocol, but is it a universal protocol? Possibly; to be explored... I'm sure 
others have seen the same thing, especially in biology/biomimicry.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M5b9aad878a55914b54da8358
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
Matt,

Zoom out. Think multi-agent, not single-agent. Multi-agent internally and 
externally. Evaluate this proposition not from a first-person narrative and it 
begins to make sense.

Why is there no single general compression algorithm? Same reason as for 
general intelligence; thus multi-agent, thus inter-agent communication, thus 
protocol, and thus consciousness.
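That "no single general compressor" claim is a pigeonhole-counting fact: any lossless compressor that shrinks some inputs must expand others. A quick illustration with zlib on structured versus random data:

```python
import os
import zlib

structured = b"thus protocol, thus consciousness. " * 100  # regular text
random_data = os.urandom(len(structured))                  # incompressible

small = len(zlib.compress(structured, 9))  # far smaller than the input
big = len(zlib.compress(random_data, 9))   # at least as large as the input

assert small < len(structured)
assert big >= len(random_data)
```

No compressor wins on every input, so the "general" part has to come from somewhere else, i.e. many specialized agents.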

> But magic doesn't solve engineering problems.
Ehm.. being an engineer I ah disagree with this... half-jokingly :) 

More seriously though:
Doesn't Gödel Incompleteness imply "magic" is needed?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M7c2ff87f368473867c63de2a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread johnrose
> -Original Message-
> From: Matt Mahoney via AGI 
>...

Yes, I'm familiar with these algorithmic information theory *specifics*. Very 
applicable when implemented in isolated systems...

> No, it (and Legg's generalizations) implies that a lot of software and 
> hardware
> is required and you can forget about shortcuts like universal learners sucking
> data off the internet. You can also forget about self improving software
> (violates information theory), quantum computing (neural computation is not
> unitary), or consciousness (an illusion that evolved so you would fear death).

Whoa, you're saying a lot there. Throwing away a lot of "engineering options" 
with those statements. But I think your view of consciousness, even if it's 
just an illusion to an agent, is still communication protocol! It still fits!

> How much software and hardware? You were born with half of what you
> know as an adult, about 10^9 bits each. That's roughly the information

OK, Landauer's study, while a good reference point, is in serious need of new 
data.


> The hard coded (nature) part of your AGI is about 300M lines of code, doable
> for a big company for $30 billion but probably not by you working alone. And
> then you still need a 10 petaflop computer to run it on, or several billion 
> times
> that to automate all human labor globally like you promised your simple
> universal learner would do by next year.
>
> I believe AGI will happen because it's worth $1 quadrillion to automate labor
> and the technology trend is clear. We have better way to write code than
> evolution and we can develop more energy efficient computers by moving
> atoms instead of electrons. It's not magic. It's engineering.
> From: Matt Mahoney
> I believe AGI will happen

You believe! Showing signs of a communication protocol with a future AGI :) An 
aspect of CONSCIOUSNESS?

Nowadays that $1 quadrillion might be in cryptocurrency units, and the 10 
petaflop computer a blockchain-like P2P system. And if a megacorp successfully 
builds AGI, the peers (agents) must use a signaling protocol, otherwise they 
don't communicate. So, can the peers be considered conscious? Conscious as in 
those behaviors common across many definitions of consciousness? Not looking 
at the magical part, just the engineering part.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mcef74a38e1012d36f1b77fcb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread johnrose
On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
> I don't even think that stuff is relevant.

Jim,

It's relevant if consciousness is the secret sauce, and if it applies to the 
complexity problem.

Would a non-conscious entity have a reason to develop AGI?

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T59bc38b5f7062dbd-M1cea9ea3e894df9dde086333
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My AGI 2019 paper draft

2019-05-02 Thread johnrose
Reread the paper, it makes more sense the more times you read it:

"The main idea is to regard “thinking” as a dynamical system operating on 
mental states:"

Then think about how the system would learn to drive a car, for example... then 
learn to fly an airplane.
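A caricature of the quoted idea, with "thinking" as iterating a transition function over a mental state; learning to drive versus learning to fly would then be two different transition maps plugged into the same loop (a toy sketch, not the paper's actual formalism):

```python
def think(state, transition, steps):
    """Iterate a transition function over a mental state,
    returning the trajectory of states visited."""
    trajectory = [state]
    for _ in range(steps):
        state = transition(state)
        trajectory.append(state)
    return trajectory

# Toy transition: nudge the state toward a goal (e.g. a target speed).
def toward(goal, rate=0.5):
    return lambda x: x + rate * (goal - x)

trajectory = think(0.0, toward(60.0), steps=10)  # converges near 60.0
```

Swapping in a different `transition` is the "learn a new skill" part; the dynamical-system loop itself never changes.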

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-Mbd5e077f4af6b6e25aae1df8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread johnrose
Possible correction here: this is modeling consciousness assuming everything 
is conscious, "panpsychism" is it?

I mentioned pondering pure randomness. This might not be right; it might be 
pondering pure nothingness. Would pure nothingness have a consciousness of 
everything, or pure randomness? Maximal K-complexity versus zero. Structural 
distance from the pondering agent via comm. protocol.

BTW, we know with our Venn diagrams there is overlap with ML. As with 
everything there is overlap; I'm not trying to draw a hard border, but 
consciousness could actually be described as encompassing ML.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M48ec2367d9035c15155ebe07
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread johnrose
On Saturday, August 24, 2019, at 11:15 AM, keghnfeem wrote:
> The human mind builds many temporal patterns and pick the best one. Since wet 
> neurons are so 
> slow. Also the human brain build many temporal patterns that will occur or 
> could occur if predicted 
> patterns fails. Also, the brain records everything, so when we sleep complex 
> algorithms are brought 
> into being. to make sense of the more complex temporal patters.  
>  Allot of these altered altered realities, possible temporal patterns, are 
> garbage and a algorithm 
> delete them or retrain them. BUT some are on the border line and are past to 
> conscious 
> mind as we sleep and is up to that person dreaming person salvage any are 
> worth saving before they
> deleted. 


Interesting. Sleeping pattern machines sifting through altered realities.

Going further, sleepers, synchronized to sinusoidal day night cycle, go back a 
few million cycles, in the jungle, each sleeper retaining slightly different 
realities then sharing on the day half, then sleeping, mixing, reprocessing, 
eating some seasonal herbs, sharing some multi-agent consciousness... the sound 
of daily cycles zung zung zung, speed it up to like 100 Hz, buz, agents 
only last a couple minutes, faster hum, meta-patterns emerge, are hosted 
across agent lifetimes in a society shared with other societies, faster, high 
pitched whine, societies fail meta-patterns collapse, shatter, vibrated into 
the cycles reconstituted wheee industrial revolution, internet, STOP. Into 
the future, start zung zung zung buz whee high pitched whine 
dissipates, we left the planet... on other planets now multi-cycles 
zwerherringzwerherringzwerherring... heheh

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M8942072bd1fffa68070fef6f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread johnrose
On Tuesday, August 27, 2019, at 7:51 AM, immortal.discoveries wrote:
> I believe consciousness doesn't exist for many, many, reasons, ex. physics, 
> our brain being a meta ball from the womb, learned cues, etc. I am purely a 
> robot from evolution, with no life. The moment you hear that you fear that 
> and want to feel more special, it's like buying bible movies as a food sold 
> only to fill the spirit, a hobby, marketed. Thinking god or free will exists 
> gives you fake highs, and people sell it to you, it makes cash. There is only 
> things that we do that we are really talking about, like having knowledge 
> about objects, paying attention to the correct context or stimuli, and 
> entailment learning and generation.

Keep in mind that before the electronic communications era, people sought 
forms of communication and forms of super/omni-intelligence and developed 
these concepts for many reasons, and these were/are far from perfect, since 
transmissions lacked sufficient lossless mechanisms.

I could argue that electronic communications are making individuals less 
intelligent in some ways, short-circuiting many high-level processes, but I 
won't bother. Over-classification is another issue: too much 
labeling/symbolizing can paralyze efficient cognitive thought, where one must 
obey what one was taught to regurgitate for fear of misapplying a label... 
like a Skitt's Law of scientific terminology.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4e7a84d6a3ce726622b5db4b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread johnrose
I was expressing panpsychist mathematical modeling with consciousness as 
Universal Communication Protocol and Occupying Representation, in case you 
didn't notice. This has much overlap with other AI fields...

keghnfeem, we may have some similar ideas. I see you have something called a 
Visual Alphabet; this may be related to my thinking on a universal panpsychist 
language of everything, where everything "speaks" based on structure 
observed/occupied. So I will look at what you have there.

But... I don't know, maybe Matt is right and consciousness has absolutely 
nothing to do with AGI. Then he falls into the C = null camp in AGI = {I,C,M}, 
so that statement is still true.

BTW, I was thinking AGI = {I,C,M,PSI}, but I'm not sure. Matt, what do you 
think about that? 0 or null? LOL

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4900ff559dbd716ed4713625
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread johnrose
Our minds are simulating most everything. We can imagine a model where a 
spaceship goes from Earth to Pluto in 1 second, virtually breaking the speed 
of light (we know it isn't really). My thought was that consciousness is the 
one piece of the mind that isn't a model or a simulation. And for a 
machine-based being, self-awareness could be the act of occupying a 
representation of itself of itself of itself of itself...

Holistically, sure, we could be in a video game that some teenager left 
running on his computer overnight in an advanced technological future. Or this 
really could be it; it is what it is, base reality. Who knows. I have my 
suspicions, like:

1) As our consciousness expands, we are creating the universe, or something is 
creating it.
2) We are beings somehow injected into this existence from some other reality.
3) We are just part of a larger informational, trans-dimensional creature 
structured on DNA, where we are instance nodes of that informational being.

Or all of the above are true. And they are, in some way.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M8a5fba12af1e12a3d9f417e8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: The Dawn of AI (Machine Learning Tribes | Deep Learning | What Is Machine Learning)

2019-09-03 Thread johnrose
Great video.  Reminds me of this:

https://external-preview.redd.it/aEB0JKhofXy2Feiu2QrzZRRsLgCBwS8cRbVZwUZHjkE.gif?width=640=mp4=5f296022e7875f78f78d6ea9fa1f15e15ad5f8e2

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tecb9c0c21d65fcb2-Me4f05dfb7935ea2aa79fbd09
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-03 Thread johnrose
Qualia flow, the dots are qualia :)
https://www.youtube.com/watch?v=vw9vjEB1S2Y

Transform into text:
https://www.youtube.com/watch?v=myFR8FTXOM4

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me732f7b91cd5f1781446e973
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
> Like I said when I first posted on this thread, phenomenal consciousness is 
> neither necessary nor sufficient for an intelligent system.

This is the premise you are misguided by. Who is building the intelligent 
systems? Grunts that happen to have phenomenal consciousness, not the 
opposite.

Well, I was thinking of calling all this Gloobledeglockedicnicty, or 
individually using 15 other terms every time it's mentioned. But my qualia on 
it fit better into the term "consciousness", and other grunts can relate 
better. (Well, some of them :) )

...

I also want to build an artificial heart. Oh nnooo, can't call it a heart, it 
doesn't feel love. Note, IMO the heart is an integral part of human 
intelligence.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mce6ed6677364c02685c2d5cc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Monday, August 26, 2019, at 5:25 PM, WriterOfMinds wrote:
> "What it feels like to think" or "the sum of all a being's qualia" can be 
> called phenomenal consciousness. I don't think this type of consciousness is 
> either necessary or sufficient for AGI. If you have an explicit goal of 
> creating an Artificial Phenomenal Consciousness ... well, good luck. 
> Phenomenal consciousness is inherently first-person, and measuring or 
> detecting it in anyone but yourself is seemingly impossible. Nothing about an 
> AGI's structure or behavior will tell you what its first-person experiences 
> *feel* like, or if it feels anything at all.


Qualia = compressed, impressed samples symbolized for communication. From the 
perspective of other agents, attempting to Occupy Representation of another 
agent's phenomenal consciousness would be akin to computing its K-complexity. 
Some being computable, some being estimable.
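On the "estimable" part: K-complexity is uncomputable exactly, but any compressor gives a computable upper bound, which is the usual estimable stand-in. A sketch with zlib:

```python
import os
import zlib

def k_estimate(data: bytes) -> int:
    """Upper bound on Kolmogorov complexity: the compressed size."""
    return len(zlib.compress(data, 9))

regular = b"qualia " * 500       # 3500 bytes, highly regular
random_data = os.urandom(3500)   # 3500 bytes, near-maximal complexity

low = k_estimate(regular)
high = k_estimate(random_data)   # low << high for the same length
```

So another agent's "complexity" is only ever bounded from above, never known exactly, which fits the computable/estimable split.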

Why does this help AGI? This universe has inherent 
separateness/distributedness. It's the same reason why there is no single 
general compression algorithm.

John



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M09d11c426cbd235dd276652c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
On Thursday, August 29, 2019, at 6:32 AM, Nanograte Knowledge Technologies 
wrote:
> Qualia are communicable.
> As such, I propose a new research methodology, which pertains to one-off 
> valid and reliable experimentation when dealing with the "unseen". The 
> "public" and "repeat" tests for vetting it as science could be replaced by a 
> suitably-representative body of reviewing scientists who are accredited in 
> the limitations of subjective, scientific observation.

Originally, I did not like the word "qualia", but it's actually quite good. 
When Chalmers or whoever named it put his or her stake on that location in the 
language, on that combination of letters, it was a good choice.

Part of the issue here is that engineers, particularly software engineers, 
cannot wait for science in many cases. It is allowed to break physics or 
invent new ones in a virtual world. And engineers need words to put into code. 
Also, there are many symbol issues in contemporary language that have not been 
addressed generally. So two conscious entities need better communications 
channels to convey structure more efficiently, and this is easier to do among 
software agents versus humans, by expanding the symbol complexity and 
bandwidth. In a perfect world, full qualia would be instantly transmittable. 
But this is facilitated contemporarily by transmitting multimedia versus just 
natural language, thus the addition of mechanisms like MMS, video 
conferencing, realtime document sharing, etc.

Some researchers say qualia cannot be transmitted. I would change that to say 
full qualia are not transmittable yet.

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M75616ccb2a402d5bdda20964
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread johnrose
Clarified:

AGI={I,C,M,PSI}={I,UCP+OR,M,BB}; BB=Black Box

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M3849c56767c291ea6a534cf9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-25 Thread johnrose
On Friday, August 23, 2019, at 9:57 PM, keghnfeem wrote:
> Consciousness is Memory:
> https://vimeo.com/98785998

Uhm, I was thinking that intelligence is memory. Consciousness is now. 
Intelligence is what comes before and after now.

Could be wrong though, I guess... life is a recording that can be replayed.

Consciousness is the act of occupying representation. Intelligence is a memory 
of, and a synthesis of, that occupation. Then general intelligence is applying 
new occupation to representations that have morphisms back to previous 
occupations, guided by the intelligent synthesis of memory.

Or something like that...

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Me8e73779b8c4ba73bdf070b0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread johnrose
Intelligence, Memory, Consciousness for AGI is a very nice 3-tuple:

AGI = {I,C,M}

Any missing elements?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M4a0bc51d34f8bb88282cda4c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread johnrose
On Monday, August 26, 2019, at 7:44 AM, immortal.discoveries wrote:
> Encoding information, remembering information, decoding information, paying 
> attention to context, prediction forecast, loop back to step 1, is the main 
> gist of it. This has generation, feedback, and adapting temporal patterns.

These would all fit into {I,C,M}.

Some researchers say C=0 or null, but C is very convenient for throwing extra 
stuff into :)

I'd say as C increases I goes to zero. What if M increases? 

But they all borrow from each other.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M46093dea6896f817cfc22060
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-01 Thread johnrose
On Friday, August 30, 2019, at 2:31 AM, Nanograte Knowledge Technologies wrote:
> But, I strongly disagree with the following statement, for it contains an 
> inherent contradiction. 
>  
>  "It is allowed to break physics or invent new ones in a virtual world." 
> 
> No, they should not be allowed. The definition of engineering, as putting 
> method to science, denounces such anarchism. Engineers have to take method 
> and use it in context of science. If no science exists yet, they seemingly 
> have the obligation to try equally
> hard to develop and formalize it.

What I meant, for example, is that old saying: what goes faster than the speed 
of light? Thought. I always considered that stupid but it actually isn’t. If you
have models in a software virtual world they can break all kinds of physics 
(and mathematics) in an attempt to shortcut to solutions and/or model more 
accurately with existing resources.

A few wise Yogi Berra quotes:
"In theory there is no difference between theory and practice. In practice 
there is."
"We made too many wrong mistakes."
"If the world was perfect, it wouldn’t be."

What is one way to bypass combinatorial explosions? Break rules.  Shhh it’s a 
secret :) and it’s OK. That’s how things work.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Ma37955495624271bf462819d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread johnrose
How about:  Write an expression for or compute the consciousness of a clock.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M77f3de8ef0fd657b53de65f3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread johnrose
"Shortcut",  yes there is no shortcut... or is there?

"Consciousness is what thinking feels like." EXACTLY!  Define "feel" in the 
mathematical sense.

We coat concepts with words (symbols). Where do they come from?

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M8edbe4e01cbc5c453355e4f4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread johnrose
"AGI is  100 percent consciousness" 

Please throw the AI guys a bone, like 10%?  Even though it's mostly grunt work.

Sorry, I don't really feel that way. I know there is something there, there is!

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M314ab7617739e60133f2fef2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread johnrose
"Consciousness has to do with observing temporal patterns."
The term "pattern" is... obscure, I'm afraid, so I try to avoid it, but...

It's more than observing; I would say occupying representation. A pattern is a 
representation. Is it only terminology?

Two patterns from different domains - the key is how do they relate. A rat 
cannot relate them (well some yes) but an AGI can.

John


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M0771d00d70371748bdf4631d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 3:35 PM, Secretary of Trades wrote:
> https://philpapers.org/archive/CHATMO-32.pdf#page=50

Blah blah blah.

From the AGI perspective we are interested in the multi-agent computational 
advantages in distributed systems that consciousness (or by other names) 
facilitates. Thus I look at the communication aspects like communication 
complexity, protocol, structure, etc., which are an external view, not a 
first-person narrative of phenomenal consciousness that many people are so 
obstinately hung up on. Thus the utilitarian Qualia = Compressed impressed 
samples symbolized for communication. Though I think the first-person narrative 
is addressed by this also, it's not my goal.
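A minimal toy sketch of this utilitarian reading (all names, the 3-bucket 
codebook, and the values are my own illustrative assumptions, not anything from 
the thread): an agent lossily compresses a raw sample, fits it into a shared 
symbol, and transmits that; the receiver recovers only the coarse bucket, never 
the original impression.

```python
# Toy sketch: "qualia" as a lossily compressed sample fitted into a shared
# symbol for transmission. The exact original never crosses the channel.

CODEBOOK = {0: "cold", 1: "warm", 2: "hot"}  # shared symbol table (protocol)

def compress(sample: float) -> int:
    """Lossy compression: quantize a 0..1 reading into one of 3 buckets."""
    return min(2, int(sample * 3))

def symbolize(bucket: int) -> str:
    """Fit the compressed impression into a transmittable symbol."""
    return CODEBOOK[bucket]

def receive(symbol: str) -> int:
    """Decode the symbol back to a bucket -- the exact original sample
    is gone; the loss happened before transmission."""
    inverse = {name: b for b, name in CODEBOOK.items()}
    return inverse[symbol]

raw = 0.62                       # the agent's private, exact impression
sent = symbolize(compress(raw))  # what actually crosses the channel
print(sent)                      # -> warm
print(receive(sent))             # -> 1 (a bucket, not 0.62)
```

The point of the sketch is only that the communicated artifact is a symbol 
standing for an equivalence class, not the sample itself.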

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mf579060433b8625fb3c512fd
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> Are you sure you wouldn't be better served by calling your ideas some other 
> names than "consciousness" and "qualia," then?  We're all getting "hung-up 
> on" the concepts that those terms actually refer to. 

Good question.

That's what's been going on already. But in this age of intelligence it's time 
to take back what is ours and also preserve human consciousness. Also, 
human-machine communications are better served by calling it thus, IMO. And why 
let narrow-minded visionaries control the labeling? That's a control strategy. 
Shoot for the stars. Consciousness is the full package, not little bits and 
pieces to tiptoe around.

This might be premature but at some point it'll be trendy to call it as it is 
IMO.

On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> I do not see how communication protocols have anything to do with 
> consciousness as it is usually understood.

People communicate their conscious experiences no? Machines do that too :) 
Machines use communication protocols.

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M80abe3880277b7daf241686e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> People can only communicate their conscious experiences by analogy. When you 
> say "I'm in pain," you're not actually describing your experience; you're 
> encouraging me to remember how I felt the last time *I* was in pain, and to 
> assume you feel the same way. We have no way of really knowing whether the 
> assumption is correct.
> 

That’s protocol. They sync up. We are using an established language, but it 
changes over time. The word "pain" is a transmitted compression symbol that's 
already understood not to be always the same, but the majority of others besides 
oneself have a similar experience. Some people get pleasure from pain due to 
different wiring or neurochemicals or whatever. There might be a societal 
tendency for them not to breed.


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> We can both name a certain frequency of light "red" and agree on which 
> objects are "red." But I can't tell you what my visual experience of red is 
> like, and you can't tell me what yours is like. Maybe my red looks like your 
> green -- the visual experience of red doesn't seem to inhere in the 
> frequency's numerical value, in fact color is nothing like number at all, so 
> nothing says my red isn't your green. "Qualia" refers to that indescribable 
> aspect of the experience. If your "qualia" can be communicated with symbols, 
> or described in terms of other things, then we're not talking about the same 
> concept -- and using the same word for it is just confusing.

Think multi-agent. Say my red is your green and your green is my red. We are 
members of a species sampling the environment. If we all saw it the same way, 
would it impact evolution? You don’t know my qualia on red. But you do 
understand me communicating the experience using words and symbols generally 
understood, and that is what matters from the multi-agent computational 
standpoint. We are multi-sensors emitting compressed samples via symbol 
transmission, hoping the external world understands, but the initial sample is 
lossily compressed and fitted into a symbol to traverse a distance. We may never 
know that your green is my red.
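The inverted-spectrum point can be sketched as a toy model (agent names, 
frequencies, and the "private experience" strings are all hypothetical, chosen 
only for illustration): two agents carry different private 
frequency-to-experience mappings, yet share the same public symbol table, so the 
protocol never exposes the difference.

```python
# Toy inverted-spectrum sketch: private mappings differ per agent, but both
# learned the same public symbol for each light frequency, so communication
# between them is indistinguishable from full agreement.

FREQ_RED, FREQ_GREEN = 430, 560  # approximate frequencies in THz

class Agent:
    def __init__(self, private_experience):
        # frequency -> private inner experience (never transmitted)
        self.private = private_experience
        # frequency -> public symbol (shared protocol, same for everyone)
        self.public = {FREQ_RED: "red", FREQ_GREEN: "green"}

    def name(self, freq):
        return self.public[freq]

alice = Agent({FREQ_RED: "experience-A", FREQ_GREEN: "experience-B"})
bob   = Agent({FREQ_RED: "experience-B", FREQ_GREEN: "experience-A"})  # inverted

# Public agreement on every stimulus, despite inverted private mappings:
for f in (FREQ_RED, FREQ_GREEN):
    assert alice.name(f) == bob.name(f)
print("labels agree; private mappings differ:",
      alice.private[FREQ_RED] != bob.private[FREQ_RED])
```

Nothing observable in the exchanged symbols distinguishes the two agents, which 
is exactly the "your green may be my red" situation described above.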


On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> Going back to your computer-and-mouse example: if I admit your panpsychist 
> perspective and assume that a computer mouse has qualia, those qualia are not 
> identified with the electro-mechanical events inside the mouse.  I could have 
> full knowledge of those (fully compute or model them) without sharing the 
> mouse's experience.

You can compute mouse electro-mechanics at a functional level, but between two 
mice there are actual vast differences in electron flow and microscopic 
mechanical differences. You still are only estimating what is actually going 
on, or the K-complexity, or the qualia. There could be self-correcting errors in 
one, but the signal, the clicks sent to external entities, is the same...

Please note that terminology gets usurped with technology when implemented. 
Should we not call intelligence intelligence? Usually it is prepended with 
"artificial", but IMO that's the wrong move. It is intelligence, or better, 
machine intelligence. Should we not call an artificial eye an eye? What's so 
special about the word consciousness that everyone gets all squirmy about it?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-M640c41a41bf4e294765e68a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-28 Thread johnrose
On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> Great, seems like we've reached agreement on something.
> When we communicate with words like "red," we're really communicating about 
> the frequency of light. I would argue that we are not communicating our 
> qualia to each other. If we could communicate qualia, we would not have this 
> issue of being unable to know whether your green is my red. Qualia are 
> personal and incommunicable *by definition,* and it's good to have that 
> specific word and not pollute it with broader meanings.

We can't fully communicate our qualia, only a representation of that which we 
ourselves lose the exact reconstruction of. That's the inter-agent part of it. 
How do you know any qualia ever existed? They are communicated. They are fitted 
into words/symbols, IMO like a pointer in the programming sense. This is all 
utilitarian, not philosophical.

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> In the mouse example, I was assuming that I had fully modeled the 
> electro-mechanical phenomena in *this specific* mouse. I still don't think 
> that would give me its qualia.

There is only a best guess within the context of the observer...

On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> I would be happy to refer to a machine with an incommunicable first-person 
> subjective experience stream as "conscious." But you've admitted that you're 
> not trying to talk about incommunicable first-person subjective experiences, 
> you're trying to talk about communication. I'm not concerned with whether the 
> "consciousness" is mechanical or biological, natural or artificial; I'm 
> concerned with whether it's actually "consciousness."

A sample, lossily compressed internally, symbolized. We lose the original, 
basically. You can't transmit the whole qualia; it's gone. Yes, the utilitarian 
aspect of it is that it is all about communication in a system of agents. 
Everything is not first-person. AGI researchers are so occluded by 
first-person. Human general intelligence is not one person but a system of 
people... a baby dies in isolation.

Another piece of this is occupying representation. A phenomenal conscious 
observer may assume the structure that is transmitted in its symbolic form and 
attempt to reconstruct the original lossy representation based on its own 
experience.

Not really aiming for human phenomenal consciousness now but more panpsychist. 
Objects inherently contain structure that can be extracted into discrete 
representation that can be fitted systematically with similar structure of 
other objects.

...

I want to tell you a secret but it's incommunicable. Guess what. It's already 
been communicated.

Can I ask you a question? Thanks, no need to answer.

I felt a unique incommunicable sensation. I call it Gloobledeglock.  Have you 
ever felt Gloobledeglocked?

John

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T41ac13a64c3d48db-Mfcb6e0f90becb8dba4791d4a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FAO: Senator Reich. Law 1

2019-09-05 Thread johnrose
Are you guys testing chatbots or... gibberish generators?  This isn't a Discord 
or Telegram channel.

Maybe I'm not comprehending the topic of discussion... 

John
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T05e88de3f0e66ad3-M7ac2cd616160c14d307288e5
Delivery options: https://agi.topicbox.com/groups/agi/subscription