>Alan
>
>We really really don't have any more time to waste on...
>
Agreed. Compadres, we should not engage in FUD, gaslighting, trolling, etc.
>
>MP
> BUT AT LEAST HE HAS SOMETHING.
As Google knows, searching is SOMETHING, but it is better to understand what
one is searching for, though often what we
Kimera - I just looked at this a little, and translating: they have a working
"AGI" that can do AI currently, real AGI is on the roadmap, but they need more
funding for marketing, partnerships, and development.
Apparently about 80% of their "team/advisers" are non-engineers.
See the ICO whitepaper, page 18.
The patent affirms what I was saying - the app/server sees that others in the
same movie theater have dimmed their screens, so it dims the screen for that
user. Not AGI... just a database-query add-on to a location service.
"As another example, a Service node may reference
an application that controls user device
Why would anyone want to do that?
Ans: For model checking on a distributed imagination.
Just figured I'd throw that out there.
John
--
Artificial General Intelligence List: AGI
Permalink:
If the stream of consciousness (sampled, securely stored in blocks, and
distributed in a decentralized autonomous multi-agent system) is inaccurate,
i.e. hacked, the imagined models could be distorted. The distributed,
decentralized AGI's imagination could be intentionally "influenced" in
Walking this further:
Nuzz: Facebook is centralized. They own your data. You are the product. They
get hacked.
Mahoney: This is about consensus not competition.
So... full nodes, masternodes, multi-component. One component set for
rendering models, one for checking. Consensus is n
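The rendering/checking split could be sketched as a majority vote among checker nodes. Everything here (the node roles, hashing as a stand-in for "rendering a model," the simple majority rule) is my own illustrative assumption, not anything specified in this thread:

```python
import hashlib
from collections import Counter

def render_model(node_id: int, world_state: str) -> str:
    """Hypothetical 'rendering' component: each node derives a model of the
    shared world state. An honest node renders deterministically; a hacked
    node could return anything."""
    return hashlib.sha256(world_state.encode()).hexdigest()

def check_by_consensus(renderings: list[str]) -> str:
    """Hypothetical 'checking' component: accept the model that a strict
    majority of nodes agree on, so one distorted imagination is outvoted."""
    model, votes = Counter(renderings).most_common(1)[0]
    if votes <= len(renderings) // 2:
        raise ValueError("no majority - imagination cannot be trusted")
    return model

honest = [render_model(i, "spaceship at Pluto") for i in range(4)]
hacked = ["distorted model"]  # one tampered node
assert check_by_consensus(honest + hacked) == honest[0]
```

The point is only that checking can be a separate, cheap component: outvoting a single hacked node does not require re-rendering anything.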
Arthur,
Every time you start posting about your "AI Mind" app, I briefly go and look at
the JS source ("View page source" in the web browser), and here are a few
thoughts (after working with thousands of codebases over the years, and
instead of me just saying "If there were an example of
Here are a few more blockchain distributed-computing videos. Applicable? Maybe.
Entertaining? Yes.
The networks are probably laggy, since some just use idle machine resources,
like BOINC, but allow buying and selling via coins or tokens. Still, not every
AGI component needs hyper-low-latency computing.
Oh OK, everybody, you can throw away your keyboards; Mentifex created the first
AGI...
Problem is, only he can read the code! LOL
John
Watched this Kafkaesque movie last eve called "Enemy" and this message thread
for some bizarre reason reminds me of the beginning script:
"It's all about control.
Every dictatorship has
one obsession,
and that's it.
So, in ancient Rome,
they gave the people bread
and circuses.
They kept the
Basically, if you look at all of life (Earth only, for this example) over the
past 4.5 billion years, including all the consciousness and all that "presumed"
entanglement, and say that's the first general intelligence (GI), the algebraic
structural dynamics on the computational edge... is
Nanograte,
> In particular, the notion of a universal communication protocol. To me it
> seems to have a definite ring of truth to it.
It does, doesn't it?!
For years I've worked with signaling and protocols, lending some time to
imagining a universal protocol. And for years I've thought about
Matt,
Zoom out. Think multi-agent, not single-agent; multi-agent internally and
externally. Evaluate this proposition not from a first-person narrative and it
begins to make sense.
Why is there no single general compression algorithm? Same reason as general
intelligence, thus, multi-agent, thus
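The compression analogy can be made precise: by a counting (pigeonhole) argument, no single lossless compressor can shrink every input, since there are 2^n strings of n bits but only 2^n - 1 shorter ones. A minimal demonstration, with zlib as a stand-in for any general-purpose compressor:

```python
import random
import zlib

# Structured data compresses; incompressible (random) data actually grows,
# because the compressor must still be injective and pays header overhead.
random.seed(0)
incompressible = bytes(random.randrange(256) for _ in range(10_000))
compressible = b"ab" * 5_000  # same length, highly structured

assert len(zlib.compress(compressible)) < len(compressible)
assert len(zlib.compress(incompressible)) >= len(incompressible)
```

So every compressor is specialized to some class of regularities, which is the parallel being drawn to general intelligence.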
> -Original Message-
> From: Matt Mahoney via AGI
>...
Yes, I'm familiar with these algorithmic information theory *specifics*. Very
applicable when implemented in isolated systems...
> No, it (and Legg's generalizations) implies that a lot of software and
> hardware
> is required and
On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote:
> I don't even think that stuff is relevant.
Jim,
It's relevant if consciousness is the secret sauce, and if it applies to the
complexity problem.
Would a non-conscious entity have a reason to develop AGI?
John
Reread the paper; it makes more sense each time you read it:
"The main idea is to regard “thinking” as a dynamical system operating on
mental states:"
Then think about how the system would learn to drive a car, for example... then
learn to fly an airplane.
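The "dynamical system operating on mental states" framing can be caricatured in a few lines. The state representation and update rule below are entirely my own toy reading of s_{t+1} = f(s_t, observation), not the paper's actual model:

```python
# One step of the dynamical system: fold an observation into the mental state.
def think(state: dict, observation: str) -> dict:
    return {
        "history": state["history"] + [observation],
        "skill": state["skill"] + 1,  # crude stand-in for accumulated learning
    }

state = {"history": [], "skill": 0}
for obs in ["steer", "brake", "accelerate"]:  # "learning to drive"
    state = think(state, obs)
for obs in ["pitch", "roll", "yaw"]:          # then "learning to fly"
    state = think(state, obs)
assert state["skill"] == 6
```

The car-to-airplane transfer is then just the same update rule iterated over a new observation stream, carrying the prior state forward.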
John
Possible correction here: this is modeling consciousness assuming everything is
conscious; "panpsychism," is it?
I mentioned pondering pure randomness. That might not be right; it might be when
pondering pure nothingness. Would pure nothingness have a consciousness of
everything, or pure
On Saturday, August 24, 2019, at 11:15 AM, keghnfeem wrote:
> The human mind builds many temporal patterns and pick the best one. Since wet
> neurons are so
> slow. Also the human brain build many temporal patterns that will occur or
> could occur if predicted
> patterns fails. Also, the brain
On Tuesday, August 27, 2019, at 7:51 AM, immortal.discoveries wrote:
> I believe consciousness doesn't exist for many, many, reasons, ex. physics,
> our brain being a meta ball from the womb, learned cues, etc. I am purely a
> robot from evolution, with no life. The moment you hear that you fear
I was expressing panpsychist mathematical modeling with consciousness as a
Universal Communication Protocol and Occupying Representation, in case you
didn't notice. This has much overlap with other AI fields...
keghnfeem, we may have some similar ideas; I see you have something called a
Visual
Our minds are simulating most everything. We can imagine a model where a
spaceship goes from Earth to Pluto in 1 second virtually breaking the speed of
light (we know it isn't really). My thoughts were that consciousness is the one
piece of the mind that isn't a model or a simulation. And for a
Great video. Reminds me of this:
https://external-preview.redd.it/aEB0JKhofXy2Feiu2QrzZRRsLgCBwS8cRbVZwUZHjkE.gif?width=640=mp4=5f296022e7875f78f78d6ea9fa1f15e15ad5f8e2
John
Qualia flow, the dots are qualia :)
https://www.youtube.com/watch?v=vw9vjEB1S2Y
Transform into text:
https://www.youtube.com/watch?v=myFR8FTXOM4
John
On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote:
> Like I said when I first posted on this thread, phenomenal consciousness is
> neither necessary nor sufficient for an intelligent system.
This is the premise that you are misguided by. Who is building the intelligent
systems?
On Monday, August 26, 2019, at 5:25 PM, WriterOfMinds wrote:
> "What it feels like to think" or "the sum of all a being's qualia" can be
> called phenomenal consciousness. I don't think this type of consciousness is
> either necessary or sufficient for AGI. If you have an explicit goal of
>
On Thursday, August 29, 2019, at 6:32 AM, Nanograte Knowledge Technologies
wrote:
> Qualia are communicable.
> As such, I propose a new research methodology, which pertains to one-off
> valid and reliable experimentation when dealing with the "unseen". The
> "public" and repeat" tests for
Clarified:
AGI = {I, C, M, PSI} = {I, UCP+OR, M, BB}, where I = Intelligence,
C = Consciousness, M = Memory, UCP = Universal Communication Protocol,
OR = Occupying Representation, and BB = Black Box.
John
On Friday, August 23, 2019, at 9:57 PM, keghnfeem wrote:
> Consciousness is Memory:
> https://vimeo.com/98785998
Uhm, I was thinking that intelligence is memory. Consciousness is now.
Intelligence is what comes before and after now.
Could be wrong though I guess... life is a recording that can
Intelligence, Memory, Consciousness for AGI is a very nice 3 tuple:
AGI = {I,C,M}
Any missing elements?
John
On Monday, August 26, 2019, at 7:44 AM, immortal.discoveries wrote:
> Encoding information, remembering information, decoding information, paying
> attention to context, prediction forecast, loop back to step 1, is the main
> gist of it. This has generation, feedback, and adapting temporal
On Friday, August 30, 2019, at 2:31 AM, Nanograte Knowledge Technologies wrote:
> But, I strongly disagree with the following statement, for it contains an
> inherent contradiction.
>
> "It is allowed to break physics or invent new ones in a virtual world."
>
> No, they should not be
How about: Write an expression for or compute the consciousness of a clock.
"Shortcut", yes there is no shortcut... or is there?
"Consciousness is what thinking feels like." EXACTLY! Define "feel" in the
mathematical sense.
We coat concepts with words (symbols). Where do they come from?
John
"AGI is 100 percent consciousness"
Please throw the AI guys a bone, like 10%? Even though it's mostly grunt work.
Sorry, I don't really feel that way; I know there is something there, there is!
John
"Consciousness has to do with observing temporal patterns."
The term "pattern" is... obscure, I'm afraid; I try to avoid it, but...
It's more than observing; I would say occupying representation. A pattern is a
representation. Is it only terminology?
Two patterns from different domains - the key is how do
On Wednesday, August 28, 2019, at 3:35 PM, Secretary of Trades wrote:
> https://philpapers.org/archive/CHATMO-32.pdf#page=50
Blah blah blah.
From an AGI perspective, we are interested in the multi-agent computational
advantages in distributed systems that consciousness (or by other names)
On Wednesday, August 28, 2019, at 4:07 PM, WriterOfMinds wrote:
> Are you sure you wouldn't be better served by calling your ideas some other
> names than "consciousness" and "qualia," then? We're all getting "hung-up
> on" the concepts that those terms actually refer to.
Good question.
On Wednesday, August 28, 2019, at 5:09 PM, WriterOfMinds wrote:
> People can only communicate their conscious experiences by analogy. When you
> say "I'm in pain," you're not actually describing your experience; you're
> encouraging me to remember how I felt the last time *I* was in pain, and to
On Wednesday, August 28, 2019, at 6:49 PM, WriterOfMinds wrote:
> Great, seems like we've reached agreement on something.
> When we communicate with words like "red," we're really communicating about
> the frequency of light. I would argue that we are not communicating our
> qualia to each
Are you guys testing chatbots or... gibberish generators? This isn't a Discord
or Telegram channel.
Maybe I'm not comprehending the topic of discussion...
John