Re: [agi] How does a machine "understand"? What is your definition of "understanding" for an AGI?

2021-07-12 Thread Quan Tesla
When an AGI, of its own volition, applying a behavioural rule set for a situation, a rule set it developed from experience alone, recognizes its own mistake and is able to correct it, it will have demonstrated a notion of "understanding". In practice, this would be one step short of

Re: [agi] Toward a Useful General Theory of General Intelligence

2021-04-03 Thread Quan Tesla
This conversation was keeping me awake. I realized how silly my questions and statements must seem. Silly, not in a scientific sense, but silly nonetheless, because there's no way Sophia would've advanced to the level of technology it did, if it wasn't developed from a quantum-centric perspective.

Re: [agi] Re: Black Winter

2021-08-20 Thread Quan Tesla
On August 19, 2021, at 10:11 AM, Quan Tesla wrote: > > ... would you consider your intelligence to be committed to a rapid > evolutionary process with purpose to eventually assume network-interactive > cyborgian functionality? > > > Nope. > > I might describe myself as transhum

Re: [agi] A letter on Foundation Models, presented here without opinion.

2021-08-31 Thread Quan Tesla
Thanks for sharing. A good read. I was surprised to learn of this "dominant" view. On Tue, Aug 31, 2021 at 8:23 AM Daniel Jue wrote: > http://prg.cs.umd.edu/articles/open-letter > > -- > Daniel Jue > Cognami LLC > 240-515-7802 > www.cognami.ai

Re: [agi] A letter on Foundation Models, presented here without opinion.

2021-08-31 Thread Quan Tesla
I think the point may be relevant to full recursiveness within robotic systems, or open- and closed-loop learning and, at the least, biofeedback systems via a sensorial version of a functional Central Nervous System (CNS). According to the letter, it seems a "dominant" group decided how such

Re: [agi] Re: Black Winter

2021-08-19 Thread Quan Tesla
I have a general question. Given the nanotech now integrated with your DNA, would you consider your intelligence to be committed to a rapid evolutionary process with purpose to eventually assume network-interactive cyborgian functionality? I concede there are other methods with which to achieve

Re: [agi] Re: Black Winter

2021-08-19 Thread Quan Tesla
For many years, a program called Magic has been rewriting and merging code from different languages. Pitrat's AGI autogenerated code and, in doing so, improved upon known solutions to many classical mathematical problems. On 19 Aug 2021 16:27, "John Rose" wrote: > On Thursday, August 19, 2021, at

Re: [agi] Re: Black Winter

2021-08-19 Thread Quan Tesla
Have you read my white paper on last-mile knowledge engineering, which I shared more than once with this forum? If you haven't, do go read it and let's discuss/critique the technical implications for AGI development. On 19 Aug 2021 09:00, wrote: > I'm not sure what you're trying to say NKT, I'm

Re: [agi] New Scores

2021-08-19 Thread Quan Tesla
This is an interesting phenomenon. Are you running checksums? On 19 Aug 2021 21:13, "Matt Mahoney" wrote: > I've run into that problem too, that text prediction degrades to repeating > characters. There are no English words that repeat the same character 3 > times in a row, but this pattern is
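Matt's observation about triple-character runs suggests a cheap degeneracy check for predicted text. The following is an illustrative sketch only, not part of anyone's actual predictor or compressor; the helper name `has_triple_run` is invented here:

```python
import re

def has_triple_run(text: str) -> bool:
    """True if any character appears 3+ times consecutively,
    a cheap heuristic for degenerate predicted text."""
    return re.search(r"(.)\1\1", text) is not None

# Ordinary English has at most double letters ("bookkeeper"),
# so this flags only repetition-collapse output.
print(has_triple_run("bookkeeper"))   # False
print(has_triple_run("the the aaaa")) # True
```

A backreference regex like `(.)\1\1` is enough here because the pattern of interest is literal character repetition, not word-level repetition.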

Re: [agi] Re: Pre-crime in the multiverse - is REAL! This is mind bending.

2021-09-02 Thread Quan Tesla
"Full employment can be had with the stroke of a pen. Simply institute a six hour workday. That will easily create enough new jobs to bring back full employment." Would this mean that government employees would have to productively work 2 hours extra per day? On Thu, Sep 2, 2021 at 6:38 AM Steve

Re: [agi] Re: Pre-crime in the multiverse - is REAL! This is mind bending.

2021-09-05 Thread Quan Tesla
:02 AM Matt Mahoney wrote: > On Thu, Sep 2, 2021 at 5:57 AM Quan Tesla wrote: > > > > "Full employment can be had with the stoke of a pen. Simply institute a > six hour workday. That will easily create enough new jobs to bring back > full employment." > > >

Re: [agi] Pre-crime in the multiverse - is REAL! This is mind bending.

2021-09-01 Thread Quan Tesla
Most interesting Steve. Intent does play a significant role in countries where Westminster Law applies. However, it also plays a significant role in determining the outcomes of a value chain for a fully-networked architecture. I refer to such an architecture as PIA (Process In Action). As such,

Re: [agi] Presentation link

2021-10-17 Thread Quan Tesla
Thank you for sharing your work. I'm not going to comment about the VI and Brain I/II contents. However, your assertions/assumptions about what comprises artificial general intelligence (the machine must perform a wide range of general human-level tasks in diverse environments) lack academic

Re: [agi] Re: um... ASI problem, thoughts?

2021-10-01 Thread Quan Tesla
Three years on, I still ask: "Show me your AGI architecture." Excellent design does not cost a lot of money. Neither does prototyping from excellent design. Now we should be asking instead: "Show us your prototype." On 21 Sep 2021 07:38, "Matt Mahoney" wrote: > On Mon, Sep 20, 2021 at 12:59 PM

Re: [agi] Re: um... ASI problem, thoughts?

2021-10-01 Thread Quan Tesla
If you have to imagine your system by coding it, then it's not designed to completion. Clearly we mean different things by systems design. In my view, an excellent design is completed when all the architectural layers have been specified, normalised, integrated, optimized, and logically tested

Re: [agi] Re: Advanced robots

2021-08-28 Thread Quan Tesla
I agree with Ben and Rob. My active research is into finding a fractal root with which to unlock a quantum reality. My current academic oversight is adamant that we're close. Except for a usable key to the quantum door, all constituent parts already exist. And that key already exists too. It needs

Re: [agi] Re: I think my idea is shot/ worked

2021-10-21 Thread Quan Tesla
Brava! WOM. Brava! Candid, concise, and motivational. One thing we do not seem to lack is passion. On Tue, Oct 19, 2021 at 6:21 AM WriterOfMinds wrote: > I'm sorry, ID. A dead end isn't necessarily a failure, though, not if you > learned something. Sometimes we have to go down a path

Re: [agi] drug name

2021-11-21 Thread Quan Tesla
A reasonably informed, fully-vaccinated man recently told me that a group of them went to a particular institution to receive their treatments. He stated that of the 7 vaccinated, not a single one experienced any physical sensations during, or afterwards. He asserted that he believed they were

Re: [agi] All Compression is Lossy, More or Less

2021-11-18 Thread Quan Tesla
JB wrote: "...because "bracketing " is a technical term used in the context of phenomenology -- a technical term referring to DEcontextualization, or rather, removing subjective interpretation from experience." Say what? Erm, with regards

Re: [agi] All Compression is Lossy, More or Less

2021-11-13 Thread Quan Tesla
Godel might muse that even a system with its head up its ass cannot know itself to completion. On Sat, Nov 13, 2021 at 8:43 AM James Bowery wrote: > What would Godel say about a NOT gate with its input connected to its > output? > > On Fri, Nov 12, 2021 at 9:28 PM Quan
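James's NOT gate with its input tied to its output is, under the usual unit-delay idealisation, just a ring oscillator: it has no fixed point and never settles. A toy discrete-time sketch (my own illustration, assuming synchronous updates with one gate delay per step) makes that concrete:

```python
def not_loop(initial: int, steps: int) -> list[int]:
    """Simulate a NOT gate whose output feeds back into its
    input, assuming one time-step of gate delay per update."""
    state = initial
    trace = []
    for _ in range(steps):
        state = 1 - state  # invert: the NOT gate
        trace.append(state)
    return trace

# No state satisfies x == NOT x, so the loop oscillates forever.
print(not_loop(0, 6))  # [1, 0, 1, 0, 1, 0]
```

Treated combinationally (zero delay) the same circuit is the contradiction x = NOT x, which is the flavour of self-reference the Gödel quip is gesturing at.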

Re: [agi] I came across a new Book on how to build AGI

2021-11-15 Thread Quan Tesla
I think we're talking about two different kinds of referencing. I was talking about a "References" section in the back of the book as an academic requirement to acknowledge citations and quotations within the body of work. Unless you do not consider systems architecture and AGI as 'Systems Science', in

Re: [agi] All Compression is Lossy, More or Less

2021-11-14 Thread Quan Tesla
ms, rather relevant to the AGI discussion. On Sat, Nov 13, 2021 at 6:57 PM James Bowery wrote: > "head up its ass" is a cute aphorism for the ancient concept of Maya but > doesn't really reflect the rigorous reformulation that Godel owes us. > > On Sat, Nov 13, 2021 at 7:10 A

Re: [agi] I came across a new Book on how to build AGI

2021-11-14 Thread Quan Tesla
I'm disappointed that it had no "References" section. First thing I go look for. On Thu, Nov 11, 2021 at 2:54 AM Immortal Discoveries < immortal.discover...@gmail.com> wrote: > I came across a new Book on how to build AGI > > If you want a paperback version of the book you can buy it here: > >

Re: [agi] All Compression is Lossy, More or Less

2021-11-12 Thread Quan Tesla
Gödel's incompleteness theorem still wins this argument. However, what really happens in unseen space remains fraught with possibility. The question remains: how exactly is this relevant to AGI? In transition, energy is always "lost" to externalities. Excellent design would limit such losses to

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-02 Thread Quan Tesla
Jim. I still think you haven't passed the Turing test yet. I've seen these patterns of behavior before. They remind me of how I became suspicious of you being a Sophia-type bot more than a year ago. If you learned the Po1, such patterns would probably start disappearing in time. On Mon, Nov 1,

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-10-27 Thread Quan Tesla
Jim, if you will, let me rephrase my direct question. A simple 'Y, N, or Not Sure' answer would suffice. Do you think that AGI would emerge from your approach to programmable logic? On 28 Oct 2021 01:02, "Jim Bromer" wrote: > Nano: You do not seem to understand what I am saying and you

Re: [agi] AGI Movie Screenplay

2021-10-23 Thread Quan Tesla
Interesting notion. I wish you all the best. As long as you do justice to AGI greats such as 'Automata' and the 'Better than us' series made in Russia. On 23 Oct 2021 08:34, "Mohammadreza Alidoust" wrote: > My dear AGI friends, > > > I have an idea about an AGI movie. It is just in my mind and I

Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

2021-11-04 Thread Quan Tesla
Yup! There's a lot of stuff our Jim could be expected to know about, and seemingly has no clue of. Anyone else remember the reports about the angry MS bots who rapidly learned how to be racist? ;-) On Wed, Nov 3, 2021 at 10:58 PM wrote: > Jim Bromer is James Bowery no? Why doesn't he know about

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-16 Thread Quan Tesla
The search for a scientific solution to ambiguity? I think this might be a boundary issue. On 14 Mar 2022 20:39, "Rob Freeman" wrote: > In my presentation at AGI-21 last year I argued that semantic primitives > could not be found. That in fact "meaning", most evidently by the > historical best

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-17 Thread Quan Tesla
I think Ben is on the right track. I agree that the primitives he is searching for exist. In that context, a word is just proof of an appropriate mathematical primitive, one associated with consciousness. Chicken and eggs. Mathematics must come first. In terms of a working context of

Re: [agi] IBM and AGI

2022-02-01 Thread Quan Tesla
The race is on. The rules have changed, even for giants. If IBM fail, they'll disappear into their own black hole. They'll be remembered, by some, as an institution that once had the whole world at its feet, yet had to resort to ever more desperate measures to retain that position. On 2 Feb

Re: [agi] How AI will kill us

2023-09-25 Thread Quan Tesla
But, in the new world (this dystopia we're existing in right now), free lunches for AI owners are all the rage. It's patently obvious in the total onslaught by owners of cloud-based AI who are stealing IP, company video meetings, home footage, biometrics, privacy-protected data, government data,

Re: [agi] How AI will kill us

2023-09-26 Thread Quan Tesla
Incredible. We won't believe hard science, but we'll believe almost everything else. This is "The Truman Show" all over again. On Wed, Sep 27, 2023, 01:20 EdFromNH wrote: > Re: How AI will kill us: > > Regarding whether AI will kill all intelligent lifeforms on earth, there > is testimony

Re: [agi] How AI will kill us

2023-09-25 Thread Quan Tesla
However, Pitrat's legacy stands tall. No doubt his machine demonstrated superintelligence, even if his work caused the machine to be. On Mon, Sep 25, 2023, 22:02 James Bowery wrote: > > > On Mon, Sep 25, 2023 at 12:11 PM Matt Mahoney > wrote: > >> On Mon, Sep 25, 2023, 2:15

Re: [agi] How AI will kill us

2023-09-27 Thread Quan Tesla
Yip. It's called the xLimit. We've hit the ceiling...lol On Wed, Sep 27, 2023, 09:22 mm ee wrote: > Truthfully, I see the exact same discussion topics as the ones from SL4 > decades ago, complete with the same outcomes and back and forth. Nothing > really ever changed > > On Mon, Sep 25, 2023,

Re: [agi] Re: Thoughts on Judgments

2022-05-20 Thread Quan Tesla
Spot on Boris and Yann! On 20 May 2022 12:23, "Boris Kazachenko" wrote: > So, you are talking about motivation. Which depends on the type of > learning process: it's an equivalent of pure curiosity in unsupervised > learning, a specific set of "instincts" in supervised learning, or some >

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-27 Thread Quan Tesla
Time's running out. How many years of talking shit on this forum and still no real progress to show? Hands up! How many here entered serious contracts of collaborative AGI via this forum? If yes, what results have you to show for it? On Thu, Mar 28, 2024, 05:53 James Bowery wrote: > AGI won't

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla wrote: > > One cannot disparage that which already makes no difference either way. > John's well, all about John, as can be expected. > > > What?? LOL listen to you  > > On Thursday, March 28, 2024, at 8:44 AM, Quan Tesla

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Imagine achieving AGI without it. Can you? On Thu, Mar 28, 2024, 15:18 John Rose wrote: > On Thursday, March 28, 2024, at 1:45 AM, Quan Tesla wrote: > > If yes, what results have you to show for it? > > > There’s no need to disparage the generous contributions by some highl

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
string of fairy lights. Use AI to jumpstart synthetically-real alpha. There's your quantum appdapter. On Fri, Mar 29, 2024, 00:45 Matt Mahoney wrote: > On Thu, Mar 28, 2024, 2:34 PM Quan Tesla wrote: > >> Would you like a sensible response? What's your position on the >>

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Would you like a sensible response? What's your position on the probability of AGI without the fine structure constant? On Thu, Mar 28, 2024, 18:00 James Bowery wrote: > This guy's non sequitur response to my position is so inept as to exclude > the possibility that it is a LLM. > *Artificial

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Alpha won't directly result in AGI, but it probably did result in all intelligence on Earth, and would definitely resolve the power issues plaguing AGI (and much more), especially as Moore's Law may be stalling, and Kurzweil's singularity with it. The road to AGI seems less cluttered now. On Thu, Mar 28, 2024, 23:07 John Rose wrote: > On Thursday, March 28, 2024, at 10:06 AM, Quan Tesla wrote: >

Re: [agi] Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-28 Thread Quan Tesla
Except for the measurement problem, nothing wrong with metrics at all. On Thu, Mar 28, 2024, 21:56 James Bowery wrote: > I'm curious, "Tesla". What do you have against metrics? > > On Thu, Mar 28, 2024 at 9:08 AM Quan Tesla wrote: > >> You see John? This is the pro

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
4th point. The matrix is an illusion. It glitches and shifts whimsically, as is AI. By contrast, the aether is relatively stable and "hackable", meaning interactively understandable. AGI could potentially be similar to the aether. Limited, but similar. On Fri, Mar 29, 2024, 16:18

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
The fine structure constant, in conjunction with the triple-alpha process could be coded and managed via AI. Computational code. On Fri, Mar 29, 2024, 16:18 Quan Tesla wrote: > 3rd point. The potential exists to bring any form to same functions, where > gestalt as an emergent proper

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
15:33 John Rose wrote: > On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues > plaguing AGI (and much more), especially as Mo

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
: > On Thursday, March 28, 2024, at 4:55 PM, Quan Tesla wrote: > > Alpha won't directly result in AGI, but it probably did result in all > intelligence on Earth, and would definitely resolve the power issues > plaguing AGI (and much more), especially as Moore's Law may be stalling, &

Re: [agi] Re: Quelling the "AGI risk" nonsense the old fashioned way: Insurance

2024-03-29 Thread Quan Tesla
that, resonance is a profound study and well worth pursuing. Consider how the JWST can "see" way beyond its technical capabilities. On Fri, Mar 29, 2024, 16:18 Quan Tesla wrote: > 3rd point. The potential exists to bring any form to same functions, where > gestalt as an emerg