Re: [agi] openAI's AI advances and PR stunt...

2019-02-18 Thread Rob Freeman
Ben, That's what I thought. You're still working with Link Grammar. But since last year you've been working on informing your links with stats from deep-NN-type, learned, embedding-vector-based predictive models? You're trying to span the weakness of each formalism with the strengths of the other? There's

Re: [agi] The future of AGI

2019-02-25 Thread Rob Freeman
Are there more classes (permutations?) than examples? 2) Is observed gauge in language (your observation!) "pointless", "mostly pointless" or not pointless at all, simply not explored further? 3) How do you explain Chomsky's observation 60 years ago that distributional analysis

Re: [agi] The future of AGI

2019-02-25 Thread Rob Freeman
On Tue, Feb 26, 2019 at 5:19 PM Linas Vepstas wrote: > > On Mon, Feb 25, 2019 at 9:04 PM Rob Freeman > wrote: > >> ... >> You mean you have no knowledge of attempts at distributional learning of >> grammar from the '90s? >> > > Sure. Yes, I suppo

Re: [agi] The future of AGI

2019-02-25 Thread Rob Freeman
On Tue, Feb 26, 2019 at 7:00 PM Nanograte Knowledge Technologies < nano...@live.com> wrote: > ... > > If I may suggest, perhaps stepping back to consider if the personal tone > of your conversation is justified on the hand of the topic and content, or > perhaps your personal frustration alone. >

Re: [agi] openAI's AI advances and PR stunt...

2019-02-19 Thread Rob Freeman
You get contradictions. And I believe the best way to do that will be to set the network oscillating and varying inhibition, to get the resolution of groupings we want dynamically. -Rob On Tue, Feb 19, 2019 at 6:45 PM Linas Vepstas wrote: > Hi Rob, > > On Mon, Feb 18, 2019 at 4:40 PM

Re: [agi] The future of AGI

2019-02-24 Thread Rob Freeman
Ben and List, I wanted to leave this. I'm glad I didn't. I hadn't previously been paying attention and missed this thread. It's actually very good. Thanks to Dorian Aur for dragging it back up. I agree with almost all of it. As with all I've been commenting on here. There is just one small gap

Re: [agi] openAI's AI advances and PR stunt...

2019-02-22 Thread Rob Freeman
Sorry. That was an addendum. On Sat, Feb 23, 2019 at 11:21 AM Linas Vepstas wrote: > ... > >> Meanwhile linguistics is still split, structuralism is still destroyed. >> No-one knows why distributed representation works better, and equally >> no-one knows why we can't "learn" adequate

Re: [agi] openAI's AI advances and PR stunt...

2019-02-22 Thread Rob Freeman
On Sat, Feb 23, 2019 at 11:48 AM Linas Vepstas wrote: > > > On Fri, Feb 22, 2019 at 4:34 PM Rob Freeman > wrote: > >> >> Can you summarize it in a line? >> > > There's a graph. Here's where it is and what it looks like. Here's how > neural nets f

Re: [agi] openAI's AI advances and PR stunt...

2019-02-22 Thread Rob Freeman
in conversation, I could get some real > work done that I need to do. However, lacking in willpower, I respond: > > On Fri, Feb 22, 2019 at 1:18 AM Rob Freeman > wrote: > >> >> >> So this is just a property of sets. >> > > This is a property of infinite se

Re: [agi] openAI's AI advances and PR stunt...

2019-02-22 Thread Rob Freeman
LV> '...what's the diff? Yes, I'm using the "observed words", just like everyone else. And doing something with them, just like everyone else.' Yup. Except Chomsky won't use observed words. The entire field of Generative Grammar that he created won't use observed words. Chomsky realized you

Re: [agi] Some thoughts about Symbols and Symbol Nets

2019-02-21 Thread Rob Freeman
Jim, I haven't been following this thread closely. But if you look at what we've been talking about in the OpenAI PR stunt thread, at one level I think it comes to much what you are talking about. My old vector parser demo linked in that thread does something like this. You can see it happen.

Re: [agi] openAI's AI advances and PR stunt...

2019-02-20 Thread Rob Freeman
OK, that makes sense Ben. So long as you have a clear picture of how to progress the theory beyond temporary expediency, temporarily using the state-of-the-art may be strategic. So long as you are moving forward with some strong theoretical candidates too. If we get trapped without theory, we're

Re: [agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Rob Freeman
On the substance, here's what I wrote elsewhere in response to someone's comment that it is an "important step": Important step? I don't see it. Bengio's NLM? Yeah, good, we need distributed representation. That was an advance. But it was always a linear model without a sensible way of folding in

Re: [agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Rob Freeman
Ben, I was using linear in two senses. One, Bengio's original NLM, where word encodings were devoid of context. The other, the sense Goodfellow uses it in this lecture: Do statistical models understand the world? Ian Goodfellow https://www.youtube.com/watch?v=hDlHpBBGaKs&t=19m5s "Modern deep nets

Re: [agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Rob Freeman
I don't know Ben. It feels more sinister to me. It feels like virtue signalling. Very bad to see this entering hard science. I see the idea behind it, probably unconscious and so more dangerous, that engineers and engineering are bad, and the world must be protected from them. If every time you

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
On Mon, Feb 18, 2019 at 10:05 AM Stefan Reich via AGI wrote: > Nothing wrong with pushing your own results if you consider them > worthwhile... > Well, I think on one level it's much the same as Pissanetzky. Pissanetzky's is a meaningful way of relating elements which generates new patterns.

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
chaoticlanguage.com > > It works with "I went to Brazil", but seems to break with "In Brazil, > people are friendly" (it creates "Brazil people" as a node). Any way to > give it feedback? > > On Sun, 17 Feb 2019 at 22:48, Rob Freeman > wrote: >

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Rob Freeman
On Mon, Feb 18, 2019 at 4:01 PM Ben Goertzel wrote: > *** > ... > And likely the way to do this is to set the network oscillating, and > vary inhibition to get the resolution of "invariants" you want. > *** > > But we are not doing that. Interesting... Cool. Maybe there could be a match. I

Re: [agi] openAI's AI advances and PR stunt...

2019-02-19 Thread Rob Freeman
em, it will probably be easier to use your words than argue about them endlessly. Anyway, in substance, you just don't understand what I am proposing. Is that right? -Rob On Wed, Feb 20, 2019 at 8:52 AM Linas Vepstas wrote: > Hi Rob, > > On Tue, Feb 19, 2019 at 3:23 AM Rob Free

Re: [agi] openAI's AI advances and PR stunt...

2019-02-19 Thread Rob Freeman
Ben, On Wed, Feb 20, 2019 at 2:39 AM Ben Goertzel wrote: > ... > The unfortunate fact is we can't currently feed as much data into our > OpenCog self-adapting graph as we can into a BERT type model, given > available resources... thus using the latter to help tweak weights in > the former may

Re: [agi] openAI's AI advances and PR stunt...

2019-02-19 Thread Rob Freeman
that it is hard to feed data into it. Can you give an example? What does an OpenCog network with newly input raw language data look like? -Rob On Wed, Feb 20, 2019 at 4:21 PM Linas Vepstas wrote: > > > On Tue, Feb 19, 2019 at 5:33 PM Rob Freeman > wrote: > >> Linas, &

Re: [agi] Steel-manning 101

2019-07-02 Thread Rob Freeman
On Tue, Jul 2, 2019 at 9:28 PM Colin Hales wrote: > > -- Forwarded message - > From: Rob Freeman ... > As far as my position, I think the answer is a chaos, or complex-system, element to meaningful patterns. And that's why they elude us. Chaos is also > e

Re: [agi] test

2019-06-30 Thread Rob Freeman
Korrelan, Good. Interested to talk to you about this. A lot I agree with. But let me just pick some specific points. On Sun, Jun 30, 2019 at 5:00 PM korrelan wrote: > ... > > The external sensory cortex re-encodes incoming sensory streams by > applying spatiotemporal compression > OK.

Re: [agi] While you were working on AGI...

2019-07-14 Thread Rob Freeman
You undervalue the degree to which research is an ideas market, Matt. This entire current AI boom is the result of the one simple, universal breakthrough. Progress was flat for years before that (winter), and has been since. Of course "flat" is relative. The old, single, universal breakthrough

Re: [agi] Steel-manning 101

2019-07-01 Thread Rob Freeman
man. > > Can I take the trouble to critique your depiction of my position? > > Alas, I'm unable to say anything well-informed on your position, so I am > open to you educating me. > > regards > Colin > > > > > On Tue, Jul 2, 2019 at 12:20 PM Rob F

Re: [agi] ARGH!!!

2019-07-01 Thread Rob Freeman
On Tue, Jul 2, 2019 at 7:57 AM Colin Hales wrote: > ...I'd like to do something different this time. We're part of the 'old > guard' and it's up to us to demonstrate how an intellectual discussion can > be fruitfully conducted to advance the topic in question. So I'd like to > run an experiment.

Re: [agi] The future of AGI

2019-09-23 Thread Rob Freeman
On Tue, Sep 24, 2019 at 9:34 AM James Bowery wrote: > The use of perplexity as model selection criterion seems misguided to me. > See my Quora answer to the question "What is the relationship between > perplexity and Kolmogorov complexity? >
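
For concreteness on the term at issue: perplexity is just the exponentiated cross-entropy a model achieves on held-out text. A minimal Python sketch (natural-log version; the probabilities are invented, not from the thread):

    import math

    def perplexity(probs):
        """Perplexity = exp of the average negative log-probability
        the model assigned to each observed token."""
        cross_entropy = -sum(math.log(p) for p in probs) / len(probs)
        return math.exp(cross_entropy)

    # probabilities a hypothetical model assigned to five observed tokens
    print(perplexity([0.2, 0.1, 0.05, 0.3, 0.25]))  # ~6.7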

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread Rob Freeman
On Mon, Oct 28, 2019 at 1:48 AM wrote: > No I meant Word2Vec / Glove. They use a ex. 500 dimensional space to > relate words to each other. If we look at just 3 dimensions with 10 dots > (words) we can visualize how a word is in 3 superpositions entangled with > other dots. > Pity. I thought
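
For readers following the geometry here, a minimal sketch of that picture (toy, invented 3-dimensional vectors standing in for ~500-dimensional Word2Vec/GloVe rows; relatedness is the angle between points):

    import numpy as np

    vecs = {
        "cat":   np.array([0.9, 0.1, 0.3]),
        "dog":   np.array([0.8, 0.2, 0.4]),
        "table": np.array([0.1, 0.9, 0.2]),
    }

    def cosine(a, b):
        # cosine of the angle between two word vectors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vecs["cat"], vecs["dog"]))    # ~0.98: close in the space
    print(cosine(vecs["cat"], vecs["table"]))  # ~0.27: far apart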

Re: [agi] Re: Google - quantum computers are getting close

2019-10-28 Thread Rob Freeman
On Mon, Oct 28, 2019 at 11:11 AM wrote: > Do you mean, instead of feeding the net data and learning, to instead > request new output data/solutions? > You could put it like that. Without seeing an exact formalization it is hard to say. You make the example of zebra, horse, dog, mouse, cat.

Re: [agi] Re: Google - quantum computers are getting close

2019-10-26 Thread Rob Freeman
On Sun, Oct 27, 2019 at 12:13 PM wrote: > Better put, a qubit/dot in my 3D cube can be in 3 dimension (or more) > (superposition) > What do you mean by "my 3D cube"? Perhaps I've missed another post where you talk about your work. Have you done something using 3D network representations for

[agi] An uncertainty principle for deep learning?

2020-09-01 Thread Rob Freeman
This came up on Twitter: Deep Learning’s Uncertainty Principle Carlos E. Perez https://medium.com/intuitionmachine/deep-learnings-uncertainty-principle-13f3ffdd15ce An uncertainty principle for grammar. What I've been arguing for 20 years! Posting it here now, because to me it appears to be the

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-08-01 Thread Rob Freeman
On Sat, Aug 1, 2020 at 7:08 PM Matt Mahoney wrote: > > On Fri, Jul 31, 2020, 10:00 PM Ben Goertzel wrote: > >> I think "mechanisms for how to predict the next word" is the wrong >> level at which to think about the problem, if AGI is your interest... >> > > Exactly. The problem is to predict

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-08-01 Thread Rob Freeman
How many billion parameters do PLN and TLCG have? Applications of category theory by Coecke, Sadrzadeh, Clark and others in the '00s are probably also formally correct. As were applications of the maths of quantum mechanics. Formally. Does Dominic Widdows still have that conference?

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-08-01 Thread Rob Freeman
On Sun, Aug 2, 2020 at 1:58 AM Ben Goertzel wrote: > ... > ...I also think that the search for concise > abstract models is another part of what's needed... > It depends how you define "concise abstract model". Even maths has an aspect of contradiction. What does Chaitin call his measure of

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Rob Freeman
I was interested to learn that transformers have now completely abandoned the RNN aspect, and model everything as sequence "transforms" or re-orderings. That makes me wonder if some of the theory does not converge on work I like by Sergio Pissanetzky, which uses permutations of strings to derive

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Rob Freeman
On Sat, Aug 1, 2020 at 3:52 AM wrote: > ... > Semantics: > If 'cat' and 'dog' both share 50% of the same contexts, then maybe the > ones they don't share are shared as well. So you see cat ate, cat ran, cat > ran, cat jumped, cat jumped, cat licked..and dog ate, dog ran, dog ran. >
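
The quoted idea is easy to make concrete. A minimal sketch (invented corpus pairs) of predicting unseen contexts from shared ones:

    pairs = [("cat", "ate"), ("cat", "ran"), ("cat", "jumped"), ("cat", "licked"),
             ("dog", "ate"), ("dog", "ran")]

    contexts = {}
    for word, ctx in pairs:
        contexts.setdefault(word, set()).add(ctx)

    shared = contexts["cat"] & contexts["dog"]
    union = contexts["cat"] | contexts["dog"]
    print(len(shared) / len(union))            # 0.5: the 50% shared contexts
    print(contexts["cat"] - contexts["dog"])   # {'jumped', 'licked'}: predicted for dog too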

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Rob Freeman
Ben, By examples do you mean like array reversal in your article? I agree. This problem may not be addressed by their learning paradigm at all. But I disagree this has been the biggest problem for symbol grounding. I think the biggest problem for symbol grounding has been ambiguity. Manifest

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-04 Thread Rob Freeman
On Sat, Jul 4, 2020 at 2:04 PM Ben Goertzel wrote: > ... I believe we discussed some time ago what sort of chaotic dynamical > model I think would be most interesting to explore in a language > learning context, and my thoughts were a little different than what > you're doing, but I haven't had

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Rob Freeman
On Sat, Jul 4, 2020 at 3:28 AM Ben Goertzel wrote: > We have indeed found some simple grammars emerging from the attractor > structure of the dynamics of computer networks, with the grammatical > forms correlating with network anomalies. Currently are wondering if > looking at data from more

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-02 Thread Rob Freeman
Ben, How did the network, symbolic dynamics, work you planned last year work out? Specifically you said (July 17, 2019): "...applying grammar induction to languages derived from nonlinear dynamics of complex systems via symbolic dynamics, is not exactly about artificial languages, it's about a

Re: [agi] Re: Advanced robots

2021-08-27 Thread Rob Freeman
On Sat, Aug 28, 2021 at 4:22 AM Ben Goertzel wrote: > Matt, "Quantum Associative Memory" is an active research area... > > So are reversible NNs, e.g. https://arxiv.org/abs/2108.05862 > > I think your current view that "learning means writing bits into > memory." is overly limited... And

Re: [agi] Re: AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-10 Thread Rob Freeman
On Fri, Sep 10, 2021 at 2:59 PM Ben Goertzel via AGI wrote: > ah yes these are very familiar materials! ;) > > Linas Vepstas and I have been batting around Coecke's papers for an > awfully long time now... Good. I know I mentioned it to Linas in 2019, and possibly even 2010, but I didn't

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-10 Thread Rob Freeman
On Sat, Sep 11, 2021 at 2:25 PM wrote: > I can pack all my AI mechanisms down to 1 word, all like 16 of them. Never > seen anyone do much of this. > What's the word?

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-11 Thread Rob Freeman
On Sun, Sep 12, 2021 at 7:37 AM Mike Archbold wrote: > ... > The reality is that nobody claims their machine is conscious -- but > regularly people claim their machine understands, but they don't say > what that means. Got any examples of people saying their machine understands, Mike? I don't

Re: [agi] Re: AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-09 Thread Rob Freeman
On Fri, Sep 10, 2021 at 1:49 PM Ben Goertzel via AGI wrote: > ... > Our OpenCog/SNet team is spending a lot of time on down-to-earth > stuff, some of which we'll talk about in some future AGI Discussion > sessions > > Mainly > > -- design of a new programming language (MeTTA = Meta Type Talk) >

Re: [agi] Re: AGI discussion group, Sep 10 7AM Pacific: Characterizing and Implementing Human-Like Consciousness

2021-09-09 Thread Rob Freeman
On Fri, Sep 10, 2021 at 2:36 PM Ben Goertzel via AGI wrote: > ... > Working out the specifics of the Curry-Howard mapping from MeTTa to > intuitionistic logics, and from there to categorial semantics, is one > of the things on our plate for the next couple months Ah, if that is to be worked

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-10 Thread Rob Freeman
On Sat, Sep 11, 2021 at 12:39 PM Matt Mahoney wrote: > I don't understand why we are so hung up on the definition of > understanding. I think this is like the old debate over whether machines > could think. Can submarines swim? > It's just shorthand for the continued failure of machines at any

Re: [agi] UNDERSTANDING -- Part I -- the Survey, online discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited

2021-09-11 Thread Rob Freeman
On Sun, Sep 12, 2021 at 12:31 PM Mike Archbold wrote: > here's a few > > https://understand.ai/ > > > https://www.forbes.com/sites/cognitiveworld/2020/06/28/machines-that-can-understand-human-speech-the-conversational-pattern-of-ai/ > > >

Re: [agi] Meta Type Talk (Hyperon) language description online, w/ talk coming Fri at AGI-21

2021-10-14 Thread Rob Freeman
On Fri, Oct 15, 2021 at 5:19 AM Ben Goertzel wrote: > ... > ... Metta is also a Pali > word for lovingkindness, which has some AGI ethics resonance. > You've led me an etymological dance, Ben: (https://www.wisdomlib.org/definition/metta) "Metta (मेत्त) in the Prakrit language is related to the

Re: [agi] All Compression is Lossy, More or Less

2021-11-13 Thread Rob Freeman
Hi John, I probably should have read this thread earlier. I agree with your insight. I have been pushing this idea that cognition, or at least specifically natural language grammar, is lossy, for some time now. Matt Mahoney may remember me pushing it re. the Hutter Prize to compress language,

Re: [agi] All Compression is Lossy, More or Less

2021-11-13 Thread Rob Freeman
Erratum: *"Even OpenAI has embraced this idea to an extent. As I cite in my talk" Sorry, that should read OpenCog. I don't think OpenAI has embraced it. It would be nice if they did. On Sun, Nov 14, 2021 at 7:52 AM Rob Freeman wrote: > Hi John, > > I probably should ha

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-16 Thread Rob Freeman
Jean-Paul, On Tue, Mar 15, 2022 at 1:42 PM Jean-Paul VanBelle via AGI < agi@agi.topicbox.com> wrote: > Strange that you didn't reference Schank and conceptual dependency theory > (1975) which appeared to be quite successful at representing huge amounts > of human knowledge with a very small

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-14 Thread Rob Freeman
On Mon, Mar 14, 2022 at 4:47 PM Ben Goertzel wrote: > Whether and in what sense semantic primitives can be found depends > wholly on the definitions involved right? > > Crudely, define ps(p,e) as the number of primitives that is needed to > generate p% of human concepts within error e > That's
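
Read operationally, that crude definition might be sketched as a greedy cover (a toy sketch only; `ps`, `generates`, and the error test are placeholders implied by the quoted definition, not anything specified in the thread):

    def ps(p, e, primitives, concepts, generates):
        """Count primitives needed to generate p% of `concepts` (a set),
        where generates(chosen, concept, e) tests generation within error e."""
        chosen, covered = [], set()
        target = (p / 100.0) * len(concepts)
        while len(covered) < target and len(chosen) < len(primitives):
            # greedily add the primitive covering the most still-uncovered concepts
            best = max(primitives, key=lambda q: sum(
                1 for c in concepts - covered if generates(chosen + [q], c, e)))
            chosen.append(best)
            covered = {c for c in concepts if generates(chosen, c, e)}
        return len(chosen)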

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-14 Thread Rob Freeman
On Mon, Mar 14, 2022 at 9:18 PM Ben Goertzel wrote: > ... > Well I am working pragmatically with the notion that the meaning of > concept C to mind M is the set of patterns associated with C in M. I like your pattern based conception of meaning. Always have. It's a great improvement on

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-15 Thread Rob Freeman
On Mon, Mar 14, 2022 at 11:48 PM Ben Goertzel wrote: > The dynamically, contextually-generated pattern-families you describe > are still patterns according to the math definitions of pattern I've > given ... > Good. Then your definition can embrace my hypothesis that cognition is an expansion

Re: [agi] AGi Discussion Forum sessions -- semantic primitives (Mar 18) and formalization of MeTTa (April 8)

2022-03-14 Thread Rob Freeman
In my presentation at AGI-21 last year I argued that semantic primitives could not be found. That in fact "meaning", most evidently by the historical best metrics from linguistics, appears to display a kind of quantum indeterminacy: Vector Parser - Cognition a compression or expansion of the

[agi] The next advance over transformer models

2022-06-25 Thread Rob Freeman
I've been taking a closer look at transformers. The big advance over LSTM was that they relate prediction to long distance dependencies directly, rather than passing long distance dependencies down a long recurrence chain. That's the whole "attention" shtick. I knew that. Nice. But something I
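
A minimal sketch of that "attention" shtick (toy numpy tensors): every position attends to every other position in one step, so a dependency fifty tokens back costs one dot product rather than fifty recurrence steps:

    import numpy as np

    def attention(Q, K, V):
        # scaled dot-product attention: direct pairwise links between positions
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                 # (seq, seq)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)            # softmax over positions
        return w @ V                                  # each token mixes all others

    x = np.random.default_rng(0).normal(size=(10, 8))  # 10 tokens, 8-dim
    print(attention(x, x, x).shape)                    # (10, 8)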

Re: [agi] The next advance over transformer models

2022-07-01 Thread Rob Freeman
On Fri, Jul 1, 2022 at 12:47 AM Boris Kazachenko wrote: > ... > Do you mean two similar input-inputs that are not in the same input? > I'd prefer to phrase it in terms of Howarth's data for natural language. I mean what Howarth calls "blends". Howarth contrasts "blends" with what he calls

Re: [agi] Re: The next advance over transformer models

2022-06-29 Thread Rob Freeman
On Wed, Jun 29, 2022 at 2:19 PM John Rose wrote: > ... > Sorry, I meant that it sounds like an “intuition” mechanism that would be > grouping hierarchies of elements in language which share predictions, > You might call our sense of what structures are "correct" in language an intuition, I

Re: [agi] The next advance over transformer models

2022-07-01 Thread Rob Freeman
On Fri, Jul 1, 2022 at 3:34 PM Brett N Martensen wrote: > If you are looking for a hierarchical structure which reuses simpler parts > (letters, words, phrases) in compositions that include overlaps ... you > might want to have a look at binons. >

Re: [agi] Re: The next advance over transformer models

2022-06-27 Thread Rob Freeman
On Tue, Jun 28, 2022 at 6:25 AM John Rose wrote: > ... > On Saturday, June 25, 2022, at 6:58 AM, Rob Freeman wrote: > > If all the above is true, the key question should be: what method could > directly group hierarchies of elements in language which share predictions? >

Re: [agi] Re: The next advance over transformer models

2022-06-29 Thread Rob Freeman
On Wed, Jun 29, 2022 at 2:19 PM John Rose wrote: > ...Bob Coecke’s spidering and togetherness goes along with how I think > about these things. The spidering though is a simplicity, a visual > dimension reduction itself for symbolic communication coincidentally like a > re-grammaring of

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko wrote: > On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote: > > I'm interested to hear what other mechanisms people might come up with to > replace back-prop, and do this on the fly.. > > > For shared predicti

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 1:51 PM Rob Freeman wrote: > On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko > wrote: > >> ... >> My alternative is to directly search for shared properties: lateral >> cross-comparison and connectivity clustering. >> By the way, indepe

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 2:18 PM Boris Kazachenko wrote: > On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote: > > what method do you use to do the "connectivity clustering" over it? > > > I design from the scratch, that's the only way to conceptual integr

Re: [agi] The next advance over transformer models

2022-06-30 Thread Rob Freeman
On Thu, Jun 30, 2022 at 10:40 AM Ben Goertzel wrote: > "what method could directly group hierarchies of elements in language > which share predictions?" > > First gut reaction is, some form of evolutionary learning where the > genomes are element-groups > > Thinking in terms of NN-ish models,

Re: [agi] The next advance over transformer models

2022-06-29 Thread Rob Freeman
On Wed, Jun 29, 2022 at 11:14 PM James Bowery wrote: > To the extent that grammar entails meaning, it can be considered a way of > defining equivalence classes of sentence meanings. In this sense, the > choice of which sentence is to convey the intended meaning from its > equivalence class is a

Re: [agi] Re: The next advance over transformer models

2022-06-29 Thread Rob Freeman
On Wed, Jun 29, 2022 at 11:11 PM Boris Kazachenko wrote: > On Wednesday, June 29, 2022, at 10:29 AM, Rob Freeman wrote: > > You would start with the relational principle those dot products learn, by > which I mean grouping things according to shared predictions, make it > instead

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Rob Freeman
Off topic, and I haven't followed this thread, but... On Tue, Jul 4, 2023 at 10:21 PM Matt Mahoney wrote: >... > > We are not close to reversing human aging. The global rate of increase in > life expectancy has dropped slightly after peaking at 0.2 years per year in > the 1990s. We have 0

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 7:54 PM James Bowery wrote: > On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman wrote: >> >> I just always believed the goal of compression was wrong. > > You're really confused. I'm confused? Maybe. But I have examples. You don't address my examples. You

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 7:58 PM Matt Mahoney wrote: > ... > The LTCB and Hutter prize entries model grammar and semantics to some extent > but never developed to the point of constructing world models enabling them > to reason about physics or psychology or solve novel math and coding >

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-05 Thread Rob Freeman
On Wed, Jul 5, 2023 at 7:05 PM Matt Mahoney wrote: >... > LLMs do have something to say about consciousness. If a machine passes the > Turing test, then it is conscious as far as you can tell. I see no reason to accept the Turing test as a definition of consciousness. Who ever suggested that?

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 3:51 AM Matt Mahoney wrote: > > I am still on the Hutter prize committee and just recently helped evaluate a > submission. It uses 1 GB of text because that is how much a human can process > over a lifetime. We have much larger LLMs, of course. Their knowledge is >

Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-06 Thread Rob Freeman
On Thu, Jul 6, 2023 at 11:30 AM wrote: >... > Hold on. The Lossless Compression evaluation tests not just compression, but > expansion! It's easy to get lost in word definitions. It sounds like you're using "expansion" in a sense of recovering an original from a compression. I'm using

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
On Thu, May 9, 2024 at 6:15 AM James Bowery wrote: > > Shifting this thread to a more appropriate topic. > > -- Forwarded message - >> >> From: Rob Freeman >> Date: Tue, May 7, 2024 at 8:33 PM >> Subject: Re: [agi] Hey, looks like the goertzel

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-05 Thread Rob Freeman
On Sat, May 4, 2024 at 4:53 AM Matt Mahoney wrote: > > ... OpenCog was a hodgepodge of a hand coded structured natural language > parser, a toy neural vision system, and a hybrid fuzzy logic knowledge > representation data structure that was supposed to integrate it all together > but never

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
without it, > we'll remain stuck in the quagmire of early 1990s+ functional > analysis-paralysis, by any name. > > I'll hold out hope for that one, enlightened developer to make that quantum > leap into exponential systems computing. A seachange is needed. > > Inter-alia, Rob Fr

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-10 Thread Rob Freeman
perfectly classical and observable elements, I tried to present myself in contrast to Bob Coecke's top-down quantum grammar approach, on the Entangled Things podcast: https://www.entangledthings.com/entangled-things-rob-freeman You could look at my Facebook group, Oscillating Networks for AI. Check out my T

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread Rob Freeman
Is a quantum basis fractal? To the extent you're suggesting some kind of quantum computation might be a good implementation for the structures I'm suggesting, though, yes. At least, Bob Coecke thinks quantum computation will be a good fit for his quantum style grammar formalisms, which kind of

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Rob Freeman
I'm disappointed you don't address my points James. You just double down that there needs to be some framework for learning, and that nested stacks might be one such constraint. I replied that nested stacks might be emergent on dependency length. So not a constraint based on actual nested stacks

Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Rob Freeman
Addendum: another candidate for this variational model for finding distributions to replace back-prop (and consequently with the potential to capture predictive structure consisting of chaotic attractors, though they don't appreciate the need yet): there's Extropic, which is proposing using heat noise.

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-16 Thread Rob Freeman
James, For relevance to type theories in programming I like Bartosz Milewski's take on it here. An entire lecture series, but the part that resonates with me is in the introductory lecture: "maybe composability is not a property of nature" Cued up here: Category Theory 1.1: Motivation and

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread Rob Freeman
On Wed, May 22, 2024 at 10:02 PM James Bowery wrote: > ... > You correctly perceive that the symbolic regression presentation is not to > the point regarding the HNet paper. A big failing of the symbolic regression > world is the same as it is in the rest of computerdom: Failure to recognize

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread Rob Freeman
On Thu, May 23, 2024 at 10:10 AM Quan Tesla wrote: > > The paper is specific to a novel and quantitative approach and method for > association in general and specifically. John was talking about the presentation James linked, not the paper, Quan. He may be right that in that presentation they

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-27 Thread Rob Freeman
James, I think you're saying: 1) Grammatical abstractions may not be real, but they can still be useful abstractions to parameterize "learning". 2) Even if after that there are "rules of thumb" which actually govern everything. Well, you might say why not just learn the "rules of thumb". But

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-24 Thread Rob Freeman
LLM freedom of totally ignoring "objects" (which seems to be necessary, both by the success of LLMs at generating text, and by the observed failure of formal grammars historically) if you specify them in terms of external relations. Maybe the paper authors don't see it. But the way they

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-28 Thread Rob Freeman
Matt, Nice breakdown. You've actually worked with language models, which makes it easier to bring it back to concrete examples. On Tue, May 28, 2024 at 2:36 AM Matt Mahoney wrote: > > ...For grammar, AB predicts AB (n-grams), Yes, this looks like what we call "words". Repeated structure. No
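
In code, the "AB predicts AB" case is just counting repeats. A minimal sketch with an invented corpus:

    from collections import Counter

    tokens = "the cat sat on the cat mat the cat sat".split()
    bigrams = Counter(zip(tokens, tokens[1:]))

    # having seen "the cat" three times, predict "cat" after "the" again
    print(bigrams.most_common(2))  # [(('the', 'cat'), 3), (('cat', 'sat'), 2)]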

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-19 Thread Rob Freeman
James, My working definition of "truth" is a pattern that predicts. And I'm tending away from compression for that. Related to your sense of "meaning" in (Algorithmic Information) randomness. But perhaps not quite the same thing. I want to emphasise a sense in which "meaning" is an expansion of

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread Rob Freeman
diagonalization lemma? "True" but not provable/predictable within the system?) On Mon, May 20, 2024 at 9:09 PM James Bowery wrote: > > > > On Sun, May 19, 2024 at 11:32 PM Rob Freeman > wrote: >> >> James, >> >> My working definition of "truth"

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread Rob Freeman
"Importantly, the new entity ¢X is not a category based on the features of the members of the category, let alone the similarity of such features" Oh, nice. I hadn't seen anyone else making that point. This paper 2023? That's what I was saying. Nice. A vindication. Such categories decouple the

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-29 Thread Rob Freeman
On Wed, May 29, 2024 at 9:37 AM Matt Mahoney wrote: > > On Tue, May 28, 2024 at 7:46 AM Rob Freeman > wrote: > > > Now, let's try to get some more detail. How do compressors handle the > > case where you get {A,C} on the basis of AB, CB, but you don't get, > &g
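
For anyone following the notation, a minimal sketch (toy data) of the case at issue: A and C are grouped into one class because both are observed before B, and the class then predicts the unobserved CD from the observed AD:

    from collections import defaultdict

    observed = {("A", "B"), ("C", "B"), ("A", "D")}   # ("C", "D") never observed

    right = defaultdict(set)            # what each element is seen before
    for x, y in observed:
        right[x].add(y)

    print(right["A"] & right["C"])   # {'B'}: shared context groups {A, C}
    print(right["A"] - right["C"])   # {'D'}: the class predicts the unseen "CD"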

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-21 Thread Rob Freeman
James, The Hamiltonian paper was nice for identifying gap filler tasks as decoupling meaning from pattern: "not a category based on the features of the members of the category, let alone the similarity of such features". Here, for anyone else: A logical re-conception of neural networks:

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-23 Thread Rob Freeman
understand the (relational) theory behind it in order to jump out of the current LLM "local minimum". On Thu, May 23, 2024 at 11:47 PM James Bowery wrote: > > > On Wed, May 22, 2024 at 10:34 PM Rob Freeman > wrote: >> >> On Wed, May 22, 2024 at 10:02 PM James Bowery

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-25 Thread Rob Freeman
Thanks Matt. The funny thing is though, as I recall, finding semantic primitives was the stated goal of Marcus Hutter when he instigated his prize. That's fine. A negative experimental result is still a result. I really want to emphasize that this is a solution, not a problem, though. As the