Ben,
That's what I thought. You're still working with Link Grammar.
But since last year you've been working on informing your links with stats
from deep-NN-type, learned, embedding-vector-based predictive models? You're
trying to bridge the weaknesses of each formalism with the strengths of the
other?
There's
1) Are there more classes (permutations?) than examples?
2) Is observed gauge in language (your observation!) "pointless", "mostly
pointless" or not pointless at all, simply not explored further?
3) How do you explain Chomsky's observation 60 years ago that
distributional analysis
On Tue, Feb 26, 2019 at 5:19 PM Linas Vepstas
wrote:
>
> On Mon, Feb 25, 2019 at 9:04 PM Rob Freeman
> wrote:
>
>> ...
>> You mean you have no knowledge of attempts at distributional learning of
>> grammar from the '90s?
>>
>
> Sure. Yes, I suppo
On Tue, Feb 26, 2019 at 7:00 PM Nanograte Knowledge Technologies <
nano...@live.com> wrote:
> ...
>
> If I may suggest, perhaps step back and consider whether the personal tone
> of your conversation is justified by the topic and content, or only by your
> personal frustration.
>
you get contradictions.
And I believe the best way to do that will be to set the network
oscillating and varying inhibition, to get the resolution of groupings we
want dynamically.
-Rob
On Tue, Feb 19, 2019 at 6:45 PM Linas Vepstas
wrote:
> Hi Rob,
>
> On Mon, Feb 18, 2019 at 4:40 PM
Ben and List,
I wanted to leave this. I'm glad I didn't.
I hadn't previously been paying attention and missed this thread. It's
actually very good. Thanks to Dorian Aur for dragging it back up.
I agree with almost all of it. As with all I've been commenting on here.
There is just one small gap
Sorry. That was an addendum.
On Sat, Feb 23, 2019 at 11:21 AM Linas Vepstas
wrote:
> ...
>
>> Meanwhile linguistics is still split, structuralism is still destroyed.
>> No-one knows why distributed representation works better, and equally
>> no-one knows why we can't "learn" adequate
On Sat, Feb 23, 2019 at 11:48 AM Linas Vepstas
wrote:
>
>
> On Fri, Feb 22, 2019 at 4:34 PM Rob Freeman
> wrote:
>
>>
>> Can you summarize it in a line?
>>
>
> There's a graph. Here's where it is and what it looks like. Here's how
> neural nets f
in conversation, I could get some real
> work done that I need to do. However, lacking in willpower, I respond:
>
> On Fri, Feb 22, 2019 at 1:18 AM Rob Freeman
> wrote:
>
>>
>>
>> So this is just a property of sets.
>>
>
> This is a property of infinite se
LV> '...what's the diff? Yes, I'm using the "observed words", just like
everyone else. And doing something with them, just like everyone else.'
Yup.
Except Chomsky won't use observed words. The entire field of Generative
Grammar that he created won't use observed words. Chomsky realized you
Jim,
I haven't been following this thread closely. But if you look at what we've
been talking about in the OpenAI PR stunt thread, at one level I think it
comes to much what you are talking about.
My old vector parser demo linked in that thread does something like this.
You can see it happen.
OK, that makes sense Ben. So long as you have a clear picture of how to
progress the theory beyond temporary expediency, temporarily using the
state-of-the-art may be strategic.
So long as you are moving forward with some strong theoretical candidates
too. If we get trapped without theory, we're
On the substance, here's what I wrote elsewhere in response to someone's
comment that it is an "important step":
Important step? I don't see it. Bengio's NLM? Yeah, good, we need
distributed representation. That was an advance. But it was always a linear
model without a sensible way of folding in
Ben,
I was using linear in two senses. One was Bengio's original NLM, where word
encodings were devoid of context. The other was the sense Goodfellow uses in
this lecture:
Do statistical models understand the world? Ian Goodfellow
https://www.youtube.com/watch?v=hDlHpBBGaKs&t=19m5s
"Modern deep nets
I don't know Ben. It feels more sinister to me. It feels like virtue
signalling.
Very bad to see this entering hard science.
I see the idea behind it, probably unconscious and so more dangerous, that
engineers and engineering are bad, and the world must be protected from
them.
If every time you
On Mon, Feb 18, 2019 at 10:05 AM Stefan Reich via AGI
wrote:
> Nothing wrong with pushing your own results if you consider them
> worthwhile...
>
Well, I think on one level it's much the same as Pissanetzky.
Pissanetzky's is a meaningful way of relating elements which generates new
patterns.
chaoticlanguage.com
>
> It works with "I went to Brazil", but seems to break with "In Brazil,
> people are friendly" (it creates "Brazil people" as a node). Any way to
> give it feedback?
>
> On Sun, 17 Feb 2019 at 22:48, Rob Freeman
> wrote:
>
On Mon, Feb 18, 2019 at 4:01 PM Ben Goertzel wrote:
> ***
> ...
> And likely the way to do this is to set the network oscillating, and
> vary inhibition to get the resolution of "invariants" you want.
> ***
>
> But we are not doing that. Interesting...
Cool. Maybe there could be a match. I
em, it will probably be easier to use your
words than argue about them endlessly.
Anyway, in substance, you just don't understand what I am proposing. Is
that right?
-Rob
On Wed, Feb 20, 2019 at 8:52 AM Linas Vepstas
wrote:
> Hi Rob,
>
> On Tue, Feb 19, 2019 at 3:23 AM Rob Free
Ben,
On Wed, Feb 20, 2019 at 2:39 AM Ben Goertzel wrote:
> ...
> The unfortunate fact is we can't currently feed as much data into our
> OpenCog self-adapting graph as we can into a BERT type model, given
> available resources... thus using the latter to help tweak weights in
> the former may
that it is hard to feed data into
it. Can you give an example?
What does an OpenCog network with newly input raw language data look like?
-Rob
On Wed, Feb 20, 2019 at 4:21 PM Linas Vepstas
wrote:
>
>
> On Tue, Feb 19, 2019 at 5:33 PM Rob Freeman
> wrote:
>
>> Linas,
&
On Tue, Jul 2, 2019 at 9:28 PM Colin Hales wrote:
>
> -- Forwarded message -
> From: Rob Freeman ...
> As far as my position, I think the answer is a chaotic, or complex-system,
> element to meaningful patterns. And that's why they elude us. Chaos is also
> e
Korrelan,
Good. Interested to talk to you about this. A lot I agree with. But let me
just pick some specific points.
On Sun, Jun 30, 2019 at 5:00 PM korrelan wrote:
> ...
>
> The external sensory cortex re-encodes incoming sensory streams by
> applying spatiotemporal compression
>
OK.
You undervalue the degree to which research is an ideas market, Matt.
This entire current AI boom is the result of the one simple, universal
breakthrough. Progress was flat for years before that (winter), and has
been since.
Of course "flat" is relative. The old, single, universal breakthrough
man.
>
> Can I take the trouble to critique your depiction of my position?
>
> Alas, I'm unable to say anything well-informed on your position, so I am
> open to you educating me.
>
> regards
> Colin
>
>
>
>
> On Tue, Jul 2, 2019 at 12:20 PM Rob F
On Tue, Jul 2, 2019 at 7:57 AM Colin Hales wrote:
> ...I'd like to do something different this time. We're part of the 'old
> guard' and it's up to us to demonstrate how an intellectual discussion can
> be fruitfully conducted to advance the topic in question. So I'd like to
> run an experiment.
On Tue, Sep 24, 2019 at 9:34 AM James Bowery wrote:
> The use of perplexity as model selection criterion seems misguided to me.
> See my Quora answer to the question "What is the relationship between
> perplexity and Kolmogorov complexity?
>
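For concreteness, perplexity is just the exponentiated average negative
log-probability a model assigns to held-out tokens. A minimal sketch, with
toy probabilities rather than any particular model's output:

```python
import math

def perplexity(probs):
    """Perplexity = 2^(cross-entropy) over the per-token probabilities
    a model assigned to an observed sequence."""
    n = len(probs)
    cross_entropy = -sum(math.log2(p) for p in probs) / n
    return 2 ** cross_entropy

# A model that assigns probability 0.25 to every token has perplexity 4,
# i.e. it is as uncertain as a uniform choice among 4 options:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # -> 4.0
```

Kolmogorov complexity, by contrast, is about the shortest program reproducing
the sequence exactly, which is part of why the two come apart as selection
criteria.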
On Mon, Oct 28, 2019 at 1:48 AM wrote:
> No I meant Word2Vec / Glove. They use a ex. 500 dimensional space to
> relate words to each other. If we look at just 3 dimensions with 10 dots
> (words) we can visualize how a word is in 3 superpositions entangled with
> other dots.
>
Pity. I thought
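The 3-dimensional picture in that quote can be made concrete: each word is a
point, and relatedness is the cosine of the angle between points. A toy
sketch with invented 3-d coordinates (illustrative only, not from any trained
Word2Vec/GloVe model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented 3-d "embeddings" -- real models use ~300-500 dimensions.
words = {
    "cat": (0.9, 0.8, 0.1),
    "dog": (0.8, 0.9, 0.2),
    "car": (0.1, 0.2, 0.9),
}

print(cosine(words["cat"], words["dog"]))  # high: similar contexts
print(cosine(words["cat"], words["car"]))  # low: different contexts
```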
On Mon, Oct 28, 2019 at 11:11 AM wrote:
> Do you mean, instead of feeding the net data and learning, to instead
> request new output data/solutions?
>
You could put it like that.
Without seeing an exact formalization it is hard to say.
You make the example of zebra, horse, dog, mouse, cat.
On Sun, Oct 27, 2019 at 12:13 PM wrote:
> Better put, a qubit/dot in my 3D cube can be in 3 dimension (or more)
> (superposition)
>
What do you mean by "my 3D cube"?
Perhaps I've missed another post where you talk about your work. Have you
done something using 3D network representations for
This came up on Twitter:
Deep Learning’s Uncertainty Principle
Carlos E. Perez
https://medium.com/intuitionmachine/deep-learnings-uncertainty-principle-13f3ffdd15ce
An uncertainty principle for grammar. What I've been arguing for 20 years!
Posting it here now, because to me it appears to be the
On Sat, Aug 1, 2020 at 7:08 PM Matt Mahoney wrote:
>
> On Fri, Jul 31, 2020, 10:00 PM Ben Goertzel wrote:
>
>> I think "mechanisms for how to predict the next word" is the wrong
>> level at which to think about the problem, if AGI is your interest...
>>
>
> Exactly. The problem is to predict
How many billion parameters do PLN and TLCG have?
Applications of category theory by Coecke, Sadrzadeh, Clark and others in
the '00s are probably also formally correct.
As were applications of the maths of quantum mechanics. Formally. Does
Dominic Widdows still have that conference?
On Sun, Aug 2, 2020 at 1:58 AM Ben Goertzel wrote:
> ...
> ...I also think that the search for concise
> abstract models is another part of what's needed...
>
It depends how you define "concise abstract model". Even maths has an
aspect of contradiction. What does Chaitin call his measure of
I was interested to learn that transformers have now completely abandoned
the RNN aspect, and model everything as sequence "transforms" or
re-orderings.
That makes me wonder if some of the theory does not converge on work I like
by Sergio Pissanetzky, which uses permutations of strings to derive
On Sat, Aug 1, 2020 at 3:52 AM wrote:
> ...
> Semantics:
> If 'cat' and 'dog' both share 50% of the same contexts, then maybe the
> ones they don't share are shared as well. So you see cat ate, cat ran, cat
> ran, cat jumped, cat jumped, cat licked..and dog ate, dog ran, dog ran.
>
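The inference in that quote -- if two words share many contexts, predict they
share the unseen ones too -- can be sketched directly from co-occurrence
counts. The corpus here is just the quote's toy data:

```python
from collections import defaultdict

# Toy corpus: (word, following-context) pairs, as in the quote.
pairs = [("cat", "ate"), ("cat", "ran"), ("cat", "ran"),
         ("cat", "jumped"), ("cat", "jumped"), ("cat", "licked"),
         ("dog", "ate"), ("dog", "ran"), ("dog", "ran")]

contexts = defaultdict(set)
for word, ctx in pairs:
    contexts[word].add(ctx)

def predicted_contexts(word, other):
    """If `word` and `other` share contexts, predict `word` also takes
    the contexts so far seen only with `other`."""
    overlap = contexts[word] & contexts[other]
    if overlap:
        return contexts[other] - contexts[word]
    return set()

# "dog" shares "ate" and "ran" with "cat", so we predict "dog jumped"
# and "dog licked" even though neither was observed:
print(predicted_contexts("dog", "cat"))
```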
Ben,
By examples do you mean like array reversal in your article?
I agree. This problem may not be addressed by their learning paradigm at
all.
But I disagree this has been the biggest problem for symbol grounding.
I think the biggest problem for symbol grounding has been ambiguity.
Manifest
On Sat, Jul 4, 2020 at 2:04 PM Ben Goertzel wrote:
> ...
I believe we discussed some time ago what sort of chaotic dynamical
> model I think would be most interesting to explore in a language
> learning context, and my thoughts were a little different than what
> you're doing, but I haven't had
On Sat, Jul 4, 2020 at 3:28 AM Ben Goertzel wrote:
> We have indeed found some simple grammars emerging from the attractor
> structure of the dynamics of computer networks, with the grammatical
> forms correlating with network anomalies. Currently are wondering if
> looking at data from more
Ben,
How did the network symbolic-dynamics work you planned last year turn out?
Specifically you said (July 17, 2019):
"...applying grammar induction to languages derived from nonlinear dynamics
of complex systems via symbolic dynamics, is not exactly about artificial
languages, it's about a
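The pipeline described there -- nonlinear dynamics, then symbolic dynamics,
then grammar induction -- can be sketched end-to-end in a few lines. This is
my gloss, not Ben's code: iterate the logistic map, binarize with the standard
partition at x = 0.5, then read transition ("grammar") counts off the symbol
stream:

```python
from collections import Counter

def logistic_symbols(n, r=3.9, x=0.3):
    """Iterate the logistic map and emit '0'/'1' symbols
    (generating partition at x = 0.5)."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append("1" if x >= 0.5 else "0")
    return "".join(out)

def bigram_grammar(symbols):
    """Induce symbol-transition counts: a crude 'grammar' of the dynamics.
    Forbidden words of the dynamics show up as missing bigrams/trigrams."""
    return Counter(symbols[i:i + 2] for i in range(len(symbols) - 1))

seq = logistic_symbols(1000)
print(bigram_grammar(seq))
```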
On Sat, Aug 28, 2021 at 4:22 AM Ben Goertzel wrote:
> Matt, "Quantum Associative Memory" is an active research area...
>
> So are reversible NNs, e.g. https://arxiv.org/abs/2108.05862
>
> I think your current view that "learning means writing bits into
> memory." is overly limited...
And
On Fri, Sep 10, 2021 at 2:59 PM Ben Goertzel via AGI
wrote:
> ah yes these are very familiar. materials! ;)
>
> Linas Vepstas and I have been batting around Coecke's papers for an
> awfully long time now...
Good. I know I mentioned it to Linas in 2019, and possibly even 2010, but I
didn't
On Sat, Sep 11, 2021 at 2:25 PM wrote:
> I can pack all my AI mechanisms down to 1 word, all like 16 of them. Never
> seen anyone do much of this.
>
What's the word?
--
Artificial General Intelligence List: AGI
On Sun, Sep 12, 2021 at 7:37 AM Mike Archbold wrote:
> ...
> The reality is that nobody claims their machine is conscious -- but
> regularly people claim their machine understands, but they don't say
> what that means
Got any examples of people saying their machine understands, Mike? I don't
On Fri, Sep 10, 2021 at 1:49 PM Ben Goertzel via AGI
wrote:
> ...
> Our OpenCog/SNet team is spending a lot of time on down-to-earth
> stuff, some of which we'll talk about in some future AGI Discussion
> sessions
>
> Mainly
>
> -- design of a new programming language (MeTTA = Meta Type Talk)
>
On Fri, Sep 10, 2021 at 2:36 PM Ben Goertzel via AGI
wrote:
> ...
> Working out the specifics of the Curry-Howard mapping from MeTTa to
> intuitionistic logics, and from there to categorial semantics, is one
> of the things on our plate for the next couple months
Ah, if that is to be worked
On Sat, Sep 11, 2021 at 12:39 PM Matt Mahoney
wrote:
> I don't understand why we are so hung up on the definition of
> understanding. I think this is like the old debate over whether machines
> could think. Can submarines swim?
>
It's just shorthand for the continued failure of machines at any
On Sun, Sep 12, 2021 at 12:31 PM Mike Archbold wrote:
> here's a few
>
> https://understand.ai/
>
>
> https://www.forbes.com/sites/cognitiveworld/2020/06/28/machines-that-can-understand-human-speech-the-conversational-pattern-of-ai/
>
>
>
On Fri, Oct 15, 2021 at 5:19 AM Ben Goertzel wrote:
> ...
> ... Metta is also a Pali
> word for lovingkindness, which has some AGI ethics resonance.
>
You've led me an etymological dance, Ben:
(https://www.wisdomlib.org/definition/metta)
"Metta (मेत्त) in the Prakrit language is related to the
Hi John,
I probably should have read this thread earlier.
I agree with your insight. I have been pushing this idea that cognition, or
at least specifically natural language grammar, is lossy, for some time
now. Matt Mahoney may remember me pushing it re. the Hutter Prize to
compress language,
Erratum: *"Even OpenAI has embraced this idea to an extent. As I cite in my
talk"
Sorry, that should read OpenCog. I don't think OpenAI has embraced it. It
would be nice if they did.
On Sun, Nov 14, 2021 at 7:52 AM Rob Freeman
wrote:
> Hi John,
>
> I probably should ha
Jean-Paul,
On Tue, Mar 15, 2022 at 1:42 PM Jean-Paul VanBelle via AGI <
agi@agi.topicbox.com> wrote:
> Strange that you didn't reference Schank and conceptual dependency theory
> (1975) which appeared to be quite successful at representing huge amounts
> of human knowledge with a very small
On Mon, Mar 14, 2022 at 4:47 PM Ben Goertzel wrote:
> Whether and in what sense semantic primitives can be found depends
> wholly on the definitions involved right?
>
> Crudely, define ps(p,e) as the number of primitives that is needed to
> generate p% of human concepts within error e
>
That's
On Mon, Mar 14, 2022 at 9:18 PM Ben Goertzel wrote:
> ...
> Well I am working pragmatically with the notion that the meaning of
> concept C to mind M is the set of patterns associated with C in M.
I like your pattern based conception of meaning. Always have. It's a great
improvement on
On Mon, Mar 14, 2022 at 11:48 PM Ben Goertzel wrote:
> The dynamically, contextually-generated pattern-families you describe
> are still patterns according to the math definitions of pattern I've
> given ...
>
Good.
Then your definition can embrace my hypothesis that cognition is an
expansion
In my presentation at AGI-21 last year I argued that semantic primitives
could not be found. That in fact "meaning", most evidently by the
historical best metrics from linguistics, appears to display a kind of
quantum indeterminacy:
Vector Parser - Cognition a compression or expansion of the
I've been taking a closer look at transformers. The big advance over LSTM
was that they relate prediction to long distance dependencies directly,
rather than passing long distance dependencies down a long recurrence
chain. That's the whole "attention" shtick. I knew that. Nice.
But something I
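The "relate long-distance dependencies directly" point is visible in the
attention computation itself: every position attends to every other in one
step, with no recurrence chain. A minimal sketch of scaled dot-product
attention, with toy 2-d vectors, a single head, and no learned weight
matrices (all simplifications of the real transformer):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes ALL values in one
    step, however far apart the positions are -- no recurrence chain."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three positions; position 0 attends to position 2 directly, in one hop.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
print(attention(x, x, x))
```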
On Fri, Jul 1, 2022 at 12:47 AM Boris Kazachenko wrote:
> ...
> Do you mean two similar input-inputs that are not in the same input?
>
I'd prefer to phrase it in terms of Howarth's data for natural language.
I mean what Howarth calls "blends".
Howarth contrasts "blends" with what he calls
On Wed, Jun 29, 2022 at 2:19 PM John Rose wrote:
> ...
> Sorry, I meant that it sounds like an “intuition” mechanism that would be
> grouping hierarchies of elements in language which share predictions,
>
You might call our sense of what structures are "correct" in language an
intuition, I
On Fri, Jul 1, 2022 at 3:34 PM Brett N Martensen
wrote:
> If you are looking for a hierarchical structure which reuses simpler parts
> (letters, words, phrases) in compositions that include overlaps ... you
> might want to have a look at binons.
>
On Tue, Jun 28, 2022 at 6:25 AM John Rose wrote:
> ...
> On Saturday, June 25, 2022, at 6:58 AM, Rob Freeman wrote:
>
> If all the above is true, the key question should be: what method could
> directly group hierarchies of elements in language which share predictions?
>
On Wed, Jun 29, 2022 at 2:19 PM John Rose wrote:
> ...Bob Coecke’s spidering and togetherness goes along with how I think
> about these things. The spidering though is a simplicity, a visual
> dimension reduction itself for symbolic communication coincidentally like a
> re-grammaring of
On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko wrote:
> On Thursday, June 30, 2022, at 3:00 AM, Rob Freeman wrote:
>
> I'm interested to hear what other mechanisms people might come up with to
> replace back-prop, and do this on the fly..
>
>
> For shared predicti
On Thu, Jun 30, 2022 at 1:51 PM Rob Freeman
wrote:
> On Thu, Jun 30, 2022 at 1:33 PM Boris Kazachenko
> wrote:
>
>> ...
>> My alternative is to directly search for shared properties: lateral
>> cross-comparison and connectivity clustering.
>>
By the way, indepe
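For what it's worth, "connectivity clustering" in the generic sense can be
illustrated as: link items whose pairwise comparison clears a threshold, then
take connected components (union-find). The similarity predicate and data
here are my illustrative choices, not Boris's actual method:

```python
def connectivity_clusters(items, similar):
    """Group items into connected components under the `similar` predicate."""
    parent = list(range(len(items)))

    def find(i):
        # Find the component root, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Lateral cross-comparison: every pair, linked if similar enough.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if similar(items[i], items[j]):
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(items)):
        clusters.setdefault(find(i), []).append(items[i])
    return list(clusters.values())

# Toy: cluster words by shared-character overlap above a threshold.
words = ["cat", "cap", "dog", "dot"]
sim = lambda a, b: len(set(a) & set(b)) >= 2
print(connectivity_clusters(words, sim))  # -> [['cat', 'cap'], ['dog', 'dot']]
```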
On Thu, Jun 30, 2022 at 2:18 PM Boris Kazachenko wrote:
> On Thursday, June 30, 2022, at 6:10 AM, Rob Freeman wrote:
>
> what method do you use to do the "connectivity clustering" over it?
>
>
> I design from scratch, that's the only way to conceptual integr
On Thu, Jun 30, 2022 at 10:40 AM Ben Goertzel wrote:
> "what method could directly group hierarchies of elements in language
> which share predictions?"
>
> First gut reaction is, some form of evolutionary learning where the
> genomes are element-groups
>
> Thinking in terms of NN-ish models,
On Wed, Jun 29, 2022 at 11:14 PM James Bowery wrote:
> To the extent that grammar entails meaning, it can be considered a way of
> defining equivalence classes of sentence meanings. In this sense, the
> choice of which sentence is to convey the intended meaning from its
> equivalence class is a
On Wed, Jun 29, 2022 at 11:11 PM Boris Kazachenko
wrote:
> On Wednesday, June 29, 2022, at 10:29 AM, Rob Freeman wrote:
>
> You would start with the relational principle those dot products learn, by
> which I mean grouping things according to shared predictions, make it
> instead
Off topic, and I haven't followed this thread, but...
On Tue, Jul 4, 2023 at 10:21 PM Matt Mahoney wrote:
>...
>
> We are not close to reversing human aging. The global rate of increase in
> life expectancy has dropped slightly after peaking at 0.2 years per year in
> the 1990s. We have 0
On Thu, Jul 6, 2023 at 7:54 PM James Bowery wrote:
> On Thu, Jul 6, 2023 at 1:09 AM Rob Freeman wrote:
>>
>> I just always believed the goal of compression was wrong.
>
> You're really confused.
I'm confused? Maybe. But I have examples. You don't address my
examples. You
On Thu, Jul 6, 2023 at 7:58 PM Matt Mahoney wrote:
> ...
> The LTCB and Hutter prize entries model grammar and semantics to some extent
> but never developed to the point of constructing world models enabling them
> to reason about physics or psychology or solve novel math and coding
>
On Wed, Jul 5, 2023 at 7:05 PM Matt Mahoney wrote:
>...
> LLMs do have something to say about consciousness. If a machine passes the
> Turing test, then it is conscious as far as you can tell.
I see no reason to accept the Turing test as a definition of
consciousness. Who ever suggested that?
On Thu, Jul 6, 2023 at 3:51 AM Matt Mahoney wrote:
>
> I am still on the Hutter prize committee and just recently helped evaluate a
> submission. It uses 1 GB of text because that is how much a human can process
> over a lifetime. We have much larger LLMs, of course. Their knowledge is
>
On Thu, Jul 6, 2023 at 11:30 AM wrote:
>...
> Hold on. The Lossless Compression evaluation tests not just compression, but
> expansion!
It's easy to get lost in word definitions.
It sounds like you're using "expansion" in a sense of recovering an
original from a compression.
I'm using
On Thu, May 9, 2024 at 6:15 AM James Bowery wrote:
>
> Shifting this thread to a more appropriate topic.
>
> -- Forwarded message -
>>
>> From: Rob Freeman
>> Date: Tue, May 7, 2024 at 8:33 PM
>> Subject: Re: [agi] Hey, looks like the goertzel
On Sat, May 4, 2024 at 4:53 AM Matt Mahoney wrote:
>
> ... OpenCog was a hodgepodge of a hand coded structured natural language
> parser, a toy neural vision system, and a hybrid fuzzy logic knowledge
> representation data structure that was supposed to integrate it all together
> but never
without it,
> we'll remain stuck in the quagmire of early 1990s+ functional
> analysis-paralysis, by any name.
>
> I'll hold out hope for that one, enlightened developer to make that quantum
> leap into exponential systems computing. A seachange is needed.
>
> Inter-alia, Rob Fr
perfectly classical and observable elements, I tried to present
myself in contrast to Bob Coecke's top-down quantum grammar approach,
on the Entangled Things podcast:
https://www.entangledthings.com/entangled-things-rob-freeman
You could look at my Facebook group, Oscillating Networks for AI.
Check out my T
Is a quantum basis fractal?
To the extent you're suggesting some kind of quantum computation might
be a good implementation for the structures I'm suggesting, though,
yes. At least, Bob Coecke thinks quantum computation will be a good
fit for his quantum style grammar formalisms, which kind of
I'm disappointed you don't address my points James. You just double
down that there needs to be some framework for learning, and that
nested stacks might be one such constraint.
I replied that nested stacks might be emergent on dependency length.
So not a constraint based on actual nested stacks
Addendum: there's another candidate for this variational model for finding
distributions to replace back-prop (and consequently with the potential to
capture predictive structure consisting of chaotic attractors, though they
don't appreciate the need yet): Extropic, which is proposing using heat
noise.
James,
For relevance to type theories in programming I like Bartosz
Milewski's take on it here. An entire lecture series, but the part
that resonates with me is in the introductory lecture:
"maybe composability is not a property of nature"
Cued up here:
Category Theory 1.1: Motivation and
On Wed, May 22, 2024 at 10:02 PM James Bowery wrote:
> ...
> You correctly perceive that the symbolic regression presentation is not to
> the point regarding the HNet paper. A big failing of the symbolic regression
> world is the same as it is in the rest of computerdom: Failure to recognize
On Thu, May 23, 2024 at 10:10 AM Quan Tesla wrote:
>
> The paper is specific to a novel and quantitative approach and method for
> association in general and specifically.
John was talking about the presentation James linked, not the paper,
Quan. He may be right that in that presentation they
James,
I think you're saying:
1) Grammatical abstractions may not be real, but they can still be
useful abstractions to parameterize "learning".
2) Even if after that there are "rules of thumb" which actually govern
everything.
Well, you might say why not just learn the "rules of thumb".
But
LLM freedom of totally ignoring "objects" (which
seems to be necessary, both by the success of LLMs at generating text,
and by the observed failure of formal grammars historically) if you
specify them in terms of external relations.
Maybe the paper authors don't see it. But the way they
Matt,
Nice breakdown. You've actually worked with language models, which
makes it easier to bring it back to concrete examples.
On Tue, May 28, 2024 at 2:36 AM Matt Mahoney wrote:
>
> ...For grammar, AB predicts AB (n-grams),
Yes, this looks like what we call "words". Repeated structure. No
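Matt's "AB predicts AB" level -- repeated structure, i.e. words -- is just
n-gram counting. A toy bigram sketch (the corpus is invented, purely to show
the mechanism):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count bigrams: how often token B follows token A.
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def predict_next(token):
    """Predict the most frequent continuation: 'AB predicts AB'."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

The interesting cases in this thread are exactly the ones this level misses:
categories like {A, C} inferred from AB, CB without AC ever occurring.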
James,
My working definition of "truth" is a pattern that predicts. And I'm
tending away from compression for that.
Related to your sense of "meaning" in (Algorithmic Information)
randomness. But perhaps not quite the same thing.
I want to emphasise a sense in which "meaning" is an expansion of
diagonalization lemma? "True" but
not provable/predictable within the system?)
On Mon, May 20, 2024 at 9:09 PM James Bowery wrote:
>
>
>
> On Sun, May 19, 2024 at 11:32 PM Rob Freeman
> wrote:
>>
>> James,
>>
>> My working definition of "truth"
"Importantly, the new entity ¢X is not a category based on the
features of the members of the category, let alone the similarity of
such features"
Oh, nice. I hadn't seen anyone else making that point. This paper 2023?
That's what I was saying. Nice. A vindication. Such categories
decouple the
On Wed, May 29, 2024 at 9:37 AM Matt Mahoney wrote:
>
> On Tue, May 28, 2024 at 7:46 AM Rob Freeman
> wrote:
>
> > Now, let's try to get some more detail. How do compressors handle the
> > case where you get {A,C} on the basis of AB, CB, but you don't get,
> >
James,
The Hamiltonian paper was nice for identifying gap filler tasks as
decoupling meaning from pattern: "not a category based on the features
of the members of the category, let alone the similarity of such
features".
Here, for anyone else:
A logical re-conception of neural networks:
understand the (relational) theory behind it in order to jump out of
the current LLM "local minimum".
On Thu, May 23, 2024 at 11:47 PM James Bowery wrote:
>
>
> On Wed, May 22, 2024 at 10:34 PM Rob Freeman
> wrote:
>>
>> On Wed, May 22, 2024 at 10:02 PM James Bowery
Thanks Matt.
The funny thing is though, as I recall, finding semantic primitives
was the stated goal of Marcus Hutter when he instigated his prize.
That's fine. A negative experimental result is still a result.
I really want to emphasize that this is a solution, not a problem, though.
As the