On Wed, May 22, 2024 at 10:02 PM James Bowery <[email protected]> wrote:
> ...
> You correctly perceive that the symbolic regression presentation is not to 
> the point regarding the HNet paper.  A big failing of the symbolic regression 
> world is the same as it is in the rest of computerdom:  Failure to recognize 
> that functions are degenerate relations and you had damn well better have 
> thought about why you are degenerating when you do so.  But likewise, when 
> you are speaking about second-order theories (as opposed to first-order 
> theories), such as Category Theory, you had damn well better have thought 
> about why you are specializing second-order predicate calculus when you do so.
>
> Not being familiar with Category Theory I'm in no position to critique this 
> decision to specialize second-order predicate calculus.  I just haven't seen 
> Category Theory presented as a second-order theory.  Perhaps I could 
> understand Category Theory, and thence where the enthusiasm for it comes 
> from, if someone did so.
>
> This is very much like my problem with the enthusiasm for type theories in 
> general.

You seem to have an objection to second-order predicate calculus, and
to be dismissing category theory because you equate it with that. On
what basis do you equate them? And why do you reject second-order
predicate calculus?
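
On his point that functions are degenerate relations, though, I agree
it is easy to make concrete. A minimal sketch of my own (nothing from
the HNet paper): a binary relation is just a set of (x, y) pairs, and a
function is the degenerate case where each x relates to at most one y:

# A binary relation as a set of (x, y) pairs; a function is the
# degenerate case where each x relates to at most one y.
def is_function(relation):
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False  # x relates to two different ys
        seen[x] = y
    return True

divides = {(a, b) for a in range(1, 5) for b in range(1, 5) if b % a == 0}
squares = {(a, a * a) for a in range(1, 5)}

print(is_function(divides))  # False: 1 divides 1, 2, 3, and 4
print(is_function(squares))  # True: each a has exactly one square

Collapsing the "divides" relation into a function would have to discard
information, which I take to be his point about thinking before you
degenerate.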

What I like about category theory (as well as quantum formulations) is
that I see it as a movement away from definitions in terms of what
things are, and towards definitions in terms of how things are
related. That fits with my observation that variation in objects
(grammar, initially) defies definition in terms of what the objects
are, while remaining accessible to definition in terms of relations.
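
To make that concrete with a toy example of my own: you can pin down
the least upper bound of a set purely by how it relates to everything
else under <=, with no formula for computing it:

# Characterize an object purely by its relations: s is the least upper
# bound of `subset` iff s is above every member of the subset and below
# every other upper bound. No construction, only relationships.
def is_least_upper_bound(s, subset, universe, leq):
    is_upper = lambda u: all(leq(x, u) for x in subset)
    return is_upper(s) and all(leq(s, t) for t in universe if is_upper(t))

universe = range(10)
print([s for s in universe
       if is_least_upper_bound(s, {2, 5, 7}, universe, lambda a, b: a <= b)])
# [7] -- the relations alone single out the object

That is the same flavor as a universal property in category theory: the
object is whatever stands in the right relationships, regardless of
what it "is".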

> But I should also state that my motivation for investigating Granger et al's 
> approach to ML is based not on the fact that it focuses on abduced relations -- 
> but on its basis in "The grammar of mammalian brain capacity" being a 
> neglected order of grammar in the Chomsky Hierarchy: High Order Push Down 
> Automata.  The fact that the HNet paper is about abduced relations was one of 
> those serendipities that the prospector in me sees as a sign of gold in them 
> thar HOPDAs.

Where does the Granger Hamiltonian net paper mention "The grammar of
mammalian brain capacity"? If it's not mentioned, how do you think
they imply it?
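
For anyone following along, here is what the terminology refers to. An
ordinary pushdown automaton is the order-1 case: finite state plus a
single stack, enough for context-free languages. Higher-order PDAs
replace the stack with a stack of stacks (order 2), and so on up the
hierarchy. A minimal order-1 sketch of my own:

# An ordinary (order-1) pushdown automaton recognizing balanced
# parentheses; the stack is the only unbounded memory.
def balanced(s):
    stack = []
    for c in s:
        if c == "(":
            stack.append(c)
        elif c == ")":
            if not stack:
                return False
            stack.pop()
    return not stack

print(balanced("(()())"))  # True
print(balanced("(()"))     # False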

> To wrap up, your definition of "regression" seems to differ from mine in the 
> sense that, to me, "regression" is synonymous with data-driven modeling, which 
> is that aspect of learning, including machine learning, concerned with what 
> IS as opposed to what OUGHT to be the case.

The only mention of regression in that paper seems to indicate that
its authors, too, draw a distinction between their relational encoding
and regression:

'LLMs ... introduce sequential information supplementing the standard
classification-based “isa” relation, although much of the information
is learned via regression, and remains difficult to inspect or
explain'

How do you relate their relational encoding to regression?
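
To be concrete about the distinction as I read it (a toy contrast of my
own, not anything from the paper): regression buries what was learned
in fitted parameters, while a relational encoding keeps explicit facts
you can point at:

# Regression: knowledge ends up in fitted weights (least squares here).
pairs = [(x, 2 * x + 1) for x in range(10)]
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
w = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
b = my - w * mx
print(w, b)  # 2.0 1.0 -- accurate, but the "why" is buried in two numbers

# Relational encoding: explicit (subject, relation, object) triples,
# each inspectable and explainable on its own.
facts = {("cat", "isa", "mammal"), ("mammal", "isa", "animal")}
print(("cat", "isa", "mammal") in facts)  # True, and you can point at the fact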
