From *A logical re-conception of neural networks: Hamiltonian bitwise part-whole architecture*
> *From hierarchical statistics to abduced symbols*
>
> It is perhaps useful to envision some of the ongoing developments that
> are arising from enlarging and elaborating the Hamiltonian logic net
> architecture. As yet, no large-scale training whatsoever has gone into
> the present minimal HNet model; thus far it is solely implemented at a
> small, introductory scale, as an experimental new approach to
> representations. It is conjectured that with large-scale training,
> hierarchical constructs would be accreted as in large deep network
> systems, with *the key difference that, in HNets, such constructs
> would have relational properties* beyond the "isa" (category)
> relation, as discussed earlier.
>
> Such relational representations lend themselves to abductive steps
> (McDermott 1987) (or "retroductive" (Peirce 1883)); i.e., inferential
> generalization steps that go beyond warranted statistical information.
> If John kissed Mary, Bill kissed Mary, and Hal kissed Mary, etc., then
> a novel category ¢X can be abduced such that ¢X kissed Mary.
>
> Importantly, the new entity ¢X is not a category based on the features
> of the members of the category, let alone the similarity of such
> features. I.e., it is not a statistical cluster in any usual sense.
> Rather, it is a "position-based category," signifying entities that
> stand in a fixed relation with other entities. John, Bill, Hal may not
> resemble each other in any way, other than being entities that all
> kissed Mary. Position-based categories (PBCs) thus fundamentally
> differ from "isa" categories, which can be similarity-based (in
> unsupervised systems) or outcome-based (in supervised systems). PBCs
> share some characteristics with "embeddings" in transformer
> architectures.
>
> Abducing a category of this kind often entails overgeneralization, and
> subsequent learning may require learned exceptions to the
> overgeneralization. (Verb past tenses typically are formed by
> appending "-ed", and a language learner may initially overgeneralize
> to "runned" and "gived," necessitating subsequent exception learning
> of "ran" and "gave".)

The abduced "category" ¢X bears some resemblance to the way currying (as
in combinator calculus <https://en.wikipedia.org/wiki/Combinatory_logic>)
binds a parameter of a symbol to define a new symbol (sketched below). In
practice it only makes sense to bother creating this new symbol if it, in
concert with all other symbols, compresses the data in evidence. As for
"overgeneralization": that applies to any error in prediction encountered
during learning and, in the ideal compressor, increases the algorithm's
length, even if only by appending the exceptional data in a conditional --
*NOT* "falsifying" anything, as that rascal Popper would have it.

This is "related" to quantum logic in the sense that Tom Etter calls out
in the linked presentation:

> Digram box linking, which is based on the *mathematics of relations
> rather than of functions*, is a more general operation than the
> composition of transition matrices.
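To make the currying analogy concrete, here is a minimal Python sketch.
Nothing in it comes from the paper: the fact base, the `holds` predicate,
and the name `cX` are illustrative inventions. Binding two arguments of a
ternary predicate defines a new unary symbol, the analogue of the abduced
position-based category ¢X:

```python
from functools import partial

# Toy fact base of (subject, relation, object) triples -- purely
# illustrative, not data from the paper.
facts = {
    ("John", "kissed", "Mary"),
    ("Bill", "kissed", "Mary"),
    ("Hal",  "kissed", "Mary"),
    ("Mary", "slapped", "Bill"),
}

def holds(relation, obj, subject):
    """True iff (subject, relation, obj) is attested in the fact base."""
    return (subject, relation, obj) in facts

# Currying: binding two parameters of the ternary predicate defines a
# new unary symbol, the analogue of the abduced category ¢X.
kissed_mary = partial(holds, "kissed", "Mary")

# The position-based category: whatever fills the subject slot of the
# fixed relation (kissed, Mary). Feature similarity among John, Bill,
# and Hal plays no role in membership.
cX = {s for (s, r, o) in facts if kissed_mary(s)}
print(sorted(cX))  # ['Bill', 'Hal', 'John']
```

Note that `cX` is defined entirely by the slot the members occupy, which
is the sense in which a PBC is not a statistical cluster.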
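The compression criterion can be given a back-of-envelope reading. This
sketch assumes raw character counts as a crude stand-in for the code
length an ideal compressor would assign; the numbers mean nothing beyond
the comparison:

```python
def code_length(tokens):
    """Toy description length: total characters, a crude stand-in for
    the length an ideal compressor would assign."""
    return sum(len(t) for t in tokens)

facts = [("John", "kissed", "Mary"),
         ("Bill", "kissed", "Mary"),
         ("Hal",  "kissed", "Mary")]

# Encoding 1: spell out every fact verbatim.
naive = sum(code_length(f) for f in facts)

# Encoding 2: define the new symbol once (its member list) and state
# the shared relation once, as (¢X, kissed, Mary).
members = [s for (s, _, _) in facts]
abduced = code_length(members) + code_length(("¢X", "kissed", "Mary"))

# The new symbol earns its keep only if the total code shrinks.
print(naive, abduced, abduced < naive)   # 41 23 True
```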
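The past-tense example reads naturally as a rule plus an appended
exception table: each prediction error lengthens the program slightly
instead of discarding the rule. A hypothetical sketch:

```python
def past_tense(verb, exceptions):
    # Learned exceptions are conditionals appended after prediction
    # errors; the "-ed" rule itself is never discarded, only extended,
    # at a small cost in description length.
    if verb in exceptions:
        return exceptions[verb]
    return verb + "ed"

exceptions = {}
print(past_tense("walk", exceptions))   # walked
print(past_tense("run", exceptions))    # runned  <- overgeneralization

# Each observed error appends one (input, output) pair: the minimal
# increase in the algorithm's length described above.
exceptions["run"] = "ran"
exceptions["give"] = "gave"
print(past_tense("run", exceptions))    # ran
```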
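Etter's contrast between the mathematics of relations and of functions
can be glossed numerically. This is only a gloss, not his digram-box
construction: relational composition of Boolean matrices keeps the
OR-of-ANDs witness structure, while the corresponding transition-matrix
product collapses all witnesses into a single probability:

```python
import numpy as np

# A relation on a finite set as a Boolean matrix; relational
# composition is OR-of-ANDs, so a pair can be linked through several
# simultaneous witnesses without collapsing them.
R = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
S = np.array([[0, 1, 0],
              [0, 1, 0],
              [1, 0, 0]])
relational = (R @ S) > 0          # R ; S, a relation again

# A transition matrix treats the same linkage functionally: rows are
# normalized, and the matrix product merges all witnesses into one
# number, losing the AND-structure of the individual cases.
P = R / np.maximum(R.sum(axis=1, keepdims=True), 1)
Q = S / np.maximum(S.sum(axis=1, keepdims=True), 1)
functional = P @ Q

print(relational.astype(int))
print(functional.round(2))
```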
On Thu, May 16, 2024 at 7:24 PM James Bowery <jabow...@gmail.com> wrote:

> First, fix quantum logic:
>
> https://web.archive.org/web/20061030044246/http://www.boundaryinstitute.org/articles/Dynamical_Markov.pdf
>
> Then realize that empirically true cases can occur not only in
> multiplicity (OR), but with structure that includes the simultaneous
> (AND) measurement dimensions of those cases.
>
> But don't tell anyone, because it might obviate the risible tradition
> of so-called "type theories" in both mathematics and programming
> languages (including SQL and all those "fuzzy logic" kludges), and
> people would get *really* pissy at you.
>
> On Thu, May 16, 2024 at 10:27 AM <ivan.mo...@gmail.com> wrote:
>
>> What should a symbolic approach include to entirely replace the
>> neural-network approach in creating true AI? Is that task even
>> possible? What benefits and drawbacks could we expect or hope for if
>> it is possible? If it is not possible, what would be the reasons?
>>
>> Thank you all for your time.