The ANPA guy most likely to get me to change my perspective is Michael
Manthey, a CS professor whose recent patent on "space like computations"
<https://patents.google.com/patent/US20160148110A1/en> extends beyond
complex probabilities to quaternionic probabilities.  He would, I think,
tend to cast negative case counts as teleological:

"So the conclusion to the question 'What is the relationship between
causality -- the explanation of events in terms of causes -- and teleology
-- the explanation of events in terms of purposes?' is that both are true
and equivalent accounts of reality, to the extent that the physical
universe indeed has the structure U(1) × SU(2) × SU(3) × SO(4), which is
the Standard Model of quantum mechanics augmented with an entangling
projection into 3+1d via SO(4)."
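Downthread, Appendix A's "Complex Case Counts" is described as deriving complex
probability amplitudes from the combinatorics of real-valued 2x2 matrices.  The
kernel of that idea -- that the 4-valued cycle (1, i, -1, -i) is realized by
real 2x2 rotation matrices, so complex arithmetic needs no imaginary primitive
-- can be sketched as follows.  This is my own minimal illustration of the
standard matrix representation of the complex units, not code from the paper
or the patent:

```python
import numpy as np

# The four "case count" values (1, i, -1, -i) as real 2x2 matrices:
# ONE is the identity; I is rotation by 90 degrees.
ONE = np.array([[1, 0], [0, 1]])
I = np.array([[0, -1], [1, 0]])

# Matrix multiplication reproduces the 4-valued cycle:
assert (I @ I == -ONE).all()           # i^2 = -1
assert (I @ I @ I == -I).all()         # i^3 = -i
assert (I @ I @ I @ I == ONE).all()    # i^4 = 1

def as_matrix(a, b):
    """The complex 'case count' a + b*i as a real 2x2 matrix."""
    return a * ONE + b * I

# Matrix arithmetic then matches complex arithmetic:
z = as_matrix(3, 4)                    # 3 + 4i
w = as_matrix(1, -2)                   # 1 - 2i
assert (z @ w == as_matrix(11, -2)).all()  # (3+4i)(1-2i) = 11 - 2i
```

So amplitudes built from these matrices are "complex" purely as a theorem of
real 2x2 combinatorics, which is the flavor of derivation the Appendix A
reference below is pointing at.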

On Wed, Jan 6, 2021 at 3:41 PM James Bowery <jabow...@gmail.com> wrote:

> I tend to see negative case counts as providing atomic action roll-backs
> (in relational database state-transition terms) -- a kind of "undo"
> operation -- but not necessarily back to exactly the prior state.  In
> physical/QM terms they provide for a time-symmetric flow of
> information/causality -- permitting the "future" to
> influence/constrain/cause the "past" in the case of negative case counts.
> But I hasten to add that I differ with Kauffman and most others in the
> Laws of Form/ANPA community on this (including Etter, Shoup, and even
> G. Spencer-Brown himself).  Everyone but me, it seems, either sees space
> as primary OR sees spacetime as inseparably emergent as a whole.
>
> My tendency is more with Wagner:  “You see, son, here time becomes space.”
>
> On Wed, Jan 6, 2021 at 2:38 PM Ben Goertzel <b...@goertzel.org> wrote:
>
>> Interesting, will reflect a bit on that...
>>
>> James, what is your interpretation of negative case counts, in this model?
>>
>> On Wed, Jan 6, 2021 at 11:57 AM James Bowery <jabow...@gmail.com> wrote:
>> >
>> > The correct link is on archive.org:
>> >
>> >
>> https://web.archive.org/web/20130511044913/http://www.boundaryinstitute.org/bi/articles/Link_Theory_intro.pdf
>> >
>> >
>> >
>> > On Wed, Jan 6, 2021 at 1:17 PM James Bowery <jabow...@gmail.com> wrote:
>> >>
>> >> See Appendix A, "Complex Case Counts," for what qualifies as a dynamical
>> >> logic's 4-valued (1, i, -1, -i) approach to deriving the core of quantum
>> >> mechanics (complex probability amplitudes) as a theorem of the
>> >> combinatorics of four real-valued 2x2 spinor matrices.
>> >>
>> >> On Sat, Jan 2, 2021 at 1:47 PM Ben Goertzel <b...@goertzel.org> wrote:
>> >>>
>> >>> To kick off the new year ... here is Part 2 of a trilogy of papers I'm
>> >>> working on ...
>> >>>
>> >>> "Paraconsistent Foundations for Probabilistic Reasoning, Programming
>> >>> and Concept Formation"
>> >>>
>> >>> https://arxiv.org/abs/2012.14474
>> >>>
>> >>> this one grounds (key aspects of) PLN in paraconsistent logic, and
>> >>> thus makes clearer the programming-language Curry-Howard cognate of
>> >>> PLN (via the known prog-lang cognate of the relevant sorts of
>> >>> paraconsistent logic).  Also some other related stuff, like
>> >>> paraconsistent Formal Concept Analysis...
>> >>>
>> >>> Part 3 (another paper) will sketchily represent the core OpenCog
>> >>> cognitive algorithms as Galois connections involving
>> >>> continuation-passing-style metagraph chronomorphisms, where the
>> >>> metagraph targets are labeled w/ probabilistic/paraconsistent
>> >>> dependent types as outlined in Part 2 ...
>> >>>
>> >>> But I will defer starting on that till I finish some work on the SNet
>> >>> roadmap and related issues ...
>> >>>
>> >>> --
>> >>> Ben Goertzel, PhD
>> >>> http://goertzel.org
>> >>>
>> >>> “Words exist because of meaning; once you've got the meaning you can
>> >>> forget the words.  How can we build an AGI who will forget words so I
>> >>> can have a word with him?” -- Zhuangzhi++
>> >
>> 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T425c68f0cea319cd-M4a7130ed35763297e95b9cac
Delivery options: https://agi.topicbox.com/groups/agi/subscription
