JB stated: ""head up its ass" is a cute aphorism for the ancient concept of Maya but doesn't really reflect the rigorous reformulation that Godel owes us."
Thankfully, the balance of scientific argument doesn't rest squarely on the shoulders of a single scientist, however brilliant. It wasn't so much me fabricating a "cute aphorism" as applying a Gödelesque mindset to your question about Gödel's likely perspective. I thought you offered concise logic about a system that may well end up with its head up its own ass. Many systems today are designed to do exactly that, the halting problem possibly being proof of it. But wait! Is it actual proof, or just work in progress, not yet completed? Irrespective of any opinion about Gödel, I'd think the relationship between finite and infinite systems is rather relevant to the AGI discussion.

On Sat, Nov 13, 2021 at 6:57 PM James Bowery <[email protected]> wrote:

> "head up its ass" is a cute aphorism for the ancient concept of Maya but
> doesn't really reflect the rigorous reformulation that Godel owes us.
>
> On Sat, Nov 13, 2021 at 7:10 AM Quan Tesla <[email protected]> wrote:
>
>> Godel might muse that even a system with its head up its ass cannot know
>> itself to completion.
>>
>> On Sat, Nov 13, 2021 at 8:43 AM James Bowery <[email protected]> wrote:
>>
>>> What would Godel say about a NOT gate with its input connected to its
>>> output?
>>>
>>> On Fri, Nov 12, 2021 at 9:28 PM Quan Tesla <[email protected]> wrote:
>>>
>>>> Gödel's incompleteness theorem still wins this argument. However, what
>>>> really happens in unseen space remains fraught with possibility. The
>>>> question remains: how exactly is this relevant to AGI?
>>>>
>>>> In transition, energy is always "lost" to externalities. Excellent
>>>> design would limit such losses so as not to impact internal
>>>> functionality negatively. E.g., losses can be recycled for reuse, and
>>>> so on. It all depends on the relevance of the dynamical boundary that
>>>> was either set, or which emerged.
>>>>
>>>> Even so, the "lossy" argument should be finite.
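[Interjecting on the NOT-gate question above: x = NOT x has no Boolean fixed point, so a simulated gate with its output wired back to its input never settles into a consistent state; it just oscillates, a one-gate ring oscillator. The simulation below is my own illustration of that point, not anything Gödel wrote:]

```python
# A NOT gate with its output fed back to its input: x = NOT x has no
# Boolean solution, so repeatedly applying the gate to its own output
# never reaches a fixed point -- the value oscillates forever.
def not_gate(x: bool) -> bool:
    return not x

x = False
trace = []
for _ in range(6):
    x = not_gate(x)  # feed the output back into the input
    trace.append(x)

print(trace)  # [True, False, True, False, True, False]
```

[The finite circuit doesn't halt on an answer; it cycles. That's the small-scale analogue of the self-referential sentences Gödel and Turing built.]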
>>>> As a system, its
>>>> boundaries of argument should also be maintained. This remains true
>>>> for all systems, even systems of systems. As such, it's more a
>>>> function of a design decision than an incomplete argument.
>>>>
>>>> On 12 Nov 2021 20:41, "James Bowery" <[email protected]> wrote:
>>>>
>>>>> On Fri, Nov 12, 2021 at 6:27 AM John Rose <[email protected]> wrote:
>>>>>
>>>>>> ...
>>>>>>
>>>>>> While these examples may sound edgy, often these incompletenesses are
>>>>>> where there is much to be learned. Exploring may help some
>>>>>> understandings especially, as James pointed out, that "AIXI = AIT⊗SDT
>>>>>> = Algorithmic Information Theory ⊗ Sequential Decision Theory".
>>>>>
>>>>> AIXI *reduced* the parameter count of an AGI with unlimited
>>>>> computation but limited information. Before you jump all over the fact
>>>>> that it is necessary to limit the computation, we still need to talk
>>>>> about the remaining open parameters in AIXI. In AIT the open parameter
>>>>> is: "Which Turing Machine?" In SDT the open parameter is: "Which
>>>>> Utility Function?"
>>>>>
>>>>> To answer "Which Turing Machine?" I've intuited an approach that Matt
>>>>> reduced to a pretty restrictive descriptive space of NOR DCGs. This
>>>>> reduces what might be thought of as the descriptive space of Turing
>>>>> Machines to what Matt formalized. It doesn't get rid of _all_ of the
>>>>> unknowns in that space, but it is far more rigorous than the
>>>>> descriptive space of all UTMs. There is a _lot_ of work to be done
>>>>> with this approach and advances will, IMNSHO, have immediate and
>>>>> profound application in logic design.
>>>>>
>>>>> To answer "Which Utility Function?" we must become a lot more
>>>>> philosophically serious than has heretofore been the case in all the
>>>>> brouhaha about "friendly AI".
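[Interjecting on "Which Turing Machine?": the open parameter is visible even in a toy Solomonoff-style mixture. The "machine" below, where a program's bits are simply replayed periodically, is an arbitrary illustrative choice of mine, and that arbitrariness is exactly the point: a different reference machine induces a different 2^-length prior. A sketch of the idea, not AIXI itself:]

```python
from itertools import product

# Toy reference "machine": a program (a bit tuple) outputs its bits
# repeated periodically. This choice of machine fixes the prior below.
def run(program, n):
    return [program[i % len(program)] for i in range(n)]

def predict_next(observed, max_len=8):
    # Solomonoff-style mixture: weight each program by 2^-length, keep
    # only programs consistent with the observation, and sum the weight
    # each one assigns to the next bit.
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            if run(program, len(observed)) == list(observed):
                next_bit = program[len(observed) % length]
                weights[next_bit] += 2.0 ** (-length)
    return max(weights, key=weights.get)

print(predict_next([1, 0, 1, 0, 1]))  # 0 -- the short periodic program dominates
```

[Swap in a different toy machine and the induced prior, and hence some predictions, change; the invariance theorem only bounds the difference up to a machine-dependent constant. That residual machine-dependence is the open parameter being discussed.]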
>>>>> Hutter's paper "A Complete Theory of Everything (will be subjective)
>>>>> <https://arxiv.org/abs/0912.5434>" is his (still incomplete) approach
>>>>> to addressing what you refer to as "these incompletenesses".
>>>>>
>>>>> Now, having said all that: Yes, the measurement level of abstraction
>>>>> does get into the economics of computational resources and, yes, it
>>>>> would be nice to find approaches that obviate all of the above
>>>>> "incompletenesses", but you must do better than to redefine the words
>>>>> "lossy" and "lossless" compression, as that merely hobbles an existing
>>>>> approach to these incompletenesses while at the same time threatening
>>>>> to hobble their practical applications by confusing the meanings of
>>>>> words.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T5ff6237e11d945fb-Mc2236031bc6960d08147c790
Delivery options: https://agi.topicbox.com/groups/agi/subscription
