You seem to be at a loss to differentiate between associations and relationships as structures within data. Until it is encapsulated in an AI program, it hardly matters whether these structures were translated into computer code or not.
In technical terms, these are vastly different. Whereas a relationship implies a deterministic (pure-functional) bond, associations are far more complex and are not limited by such determinism. In narrow AI, as smart programs, the smartness has been coded deterministically; hence my comment about its relevance.

Again, your view of stochastic systems is rather limited. It suffers from the same limitations as your view of the structures of relationships and associations: in your worldview, an object could never have a value of zero and one at the same time.

As an aside, the notion that the term "liberal" is relevant to computerized systems is nonsense. We should stop trying to personify machines. Storms and earthquakes do not "attack" countries, and neither can "A more liberal definition or non-traditional definition can be applied to the use of those objects as components." If it can, how exactly would this be done?

Last, you show a lack of comprehension of the nature of randomness by stating: "The objects of randomness could be formed using the members or elements of abstractions and generalizations." In this context, the programmer doing this would probably be the actual source of such "randomness".

In conclusion, I've demonstrated my original "tangent", which was an assertion that I could probably comprehend and analyse what you stated, whereas the same could not be said in reverse. It referred to the chasm of understanding among persons in the programmable space, which has, in my opinion, been considerably widened by the advent of AGI, even AI. It seems to be caused (as in a relationship) by something more worrisome than a semantic gap, which has always existed among IT professionals, for many good reasons.

Most importantly, Jim: do you think your approach to programmable logic would eventually yield AGI?
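For what it's worth, the distinction I am drawing can be sketched in a few lines of Python. This is a hypothetical illustration only; the names and the 0.7 association strength are mine, chosen for the example:

```python
import random

# A "relationship" as a pure function: the same inputs always yield
# the same output. The bond is deterministic.
def relationship(parent: str, child: str) -> bool:
    return (parent, child) in {("Alice", "Bob")}

# An "association", by contrast, need not be deterministic: here its
# strength is a probability, so repeated queries can disagree.
def association(a: str, b: str, strength: float = 0.7) -> bool:
    return random.random() < strength

# The relationship is stable across calls; the association is not
# guaranteed to be.
assert relationship("Alice", "Bob") == relationship("Alice", "Bob")
```

The point of the sketch is only that the first bond is reproducible by construction, while the second admits the indeterminism I referred to.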
________________________________
From: Jim Bromer <[email protected]>
Sent: Wednesday, 27 October 2021 15:57
To: AGI <[email protected]>
Subject: Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

Nanograte, I said: "I wasn't thinking of hypergraphs as being completely connected all of the time, since relationships in AI are conditional." Your response was, "This should be true for narrow AI in reductionism only, which is the more physical (in the sense of programmable) aspect of any, functional system." This makes no sense to me. You seemed to go off on a tangent which you did not actually explain, and came up with a conclusion which is almost the direct opposite of what I was saying. I was talking about AI that can be implemented with computers. If programmable computers are not relevant to what you were saying, then we are talking about different things.

Stochastic models, which use numbers, can be made with measurable objects where the relationships between the objects are understood and measurable, and where different mixtures (of quantities) are relevant. But a model can also include other kinds of measurable objects that may be relevant to it. A more liberal definition or non-traditional definition can be applied to the use of those objects as components. The objects of randomness could be formed using the members or elements of abstractions and generalizations.
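[Editor's note: read charitably, Jim's last sentence admits a minimal sketch, in which the "objects of randomness" are drawn from the member elements of an abstraction rather than from raw numbers. The abstraction "vehicle" and its members below are illustrative assumptions, not anything either author specified.]

```python
import random

# Members of the abstraction "vehicle" serve as the objects of
# randomness; the stochastic model samples over them.
vehicles = ["car", "bicycle", "truck", "boat"]

random.seed(42)  # fix the seed so the sample can be inspected
sample = [random.choice(vehicles) for _ in range(5)]
print(sample)  # a stochastic mixture over the abstraction's members
```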
Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / see discussions<https://agi.topicbox.com/groups/agi> + participants<https://agi.topicbox.com/groups/agi/members> + delivery options<https://agi.topicbox.com/groups/agi/subscription> Permalink<https://agi.topicbox.com/groups/agi/T1675e049c274c867-M263c87817ed35c5fc56b5251>
Permalink: https://agi.topicbox.com/groups/agi/T1675e049c274c867-M2fd28f6d6283d9296146ed55
