Jim, I've been trying to wrap my mind around a significant semantic gap that 
seemingly exists between the philosophy, concepts, and methods that different 
researchers have been employing to understand AGI. I'm going to try to bridge 
this gap.

When I read what you're saying, I comprehend it. Yet when I speak in a systems 
language, one well documented by science rather than a developer's language 
per se, it sounds rather foreign to some.

For example, you said: "And I wasn't thinking of hypergraphs as being 
completely connected all of the time, since relationships in AI are 
conditional." This should be true only for narrow AI under reductionism, which 
is the more physical (in the sense of programmable) aspect of any functional 
system.

Furthermore: "And I was also thinking that relations (or edges of hypergraphs) 
could be used as references to subsystems (or subgraphs)." In theory this 
might be so, but in practice the dimensional shifts implied within a 
hypergraph already scale far beyond many invisible boundaries. Unless you set 
the boundary down firmly, in a hypothetical sense, as you would when placing a 
scope limit on a project, the workspace may become too large to be dealt with 
at the programmable level.
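
To make that concrete, here is a rough Python sketch of what I have in mind. 
It is only my own illustration, not anyone's working design: hyperedges may 
point at whole subgraphs (subsystems), and the workspace carries an explicit 
scope limit so it cannot grow without bound. Every name and number is invented.

    # Illustration only: a hypergraph whose hyperedges may reference whole
    # subgraphs, with an explicit scope limit acting as the firm boundary.
    class Hypergraph:
        def __init__(self, max_nodes):
            self.max_nodes = max_nodes      # the firmly-set boundary (scope limit)
            self.nodes = set()
            self.edges = []                 # each edge: (label, list of members)

        def add_nodes(self, *names):
            if len(self.nodes | set(names)) > self.max_nodes:
                raise ValueError("scope limit exceeded: workspace too large")
            self.nodes.update(names)

        def add_edge(self, label, members):
            # a member may be a plain node name or another Hypergraph (a subsystem)
            self.edges.append((label, list(members)))

    engine = Hypergraph(max_nodes=50)
    engine.add_nodes("sensor", "controller")
    engine.add_edge("feedback", ["sensor", "controller"])

    aircraft = Hypergraph(max_nodes=200)
    aircraft.add_nodes("airframe", "pilot")
    aircraft.add_edge("flight-control", ["pilot", engine])   # edge referencing a subgraph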

Even further: "Can you use stochastic methods to compare theories to models (or 
to constraints or goals of the theory)?" If by "stochastic methods" you meant 
methods that result in some weighted state, then they should help elucidate 
those aspects of a system under consideration, but it is unlikely that they 
would be sufficient for a holistic system on their own.
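
As a very small illustration of the kind of weighted state I mean, and nothing 
more than that, here is a toy Monte Carlo comparison in Python. The model, the 
constraint, and every number in it are invented placeholders.

    # Illustration only: sample a model's uncertain parameter, score each run
    # against a constraint taken from the theory, and summarise agreement as
    # a single weighted value.
    import random

    def model(growth_rate, steps=20, x0=1.0):
        # toy model: simple compounding growth
        x = x0
        for _ in range(steps):
            x *= (1.0 + growth_rate)
        return x

    def theory_constraint(x):
        # toy theoretical requirement: the final state stays below a ceiling
        return x < 10.0

    samples = 10_000
    hits = sum(1 for _ in range(samples)
               if theory_constraint(model(random.gauss(0.10, 0.03))))
    agreement = hits / samples            # the "weighted state" of theory vs. model
    print(f"weighted agreement between theory and model: {agreement:.2f}")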

Second to last you stated: "I don't see why not. It is like using what-if 
situations.  I guess a stochastic method - in the traditional sense - would be 
part of what-if modelling. That makes sense right?"
I'm open to correction, but "traditionally", in the West, stochastic methods 
for "what-if modelling" were employed to support biased decision making, 
particularly in large-scale strategic contexts.

Last: by "testing theory for stochastic, CAS at the highest level of 
abstraction" I meant that I develop theory for predicting systems of ultra 
complexity (CAS), while dealing with the dynamical environment of the system 
as it tends to evolve and shift independently along a stochastic scale (more 
at a logical level than by measuring every little thing at the 
physical/operational level).
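
If it helps, here is a rough Python sketch of what I mean by staying at the 
logical level: a toy complex adaptive system whose environment drifts 
stochastically on its own, observed only through a coarse aggregate state per 
epoch rather than by measuring every individual interaction. It is a made-up 
illustration, not my actual method, and every parameter is invented.

    # Illustration only: a toy CAS with a stochastically drifting environment,
    # observed only through a coarse (logical-level) summary per epoch.
    import random

    def run_cas(agents=100, epochs=50):
        states = [random.random() for _ in range(agents)]   # each agent's activity
        environment = 0.5                                   # shared external pressure
        history = []
        for _ in range(epochs):
            environment += random.gauss(0.0, 0.05)          # environment evolves independently
            environment = min(max(environment, 0.0), 1.0)
            # agents adapt toward the environment, with noise
            states = [s + 0.2 * (environment - s) + random.gauss(0.0, 0.02)
                      for s in states]
            history.append(sum(states) / agents)            # logical-level summary only
        return history

    trajectory = run_cas()
    print("coarse system trajectory:", [round(v, 2) for v in trajectory[:5]], "...")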

I include these perspectives assuming that any system, or parts or wholes 
thereof, tends towards optimal efficiency, as effective complexity, zero-point 
energy, a.k.a. states of equilibrium. In other words, for holistic, natural 
systems one always has to include both the linear and the alinear perspectives 
for any subsystem. Even so, there are other perspectives to include; consider, 
for example, a rainbow in the sky.


This is being done in order to develop sufficient understanding from which to 
"weave" the integrals of a "single" system. I think programmers far too often 
try to "weave" systems from their code, trying to manifest their idea of the 
system via the hard constraints of a simple tool.

I just cannot see how typing a million lines of code would result in a 
dynamical CAS. Maybe others have realized this as well. For this reason alone, 
most of AI will eventually fall flat on its ass, and perhaps cause devastating 
damage to the economies and control systems it is being deployed in.

For example, consider the Boeing disasters. That was a direct failure of 
conditional narrow AI, a design and programming oversight. The whole stinking 
fish should have been blamed for those material losses and deaths, but only 
the head was. Frightening, really.

Further, consider the covid-19 experience, with zero accountability attached 
to experimental biotechnology being released among - forced upon - living 
systems and life-supporting ecologies. Perhaps it was even, in my opinion, an 
attempt at AI integration into living systems, a shortcut to AGI. Many recent 
patent approvals support this notion. How can this possibly not end up 
devastating the living ecology?

To continue answering your question on abstraction: quantum systems could pass 
through states of chaos and order near-instantly, many times over, as a life 
cycle of naturally scripted chain reactions, yet what emerges may be 
tremendous bursts of energy. An active volcano, and everything seen or unseen 
that it affects over time, might be one such example.

As such, in the context of my limited answer, abstraction simply means the 
different levels/aspects of "visible/traceable" knowledge pertinent to any 
system being considered.

By the same token, deabstraction would mean including methods for 
instantiating "unseen" knowledge and expressing it in a language suitable to 
the rest of the system, the part already made visible to the public; in other 
words, preparing it to be coded in an application and integrating it with the 
modules of code already written.

Without running ahead of myself, such deabstraction would then be akin to 
integrating a previously unseen perspective into the wholeness of the 
knowledge model being assembled, thereby manifesting its functionality. For 
example, it would be similar to integrating quantum entanglement into an 
existing AGI systems model.
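
Loosely, and only as a toy illustration of the word as I use it, the operation 
might look something like this in Python. The knowledge model, the relation 
names, and the translation step are all hypothetical.

    # Illustration only: "deabstraction" as translating an unseen relation into
    # the vocabulary the existing model already understands, then integrating it.
    class KnowledgeModel:
        def __init__(self):
            self.relations = {}          # name -> description in the model's own terms

        def integrate(self, name, description):
            self.relations[name] = description

    def deabstract(model, name, unseen_description, translate):
        # express the unseen knowledge in language suitable to the rest of the system
        model.integrate(name, translate(unseen_description))

    agi_model = KnowledgeModel()
    agi_model.integrate("signal-flow", "modules exchange weighted messages")

    deabstract(
        agi_model,
        "entanglement-like-coupling",
        "two distant subsystems share state non-locally",
        translate=lambda d: d + " (modelled as an instantaneous shared variable)",
    )
    print(agi_model.relations)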

None of this becomes possible without the support of scientifically sound 
methods. In my view, the correct toolkit of philosophy, language, methodology, 
convention, information-processing technology, and testability has to be 
employed as a suitable ontology for AGI-related development. Even so, it has 
to be kept in a ready state of flexibility. Simply by doing that, massive 
uncertainty is introduced into the process. We should then be asking: "How do 
we reduce uncertainty within AGI development?"

From what I'm observing, the theory of order from chaos is being applied in a 
fatalistic sense. First, there's no real control at all. Where there is 
control, little real progress is being made. And so on.

There are deep cracks appearing in this "rocking-boat scenario", but the 
fatalistic nihilists are seemingly saying: "Be damned with this world and its 
inhabitants. It was shit anyway to begin with. Let's start over." The 
resulting destruction is being welcomed as an "expected outcome". I think this 
kind of thinking is insane.

Maybe it's a case of humankind simply not being meant to mess around with 
biosynthetic AGI at all? Who would be willing to consider that? And if not, 
how could persons with such restrictive mindsets have the mind to develop 
trustworthy AGI in the first instance? How could they be trusted by an 
unknowing society at all?

Too many really complex questions there, right?

________________________________
From: Jim Bromer <[email protected]>
Sent: Sunday, 24 October 2021 03:02
To: AGI <[email protected]>
Subject: Re: [agi] Re: Ideas Have to Act on Other Kinds of Ideas

Nongrate: I did not understand what it was that you getting at. I still don't 
completely get it. And I wasn't thinking of hypergraphs as being completely 
connected all of the time, since relationships in AI are conditional.  And I 
was also thinking that relations (or edges of hypergraphs) could be used as 
references to subsystems (or subgraphs).
However, I started wondering: Can you use stochastic methods to compare 
theories to models (or to constraints or goals of the theory)?  I don't see why 
not. It is like using what-if situations.  I guess a stochastic method - in the 
traditional sense - would be part of what-if modelling. That makes sense right? 
And, what did you mean by, "testing theory for stochastic, CAS at the highest 
level of abstraction." ?