Dorian et al. Installment #2 of my stab at a paper. It is section 5 in the original docx. This is a section on the synthetic approach and the science of consciousness ... again with a slant on AGI investment.
Section 4 is next and is where I'll need Dorian's contribution for the organic synthetic AGI program example. I have put in a section for references, although only a few have been added as yet. I suggest an acknowledgement section. Because my personal circumstances mean I can't spend time in discussion until next week, if I could continue my 'seagull' depositing technique it would be greatly appreciated.

===================================

5 Machine consciousness and the synthetic AGI approach

Synthetic AGI, whatever the chosen hybridization level, cannot divorce itself from dealing with consciousness. Indeed, in introducing synthetic approaches to AGI such as those described above, it becomes quite clear that the discipline of AGI itself and the science of consciousness are deeply connected. We find ourselves faced with the realization that the science of consciousness and the AGI program may eventually be regarded as the same thing. It is worth acknowledging the possibility that explicit recognition of this state of affairs is actually central to the proposed changes in AGI approach. To see this confronting possibility we can use the established vocabulary of the youthful science of consciousness (Hales, 2014). In the most general sense usable in a science context, the word consciousness refers to the first-person perspective (1PP) of *anything*. We can consider consciousness of X to be '*what it is like to be X from the first-person perspective of being X*'. To scientifically study consciousness is to construct some kind of account predictive of the 1PP of some part of the natural world. We need no theory of consciousness to speak of it this way. Nor need we attribute any relationship between consciousness-as-the-1PP and any behaviour, memory or other state of affairs.
We need not presuppose any particular chunk of the natural world to speak of consciousness this way. It is a completely general concept, and one of the few concrete positions that the science of consciousness has been able to formulate. Consider 'being' a rock. What might the scientific statement of the consciousness, the 1PP, of a rock be? Rocks cannot behave. Yet we have to admit that from the perspective of *being* the rock there may be a first-person perspective of some kind. It may be an experience of 'happy' or 'cold' or something more sophisticated. For example, there may be a visual scene, from the point of view of being the rock, of everything surrounding the rock. If we had a science of consciousness and were able to claim, scientifically, that '*it is not like anything from the 1PP of a rock*', and that claim was scientifically accepted, what would that statement look like? The answer to this riddle is that currently we do not know. What we can demonstrate, however, is that central to the synthetic AGI science program is the potential to say something about consciousness, the 1PP, in a way that was previously impossible. That is why we have to accept, from its inception, that synthetic AGI and the science of consciousness are deeply enmeshed. This can be a difficult mental leap for some investigators. To help, consider the 1PP of a bacterium, worm, mouse, dog, computer, neuromorphic chipset, tree, rock and human. Of all these things, the only one we know for sure is 'like something' to be is that part of the natural world called a human or, better, a human brain. It is also one of the few proved facts of the science of consciousness that whatever the physics involved in the generation of a 1PP, it is contained within human brain tissue only and in no other part of the human.
This knowledge of the existence of a 1PP is accepted despite our being unable to scientifically prove it to each other. This is because we cannot observe observations (the mental, experiential life of another human) themselves. The science of consciousness is a scientific account of how we observe at all, in the first place. All we can actually observe with consciousness is brain material delivering consciousness (an act of observation) to the brain itself, from the 1PP. Some deny consciousness exists at all (Dennett, 1991). Some accept consciousness as real but irrelevant to intelligence and cognition. We are forced here to accept that there is something to explain, not because any particular position is right or wrong, but merely because the argument exists at all. The argument itself, a capacity to be unsure or confused about consciousness, is all that is needed to justify that a science of it is required. Here we can show that synthetic AGI can be used to get closer to an answer to the question of consciousness in humans and elsewhere, including machines. In doing so we also get to understand the relationship between consciousness and intelligence. We must also concede that we may never solve the problem. It is, however, possible to see how synthetic AGI may take us as close to these answers as we can ever get. Perhaps in its maturity there may be more clarity in this. At this juncture of the birth of synthetic AGI options, however, this seems to be what is afoot.

To see the centrality of synthetic AGI approaches to a science of consciousness and its connection to intelligence, simply ask these questions:

(a) "*What is it like to be a computer (100% analytic in software) AGI inside a robot body X?*"

(b) "*What is it like to be a neuromorphic chipset (100% analytic in hardware) AGI inside a robot body X?*"

(c) "*What is it like to be a 100% synthetic organic brain AGI inside a robot body X?*"

(d) "*What is it like to be a neuromorphic chipset (100% inorganic synthetic in hardware) AGI inside a robot body X?*"

(e) "*What is it like to be an H% synthetic hybrid AGI inside a robot body X?*"

Notice that in every case robot body X is the same. The same sensory/motor capabilities exist in each instance. Robots (a) to (e) may behave differently in identical contexts. They may have different abilities to learn and behave, and different levels of actual or potential intelligence. Whatever the differences between (a)...(e), they relate entirely to differences in the brain of the complete robot.

What is particularly striking is (c). Consider a synthesised *organic* human brain that is grown to become (somehow, by means yet to be found) identical to a human brain except to the extent that its peripherals must accommodate the robot body's sensory/motor capacities. In this case we are faced with a very compelling argument that, because all of the physics of the brain is retained, the resultant brain must have consciousness. If anyone decides to deny consciousness in the synthetic brain, then that claimant must argue that a brain originating synthetically has some different level or kind of consciousness compared to an identical naturally arising brain. We do not have to make that claim here, or prove it either way. What we do here is demonstrate how the synthetic brain idea confronts this prospect and suggests a scientific way to make some progress: through the development of synthetic AGI brains.

Consider next the inorganic synthetic brains (d) and (e). Whatever it is in the physics of the brain that results in consciousness may be literally incorporated in the physics of a new kind of synthetic-style neuromorphic chipset. Actual brain physics, albeit in inorganic form, exists in the chipset.
That being the case, we can therefore make an argument that 'it may be like something (or not)' to be that chipset. Some brain physics may be necessary for consciousness, some may not. The question of which is essential physics and which is not is, from the perspective of a program of synthetic AGI work, a scientific question with a testable answer. That is, for the first time ever, a completely analytic AGI that *models* the physics can be compared to a hybrid AGI that actually uses the physics. If a difference in behaviour can be found that is critically involved in the difference in physical instantiation, then that difference can potentially be linked to an argument about the essentialness of the physics to AGI behaviour and, eventually, to arguments about consciousness. This lineage of empirical testing can therefore, in its maturity, become a scientific program with testable results that would not otherwise exist. Pure analytic AGI on its own cannot answer such questions. Pure synthetic AGI on its own is afflicted with a different version of the same problem: the one that has dogged the science of consciousness all along. Together, however, the analytic/synthetic contrast can provide us with answers and an argument that leads somewhere, which would otherwise be missing.

Now go back to (b), the 100% analytic, present-day neuromorphic hardware AGI brain. There is a huge industry involved in these chips. They are growing in sophistication and application week by week. If synthetic AGI approaches join them and become mainstream, and a new kind of neuromorphic chipset emerged that actually has some brain physics on it instead of what we currently build, then within that community we would expect to find the question "*What is it like to be one of these neuromorphic chipsets?*" in papers, in workshops and in conferences.
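The analytic-vs-hybrid comparison described above can be sketched as a minimal experimental protocol: run an identical task battery on two agents that differ only in substrate, then look for systematic behavioural differences. This is a hypothetical illustration only; the agent stubs, the task battery and the scoring scheme below are all stand-ins invented for this sketch, not any existing system or API.

```python
# Hedged sketch of the proposed analytic-vs-hybrid behavioural comparison.
# Both "agents" here are placeholder functions returning a score in [0, 1);
# in a real program one would model the physics, the other instantiate it.
import random
from statistics import mean

def run_task_battery(agent, tasks, trials=100):
    """Return the mean score the agent achieves on each task."""
    return [mean(agent(task) for _ in range(trials)) for task in tasks]

def analytic_agent(task):
    # Stand-in for an AGI that *models* the physics: fully deterministic
    # given the task (seeding makes every trial identical).
    random.seed(task)
    return random.random()

def hybrid_agent(task):
    # Stand-in for an AGI that *uses* the physics; trial-to-trial variation
    # is a placeholder for physically instantiated dynamics.
    return random.random()

tasks = range(10)
analytic_scores = run_task_battery(analytic_agent, tasks)
hybrid_scores = run_task_battery(hybrid_agent, tasks)

# Any systematic per-task difference becomes a candidate for attribution
# to the physical instantiation rather than to the model.
differences = [h - a for a, h in zip(analytic_scores, hybrid_scores)]
print(max(abs(d) for d in differences))
```

The design point is only that the body (robot X), tasks and scoring are held fixed, so whatever difference survives statistical testing can be argued to trace back to the brain's substrate.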
The scientific posing of such a question, and the expectation that it has a scientific answer involving the community in the science of consciousness, would be normal practice. It is a little sobering to imagine the ultimate impact that synthetic AGI approaches may have on the existing neuromorphic engineering community. The area would have to undergo a profound shift in thinking. That shift means that the science of consciousness, computer science and related engineering fields will eventually have synthetic approaches as a normal part of training curricula, something that seems very foreign at this point in time.

We can now see how synthetic AGI approaches, however they are implemented, combined with analytic approaches, speak directly to the role (or not) of consciousness in intelligent behaviour *and*, potentially, to the origins and nature of consciousness itself. Such claims are, within a mature form of the approach, in principle scientifically testable. The cultural impact, however, is a little more far-reaching. Observe the altered state of science itself that is implicit in synthetic AGI approaches. It involves, for the first time in history, the science of a first-person perspective. As practised in neuroscience at present, or as might be practised in the synthetic AGI approach proposed here, this is unique as science. No other science is expected to involve itself in a direct account of a first-person perspective. That account is unique in that it is actually an account (albeit indirectly) of ourselves (scientists) in our role as the scientific observer. Whatever science itself looks like after a science of consciousness is accessed, the introduction of synthetic approaches to AGI is a fundamental part of its progress. Currently we have particle physics smashing the matter of the universe into ever smaller components in the CERN collider.
It is not without a sense of irony that while we invest vast amounts of funding in this amazing science of the infinitesimal, at the other end of science we meet the most complex single object in the known universe: the brain. There, the equivalent of the CERN collider is the creation of a massive 'particle' called a brain through synthetic AGI experimentation. It speaks a story of how scientists do science at all, and yet it has been essentially missing the synthetic half of its natural science program ... until now.

5.1 Summary: Synthetic AGI and the route to a science of consciousness

We leave this section having identified a significant complexity entailed in the way synthetic AGI is deeply involved in the science of consciousness and in its impact on the nature of science itself. It is part of a major shift in science practice. However complicated this sounds, synthetic AGI practice need not concern itself with any of this in order to start and operate successfully. Synthetic AGI needs no theory of consciousness to proceed. Synthetic AGI practice need not concern itself with abstract considerations of science practice and culture. How can this be? It is because synthetic AGI is actually a reversion to a form of discovery by empirical investigation. Fire was used for millennia prior to a physics of combustion. We burned things to *acquire* a theory of combustion. Likewise we flew first, in ignorance, in order to acquire the physics of flight. In exactly the same way, the practice of synthetic AGI can be seen as a way, in principle, to build a conscious machine without a theory of consciousness. We first build the conscious machine in order to find a theory of consciousness. The synthetic AGI approach can now be accurately seen for what it is. It is not any kind of new discovery. In fact it is a *reversion* to a kind of science practice that has simply been sidelined while the birth of analytic AGI took centre stage for a while.
With the maturity of analytic AGI, and with a new, modern vocabulary, we can now see the events of the last half century from a new perspective. The introduction of synthetic AGI can now be seen to be a reversion to a form of empirical science that has always been there and has always presented this kind of possibility. Synthetic AGI presents a way to reconnect a relatively estranged community to centuries of empirical practice: a mode of discovery that was simply set aside at the birth of what we now see to be analytic AGI. This presents another way of viewing synthetic AGI as a new investment opportunity: it involves nothing new. It is actually backed by the history of success of empirical science itself, which is centuries older than the computer revolution. What is actually novel is the *pair* of approaches. Analytic *and* synthetic AGI form a duo that, together, constitutes a novel route to science's future. That will only happen, of course, with investment in the synthetic half of the duo, which has, to date, been lacking.

================================ end of section

cheers for now. Really sorry I can't play ball elsewhere here. Gotta go.

regards
Colin Hales

On Sun, May 24, 2015 at 11:36 AM, Colin Hales <[email protected]> wrote:
> Dear AGI enthusiasts,
> Here's a stab at an intro to a paper that I hope begins to capture the
> essence of what is proposed.
> I don't claim it as perfect or the final product.
> What I need to know is if it speaks in a way that might lead to the change
> we are looking for.
>
> *=========================================*
> *AGI Directions: towards Hybrid (H) and Synthetic (S) Forms.*
> By
> Dorian Aur (see previous posts)
> (blame for this bit is accepted by Colin Hales)
> others? TBA.
>
> 1 Introduction
>
> Here we seek to instigate a broadening of approaches to artificial general
> intelligence (AGI).
> Be it an artificial brain the size of that of a worm, ant, bee, dog or
> human, such an artificial intelligence is recognized here as a kind of
> AGI. The original science program, coined 'artificial intelligence' (AI)
> in 1956 {refs}, set sail at the birth of computing with a goal to create
> machines that potentially have human-level intelligence or better. What
> has actually happened since then is the application of computers to a
> vast array of technical challenges, within which great successes have
> occurred and are ongoing. In practice, however, AI successes fell, and
> continue to fall, within a now well-recognized category called 'narrow'
> or 'domain-bound' AI. Within the atmosphere of its successes, the
> original goal of human-level intelligence has, at least so far, evaded
> the energies of a huge investment. Such has been the prevalence of this
> pattern that it can now be called a kind of syndrome, and in recognition
> of that syndrome the attainment of the original goal of human-level AI
> has in recent years taken on two main forms.
>
> The first approach to human-level AI is one of simple assumption: that
> by attending to the AI 'parts', the route to the AGI 'whole' will become
> apparent or emerge naturally. This activity, now industrialised, forms
> the backbone of AI investment at the present time. Its successes emerge
> almost weekly now. The second approach is a concerted direct attack on
> human-level AI. This is a recent phenomenon, manifest in a comparatively
> small community of investigators, with commensurate levels of
> investment, who have explicitly coined the name of the goal: AGI. In
> doing so the target is explicitly recognised as being of a nature
> deserving of an integrated, holistic approach. This, too, is having its
> successes, but once again the syndrome of narrow-AI outcomes tends to be
> what the practice achieves.
> Throughout all this history one thing has been invariant: the use of the
> computer or, more generally, the use of models of intelligence as an
> instance of machine intelligence. This document signals the beginning of
> another approach, where the computer (model) approach is joined (to an
> extent to be determined) by its natural counterpart. This new approach,
> for whatever reason, is essentially untried and invisible to the AI
> community. It was always an option. All we do here is get it off the
> shelf and dust it off as an AGI option. This paper is a vehicle for the
> clear expression of an untried approach. As such, it is hoped that AI
> and AGI acquire a suite of ideas and new scientific assessment
> techniques that will improve AI generally as a science discipline, based
> on a new kind of empirical testing. Investment in the approach has been
> zero since day one of AI. We seek here to make a case that if investment
> in this new approach were non-zero, a cost-effective, dramatic shift may
> occur in our understanding of the potential kinds of machine
> intelligence. Specifically, we seek to introduce the concepts of
> synthetic and hybrid AGI.
>
> 2 Computation and AGI – a perspective on practice
>
> To understand what follows we need to carefully compare and contrast
> two fundamentally different forms of computation. Formally, their
> difference is best captured by the words analytic computation and
> synthetic computation. The first kind, analytic, is easily recognised as
> model-based computation. This is where, by whatever means chosen, an
> abstract model is explored by its designers. Its usefulness is inherent
> in what the computation tells us upon interpretation. Within the model
> are representations of the characteristics being studied. A voltage in a
> model may be used, for example, to represent the actual voltage of what
> is being modelled. That *representation* of something is not an
> *instance of* the original thing.
> Recognizable forms of analytic computation include those of the analog
> and digital computer (Turing machines). Its distinguishing feature is
> that however the computation is carried out, its meaning is ultimately
> inherent in the mental processes of a designer or in some explicit,
> separate document such as software or a circuit diagram of a model.
> However complex the model is, it is best thought of as a description of
> something. The description itself is the analytic form. Clearly the
> analytic form is responsible for dramatic change and technological
> advances in science over decades: the computer revolution itself.
>
> The second kind of computation, synthetic, is best understood as simply
> the regularity of nature itself. Synthetic computation occurs when
> nature itself is simply regarded as computation. Synthetic computation,
> too, may have a designer. That is, the distinction between analytic and
> synthetic computation is not held up as the distinction between
> 'human-made' and 'naturally occurring'. Synthetic computation is when
> the regularity of nature is itself accepted as, or configured to be, the
> computation. There may be documents needed to establish the initial
> conditions of the 'computation'. For example, an engineer builds and
> configures the initial conditions of natural material as an automobile.
> The result is a synthetic computation called 'the automobile' or
> 'transport'. No documents are needed to further interpret the meaning of
> the result of the computation. Nature itself is the outcome of synthetic
> computation. Another simple example of such computation may be seen in
> the concept of flight. A bird 'computes' those aspects of the physics of
> flight suited to the needs of a bird. Humans have used those same
> synthetic computations (manifest in air/flight-surface interactions) to
> create artificial flight. The result is a regularity in nature accepted
> as a form of computation.
> Physically, the result is flight. That being the case, what is
> 'analytic flight'? We all recognise this: the flight simulator.
>
> The program of future directions proposed here is one that recognises
> the two different kinds of computation in the very specialized science
> of the brain, where the analytic/synthetic distinction can be shown to
> be under-developed and potentially confused. The brain is unique in that
> it is a synthetic object with a specialised role: to become the natural
> regularity that forms the control system of natural organisms. It
> embodies the intellect of whatever creature it inhabits. The kinds of
> tasks such a control system performs can be, and have been, modelled to
> great effect in analytic approaches. The question is: *"What is the
> difference, in application to the brain, between the analytic and the
> synthetic approach?"* Asking that question, and expecting a scientific
> answer, is what this paper is seeking.
>
> For over half a century, approaches to creating an artificial brain
> have been entirely confined to analytic forms. These analytic approaches
> are explorations of models of the brain made by humans. That being the
> case, the hyper-critical issue is in understanding the conditions under
> which the analytic is indistinguishable from the synthetic. If there is
> a difference, then how does that difference manifest in the capability
> of an AGI? For the brain, for these many decades, the synthetic half of
> the route to AGI has simply been neglected for a variety of reasons. The
> actual reasons for the absence of synthetic approaches to AGI are
> something historians can evaluate. The practical restoration of the
> synthetic approach is the goal here. The restoration of the synthetic
> approach is necessary to scientifically test the difference between
> analytic and synthetic AGI.
> Whatever that difference is, the whole AGI enterprise has been living
> within a realm of that difference for reasons that are essentially
> unexplored. *Scientifically* evaluating the analytic/synthetic
> difference (or the lack of it) is the goal of the proposed shift in
> methodology.
>
> In summary: the prospect of the restoration of a synthetic approach to
> AGI is our topic. We look at a potential change in the direction of AGI
> science, and therefore the investment profile, where the analytic, the
> synthetic and their hybrid are formally recognised as separate, and
> where scientific testing is then applied to compare and contrast their
> scope and effectiveness in application to the science of the artificial
> brain as AGI. In the creation of such a brain the approach can be
>
> 1. Nil% synthetic computation (entirely analytic), or
>
> 2. 100% synthetic computation, or
>
> 3. H% synthetic: a hybrid form of both.
>
> That is, the inclusion of synthetic computation to some desired level
> becomes an experimental parameter. Natural brain tissue can be regarded
> as a naturally occurring object based on option (2), synthetic
> computation. In application to artificial brain tissue (AGI) so far,
> option (1) has been the only approach. This has achieved all of the
> progress in artificial intelligence to date. Here we suggest that the
> success of analytic approaches be joined by synthetic approaches to AGI.
> If indeed the time has arrived for the formal introduction of (2)
> synthetic AGI and (3) hybrid AGI as viable prospects, then we need to
> open a discourse. What would the new AGI science look like? What does it
> tell us about the scope, nature and expectations inherent in the purely
> analytic approach? What does it add to the nearly 60-year-old AGI
> program?
>
> (end of section)
> ============================
>
> This is offered up for discussion as the possible first part of the
> document Dorian started. I have a lot more to add.
> > regards > > Colin Hales > > > > ------------------------------------------- AGI Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424 Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657 Powered by Listbox: http://www.listbox.com
