On Saturday 03 November 2007 16:53, Edward W. Porter wrote:
> In my below recent list of ways to improve the power of human
> intelligent augmentation I forgot to think about possible ways to
> actually increase the bandwidth of the top level decision making of
> the brain, which I had listed as a real problem but had made no
To increase the actual bandwidth we would have to change the number of
neurons. To "squeeze performance" out of the neurons as they are,
direct neural interfaces might be appropriate, so that the "higher
level" decision making can operate at a more abstract level, and the
technology can then translate it down to the levels that the rest of
the brain works with (for example).

> One way to improve the bandwidth of the top level of human decision
> making would be to replace or augment the brain's machinery for
> performing it, which is probably in the prefrontal cortex, basal
> ganglia, and general cortico-thalamic loop. Some include the
> cerebellum in such a mechanism for its role in fine-tuning behaviors
> to the current context (including very time-sensitive feedback) and
> by controlling the timing of learned sequential behaviors, including
> mental behaviors.

Re: augmenting/replacing the PFC. We could advance this field by
attempting to extend Dr. White's work on brain transplantation in
monkeys, but with mice, in an attempt to keep brain regions of the
mice on life support, perhaps on silicon for recording and
stimulation: buffer as many signals in and out as possible, drop a
replacement transponder chip into the mouse brain, and then start
experimenting with emulating/simulating the cortex on a chip.

> -A--have the AGI learn the goal system of the human brain and have
> delegated authority to make decisions on its own, much as the basal
> ganglia often does relative to our conscious decision processes.
> (i.e., if you drop something you are often first aware of that fact
> by the subconscious response your body is making to catch it.) Such
> a system could respond in real time to complex inputs thousands or
> millions of times faster than a human. Although it might not always
> do what we want, neither does our basal ganglia.
> It might be just as faithful to our goals and emotions as the basal
> ganglia. Such a system could help us keep pace with many
> superintelligences, when, for example, trying to prevent them from
> infecting our trusted machines.

My hope is that AGI will one day be able to run most of my redundant
mental cycles for me ;) It would be nice to eliminate redundancy.
Computers are very, very good at doing things over and over again.

> -B--Create a super intelligent basal ganglia (either by replacement
> or supplementation) that receives the inputs from the portions of
> the cortex the basal ganglia currently does, but also receives
> inputs from the AGI

> Can, and how can, our human descendants compete with
> superintelligences, other than by deserting human wetware and
> declaring machines to be our descendants?

Are you asking how to compete with change without changing ourselves?

> There are real issues about the extent to which any intelligence
> that has a human brain at its top level of control can compete with
> machines that conceivably could have a top level decision process
> with hundreds or perhaps millions of times the bandwidth.

Yes - I think that we can do an information-theoretic analysis of the
optimal performance of the human brain: an estimated 100 billion
neurons, a few quadrillion possible connections, so much protein, LTP
activation networks, etc. This could then be used to bound our optimal
intelligence without augmentation. But it would of course require us
to settle on a good working definition of 'intelligence'.

> There are also questions of how much bandwidth of machine
> intelligence can be pumped into, or shared, with a human
> consciousness and/or subconscious, i.e., how much of the
> superintelligence we humans could be conscious of and/or effectively
> use in our subconscious.

Is the limit "yourself"?
If the superintelligent machine that shares itself with you is an
order of magnitude more than you, does this mean that you can only
'comprehend' a part of the superintelligence's data output at once,
even if you have full data access? Imagine a superintelligence
embedded in your brain via nanotech, living in between your current
neurons and synapses.

> (In fact, it would not be that hard to have a system where the
> superintelligence only communicates to our brain its consciousness,
> or portions of its consciousness that its learning indicates will
> have importance or interest to us, so that it would be acting
> somewhat like an extended subconsciousness that would occasionally
> pop ideas up into our subconsciousness or consciousness. This would
> greatly increase our mental powers, particularly if we had the
> capability to send information down to control it, give it
> sub-goals, or queries, etc.)

You have given me an idea: we could always just use wetware biology as
the "missing link" in our quest for artificial intelligence. All of
the other tasks of the "superintelligence" would just be automated
functionalities augmented to interface with our neurons.

> The question is, how much better than a good video monitor and
> speaker system on the input side could such links be. Presumably
> they could communicate semantic knowledge much faster, but how much,
> I haven't a clue. The improvement in bandwidth could be much greater
> in the reverse direction, from the brain out, since speech, gestures,
> mouse, and keyboard are about our only current output links.

"Semantic telepathic link" <-- good one.

> ---Kurzweil's little nanobots navigating into cortical columns and
> wirelessly receiving inputs allowing them to provide, say, a gigabit
> a second of input to the brain

Do you have a reference for his estimates on this?
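As a toy illustration of the estimates traded above (100 billion neurons, the brain-out bandwidth bottleneck, and the 1 Gbit/s nanobot figure), here is a back-of-envelope sketch. Every figure in it is an assumption for illustration - the ~10^4 synapses per neuron, ~1 bit per synapse, ~12 bits per word, and 10 chars/s typing rate are rough order-of-magnitude guesses, not measurements:

```python
# Back-of-envelope numbers for the bandwidth discussion above.
# All constants are rough assumptions, chosen only for illustration.
import math

NEURONS = 1e11              # ~100 billion neurons (estimate from the thread)
SYNAPSES_PER_NEURON = 1e4   # ~10^4 synapses each (assumed)
synapses = NEURONS * SYNAPSES_PER_NEURON  # -> ~10^15, "a few quadrillion"

# If each synapse stored on the order of one bit, a crude upper bound
# on static storage capacity:
print(f"crude synaptic storage bound: {synapses:.0e} bits")

# Current human *output* channels are tiny by comparison:
typing_bps = 10 * math.log2(27)   # ~10 chars/s over a 27-symbol alphabet
speech_bps = (150 / 60) * 12      # ~150 words/min at ~12 bits/word (assumed)
print(f"typing: ~{typing_bps:.0f} bits/s, speech: ~{speech_bps:.0f} bits/s")

# A Kurzweil-style 1 Gbit/s nanobot input channel versus speech output:
ratio = 1e9 / speech_bps
print(f"1 Gbit/s input is ~{ratio:.0e}x our speech output rate")
```

The point of the arithmetic is just the asymmetry: even with generous assumptions, brain-out bandwidth today is tens of bits per second, so the "reverse direction" is where a direct link would gain the most, exactly as the quoted paragraph suggests.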
> ---nanowires through the brain's circulatory system to provide high
> bandwidth I/O (somebody is actually specing out such a system)

Refs? I haven't heard of this one.

> ---A nano/bio engineered lining wrapped around the top level of
> surface of the layer one of the cortex that could read output from
> and supply input to that, the important level of neural interconnect.

I've heard of MEAs, but not wrapping wire. Refs?

> minds? Could we keep any such network itself from conspiring against
> us?

From a theory-and-constraints point of view in cybernetics, this is an
interesting question. It largely depends on how AI/AGI develops, their
communication tactics, etc. Right?

- Bryan

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60965326-b4ecd9
