+5 for using grok in a sentence.

Carry on.

On Mon, May 4, 2015 at 7:33 PM, Greg Staskowski <[email protected]> wrote:

> Steve,
>
> D00d, really? I've installed HMI and PLC controllers on heat treating
> furnaces. The thing is, we run furnace surveys anyway to check the set
> points on those furnaces and have alarms set up at very specific points
> using precisely calibrated thermocouples (see SAE AMS2750C and D for
> example).
>
> You are telling me everything going wonky with my body's temp set points
> can get fixed by strapping a Peltier cooler to my torso or doing jumping
> jacks for 15 minutes, twelve times a day?
>
> Like, WOW, mahn. First off, I want to see your data for 24 hours with five
> calibrated thermocouples strapped to your femoral artery, throat, the top
> of your head, your foot and probably your anus. Then I want to see how much
> delta there is. Then I want to see that data for three months and then
> maybe... maybe I'm going to study the ancient Tibetan art of Tumo out on
> some mountainside or buy into your whole deal?
>
> Would also point out, I tend to get a little chilly when my blood sugar is
> getting a little depleted. Color me not convinced. But hey, I get your
> arguments on hardware so far. Or I think I do.
>
> -GJS
>
> On Sun, May 3, 2015 at 11:22 AM, Steve Richfield <
> [email protected]> wrote:
>
>> Jim,
>>
>> Your posting encapsulates Babbage's quandary. Babbage could see that
>> computers could do (almost) anything, but was unable to explain that in
>> terms that could be widely accepted, especially when all he had to show was
>> a design for a clunky mechanical computer that he was never able to build.
>>
>> Adaptive control is now in the same quandary - where it seems "obvious"
>> to some but impossible to others that intelligence and consciousness could
>> arise from an "unprogrammed" complex adaptive control system.
>>
>> I had long thought that we must be made of some sort of "universal
>> components" that self-organize to become us, but NNs failed to deliver on
>> that promise. Colin has some new thoughts here that at absolute minimum
>> provide new directions for NN research.
>>
>> Religions have long viewed consciousness as something apart from our
>> physical reality, and even now many AGIers (you?) view consciousness as
>> something apart from the rest of our wetware, most of which is concerned
>> with mundane things like controlling our digestion, breathing, blood
>> pressure, temperature, etc.
>>
>> Then, when it comes to controlling lipids and glucose, these must be
>> controlled by adjusting what we decide to EAT. Oh, we want some MEAT for
>> the nutrients (like vitamin B12) that are available in meat, so we must
>> KILL something. And everything around us avoids being killed, so we must
>> out-think our meals to be able to eat them.
>>
>> But, what if the available meat is too big and/or dangerous to kill, like
>> buffalo? Then, we must work TOGETHER to eat, which involves weapons,
>> planning, communication, etc.
>>
>> In short, I/we see intelligence and consciousness as simply the next
>> higher level of adaptive process control. If we can do any part of it
>> correctly, then there is a good chance that with more components, it will
>> *spontaneously* do everything!!!
>>
>> Whatever it is that we have that insects do NOT have seems to be a
>> quantitative issue, so there doesn't seem to be a "threshold" of
>> consciousness, but rather it permeates the entire structure, regardless of
>> size.
>>
>> However, I/we can NOT explain this operation in enough detail to convince
>> skeptics, and even if we could produce such an explanation, I suspect it
>> would probably be beyond any human's ability to understand.
>>
>> *Flashback:* When I figured out how body temperature was controlled and
>> how to correct it when it was low, I called the doctor who had first
>> pioneered permanent temperature correction to discuss my theories and
>> possibly produce a joint publication. From my writings (which have since
>> brought many people up to speed) he was UNABLE to understand my theories.
>> It wasn't because of any shortfall in my explanations, but rather because
>> he was unable to grok the subtleties of adaptive control systems. From my
>> own observations, anyone who hasn't learned about PID control systems by
>> their mid-20s probably can NEVER understand more complex things, like
>> adaptive control systems, probably because that place in their brains has
>> already been committed to other tasks.
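For anyone who hasn't met PID control: here is a minimal sketch in Python. The plant model, gains, and function names below are invented purely for illustration (a toy first-order "thermal plant" being driven toward a set point); nothing here comes from the thread itself. The proportional term reacts to the current error, the integral term accumulates past error, and the derivative term anticipates its trend.

```python
# Minimal discrete PID controller -- an illustrative sketch only.
# All gains and the toy plant below are made-up values.

def make_pid(kp, ki, kd, dt):
    """Return a stateful PID step function: error -> control output."""
    state = {"integral": 0.0, "prev_error": None}

    def step(error):
        state["integral"] += error * dt
        if state["prev_error"] is None:
            derivative = 0.0  # no history yet on the first step
        else:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Toy first-order "plant": temperature relaxes toward ambient,
# plus whatever heat the controller injects.
def simulate(setpoint=37.0, ambient=20.0, steps=500, dt=0.1):
    pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    temp = ambient
    for _ in range(steps):
        u = pid(setpoint - temp)
        temp += dt * (-(temp - ambient) * 0.2 + u * 0.1)
    return temp

final = simulate()
```

With integral action, the steady-state error for a constant set point goes to zero, which is why the simulated temperature settles near the 37.0 target. Adaptive control goes a step beyond this: the gains (or the loop structure itself) are adjusted on-line rather than fixed in advance.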
>>
>> So, in the absence of any other apparent path forward in an AGI
>> direction, Colin and I are looking into adaptive control from our
>> respective POVs.
>>
>> Steve
>> ====================
>>
>>
>> On Sun, May 3, 2015 at 7:41 AM, Jim Bromer <[email protected]> wrote:
>>
>>> I thought the ideas were interesting and Colin's description was more
>>> readable than usual but the arguments supporting the method weren't
>>> very powerful.  I am curious about how Colin is implementing the
>>> method. Could you give me a little more about that? Are you designing
>>> some kind of electrical circuit?
>>>
>>> What I was trying to say in this thread is that you have to supply a
>>> little more insight about why you think that the methods that you are
>>> designing and will be implementing would rise above being 'narrow ai'.
>>> For instance, Colin's honest report on how far he has actually gotten
>>> so far sounds like it is on par with simple narrow AI. As I reread
>>> your messages I keep finding a little more in it. But back to my
>>> point. Since I can rough out the algorithms that I would use as if
>>> they were abstractions, or as if they could exist within an abstract
>>> world, it would seem that I should be able to conduct simple tests to
>>> show that they could diversify in some way that is: 1. at least better
>>> than narrow ai, and 2. useful in some way. So perhaps I should add
>>> that. I would say, for example, that artificial neural networks would
>>> pass this kind of test. However, the criticism then is, ironically
>>> given our use of the narrow ai term, that they lack efficient means to
>>> focus and they cannot be efficiently used as componential objects.
>>>
>>> So, can you guys define some abstract or simple tests that could show
>>> that your ideas would become able to adapt to the more complicated
>>> demands of actual tests? The value of the simple test is that once you
>>> can get your algorithms to pass the first test you might come up with
>>> ways to design a slightly more aggressive test. So if I could test my
>>> ideas to, say, learn to recognize some simple classifications, then I
>>> might try to see if I can get it to learn to utilize systems of
>>> classifications effectively and efficiently (without redesigning the
>>> program only for that specific kind of test).
>>> So then I would have to design some other kind of test to make sure
>>> that it is somewhat general.
>>> Jim Bromer
>>>
>>> On Sun, May 3, 2015 at 3:25 AM, Colin Hales <[email protected]> wrote:
>>> >
>>> >
>>> >> On Sat, May 2, 2015 at 2:50 AM, Steve Richfield <
>>> [email protected]> wrote:
>>> >>>
>>> >>> Jim,
>>> >>>
>>> >>> Again, I think I see the POV to solve this. All animals, from single
>>> cells to us, are fundamentally adaptive process control systems. We use our
>>> intelligence to live better and more reliably, procreate, etc., much as
>>> single-celled animals, only with MUCH richer functionality. Everything fits
>>> this hierarchy of function leading to intelligence.
>>> >>>
>>> >>> Then, people like those on this forum start by ignoring this and
>>> trying to create intelligence from whole cloth. This may be possible, but
>>> there is NO existence proof for this, no data to guide the effort, etc. In
>>> short, there is NO reason to expect a whole-cloth approach to work anytime
>>> during the next century (or two).
>>> >>>
>>> >>> However, some of the mathematics of adaptive process control is
>>> known, and I suspect the rest wouldn't be all that tough - if only SOMEONE
>>> were working on it.
>>> >
>>> >
>>> > Erm.... guys. This would be me.
>>> >
>>> > I am working on it. For well over a decade now. Cognition and
>>> intelligence are implemented as an adaptive control system replicating,
>>> inorganically, the natural original called the human (mammal) nervous
>>> system. I simply replicate it inorganically. Tough job but I am getting
>>> there. There's no programming. No software. Just radically adaptively
>>> nested looping processes. In control strategy terms it is a non-stationary
>>> system (architecture itself is adaptive). Control loops come into existence
>>> and bifurcate and vanish adaptively. The architecture commences at the
>>> level of single ion channels and nests at multiple levels that then appear
>>> in tissue as neurons doing what they do, but need not appear like this in
>>> the inorganic version. You don't actually need cells at all. These then
>>> nest at increasing spatiotemporal scales forming coalitions, layers,
>>> columns and finally whole tissue. All inorganically. All the same at all
>>> scales from an adaptive control perspective. Power-law scalable. Physically
>>> and logically.
>>> >
>>> > In my case, for the conscious version the hardware includes the
>>> field-superposing, active additional feedback in the wave mechanics of the
>>> EM field system produced by brain cells at specific points. The fields form
>>> an additional, secondary loop modulation that operates orthogonally,
>>> outside/through the space occupied by the chip substrate.
>>> >
>>> > What I am starting with is the 'zombie' or symbolically ungrounded
>>> version. It doesn't produce the active field system (missing a whole
>>> control system feedback mechanism) and uses supervised learning
>>> (externalised by a conscious human trainer) to compensate for the loss of
>>> the natural role consciousness has as an endogenous supervisor. It will, in
>>> the zombie form, underperform in precisely the way all computer AGI
>>> underperforms. This is what is missing when you use computers to do it all.
>>> You end up with a recipe (software) for pulling Pinocchio's strings.
>>> Whereas my system bypasses the puppetry altogether. It makes the little
>>> boy, not the puppet.
>>> >
>>> > However you view it, there's nothing else there in a brain except
>>> nested loops that have power-law responses in two orthogonal axes: sensory
>>> and cognitive.  Adding the field system to the sensory axis (e.g. visual
>>> experience) or part of the cognitive axis (e.g. emotional experience)
>>> provides the active role for consciousness, implemented through the causal
>>> impact of the Lorentz force within the hardware. I suppose it'd be an
>>> 'adaptive control loop' philosophy for cognition and 'EM field theory of
>>> consciousness' combined. No computing needed whatever. Just like the brain.
>>> Most of the last ten years has been spent figuring out the EM field bits!
>>> That I am now omitting, knowing what I lose when I do that (i.e.
>>> consciousness).
>>> >
>>> > Teeny weeny Zombie version 0.0 this year I hope. No EM field
>>> generation. I call it the 'circular causality controller'. I aim to add the
>>> EM fields later. That part requires $millions. It's chip-foundry stuff.
>>> >
>>> > So chalk me in under this 'adaptive control loop' category for AGI
>>> implementation please. I know this forum is a 'using computers to do AGI'
>>> forum so I'll just continue to zip it. I haven't mentioned it much over the
>>> years because it seems that most of you aren't interested in my approach.
>>> For reference and for the record.... I am the 'AGI as adaptive control' guy.
>>> >
>>> > cheers
>>> > colin
>>> >
>>> >>>
>>> >>>
>>> >>> I suspect that when the answers are known, it will be a bit like
>>> spread spectrum communications, where there is a payoff for complexity, but
>>> where ultimately there is a substitute for designed-in complexity, e.g.
>>> like the pseudo-random operation of spread spectrum systems. Genetics seems
>>> to prefer designed-in complexity (like our brains) but there is NO need for
>>> computers to have such limitations.
>>> >>>
>>> >>> Whatever path you take, you must "see a path" to have ANY chance of
>>> succeeding. You must have a POV that helps you to "cut the crap" in pursuit
>>> of your goal. Others here are working on whole-cloth approaches, yet
>>> bristle when challenged for lacking a guiding POV. I see some hope in
>>> adaptive control math. Perhaps you see something else, but it MUST have an
>>> associated guiding POV for you to have any hope of succeeding - more than a
>>> simple list of what it does NOT have.
>>> >>>
>>> >>> Steve
>>>
>>>
>>> -------------------------------------------
>>> AGI
>>> Archives: https://www.listbox.com/member/archive/303/=now
>>> RSS Feed:
>>> https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac
>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>> Powered by Listbox: http://www.listbox.com
>>>
>>
>>
>>
>> --
>> Full employment can be had with the stroke of a pen. Simply institute a
>> six hour workday. That will easily create enough new jobs to bring back
>> full employment.
>>
>>
>
>



-- 
Regards,
Mark Seveland


