Time would be just one of the computational expenses required to partake in a cup of virtual brew. As an example, consider isolating caffeine from the cornucopia of other chemicals in a cup of coffee. If caffeine were prescribed, it would be a virtual caffeine-like substance, or program, or formula, whatever it might be.

I envision two ways that virtual drug administration could work. One would be like the pharmaceutical interfaces spoken of, a rails-style delivery mechanism for the formula; the other would be a sort of memetic feedback loop, distorted information infused into the AGI through its normal routes of sensory input to achieve a desired modification. I suppose the second method would be similar to therapy, but without the slow-motion, snail-mail pace of the therapeutic methodology. A human therapist is rather limited, since a therapist's behavior is usually restricted by the same limitations they use to identify a transgression of the bounds of normality diagnosed in a sufferer.
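To make the two routes concrete, here is a rough Python sketch. Everything in it is hypothetical -- VirtualPharmaceutical, AGIHost, deliver_direct, deliver_memetic and all their parameters are invented purely to illustrate the distinction between a dedicated delivery interface and a feedback loop riding on ordinary sensory input, not any real system:

# Hypothetical sketch of the two delivery routes described above.
# Nothing here corresponds to a real system; all names are invented.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VirtualPharmaceutical:
    """A prescribed 'formula' -- e.g., a virtual caffeine analogue."""
    name: str
    # A parameter update applied directly to the AGI's internal state.
    modulation: Callable[[dict], dict]


class AGIHost:
    def __init__(self) -> None:
        self.state = {"arousal": 0.2}   # stand-in for internal parameters
        self.percepts: List[str] = []   # normal sensory input queue

    # Route 1: a dedicated pharmaceutical interface ("rails" delivery).
    # The formula modifies internal state directly, bypassing perception.
    def deliver_direct(self, drug: VirtualPharmaceutical) -> None:
        self.state = drug.modulation(self.state)

    # Route 2: a memetic feedback loop. Distorted information is fed
    # through the ordinary sensory channel; any state change happens
    # only via the host's own perception and learning.
    def deliver_memetic(self, distorted_inputs: List[str]) -> None:
        self.percepts.extend(distorted_inputs)


# Usage: a virtual caffeine analogue administered via each route.
virtual_caffeine = VirtualPharmaceutical(
    name="virtual caffeine",
    modulation=lambda s: {**s, "arousal": min(1.0, s["arousal"] + 0.5)},
)

agi = AGIHost()
agi.deliver_direct(virtual_caffeine)                 # route 1: immediate
agi.deliver_memetic(["you feel alert", "the morning is bright"])  # route 2: indirect
print(agi.state, agi.percepts)

Note that route 1 changes state in a single call, while route 2 leaves the change to the host's own learning dynamics -- the slow snail-mail path mentioned above.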
John

From: Jim Bromer via AGI [mailto:[email protected]]
Sent: Friday, July 18, 2014 7:34 PM
To: AGI
Subject: Re: [agi] Will AGIs suffer from mental disorders?

Well, I think we have made some real progress here today, John. Make an appointment with the SIRI 9000 Series for coffee next week and I will make sure that you get at least 132 microseconds of multiple-core Share-Time.

Jim Bromer

On Fri, Jul 18, 2014 at 10:00 AM, John Rose via AGI <[email protected]> wrote:

AGI project owners might resist integrating a virtual pharmaceutical interface into their systems. But if their software is considered AGI/proto-AGI, the owners may be liable for any untreated deviant behavior that inflicts damages. If they are at least subscribed to, and have interfaces for, the virtual pharmaceutical system of Company A, that at least provides evidence of such treatment.

Feeling good, BTW, after a morning cup of Joe :)

John

From: Jim Bromer via AGI [mailto:[email protected]]

And how are we feeling today?

Jim Bromer

On Thu, Jul 17, 2014 at 2:38 PM, John Rose via AGI <[email protected]> wrote:

Or, a proto-AGI function could be mental disorder augmentation for treatment of the individual, covered by government health insurance. The proto-AGI performs treatment on the suffering individual. Might be a way to get some funding there.

John

> -----Original Message-----
> From: Mike Archbold via AGI [mailto:[email protected]]
>
> Obama-AGIcare?
>
> On 7/17/14, John Rose via AGI <[email protected]> wrote:
>
> > It might work. We could come up with a DSM equivalent so that essentially any behavior an AGI exhibits is technically an illness or disorder. Then there could be an industry built up around virtual pharmaceuticals that AGIs are legally forced to consume. That could be the way to keep them friendly.
> >
> > Someone has to design the virtual pharmaceuticals, their interfaces, and such. And all AGIs must contain these interfaces or be outlawed. The virtual pharmaceuticals must be patentable to encourage R&D.
> >
> > John
> >
> > From: Piaget Modeler via AGI [mailto:[email protected]]
> > Sent: Wednesday, July 16, 2014 11:32 PM
> > To: AGI
> > Subject: RE: [agi] Will AGIs suffer from mental disorders?
> >
> > @Anatasias - Why must we refrain? This is the point.
> >
> > There is a range of typical human behavior, and behavior outside that range we label as disordered, or ill. Will there not be the same standard for AGIs? We should not fail to answer these questions, especially if your mental frame does not conceive of AGI in psychological terms.
> >
> > The next question would of course be: if an AGI did suffer from mental illness, would it need a therapist or a programmer? I think a programmer would be useless for anything except wiping the knowledge base and starting from scratch.
> >
> > ~PM
> > ---------------
> >
> >> From: [email protected]
> >> Date: Wed, 16 Jul 2014 17:26:58 +0200
> >> Subject: Re: [agi] Will AGIs suffer from mental disorders?
> >> To: [email protected]
> >>
> >> Gentlemen, that's all too anthropomorphising! If your car windshield gets foggy, is that a case of cataract?
> >> Sure, any failure to control the contradictions of a sentient entity would result in behaviors resembling the entire spectrum of human pathology, primarily because human pathology usually makes sense in some ways while failing in others, i.e. it is almost by definition a contradiction. But let's refrain from any Freudian dives into the entities' subconscious, pretty please! Only if the explicit engineering objective were to emulate/converge with humans would we be justified in speaking in common psychological terms, even though colloquially we are doing it already; so many bugs in computer games over the decades have resulted in characters that players often call "crazy".
> >>
> >> AT
> >>
> >> On Tue, Jul 15, 2014 at 7:06 PM, Mike Archbold via AGI <[email protected]> wrote:
> >>
> >> > I think if the AGI reaches some kind of a solution/conclusion/conviction, but the solution is not in harmony with the grounds that led to it, such conflict could be called a mental disorder of sorts. Or, simply put, if the automation is out of touch with reality -- if the ideas and the actual reality are not in a realistic association.
> >> >
> >> > On 7/15/14, Steve Richfield via AGI <[email protected]> wrote:
> >> >
> >> >> PM,
> >> >>
> >> >> On Mon, Jul 14, 2014 at 9:37 PM, Piaget Modeler via AGI <[email protected]> wrote:
> >> >>
> >> >>> Will Artificial General Intelligences (AGIs) suffer from mental disorders or mental illnesses?
> >> >>> <https://www.quora.com/Will-Artificial-General-Intelligences-AGIs-suffer-from-mental-disorders-or-mental-illnesses>
> >> >>
> >> >> One man's mental disorder is another man's passion. As we proceed from dust to dust, it is the "disorders" in all of us that motivate us to do things. There have even been discussions and a patent regarding artificial disorders to restrain AGIs from doing certain things, which looks like something out of *A Clockwork Orange*.
> >> >>
> >> >> History has shown that some individual men have abilities exceeding the entire military might of the U.S. military. Take Saddam Hussein (or Tito) for example, who was able to peacefully hold an internally divided country (Iraq) together, with skills that many have called mental illness.
> >> >>
> >> >> When does mental illness become a malfunction? And, when not a malfunction, just what IS mental illness? Does a sexual predator have an "illness", or is Darwin simply testing a strategy for gene promotion?
> >> >>
> >> >> The point here is fundamental to AGI development - if the pile of parts forming a prospective AGI is going to do anything but just lie there, then OF COURSE it MUST have SOME sort of "mental disorder". If its motivation includes not destroying us, like it becomes some sort of "human hugger" (a la human "tree huggers"), then it will probably require some OTHER and ADDITIONAL sort of "mental disorder".
> >> >> The present study of prospective AGI technologies is in actuality the study of how things might work ONCE A DISORDER IS IN PLACE.
> >> >>
> >> >> We appear to be afflicted with MANY "disorders", without which we would NOT be the humans that we seek to make machines to imitate. I see little hope for making USEFUL AGIs without first understanding our own disorders.
> >> >>
> >> >> But, how is such an understanding possible without an observational position that lacks those same disorders? THIS appears to be the unanswered question that is now stopping AGI development.
> >> >>
> >> >> Steve
