Re: [agi] A probabilistic/algorithmic puzzle...
Let X_i, i=1,...,n, denote a set of discrete random variables. Is X_i the set of all integers between i and n? Is the initial value of i 1? Or is i any member of the set X? Or does i function only as a lower bound on the set X? Hi, me again. I forgot to ask: is i,...,n an integer index pointing to an array of related variables? Could X_i have this structure: [1,6,3,7]? In other words, is the internal order significant? Can this exist: X_i = [3,5,8,1,8]? Related question: does X_i represent constructions in Novamente's knowledge representation system? Or thoughts, or trains of thought? Or a hybrid thought/knowledge construction? Thanks, J Standley
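For what it's worth, the conventional reading of the notation is that i is just an integer index into a family of n random variables; it is not a member of, or a bound on, any X. A minimal sketch of that reading (the distributions here are purely illustrative, not anything from Novamente):

```python
import random

n = 5

def sample_X(i):
    """Draw a value of the i-th discrete random variable X_i.

    Under the standard reading, i indexes a family of n related
    variables.  For illustration only, each X_i here is uniform
    on the integers {1, ..., i}.
    """
    return random.randint(1, i)

# One sample from each variable X_1, ..., X_n:
samples = [sample_X(i) for i in range(1, n + 1)]
assert len(samples) == n
```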
Re: [agi] Re: AGI Complexity
There is no reason you couldn't take every single deterministic, P algorithm in the standard C++ libraries and implement it as hardware. Most programs would then be mostly written in assembly language, with constructions like binarysearch[sorted_array x, search_target y] replacing add a, mov y, etc. That approach went out with the introduction of the 4004. Yeah, I know, but with today's technology it becomes a very powerful tool. Over 95% of the processing in an Nvidia GeForce4 is pure hard-wired logic. When you write an application that uses the Direct3D API, the vast majority of calls to the API go directly to hardware. You tell the card to apply a certain texture, shader, or transform to a 3D object, and the chip grabs that object from its on-card RAM and runs it through the appropriate task-dedicated circuitry. This is why video cards have such insane bandwidth on their internal bus; throughput higher than 1 GB/sec is common on mid-range cards. Even an old GeForce 1 (worth about 20 bucks today) outperforms a current top-of-the-line Pentium or Athlon in the 3D rendering domain. Imagine a motherboard that acted as the physical layer for a TCP/IP-based mesh network. This motherboard could have a number of slots for major card-based subsystems like graphics and sound, and multiple ZIF-type sockets for several standardized chip sizes and pin configurations. And of course a CPU to play the role of conductor and traffic cop. There could be dozens of chip sockets on a given motherboard, and you could connect motherboards together to build a more powerful system. Then you simply add all the optional chips and cards you want (the system is a fully functional PC from the start). The thing is, once mass production of all this stuff starts, it's just as cheap as a conventional PC is today. There is no fundamental technological upgrade, just a different way of using current tech. By having entire C++, Java, etc.
libraries in hardware on a base system, you take a huge load off the CPU. Instant supercomputer in a tidy $1000 package. :) J Standley --- To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
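The idea in the email above is that a program issues one high-level library "instruction" rather than spelling the algorithm out in add/mov sequences. A minimal software sketch of what such a primitive would compute (the `hw_` name is hypothetical; a real version would be a dedicated circuit, simulated here by an ordinary function):

```python
def hw_binarysearch(sorted_array, search_target):
    """Stand-in for a hypothetical hardware binary-search unit.

    Returns the index of search_target in sorted_array, or -1 if
    the target is absent.  The caller treats this as one opaque
    operation, the way the email imagines.
    """
    lo, hi = 0, len(sorted_array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_array[mid] == search_target:
            return mid
        elif sorted_array[mid] < search_target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# The application issues a single "instruction":
idx = hw_binarysearch([1, 3, 5, 7, 9], 7)  # -> 3
```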
Re: [agi] Re: AGI Complexity
As I said (maybe you read what I had written as a joke), reconfigurable logic is your best choice. It's almost as good as custom hardware. Even though it's pricey, you only have to buy it once and simply upload new designs to it. No, I didn't take it as a joke. I know FPGAs and such are the best choice if you are developing processor-intensive custom applications such as an AGI. They are indeed expensive... the general thrust of my post, though, was about bringing supercomputer power to your average Joe. :) J Standley
Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)
Alan, I strongly suggest you increase your familiarity with neuroscience before making such claims in the future. I'm not sure what simplified model of the neuron you are using, but be assured that there are many layers of complexity of function within even a single neuron, let alone in networks. The coupled resistor/capacitor model is only given as a simplified version in textbooks to make the topic of neural networks digestible to the entry-level student. Dendrites are not simple summators; they have a variety of nonlinear processes, including recursive, catalytic chemical reactions and complex second-messenger systems. And that's just the tip of the iceberg: once you get into pharmacological subsystems, the complexity becomes a bit staggering. Agreed that the brain is enormously complex; however, I think the point Alan was making hinges on a slightly different interpretation of the word 'complexity'. His interpretation seems similar to the one Hofstadter elucidates in GEB, namely the idea of 'sealing off' levels. You can look at the mind through different perspectives and at varying scales because of its high complexity. This very trait, arising from the brain's mind-boggling complexity, allows one to model it at a system-scale level. At a high enough level, you can start treating various major components as black boxes and dealing only with their high-level functionality. Of course you lose a certain amount of accuracy in doing this, but it is nonetheless a valid approach. We view and deal with other people as unified personalities whose minds we cannot 'read'. Rather, we observe their actions and draw conclusions about internal states that cannot be directly observed in the absence of sophisticated brain-scanning technology. Despite this limitation, we are able to interact with others and predict their future behavior and mental states to a reasonable degree.
Say I'm designing an AGI architecture (which I am, btw, but it is irrelevant to this discussion :) and I want to preprocess audio data so that speech is already parsed by the time it enters the AI's cognitive modules. All I need to do is obtain a preexisting natural language parser program and then tailor the AI's cognitive module(s) to work with its output instead of raw audio data. I don't even need to look at the parser's code if I don't want to. (Although examining it may ease its use, it's not necessary.) I suppose I'm saying you can approach the mind (or any complex system that has at least vaguely recognizable functional subsystems) in a manner analogous to that of Object Oriented Programming. Jonathan Standley
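The black-box arrangement described above can be sketched in a few lines. All class and method names here are my own illustration, not anything from an actual architecture: the cognitive module is written against the parser's output format and never touches the parser's internals.

```python
class SpeechParser:
    """Stand-in for a preexisting natural-language parser.

    Real parsing is elided; this just turns a string into tokens,
    standing in for whatever structured output a real parser emits.
    """
    def parse(self, raw_input):
        return raw_input.lower().split()


class CognitiveModule:
    """Tailored to the parser's output, not to raw audio.

    The parser is treated purely as a black box, in the OOP spirit
    of the email above.
    """
    def __init__(self, parser):
        self.parser = parser

    def perceive(self, raw_input):
        tokens = self.parser.parse(raw_input)
        return {"heard": tokens}


mind = CognitiveModule(SpeechParser())
print(mind.perceive("Hello AGI"))  # {'heard': ['hello', 'agi']}
```

Swapping in a better parser later requires no change to `CognitiveModule`, which is the whole point of sealing off the levels.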
[agi] Re: AGI Complexity
Ed Helfin wrote: It's been some time since I looked at this, but I believe my conclusion was that it wasn't all that reliable, i.e. low % accuracy for correct POS identification, etc. I don't know if this gets you where you want to go, but it might be worth looking at. I've looked at a number of different speech and text parsers for my project, but haven't decided yet on any one solution. I think in a couple of years this technology will have advanced to the point of being 'plug and play', so to speak, where you can include it as a standard library within, say, C++. Thanks for the suggestion :) BTW, it seems a better, more forward-looking approach to your architecture might be to implement audio parsing (AP, or speech recognition, SR?), natural language parsing (NLP), and cognitive processing (CP) as a coherent whole, rather than the other way around with separate and distinct AP, NLP, and CP modules, as you suggest with your comments about an OO approach. I've thought about this, and the conclusion I have come to is that, depending on how you approach AI, each architecture has its pros and cons. This is why I feel a functional, modular approach to sensory processing is the easiest, though certainly not the only correct, way of doing it. If you show 10 people a simple object like a soda can or a pencil, and then ask them to draw what they saw without looking at the object, all ten results are identifiably the same object. This suggests to me that the visual system itself is a highly reliable, predictable system. Given the same input, most individuals' visual systems will (assuming no colorblindness or other mutation) pass the same output to the conscious levels of the mind. Differences in perception exist, to be sure, but the regularity of perception among people is quite remarkable. In addition to the tremendous benefits of architecting something closer to real AGI, i.e.
an obvious increase in the 'Goertzelian Real-AGI' level ;-), you would have the benefits of computational optimization: specifically, a reduced number of ops to cognition, reduced object I/O, reduced latency, reduced processing redundancy, etc., assuming, of course, your implementation of the cognitive processing (CP) doesn't incur a tremendous overhead from the synthesis with the other two modules. That is a quite perceptive summary of the benefits of the approach you suggest :) I take a quite non-mainstream approach to AI, and more generally to computer science as a whole. For one, I am not at all interested in the CPU-centric paradigm that permeates the computer industry. Dedicated-purpose hardware provides task-specific performance orders of magnitude higher than that of a general-purpose CPU. And task-specific hardware need not be inordinately expensive; look at graphics and sound boards as an example. There is no reason you couldn't take every single deterministic, P algorithm in the standard C++ libraries and implement it as hardware. Most programs would then be mostly written in assembly language, with constructions like binarysearch[sorted_array x, search_target y] replacing add a, mov y, etc. Not only do you get the efficiency boost of assembly language, but also the speed boost of dedicated hardware! I'm not suggesting eliminating CPUs, just saying they should act as the conductor, not the conductor plus the orchestra members plus the instruments plus the stage... Also, software can be written in hardware. Photoshop costs $500; an entry-level computer from Dell that will run PS quite well costs $400. This is kinda nutty. Put the fucker on a chip, with some flash RAM to allow patching, halve the price (who the hell pirates ICs?), and get at least an order-of-magnitude increase in program speed *compared to current top-of-the-line Intel/AMD processors running the software version of Photoshop*.
And this speed would be more or less constant whether you put the Photoshop chip in a $400 PC or a $4000 PC. (Actually, the faster PCs could help out with math-heavy stuff such as certain filters.) OK, that was all rather off-topic :) Anyway, back to the topic at hand: I personally am not so much interested in either imitating the brain's architecture or designing a mind that is highly efficient and 'smart' from the get-go. I'm trying to solve the problem of general cognition, and hence I don't care if an AI based on my methods starts out with the smarts of a mouse :) As long as the general conceptual basis is sound, and scalable to human-level cognition or higher, I would be a very, very happy person. Ed, thanks for your insightful and thought-provoking comments :) They have my brain going off in all sorts of directions as a result of writing this response, and that is definitely a good thing. J Standley
Re: [agi] Emergent ethics via training - eg. game playing
Indeed, I regretted those choices as soon as I hit the send button... Hi Jonathan, I think Sim City and many of the Sim games would be good, but Civilization 3, Alpha Centauri, and Black & White are highly competitive and allow huge scope for being combative. Compared to earlier versions, Civilization 3 has added more options for non-war-based domination, but unless players are committed to a peaceful approach the program is largely a war game. I don't know Black & White personally, but I picked up a review at: http://www.game-revolution.com/games/pc/strategy/black_and_white.htm "The premise is simple: you're a god and it's your task to convert as many nonbelievers to your cause as possible, thereby gaining power. You can be a good god or a bad god, an evil master of destruction or a benevolent flower daddy - or any of the millions of shades in between. By managing your villages and fighting other gods, you vie for ultimate control." I'm not sure that Black & White would be good training for an AGI. Do we really want it to limber up as a dominating god, maybe benevolent and maybe not? Cheers, Philip
Re: [agi] Emergent ethics via training - eg. game playing
Sim City, Black & White, The Sims, Civ3, and the related Alpha Centauri. All good choices, I think.
Re: [agi] Language and AGI (was Re: Early Apps)
- Original Message - From: Shane Legg [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Friday, December 27, 2002 7:48 PM Subject: Re: [agi] Language and AGI (was Re: Early Apps) I guess people continue to do AI with languages like English because that is what is of practical use and where more money is likely to be. Shane A Newspeak-style language might be useful for communicating with fairly simple AIs. An emerging mind would probably have no more use for 10 synonyms for 'have' than a baby learning to talk does. But natural language may be one of the more 'difficult' approaches to AI. The various experiments that have been conducted regarding the Sapir-Whorf hypothesis lead me to question the notion of language as the root of intelligence. It seems likely to me that a human stores and manipulates primarily conceptual constructions, not linguistic ones. The language one speaks certainly influences thought processes, but few people think in sentences. J Standley
Re: [agi] Re: Games for AIs
Gary Miller wrote: People who have pursued the experience, such as myself, and have been given small tastes of success will tell you unequivocally that if it is not endorphins being released, then there is something even more powerful at work within the brain. I think it has been fairly well established that endorphins are involved in these flow states; my contention is that conscious sensation is a result of the change in neural activity patterns caused by neurotransmitters and other factors, not the neurotransmitters themselves. IMO this is important because it generalizes consciousness as a property of complex dynamic systems such as the brain. The interesting thing is that while in this state you perceive the intellect as being greatly heightened, with thoughts flowing at an extremely accelerated pace, and the sense of one's self as separate from everything else is eliminated or greatly diminished. Mystics who devote their lives to the self-inducement of this state are not necessarily doing so for just philosophic or religious reasons. The sense of clarity and pleasure experienced during the state may be very addictive, and may be the basis for the revelatory experiences that inspired all modern-day religions. In many cases the experiences are so strong that a single one has been known to cause people to completely change the direction of their lives. I've experienced this state before; it is very powerful... While it is difficult to separate the scientific literature from the large body of new-age and religious hyperbole, there may be an overdrive gear that can be triggered in the mind by the practice of meditative biofeedback. Should an FAI have a MetaGoal to maximize its own perceived pleasure?
Since the FAI will need a mechanism to prioritize its internal goal states, the external trigger for such a state could be used to reprioritize the FAI's goal states, at least during early development, to induce it to follow positive modes of thought and stay out of areas such as obsessive-compulsive behavior, antisocial behavior, paranoia, megalomania, and other states associated with mental illness. Current research into mental illness does indeed suggest that such disorders are the result of faulty internal mechanisms which, in a normal person, keep the mind on an even keel. The Discovery Channel had a program about OCD on not that long ago that profiled a team of researchers who are testing that hypothesis. J Standley http://users.rcn.com/standley/AI/AI.htm
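The reprioritization mechanism suggested above can be sketched very simply. Everything here (function name, weights, the boost/decay scheme) is my own illustration of the idea, not a proposal from the thread: an external reward signal strengthens one goal's priority while letting the others fade slightly.

```python
def reprioritize(goals, rewarded_goal, boost=1.5, decay=0.9):
    """Adjust an agent's goal priorities after an external reward.

    goals: dict mapping goal name -> priority weight.
    The rewarded goal is reinforced; all others decay a little,
    steering the agent toward the rewarded modes of thought.
    """
    updated = {}
    for name, weight in goals.items():
        if name == rewarded_goal:
            updated[name] = weight * boost   # reinforce
        else:
            updated[name] = weight * decay   # fade
    return updated


goals = {"cooperate": 1.0, "obsess": 1.0}
goals = reprioritize(goals, "cooperate")
# "cooperate" now outweighs "obsess"
```

Repeated rewards during "early development" would compound, which is the sense in which the trigger shapes which goal states dominate later.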
Re: Re[3]: [agi] TLoZ: Link's Awakening.
I do agree that it seems a bit forced at times; the thing that struck me about it is that it seems to be an efficient method of filtering confusing or seemingly contradictory ideas into a set of data that is relatively easy to parse and/or analyze. The 'debate' between Kant and a modern philosopher is, IMO, a good example of this. I'm not sure, but I think this process could be implemented in an algorithmic manner. A true AI would probably look at this (CIN) the way we do, as an idea to be evaluated and discussed, but CIN might be an easy way to add some abstract reasoning capability to something like a chatterbot. J Standley http://users.rcn.com/standley/AI/AI.htm A clarification: I'm not quite getting their generic space and blended space concepts; it all seems a bit forced and overabstracted. In their monk-mountain and regatta-race examples I get the point about mentally overlapping the same space on two different occasions, where drawing neat diagrams of blended spaces makes some sense; I just don't get the generalization of this to other classes of problem, where it seems forced. -- Cliff
Re: [agi] Re: Games for AIs
The idea of putting a baby AI in a simulated world where it might learn cognitive skills is appealing. But I suspect that it will take a huge number of iterations for the baby AI to learn the needed lessons in that situation. This is definitely a serious consideration. One way to overcome it might be the inclusion of innate behaviors that steer the new mind towards activities/actions that engender cognitive and emotional development. Babies instinctively look at faces, reach for objects, and track moving things with their eyes and eventually head and neck. An AI's innate behaviors could have a built-in reward structure where, for example, successfully tracking a ball rolled across the simulated floor would reinforce the neural network patterns that produced the desired behavior. On a related note, what is the nature of pleasure (reward)? Is it simply the sensation that occurs because of the neural activity/reorganization that happens when needs are fulfilled or tasks are completed successfully? If so, does pleasure correlate to increases in neural efficiency? Neurons and the networks they make up require a certain amount of reinforcement to maintain normal functioning (this is a fact, though I wish I had a reference handy to back up that assertion :). I'm guessing that pleasure is caused when reinforcement levels rise above their recent average. This would account for the facts that a) practising or doing something you like is pleasurable, b) pleasure is relative to circumstance, and c) all forms of pleasure seem to be built upon the same core sensation. IMO this is important because it takes chemical effects out of the emotion equation; i.e., chemicals cause pleasure by activating existing reinforcement mechanisms. If I'm right, emotions are (at their most basic level) nothing but patterns in the activity of a neural-network-type system, which we feel because we 'are' the system's activity, not the system itself...
On a practical note, if the above hypothesis is correct, it would be relatively easy to identify the signature patterns of different emotions (via PET or fMRI) and emotionally program an AI's reward structure to ensure that it behaves itself. J Standley http://users.rcn.com/standley/AI/AI.htm updated today! see: http://users.rcn.com/standley/AI/Neural%20Processing.htm http://users.rcn.com/standley/AI/ISL.htm
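The hypothesis in this email, that pleasure is reinforcement rising above its recent average, can be written down directly. This is my own formalization for illustration (the class name and smoothing scheme are assumptions, not the author's): pleasure is the positive excess of current reinforcement over an exponential moving average, which makes it relative to circumstance, as claimed.

```python
class PleasureSignal:
    """Pleasure as reinforcement above its recent running average."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smoothing factor for the running average
        self.baseline = 0.0     # recent average reinforcement level

    def update(self, reinforcement):
        # Pleasure is only the positive excess over the baseline.
        pleasure = max(0.0, reinforcement - self.baseline)
        # Exponential moving average tracks the recent reinforcement level.
        self.baseline += self.alpha * (reinforcement - self.baseline)
        return pleasure


p = PleasureSignal()
first = p.update(1.0)    # novel reward: full pleasure
second = p.update(1.0)   # same reward again: less pleasure, baseline rose
```

Note how the identical stimulus yields a smaller signal the second time, matching points (a) and (b) in the email: habituation falls out of the model for free.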
Re: [agi] Re: Games for AIs
Alan, [motivation problem]. No, human euphoria is much more than simple neural reinforcement. It is a result of special endorphins such as dopamine that are released when the midbrain is happy about something. You're right. I really should have thought out that post a little more before writing it. When I talked about removing chemistry from the equation, what I was sort of getting at was that the presence of endorphins (or cocaine, or any other pleasure-generating chemical) results in changes in the behavior and activity of the affected neurons, and what we 'feel' is the shift in activity patterns. Without getting too off-topic or philosophical, I was trying to universalize the phenomenon of feeling emotions by saying that it's not the chemical activity itself we feel. If one were to stimulate a given cluster of neurons in a manner that caused them to act exactly as if they were being influenced by endorphins, I think the subject would 'feel' the exact same sensation as if the neurons were 'naturally' stimulated. You see, the cortex has no opinion about anything whatsoever. It is merely a computational matrix. It receives its programming from exactly two sources: external stimuli and the midbrain/brain stem (though special areas of the cortex are dedicated to doing some of the high-level work required by emotional circuits). In the brain stem there are special neural networks that generate special kinds of decisions that I will call opinions. ;) When this circuit likes something, it gets all happy and sends excitatory signals... When it is unhappy, it sends inhibitory signals. A particular disorder that I have (and many other people have) is depression, where excessive inhibitory signals are generated. I have moderate depression with an associated sleep disorder; it's one of the things that originally got me interested in neurology and cognitive science. I'm still reading, and hopefully I'll have some ideas about emotional qualia and the like.
I'm looking at your website as I write this; you have some fascinating ideas on there... J Standley http://users.rcn.com/standley/AI/AI.htm
Re: [agi] Automated Turing Test
How about selling the software to spammers while also selling more difficult tests to sites like Yahoo? Anyone want to get rich quick? :) Two things come to mind: 1) it might be useful at some stage for doing tests of perceptual systems, in some cases possibly even for directing research (i.e., using the given tests as challenges); 2) make money selling captcha-passing software to spammers. Although if you do 2), we'll have to kill you. -- Cliff
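The challenge-generation half of this scheme is easy to sketch. Everything here is a toy (a real CAPTCHA renders and distorts an image; the `~` noise below merely stands in for that visual distortion), but it shows the shape of the escalating-difficulty product the emails joke about:

```python
import random
import string

def make_challenge(length=6):
    """Generate a toy text challenge.

    Returns (noisy, secret): the secret string interleaved with
    noise characters as a stand-in for visual distortion, and the
    clean answer the server keeps for checking.
    """
    secret = "".join(random.choices(string.ascii_lowercase, k=length))
    noisy = "~".join(secret)
    return noisy, secret

def check_answer(secret, answer):
    """Server-side check of a submitted answer."""
    return answer == secret

noisy, secret = make_challenge()
# A human (or a bot that knows the trick) strips the noise:
assert check_answer(secret, noisy.replace("~", ""))
```

Raising `length` or using a richer distortion is the "more difficult tests" knob; software that inverts the distortion is option 2), at your own risk.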