I'm not doing too well; I've had a few days of brain outage and poor sleep patterns, but I'm a little better now.
I'm not going to rehash the concept of the homunculus in the philosophy of mind here. (There is a different but related idea of a homunculus in neuroscience; not quite what I'm talking about.) If you need to know what a homunculus is, use the googs, and filter out "Edward Elric" and "Alchemist". ;)

In the philosophy of mind, the homunculus has two uses. It can stand in for the part of the mind that we don't yet understand: as we explain properties of human psychology, we shrink the homunculus, making him smaller and weaker, until he finally goes away and we have a full understanding of how the brain works. Alternatively, if we can show that a proposal for how the brain works does NOT make the homunculus smaller and weaker than the whole mind, then that theory has no explanatory value.

To the best of my knowledge, deep learning networks are basically feed-forward systems. Feedback, where it is used, is thought of as correcting the response of the various layers to the input. PredNet is the only system I know of that does things the Right Way... =P I think a lot of people intuitively stay away from the idea of a primarily feedback system, even though it is supported by the neuroscience. I think the line of thought is: "OK, so I'm simulating the input, so what? I'll just go sample the input and be done... What do you want me to do? Show the simulation to the homunculus?" =P

So what did you have to do to simulate the input? .... So what did you have to do to simulate the input? .... Yeah... Let me go out on a limb here: you created a hierarchy of abstractions that specified what to simulate, and where and when to simulate it, i.e., conscious awareness. -> JACKPOT!!! Yeah, you can think of the mind as layers around a homunculus, but it's like an ... um ... er ... well, onion, <runs and hides> in that the skin is the meat.

In general, we can talk about any signal.
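To make the "primarily feedback" idea concrete, here is a minimal predictive-coding toy in Python. This is my own sketch under simplifying assumptions, not PredNet's actual architecture: a generative matrix W produces a top-down *simulation* of the input from a latent state z, and only the prediction *error* flows upward, driving both perception (inferring z) and learning (updating W). All names, sizes, and learning rates are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceive(x, W, n_steps=50, lr_z=0.1):
    """Infer a latent cause z for input x by minimizing prediction error."""
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        error = x - W @ z          # feedback: compare the simulation to the input
        z += lr_z * (W.T @ error)  # only the error ascends the hierarchy
    return z

def learn(inputs, n_latent=4, epochs=200, lr_w=0.05):
    """Fit W so the top-down simulation can reproduce the inputs."""
    W = rng.normal(scale=0.1, size=(inputs.shape[1], n_latent))
    for _ in range(epochs):
        for x in inputs:
            z = perceive(x, W)
            W += lr_w * np.outer(x - W @ z, z)  # learning is also error-driven
    return W

# Two toy "sources" the network must learn to simulate.
data = np.array([[1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]])
W = learn(data)
z = perceive(data[0], W)
print(np.abs(data[0] - W @ z).max())  # residual prediction error (should shrink toward zero)
```

The point of the toy: "perceiving" the input just *is* running the simulation until it matches, which is why simulating the input isn't a pointless detour — the hierarchy of abstractions needed to drive the simulation is the payoff.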
We know that the first few stages of the auditory system attempt to divide signals into discrete sources (to the extent they are able) and perform a frequency-domain analysis, which is further decomposed into a base frequency and a space-time domain. I.e., imagine a piece of classical music being played at different tempos and in different keys... The brain is not daunted in recognizing a voice or a piece of music when the pitch or tempo is changed, and it can accurately report exactly what was changed relative to some standard. This indicates, to some extent, how the brain processes things: it is able to encode abstractions for pitch, tempo, and score, compose them to simulate the piece of music while it is being heard, and then, upon successfully doing so, report what is being heard.

In order to solve the Obstacle Tower challenge, there needs to be a way to encode everything necessary to reproduce the input from the simulation, in a way that is learnable from feedback, in real time, so that the agent can successfully perceive the environment and, hopefully, have the other components necessary to solve the problem. So basically I need to invent a whole new type of computer rendering that can generate high-quality images, train itself on the fly, and run on obtainable hardware, and have it done by the end of March... And do it by myself, because there's no chance anyone on this list will help (or they'll take my ideas, go collect the prize money, and blame themselves...). Ugh, I think I'll go back to my video game addiction.

--
Please report bounces from this address to [email protected]
Powers are not rights.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T9974b13cda814d12-M088768ddd1e846000c319517
Delivery options: https://agi.topicbox.com/groups/agi/subscription
