On 08/11/2007, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
> My impression is that most machine learning theories assume a search space
> of hypotheses as a given, so it is out of their scope to compare *between*
> learning structures (eg, between logic and neural networks).
>
> Algorithmic learning theory - I don't know much about it - may be useful
> because it does not assume a priori a learning structure (except that of a
> Turing machine), but then the algorithmic complexity is incomputable.
>
> Is there any research that can tell us what kind of structures are better
> for machine learning?
Not if all problems are equiprobable:
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

However, that is unlikely to be the case in the real world. The theorem does teach an important lesson: put as much information as you have about the problem domain into the algorithm and representation, if you want to be at all efficient.

This form of learning is only a very small part of what humans do when we learn things. For example, when we learn to play chess, we are told (or read) the rules of chess and the winning conditions. This allows us to create tentative learning strategies/algorithms that are much better than random at playing the game, and that also give us good information about the game. This is how we generally deal with combinatorial explosions.

Consider a probabilistic learning system based on statements about the real world (TM). Without this ability to alter how it learns and what it tries, it would be looking at the probability of whether a bird tweeting is correlated with its opponent winning, and also trying to figure out whether emptying an ink well over the board is a valid move.

I think Marcus Hutter has a bit somewhere in his writings about how slow AIXI would be at learning chess, due to getting only a small amount of information (1 bit?) per game about the problem domain. My memory might be faulty and I don't have time to dig at the moment.

> Or perhaps w.r.t a certain type of data? Are there
> learning structures that will somehow "learn things faster"?

Thinking in terms of fixed learning structures is IMO a mistake. Interestingly, AIXI doesn't have fixed learning structures per se, even though it might appear to. Because it stores the entire history of the agent and feeds it to each program under evaluation, each of those programs may itself be a learning program, able to create learning strategies from that data.
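The no-free-lunch point is easy to check by brute force on a toy case (the four-point domain and function names below are my own illustration, not anything from the thread): averaged over *all* Boolean objective functions on the domain, every fixed search order pays the same expected number of evaluations to find a maximum.

```python
# Toy NFL check: average over ALL functions f: DOMAIN -> {0, 1},
# and any fixed search order has the same mean cost to hit a maximum.
from itertools import product

DOMAIN = range(4)  # four candidate points

def evals_to_find_max(order, f):
    """Evaluations a fixed search order spends before first
    hitting a maximal point of f (f given as a tuple of values)."""
    best = max(f)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i
    return len(order)

def average_cost(order):
    # Enumerate every function from DOMAIN into {0, 1}.
    costs = [evals_to_find_max(order, f)
             for f in product([0, 1], repeat=len(DOMAIN))]
    return sum(costs) / len(costs)

print(average_cost([0, 1, 2, 3]))  # plain left-to-right search
print(average_cost([3, 1, 0, 2]))  # an arbitrary permutation
# The two averages come out identical, as NFL predicts.
```

The symmetry driving the result is that the set of all functions is closed under permuting the domain, so no ordering can be privileged; any non-uniform prior over functions breaks that symmetry, which is exactly why encoding domain knowledge pays off.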
You would have to wait a long time for those kinds of programs to become the most probable, though, if a good prior was not given to the system.

Will Pearson

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=62882969-3d3172
