My impression is that most machine learning theories take a search space
of hypotheses as given, so comparing *between* learning structures (e.g.,
between logic and neural networks) is outside their scope.

Algorithmic learning theory (I don't know much about it) may be useful
because it does not assume a learning structure a priori (beyond that of a
Turing machine), but its central quantity, Kolmogorov (algorithmic)
complexity, is incomputable.
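Incomputability does not rule out approximation from above: the length of any lossless compression of a string is a computable upper bound on its Kolmogorov complexity, up to an additive constant. A minimal sketch (using zlib as the stand-in compressor is my choice, not anything from the theory itself):

```python
import random
import zlib

def kc_upper_bound(data: bytes) -> int:
    # K(x) is incomputable, but len(compress(x)) is a computable upper
    # bound on it, up to an additive constant for the decompressor.
    return len(zlib.compress(data, 9))

random.seed(0)
structured = b"ab" * 500                                   # highly regular
noisy = bytes(random.randrange(256) for _ in range(1000))  # pseudo-random
```

On such inputs the regular string compresses to a small fraction of the pseudo-random one, which is the practical sense in which "simpler structure" can be detected even though the true complexity cannot be computed.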

Is there any research that can tell us what kinds of structures are better
for machine learning?  Or perhaps w.r.t. a certain type of data?  Are there
learning structures that will somehow "learn things faster"?

Note that, if the answer is negative, then the choice of learning structures
is arbitrary and we should choose the most developed / heavily researched
ones (such as first-order logic).

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email