On 10/22/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Also to Novamente, if I understand correctly. Terms are linked by a probability and confidence. This seems to me to be an optimization of a neural network or connectionist model, which is restricted to one number per link, representing ...
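The contrast Matt is drawing can be made concrete. Below is a minimal sketch, illustrative only: the class names, the evidence-pooling formula, and the horizon constant are assumptions in the general spirit of NARS/Novamente truth values, not either system's actual code or exact revision rule.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # estimated probability, in [0, 1]
    confidence: float  # how much evidence backs that estimate, in [0, 1)

HORIZON = 1.0  # evidential horizon k in c = n / (n + k); value is an assumption

def revise(a: TruthValue, b: TruthValue) -> TruthValue:
    """Merge two independent estimates of the same relation by pooling
    their implied evidence counts (an uncertain-logic pattern; the exact
    NARS/Novamente formulas may differ in detail)."""
    n_a = HORIZON * a.confidence / (1.0 - a.confidence)
    n_b = HORIZON * b.confidence / (1.0 - b.confidence)
    n = n_a + n_b
    strength = (a.strength * n_a + b.strength * n_b) / n
    confidence = n / (n + HORIZON)   # more pooled evidence, more confidence
    return TruthValue(strength, confidence)

# A classic connectionist link, by contrast, collapses everything into
# one number per link:
weight = 0.8

merged = revise(TruthValue(0.9, 0.5), TruthValue(0.6, 0.5))
print(round(merged.strength, 2), round(merged.confidence, 2))  # 0.75 0.67
```

The point of the two numbers is visible in the output: the merged strength is a weighted compromise, while the confidence rises above either input's, something a single scalar weight cannot express.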
Hi Matt,

Regarding logic-based knowledge representation and language/perceptual/action learning -- I understand the nature of your confusion, because the point you are confused on is exactly the biggest point of confusion for new members of the Novamente AI team.
A very careful distinction needs to ...
Matt Mahoney wrote:
My concern is that structured knowledge is inconsistent with the development of language in children. As I mentioned earlier, natural language has a structure that allows direct training in neural networks using fast, online algorithms such as perceptron learning, rather ...
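The "fast, online" update Matt refers to fits in a few lines. This is a standard perceptron sketch, not anything from Matt's own system; the toy dataset (learning logical AND) is invented for illustration.

```python
def perceptron_train(samples, epochs=10, lr=1.0):
    """samples: list of (feature_vector, label) with label in {-1, +1}.
    Each misclassified example triggers one cheap additive update."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified: update online
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learns AND of two binary features (linearly separable):
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = perceptron_train(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
print([predict(x) for x, _ in data])  # [-1, -1, -1, 1]
```

The appeal for language modeling is exactly what the loop shows: each example is seen once per pass and the update cost is linear in the number of features, so training scales to large streams of text.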
On 23 Oct 2006 at 10:06, Ben Goertzel wrote:
A very careful distinction needs to be drawn between:
1) the distinction between
1a) using probabilistic and formal-logical operators for representing
knowledge
1b) using neural-net type operators (or other purely quantitative, non-
Hi,

For instance, this means that the cat concept may well not be expressed by a single cat term, but perhaps by a complex learned (probabilistic) logical predicate.

I don't think it's really useful to discuss representing word meanings without a sufficiently powerful notion of context (which is ...
On 10/23/06, Matt Mahoney [EMAIL PROTECTED] wrote: [...]
One aspect of NARS and many other structured or semi-structured knowledge representations that concerns me is the direct representation of concepts such as is-a, equivalence, logic (if-then, and, or, not), quantifiers (all, some), time ...
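For readers unfamiliar with what "direct representation of is-a" means in practice, here is a toy sketch: explicit inheritance links plus a transitive-closure query. The facts are invented, and real systems (NARS included) attach truth values to these links rather than treating them as crisp sets.

```python
# Explicit is-a links, stored directly rather than learned as patterns:
is_a = {
    ("frog", "amphibian"),
    ("amphibian", "animal"),
    ("cat", "mammal"),
    ("mammal", "animal"),
}

def inherits(child, ancestor, links):
    """Does 'child is-a ancestor' follow transitively from the link set?"""
    frontier = {child}
    seen = set()
    while frontier:
        node = frontier.pop()
        if node == ancestor:
            return True
        seen.add(node)
        frontier |= {b for a, b in links if a == node and b not in seen}
    return False

print(inherits("frog", "animal", is_a))  # True  (frog -> amphibian -> animal)
print(inherits("frog", "mammal", is_a))  # False
```

Matt's concern, as I read it, is precisely that relations like these are hand-specified primitives here, whereas a child presumably acquires them from data.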
Ben Goertzel wrote:
The limited expressive scope of classic ANNs was actually essential
for getting relatively naïve and simplistic learning algorithms (e.g.
backprop, Hebbian learning) to produce useful solutions to an
interesting (if still fairly narrow) class of problems.
Well, recurrent
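Of the two "naïve and simplistic" rules Ben names, Hebbian learning is the simpler; for concreteness, here is the textbook form of the update. The learning rate and activity values are illustrative.

```python
def hebbian_step(w, pre, post, lr=0.1):
    """Strengthen a link in proportion to correlated pre/post activity:
    'neurons that fire together wire together'."""
    return w + lr * pre * post

w = 0.0
for _ in range(5):                     # repeated co-activation of both units
    w = hebbian_step(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.5
```

Note this bare form only ever increases weights; practical variants add decay or normalization, which is part of why Ben calls the classic rules simplistic.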
YKY,Of course there is no a priori difference betw a set of nodes and links and a set of logical relationships...The question with your DB of facts about love and so forth is whether it captures the subtler uncertain patterns regarding love that we learn via experience My strong suspicion is
I don't exactly have the same reaction, but I have some things to add
to the following exchange.
On 10/23/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Children also learn language as a progression toward increasingly complex
patterns.
- phonemes beginning at 2-4 weeks
In child development, understanding seems to considerably precede the ability to articulate that understanding. Also, development seems to generally move from highly abstract representations (stick men, smiley suns) to more concrete adult-like ones.
On 23/10/06, justin corwin [EMAIL PROTECTED] wrote:
On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:
Another interesting development is the rise of the use of invariant feature
detection algorithms together with geometric hashing for some kinds of
object recognition. The most notable successes to date have been using
David Lowe's SIFT method, ...
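To make "geometric hashing" concrete: the idea is to store model points in coordinates relative to a basis pair of points, so the stored entries are invariant to translation, rotation, and scale, and recognition becomes voting over hash-table hits. The toy 2D version below is a sketch of that scheme only; it is not Lowe's SIFT pipeline, and the point sets are invented.

```python
from collections import defaultdict
from itertools import permutations

def to_basis(p, b0, b1):
    """Express point p in the frame where b0 is the origin and b1 is (1, 0).
    These (u, v) coordinates are invariant under similarity transforms."""
    vx = (b1[0] - b0[0], b1[1] - b0[1])
    d = (p[0] - b0[0], p[1] - b0[1])
    norm2 = vx[0] ** 2 + vx[1] ** 2
    u = (d[0] * vx[0] + d[1] * vx[1]) / norm2   # projection onto basis vector
    v = (d[1] * vx[0] - d[0] * vx[1]) / norm2   # projection onto its normal
    return (round(u, 3), round(v, 3))

def build_table(model):
    """Hash every non-basis model point under every ordered basis pair."""
    table = defaultdict(list)
    for b0, b1 in permutations(model, 2):
        for p in model:
            if p not in (b0, b1):
                table[to_basis(p, b0, b1)].append((b0, b1))
    return table

def recognize(scene, table):
    """Return the best vote count for any model basis under any scene basis."""
    best = 0
    for b0, b1 in permutations(scene, 2):
        votes = defaultdict(int)
        for p in scene:
            if p not in (b0, b1):
                for basis in table.get(to_basis(p, b0, b1), []):
                    votes[basis] += 1
        if votes:
            best = max(best, max(votes.values()))
    return best

model = [(0, 0), (2, 0), (1, 1), (0, 2)]
scene = [(5, 3), (9, 3), (7, 5), (5, 7)]   # same shape, scaled x2, shifted
table = build_table(model)
print(recognize(scene, table))  # 2: both non-basis points match under the right basis
```

In a real system the hashed entries would be invariant feature descriptors (e.g. SIFT keypoints) rather than raw points, and the quantization would be coarser to tolerate noise.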
On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:
It's a shame that Evolution Robotics weren't able to develop that system
further. A logical progression would be to extend the geometric hashing to
3D and eventually 4D, although that would require a stereo camera or some
other way of measuring ...
You can get depth information from single camera motion (e.g. Andrew Davison's MonoSLAM), but this requires an initial size calibration and continuous tracking. If the tracking is lost at any time you need to recalibrate. This makes single camera systems less practical. With a stereo camera the ...
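The stereo measurement at issue reduces to one formula for a rectified pair: depth Z = f·B/d, with focal length f in pixels, baseline B in metres, and disparity d in pixels. The camera parameters below are illustrative, not from any particular rig.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair; disparity in pixels."""
    return focal_px * baseline_m / disparity_px

# e.g. 700-pixel focal length, 10 cm baseline, 35-pixel disparity:
print(depth_from_disparity(700.0, 0.10, 35.0))  # 2.0 (metres)
```

The formula also shows why the baseline fixes absolute scale: halve B and every recovered depth halves, which is exactly the calibration a single moving camera lacks.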
On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:
My inside sources tell me that there's little or no software development going on at Evolution Robotics, and that longstanding issues and bugs remain unfixed. They did license their stuff to WowWee, and also Whitebox Robotics, so it's likely we'll ...
I am interested in identifying barriers to language modeling and how to
overcome them.
I have no doubt that probabilistic models such as NARS and Novamente can adequately represent human knowledge. Also, I have no doubt they can learn, e.g., relations such as "all frogs are green" from examples.
So my question is: what is needed to extend language models to the level of
compound sentences? More training data? Different training data? A new
theory of language acquisition? More hardware? How much?
What is needed is:
A better training approach, involving presentation of compound ...