Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread BillK
On Thu, Oct 23, 2008 at 12:55 AM, Matt Mahoney wrote: I suppose you are right. Instead of encoding mathematical rules as a grammar, with enough training data you can just code all possible instances that are likely to be encountered. For example, instead of a grammar rule to encode the
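The contrast drawn here, a general rule versus an enumeration of all likely instances, can be sketched in a few lines (the names and the single-digit-addition example are illustrative, not from the thread):

```python
# Two ways to "know" single-digit addition, per the thread's contrast.

# 1. A rule: encode the general procedure once (the grammar-like option).
def add_by_rule(a: int, b: int) -> int:
    return a + b  # one rule covers every instance

# 2. A table: enumerate every instance likely to be encountered,
#    which is effectively what a model trained on enough data stores.
add_by_table = {(a, b): a + b for a in range(10) for b in range(10)}

assert add_by_rule(3, 4) == add_by_table[(3, 4)] == 7
```

The table gives the same answers on its 100 covered cases but, unlike the rule, fails on anything outside them, which is the trade-off the post is pointing at.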

Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser
--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: You make the implicit assumption that a natural language understanding system will pass the Turing test. Can you prove this? If you accept that a language model is a probability

AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Dr. Matthias Heger
You make the implicit assumption that a natural language understanding system will pass the Turing test. Can you prove this? Furthermore, it is just an assumption that the ability to have and to apply the rules is really necessary to pass the Turing test. For these two reasons, you still

Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Matt Mahoney
--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote: You make the implicit assumption that a natural language understanding system will pass the Turing test. Can you prove this? If you accept that a language model is a probability distribution over text, then I have already
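The premise "a language model is a probability distribution over text" can be made concrete with a minimal character-bigram model (the toy corpus and add-alpha smoothing are illustrative assumptions, not from the original posts):

```python
from collections import Counter

# A minimal character-bigram language model: for each context character,
# it defines a probability distribution over the next character.
corpus = "the cat sat on the mat"

# Count contexts over corpus[:-1] so every counted context has a successor.
contexts = Counter(corpus[:-1])
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = sorted(set(corpus))

def p_next(prev: str, nxt: str, alpha: float = 1.0) -> float:
    """P(next char | previous char) with add-alpha smoothing."""
    return (bigrams[(prev, nxt)] + alpha) / (contexts[prev] + alpha * len(vocab))

# A proper distribution: probabilities over the vocabulary sum to 1.
total = sum(p_next("t", c) for c in vocab)
```

Here `total` comes out to 1.0, and longer texts get a probability by chaining these conditionals, which is the sense in which the model assigns a probability to any string.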

AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Andi wrote: This really seems more like arguing that there is no such thing as AI-complete at all. That is certainly a possibility. It could be that there are only different competences. This would also seem to mean that there isn't really anything that is truly general about intelligence,