--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
What did your simulation actually accomplish? What were the results? What do you think you could achieve on a modern computer?

Oh, I hope there's no misunderstanding: I did not build networks to do any kind of
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Uh... I forgot to mention that explaining those data about child language learning was the point of the work. It's a well-known effect, and this is one of the reasons why the connectionist models got everyone excited: psychological facts
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
One problem with some connectionist models is trying to assign a 1-1 mapping between words and neurons. The brain might have 10^8 neurons devoted to language, enough to represent many copies of the different senses of
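
A minimal sketch of the distinction Matt is drawing, using NumPy; the layer sizes, sense labels, and sparsity below are illustrative assumptions, not figures from the thread. A localist scheme spends one unit per word, while a sparse distributed scheme spreads each word sense over many units, so the number of representable senses grows combinatorially rather than linearly with the unit count.

```python
import numpy as np

# Illustrative sizes only (assumptions, not claims about the brain).
SENSES = ["bank_river", "bank_money", "run_move", "run_manage"]
N_UNITS = 1000          # units available in the distributed layer
ACTIVE_PER_SENSE = 50   # units that fire for any one sense

rng = np.random.default_rng(0)

# Localist coding: one dedicated unit per sense, so capacity = number of units.
localist = {s: np.eye(len(SENSES))[i] for i, s in enumerate(SENSES)}

# Sparse distributed coding: each sense is a random set of 50 active units out
# of 1000, so the number of distinct patterns is C(1000, 50), vastly more than
# the unit count.
distributed = {
    s: set(rng.choice(N_UNITS, size=ACTIVE_PER_SENSE, replace=False))
    for s in SENSES
}

# The two senses of "bank" share a word form but get largely different unit
# sets, so many senses per word fit without a one-neuron-per-word mapping.
print("units shared by the two 'bank' senses:",
      len(distributed["bank_river"] & distributed["bank_money"]))
```
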
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
I doubt you could model sentence structure usefully with a neural network capable of only a 200-word vocabulary. By the time children learn to use complete sentences they already know thousands of words after exposure
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Personally, I would experiment with neural language models that I can't currently implement because I lack the computing power.

If such neural systems can actually spit out sensible analyses of natural language, it would obviously be a huge discovery and could probably be sold to a good number of people as a commercial product. So why aren't more people investing in this, if you've already got working software that just
Matt Mahoney wrote:
I doubt you could model sentence structure usefully with a neural network capable of only a 200-word vocabulary. By the time children learn to use complete sentences they already know thousands of words after exposure to hundreds of megabytes of language. The problem seems
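
A back-of-the-envelope sketch of both halves of Matt's point, with my own assumed inputs (words heard per day, years of exposure, bytes per word, and hidden-layer size are all illustrative): the total text a child is exposed to lands in the hundreds-of-megabytes range, and the parameter count of even a simple neural language model grows linearly with vocabulary size, which is why a 200-word vocabulary is a toy setting and why larger vocabularies strain the available computing power.

```python
# Rough exposure estimate (all inputs are assumptions for illustration).
WORDS_PER_DAY = 15_000   # assumed speech heard by a young child
YEARS = 5
BYTES_PER_WORD = 6       # ~5 letters plus a space

exposure_bytes = WORDS_PER_DAY * 365 * YEARS * BYTES_PER_WORD
print(f"approx. exposure: {exposure_bytes / 1e6:.0f} MB")  # ~164 MB with these inputs

# Parameter count of a minimal neural language model that embeds the previous
# word and predicts the next one: input embedding (V x H) plus output layer
# (H x V), biases ignored.
def param_count(vocab_size: int, hidden: int = 100) -> int:
    return 2 * vocab_size * hidden

for v in (200, 5_000, 50_000):
    print(f"vocab {v:>6}: ~{param_count(v):,} parameters")
```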