Sean Markan (principally),

I just subscribed to the AGI list today and saw your message from March.

I don't know whether I can rejoin an old thread just by hijacking the subject
line, but here's an additional comment. I don't mean to discourage you: it's
almost inevitable that any new effort will be naive in places, and actually I
think _everyone_ ignores this old chestnut I'm going to give another polish.
It's actually really good that you are "naively" approaching it tabula rasa.
Fresh eyes and all that.

You write (with reason):

"Whatever ideas we come up with for capturing the richness of thought, they
need to be applied in the context of a behaving system to see if they
actually work. It is this interface between thought and productive behavior
that linguists don’t tend to cross"

OK, you went to MIT(?). Chomsky was at MIT, so that might explain it. But
actually there is an entire school of linguistics, Functional Linguistics,
which insists exactly that: language can only be understood in relation to
behaviour (function).

But that's not all. There's actually a whole fascinating story around the
origin of this separate school of linguistics, especially in the context of
your next statement:

'...we eventually need to think about the automated acquisition of
mentalese and “rules of thought” from experience'.

Well, you could say that the whole reason Functional Linguistics came into
being was exactly because Chomsky pointed out that 'automated acquisition
of mentalese and "rules of thought" from experience' led to contradictions.

Sort of, anyway. Functionalism was a thing before this, but after Chomsky
the functionalists had to abandon objective mental structure entirely, while
Chomsky took the other tack and insisted that mental structure existed but
could not be learned.

The whole field of AI, not to mention linguistics, has really forgotten
this. Syd Lamb told me he had written some more papers on it when I debated
it with him some years back, but I guess he's retired now.

Meanwhile Deep Learning is back in the game of automatically acquiring
mental structure.

To recap: grounding mental structure in function/behaviour was state of the
art circa 1950. So was developing your 'automated acquisition of mentalese
and "rules of thought" from experience'. Chomsky was Zellig Harris's student,
and had the job of extending the "rules of thought" to syntax. But Chomsky
was also a fresh thinker, and he pointed out that the "discovery procedures"
led to contradictory structure.

Meanwhile, here in 2017, Deep Learning is again trying to learn internal
representations for language, and it still can't quite seem to nail it. Could
it be missing those last few threads in the sweater because automated
learning leads to contradictory structure?

Anyway, your points are apposite.

-Rob


