The problem with AIT is that you need a PhD to understand it.

On Mon, Jun 26, 2023 at 11:10 AM James Bowery <[email protected]> wrote:

> https://youtu.be/wPonuHqbNds?t=1199
>
> Chomsky's apparently erroneous critique of Transformer-based LLMs is
> actually correct in the larger sense. His apparent error?
>
> Ask ChatGPT the following:
>
> "What is the grammar diagram for the sentence:
> The friends of my brothers are in England."
>
> Contrary to what Chomsky says, it will produce the correct structure and,
> indeed, if asked "Who is in England, my brother or their friends?" it will
> answer correctly.
>
> The larger sense in which Chomsky is correct is given in the paper "Neural
> Networks and the Chomsky Hierarchy".  See, in particular, Figure 1, which
> places Transformers on the bottom rung of the Chomsky hierarchy of
> grammars.  The reason for this classification is similar to the reason the
> figure places RNNs just one rung above Transformers even though,
> topologically speaking, RNNs are capable of emulating a universal Turing
> machine (which sits at or next to the top of the hierarchy, depending on
> how strict one wants to be): the pragmatic limits of gradient-descent
> training, combined with the difficulty of representing a UTM's writable
> store in RNN form.
>
> Transformers can, within the context length they provide, learn grammars
> with some recursion depth (much shallower than their context length) --
> but aside from the limited recursion per sentence, there is also the fact
> that the cost of self-attention grows as the square of the context length,
> which makes total-document comprehension subject to limits that natural
> language understanding is not.
>
> This distinction becomes crucial when the field of AI ethics refuses to
> address the IS vs. OUGHT distinction head-on and instead comes up with all
> manner of unprincipled "metrics" used to "quantify" properties of LLMs
> such as "bias", "safety", "toxicity", "hallucination"... the list goes on
> and on. By conflating IS with OUGHT they commit the first and most
> egregious transgression against ethics, and they even do so in the name of
> "ethics". AIs that cannot comprehend the cognitive _structure_ of the
> _entire_ corpus on which they are trained cannot critically examine the
> utterances contained therein for self-consistency. That means they are
> incapable of _constructing_ truth, even truth defined _relative_ to the
> corpus as the universe of observations being modeled. I once pointed
> Chomsky to his colleague Marvin Minsky's final plea to the field of AI:
> that it take seriously Algorithmic Information Theory's power in
> discerning truth. Minsky was so forceful in his admonition that he
> recommended everyone spend the rest of their lives studying it. Chomsky's
> response? People should take Minsky's advice.
>
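The Chomsky-hierarchy point in the quoted message can be made concrete with a toy sketch (entirely my own illustration; the function names and the fixed-window stand-in are assumptions, not anything from the paper): recognizing Dyck-1, the language of balanced parentheses, needs an unbounded counter, which any recognizer limited to a bounded window of recent input lacks.

```python
# Toy sketch (my own, not from the thread): Dyck-1, the language of balanced
# parentheses, is context-free and needs an unbounded counter (a stack) to
# recognize -- something a fixed-size memory window cannot provide.

def is_balanced(s: str) -> bool:
    """Exact recognizer: an unbounded counter plays the role of a stack."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # a ')' with no matching '('
                return False
    return depth == 0

def window_guess(s: str, window: int) -> bool:
    """Bounded-memory stand-in: only sees the last `window` characters,
    so it cannot track nesting deeper than its window."""
    return is_balanced(s[-window:])

deep = "(" * 8 + ")" * 8           # nesting depth 8
print(is_balanced(deep))           # True: the counter handles any depth
print(window_guess(deep, 4))       # False: the truncated view "))))"
                                   # looks unbalanced to the windowed model
```

The `window_guess` function is only a crude stand-in for bounded context, but it shows the shape of the failure: once nesting exceeds what fits in the window, the bounded recognizer gets the wrong answer.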

I've noticed that papers on AIT have dropped off a lot at the AGI
conference. They used to win all the awards and accolades.
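On the quadratic-cost point in the quoted message, a back-of-the-envelope sketch (my own, purely illustrative): self-attention forms an n-by-n matrix of pairwise query-key scores, so doubling the context length roughly quadruples attention compute and memory.

```python
# Back-of-the-envelope sketch (illustrative only): self-attention scores
# every token against every other token, giving an n x n matrix for
# context length n, so cost grows quadratically in n.

def attention_scores(n: int) -> int:
    """Number of pairwise query-key score entries for context length n."""
    return n * n

for n in (1024, 2048, 4096):
    print(n, attention_scores(n))

# Doubling the context length quadruples the score-matrix size:
print(attention_scores(2048) // attention_scores(1024))  # 4
```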


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc63b3ba66d5ff6e7-M9121f2d754fd26334d4a9a47
Delivery options: https://agi.topicbox.com/groups/agi/subscription