Contradictions are an interesting and important topic...

PLN logic is paraconsistent, which Curry-Howard-corresponds to a sort
of gradual typing
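
Roughly: under Curry-Howard, propositions are types, so a term that is
usable at two incompatible types is the programming analogue of a
proposition with evidence both for and against it.  A gradually typed
language tolerates exactly that, locally, instead of rejecting the
whole program.  A toy sketch in Python (my own illustration, nothing
from the actual PLN codebase; needs_int/needs_str are made up for the
example):

from typing import Any

def needs_int(x: int) -> int:
    return x + 1

def needs_str(x: str) -> str:
    return x.upper()

def handle(payload: Any) -> None:
    # Because Any is "consistent with" every type, a gradual checker
    # accepts both calls below, even though no single static type for
    # payload could satisfy both.  The clash surfaces (if at all) as a
    # local runtime error rather than a global rejection -- analogous
    # to a paraconsistent logic quarantining P and not-P instead of
    # letting them jointly derive everything.
    for f in (needs_int, needs_str):
        try:
            print(f(payload))
        except (TypeError, AttributeError) as err:
            print("blocked locally at runtime:", err)

handle(41)       # needs_int succeeds; needs_str fails, locally
handle("hello")  # needs_str succeeds; needs_int fails, locally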

Intuitionistic logic maps into Type Logical Categorial Grammar (TLCG)
and such; paraconsistent logic would map into a variant of TLCG in
which there could be statements with multiple contradictory
parses/interpretations
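
To make that concrete, here is a toy AB categorial grammar (the
applicative core of TLCG) in Python, where the lexicon deliberately
assigns words incompatible categories and a CKY parser keeps every
resulting analysis instead of failing.  The two-word lexicon is a
made-up fragment, purely for illustration:

from itertools import product

# A category is an atom ("S", "NP") or a functor:
#   ("/", X, Y)  combines with a Y on its right to give X
#   ("\\", X, Y) combines with a Y on its left to give X
LEXICON = {
    "time":  ["NP",               # noun: time as a substance
              ("/", "S", "NP")],  # imperative verb: "time the flies!"
    "flies": [("\\", "S", "NP"),  # intransitive verb: ...flies by
              "NP"],              # plural noun: the insects
}

def combine(left, right):
    """Forward and backward application, the only two AB rules."""
    results = []
    if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
        results.append((left[1], "fwd"))
    if isinstance(right, tuple) and right[0] == "\\" and right[2] == left:
        results.append((right[1], "bwd"))
    return results

def parse(words):
    """CKY chart parser that keeps *all* derivations for every span."""
    n = len(words)
    chart = [[[] for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] = [(cat, w) for cat in LEXICON[w]]
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for (lc, ld), (rc, rd) in product(chart[i][k],
                                                  chart[k + 1][j]):
                    for cat, rule in combine(lc, rc):
                        chart[i][j].append((cat, (rule, ld, rd)))
    return [d for cat, d in chart[0][n - 1] if cat == "S"]

for derivation in parse(["time", "flies"]):
    print(derivation)
# -> ('bwd', 'time', 'flies')   declarative: time passes
# -> ('fwd', 'time', 'flies')   imperative: go time the flies

Both analyses of S sit in the chart side by side; the grammar never
has to pick one, it just indexes the contradictory readings by their
derivations.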

In short, formal grammar is not antithetical to contradictions at the
level of syntax, semantics or pragmatics

It is true that GPT-3 can capture contradictory and ambiguous aspects
of language.  However, capturing these without properly drawing
connections between abstract patterns and concrete instances doesn't
get you very far, and isn't a particularly great direction IMO

ben

On Fri, Jul 31, 2020 at 8:25 PM Rob Freeman <chaotic.langu...@gmail.com> wrote:
>
> Ben,
>
> By examples do you mean like array reversal in your article?
>
> I agree. This problem may not be addressed by their learning paradigm at all.
>
> But I disagree this has been the biggest problem for symbol grounding.
>
> I think the biggest problem for symbol grounding has been ambiguity. Manifest 
> in language.
>
> So I agree GPT-3 may not be capturing necessary patterns for the kind of 
> reasoning used in array reversal etc. But I disagree that this kind of reasoning 
> has been the biggest problem for symbol grounding.
>
> Where GPT-3 may point the way is by demonstrating a solution to the ambiguity 
> problem.
>
> That solution may be hidden. They may have stumbled onto the solution simply 
> by virtue of the fact that they have no theory at all! No preconceptions.
>
> I would contrast this with traditional grammar learning. Which does have 
> preconceptions. Traditional grammar learning starts with the preconception 
> that grammar will not contradict. The GPT-x algorithm may not have this 
> expectation. So they may be capturing contradictions and indexing them on 
> context, by accident.
>
> So that's my thesis. The fundamental problem which has been holding us back 
> for symbol grounding is that meaning can contradict. A solution to this, even 
> by accident (just because they had no theory at all?) may still point the way.
>
> And the way it points in my opinion is towards infinite parameters. 
> "Parameters" constantly being generated (and contradiction is necessary for 
> that, because you need to be able to interpret data multiple ways in order to 
> have your parameters constantly grow in number: 2^2^2^2...)
>
> Grok that problem - contradictions inherent in human meaning - and it will be 
> a piece of cake to build the particular patterns you need for abstract 
> reasoning on top of that. Eliza did it decades ago. The problem was it 
> couldn't handle ambiguity.
>
> -Rob
>
> On Sat, Aug 1, 2020 at 9:40 AM Ben Goertzel <b...@goertzel.org> wrote:
>>
>> Rob, have you looked at the examples cited in my article, which I
>> linked here?  Seeing this particular sort of stupidity from them,
>> it's hard to see how these networks would be learning the same sorts
>> of "causal invariants" as humans are...
>>
>> Transformers clearly ARE a full grammar learning architecture, but in
>> a non-AGI-ish sense.  They are learning the grammar of the language
>> underlying their training corpus, but mixed up in a weird and
>> non-human-like way with so many particulars of the corpus.
>>
>> Humans also learn the grammar of their natural languages mixed up with
>> the particulars of the linguistic constructs they've encountered --
>> but the "subtle" point (which obviously you are extremely capable to
>> grok) is that the mixing-up of abstract grammatical patterns with
>> concrete usage patterns in human minds is of a different nature than
>> the mixing-up of abstract grammatical patterns with concrete usage
>> patterns in GPT3 and other transformer networks.   The human form of
>> mixing-up is more amenable to appropriate generalization.
>>
>> ben
>



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac
