It's not that it's hard to feed data into OpenCog, whose
representation capability is very flexible.

It's simply that deep NNs running on multi-GPU clusters can process
massive amounts of text very fast, and OpenCog's processing is
currently much slower than that...
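To make the contrast concrete, here is a minimal sketch of the kind of graph that raw text turns into on the OpenCog side. This uses plain Python stand-ins, not the real OpenCog AtomSpace API; the word-node/pair-link scheme and all names here are illustrative assumptions only:

```python
# Illustrative stand-in for an AtomSpace-style hypergraph (NOT the real
# OpenCog API): a raw sentence enters as word nodes plus pair links,
# each link carrying an observation count that later processing
# (e.g. mutual-information computation) can update.
from collections import Counter
from itertools import combinations

atomspace = {"nodes": set(), "links": Counter()}

def ingest(sentence):
    """Add one raw sentence: one node per word, one pair link per
    co-occurring word pair (a crude proxy for link-parse data)."""
    words = sentence.lower().split()
    atomspace["nodes"].update(words)
    for a, b in combinations(words, 2):
        atomspace["links"][(a, b)] += 1  # increment observation count

ingest("the cat sat on the mat")
print(sorted(atomspace["nodes"]))        # → ['cat', 'mat', 'on', 'sat', 'the']
print(atomspace["links"][("the", "cat")])  # → 1
```

The point of the sketch is that ingestion itself is trivial; the cost sits in the graph processing that follows, which is where the throughput gap against GPU-based NNs shows up.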


On Wed, Feb 20, 2019 at 3:57 PM Rob Freeman <[email protected]> wrote:
>
> No problem Linas.
>
> From my point of view I'm encouraged that OpenCog is closer to an interesting 
> language model than I thought.
>
> I was surprised to see you discussing category theory in the context of a 
> language model. Category theory is motivated by formal incompleteness. To see 
> this applied to language is something I argued for long and hard. I remember 
> a thread on U. Bergen's "Corpora" list in 2007 with very little traction on 
> exactly this point. People could not see the relevance of formal 
> incompleteness for language. To see you, and others, embracing this is 
> progress.
>
> I'm glad you are deconstructing the grammar. You have probably been forced 
> into it by the success of distributed representations these last few years. 
> But at least you are doing it. I feared some ghastly fixed Link Grammar with 
> neural nets just disambiguating.
>
> Instead I see Ben is right. My basic data formulation of the problem may well 
> be compatible with what OpenCog are doing. That's good.
>
> Though I am still confused by Ben's statement that "we can't currently feed 
> as much data into our OpenCog self-adapting graph as we can into a BERT type 
> model".
>
> What does an OpenCog network look like, such that it is hard to feed data 
> into it? Can you give an example?
>
> What does an OpenCog network with newly input raw language data look like?
>
> -Rob
>
> On Wed, Feb 20, 2019 at 4:21 PM Linas Vepstas <[email protected]> wrote:
>>
>>
>>
>> On Tue, Feb 19, 2019 at 5:33 PM Rob Freeman <[email protected]> 
>> wrote:
>>>
>>> Linas,
>>>
>>> OK. I'll take that to be saying, "No, I was not influenced by Coecke et al."
>>
>> Note to self: do not write long emails. (I was hoping it would serve some 
>> educational purpose)
>>
>> I knew the basics of cat theory before I knew any linguistics. I skimmed the 
>> Coecke papers, I did not see anything surprising/unusual that made me want 
>> to study them closely. Perhaps there are some golden nuggets in those 
>> papers? What might they be?
>>
>>  So, no, I was not influenced by it.
>>
>>> For all that, I can't figure out if you are contrasting yourself with their 
>>> treatment or if you like their treatment.
>>
>>
>> I don't know what their treatment is. After a skim, it seemed like word2vec 
>> with some minor twist. Maybe I missed something.
>>>
>>>
>>> I quite liked their work when I came across it. In fact I had been thinking 
>>> for some time that category theory has something the flavour of a gauge 
>>> theory.
>>
>>
>> Yellow flag. Caution. I wouldn't go around saying things like that, if I 
>> were you. The problem is that I've got a PhD in theoretical particle physics 
>> and these kinds of remarks don't hold water.
>>
>>> I have no problem with the substance of it. I just don't think it is 
>>> necessary. At least for the perceptual problem. The network is a perfectly 
>>> good representation for itself.
>>
>>
>> To paraphrase: "I know that the earth goes around the sun. I don't think 
>> it's necessary to understand Kepler's laws."  For most people, that's a 
>> perfectly fine statement.  Just don't mention black holes in the same breath.
>>
>> > I say you can't resolve above the network. Simple enough for you?
>>
>> Too simple. No clue what that sentence means.
>>
>> > '"fixed"? What is being "lost"?  What are you "learning"? What do you mean 
>> > by "training"? What do you mean by "representation"? What do you mean by 
>> > "contradiction"?'...
>> >  But if you haven't understood them, it will probably be easier to use 
>> > your words than argue about them endlessly.
>>
>> ???
>>
>> > Anyway, in substance, you just don't understand what I am proposing. Is 
>> > that right?
>>
>> I don't recall seeing a proposal. Perhaps I hopped in at the wrong end of an 
>> earlier conversation.
>>
>> I'm sorry, this conversation went upside down really fast. I've hit a dead end.
>>
>> --linas
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"The dewdrop world / Is the dewdrop world / And yet, and yet …" --
Kobayashi Issa

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-Mcf61623c192a8a81a07f18d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription
