Linas,

> I want to keep this conversation realistic.  Sophia, today, struggles to
> see human faces.
>

We are not talking about applying existing narrow methods. Those methods may be
needed/realistic/practical now, but they don't bring us much closer to AGI.
If we were talking about 'realistic' things in that sense, we would not be
talking about AGI at all.
Our task is to move toward the goal of creating a vision system for AGI.
It's not about making a better narrow face-recognition algorithm. We could do
that, but it is not our task now.


> Realistic compute power -- lets say several laptops worth of compute, and
> a GPU card that doesn't have some insanely whirry fan.  This is what you
> can get on-site, at the location where the vision is happening.
>

Realistic - not for now, but for 5 or 10 years from now. I remember we were
developing an image-matching system that took an 'unrealistic' few minutes to
run in the late 1990s; later it took less than a second. What is 'unrealistic'
now is maybe 10^6 times slower than needed...


>
> > We would like to hardcode as less as possible. We can (and likely
> should) code TensorFlowValue
>
> I think that would be a good experiment to conduct.  While Ben and other
> enjoy designing systems top-down, I like to pursue a bottom-up approach --
> build something, see how well it works. If it works poorly, make sure that
> we understood *why* it failed, and what parts were good, and then try
> again.  So, for me a TensorFlowValue object would highlight what's good and
> what's bad in the current design.  Engineering hill-climbing.
>

Nice.


>
>
> > but we would like to avoid hardcoding (is-near? A B).
>
> I agree, sort-of-ish.  English language prepositions are a "closed class"
> - it's a finite list, and a fairly small list -- a few dozen that are truly
> practical. A few hundred, if you start listing archaic, obsolete, rare
> ones, ones inapplicable to images ... https://en.wikipedia.org/wiki/
> List_of_English_prepositions   So for now, I find it acceptable to hard
> code a certain subset.
>

> A discussion about "how can we learn prepositions from nothing?" would
> have to be a distinct conversation.
>

True, but the problem is not the number of these prepositions, but their
application in different contexts. If we hard-code (is-near? A B),
say, for rectangular regions of images, it will be inapplicable even to
regions of a different shape. So these prepositions can have some built-in
templates, but not a procedural implementation.
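To make the point concrete, here is a minimal sketch (all names, coordinates
and thresholds below are made up for illustration, not from any actual OpenCog
code) of an is-near predicate hard-coded for axis-aligned rectangles. The
moment a region is a polygon, a pixel mask, or anything non-rectangular, the
procedure simply has no answer:

```python
def rect_gap(a, b):
    """Gap between two axis-aligned rectangles (x1, y1, x2, y2);
    zero if they touch or overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def is_near(a, b, threshold=10.0):
    # Hard-coded for rectangles: the representation (four corner
    # coordinates) is baked into the procedure itself, so it cannot
    # be reused for regions of any other shape.
    return rect_gap(a, b) <= threshold

print(is_near((0, 0, 10, 10), (15, 0, 25, 10)))  # True: gap is 5
print(is_near((0, 0, 10, 10), (50, 0, 60, 10)))  # False: gap is 40
```

The shape assumption lives inside the procedure, which is exactly why a
template-like representation would generalize better than this.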


>
>
>>> PlusLink, TimesLink, etc..
>>
>> This is exactly my question whether we need them or not :)
>
> Whether they are needed or not depends a lot on what kind of data is
> exposed by TensorFlowValue, and how that data is then routed up into the
> natural-language and reasoning layers. There are multiple possible designs
> for this; there is no particular historical precedent (in the atomspace)
> for this.
>

OK


> https://en.wikipedia.org/wiki/Centroid
>
> It avoids some of the complexity of bounding boxes (which might be
> touching, overlapping or inside-of.)
>

It will not work. With centroids, a nose can be IsLeftOf a face.
So we shouldn't oversimplify either...
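The failure is easy to reproduce with made-up coordinates (the regions below
are purely illustrative): the nose is contained inside the face, yet a
centroid-only comparison happily reports it as "left of" the face.

```python
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A square face region, and a nose region contained inside it whose
# center of mass sits slightly left of the face's center of mass.
face = [(0, 0), (100, 0), (100, 100), (0, 100)]
nose = [(40, 40), (55, 40), (55, 60), (40, 60)]

def is_left_of(a, b):
    # Centroid-only comparison: containment is ignored entirely.
    return centroid(a)[0] < centroid(b)[0]

print(is_left_of(nose, face))  # True -- a nose "left of" the face it belongs to
```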

-- Alexey
