On Thu, May 24, 2018 at 6:03 AM, Alexey Potapov <[email protected]> wrote:

> Linas,
>
>
>> I want to keep this conversation realistic.  Sophia, today, struggles to
>> see human faces.
>>
>
> We don't talk about applying existing narrow methods. These methods may be
> needed/realistic/practical now, but they don't bring us much closer to AGI.
> If we were talking about 'realistic' things in this sense, we would not
> talk about AGI at all.
> Our task is to move forward toward the goal of creating a vision system for
> AGI. It's not about making a better narrow face-recognition algo. We can do
> this, but it is not our task now.
>

And that is not what I was saying, at all. What I was talking about was the
principles of software architecture: if you want to write code, then you
have to design for present-day platforms, present-day speeds-and-feeds. If
you want to speculate about what kind of hardware we'll have in 10 or 20
years, that's a different game than attempting to build something
semi-workable today.


>
> > but we would like to avoid hardcoding (is-near? A B).
>
> I agree, sort-of-ish.  English-language prepositions are a "closed class"
> -- it's a finite list, and a fairly small list -- a few dozen that are
> truly practical. A few hundred, if you start listing archaic, obsolete,
> rare ones, and ones inapplicable to images:
> https://en.wikipedia.org/wiki/List_of_English_prepositions
> So for now, I find it acceptable to hard-code a certain subset.
>

> A discussion about "how can we learn prepositions from nothing?" would
> have to be a distinct conversation.
>

> True, but the problem is not in the number of these prepositions, but in
> their applications in different contexts. If we hard-code (is-near? A B),
> say, for rectangular regions on images, it will be inapplicable even to
> regions of a different shape. So, these prepositions can have some
> built-in templates, but not a procedural implementation.
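
To make the "template, not procedure" point concrete, here is a minimal
sketch in plain Python (all names invented for illustration; no opencog
involved): the same is-near template, parameterized by a distance function,
applies to regions of any shape, whereas a hard-coded rectangular version
would not.

```python
# Hypothetical sketch: instead of hard-coding (is-near? A B) for
# axis-aligned rectangles, parameterize the predicate by a distance
# function. The template then works for any region representation.
import math

def make_is_near(distance_fn, threshold):
    """Template: build an is-near predicate for a given geometry."""
    def is_near(a, b):
        return distance_fn(a, b) < threshold
    return is_near

def centroid_distance(a, b):
    """Distance between centroids of point lists (shape-agnostic)."""
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    (ax, ay), (bx, by) = centroid(a), centroid(b)
    return math.hypot(ax - bx, ay - by)

# The same template instantiated for polygonal regions given as point lists:
is_near = make_is_near(centroid_distance, threshold=50.0)

square   = [(0, 0), (10, 0), (10, 10), (0, 10)]
triangle = [(20, 0), (30, 0), (25, 10)]
far_blob = [(200, 200), (210, 205), (205, 215)]

print(is_near(square, triangle))  # True
print(is_near(square, far_blob))  # False
```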

In any sort of engineering progression, there are subsystems that you can
work on and perfect today, and other subsystems that you just punt on,
hack, and say to yourself "I'll fix this sometime later".   How to learn
prepositional relationships from scratch is an interesting discussion, but
it is different from the question "how can I attach tensorflow to opencog
in the next month or two?"

Perhaps it is possible to figure out how to learn prepositions, from
scratch, in a month or two. But I doubt it, unless you happen to have some
very clear ideas about this; I certainly don't.


>
>
>> PlusLink, TimesLink, etc.
>
> This is exactly my question whether we need them or not :)
>
> Whether they are needed or not depends a lot on what kind of data is
> exposed by TensorFlowValue, and how that data is then routed up into the
> natural-language and reasoning layers. There are multiple possible designs
> for this; there is no particular historical precedent (in the atomspace)
> for this.
>

> OK

Again, it would be great if we could nail down the next level of detail:
exactly what kind of output is generated by tensorflow, and exactly what we
want to do with it in opencog.

Or perhaps it's a different question: maybe the question is "how can we map
tf.keras to Atomese?" Because this snippet:

model = tf.keras.Sequential([
  tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),  # input shape required
  tf.keras.layers.Dense(10, activation="relu"),
  tf.keras.layers.Dense(3)
])

appears to be a purely declarative definition of a network topology,
which we could map to Atomese. This would allow us to write tensorflow
programs in Atomese. Why is that interesting? Not because we want humans
to write tensorflow models in Atomese, but because maybe we can have PLN
perform reasoning about tensorflow models, or because we can use MOSES
to create, control and evaluate tensorflow models, or perhaps you have
some probabilistic-programming idea that could auto-generate different
tensorflow models.
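
For concreteness, a toy sketch of what such a mapping could look like.
The Atom type names below (SequentialLink, DenseLayerNode, ...) are
invented for illustration; they are not existing opencog Atom types, and
this uses plain Python strings rather than the real AtomSpace API:

```python
# Hypothetical sketch: serialize a declarative layer spec (mirroring the
# tf.keras.Sequential snippet above) into Atomese-style s-expressions.

def dense(units, activation=None, input_shape=None):
    """A layer description analogous to tf.keras.layers.Dense."""
    return ("Dense", units, activation, input_shape)

def to_atomese(layers):
    """Emit an Atomese-style s-expression for a sequential model."""
    atoms = []
    for kind, units, activation, input_shape in layers:
        parts = [f'(NumberNode "{units}")']
        if activation:
            parts.append(f'(ConceptNode "{activation}")')
        if input_shape:
            parts.append(f'(NumberNode "{input_shape[0]}")')
        atoms.append(f'({kind}LayerNode {" ".join(parts)})')
    return "(SequentialLink\n  " + "\n  ".join(atoms) + ")"

model = [
    dense(10, activation="relu", input_shape=(4,)),
    dense(10, activation="relu"),
    dense(3),
]
print(to_atomese(model))
```

Once a model lives in the atomspace in this form, PLN or MOSES could, in
principle, pattern-match and rewrite it like any other Atomese.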

So far, I am very unclear about exactly what problem we are trying to
solve here (other than the "problem of AGI").



> https://en.wikipedia.org/wiki/Centroid
>
> It avoids some of the complexity of bounding boxes (which might be
> touching, overlapping or inside-of.)
>

> It will not work. In the case of centroids, a nose can be IsLeftOf a
> face. So, we shouldn't oversimplify either...

Sure. But a common engineering progression is to have the system architect
and/or a senior programmer create a functioning prototype, and then have
three or five junior programmers run around and replace the centroids with
bounding boxes, or whatever. It's a division-of-labor issue.
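
A minimal sketch of that centroid pitfall and the bounding-box fix
(hypothetical helper names, plain Python): with centroids alone, a nose
whose center lies left of the face's center is reported as left-of the
face; a containment check on bounding boxes rules that out.

```python
# Minimal illustration of the centroid pitfall discussed above.
# Boxes are (x_min, y_min, x_max, y_max); names invented for this sketch.

def centroid(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def is_left_of_centroid(a, b):
    """Naive predicate: compare centroid x-coordinates only."""
    return centroid(a)[0] < centroid(b)[0]

def contains(outer, inner):
    """True if box `inner` lies entirely inside box `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def is_left_of(a, b):
    """Refined predicate: containment blocks the left-of relation."""
    if contains(b, a) or contains(a, b):
        return False
    return centroid(a)[0] < centroid(b)[0]

face = (0, 0, 100, 100)
nose = (40, 40, 48, 60)   # inside the face, centroid slightly left of center

print(is_left_of_centroid(nose, face))  # True  -- the absurd conclusion
print(is_left_of(nose, face))           # False -- containment blocks it
```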

Linas.


-- 
cassette tapes - analog TV - film cameras - you

