Hmm, I'll think about this, thx...
In a way, I wonder if this is related to what InfoGAN does, with its
latent variables that need to have high mutual information with the
state of the NN modeling the data...
more later...
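The InfoGAN connection above hinges on its mutual-information term. As a rough numpy sketch (the function names and toy numbers here are mine, not from InfoGAN's code): the term is trained via a variational lower bound, where an auxiliary network Q predicts the latent code c from the generated sample, and minimizing the cross-entropy between Q's prediction and the sampled c maximizes a lower bound on I(c; G(z, c)).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def infogan_mi_loss(q_logits, codes):
    """Cross-entropy between Q's predicted code distribution and the
    sampled categorical code c. Minimizing this maximizes a variational
    lower bound on the mutual information I(c; G(z, c))."""
    probs = softmax(q_logits)
    n = codes.shape[0]
    return -np.mean(np.log(probs[np.arange(n), codes] + 1e-12))

# Toy check: if Q recovers the code perfectly the loss approaches 0;
# if Q is uninformative (uniform) the loss equals log(num_codes).
num_codes = 4
codes = rng.integers(0, num_codes, size=8)
perfect_logits = np.eye(num_codes)[codes] * 50.0   # sharply peaked on the true code
uniform_logits = np.zeros((8, num_codes))          # carries no information about c
loss_perfect = infogan_mi_loss(perfect_logits, codes)   # ~0
loss_uniform = infogan_mi_loss(uniform_logits, codes)   # log(4) ~ 1.386
```

The two toy cases bracket the loss: a perfectly informative Q drives it toward 0, while an uninformative Q pins it at log(num_codes).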
On Sat, Jan 21, 2017 at 9:53 AM, Linas Vepstas wrote:
> I don't think I ever spoke very carefully to Ralf about this, ...
Hey Noah,
>> On Friday, September 16, 2016 at 11:37:31 AM UTC-4, Noah Bliss wrote:
>> I understand an issue recently discussed with embodiment concerns
>> methods for processing visual input. It's well known that at this time
>> sending raw video into atomspace is a bad idea and that humans have
b
I don't think I ever spoke very carefully to Ralf about this, nor am I
sure that the seed I tried to plant ever germinated in Ben's head, so
let me restart from scratch. Perhaps this is something Noah could
work on?
For simplicity, let me work with sound, because it's 1D (as a time
series), not 2D
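To make the 1D setting concrete: the bottom layer of a DeSTIN-style hierarchy over sound would start by slicing the time series into (possibly overlapping) frames. A minimal numpy sketch, with the helper name and frame sizes chosen arbitrarily for illustration:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1D time series into overlapping frames: the natural
    bottom layer of a DeSTIN-style hierarchy over sound."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

x = np.arange(16, dtype=float)               # a toy 1D signal
frames = frame_signal(x, frame_len=4, hop=2)  # shape (7, 4)
```

Each frame would then feed a node at the lowest level of the hierarchy; the 2D visual case is the same idea with patches instead of frames.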
Ben,
Sounds good, I would definitely be interested. It seems pretty ambitious,
but no one ever achieved great things by aiming low. I noticed Ralf was
CC'd on this topic, so if he could reach out, I am available on all major
platforms, and while I may spend most of my initial time learning and "looking o
Noah,
What Ralf is working on is making a "DeSTIN-like" visual processing
hierarchy in TensorFlow, probably using InfoGAN as a key ingredient
(within each "DeSTIN-like" node), and then integrating this hierarchy
with OpenCog so that OpenCog can be used to recognize semantic
patterns in the state o
Ralf Mayet in HK is working on an approach such as you describe... help
would be valued... more later...
On Jan 18, 2017 14:15, "Noah Bliss" wrote:
College has kept me busy, but I finally took the time to go through the
pivision code on the hansonrobotics GitHub. Correct me if I am wrong, but I
saw no integration of visual information being fed into OpenCog, at least
not directly. I don't know what kind of chewing ROS does to the information
Afterthought:
Checked out KinFu; it looks to do something quite similar. I am somewhat
concerned about the resolution it currently offers, though. I'll see if there
is a way to scale it down to simpler objects for easier atomspace digging
and verification. Otherwise, I do understand the draw of KinFu.
I was reflecting on your email, Ben... I agree, arbitrarily segmenting a
blob into predefined sections for processing may not be the best long-term
focus for this. Perhaps for a small region this would be useful (e.g.,
"What color is the object I am holding?"). Then it would be able to set an
arbit