Sure. Not sure what to say. The human brain certainly has the ability to perform affine transformations at 60 frames per second. Presumably, babies learn how to do this by moving their heads around and seeing how the visual input changes. I'm not a psychophysiologist, but if I recall correctly, it takes somewhere between 3 months and 1+ years for a child's visual system to stabilize, and additional learning (hand-eye coordination) continues through the teens and young adulthood. That represents -- let's see -- 30 frames per second x 3600 seconds/hour x 12 waking hours/day x 400 days = roughly 500 million frames of training data -- nearly five thousand hours of training data.
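The back-of-the-envelope arithmetic above can be checked with a few lines of Python (the 30 fps, 12 waking hours/day, and 400 days figures are the assumptions stated in the estimate):

```python
# Rough estimate of infant visual "training data", using the numbers above.
fps = 30                    # assumed effective frame rate
seconds_per_hour = 3600
waking_hours_per_day = 12   # assumed waking time
days = 400                  # a bit over a year

frames = fps * seconds_per_hour * waking_hours_per_day * days
hours = waking_hours_per_day * days

print(f"{frames:,} frames")  # 518,400,000 -- roughly 500 million
print(f"{hours:,} hours")    # 4,800 -- nearly five thousand
```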
That's a lot of data. --linas

On Tue, Aug 30, 2016 at 1:23 AM, Jan Matusiewicz <[email protected]> wrote:
>
>> You will have another, different CYC-type failure, if you attempt to
>> hand-code (by humans) the visual subsystem. Automation is kind-of the
>> whole point of deep learning, etc.
>
> This would be great if the system could also learn how to interpret
> visual data without a need to hard-code anything. However, I am afraid it
> would be more challenging than anything else. Humans have a built-in
> visual processing system. Imagine that someone presents you images in the
> form of a sequence of color codes of consecutive pixels, like
> #FF88AA#0F0F0F, and expects you to match images presenting the same
> object seen from a different angle. This would be a very difficult task
> for a human.
>
> It is challenging enough to make a system which learns that a cat ("C")
> chases and eats every mouse ("M") if the only input given is a time-based
> sequence of text screen-shots of the situation, like
> ........................
> ....M...................
> ........................
> ................C.......
> ........................
> ....M..........M........
> ........................
> Transforming the character-matrix representation into a list of animals
> with their current positions, finding that the x, y of C approaches the
> x, y of some M, etc. would not be easy. That's why testing OpenCog in a
> Minecraft environment seems like a good first step.
>
> > One reason that people are infatuated with deep-learning neural nets is
> > that NN's provide a concrete, achievable architecture that is proven to
> > work, and is closely described in thousands of papers and books, so any
> > joe-blow programmer can sit down and start coding up the algorithms,
> > and get some OK results
>
> I also experienced that when I say Artificial Intelligence, people think:
> Neural Networks. Too few people try different approaches.
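The extraction step Jan describes -- turning a text screen-shot into a list of animal positions and checking whether C approaches some M across frames -- can be sketched in a few lines of Python. The grid symbols are from his example; the parsing and distance check are my own illustrative sketch, not an OpenCog implementation:

```python
# Parse text screen-shots into (x, y) positions and check whether the
# cat ("C") moved closer to its nearest mouse ("M") between two frames.

def positions(frame, symbol):
    """Return (x, y) coordinates of every cell containing `symbol`."""
    return [(x, y)
            for y, row in enumerate(frame)
            for x, ch in enumerate(row)
            if ch == symbol]

def dist2(a, b):
    """Squared Euclidean distance between two grid cells."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Two consecutive (hypothetical) screen-shots; the cat moves left.
frame_t0 = ["........................",
            "....M...................",
            "........................",
            "................C......."]
frame_t1 = ["........................",
            "....M...................",
            "........................",
            "...........C............"]

cat_before = positions(frame_t0, "C")[0]
cat_after = positions(frame_t1, "C")[0]
mice = positions(frame_t1, "M")

nearest = min(mice, key=lambda m: dist2(cat_after, m))
approaching = dist2(cat_after, nearest) < dist2(cat_before, nearest)
print("cat approaching nearest mouse:", approaching)  # True
```

Even this toy version glosses over the hard parts -- tracking identity when two mice occupy similar positions, inferring "chases" and "eats" from many such frame pairs -- which is Jan's point.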
