Personally, and this may be ambitious, I would like to see a robot - real or 
simulated - that has innate behaviors it is able to build upon (much more 
granular than "move left", "go forward", etc.) in order to achieve some manner 
of goal. This would show the sensory-motor work and HTM operating together in a 
non-timestep environment (or, in simulation, with at least minimal timesteps), 
and would reinforce the "biologically inspired" framing of HTM in general.
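To make the idea concrete, here is a minimal sketch of what "composing innate behaviors" might look like. Everything here is hypothetical - the primitive names, the state representation, and the `compose` helper are illustrative and are not part of any NuPIC API:

```python
# Hypothetical sketch: building higher-level behaviors out of fine-grained
# innate primitives. All names are illustrative, not NuPIC APIs.

def flex_joint(state, joint, delta):
    """Primitive behavior: adjust one joint angle by a small amount."""
    state = dict(state)  # copy so primitives stay side-effect free
    state[joint] = state.get(joint, 0.0) + delta
    return state

def compose(*behaviors):
    """Chain primitive behaviors into a single higher-level behavior."""
    def composed(state):
        for behavior in behaviors:
            state = behavior(state)
        return state
    return composed

# A learned, goal-directed behavior assembled from primitives far more
# granular than "move left" or "go forward".
reach = compose(
    lambda s: flex_joint(s, "shoulder", 0.3),
    lambda s: flex_joint(s, "elbow", -0.2),
)

state = reach({"shoulder": 0.0, "elbow": 0.0})
print(state)  # {'shoulder': 0.3, 'elbow': -0.2}
```

The point is only the composition pattern: a learning system could discover which chains of primitives achieve a goal, rather than being handed coarse macro-actions.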




-------- Original message --------
From: Vinh <[email protected]>
Date: 09-29-2014 11:38 AM (GMT-05:00)
To: Matthew Lohbihler <[email protected]>
Subject: Re: [nupic-discuss] NuPIC on Hacker News

On Monday, 29 September 2014, 09:31 PM, Fergal Byrne wrote:
Could I ask people to have a think about this and possibly bounce around ideas? 
We could schedule a round-table session during the hackathon next month and see 
if there's an application area to focus on in this regard. The most immediate 
candidates I can see right now are cortical.io (aka CEPT) for NLP and the 
Geospatial Encoder.
IMHO, NLP is one of the hot areas that deep learning researchers are attacking. 
So yeah, if NuPIC and cortical.io can deliver results comparable to or better 
than deep networks', people will surely take NuPIC seriously.
One important domain where deep networks still underperform the state of the 
art is video, e.g., action recognition. But I think that unless we had a GPU 
version of NuPIC, it would not be able to compete in this area.

Regards,
Vinh
