Ok folks, let's move discussion of the implementation to Github. First
question to answer is which HTM implementation to use:
https://github.com/nupic-community/htm-over-http/issues/2

Anyone else reading this is free to jump in and help out, but I want
to define our work properly using Github issues so we all know what is
happening and who is working on what.
---------
Matt Taylor
OS Community Flag-Bearer
Numenta


On Sun, Dec 6, 2015 at 10:25 PM, Jonathan Mackenzie <[email protected]> wrote:
> Sounds like a good app Matt, I can help out. Personally, for getting a web
> app off the ground quickly in Python I recommend Pyramid:
> http://www.pylonsproject.org/
>
> On 7 December 2015 at 03:31, Matthew Taylor <[email protected]> wrote:
>>
>> Thanks for the interest! I'll try to respond to everyone in this
>> email. But first, who reading this would want to use an HTM over HTTP
>> service like this? It means that you won't need to have HTM running on
>> the same system that is generating the data. It's basically HTM in the
>> Cloud. :)
>>
>> On Sat, Dec 5, 2015 at 12:16 PM, Marcus Lewis <[email protected]> wrote:
>> > I'm interested in HTTP GET, inspecting models.
>>
>> Great feature to add after a minimum viable product has been created,
>> but this adds the complexity of either caching or persistence
>> (depending on how much history you want).
>>
>> On Sat, Dec 5, 2015 at 2:03 PM, cogmission (David Ray)
>> <[email protected]> wrote:
>> > One thing I am concerned about is the call/answer nature of the
>> > interface you describe, because of the latency involved in a
>> > submit-one-row-per-call methodology. Should it not be able to
>> > "batch" process rows of data instead? (Batches could contain one
>> > row if you were dedicated to being a masochist.)
>>
>> Yes, we will eventually need that, but I don't need it in the
>> prototype. Let's focus on one row at a time and expand to batching
>> later.
>>
>> > Next, at Cortical we use a technology called DropWizard, which makes
>> > it very easy to deploy an HTTP server capable of RESTful queries (I
>> > have done this for Twitter processing involving HTM.java).
>>
>> If this is going to use NuPIC and Python, I have found that it's super
>> easy to set up REST with web.py [1]. It's just a matter of writing a
>> class and a few functions. For REST on the JVM, I am open to suggestions.
>>
>> On Sat, Dec 5, 2015 at 5:50 PM, Pascal Weinberger
>> <[email protected]> wrote:
>> > Like an extended version of HTM Engine?
>> > This would be the solution to the htmengine prediction issue :)
>>
>> If we chose the HTM Engine option, then yes, we would need to add some
>> features to HTM Engine, especially prediction and user-defined model
>> params. This is not a small job, but it would be great to have a
>> scaling platform already built into the HTTP server. I would be happy
>> even if we just started with an attempt to make HTM Engine (and the
>> HTTP server in the skeleton app) deployable to the cloud. Even with
>> its current capabilities, I could start using it immediately, and we
>> could add features over time.
>>
>> > Will you set up a repo in the community? :)
>>
>> Placeholder: https://github.com/nupic-community/htm-over-http
>>
>> Let's continue discussion on Gitter [2]. Our first decision is which
>> HTM implementation to use. I am leaning towards HTM Engine because it
>> would take the least effort to do the deployment configuration around
>> it and get an MVP running the fastest (even if it doesn't do
>> prediction or custom model params out of the box).
>>
>> IMO the best way to attack this is to get something minimal running
>> ASAP and add features as required.
>>
>> [1] http://webpy.org/
>> [2] https://gitter.im/nupic-community/htm-over-http
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>
>
>
> --
> Jonathan Mackenzie
> BEng (Software) Hons
> PhD Candidate, Flinders University
