Here is an example of a short paper that packs a lot of fundamental description of a neuron-dendrite model into SPICE. "A Carbon Nanotube Neuron with Dendritic Computations": http://ceng.usc.edu/~parker/prepublication.pdf
The cortical neuron design includes dendritic circuits and is intended to be implemented in CMOS or carbon nanotube technology. Because it includes dendritic computations, I thought it was a good match for discussing SPICE models of the CLA.
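For readers without access to SPICE, here is a rough software analogue of the passive membrane dynamics such a circuit model integrates, written as a leaky integrate-and-fire sketch in Python. All parameter values are illustrative defaults, not taken from the paper, and this omits the active dendritic nonlinearities the carbon nanotube design actually implements:

```python
# Illustrative leaky integrate-and-fire neuron: a software stand-in for the
# passive RC membrane dynamics a SPICE neuron model would integrate.
# All constants below are illustrative, not taken from the paper.

def simulate_lif(i_inj, dt=1e-4, t_end=0.1,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065,
                 r_m=1e7, tau_m=0.01):
    """Euler-integrate dV/dt = (-(V - V_rest) + R_m * I) / tau_m,
    emitting a spike time and resetting when V crosses threshold."""
    v = v_rest
    spikes = []
    for step in range(int(t_end / dt)):
        v += dt * (-(v - v_rest) + r_m * i_inj) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time in seconds
            v = v_reset               # reset membrane after the spike
    return spikes
```

With zero injected current the membrane stays at rest and no spikes occur; a suprathreshold current (e.g. 2 nA with these constants) produces a regular spike train. A SPICE netlist encodes the same equation as a capacitor, a leak resistor, and a current source, which is why the translation from math model to SPICE is straightforward for a seasoned EE.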
In the current white paper the CLA is presented as an abstraction of biology in a way that allows those ideas to be embodied in software. The pseudo-code example was designed for single-threaded von Neumann digital hardware. This translation from analog brain chemistry to serialized digital operation is a painful one, requiring many back-and-forth mental conversions between the serialized bit domain and the analog model. The approach works for software engineers and coders because we currently live in a von Neumann world and think in its data structures and algorithms. A large part of the CLA white paper is directed at those who can understand and implement this type of code. I believe we will be leaving the von Neumann world behind, so we need to model our systems in ways that build bridges to new architectures while still serving us now in building the OSS ecosystem that will drive adoption.

I'm assuming some math models would be included in a new comprehensive paper. If so, those math models would help bridge the biology to other domains, and with them in hand a seasoned EE might translate the model to SPICE fairly easily. If we had that, I believe the inclusion of a simple SPICE model or analog circuit diagram would help some people and would be a solid addition to any new version of the CLA theory and description. However, I am left wondering about the utility of a SPICE model as part of the theory-to-implementation stack: would we use it to validate the model, or just as a learning tool?

On Sat, Jan 18, 2014 at 10:43 AM, Francisco Webber <[email protected]> wrote:

> Sebastian,
> I can absolutely second that. We use KNIME to prepare our datasets for
> retina training. And we also plan to create KNIME nodes (plug-ins) that
> call our REST services and allow interactive creation of CEPT-based data
> workflows.
> There is a well-established SDK available that makes node development (in
> Java) really easy.
> When, in my dreams, I drift into analog SPICE modeling, then this targets
> primarily a reference CLA implementation that serves as a baseline for other
> realizations (like software) and can be used for scientific investigation
> and experimentation.
>
> All the Best
> Francisco
>
> Francisco De Sousa Webber
> Founder, GM
> *CEPT Systems GmbH*
> Mariahilferstrasse 4, 1070 Vienna, Austria
> +43 664 502 77 96
> [email protected]
> http://cept.at
>
> On 19.01.2014, at 02:00, Sebastian Hänisch <[email protected]> wrote:
>
> Nice to see the progress made towards an easy-to-use development platform.
> Flow-based programming in conjunction with machine learning reminds me of
> the open-source project KNIME (http://www.knime.org/).
>
> Straight from their site:
> 'KNIME [naim] is a user-friendly graphical workbench for the entire
> analysis process: data access, data transformation, initial investigation,
> powerful predictive analytics, visualisation and reporting. The open
> integration platform provides over 1000 modules (nodes), including those of
> the KNIME community and its extensive partner network.'
>
> This or something similar would lower the barrier to entry a lot, and as a
> side effect it would be far easier to test several implementations concurrently
> against a benchmark network.
>
> 2014/1/18 Stewart Mackenzie <[email protected]>
>
>> Nothing wrong with pseudo-code at this 'idea->implementation'
>> stage.
>>
>> Dinner conversations with Francisco have stimulated constructive thoughts.
>> He's a special person in our community. Our approaches are slightly
>> different. My thoughts lean towards an environment that allows
>> non-developers to pivot quickly, test new brain theory, drag in a
>> component, test it, then throw it away if needed. This test bed becomes
>> the source of papers and more efficient implementations.
>>
>> PSpice needs hardened hardware experts and a large amount of money to
>> purchase a license, and the implementation isn't approachable to an eye
>> untrained in hardware design. My view is that we need tools that are a
>> delight for neuroscientists. From an open-source perspective PSpice isn't
>> feasible; developing a competent, effective community is a primary concern,
>> and the community will not be able to interact with the 'good stuff' if it
>> faces high barriers to entry. If at some later stage a hardware
>> implementation is needed, or finer tests are needed, then so be it:
>> PSpice.
>>
>> I am okay with using our Oz-based flow-based programming environment
>> called Fractal. It's probably best to read J. Morrison's "Flow-Based
>> Programming, 2nd edition". Currently FBP is being reignited by these guys:
>> noflojs.org https://www.youtube.com/watch?v=LYJxsaeHVJU . Funnily enough,
>> Morrison has also read On Intelligence, and at the back of his book he
>> recommends using FBP to implement Numenta's NuPIC. Except I'd do it in an
>> FBP implementation using Oz, which means we can _remove the need for a
>> clock_, which more closely emulates the brain. (This is important:
>> declarative concurrency removes state/time from the picture.) A cortex
>> implementation in Fractal could become the 'whiteboard', the flow diagram
>> of how it all fits together: the components neuroscientists, engineers and
>> the community string together to prove and incorporate any new brain
>> research. It exposes the right level of complexity for non-developers:
>> not source code, but only the inputs/outputs and component names.
>> Neuroscientists can easily give the specification for a component, which
>> a software developer can implement. The component implementation is clean
>> and modular, specifications are easy to communicate, and the component
>> becomes reusable.
>> This implementation then becomes the example
>> implementation that other, faster, more efficient implementations and
>> future academic papers are based on.
>>
>> Many _developers_ are still stuck on getting NuPIC compiled and running;
>> now imagine neuroscientists getting into the NuPIC codebase to make
>> changes based on new theory. It won't happen. How will they test their
>> hypotheses? Therefore the right level of abstraction is needed for these
>> diverse parties.
>>
>> I'm looking forward to the rewriting of the white paper. I think it'll
>> bring the whole project into focus and get everyone on the same page.
>>
>> Kind regards
>> Stewart
>>
>> Jeff Hawkins <[email protected]> wrote:
>> >Thank you for all these thoughts, I am still digesting them.
>> >
>> >Part of what is motivating me to tackle documentation this year is that
>> >I might be closing in on a broader understanding of what all the layers
>> >in the cortex are doing. The CLA is just part of that. I need to
>> >figure out the best way(s) to communicate these new ideas.
>> >
>> >One question is what kind of language to use. In the white paper we
>> >used prose, pseudo-code, and neuroscience. The pseudo-code was
>> >intended to be a clear and somewhat formal description, immune to the
>> >messiness of the actual code. Yes, you need to have some programming
>> >skills, but not much. Do you think that pseudo-code isn't sufficient as
>> >a formal description?
>> >
>> >The other question is where to publish. We could just write a new
>> >white paper; the current one is getting old anyway. The nice thing
>> >about a white paper is it can be as comprehensive as needed. But I am
>> >also feeling some pressure to publish in a peer-reviewed journal. Some
>> >people don't take you seriously without peer review. As you suggest, we
>> >could do a white paper and then a series of smaller papers.
>> >
>> >The Oz-based programming environment sounds cool but I have to see it
>> >to understand it better.
>> >
>> >Still thinking about this...
>> >Jeff
>>
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
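The clockless component wiring described above (a cortex "whiteboard" where components fire only when data arrives on their input ports) can be sketched in miniature. Python threads and queues stand in here purely for illustration; Stewart's environment is Oz-based, and the component functions below are arbitrary stand-ins, not real CLA stages:

```python
# Minimal flow-based-programming sketch: each component is a process that
# reads from an input channel and writes to an output channel. Nothing ticks
# a global clock -- a component runs only when data arrives, which loosely
# illustrates the dataflow-concurrency idea from the thread.

import threading
import queue

SENTINEL = object()  # end-of-stream marker passed down the pipeline

def component(fn, inport, outport):
    """Wrap a pure function as a dataflow component with one in/out port."""
    def run():
        while True:
            item = inport.get()        # blocks until upstream produces data
            if item is SENTINEL:
                outport.put(SENTINEL)  # propagate shutdown downstream
                return
            outport.put(fn(item))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

# Wire two stand-in components into a pipeline: a -> double -> increment -> c
a, b, c = queue.Queue(), queue.Queue(), queue.Queue()
component(lambda x: x * 2, a, b)   # hypothetical "encoder" stage
component(lambda x: x + 1, b, c)   # hypothetical "pooler" stage

for x in [1, 2, 3]:
    a.put(x)
a.put(SENTINEL)

results = []
while True:
    item = c.get()
    if item is SENTINEL:
        break
    results.append(item)
print(results)  # [3, 5, 7]
```

The point of the sketch is the shape, not the arithmetic: a neuroscientist sees only component names and port wiring, while the component bodies remain an implementation detail a developer can swap out, which is the separation of concerns Stewart argues for.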
