David,

        Thanks for the quick response. Now that I look at the code, I should’ve 
guessed that behavior. However, won’t that create a lot of overhead for 
projects that have many input fields but only need a single classifier?

         Loving the observer pattern; it makes output and debugging very easy. 

        Also, I’m using the Publisher/ObservableSensor feature for the 
pre-editing phase: pre-sensor processing, data munging, etc. Having all of 
this in one executable makes things much easier to maintain and execute. In the 
past I had a separate process to work the data into manageable form before 
sending it to the HTM.
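Roughly, my pre-sensor pass does something like the sketch below (plain Java, simplified; the default-fill rule and the helper name are just illustrative, not the real logic):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a pre-sensor "munging" pass that cleans raw CSV lines
// before they are pushed into the network via the Publisher/ObservableSensor.
// The hand-off to the Publisher is represented here by a plain List.
public class PreEditPass {

    // Trim whitespace, drop blank rows, and substitute a default for
    // missing values so every row reaches the sensor well-formed.
    public static List<String> clean(List<String> rawLines) {
        List<String> out = new ArrayList<>();
        for (String line : rawLines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) {
                continue; // skip blank rows entirely
            }
            // Replace empty fields (",,") with a sentinel value
            String[] fields = trimmed.split(",", -1);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                String f = fields[i].trim();
                sb.append(f.isEmpty() ? "0.0" : f);
                if (i < fields.length - 1) sb.append(",");
            }
            out.add(sb.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> raw = List.of("7/2/10 0:00, 21.2", "", "7/2/10 1:00,,");
        System.out.println(clean(raw));
    }
}
```

Having this run in-process, right before the sensor, is what replaced my old standalone cleanup step.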

        
Thanks
-Phil




> On Dec 22, 2015, at 9:36 PM, cogmission (David Ray) 
> <[email protected]> wrote:
> 
> Hi Phil,
> 
> Welcome, and thanks for using HTM.java! It's great to hear from people who 
> are using it.
> 
> When setting up the network (NAPI) using the line:
> 
> Network network = Network.create("test network", p)
>     .add(Network.createRegion("r1")
>         .add(Network.createLayer("1", p)
>  ---->     .alterParameter(KEY.AUTO_CLASSIFY, Boolean.TRUE)
>             .add(Anomaly.create())
>             .add(new TemporalMemory())
>             .add(new SpatialPooler())
>             .add(Sensor.create(FileSensor::create, SensorParams.create(
>                 Keys::path, "", ResourceLocator.path("rec-center-hourly.csv"))))));
> ==============
> 
> ... alterParameter(KEY.AUTO_CLASSIFY, Boolean.TRUE)
> 
> Classifiers are automatically set up for you. At this time, the NAPI sets up 
> a classifier for every field (each field gets its own), without the ability 
> to limit classifier creation to one field or to fewer classifiers than the 
> number of fields.
> 
> So no, you do not have to worry about which position your intended field 
> occupies in the sequence order.
> 
> Classifiers and their results may be retrieved using the following syntax:
> 
> (From your subscriber...)
> 
> public void onNext(Inference i) {
>     Map<String, NamedTuple> m = i.getClassifierInput();
>     System.out.println(m);
> }
> 
> ...will print out the Encoder inputs used to associate Classifier inputs, 
> such as:
> 
> key=encoding, value=[I@32fa1633, hash=3
> key=bucketIdx, value=20, hash=3            <-- bucket here refers to encoder bucket
> key=name, value=timestamp, hash=3
> 
> =========
> 
> public void onNext(Inference i) {
>     NamedTuple nt = i.getClassifiers();
>     System.out.println(nt);
> }
> 
> ...will print out the NamedTuple containing the field name key to classifier 
> values, such as:
> 
> keys: Bucket: 0
> Bucket: 1
>       key=consumption, value=org.numenta.nupic.algorithms.CLAClassifier@7d907bac, hash=1
> Bucket: 2
>       key=timestamp, value=org.numenta.nupic.algorithms.CLAClassifier@7791a895, hash=2
> Bucket: 3
> 
> ... The above Buckets refer to the NamedTuple hash buckets not the encoder 
> (just pointing that out).
> 
> =========
> 
> public void onNext(Inference i) {
>     System.out.println("keys c: " + i.getClassification("consumption").getMostProbableValue(1));
>     System.out.println("keys t: " + i.getClassification("timestamp").getMostProbableValue(1));
> }
> 
> ... will print out the most probable values as computed by the two 
> classifiers (the number of classifiers you have corresponds to the number of 
> fields).
> 
> example printout:
> 
> keys c: 21.2
> keys t: 2010-07-02T00:00:00.000-05:00
> keys c: 16.4
> keys t: 2010-07-02T00:00:00.000-05:00
> keys c: 16.4
> keys t: 2010-07-02T01:00:00.000-05:00
> keys c: 4.7
> keys t: 2010-07-02T02:00:00.000-05:00
> keys c: null
> keys t: null
> keys c: 4.67
> keys t: null
> keys c: 23.5
> keys t: 2010-07-02T05:00:00.000-05:00
> keys c: 23.5
> keys t: 2010-07-02T05:00:00.000-05:00
> keys c: 45.4
> keys t: 2010-07-02T07:00:00.000-05:00
> keys c: null
> keys t: null
> 
> 
> Nulls are produced in some cases because I only ran this for 9 iterations, so 
> the network didn't have time to "warm up".
> 
> ============
> 
> At this time, all fields must have corresponding field encoding maps set up 
> and inserted into your Parameters. For example:
> 
> Map<String, Map<String, Object>> fieldEncodings = 
>     NetworkTestHarness.getNetworkDemoFieldEncodingMap(); 
> 
> Parameters p = Parameters.getAllDefaultParameters();
> p.setParameterByKey(KEY.FIELD_ENCODING_MAP, fieldEncodings);
> 
> The NetworkTestHarness is meant to be used as a convenience guide for 
> parameters setup. The defaults it uses will not work for all cases, but it 
> shows the technique by which you would enter parameters for each one of your 
> columns (an unfortunate necessity at this point in time).
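If it helps, building such a per-field map by hand looks roughly like the sketch below. This is only an illustration using plain java.util: the key names used here ("fieldName", "fieldType", "encoderType", "n", "w", "minVal", "maxVal") mirror common encoder parameters, and should be checked against the NetworkTestHarness source for your htm.java version before use.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of hand-building a field-encoding map, one entry per input column,
// instead of taking the NetworkTestHarness defaults. Key names are
// assumptions -- verify them against your htm.java version.
public class FieldEncodingSketch {

    // Small helper to build one field's parameter map from key/value pairs.
    static Map<String, Object> entry(String fieldName, String fieldType,
                                     String encoderType, Object... extras) {
        Map<String, Object> m = new HashMap<>();
        m.put("fieldName", fieldName);
        m.put("fieldType", fieldType);
        m.put("encoderType", encoderType);
        for (int i = 0; i < extras.length; i += 2) {
            m.put((String) extras[i], extras[i + 1]);
        }
        return m;
    }

    public static Map<String, Map<String, Object>> buildFieldEncodings() {
        Map<String, Map<String, Object>> fieldEncodings = new HashMap<>();
        // Every field in the input must get an entry.
        fieldEncodings.put("timestamp",
            entry("timestamp", "datetime", "DateEncoder"));
        fieldEncodings.put("consumption",
            entry("consumption", "float", "ScalarEncoder",
                  "n", 50, "w", 21, "minVal", 0.0, "maxVal", 100.0));
        return fieldEncodings;
    }

    public static void main(String[] args) {
        System.out.println(buildFieldEncodings());
    }
}
```

The resulting map is what you would hand to p.setParameterByKey(KEY.FIELD_ENCODING_MAP, ...) as shown above.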
> 
> I hope this helps!
> 
> Cheers,
> David
> 
> 
> On Tue, Dec 22, 2015 at 7:11 PM, Phil iacarella <[email protected]> wrote:
> Hey Nupic,
> 
> I’m in the process of using NuPIC to predict the next fueling location for 
> drivers. I’ve set up a single 2/3 layer with an SP and Classifier, 
> implemented in HTM.java. The data being fed into the layer consists of 3 
> fields (there could be more in the future, depending on relevance): 
> datetime, driverID, Location.
> 
> My question is: how do I specify the field to be used in the Classifier? It 
> looks as if the Classifier assumes the 2nd field is the one to classify by. 
> Do I have to organize my inputs so that the classified field is in the 2nd 
> position?
> 
> Thanks
> Phil
> 
> 
> 
> -- 
> With kind regards,
>  
> David Ray
> Java Solutions Architect
>  
> Cortical.io <http://cortical.io/>
> Sponsor of:  HTM.java <https://github.com/numenta/htm.java>
>  
> [email protected]
> http://cortical.io
