Update: here's my public repo for the Vagrant box:
https://github.com/donders/nupicbox.

PM me if you have any comments regarding this repository.

With regards,

Casper Rooker
[email protected]

On Tue, Dec 8, 2015 at 4:21 PM, Cas <[email protected]> wrote:

> I've decided to make a new vagrant box, since the current one is causing
> problems.
>
> I initially based the provisioning scripts on the Dockerfile in the
> https://github.com/numenta/nupic/ root, since a friend of mine was able
> to do swarming from within the resulting Docker container.
>
> I noticed that the Dockerfile defines a bunch of environment variables.
> Furthermore, the packages that are installed deviate somewhat from the
> requirements specified in the NuPIC README
> <https://github.com/numenta/nupic/#installing-nupic-035> and on the wiki
> page
> <https://github.com/numenta/nupic/wiki/Installing-and-Building-NuPIC#compile-dependencies>.
>
> I have made a Vagrant box based on a Trusty Ubuntu box, with dependencies
> based on the Dockerfile (which also uses an Ubuntu 14.04 image). I have
> installed NuPIC on this box using the commands from the README:
>
> pip install https://s3-us-west-2.amazonaws.com/artifacts.numenta.org/numenta/nupic.core/releases/nupic.bindings/nupic.bindings-0.2.2-cp27-none-linux_x86_64.whl
> pip install nupic
>
> After the vagrant up command finishes, I customize the NuPIC configuration
> with my MySQL credentials. I can now properly swarm using the above example
> and I can execute the sine wave example.
>
> I suspect that a large part of the dependencies in the provisioning
> scripts are unnecessary, since I am just using the nupic Python package, but
> because the two sets of installation instructions differ so much,
> I'm not sure which dependencies are essential and which are not.
>
> I'll submit a public repo with my Vagrant box ASAP so you can check out
> the setup yourself.
>
> For now, my question is: which set of installation instructions should
> I follow if I want to consume NuPIC functionality from an Ubuntu Vagrant
> box?
>
> Kind regards,
>
> Casper Rooker
> [email protected]
>
> On Tue, Dec 8, 2015 at 2:27 PM, Cas <[email protected]> wrote:
>
>> This isn't directly related to this issue, but when I edited
>> nupic-default.xml, I had missed this comment in the file:
>>
>> <!-- Do not modify this file directly.  Instead, copy entries that you -->
>> <!-- wish to modify from this file into nupic-site.xml and change them -->
>> <!-- there.  If nupic-site.xml does not already exist, create it.      -->
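Going by that comment, my understanding is that a minimal nupic-site.xml overriding just the MySQL credentials would look something like this (the property names are my assumption, copied from what I see in nupic-default.xml; verify against your own copy):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Overrides for nupic-default.xml; only the entries listed here change. -->
  <property>
    <name>nupic.cluster.database.user</name>
    <value>root</value>
  </property>
  <property>
    <name>nupic.cluster.database.passwd</name>
    <value>mypassword</value>
  </property>
</configuration>
```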
>>
>> This is not mentioned on the wiki page that I found while looking for the
>> place to change the MySQL credentials:
>>
>> https://github.com/numenta/nupic/wiki/MySQL-Settings
>>
>> So I have two minor points:
>>
>> Can the wiki be updated to point to the right location for the
>> nupic-default.xml file?
>>
>> Also, I suggest that the configuration file be given its own wiki page
>> with instructions on how to use it, since I'm not the only one who finds
>> its usage non-obvious (https://github.com/numenta/nupic/issues/2676).
>>
>> Kind regards,
>>
>> Casper Rooker
>> [email protected]
>>
>> On Tue, Dec 8, 2015 at 1:34 PM, Cas <[email protected]> wrote:
>>
>>> Hi NuPIC,
>>>
>>> I'm trying to run a swarm on my Vagrant box with a pip-installed nupic.
>>> I'm using a password for my MySQL server, so I changed the password that
>>> NuPIC uses in the config file at this location:
>>> /home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/support/nupic-default.xml.
>>> The connection to the MySQL server is OK.
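For reference, the config file stores settings as name/value property entries; here is a quick stdlib sketch of reading one back (the property name is my assumption, taken from what I see in nupic-default.xml):

```python
import xml.etree.ElementTree as ET

# Fragment mirroring the <property> layout in nupic-default.xml
# (the property name here is an assumption; check your own file):
fragment = """<configuration>
  <property>
    <name>nupic.cluster.database.passwd</name>
    <value>mypassword</value>
  </property>
</configuration>"""

root = ET.fromstring(fragment)
for prop in root.findall("property"):
    print(prop.findtext("name"), "=", prop.findtext("value"))
```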
>>>
>>> Right now I am getting the following error:
>>>
>>> vagrant@vagrant-ubuntu-trusty-64:~/resources/nupic/scripts$ python
>>> run_swarm.py ../examples/swarm/simple/search_def.json --maxWorkers=8
>>> --overwrite
>>> Generating experiment files in directory:
>>> /home/vagrant/resources/nupic/examples/swarm/simple...
>>> Writing  313 lines...
>>> Writing  114 lines...
>>> done.
>>> None
>>> Successfully submitted new HyperSearch job, jobID=1016
>>> Evaluated 0 models
>>> HyperSearch finished!
>>> Worker completion message: None
>>>
>>> Results from all experiments:
>>> ----------------------------------------------------------------
>>> Generating experiment files in directory: /tmp/tmpYqUxss...
>>> Writing  313 lines...
>>> Writing  114 lines...
>>> done.
>>> None
>>> json.loads(jobInfo.results) raised an exception.  Here is some info to
>>> help with debugging:
>>> jobInfo:  _jobInfoNamedTuple(jobId=1016, client=u'GRP', clientInfo=u'',
>>> clientKey=u'', cmdLine=u'$HYPERSEARCH', params=u'{"hsVersion": "v2",
>>> "maxModels": null, "persistentJobGUID":
>>> "206bb66a-9da6-11e5-8fb8-080027480f3d", "useTerminators": false,
>>> "description": {"includedFields": [{"fieldName": "timestamp", "fieldType":
>>> "datetime"}, {"fieldName": "consumption", "fieldType": "float"}],
>>> "streamDef": {"info": "test", "version": 1, "streams": [{"info":
>>> "hotGym.csv", "source": "file://extra/hotgym/hotgym.csv", "columns": ["*"],
>>> "last_record": 100}], "aggregation": {"seconds": 0, "fields":
>>> [["consumption", "sum"], ["gym", "first"], ["timestamp", "first"]],
>>> "months": 0, "days": 0, "years": 0, "hours": 1, "microseconds": 0, "weeks":
>>> 0, "minutes": 0, "milliseconds": 0}}, "inferenceType": "MultiStep",
>>> "inferenceArgs": {"predictionSteps": [1], "predictedField": "consumption"},
>>> "iterationCount": -1, "swarmSize": "medium"}}', jobHash="
>>> lY0\x9d\xa6\x11\xe5\x8f\xb8\x08\x00'H\x0f=", status=u'notStarted',
>>> completionReason=None, completionMsg=None,
>>> workerCompletionReason=u'success', workerCompletionMsg=None, cancel=0,
>>> startTime=None, endTime=None, results=None, engJobType=u'hypersearch',
>>> minimumWorkers=1, maximumWorkers=8, priority=0, engAllocateNewWorkers=1,
>>> engUntendedDeadWorkers=0, numFailedWorkers=0,
>>> lastFailedWorkerErrorMsg=None, engCleaningStatus=u'notdone',
>>> genBaseDescription=None, genPermutations=None,
>>> engLastUpdateTime=datetime.datetime(2015, 12, 8, 12, 20, 53),
>>> engCjmConnId=None, engWorkerState=None, engStatus=None,
>>> engModelMilestones=None)
>>> jobInfo.results:  None
>>> EXCEPTION:  expected string or buffer
>>> Traceback (most recent call last):
>>>   File "run_swarm.py", line 187, in <module>
>>>     runPermutations(sys.argv[1:])
>>>   File "run_swarm.py", line 178, in runPermutations
>>>     fileArgPath, optionsDict, outputLabel, permWorkDir)
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/permutations_runner.py",
>>> line 310, in runWithJsonFile
>>>     verbosity=verbosity)
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/permutations_runner.py",
>>> line 277, in runWithConfig
>>>     return _runAction(runOptions)
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/permutations_runner.py",
>>> line 218, in _runAction
>>>     returnValue = _runHyperSearch(runOptions)
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/permutations_runner.py",
>>> line 161, in _runHyperSearch
>>>     metricsKeys=search.getDiscoveredMetricsKeys())
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/permutations_runner.py",
>>> line 826, in generateReport
>>>     results = json.loads(jobInfo.results)
>>>   File
>>> "/home/vagrant/.local/lib/python2.7/site-packages/nupic-0.3.4-py2.7.egg/nupic/swarming/object_json.py",
>>> line 163, in loads
>>>     json.loads(s, object_hook=objectDecoderHook, **kwargs))
>>>   File "/usr/lib/python2.7/json/__init__.py", line 351, in loads
>>>     return cls(encoding=encoding, **kw).decode(s)
>>>   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
>>>     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>>> TypeError: expected string or buffer
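The last frame points at the immediate cause: jobInfo.results is None, and json.loads chokes on it. This can be reproduced minimally, outside NuPIC entirely:

```python
import json

# json.loads expects a string; passing None (like jobInfo.results above)
# raises the same TypeError reported in the traceback.
try:
    json.loads(None)
except TypeError as exc:
    print("TypeError:", exc)
```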
>>>
>>> I have looked at the issues listed on GitHub and from what I can tell
>>> this is supposed to have been fixed when the dependency on the NUPIC
>>> environment variable was removed.
>>>
>>> I suspect the output contains some crucial information about the error, but
>>> I do not know how to interpret it. A number of fields are set to
>>> None, but I don't know how they relate to each other. Any help?
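My reading is that since zero models were evaluated, the job never wrote a results row, so results stays None and json.loads crashes on it. A defensive sketch of the guard I would expect (a hypothetical helper, not NuPIC's actual code):

```python
import json

def load_job_results(results):
    # Hypothetical guard: return None instead of crashing when the
    # swarm produced no results (e.g. zero models evaluated).
    if results is None:
        return None
    return json.loads(results)

print(load_job_results(None))        # job produced no results
print(load_job_results('{"n": 1}'))  # normal case: parsed JSON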
>>>
>>> With regards,
>>>
>>> Casper Rooker
>>> [email protected]
>>>
>>
>>
>
