Thanks again. I have a bug report for that problem now:
https://github.com/numenta/nupic/issues/1805

Does that look familiar?
---------
Matt Taylor
OS Community Flag-Bearer
Numenta


On Sun, Feb 22, 2015 at 9:22 AM, David Wood <[email protected]> wrote:
> Thanks for your quick response, Matt.
>
>> On Feb 21, 2015, at 23:46, Matthew Taylor <[email protected]> wrote:
>>
>> Hi David, thanks for the report. Unfortunately, I cannot tell what the
>> root error is, because of
>> https://github.com/numenta/nupic/issues/1815.
>
>
> Oh, yes, that looks familiar.  Sorry I didn’t see that before reporting.
>
>
>> If you don't mind doing a little legwork, you might be able to help us
>> identify the problem.
>>
>> 1. Download our source code by either:
>>  - git clone https://github.com/numenta/nupic.git
>>  - or get the tarball at https://github.com/numenta/nupic/archive/master.zip
>> 2. Run the simple hotgym example code:
>>  - cd nupic/examples/opf/clients/hotgym/simple
>>  - python hotgym.py
>>
>> Does this give you any errors?
>
>
> Yes, the error was:
> [[
> Traceback (most recent call last):
>   File "hotgym.py", line 94, in <module>
>     runHotgym()
>   File "hotgym.py", line 69, in runHotgym
>     with open (findDataset(_DATA_PATH)) as fin:
>   File "/usr/local/lib/python2.7/dist-packages/nupic/data/datasethelpers.py", 
> line 79, in findDataset
>     (datasetPath, os.environ.get('NTA_DATA_PATH', '')))
> Exception: Unable to locate: extra/hotgym/rec-center-hourly.csv using 
> NTA_DATA_PATH of
> ]]
>
> The last line was really truncated as shown. There was nothing following the 
> trailing “of” other than the shell prompt.
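That abrupt ending is consistent with NTA_DATA_PATH simply being unset: the call shown in the traceback falls back to an empty string, so nothing is printed after "of". A minimal reproduction of that fallback:

```python
import os

# Simulate an environment where NTA_DATA_PATH was never set.
os.environ.pop('NTA_DATA_PATH', None)

# Mirrors the os.environ.get call in the traceback above: with the
# variable unset, the default is the empty string.
suffix = os.environ.get('NTA_DATA_PATH', '')
print(repr(suffix))  # prints ''
```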
>
> So the problem does seem to be related to the data file not being found. I 
> changed this line in hotgym.py:
> [[
> _DATA_PATH = "extra/hotgym/rec-center-hourly.csv”
> ]]
> …to use an absolute file path, which avoids the problem. The script then runs 
> fine.
>
> I haven’t dug deeply enough yet to see where the path problem is (my Python
> skills need some dusting), but the script is clearly not resolving the
> relative path correctly.
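One way to avoid depending on the current working directory would be to resolve the file relative to the script itself. This is only a sketch of the idea (the helper name is mine, not part of NuPIC, and it assumes the CSV sits next to the script, which may not match how findDataset actually searches):

```python
import os

def resolve_data_path(script_path, relative_name):
    """Return an absolute path to a data file located next to a script,
    independent of the directory the script was launched from."""
    script_dir = os.path.dirname(os.path.abspath(script_path))
    return os.path.join(script_dir, relative_name)

# Inside hotgym.py one could then write something like:
#   _DATA_PATH = resolve_data_path(__file__, "rec-center-hourly.csv")
print(resolve_data_path("/opt/nupic/examples/hotgym.py",
                        "rec-center-hourly.csv"))
# prints /opt/nupic/examples/rec-center-hourly.csv
```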
>
> Regards,
> Dave
> --
> http://about.me/david_wood
>
>
>>
>> Thanks,
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>>
>> On Sat, Feb 21, 2015 at 5:43 PM, David Wood <[email protected]> wrote:
>>> Hi all,
>>>
>>> I have recently read Jeff’s book and the Numenta white paper, and am very
>>> interested in getting started with NuPIC. Unfortunately, I am having some
>>> trouble getting a clean installation.
>>>
>>> My environment is a Rackspace cloud VM running:
>>> - Ubuntu 14.10 (Utopic Unicorn)
>>> - Python 2.7.8
>>> - mysql  Ver 14.14 Distrib 5.6.19, for debian-linux-gnu (x86_64) using
>>> EditLine wrapper
>>>
>>> I installed NuPIC as follows:
>>> [[
>>> # apt-get install python-pip
>>> # apt-get install python-dev
>>> # pip install numpy
>>> # pip install https://s3-us-west-2.amazonaws.com/artifacts.numenta.org/numenta/nupic/releases/nupic-0.1.3-cp27-none-linux_x86_64.whl
>>> ]]
>>>
>>> …and then attempted to run the “hot gym” prediction example:
>>> [[
>>> # cd ~/nupic/examples/opf/clients/hotgym/prediction/one_gym
>>> # ./swarm.py
>>> ]]
>>>
>>> The output of the swarm run is provided below my signature.
>>>
>>> Unfortunately, the numpy and nupic installations, as well as the hotgym
>>> example, seem to produce a significant number of Python errors. The errors
>>> range from trivial issues to apparently serious ones, such as API changes.
>>> The swarm run shows, for example, a TypeError.
>>>
>>> Should I expect this example to “just work” in the 0.1.3 release, or is my
>>> environment too new? Would a downgrade of Ubuntu or Python “fix” the
>>> problem? Does anyone have other suggestions? Thanks in advance!
>>>
>>> Regards,
>>> Dave
>>> --
>>> http://about.me/david_wood
>>>
>>>
>>> The output of the swarm run was:
>>> [[
>>> This script runs a swarm on the input data (rec-center-hourly.csv) and
>>> creates a model parameters file in the `model_params` directory containing
>>> the best model found by the swarm. Dumps a bunch of crud to stdout because
>>> that is just what swarming does at this point. You really don't need to
>>> pay any attention to it.
>>>
>>> =================================================
>>> = Swarming on rec-center-hourly data...
>>> = Medium swarm. Sit back and relax, this could take awhile.
>>> =================================================
>>> Generating experiment files in directory:
>>> /root/nupic/examples/opf/clients/hotgym/prediction/one_gym/swarm...
>>> Writing  313 lines...
>>> Writing  113 lines...
>>> done.
>>> None
>>> Successfully submitted new HyperSearch job, jobID=1002
>>> Evaluated 0 models
>>> HyperSearch finished!
>>> Worker completion message: None
>>>
>>> Results from all experiments:
>>> ----------------------------------------------------------------
>>> Generating experiment files in directory: /tmp/tmpBvtweU...
>>> Writing  313 lines...
>>> Writing  113 lines...
>>> done.
>>> None
>>> json.loads(jobInfo.results) raised an exception.  Here is some info to help
>>> with debugging:
>>> jobInfo:  _jobInfoNamedTuple(jobId=1002, client=u'GRP', clientInfo=u'',
>>> clientKey=u'', cmdLine=u'$HYPERSEARCH', params=u'{"hsVersion": "v2",
>>> "maxModels": null, "persistentJobGUID":
>>> "8090e46e-ba32-11e4-ad72-bc764e202244", "useTerminators": false,
>>> "description": {"includedFields": [{"fieldName": "timestamp", "fieldType":
>>> "datetime"}, {"maxValue": 53.0, "fieldName": "kw_energy_consumption",
>>> "fieldType": "float", "minValue": 0.0}], "streamDef": {"info":
>>> "kw_energy_consumption", "version": 1, "streams": [{"info": "Rec Center",
>>> "source": "file://rec-center-hourly.csv", "columns": ["*"]}]},
>>> "inferenceType": "TemporalMultiStep", "inferenceArgs": {"predictionSteps":
>>> [1], "predictedField": "kw_energy_consumption"}, "iterationCount": -1,
>>> "swarmSize": "medium"}}', jobHash='\x80\x90\xe4o\xba2\x11\xe4\xadr\xbcvN
>>> "D', status=u'notStarted', completionReason=None, completionMsg=None,
>>> workerCompletionReason=u'success', workerCompletionMsg=None, cancel=0,
>>> startTime=None, endTime=None, results=None, engJobType=u'hypersearch',
>>> minimumWorkers=1, maximumWorkers=4, priority=0, engAllocateNewWorkers=1,
>>> engUntendedDeadWorkers=0, numFailedWorkers=0, lastFailedWorkerErrorMsg=None,
>>> engCleaningStatus=u'notdone', genBaseDescription=None, genPermutations=None,
>>> engLastUpdateTime=datetime.datetime(2015, 2, 22, 1, 31, 19),
>>> engCjmConnId=None, engWorkerState=None, engStatus=None,
>>> engModelMilestones=None)
>>> jobInfo.results:  None
>>> EXCEPTION:  expected string or buffer
>>> Traceback (most recent call last):
>>>  File "./swarm.py", line 109, in <module>
>>>    swarm(INPUT_FILE)
>>>  File "./swarm.py", line 101, in swarm
>>>    modelParams = swarmForBestModelParams(SWARM_DESCRIPTION, name)
>>>  File "./swarm.py", line 78, in swarmForBestModelParams
>>>    verbosity=0
>>>  File
>>> "/usr/local/lib/python2.7/dist-packages/nupic/swarming/permutations_runner.py",
>>> line 276, in runWithConfig
>>>    return _runAction(runOptions)
>>>  File
>>> "/usr/local/lib/python2.7/dist-packages/nupic/swarming/permutations_runner.py",
>>> line 217, in _runAction
>>>    returnValue = _runHyperSearch(runOptions)
>>>  File
>>> "/usr/local/lib/python2.7/dist-packages/nupic/swarming/permutations_runner.py",
>>> line 160, in _runHyperSearch
>>>    metricsKeys=search.getDiscoveredMetricsKeys())
>>>  File
>>> "/usr/local/lib/python2.7/dist-packages/nupic/swarming/permutations_runner.py",
>>> line 825, in generateReport
>>>    results = json.loads(jobInfo.results)
>>>  File
>>> "/usr/local/lib/python2.7/dist-packages/nupic/support/object_json.py", line
>>> 163, in loads
>>>    json.loads(s, object_hook=objectDecoderHook, **kwargs))
>>>  File "/usr/lib/python2.7/json/__init__.py", line 351, in loads
>>>    return cls(encoding=encoding, **kw).decode(s)
>>>  File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
>>>    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
>>> TypeError: expected string or buffer
>>> ]]
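The final TypeError in that output appears to be what json.loads raises on Python 2 when it is handed None instead of a string, which fits jobInfo.results being None after zero models were evaluated. A quick way to confirm that failure mode in isolation:

```python
import json

# jobInfo.results was None because the swarm evaluated zero models;
# json.loads(None) then fails before any real parsing happens.
try:
    json.loads(None)
except TypeError as exc:
    print(exc)  # on Python 2: "expected string or buffer"
```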
>>>
>>
>
>
