So just to give an update... I managed to get the Grok API up and running (at least some form of the API; it seems to be missing some of the node_modules). The issue was in supervisor.conf when issuing the uwsgi command: it needed to point to /usr/bin/uwsgi, and I had to change the paths around a little, with GROK_API_HOME_CONF pointing to the webservices path. I tailored the grok-api portion of the conf so that it starts uwsgi in a way that looks similar to the start-uwsgi.sh script.
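In case it helps anyone hitting the same thing, here is a rough sketch of what the grok-api section of a supervisor.conf can look like after those changes. Only the absolute /usr/bin/uwsgi path and the 19002 port (from the nginx logs below) come from this thread; the program name, module, directory, and flags are placeholders for illustration, not the shipped Grok conf:

```ini
; Sketch only -- not the actual numenta-apps supervisor.conf.
; The two changes discussed above: an absolute path to the uwsgi binary,
; and running from the webservices directory (what GROK_API_HOME_CONF
; should resolve to). Module name, directory, and flags are placeholders.
[program:grok-api]
command=/usr/bin/uwsgi --socket 0.0.0.0:19002 --module webapp --master
directory=/path/to/numenta-apps/grok/grok/app/webservices
autorestart=true
redirect_stderr=true
stdout_logfile=logs/uwsgi.log
```

The useful check is that `command=` resolves to the same binary and working directory that start-uwsgi.sh uses, so supervisord and the script launch uwsgi identically.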
On to debugging node-modules and front-end apps.

On Fri, Jun 12, 2015 at 10:02 AM, Michael Parco <[email protected]> wrote:

> I can certainly make an issue, I am going to try to debug for a little bit
> more time, I am not a nginx expert by any means.
>
> Here is the tail of the logs though:
>
> uwsgi.log
>
> *** Starting uWSGI 2.0.4 (64bit) on [Fri Jun 12 09:55:39 2015] ***
> compiled with version: 4.8.3 20140911 (Red Hat 4.8.3-9) on 08 June 2015 15:00:28
> os: Linux-3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015
> nodename: sv11b.[ ]
> machine: x86_64
> clock source: unix
> pcre jit disabled
> detected number of CPU cores: 4
> current working directory: /[ ]/numenta-apps/grok/grok/app/webservices
> detected binary path: /usr/bin/uwsgi
> uWSGI running as root, you can use --uid/--gid/--chroot options
> *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
> your processes number limit is 38338
> your memory page size is 4096 bytes
> detected max file descriptor number: 1024
> lock engine: pthread robust mutexes
> thunder lock: disabled (you can enable it with --thunder-lock)
> Listen queue size is greater than the system max net.core.somaxconn (128).
>
> grok-supervisord.log
>
> 2015-06-12 09:55:34,599 INFO success: metric_listener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
> 2015-06-12 09:55:34,599 INFO success: anomaly_service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
> 2015-06-12 09:55:34,599 INFO success: metric_storer entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
> 2015-06-12 09:55:34,599 INFO success: model_scheduler entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
> 2015-06-12 09:55:34,632 INFO exited: grok-api_00 (exit status 1; not expected)
> 2015-06-12 09:55:36,769 INFO spawned: 'grok-api_00' with pid 24427
> 2015-06-12 09:55:36,781 INFO exited: grok-api_00 (exit status 1; not expected)
> 2015-06-12 09:55:39,784 INFO spawned: 'grok-api_00' with pid 24432
> 2015-06-12 09:55:39,795 INFO exited: grok-api_00 (exit status 1; not expected)
> 2015-06-12 09:55:40,797 INFO gave up: grok-api_00 entered FATAL state, too many start retries too quickly
>
> I can see this in the nginx logs, which seems straightforward:
>
> 2015/06/12 09:55:38 [error] 24292#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.10.22, server: 10.1.10.126, request: "GET /grok/welcome HTTP/1.1", upstream: "uwsgi://0.0.0.0:19002", host: "10.1.10.126"
>
> On Thu, Jun 11, 2015 at 11:14 PM, Austin Marshall <[email protected]> wrote:
>
>> The nginx 502 error is an inability of nginx to talk to the uwsgi
>> back-end, which would happen if uwsgi isn't running.
>>
>> Would you mind creating an issue at
>> https://github.com/numenta/numenta-apps/issues/new and supplying any
>> additional details about your experience?
>>
>> Specifically, I'd be interested in the contents of logs/uwsgi.log
>> and logs/grok-supervisord.log
>>
>> On Jun 11, 2015 11:45 AM, "Michael Parco" <[email protected]> wrote:
>>
>>> Has anyone had any luck setting up the grok-api outside of aws yet? I
>>> seem to be getting nginx 502 errors. I have aggregator_service,
>>> metric_collector, notification_service, anomaly_service, metric_listener,
>>> metric_storer, and metric_scheduler all running and showing up via my
>>> supervisor status. The only one that I can't get up is the grok-api.
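For context on the 502s in the quoted thread: nginx proxies the /grok path to a uwsgi upstream, so whenever the grok-api process is down (the FATAL state in grok-supervisord.log), connect() to that socket is refused and nginx answers 502. A minimal sketch of that wiring, where only the 10.1.10.126 server address and the 19002 port come from the quoted logs and everything else is illustrative, not the shipped Grok nginx conf:

```nginx
# Illustrative nginx -> uwsgi wiring (not the actual numenta-apps conf).
server {
    listen 80;
    server_name 10.1.10.126;

    location /grok {
        include uwsgi_params;
        # If nothing is listening here (grok-api crashed / FATAL),
        # connect() fails with "111: Connection refused" and nginx
        # returns the 502 Bad Gateway seen in the browser.
        uwsgi_pass 0.0.0.0:19002;
    }
}
```

So a 502 from nginx plus "Connection refused" in its error log points at the backend, which is why checking supervisor status and the uwsgi logs was the right first step.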
