Re: [openstack-dev] [tempest]Tempest test concurrency

2016-09-21 Thread Bob Hansen
Matthew, this helps tremendously. As you can tell, the conclusion I was
heading towards was not accurate.

Now to look a bit deeper.

Thanks,

Bob Hansen
z/VM OpenStack Enablement

Matthew Treinish <mtrein...@kortar.org> wrote on 09/21/2016 11:07:04 AM:

> From: Matthew Treinish <mtrein...@kortar.org>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: 09/21/2016 11:09 AM
> Subject: Re: [openstack-dev] [tempest]Tempest test concurrency
>
> On Wed, Sep 21, 2016 at 10:44:51AM -0400, Bob Hansen wrote:
> >
> >
> > I have been looking at some of the stackviz output as I'm trying to
> > improve the run time of my third-party CI. As an example:
> >
> > http://logs.openstack.org/36/371836/1/check/gate-tempest-dsvm-full-ubuntu-xenial/087db0f/logs/stackviz/#/stdin/timeline
> >
> > What jumps out is the amount of time that each worker is not running
> > any tests. I would have expected quite a bit more concurrency between
> > the two workers in the chart, e.g. more overlap. I've noticed a similar
> > thing with my test runs using 4 workers.
>
> So the gaps between tests aren't actually wait time; the workers are
> saturated doing stuff during a run. Those gaps are missing data in the
> subunit streams that are used as the source of the data for rendering
> those timelines. The gaps are where things like setUp, setUpClass,
> tearDown, tearDownClass, and addCleanups run, which are not added to the
> subunit stream. It's just an artifact of the incomplete data, not bad
> scheduling. This also means that testr does not take into account any of
> the missing timing when it makes decisions based on previous runs.
>
> >
> > Can anyone explain why this is, and where can I find out more
> > information about the scheduler and what information it is using to
> > decide when to dispatch tests? I'm already feeding my system a prior
> > subunit stream to help influence the scheduler, as my test run times
> > are different due to the way our OpenStack implementation is
> > architected. A simple round-robin approach is not the most efficient in
> > my case.
>
> If you're curious about how testr does scheduling, most of that happens
> here:
>
> https://github.com/testing-cabal/testrepository/blob/master/testrepository/testcommand.py
>
> One thing to remember is that testr isn't actually a test runner, it's a
> test runner runner. It partitions the tests based on time information and
> passes those to (multiple) test runner workers. The actual order of
> execution inside those partitions is handled by the test runner itself
> (in our case subunit.run).
>
> -Matt Treinish
> [attachment "signature.asc" deleted by Bob Hansen/Endicott/IBM]
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
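
For illustration of the time-based partitioning described in the reply
above, here is a minimal, self-contained sketch; it is not testr's actual
code, and the test names and prior run times are made up:

# Hypothetical sketch of time-based partitioning, not testrepository's
# actual implementation; test names and prior run times (seconds) are
# invented for the example.
import heapq

prior_times = {
    "test_boot_server": 42.0,
    "test_attach_volume": 30.0,
    "test_list_flavors": 0.5,
    "test_create_network": 12.0,
    "test_resize_server": 55.0,
    "test_show_image": 0.8,
}

def partition(tests, workers):
    """Greedily hand the longest remaining test to the least-loaded worker."""
    # Min-heap of (accumulated_time, worker_index, assigned_tests).
    heap = [(0.0, i, []) for i in range(workers)]
    heapq.heapify(heap)
    for name, cost in sorted(tests.items(), key=lambda kv: kv[1], reverse=True):
        load, idx, assigned = heapq.heappop(heap)
        assigned.append(name)
        heapq.heappush(heap, (load + cost, idx, assigned))
    return sorted(heap, key=lambda entry: entry[1])

for load, idx, assigned in partition(prior_times, workers=2):
    print("worker %d (%.1fs): %s" % (idx, load, ", ".join(assigned)))

Because fixture time never makes it into the prior timings (see the reply
above), partitions that look balanced on paper can still finish at quite
different times.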


[openstack-dev] [tempest]Tempest test concurrency

2016-09-21 Thread Bob Hansen


I have been looking at some of the stackviz output as I'm trying to improve
the run time of my third-party CI. As an example:

http://logs.openstack.org/36/371836/1/check/gate-tempest-dsvm-full-ubuntu-xenial/087db0f/logs/stackviz/#/stdin/timeline

What jumps out is the amount of time that each worker is not running any
tests. I would have expected quite a bit more concurrency between the two
workers in the chart, e.g. more overlap. I've noticed a similar thing with
my test runs using 4 workers.

Can anyone explain why this is and where can I find out more information
about the scheduler and what information it is using to decide when to
dispatch tests? I'm already feeding my system a prior subunit stream to
help influence the scheduler as my test run times are different due to the
way our OpenStack implementation is architected. A simple round-robin
approach is not the most efficient in my case.

(maybe openstack-infra is a better place to ask?)

Thanks!

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
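
Tying this back to the reply above: the apparently idle gaps largely come
from fixture time that the subunit stream does not record. A minimal
unittest sketch (a hypothetical test, not a tempest one) of the effect:

# Hypothetical example: the class-level fixture takes far longer than the
# test method, and only the test method's duration ends up attributed to
# the test, which is what produces gaps in a per-test timeline.
import time
import unittest


class SlowFixtureFastTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Stands in for expensive shared setup, e.g. booting a server.
        time.sleep(2)

    def setUp(self):
        # Per-test setup also runs outside the recorded test body.
        time.sleep(0.5)

    def test_quick_assertion(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    unittest.main()

In tempest, class-level resource setup is typically where servers, networks,
and volumes get created, which is why those gaps can be sizable.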


[openstack-dev] [openstack-dev][infra] High si/sys values via top in instances

2016-03-25 Thread Bob Hansen


Looking for some help to figure out what's going on here. I'm in the
process of creating a third-party CI system for our project. I'm initially
trying to set up 6 manually created Jenkins slaves using diskimage-builder
and puppet to run gate jobs, and will scale from there and eventually move
to nodepool.

I don't think this is specific to devstack-gate. I suspect it'll do this
with system activity that stresses the instance.

My setup is as follows:

Physical servers (2): Intel, 1 socket, 12 cores, 128 GB RAM.
OpenStack Liberty installed as 3 nodes: 1 controller, 1 compute/network
(96 GB RAM), and a 2nd compute (96 GB RAM), as per the Liberty installation
guide.
The OpenStack controller and compute node guests were created by hand using
libvirt on the respective physical server.
Using a provider network with linuxbridge.
The backing store for the Jenkins slaves/OpenStack Liberty is the local
file system.
Jenkins slaves are built using puppet; images are built using
diskimage-builder. The standard third-party setup described in the CI
documentation.
Jenkins slaves are 4 vCPUs and 8 GB of RAM.
I have verified KVM acceleration is being used. All VM definitions use
virtio for network and disk, and virtio-pci is installed. All VMs use
passthrough mode for the CPU model in the libvirt XML describing them.

Trying to keep it simple as I learn the ropes...

All systems are using Kernel 3.19.0-56-generic #62~14.04.1-Ubuntu SMP on
Ubuntu 14.04.4 LTS (I've seen the same thing on earlier kernels and earlier
14.04 versions).

My issue is as follows:

If I create a single Jenkins slave on a single compute node, the basic
setup time (we'll ignore tempest, but a similar thing happens) to run
devstack-gate is roughly 20 minutes, sometimes less. As I scale the
number of Jenkins slaves on the compute node up to 3, the setup time
increases dramatically; the last run I did took nearly an hour.
Clearly something is wrong, as I have not over-committed memory nor RAM on
either of the compute nodes.

What I'm finding is that the CPUs are getting overwhelmed as I scale up the
Jenkins slaves. Top will show sys/si percentages eating up the majority of
the CPU; sometimes collectively they take up 70-80% of the CPU time. This
will drop to what's shown below when the system becomes idle.

When the systems are idle (after one run) this is a typical view of top:
mongodb is using 9.3% of the CPU, sys is at 9.8%, and si is at 5.2% of the
available CPU (Irix mode off). The compute node and the physical server do
not show this sort of load; they are typically at 1-2% for sys and 0
for si when the slaves are idle, but will grow a bit when the slaves are
running the devstack-gate script.

top - 19:39:43 up 1 day, 39 min,  1 user,  load average: 0.65, 1.03, 1.59
Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  9.8 sy,  0.0 ni, 77.9 id,  0.3 wa,  0.0 hi,  5.2 si,  6.5 st
KiB Mem:   8175872 total,  2620708 used,  164 free,   211212 buffers
KiB Swap:0 total,0 used,0 free.  1665764 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
 1402 mongodb   20   0  382064  48232  10912 S  9.3  0.6 162:25.72 mongod
18436 rabbitmq  20   0 2172776  54528   4072 S  4.2  0.7  20:41.26 beam.smp
20059 root  10 -10   20944420 48 S  2.9  0.0  26:54.20 monitor
20069 root  10 -10   21452432 48 S  2.6  0.0  25:45.48 monitor
28786 mysql 20   0 2375444 110308  11216 S  2.0  1.3  15:43.30 mysqld
 3731 jenkins   20   0 4113288 114320  21160 S  1.9  1.4  31:01.35 java
    3 root      20   0       0      0      0 S  1.3  0.0  10:29.24 ksoftirqd/0


When the devstack-gate script is running, this is typical. Again, the
compute node was at 0.6 for sy and 0.0 for si when I copied this, and
similarly for the physical server.

top - 19:45:02 up 1 day, 44 min,  1 user,  load average: 14.67, 12.20, 11.20
Tasks: 217 total,   5 running, 212 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18.9 us, 43.5 sy,  0.0 ni,  5.2 id,  0.0 wa,  0.0 hi, 32.0 si,  0.4 st
KiB Mem:   8175872 total,  4970836 used,  3205036 free,   217968 buffers
KiB Swap:0 total,0 used,0 free.  1604240 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  687 jenkins   20   0   78420  21544   3296 R  45.7  0.3   4:17.87 ansible
  676 jenkins   20   0   78556  25556   7116 S  40.1  0.3   4:19.29 ansible
 1368 mongodb   20   0  382064  48508  10896 S  32.2  0.6 207:31.76 mongod
 5060 root  10 -10   20944420 48 S  14.1  0.0  12:04.99 monitor

Digging deeper with the various perf-related tools, the best clue I can
find (I used vmstat and looked at /proc/interrupts and mpstat; nothing in
the logs) is that when idle, mongo is doing this, which is driving up the
sy number. I have yet to figure out what may be driving the si number.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
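
For anyone who wants to watch this without staring at top, a rough sketch
that samples /proc/stat twice and reports the sy/si/st share over the
interval; it assumes the usual Linux field order (user, nice, system, idle,
iowait, irq, softirq, steal):

# Rough sketch: sample the aggregate "cpu" line of /proc/stat twice and
# report how much of the interval went to system, softirq and steal time.
import time

FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal")

def read_cpu():
    with open("/proc/stat") as f:
        parts = f.readline().split()  # first line is the aggregate "cpu" row
    return dict(zip(FIELDS, (int(v) for v in parts[1:1 + len(FIELDS)])))

def sample(interval=5.0):
    before = read_cpu()
    time.sleep(interval)
    after = read_cpu()
    delta = {k: after[k] - before[k] for k in FIELDS}
    total = sum(delta.values()) or 1
    print("sy: %.1f%%  si: %.1f%%  st: %.1f%%" % (
        100.0 * delta["system"] / total,
        100.0 * delta["softirq"] / total,
        100.0 * delta["steal"] / total,
    ))

if __name__ == "__main__":
    sample()

Running it in a slave, on the compute node, and on the physical server at
the same time during a devstack-gate run would at least show where the
softirq time is actually being spent.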

Re: [openstack-dev] [devstack][tempest] RuntimeError: no suitable implementation for this system thrown by monotonic.py

2016-01-27 Thread Bob Hansen

This appears to have gone away this morning. I ran clean.sh and removed
monotonic.

The next run of stack.sh installed version 0.6 of monotonic and I no longer
see this exception.
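
For anyone hitting the same thing: monotonic picks its clock implementation
at import time (that is where the RuntimeError in the traceback comes from),
so a quick sanity check of whatever version pip installed is simply:

# Importing monotonic raises RuntimeError if it cannot find a usable clock,
# so a clean import plus a couple of calls confirms the installed version
# works on this system.
import monotonic

t0 = monotonic.monotonic()
t1 = monotonic.monotonic()
assert t1 >= t0, "monotonic clock went backwards"
print("monotonic ok, delta=%.9f" % (t1 - t0))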

Bob Hansen
z/VM OpenStack Enablement



From:   Bob Hansen/Endicott/IBM@IBMUS
To: "openstack-dev" <openstack-dev@lists.openstack.org>
Date:   01/26/2016 05:21 PM
Subject:[openstack-dev] [devstack][tempest] RuntimeError: no suitable
implementation for this system thrown by monotonic.py



I get this when running tempest now on a devstack I installed today. I did
not have this issue on a devstack installation I did a few days ago. This
is on Ubuntu 14.04 LTS.

Everything else on this system seems to be working just fine. Only
run_tempest throws this exception.

./run_tempest.sh -s; RuntimeError: no suitable implementation for this
system.

A deeper look finds this in key.log

2016-01-26 20:07:53.991616 10461 INFO keystone.common.wsgi
[req-a184a559-91d4-4f87-b36c-5b5c1c088a4c - - - - -] POST
http://127.0.0.1:5000/v2.0/tokens
2016-01-26 20:07:58.000373 mod_wsgi (pid=10460): Target WSGI script
'/usr/local/bin/keystone-wsgi-public' cannot be loaded as Python module.
2016-01-26 20:07:58.008776 mod_wsgi (pid=10460): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-public'.
2016-01-26 20:07:58.009147 Traceback (most recent call last):
2016-01-26 20:07:58.009308 File "/usr/local/bin/keystone-wsgi-public", line 6, in <module>
2016-01-26 20:07:58.019625 from keystone.server.wsgi import initialize_public_application
2016-01-26 20:07:58.019725 File "/opt/stack/keystone/keystone/server/wsgi.py", line 28, in <module>
2016-01-26 20:07:58.029498 from keystone.common import config
2016-01-26 20:07:58.029561 File "/opt/stack/keystone/keystone/common/config.py", line 18, in <module>
2016-01-26 20:07:58.090786 from oslo_cache import core as cache
2016-01-26 20:07:58.090953 File "/usr/local/lib/python2.7/dist-packages/oslo_cache/__init__.py", line 14, in <module>
2016-01-26 20:07:58.095327 from oslo_cache.core import * # noqa
2016-01-26 20:07:58.095423 File "/usr/local/lib/python2.7/dist-packages/oslo_cache/core.py", line 42, in <module>
2016-01-26 20:07:58.097721 from oslo_log import log
2016-01-26 20:07:58.097776 File "/usr/local/lib/python2.7/dist-packages/oslo_log/log.py", line 50, in <module>
2016-01-26 20:07:58.100863 from oslo_log import formatters
2016-01-26 20:07:58.100931 File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 27, in <module>
2016-01-26 20:07:58.102743 from oslo_serialization import jsonutils
2016-01-26 20:07:58.102807 File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 60, in <module>
2016-01-26 20:07:58.124702 from oslo_utils import timeutils
2016-01-26 20:07:58.124862 File "/usr/local/lib/python2.7/dist-packages/oslo_utils/timeutils.py", line 27, in <module>
2016-01-26 20:07:58.128004 from monotonic import monotonic as now # noqa
2016-01-26 20:07:58.128047 File "/usr/local/lib/python2.7/dist-packages/monotonic.py", line 131, in <module>
2016-01-26 20:07:58.152613 raise RuntimeError('no suitable implementation for this system')
2016-01-26 20:07:58.154446 RuntimeError: no suitable implementation for this system
2016-01-26 20:09:49.247986 10464 INFO keystone.common.wsgi
[req-7c00064c-318d-419a-8aaa-0536e74db473 - - - - -] GET
http://127.0.0.1:35357/
2016-01-26 20:09:49.339477 10468 DEBUG keystone.middleware.auth
[req-4851e134-1f0c-45b0-959c-881a2b1f5fd8 - - - - -] There is either no
auth token in the request or the certificate issuer is not trusted. No auth
context will be set.
process_request /opt/stack/keystone/keystone/middleware/auth.py:171

A peek in monotonic.py finds where the exception is.

The version of monotonic is:

---
Metadata-Version: 2.0
Name: monotonic
Version: 0.5
Summary: An implementation of time.monotonic() for Python 2 & < 3.3
Home-page: https://github.com/atdt/monotonic
Author: Ori Livneh
Author-email: o...@wikimedia.org
License: Apache
Location: /usr/local/lib/python2.7/dist-packages

Any suggestions on how to get around this one?
Bug?

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Rabbit credentials revert to defaults after reboot?

2016-01-26 Thread Bob Hansen


I have seen this twice now: once on a devstack I downloaded a few days ago,
and again today. What I discovered was that if I reboot the machine, the
credentials for rabbit go back to the defaults. The symptom is that
anything that requires any sort of authentication results in a 500 (nova
list, neutron subnet-list, ...). glance image-list will work.

The clue is this is spread all through the log files.

2016-01-26 21:58:59.981 ERROR oslo.messaging._drivers.impl_rabbit
[req-7559dc92-d7c9-47bc-8e68-7062acf74b8b None None] AMQP server
127.0.0.1:5672 closed the connection. Check login credentials: Socket
closed

When I run stack.sh I set the credentials, and stack.sh creates the
credentials for rabbit just as I had specified in my local.conf file.

I'm not sure exactly how rabbit is configured, or how to set the
credentials back; I thought I could do that by modifying
/etc/rabbitmq/rabbitmq.conf, but that file doesn't exist.

Any help? If I reboot for any reason, I have a choice: fix the conf files
or reinstall devstack.

Thanks!

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][tempest] RuntimeError: no suitable implementation for this system thrown by monotonic.py

2016-01-26 Thread Bob Hansen


I get this when running tempest now on a devstack I installed today. I did
not have this issue on a devstack installation I did a few days ago. This
is on Ubuntu 14.04 LTS.

Everything else on this system seems to be working just fine. Only
run_tempest throws this exception.

./run_tempest.sh -s; RuntimeError: no suitable implementation for this
system.

A deeper look finds this in key.log

2016-01-26 20:07:53.991616 10461 INFO keystone.common.wsgi
[req-a184a559-91d4-4f87-b36c-5b5c1c088a4c - - - - -] POST
http://127.0.0.1:5000/v2.0/tokens
2016-01-26 20:07:58.000373 mod_wsgi (pid=10460): Target WSGI script
'/usr/local/bin/keystone-wsgi-public' cannot be loaded as Python module.
2016-01-26 20:07:58.008776 mod_wsgi (pid=10460): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-public'.
2016-01-26 20:07:58.009147 Traceback (most recent call last):
2016-01-26 20:07:58.009308   File "/usr/local/bin/keystone-wsgi-public", line 6, in <module>
2016-01-26 20:07:58.019625     from keystone.server.wsgi import initialize_public_application
2016-01-26 20:07:58.019725   File "/opt/stack/keystone/keystone/server/wsgi.py", line 28, in <module>
2016-01-26 20:07:58.029498     from keystone.common import config
2016-01-26 20:07:58.029561   File "/opt/stack/keystone/keystone/common/config.py", line 18, in <module>
2016-01-26 20:07:58.090786     from oslo_cache import core as cache
2016-01-26 20:07:58.090953   File "/usr/local/lib/python2.7/dist-packages/oslo_cache/__init__.py", line 14, in <module>
2016-01-26 20:07:58.095327     from oslo_cache.core import *  # noqa
2016-01-26 20:07:58.095423   File "/usr/local/lib/python2.7/dist-packages/oslo_cache/core.py", line 42, in <module>
2016-01-26 20:07:58.097721     from oslo_log import log
2016-01-26 20:07:58.097776   File "/usr/local/lib/python2.7/dist-packages/oslo_log/log.py", line 50, in <module>
2016-01-26 20:07:58.100863     from oslo_log import formatters
2016-01-26 20:07:58.100931   File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 27, in <module>
2016-01-26 20:07:58.102743     from oslo_serialization import jsonutils
2016-01-26 20:07:58.102807   File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 60, in <module>
2016-01-26 20:07:58.124702     from oslo_utils import timeutils
2016-01-26 20:07:58.124862   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/timeutils.py", line 27, in <module>
2016-01-26 20:07:58.128004     from monotonic import monotonic as now  # noqa
2016-01-26 20:07:58.128047   File "/usr/local/lib/python2.7/dist-packages/monotonic.py", line 131, in <module>
2016-01-26 20:07:58.152613     raise RuntimeError('no suitable implementation for this system')
2016-01-26 20:07:58.154446 RuntimeError: no suitable implementation for this system
2016-01-26 20:09:49.247986 10464 INFO keystone.common.wsgi
[req-7c00064c-318d-419a-8aaa-0536e74db473 - - - - -] GET
http://127.0.0.1:35357/
2016-01-26 20:09:49.339477 10468 DEBUG keystone.middleware.auth
[req-4851e134-1f0c-45b0-959c-881a2b1f5fd8 - - - - -] There is either no
auth token in the request or the certificate issuer is not trusted. No auth
context will be set.
process_request /opt/stack/keystone/keystone/middleware/auth.py:171

A peek in monotonic.py finds where the exception is.

The version of monotonic is:

---
Metadata-Version: 2.0
Name: monotonic
Version: 0.5
Summary: An implementation of time.monotonic() for Python 2 & < 3.3
Home-page: https://github.com/atdt/monotonic
Author: Ori Livneh
Author-email: o...@wikimedia.org
License: Apache
Location: /usr/local/lib/python2.7/dist-packages

Any suggestions on how to get around this one?
Bug?

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Bob Hansen

Yes, it is image-list not image list. I don't seem to be able to find any
other hints in any of the nova logs.

nova --debug image-list shows this:

DEBUG (extension:157) found extension EntryPoint.parse('token =
keystoneauth1.loading._plugins.identity.generic:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3token =
keystoneauth1.loading._plugins.identity.v3:Token')
DEBUG (extension:157) found extension EntryPoint.parse('password =
keystoneauth1.loading._plugins.identity.generic:Password')
DEBUG (v2:62) Making authentication request to
http://127.0.0.1:35357/tokens
INFO (connectionpool:207) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:387) "POST /tokens HTTP/1.1" 404 93
DEBUG (session:439) Request returned failure status: 404
DEBUG (shell:894) The resource could not be found. (HTTP 404)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
892, in main
OpenStackComputeShell().main(argv)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
726, in main
api_version = api_versions.discover_version(self.cs, api_version)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 267, in discover_version
client)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 248, in _get_server_version_range
version = client.versions.get_current()
  File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 83, in get_current
return self._get_current()
  File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 56, in _get_current
url = "%s" % self.api.client.get_endpoint()
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",
line 132, in get_endpoint
return self.session.get_endpoint(auth or self.auth, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 634, in get_endpoint
return auth.get_endpoint(self, **kwargs)
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 209, in get_endpoint
service_catalog = self.get_access(session).service_catalog
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 135, in get_access
self.auth_ref = self.get_auth_ref(session)
  File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v2.py", line
64, in get_auth_ref
authenticated=False, log=False)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 545, in post
return self.request(url, 'POST', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/_utils.py",
line 180, in inner
return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 440, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404)



Bob Hansen
z/VM OpenStack Enablement




From:   "Chen CH Ji" <jiche...@cn.ibm.com>
To: "OpenStack Development Mailing List \(not for usage questions
\)" <openstack-dev@lists.openstack.org>
Date:   01/21/2016 04:25 AM
Subject:Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today




Guess it's image-list instead of image list, right? Maybe you can check
with nova --debug image-list and see the API request which was
sent to the nova-api server, then analyze the nova-api log to know what
exactly the error is?

-"Bob Hansen" <hans...@us.ibm.com> wrote: -
To: openstack-dev@lists.openstack.org
From: "Bob Hansen" <hans...@us.ibm.com>
Date: 01/20/2016 10:31PM
Subject: [openstack-dev] nova cli commands fail with 404. devstack
installation from today



Installed devstack today, this morning actually, and most everything
works except simple nova CLI commands (nova image list, list, and
flavor-list all fail); glance is ok, neutron is ok.

As an example, nova image list returns:

devstack$ nova image list
ERROR (NotFound): The resource could not be found. (HTTP 404)

However, the command openstack image list returns the correct list of
cirros images, plus one I have already imported.

key.log has:

127.0.0.1 - - [20/Jan/2016:21:10:49 +] "POST /tokens HTTP/1.1" 404 93
"-" "keystoneauth1/2.2.0 python-requests/2.9.1 CPython/2.7.6" 2270(us)

Clearly an authentication thing. Since other commands work, e.g. neutron
subnet-list, I concluded keystone auth is just fine.

I suspect it is something in nova.conf. [keystone_authtoken] has this in
it, which stack.sh built:

[keystone_authtoken]
signing_dir = /var/cache/nova
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://127.0.0.1:

Re: [openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-21 Thread Bob Hansen

Found it. The contents of the admin file
(e.g. ../devstack/accrc/admin/admin) that I sourced for the admin
credentials do not work with the nova CLI. This combination of OS_*
variables produced the error:

export OS_PROJECT_NAME="admin"
export OS_AUTH_URL="http://127.0.0.1:35357"
export OS_CACERT=""
export OS_AUTH_TYPE=v2password
export OS_PASSWORD="secretadmin"
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default

This combination works with nova, glance, and neutron:

export OS_PROJECT_NAME=admin
export OS_PASSWORD=secretadmin
export OS_AUTH_URL=http://127.0.0.1:35357
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_CACERT=

To be honest, I've seen so many examples of the 'correct' set of
environment variables with different AUTH_TYPEs that it's very hard to tell
which variable 'set' is appropriate for which AUTH_TYPE and version of the
keystone API.

A pointer to this sort of information is appreciated.
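
For what it's worth, here is a minimal keystoneauth1 sketch of the v2
password flow, which expects username and tenant_name rather than the
project/domain variables from the broken rc file. The endpoint and
credentials are the local devstack values from this thread, and the /v2.0
suffix on the auth URL is an assumption; adjust for your setup:

# Sketch only: a v2 password authentication with keystoneauth1. Values are
# the local devstack ones used earlier in this thread; the /v2.0 path on
# auth_url is assumed.
from keystoneauth1.identity import v2
from keystoneauth1 import session

auth = v2.Password(
    auth_url="http://127.0.0.1:35357/v2.0",
    username="admin",
    password="secretadmin",
    tenant_name="admin",
)
sess = session.Session(auth=auth)

# If these succeed, the same credentials should also work for the nova CLI.
print(sess.get_token())
print(sess.get_endpoint(service_type="compute"))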

Bob Hansen
z/VM OpenStack Enablement




From:   Bob Hansen/Endicott/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions
\)" <openstack-dev@lists.openstack.org>
Date:   01/21/2016 10:46 AM
Subject:Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today



Yes, it is image-list not image list. I don't seem to be able to find any
other hints in any of the nova logs.

nova --debug image-list shows this:

DEBUG (extension:157) found extension EntryPoint.parse('token =
keystoneauth1.loading._plugins.identity.generic:Token')
DEBUG (extension:157) found extension EntryPoint.parse('v3token =
keystoneauth1.loading._plugins.identity.v3:Token')
DEBUG (extension:157) found extension EntryPoint.parse('password =
keystoneauth1.loading._plugins.identity.generic:Password')
DEBUG (v2:62) Making authentication request to
http://127.0.0.1:35357/tokens
INFO (connectionpool:207) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:387) "POST /tokens HTTP/1.1" 404 93
DEBUG (session:439) Request returned failure status: 404
DEBUG (shell:894) The resource could not be found. (HTTP 404)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
892, in main
OpenStackComputeShell().main(argv)
File "/usr/local/lib/python2.7/dist-packages/novaclient/shell.py", line
726, in main
api_version = api_versions.discover_version(self.cs, api_version)
File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 267, in discover_version
client)
File "/usr/local/lib/python2.7/dist-packages/novaclient/api_versions.py",
line 248, in _get_server_version_range
version = client.versions.get_current()
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 83, in get_current
return self._get_current()
File "/usr/local/lib/python2.7/dist-packages/novaclient/v2/versions.py",
line 56, in _get_current
url = "%s" % self.api.client.get_endpoint()
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",
line 132, in get_endpoint
return self.session.get_endpoint(auth or self.auth, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 634, in get_endpoint
return auth.get_endpoint(self, **kwargs)
File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 209, in get_endpoint
service_catalog = self.get_access(session).service_catalog
File
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/base.py",
line 135, in get_access
self.auth_ref = self.get_auth_ref(session)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/identity/v2.py",
line 64, in get_auth_ref
authenticated=False, log=False)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 545, in post
return self.request(url, 'POST', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/_utils.py", line
180, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/session.py",
line 440, in request
raise exceptions.from_response(resp, method, url)
NotFound: The resource could not be found. (HTTP 404)



Bob Hansen
z/VM OpenStack Enablement



From: "Chen CH Ji" <jiche...@cn.ibm.com>
To: "OpenStack Development Mailing List \(not for usage questions\)"
<openstack-dev@lists.openstack.org>
Date: 01/21/2016 04:25 AM
Subject: Re: [openstack-dev] nova cli commands fail with 404. devstack
installation from today

[openstack-dev] nova cli commands fail with 404. devstack installation from today

2016-01-20 Thread Bob Hansen

Installed devstack today, this morning actually, and most everything
works except simple nova CLI commands (nova image list, list, and
flavor-list all fail); glance is ok, neutron is ok.

As an example, nova image list returns:

devstack$ nova image list
ERROR (NotFound): The resource could not be found. (HTTP 404)

However, the command openstack image list returns the correct list of
cirros images, plus one I have already imported.

key.log has:

127.0.0.1 - - [20/Jan/2016:21:10:49 +] "POST /tokens HTTP/1.1" 404 93
"-" "keystoneauth1/2.2.0 python-requests/2.9.1 CPython/2.7.6" 2270(us)

Clearly an authentication thing. Since other commands work, e.g. neutron
subnet-list, I concluded keystone auth is just fine.

I suspect it is something in nova.conf. [keystone_authtoken] has this in
it, which stack.sh built:

[keystone_authtoken]
signing_dir = /var/cache/nova
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://127.0.0.1:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = secretservice
username = nova
auth_url = http://127.0.0.1:35357
auth_type = password

Any suggestions on where else to look?

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev