[Yahoo-eng-team] [Bug 1406784] Re: Can't create volume from non-raw image

2015-01-20 Thread vishal yadav
** Project changed: nova => openstack-manuals

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406784

Title:
  Can't create volume from non-raw image

Status in Cinder:
  Invalid
Status in OpenStack Manuals:
  Confirmed

Bug description:
  1. Create an image using a non-raw format (qcow2 or vmdk, for example).
  2. Copy the image to a volume; the copy fails.

  Log:
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 363, in create_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     _run_flow()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 356, in _run_flow
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     flow_engine.run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/lock_utils.py", line 53, in wrapper
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 111, in run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._run()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 121, in _run
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     self._revert(misc.Failure())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 78, in _revert
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     misc.Failure.reraise_if_any(failures.values())
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 558, in reraise_if_any
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     failures[0].reraise()
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/utils/misc.py", line 565, in reraise
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     six.reraise(*self._exc_info)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 36, in _execute_task
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     result = task.execute(**arguments)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 594, in execute
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     **volume_spec)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 556, in _create_from_image
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     image_id, image_location, image_service)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/flows/manager/create_volume.py", line 463, in _copy_image_to_volume
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher     raise exception.ImageUnacceptable(ex)
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher ImageUnacceptable: Image 92fad7ae-6439-4c69-bdf4-4c6cc5759225 is unacceptable: qemu-img is not installed and image is of type vmdk. Only RAW images can be used if qemu-img is not installed.
  2014-12-31 07:06:09.299 2159 TRACE oslo.messaging.rpc.dispatcher
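
  Following the error message, a possible workaround (the package name is
  the Debian/Ubuntu one; the image names below are examples, not from this
  report) is to install qemu-img on the cinder-volume host, or to convert
  the image to raw before uploading it:

  # install qemu-img on the cinder-volume host (Debian/Ubuntu)
  apt-get install qemu-utils

  # or convert the source image to raw and upload the raw file instead
  qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
  glance image-create --name image-raw --disk-format raw \
    --container-format bare --file image.raw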
  

[Yahoo-eng-team] [Bug 1371082] Re: nova-scheduler high cpu usage

2014-09-18 Thread vishal yadav
*** This bug is a duplicate of bug 1371084 ***
https://bugs.launchpad.net/bugs/1371084

** Project changed: cinder => nova

** Tags added: nova-scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1371082

Title:
  nova-scheduler high cpu usage

Status in OpenStack Compute (Nova):
  New

Bug description:
  For no particular reason, nova-scheduler CPU utilization can jump to
  100%. I was unable to find any pattern or reason for why this happens.
  We have a small cluster: 1 cloud controller and 7 node controllers.
  Apart from the high CPU usage nothing bad happens; we're able to
  create/delete instances, and after a nova-scheduler restart everything
  goes back to normal.

  I was able to strace 2 processes while nova-scheduler was using 100%
  CPU.

  The 1st process is in a loop and repeatedly prints:
  <12>2014-09-16 00:02:21.501 5668 WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 12.322771 sec\0

  The 2nd process is in a loop as well, repeating:
  epoll_ctl(6, EPOLL_CTL_ADD, 3, {EPOLLOUT|EPOLLERR|EPOLLHUP, {u32=3, u64=40095890530107395}}) = 0
  epoll_wait(6, {{EPOLLOUT, {u32=3, u64=40095890530107395}}}, 1023, 878) = 1
  epoll_ctl(6, EPOLL_CTL_DEL, 3, {EPOLLRDNORM|EPOLLRDBAND|EPOLLWRNORM|EPOLLMSG|EPOLLHUP|0x2485d020, {u32=32708, u64=23083065509183428}}) = 0
  sendto(3, "<14>2014-09-16 00:44:19.272 5673 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 10.3.128.254:5672\0", 125, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
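
  For reference, output like the above can typically be captured by
  attaching strace to the scheduler process (the PID below is a
  placeholder):

  strace -p <scheduler-pid> -e trace=epoll_ctl,epoll_wait,sendto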

  Other processes don't have any issues with the AMQP server; only
  nova-scheduler does.

  We're using Debian, kernel 3.14-1-amd64, nova-scheduler 2014.1.2-1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1371082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355623] Re: nova floating-ip-create need pool name

2014-09-10 Thread vishal yadav
** Changed in: python-novaclient
   Status: New => Confirmed

** Project changed: python-novaclient => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355623

Title:
  nova floating-ip-create need pool name

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  #
  # help menu
  #
  [root@cnode35-m ~(keystone_admin)]# nova help floating-ip-create
  usage: nova floating-ip-create [<floating-ip-pool>]

  Allocate a floating IP for the current tenant.

  Positional arguments:
    <floating-ip-pool>  Name of Floating IP Pool. (Optional)

  #
  # error log
  #
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create
  ERROR: FloatingIpPoolNotFound: Floating ip pool not found. (HTTP 404) (Request-ID: req-224995d7-b1bf-4b82-83f6-d9259c1ca265)
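
  A workaround sketch, assuming at least one floating IP pool exists (the
  pool name below is an example): list the available pools and pass one
  explicitly.

  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-pool-list
  [root@cnode35-m ~(keystone_admin)]# nova floating-ip-create nova

  Alternatively, the pool used when no argument is given can be set via
  the default_floating_pool option in nova.conf.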

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1355623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368030] [NEW] nova-manage command when executed by non-root user, should give authorization error instead of low level database error

2014-09-10 Thread vishal yadav
Public bug reported:

Version of nova-compute and distribution/package (1:2014.1.2-0ubuntu1.1)

1) Execute the command below as a non-root user.
ubuntu@mc1:~$ nova-manage flavor list

It gives the error below:

Command failed, please check log for more info
2014-09-11 13:43:17.501 12857 CRITICAL nova [req-07bc6065-3ece-4fd5-b478-48d37c63a2c6 None None] OperationalError: (OperationalError) unable to open database file None None

2) Execute the same command as root:
ubuntu@mc1:~$ sudo su -
root@mc1:~# nova-manage flavor list
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}

So instead of a low-level database error, it should give some kind of
authorization error to the operator or end user of the nova-manage CLI.
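
A quick check of the likely cause (paths are the Ubuntu package defaults,
so treat them as an assumption): verify whether the non-root user can read
the nova configuration at all.

ubuntu@mc1:~$ ls -l /etc/nova/nova.conf
ubuntu@mc1:~$ sudo -u nova nova-manage flavor list

If nova.conf is unreadable, nova-manage may fall back to built-in defaults
(including an SQLite connection), which would explain the "unable to open
database file" error above.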

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: ubuntu
 Importance: Undecided
 Status: New


** Tags: nova-manage

** Summary changed:

- nova-manage command when executed by non-root user should give authorization error instead of low level database error
+ nova-manage command when executed by non-root user, should give authorization error instead of low level database error

** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368030

Title:
  nova-manage command when executed by non-root user, should give
  authorization error instead of low level database error

Status in OpenStack Compute (Nova):
  New
Status in Ubuntu:
  New

Bug description:
  Version of nova-compute and distribution/package
  (1:2014.1.2-0ubuntu1.1)

  1) Execute the command below as a non-root user.
  ubuntu@mc1:~$ nova-manage flavor list

  It gives the error below:

  Command failed, please check log for more info
  2014-09-11 13:43:17.501 12857 CRITICAL nova [req-07bc6065-3ece-4fd5-b478-48d37c63a2c6 None None] OperationalError: (OperationalError) unable to open database file None None

  2) Execute the same command as root:
  ubuntu@mc1:~$ sudo su -
  root@mc1:~# nova-manage flavor list
  m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.tiny: Memory: 512MB, VCPUS: 1, Root: 1GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
  m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}

  So instead of a low-level database error, it should give some kind of
  authorization error to the operator or end user of the nova-manage CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1368030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp