Hi,

Thanks for your help, all. Glance is also implemented with ceph rbd, and both have ample space available:

compute node:  /dev/sda1       5.5T  890G  4.3T  17% /

Ceph cluster: 16555 GB used, 23666 GB / 40221 GB avail

It turns out to be an issue with the keystone tokens timing out while the snapshot is taking place.

I will get onto looking into that now.
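For anyone else who hits this: if the snapshot upload to glance takes longer than the token lifetime, the follow-up glance calls (including the cleanup delete in the trace below) get a 401. A hedged sketch of the relevant knob, assuming the stock default of 3600 seconds and the default config location:

```
# /etc/keystone/keystone.conf (assumed default location)
[token]
# Token validity in seconds (default 3600). Raising it is a blunt
# workaround when long snapshot uploads outlive the token.
expiration = 14400
```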

Thanks again for the advice and help.

Grant

On 03/06/16 11:57, Saverio Proto wrote:
Hello,

What is the state of the instance before you ask for the snapshot? Is it running or paused?

When the snapshot starts, check on the hypervisor whether you see files in these folders:

/var/lib/libvirt/qemu/save/
/var/lib/nova/instances/snapshots/
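A quick sketch for watching those folders while the snapshot runs (paths as above; they may differ per distro packaging):

```shell
# List the snapshot scratch directories so you can see whether nova
# stages the snapshot on local disk before uploading it to glance.
check_snapshot_dirs() {
    for d in "$@"; do
        if [ -d "$d" ]; then
            echo "== $d =="
            ls -lh "$d"
        else
            echo "== $d == (not present on this host)"
        fi
    done
}

check_snapshot_dirs /var/lib/libvirt/qemu/save /var/lib/nova/instances/snapshots
```

Run it in a loop (or under watch) while the snapshot is in flight; growing files here tell you the snapshot is being staged locally before the glance upload.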

How is your glance implemented? Also with ceph rbd? Remember that a "nova snapshot" is a glance image.
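Since a nova snapshot is a glance image, the image itself can also be tracked. A sketch, assuming the Kilo-era glanceclient v1 CLI and that glance's rbd pool is named "images" (the default; adjust to your deployment):

```shell
# Inspect a snapshot that is stuck in "saving" from the glance side.
check_stuck_image() {
    image_id=$1
    # Status should go queued -> saving -> active; stuck in "saving"
    # means the upload never finished.
    glance image-show "$image_id"
    # Has any image data actually landed in the glance rbd pool?
    rbd -p images ls | grep "$image_id"
}

# Usage: check_stuck_image f9844dd5-5a92-4cd4-956d-8ad04cfc5e84
```

If rbd shows no object for the image while glance still reports "saving", the upload from the compute node never completed.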

Saverio



2016-06-03 12:17 GMT+02:00 Grant <gr...@absolutedevops.io>:

    Hi all,

    I was wondering if someone could shed any light on an issue we are
    seeing. We are running Kilo in our production environment, and when
    we try to create a snapshot of one particular instance it gets
    stuck in a "saving" state and never actually saves the image.

    We are using a ceph back-end, and the user trying to take the
    snapshot can snapshot all of their other instances; it is just
    this one that fails.

    Error log from the nova compute host below:

    2016-06-02 17:13:48.594 52559 WARNING urllib3.connectionpool [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] HttpConnectionPool is full, discarding connection: 10.5.0.205
    2016-06-02 17:14:00.042 52559 ERROR nova.compute.manager [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] Error while trying to clean up image f9844dd5-5a92-4cd4-956d-8ad04cfc5e84
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] Traceback (most recent call last):
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 405, in decorated_function
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     self.image_api.delete(context, image_id)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 141, in delete
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return session.delete(context, image_id)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 410, in delete
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     self._client.call(context, 1, 'delete', image_id)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 218, in call
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return getattr(client.images, method)(*args, **kwargs)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 255, in delete
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     resp, body = self.client.delete(url)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 271, in delete
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     return self._request('DELETE', url, **kwargs)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 227, in _request
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]     raise exc.from_response(resp, resp.content)
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] HTTPUnauthorized: <html>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] <head>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] <title>401 Unauthorized</title>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] </head>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] <body>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] <h1>401 Unauthorized</h1>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0]  This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] </body>
    2016-06-02 17:14:00.042 52559 TRACE nova.compute.manager [instance: 70d42d14-66f6-4374-9038-4b6f840193e0] </html> (HTTP 401)
    2016-06-02 17:14:00.173 52559 ERROR oslo_messaging.rpc.dispatcher [req-8200a3b0-ad2a-406e-969e-c22762db3455 bb07f987fbae485c9e05f06fb0d422c2 a22e503869c34a92bceb66b0c1da7231 - - -] Exception during message handling: Not authorized for image f9844dd5-5a92-4cd4-956d-8ad04cfc5e84.

    Any help will be appreciated.

    Regards,

    -- 
    Grant Morley
    Cloud Lead
    Absolute DevOps Ltd
    Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
    www.absolutedevops.io
    gr...@absolutedevops.io
    0845 874 0580

    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators@lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
Grant Morley
Cloud Lead
Absolute DevOps Ltd
Units H, J & K, Gateway 1000, Whittle Way, Stevenage, Herts, SG1 2FP
www.absolutedevops.io
gr...@absolutedevops.io
0845 874 0580
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
