Re: [Openstack] [Nova][Glance] Nova imports flat images from base file despite ceph backend

2018-10-12 Thread Eugen Block

Hi Melanie,

thanks for your response.

I would consider this thread closed. After I learned that nova  
imports flat images if the image's disk-format is not "raw", I tested  
several different scenarios to understand more about this topic. I still  
couldn't explain why nova did that in the specific case, as I was sure  
the image's format was raw (the image has been deleted in the meantime,  
but I can see in the database that the image's disk-format was indeed  
"raw").


But a couple of days ago I encountered the same problem. I tried to  
upload an image from a volume via the Horizon dashboard (where a dropdown  
exists to choose the disk-format) and the preselected value was indeed  
"qcow2" instead of "raw". I have no idea *why* it differed from  
the default "raw", but this could have happened last week too, although  
according to the database entry it should not have. Anyway,  
now I know the reason why some ephemeral disks are flat; I just can't  
tell how Horizon selects the format, but that's enough research for  
now. If I find out more I'll report back.
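As a side note, the effective disk-format of an image can be checked (and forced on upload) from the CLI rather than relying on the dashboard's preselected value; a sketch, where the image and file names are placeholders:

```shell
# Check what disk-format glance actually stored for an image
openstack image show my-image -f value -c disk_format

# Force the format explicitly when creating an image, instead of
# trusting a dashboard default
openstack image create --disk-format raw --container-format bare \
    --file my-image.raw my-image
```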


Thanks again!
Regards,
Eugen


Zitat von melanie witt :


On Tue, 09 Oct 2018 08:01:01 +, Eugen Block wrote:

So it's still unclear why nova downloaded a raw glance image to the
local filesystem during the previous attempt.

I always knew that with Ceph as backend it's recommended to use raw
images but I always assumed the "disk-format" was not more than a
display option. Well, now I know that, but this still doesn't explain
the downloaded base image.
If anyone has an idea how to reproduce such behavior or an
explanation, I'd love to hear it.


Right, in order to get the ceph CoW behavior, you must use RAW  
glance image format [1]. You also need to have created the image as  
'raw' before you upload to glance (qemu-img info should show the  
file format as 'raw').


If you have done this, then you shouldn't see any image downloaded  
to the local filesystem. Is it possible that your image did not have  
file format 'raw' before you uploaded it to glance?
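For reference, a quick way to check and, if necessary, fix the file format before uploading to glance; a sketch, with placeholder file names:

```shell
# Show the actual on-disk format of a local image file
qemu-img info image.img

# Convert a qcow2 image to raw before uploading to glance
qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
```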


Cheers,
-melanie

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack







Re: [Openstack] [nova][ceph] Libvirt Error when add ceph as nova backend

2018-10-12 Thread Eugen Block

Hi,

the keyrings and caps seem correct to me.


Yes, both nodes (the ceph and nova-compute nodes) are on the same network,
192.168.26.xx/24. Do any special ports need to be allowed in firewalld?


Yes, the firewall should allow the traffic between the nodes. If this  
is just a test environment you could try disabling the firewall. If  
that is not an option, open the respective ports; an excerpt from the  
docs:


For iptables, add port 6789 for Ceph Monitors and ports 6800:7300  
for Ceph OSDs.
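With firewalld, the ports from the excerpt above could be opened roughly like this (a sketch, assuming the default zone on the ceph node):

```shell
# Ceph Monitors
firewall-cmd --permanent --add-port=6789/tcp
# Ceph OSDs
firewall-cmd --permanent --add-port=6800-7300/tcp
firewall-cmd --reload
```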


For more information take a look into [1]. Can you see requests  
blocked in your firewall?


Regards,
Eugen

[1] http://docs.ceph.com/docs/master/start/quick-start-preflight/


Zitat von Adhi Priharmanto :


Hi,
This is my ceph node ( using single node ceph) for test only


[cephdeploy@ceph2 ~]$ cat /etc/ceph/ceph.client.nova.keyring
[client.nova]
key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
[cephdeploy@ceph2 ~]$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx
pool=vms, allow rx pool=images"
[cephdeploy@ceph2 ~]$



and this at my compute-node


[root@cp2 ~]# cat /etc/ceph/ceph.client.nova.keyring
[client.nova]
key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
[root@cp2 ~]#


Yes, both nodes (the ceph and nova-compute nodes) are on the same network,
192.168.26.xx/24. Do any special ports need to be allowed in firewalld?

On Thu, Oct 11, 2018 at 2:24 PM Eugen Block  wrote:


Hi,

your nova.conf [libvirt] section seems fine.

Can you paste the output of

ceph auth get client.nova

and does the keyring file exist in /etc/ceph/ (ceph.client.nova.keyring)?

Is the ceph network reachable by your openstack nodes?

Regards,
Eugen


Zitat von Adhi Priharmanto :

> Hi, I'm running my openstack environment with the rocky release, and I want to
> integrate ceph as the nova-compute backend, so I followed the instructions here:
> http://superuser.openstack.org/articles/ceph-as-storage-for-openstack/
>
> and this is my nova.conf at my compute node
>
> [DEFAULT]
> ...
> compute_driver=libvirt.LibvirtDriver
>
> [libvirt]
> images_type = rbd
> images_rbd_pool = vms
> images_rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_user = nova
> rbd_secret_uuid = a93824e0-2d45-4196-8918-c8f7d7f35c5d
> 
>
> and this is log when I restarted the nova compute service :
>
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> [req-f4e2715a-c925-4c12-b8e6-aa550fc588b1 - - - - -] Exception
> handling connection event: AttributeError: 'NoneType' object has no
> attribute 'rfind'
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host Traceback
> (most recent call last):
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
> 148, in _dispatch_conn_event
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host handler()
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
> 414, in handler
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host return
> self._conn_event_handler(*args, **kwargs)
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
> 470, in _handle_conn_event
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> self._set_host_enabled(enabled, reason)
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
> 3780, in _set_host_enabled
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> mount.get_manager().host_up(self._host)
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
> line 134, in host_up
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> self.state = _HostMountState(host, self.generation)
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
> line 229, in __init__
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> mountpoint = os.path.dirname(disk.source_path)
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
> "/usr/lib64/python2.7/posixpath.py", line 129, in dirname
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host i =
> p.rfind('/') + 1
> 2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
> AttributeError: 'NoneType' object has no attribute 'rfind'
> 2018-10-11 01:59:57.1
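The AttributeError in the traceback above boils down to nova calling os.path.dirname() on disk.source_path when source_path is None (rbd-backed disks have no file path). A minimal reproduction, for illustration (under Python 2 this raised the AttributeError shown in the log; Python 3 raises a TypeError instead):

```shell
# posixpath.dirname(p) calls p.rfind('/'), which fails when p is None
python3 -c 'import os.path; os.path.dirname(None)' 2>&1 | tail -n 1
```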

Re: [Openstack] [nova][ceph] Libvirt Error when add ceph as nova backend

2018-10-11 Thread Eugen Block

Hi,

your nova.conf [libvirt] section seems fine.

Can you paste the output of

ceph auth get client.nova

and does the keyring file exist in /etc/ceph/ (ceph.client.nova.keyring)?

Is the ceph network reachable by your openstack nodes?

Regards,
Eugen


Zitat von Adhi Priharmanto :


Hi, I'm running my openstack environment with the rocky release, and I want to
integrate ceph as the nova-compute backend, so I followed the instructions here:
http://superuser.openstack.org/articl...


and this is my nova.conf at my compute node

[DEFAULT]
...
compute_driver=libvirt.LibvirtDriver

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = a93824e0-2d45-4196-8918-c8f7d7f35c5d
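One step that is easy to miss with rbd_secret_uuid: libvirt on the compute node must also know the secret. Following the Ceph/OpenStack docs, that would look roughly like this (a sketch; the UUID must match [libvirt]/rbd_secret_uuid in nova.conf):

```shell
# Wrap the UUID from nova.conf in a libvirt secret definition
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>a93824e0-2d45-4196-8918-c8f7d7f35c5d</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
# Attach the actual cephx key to the secret
virsh secret-set-value --secret a93824e0-2d45-4196-8918-c8f7d7f35c5d \
    --base64 "$(ceph auth get-key client.nova)"
```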


and this is log when I restarted the nova compute service :

2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
[req-f4e2715a-c925-4c12-b8e6-aa550fc588b1 - - - - -] Exception
handling connection event: AttributeError: 'NoneType' object has no
attribute 'rfind'
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host Traceback
(most recent call last):
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
148, in _dispatch_conn_event
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host handler()
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
414, in handler
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host return
self._conn_event_handler(*args, **kwargs)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
470, in _handle_conn_event
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
self._set_host_enabled(enabled, reason)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
3780, in _set_host_enabled
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
mount.get_manager().host_up(self._host)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
line 134, in host_up
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
self.state = _HostMountState(host, self.generation)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
line 229, in __init__
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
mountpoint = os.path.dirname(disk.source_path)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host   File
"/usr/lib64/python2.7/posixpath.py", line 129, in dirname
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host i =
p.rfind('/') + 1
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
AttributeError: 'NoneType' object has no attribute 'rfind'
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
2018-10-11 01:59:57.231 5275 WARNING nova.compute.monitors
[req-df2559f3-5a01-499a-9ac0-3dd9dc255f77 - - - - -] Excluding
nova.compute.monitors.cpu monitor virt_driver. Not in the list of
enabled monitors (CONF.compute_monitors).
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
[req-df2559f3-5a01-499a-9ac0-3dd9dc255f77 - - - - -] Error updating
resources for node cp2.os-srg.adhi.: TimedOut: [errno 110] error
connecting to the cluster
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager Traceback
(most recent call last):
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7722,
in _update_available_resource_for_node
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
rt.update_available_resource(context, nodename)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager   File
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
line 687, in update_available_resource
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager resources
= self.driver.get_available_resource(nodename)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
6505, in get_available_resource
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
disk_info_dict = self._get_local_gb_info()
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
5704, in _get_local_gb_info
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager info =
LibvirtDriver._get_rbd_driver().get_pool_info()
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 368, in get_pool_info
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager with
RADOSClient(self) as client:
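The TimedOut ("error connecting to the cluster") at the end suggests the compute node cannot reach the monitors at all. A quick check from the compute node could look like this (a sketch, assuming the client.nova keyring is in /etc/ceph):

```shell
# Can the nova client reach the cluster?
ceph -s --id nova
# Can it list the pool nova will use for ephemeral disks?
rbd ls vms --id nova
```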

Re: [Openstack] [Nova][Glance] Nova imports flat images from base file despite ceph backend

2018-10-09 Thread Eugen Block

Hi,

I just wanted to follow up on this for documentation purposes. Although  
I still don't have all the answers, there's something I can explain.


When I upload a new image (iso) to create a new base image for glance  
and use "--disk-format iso", this will lead to the described  
behavior; nova will report something like:


rbd image clone requires image format to be 'raw' but image  
rbd:///images//snap is 'iso' is_cloneable


If I launch a new instance from that iso, nova will download it to the  
filesystem (/var/lib/nova/instances/_base), which will take some time.  
Then I attach an empty volume, finish the installation, destroy the  
instance and upload the volume to glance; that new glance image has  
the default "disk-format = raw".


Now when I launch an instance from this new image (raw) I usually get  
a CoW clone on the RBD layer. The _base file of the ISO will eventually  
be removed by nova once the base file is old enough. This is how I always  
created new instances, not knowing that the ISOs should have the "raw"  
disk-format. Despite the wrong format of the ISOs, my procedure usually  
leads to CoW clones anyway, since I upload volumes to glance.
I tried to reproduce it with the exact same workflow, but everything  
worked as expected (including the download of the iso image to the local  
filesystem; I know that now).


So it's still unclear why nova downloaded a raw glance image to the  
local filesystem during the previous attempt.


I always knew that with Ceph as backend it's recommended to use raw  
images but I always assumed the "disk-format" was not more than a  
display option. Well, now I know that, but this still doesn't explain  
the downloaded base image.
If anyone has an idea how to reproduce such behavior or an  
explanation, I'd love to hear it.


Regards,
Eugen


Zitat von Eugen Block :


Hi list,

this week I noticed something strange in our cloud (Ocata).

We use Ceph as backend for nova, glance and cinder, everything  
really works like a charm. But from time to time we've noticed that  
some instances take much longer to launch than others. So I wanted  
to take a look what's happening, turned on debug logs and found that  
in some cases (I have no idea how to reproduce yet) there is an  
image downloaded to /var/lib/nova/instances/_base which then is used  
to import it back to Ceph, that is obviously the reason for the delay.
The problem is that this new instance is not CoW, it's a flat rbd  
image. Here are some relevant logs (instance_id:  
65567fc1-017f-45dc-b0ee-570c44146119, image_id:  
0da1ba0f-c504-45ea-b138-16026aec022b)


---cut here---
[...]
2018-10-04 11:46:39.189 10293 DEBUG nova.compute.manager  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:  
65567fc1-017f-45dc-b0ee-570c44146119] Start spawning the instance on  
the hypervisor. _build_and_run_instance  
/usr/lib/python2.7/site-packages/nova/compute/manager.py:1929

[...]
2018-10-04 11:46:39.192 10293 INFO nova.virt.libvirt.driver  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:  
65567fc1-017f-45dc-b0ee-570c44146119] Creating image
2018-10-04 11:46:39.220 10293 DEBUG  
nova.virt.libvirt.storage.rbd_utils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image  
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__  
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.241 10293 DEBUG  
nova.virt.libvirt.storage.rbd_utils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image  
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__  
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.245 10293 DEBUG oslo_concurrency.lockutils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock  
"3c60237fbf59101d3411c4f795d0a72b82752e0b" acquired by  
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.001s  
inner  
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2018-10-04 11:46:39.246 10293 DEBUG oslo_concurrency.lockutils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock  
"3c60237fbf59101d3411c4f795d0a72b82752e0b" released by  
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.001s  
inner  
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.

[Openstack] [Nova][Glance] Nova imports flat images from base file despite ceph backend

2018-10-04 Thread Eugen Block

Hi list,

this week I noticed something strange in our cloud (Ocata).

We use Ceph as the backend for nova, glance and cinder, and everything  
really works like a charm. But from time to time we've noticed that some  
instances take much longer to launch than others. So I wanted to take  
a look at what's happening, turned on debug logs and found that in some  
cases (I have no idea how to reproduce it yet) an image is  
downloaded to /var/lib/nova/instances/_base and then imported back  
into Ceph, which is obviously the reason for the delay.
The problem is that this new instance is not CoW; it's a flat rbd  
image. Here are some relevant logs (instance_id:  
65567fc1-017f-45dc-b0ee-570c44146119, image_id:  
0da1ba0f-c504-45ea-b138-16026aec022b):


---cut here---
[...]
2018-10-04 11:46:39.189 10293 DEBUG nova.compute.manager  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:  
65567fc1-017f-45dc-b0ee-570c44146119] Start spawning the instance on  
the hypervisor. _build_and_run_instance  
/usr/lib/python2.7/site-packages/nova/compute/manager.py:1929

[...]
2018-10-04 11:46:39.192 10293 INFO nova.virt.libvirt.driver  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:  
65567fc1-017f-45dc-b0ee-570c44146119] Creating image
2018-10-04 11:46:39.220 10293 DEBUG  
nova.virt.libvirt.storage.rbd_utils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image  
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__  
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.241 10293 DEBUG  
nova.virt.libvirt.storage.rbd_utils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image  
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__  
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.245 10293 DEBUG oslo_concurrency.lockutils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock  
"3c60237fbf59101d3411c4f795d0a72b82752e0b" acquired by  
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.001s  
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2018-10-04 11:46:39.246 10293 DEBUG oslo_concurrency.lockutils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock  
"3c60237fbf59101d3411c4f795d0a72b82752e0b" released by  
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.001s inner  
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2018-10-04 11:46:39.266 10293 DEBUG  
nova.virt.libvirt.storage.rbd_utils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image  
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__  
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.269 10293 DEBUG oslo_concurrency.processutils  
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] Running cmd (subprocess): rbd  
import --pool images  
/var/lib/nova/instances/_base/3c60237fbf59101d3411c4f795d0a72b82752e0b  
65567fc1-017f-45dc-b0ee-570c44146119_disk --image-format=2 --id  
openstack --conf /etc/ceph/ceph.conf execute  
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:355

[...]

# no parent data
control:~ # rbd info images/65567fc1-017f-45dc-b0ee-570c44146119_disk
rbd image '65567fc1-017f-45dc-b0ee-570c44146119_disk':
size 6144 MB in 1536 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.c8f88a6b8b4567
format: 2
features: layering, exclusive-lock, object-map
flags:
create_timestamp: Thu Oct  4 11:46:39 2018

---cut here---

###

For comparison I moved the respective base file and tried to spawn a  
new instance from the same glance image:


---cut here---
[...]
2018-10-04 10:30:29.412 2336 DEBUG nova.compute.manager  
[req-942ba103-1932-4adf-b9b2-670e1a2fc126  
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e  
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:  
91d0b930-97b0-4dd0-81b4-929599b7c997] Start spawning the instance on  
the hypervisor. _build_and_run_instance  

Re: [Openstack] [Horizon][Keystone] Migration to keystone v3

2018-09-28 Thread Eugen Block
Since nova-compute reports that failure, what is your auth_url in  
/etc/nova/nova.conf in the [placement] section?




Zitat von Davide Panarese :


@Paul
Yes keystone:5000 is my endpoint.

@Eugen
OPENSTACK_KEYSTONE_URL = "http://%s/v3" % OPENSTACK_HOST

Still not working.


Davide Panarese



On 28 Sep 2018, at 13:50, Eugen Block  wrote:

Hi,

what is your current horizon configuration?

control:~ # grep KEYSTONE_URL  
/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Maybe this is still configured for v2?

Regards,
Eugen


Zitat von Davide Panarese :


Goodmorning every one,
i'm finally approaching migration to keystone v3 but i want to  
maintain keystone v2 compatibility for all users that have custom  
scripts for authentication to our openstack.
Migration seems to be pretty simple: change the endpoint directly in  
the database from http://keystone:5000/v2.0 to  
http://keystone:5000; the Openstack client is able  
to add /v2.0 or /v3 to the end of the URL  
retrieved from the catalog.
But I'm stuck with the horizon dashboard: login works, but compute  
information is not available and the error log shows:
"Forbidden: You are not authorized to perform the requested  
action: rescope a scoped token. (HTTP 403)"

All the other tabs work properly.
I think it is a keystone issue, but I don't understand why it works  
perfectly with the openstack client and not with horizon.

Can anyone explain what I missed in the migration?

Thanks a lot,
Davide Panarese
















Re: [Openstack] [Horizon][Keystone] Migration to keystone v3

2018-09-28 Thread Eugen Block

Hi,

what is your current horizon configuration?

control:~ # grep KEYSTONE_URL  
/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Maybe this is still configured for v2?
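Besides OPENSTACK_KEYSTONE_URL, Horizon usually also has to be told to use identity v3 explicitly; in local_settings.py that would look roughly like this (a sketch, not verified against your release):

```python
OPENSTACK_API_VERSIONS = {
    "identity": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```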

Regards,
Eugen


Zitat von Davide Panarese :


Goodmorning every one,
i'm finally approaching migration to keystone v3 but i want to  
maintain keystone v2 compatibility for all users that have custom  
scripts for authentication to our openstack.
Migration seems to be pretty simple: change the endpoint directly in  
the database from http://keystone:5000/v2.0 to http://keystone:5000;  
the Openstack client is able to add  
/v2.0 or /v3 to the end of the URL retrieved from the catalog.
But i'm stuck with horizon dashboard, login works but compute  
information are not available and error log show:
“ Forbidden: You are not authorized to perform the requested action:  
rescope a scoped token. (HTTP 403)"

All the other tabs work properly.
I think it is a keystone issue, but I don't understand why it works  
perfectly with the openstack client and not with horizon.

Can anyone explain what I missed in the migration?

Thanks a lot,
Davide Panarese







Re: [Openstack] Create a PNDA on openstack

2018-09-26 Thread Eugen Block

Hi,

to be honest, I think if you are supposed to work with PNDA and  
OpenStack is your platform, you should at least get an overview of what  
components it has and what they are for. No example or screenshot will  
help you understand how those components interact.
The PNDA pages mention OpenStack Mitaka, so you should study the  
respective docs [1]. Different guides for installation, operations and  
administration are available for different platforms (Ubuntu,  
openSUSE/SLES and RedHat/CentOS); pick the one suitable for your  
environment and learn the basics (creating tenants and users,  
neutron networking, nova compute etc.).


In addition to the mandatory services (neutron, nova, glance, cinder)  
you'll need Swift (object storage) and Heat (orchestration).
Maybe you already get the idea: this is not something covered by "send  
me a screenshot". ;-)
Another mandatory requirement is a basic understanding of the command  
line interface (CLI) [2], which enables you to access and manage your  
openstack cloud.
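To make the CLI point concrete, a typical first session looks roughly like this (a sketch; the RC file name is a placeholder for the credentials file you download from Horizon or write yourself):

```shell
# Load credentials into the environment
source admin-openrc.sh

# A few basic commands to explore the cloud
openstack service list
openstack image list
openstack server list
```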


A quick search will also reveal some tutorials or videos [3] about the  
basic concepts.


Regards,
Eugen


[1] https://docs.openstack.org/mitaka/
[2] https://docs.openstack.org/mitaka/cli-reference/
[3] https://opensource.com/business/14/2/openstack-beginners-guide

Zitat von Suma Gowda :


I need to create a PNDA on openstack.
1. I installed openstack with devstack in VirtualBox. After that, what do I
have to do? Send me a screenshot; I am a beginner at this. What are
cli, heat, etc.? Please give one example.







Re: [Openstack] boot order with multiple attachments

2018-09-17 Thread Eugen Block

Hi Volodymyr,

I didn't really try to reproduce this, but here's an excerpt from a  
template we have been using successfully:


---cut here---
[...]
  vm-vda:
type: OS::Cinder::Volume
properties:
  description: VM vda
  image: image-vda
  name: disk-vda
  size: 100
  vm-vdb:
type: OS::Cinder::Volume
properties:
  description: VM vdb
  image: image-vdb
  name: disk-vdb
  size: 120
  vm:
type: OS::Nova::Server
depends_on: [vm_subnet, vm_floating_port, vm-vda, vm-vdb, service]
properties:
  flavor: big-flavor
  block_device_mapping:
  - { device_name: "vda", volume_id : { get_resource : vm-vda },  
delete_on_termination : "true" }
  - { device_name: "vdb", volume_id : { get_resource : vm-vdb },  
delete_on_termination : "true" }

  networks:
[...]
---cut here---

So basically, this way you tell the instance which volume has to be  
/dev/vda, vdb etc. We don't use any boot_index for this.


Hope this helps!

Regards,
Eugen


Zitat von Volodymyr Litovka :


Hi again,

there is a similar case - https://bugs.launchpad.net/nova/+bug/1570107  
- but I get the same result (booting from VOLUME2) regardless of whether  
or not I use the device_type/disk_bus properties in the BDM description.


Any ideas on how to solve this issue?

Thanks.

On 9/11/18 10:58 AM, Volodymyr Litovka wrote:

Hi colleagues,

is there any mechanism to ensure the boot disk when attaching more than  
two volumes to a server? At the moment, I can't find a way to make it  
predictable.


I have two bootable images with the following properties:
1) hw_boot_menu='true', hw_disk_bus='scsi',  
hw_qemu_guest_agent='yes', hw_scsi_model='virtio-scsi',  
img_hide_hypervisor_id='true', locations='[{u'url':  
u'swift+config:...', u'metadata': {}}]'


which corresponds to the following volume:

- attachments: [{u'server_id': u'...', u'attachment_id': u'...',  
u'attached_at': u'...', u'host_name': u'...', u'volume_id':  
u'', u'device': u'/dev/sda', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...',  
u'hw_qemu_guest_agent': u'yes', u'disk_format': u'raw',  
u'image_name': u'bionic-Qpub', u'hw_scsi_model': u'virtio-scsi',  
u'image_id': u'...', u'hw_boot_menu': u'true', u'min_ram': u'0',  
u'container_format': u'bare', u'min_disk': u'0',  
u'img_hide_hypervisor_id': u'true', u'hw_disk_bus': u'scsi',  
u'size': u'...'}


and second image:
2) hw_disk_bus='scsi', hw_qemu_guest_agent='yes',  
hw_scsi_model='virtio-scsi', img_hide_hypervisor_id='true',  
locations='[{u'url': u'cinder://...', u'metadata': {}}]'


which corresponds to the following volume:

- attachments: [{u'server_id': u'...', u'attachment_id': u'...',  
u'attached_at': u'...', u'host_name': u'...', u'volume_id':  
u'', u'device': u'/dev/sdb', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...',  
u'hw_qemu_guest_agent': u'yes', u'disk_format': u'raw',  
u'image_name': u'xenial', u'hw_scsi_model': u'virtio-scsi',  
u'image_id': u'...', u'min_ram': u'0', u'container_format':  
u'bare', u'min_disk': u'0', u'img_hide_hypervisor_id': u'true',  
u'hw_disk_bus': u'scsi', u'size': u'...'}


Using Heat, I'm creating the following block_devices_mapping_v2 scheme:

block_device_mapping_v2:
    - volume_id: 
  delete_on_termination: false
  device_type: disk
  disk_bus: scsi
  boot_index: 0
    - volume_id: 
  delete_on_termination: false
  device_type: disk
  disk_bus: scsi
  boot_index: -1

which maps to the following nova-api debug log:

Action: 'create', calling method: ServersController.create of  
0x7f6b08dd4890>>, body: {"server": {"name": "jex-n1", "imageRef": "",  
"block_device_mapping_v2": [{"boot_index": 0, "uuid": "",  
"disk_bus": "scsi", "source_type": "volume", "device_type": "disk",  
"destination_type": "volume", "delete_on_termination": false},  
{"boot_index": -1, "uuid": "", "disk_bus": "scsi",  
"source_type": "volume", "device_type": "disk", "destination_type":  
"volume", "delete_on_termination": false}], "flavorRef":  
"4b3da838-3d81-461a-b946-d3613fb6f4b3", "user_data": "...",  
"max_count": 1, "min_count": 1, "networks": [{"port":  
"9044f884-1a3d-4dc6-981e-f585f5e45dd1"}], "config_drive": true}}  
_process_stack  
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:604


Regardless of the boot_index value, the server boots from VOLUME2
(/dev/sdb), while VOLUME1 is attached as /dev/sda.


I'm using Queens. Where am I going wrong?

Thank you.



--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack






Re: [Openstack] HA configuration misunderstanding

2018-09-13 Thread Eugen Block

Hi,


HA documentation guide says "OpenStack services are configured with the
list of these IP addresses"


I created a bug report for this issue [1] a couple of months ago. The  
HA guide is not very good at the moment.


You'll have to use a virtual IP and configure OpenStack services to  
use that IP (or the respective hostname):


[database]
connection = mysql+pymysql://keystone:@/keystone
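
As a concrete sketch (the VIP 172.16.0.100, the GLANCE_DBPASS placeholder
and the keepalived/HAProxy setup are purely illustrative assumptions, not
values from this thread), the Glance example from the question would then
point at the virtual IP instead of a single node:

# /etc/glance/glance-api.conf -- illustrative only; VIP and password are placeholders
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@172.16.0.100/glance

The VIP itself would typically be managed by keepalived, Pacemaker or an
HAProxy frontend so that it fails over between the Galera nodes.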

I'm not finished yet, but I use a combination of HA guide, more or  
less old blog posts and trying to figure out the error messages. So  
good luck to you. ;-)


Regards,
Eugen

[1] https://bugs.launchpad.net/openstack-manuals/+bug/1755108


Zitat von Valdinei Rodrigues dos reis :


Hi there.

I'm configuring HA for OpenStack services. I have just configured GaleraDB
with 5 nodes, and got stuck on telling the OpenStack services how to
use this cluster.

HA documentation guide says "OpenStack services are configured with the
list of these IP addresses"

But I can't figure out how to do this. Without HA, the configuration goes like:

connection = mysql+pymysql://user:password@172.16.0.226/glance

I really appreciate any help.







Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-09-03 Thread Eugen Block

Thanks, that is a very good explanation, I get it now.

Thank you very much for your answers!


Zitat von Balázs Gibizer :


On Mon, Sep 3, 2018 at 1:27 PM, Eugen Block  wrote:

Hi,

To echo what cfriesen said, if you set your allocation ratio to  
1.0, the system will not overcommit memory. Shut down instances  
consume memory from an inventory management perspective. If you  
don't want any danger of an instance causing an OOM, you must set  
your ram_allocation_ratio to 1.0.


let's forget about the scheduler, I'll try to make my question a  
bit clearer.


Let's say I have a ratio of 1.0 on my hypervisor, and let it have  
24 GB of RAM available, ignoring the OS for a moment. Now I launch  
6 instances, each with a flavor requesting 4 GB of RAM, that would  
leave no space for further instances, right?
Then I shutdown two instances (freeing 8 GB RAM) and create a new  
one with 8 GB of RAM, the compute node is full again (assuming all  
instances actually consume all of their RAM).


When you shut down the two instances the physical RAM will be  
deallocated BUT nova will not remove the resource allocation in  
placement. Therefore your new instance which requires 8GB RAM will  
not be placed to the host in question because on that host all the  
24G RAM is still allocated even if physically not consumed at the  
moment.



Now I boot one of the shutdown instances again, the compute node  
would require additional 4 GB of RAM for that instance, and this  
would lead to OOM, isn't that correct? So a ratio of 1.0 would not  
prevent that from happening, would it?


Nova did not place the instance requiring 8G RAM on this host, as described above.  
Therefore you can freely start up the two 4G consuming instances on  
this host later.
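
The accounting described above can be sketched as a toy model (this is an
illustration of the bookkeeping only, not nova's or placement's actual code;
the function name and numbers are made up to match the example above):

```python
# Toy model of placement's RAM bookkeeping: a shut-down instance still
# holds its allocation, so capacity checks use allocated memory, not
# memory that is physically in use at the moment.

def can_place(total_mb, ratio, allocations_mb, request_mb):
    """Return True if a new request fits on the host."""
    capacity_mb = total_mb * ratio
    return sum(allocations_mb) + request_mb <= capacity_mb

# 24 GB host, ratio 1.0, six 4 GB instances allocated (two merely shut down):
allocations = [4096] * 6
print(can_place(24576, 1.0, allocations, 8192))  # -> False: 8 GB request rejected
```

Because the 8 GB instance never lands on this host, restarting the two
shut-down 4 GB instances later cannot push the host into an OOM.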



Regards,
Eugen


Zitat von Jay Pipes :


On 08/30/2018 10:54 AM, Eugen Block wrote:

Hi Jay,

You need to set your ram_allocation_ratio nova.CONF option to  
1.0 if you're running into OOM issues. This will prevent  
overcommit of memory on your compute nodes.


I understand that, the overcommitment works quite well most of the time.

It just has been an issue twice when I booted an instance that  
had been shutdown a while ago. In the meantime there were new  
instances created on that hypervisor, and this old instance  
caused the OOM.


I would expect that with a ratio of 1.0 I would experience the  
same issue, wouldn't I? As far as I understand the scheduler  
only checks at instance creation, not when booting existing  
instances. Is that a correct assumption?


To echo what cfriesen said, if you set your allocation ratio to  
1.0, the system will not overcommit memory. Shut down instances  
consume memory from an inventory management perspective. If you  
don't want any danger of an instance causing an OOM, you must set  
your ram_allocation_ratio to 1.0.


The scheduler doesn't really have anything to do with this.

Best,
-jay












Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-09-03 Thread Eugen Block

Hi,

To echo what cfriesen said, if you set your allocation ratio to 1.0,  
the system will not overcommit memory. Shut down instances consume  
memory from an inventory management perspective. If you don't want  
any danger of an instance causing an OOM, you must set your  
ram_allocation_ratio to 1.0.


let's forget about the scheduler, I'll try to make my question a bit clearer.

Let's say I have a ratio of 1.0 on my hypervisor, and let it have 24  
GB of RAM available, ignoring the OS for a moment. Now I launch 6  
instances, each with a flavor requesting 4 GB of RAM, that would leave  
no space for further instances, right?
Then I shutdown two instances (freeing 8 GB RAM) and create a new one  
with 8 GB of RAM, the compute node is full again (assuming all  
instances actually consume all of their RAM).
Now I boot one of the shutdown instances again, the compute node would  
require additional 4 GB of RAM for that instance, and this would lead  
to OOM, isn't that correct? So a ratio of 1.0 would not prevent that  
from happening, would it?


Regards,
Eugen


Zitat von Jay Pipes :


On 08/30/2018 10:54 AM, Eugen Block wrote:

Hi Jay,

You need to set your ram_allocation_ratio nova.CONF option to 1.0  
if you're running into OOM issues. This will prevent overcommit of  
memory on your compute nodes.


I understand that, the overcommitment works quite well most of the time.

It just has been an issue twice when I booted an instance that had  
been shutdown a while ago. In the meantime there were new instances  
created on that hypervisor, and this old instance caused the OOM.


I would expect that with a ratio of 1.0 I would experience the same  
issue, wouldn't I? As far as I understand the scheduler only checks  
at instance creation, not when booting existing instances. Is that  
a correct assumption?


To echo what cfriesen said, if you set your allocation ratio to 1.0,  
the system will not overcommit memory. Shut down instances consume  
memory from an inventory management perspective. If you don't want  
any danger of an instance causing an OOM, you must set your  
ram_allocation_ratio to 1.0.


The scheduler doesn't really have anything to do with this.

Best,
-jay







Re: [Openstack] [nova] Nova-scheduler: when are filters applied?

2018-08-30 Thread Eugen Block

Hi Jay,

You need to set your ram_allocation_ratio nova.CONF option to 1.0 if  
you're running into OOM issues. This will prevent overcommit of  
memory on your compute nodes.


I understand that, the overcommitment works quite well most of the time.

It just has been an issue twice when I booted an instance that had  
been shutdown a while ago. In the meantime there were new instances  
created on that hypervisor, and this old instance caused the OOM.


I would expect that with a ratio of 1.0 I would experience the same  
issue, wouldn't I? As far as I understand the scheduler only checks at  
instance creation, not when booting existing instances. Is that a  
correct assumption?


Regards,
Eugen


Zitat von Jay Pipes :


On 08/30/2018 10:19 AM, Eugen Block wrote:

When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing  
instances. But what about existing instances that have been  
shutdown and in the meantime more instances on the same hypervisor  
have been launched?


When you start one of the pre-existing instances and even with RAM  
overcommitment you can end up with an OOM-Killer resulting in  
forceful shutdowns if you reach the limits. Is there something I've  
been missing or maybe a bad configuration of my scheduler filters?  
Or is it the admin's task to keep an eye on the load?


I'd appreciate any insights or pointers to something I've missed.


You need to set your ram_allocation_ratio nova.CONF option to 1.0 if  
you're running into OOM issues. This will prevent overcommit of  
memory on your compute nodes.
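
For reference, a minimal sketch of that setting (section and file path are
assumed from the standard nova layout):

# /etc/nova/nova.conf
[DEFAULT]
ram_allocation_ratio = 1.0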


Best,
-jay








[Openstack] [nova] Nova-scheduler: when are filters applied?

2018-08-30 Thread Eugen Block

Sorry, I was too quick with the send button...

Hi *,

I posted my question in [1] a week ago, but no answer yet.

When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing
instances. But what about existing instances that have been shutdown
and in the meantime more instances on the same hypervisor have been
launched?

When you start one of the pre-existing instances and even with RAM
overcommitment you can end up with an OOM-Killer resulting in
forceful shutdowns if you reach the limits. Is there something I've
been missing or maybe a bad configuration of my scheduler filters? Or
is it the admin's task to keep an eye on the load?

I'd appreciate any insights or pointers to something I've missed.

Regards,
Eugen

[1]  
https://ask.openstack.org/en/question/115812/nova-scheduler-when-are-filters-applied/







[Openstack] [nova]

2018-08-30 Thread Eugen Block

Hi *,

I posted my question in [1] a week ago, but no answer yet.

When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing  
instances. But what about existing instances that have been shutdown  
and in the meantime more instances on the same hypervisor have been  
launched?


When you start one of the pre-existing instances and even with RAM  
overcommitment you can end up with an OOM-Killer resulting in forceful  
shutdowns if you reach the limits. Is there something I've been  
missing or maybe a bad configuration of my scheduler filters? Or is it  
the admin's task to keep an eye on the load?


I'd appreciate any insights or pointers to something I've missed.

Regards,
Eugen

[1]  
https://ask.openstack.org/en/question/115812/nova-scheduler-when-are-filters-applied/





Re: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-29 Thread Eugen Block


Hi Chris,

I can't seem to reproduce your issue. What OpenStack release are you using?


openstack volume backup create --name backup-1 --force volume-foo
openstack volume backup create --name backup-2 --force volume-foo
openstack volume backup create --name backup-3 --force volume-foo
Cinder reports the following via `volume backup show`:
- backup-1 is not an incremental backup, but backup-2 and backup-3 are
(`is_incremental`).
- All but the latest backup have dependent backups (`has_dependent_backups`).


If I don't create the backups with the --incremental flag, they're all  
independent and don't have dependent backups:


---cut here---
(openstack) volume backup create --name backup1 --force  
51c18b65-db03-485e-98fd-ccb0f0c2422d
(openstack) volume backup create --name backup2 --force  
51c18b65-db03-485e-98fd-ccb0f0c2422d


(openstack) volume backup show backup1
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | images                               |
| created_at            | 2018-08-29T09:33:42.00               |
| data_timestamp        | 2018-08-29T09:33:42.00               |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 8c9b20a5-bf31-4771-b8db-b828664bb810 |
| is_incremental        | False                                |
| name                  | backup1                              |
| object_count          | 0                                    |
| size                  | 2                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-29T09:34:14.00               |
| volume_id             | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

(openstack) volume backup show backup2
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | images                               |
| created_at            | 2018-08-29T09:34:20.00               |
| data_timestamp        | 2018-08-29T09:34:20.00               |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 9de60042-b4b6-478a-ac4d-49bf1b00d297 |
| is_incremental        | False                                |
| name                  | backup2                              |
| object_count          | 0                                    |
| size                  | 2                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-29T09:34:52.00               |
| volume_id             | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

(openstack) volume backup delete backup1
(openstack) volume backup list
+--------------------------------------+---------+-------------+-----------+------+
| ID                                   | Name    | Description | Status    | Size |
+--------------------------------------+---------+-------------+-----------+------+
| 9de60042-b4b6-478a-ac4d-49bf1b00d297 | backup2 | None        | available |    2 |
+--------------------------------------+---------+-------------+-----------+------+

(openstack) volume backup create --name backup-inc1 --incremental  
--force 51c18b65-db03-485e-98fd-ccb0f0c2422d

+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 79e2f71b-1c3b-42d1-8582-4934568fea80 |
| name  | backup-inc1                          |
+-------+--------------------------------------+

(openstack) volume backup create --name backup-inc2 --incremental  
--force 51c18b65-db03-485e-98fd-ccb0f0c2422d

+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | e1033d2a-f2c2-409a-880a-9630e45f1312 |
| name  | backup-inc2                          |
+-------+--------------------------------------+

# Now backup2 is the base backup, it has dependents now
(openstack) volume backup show 9de60042-b4b6-478a-ac4d-49bf1b00d297

Re: [Openstack] Error Neutron: RTNETLINK answers: File exists

2018-08-09 Thread Eugen Block

Sorry, somehow I didn't notice your answer and forgot the thread.

The problem is a wireless TP-link router with the OpenWRT firmware  
configured with bridge.
When I connect this wireless router to the switch with the OpenStack  
cloud servers, the Linux bridge agent starts to make an error and I  
lose access to the VMs.


It's good you have a hint to the cause, but I'm afraid I can't help  
you with this. Hopefully someone with more expertise will be able to  
point you in the right direction.


Regards


Zitat von Marcio Prado :


Guys, I figured out part of the problem.

The problem is a wireless TP-link router with the OpenWRT firmware  
configured with bridge.


When I connect this wireless router to the switch with the OpenStack  
cloud servers, the Linux bridge agent starts to make an error and I  
lose access to the VMs.


It is not duplicate IP or DHCP.

Does anyone have any idea what it is?




Em 27-07-2018 08:32, Marcio Prado escreveu:

Thanks for the help Eugen,

This log is from the linuxbridge of the controller node. Compute nodes
are not logging errors.

Follows the output of the "openstack network agent list"

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 590f5a6d-379b-4e8d-87ec-f1060cecf230 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 88fb87c9-4c03-4faa-8286-95be3586fc94 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| b982382e-438c-46a9-8d4e-d58d554150fd | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| c7a9ba41-1fae-46cd-b61f-30bcacb0a4e8 | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| c9a1ea4b-2d5d-4bda-9849-cd6e302a2917 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| e690d4b9-9285-4ddd-a87a-f28ea99d9a73 | Linux bridge agent | compute3   | None              | False | UP    | neutron-linuxbridge-agent |
| fdd8f615-f5d6-4100-826e-59f8270df715 | Linux bridge agent | compute2   | None              | False | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

compute2 and compute3 are turned off intentionally.

Log compute1

/var/log/neutron/neutron-linuxbridge-agent.log

2018-07-27 07:43:57.242 1895 INFO neutron.common.config [-]
/usr/bin/neutron-linuxbridge-agent version 10.0.0
2018-07-27 07:43:57.243 1895 INFO
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[-] Interface mappings: {'provider': 'eno3'}
2018-07-27 07:43:57.243 1895 INFO
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[-] Bridge mappings: {}
2018-07-27 07:44:00.954 1895 INFO
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[-] Agent initialized successfully, now running...
2018-07-27 07:44:01.582 1895 INFO
neutron.plugins.ml2.drivers.agent._common_agent
[req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] RPC agent_id:
lb525400d52f59
2018-07-27 07:44:01.589 1895 INFO
neutron.agent.agent_extensions_manager
[req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Loaded agent
extensions: []
2018-07-27 07:44:01.716 1895 INFO
neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent
Agent has just been revived. Doing a full sync.
2018-07-27 07:44:01.778 1895 INFO
neutron.plugins.ml2.drivers.agent._common_agent
[req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge
agent Agent RPC Daemon Started!
2018-07-27 07:44:01.779 1895 INFO
neutron.plugins.ml2.drivers.agent._common_agent
[req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge
agent Agent out of sync with plugin!
2018-07-27 07:44:02.418 1895 INFO
neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect
[req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Clearing orphaned
ARP spoofing entries for devices []


I'm using this OpenStack cloud to run my master's experiment. I turned
off all nodes, and after a few days I called again and from that the
VMs were not remotely accessible.

So I delete existing networks and re-create. It was in an attempt to
solve the problem.

Here is an attached image. Neutron is creating multiple interfaces on
the 10.0.0.0 network on the router.


Em 27-07-2018 05:05, Eugen Block escreveu:

Hi,

is there anything in the linuxbridge-agent logs on control and/or
compute node(s)?
Which neutron services don't start? Can you paste "openstack network
agent list" output?

The importa

Re: [Openstack] Adding new Hard disk to Compute Node

2018-08-09 Thread Eugen Block
 4.00 MiB
  Total PE              95109
  Free PE               4
  Allocated PE          95105
  PV UUID               BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR

  "/dev/sdb1" is a new physical volume of "5.46 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               5.46 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443

root@h020:~# vgextend /dev/h020-vg /dev/sdb1
  Volume group "h020-vg" successfully extended
root@h020:~# vgdisplay
  --- Volume group ---
  VG Name               h020-vg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.82 TiB
  PE Size               4.00 MiB
  Total PE              1525900
*  Alloc PE / Size       95105 / 371.50 GiB*
*  Free  PE / Size       1430795 / 5.46 TiB*
  VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U

root@h020:~# service nova-compute restart
root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME                FSTYPE        SIZE MOUNTPOINT LABEL
sda                               5.5T
├─sda1              vfat          500M            ESP
├─sda2              vfat          100M            DIAGS
└─sda3              vfat            2G            OS
sdb                               5.5T
└─sdb1              LVM2_member   5.5T
sdk                               372G
├─sdk1              ext2          487M /boot
├─sdk2                              1K
└─sdk5              LVM2_member 371.5G
  ├─h020--vg-root   ext4        370.6G /
  └─h020--vg-swap_1 swap          976M [SWAP]
root@h020:~# pvscan
  PV /dev/sdk5   VG h020-vg         lvm2 [371.52 GiB / 16.00 MiB free]
  PV /dev/sdb1   VG h020-vg         lvm2 [5.46 TiB / 5.46 TiB free]
  Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0   ]
root@h020:~# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  h020-vg   2   2   0 wz--n- 5.82t 5.46t
root@h020:~# vi /var/log/nova/nova-compute.log
root@h020:~# 


On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block mailto:ebl...@nde.ag>> wrote:

Okay, I'm really not sure if I understand your setup correctly.

Server does not add them automatically, I tried to mount them.
I tried the way discussed on the page with /dev/sdb only. The other
hard disks I have mounted myself. Yes, I can see them in the lsblk
output as below


What do you mean with "tried with /dev/sdb"? I assume this is a
fresh setup and Cinder didn't work yet, am I right?
The new disks won't be added automatically to your cinder
configuration, if that's what you expected. You'll have to create
new physical volumes and then extend the existing VG to use new disks.

In Nova-Compute logs I can only see main hard disk shown in
the the
complete phys_disk, it was supposed to show more  phys_disk
available
atleast 5.8 TB if only /dev/sdb is added as per my understand
(May be I am
thinking it in the wrong way, I want increase my compute node
disk size to
launch more VMs)


If you plan to use cinder volumes as disks for your instances, you
don't need much space in /var/lib/nova/instances but more space
available for cinder, so you'll need to grow the VG.

Regards


Zitat von Jay See mailto:jayachander...@gmail.com>>:

Hai,

Thanks for a quick response.

- what do you mean by "disks are not added"? Does the server
recognize
them? Do you see them in the output of "lsblk"?
Server does not add them automatically, I tried to mount them.
I tried the way discussed on the page with /dev/sdb only. The other
hard disks I have mounted myself. Yes, I can see them in the lsblk
output as below:
root@h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME                                          FSTYPE        SIZE
MOUNTPOINT                   LABEL
sda                                                         5.5T
├─sda1                                        vfat          500M
                  ESP
├─sda2                                        vfat          100M
                  DIAGS
└─sda3                                        vfat            2G
                  OS
sdb                                                         5.5T
├─sdb1                                                      5.5T
├─cinder--volumes-cinder--volumes--pool_tmeta                8

Re: [Openstack] Adding new Hard disk to Compute Node

2018-08-08 Thread Eugen Block
w: name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB*
used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[]

- Please describe more precisely what exactly you tried and what exactly
fails.
As explained in the previous point, I want to increase the  phys_disk size
to use the compute node more efficiently. So to add the HD to compute node
I am installing cinder on the compute node to add all the HDs.

I might be doing something wrong.

Thanks and Regards,
Jayachander.

On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block  wrote:


Hi,

there are a couple of questions rising up:

- what do you mean by "disks are not added"? Does the server recognize
them? Do you see them in the output of "lsblk"?
- Do you already have existing physical volumes for cinder (assuming you
deployed cinder with lvm as in the provided link)?
- If the system recognizes the new disks and you deployed cinder with lvm
you can create a new physical volume and extend your existing volume group
to have more space for cinder. Is this a failing step or something else?
- Please describe more precisely what exactly you tried and what exactly
fails.

The failing neutron-l3-agent shouldn't have to do anything with your disk
layout, so it's probably something else.

Regards,
Eugen


Zitat von Jay See :

Hai,


I am installing Openstack Queens on Ubuntu Server.

My server has extra hard disk(s) apart from main hard disk where
OS(Ubuntu)
is running.

(
https://docs.openstack.org/cinder/queens/install/cinder-stor
age-install-ubuntu.html
)
As suggested in cinder (above link), I have been trying to add the new
hard
disk but the other hard disks are not getting added.

Can anyone tell me what I am missing to add these hard disks?

Other info: neutron-l3-agent on the controller is not running. Is it related
to this issue? I am thinking it is not.

I am new to Openstack.

~ Jayachander.
--
P  *SAVE PAPER – Please do not print this e-mail unless absolutely
necessary.*











--
​
P  *SAVE PAPER – Please do not print this e-mail unless absolutely
necessary.*







Re: [Openstack] Adding new Hard disk to Compute Node

2018-08-08 Thread Eugen Block

Hi,

there are a couple of questions rising up:

- what do you mean by "disks are not added"? Does the server recognize  
them? Do you see them in the output of "lsblk"?
- Do you already have existing physical volumes for cinder (assuming  
you deployed cinder with lvm as in the provided link)?
- If the system recognizes the new disks and you deployed cinder with  
lvm you can create a new physical volume and extend your existing  
volume group to have more space for cinder. Is this a failing step or  
something else?
- Please describe more precisely what exactly you tried and what  
exactly fails.


The failing neutron-l3-agent shouldn't have to do anything with your  
disk layout, so it's probably something else.


Regards,
Eugen


Zitat von Jay See :


Hai,

I am installing Openstack Queens on Ubuntu Server.

My server has extra hard disk(s) apart from main hard disk where OS(Ubuntu)
is running.

(
https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html
)
As suggested in cinder (above link), I have been trying to add the new hard
disk but the other hard disks are not getting added.

Can anyone tell me what I am missing to add these hard disks?

Other info: neutron-l3-agent on the controller is not running. Is it related
to this issue? I am thinking it is not.

I am new to Openstack.

~ Jayachander.
--
P  *SAVE PAPER – Please do not print this e-mail unless absolutely
necessary.*







Re: [Openstack] OpenStack neutron error

2018-08-02 Thread Eugen Block

Hi,

the description in [1] sounds very similar to your problem and seems  
to be a bug in the docs. Can you check the ports you configured for  
keystone and which ports you have set in neutron configs?
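
For comparison, the Queens install guide configures neutron's
[keystone_authtoken] section roughly as below; treat this as a hedged
reference (the hostname, the port split between auth_uri and auth_url, and
the NEUTRON_PASS placeholder are the guide's defaults, not values from your
deployment), and check it against your actual keystone endpoints:

# neutron.conf -- defaults from the Queens install guide, adjust to your endpoints
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS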


Regards,
Eugen

[1]  
https://ask.openstack.org/en/question/114642/neutron-configuration-errot-failed-to-retrieve-extensions-list-from-network-api/



Zitat von Zufar Dhiyaulhaq :


Hi, I'm trying to install OpenStack Queens from scratch (manually) following
the OpenStack documentation, but I have a problem with Neutron. When I try
to verify with `openstack network agent list`, I get the error `HTTP
exception unknown error`.

When I check the logs on the controller
in `/var/log/neutron/neutron-server.log`, I see this error:

2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors [-] An error occurred during processing the request: GET /v2.0/extensions HTTP$
Accept: application/json
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Type: text/plain
Host: controller:9696
User-Agent: python-neutronclient
X-Auth-Token: *: DiscoveryFailure: Could not determine a suitable URL for the plugin
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors Traceback (most recent call last):
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", lin$
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     response = req.get_response(self.application)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in send
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     application, catch_exc_info=False)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in call$
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     app_iter = application(self.environ, start_response)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     resp = self.call_func(req, *args, **self.kwargs)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in call_func
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     return self.func(req, *args, **kwargs)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     response = self.process_request(req)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     resp = super(AuthProtocol, self).process_request(request)
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors   File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors     allow_expired=allow_expired)

Re: [Openstack] [Horizon] Horizon responds very slowly

2018-07-31 Thread Eugen Block
Interesting, the HA guide [2] states that memcached should be  
configured with the list of hosts:


Access to Memcached is not handled by HAProxy because replicated  
access is currently in an experimental state.
Instead, OpenStack services must be supplied with the full list of  
hosts running Memcached.


On the other hand, it wouldn't be the only incorrect statement I've come  
across in that guide, so maybe this is just outdated information  
(although the page was last modified on July 25th). Which  
OpenStack version are you deploying?


Regards,
Eugen

[2] https://docs.openstack.org/ha-guide/controller-ha-memcached.html

Zitat von "gao.song" :


Further report!
We finally figured it out.
It was caused by the original memcache_servers configuration, which led  
to keys being loaded from the powered-off controller.

Configuration example:
[cache]
backend = oslo_cache.memcache_pool
enabled = True
memcache_servers = controller1:11211,controller2:11211,controller3:11211
After changing the setting to controller_vip:11211, the problem was solved.






At 2018-07-24 02:35:09, "Ivan Kolodyazhny"  wrote:

Hi,


It could be a common issue between horizon and keystone.


As a temporary workaround for this, you can apply this [1] patch to  
redirect admin user to the different page.



[1] https://review.openstack.org/#/c/577090/


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Jul 19, 2018 at 11:47 AM, Eugen Block  wrote:
Hi,

we also had to deal with a slow dashboard; in our case it was a  
misconfiguration of memcached [0], [1].


Check with your configuration and make sure you use oslo.cache.

Hope this helps!

[0] https://bugs.launchpad.net/keystone/+bug/158
[1]  
https://ask.openstack.org/en/question/102611/how-to-configure-memcache-in-openstack-ha/



Zitat von 高松 :


After killing one node of a three-node cluster,
I found that Horizon, backed by Keystone with the token provider set to  
fernet, responds very slowly.

Admin login takes at least 20 seconds.
And a verbose CLI command shows that authentication is stuck for  
about 5 seconds.

Any help will be appreciated.





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack







Re: [Openstack] URGENT - Live migration error DestinationDiskExists

2018-07-26 Thread Eugen Block
I assume /var/lib/nova/ uses shared storage and is mounted on the  
compute node(s)? It sounds like the directory already existed before  
it was configured to use shared storage or something. I believe I had  
a similar issue some time ago, I can't remember every detail, but  
although I believed that /var/lib/nova was mounted it actually was  
not. So make sure your configuration is correct, maybe delete the  
respective directory. Since you are using ceph as backend there isn't  
any data except a console.log file in that directory, so it should be  
safe. But you'll have to double check that before deleting anything,  
of course!
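As a quick sanity check for the mount assumption above (a minimal sketch; the path comes from the error message, and whether it must be a mount point at all depends on your deployment), the Python standard library can tell whether the directory is actually a mounted filesystem rather than a plain local directory:

```python
import os

def is_shared_storage_mounted(path="/var/lib/nova/instances"):
    """Return True only if `path` itself is a mount point, i.e. the
    shared storage is really mounted there and not just a local dir."""
    return os.path.ismount(path)

# Sanity check: "/" is always a mount point on a POSIX system.
print(is_shared_storage_mounted("/"))  # True
```

Run this on each compute node with the real instances path; a `False` result would explain the stale directory the error complains about.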


Regards


Zitat von Satish Patel :


I am using Pike 16.0.15 and seeing the following error during live
migration. I am using Ceph for shared storage. Any idea what
is going on?

2018-07-25 13:15:00.773 52312 ERROR oslo_messaging.rpc.server
DestinationDiskExists: The supplied disk path
(/var/lib/nova/instances/5f56bc2b-74c8-47c1-834c-00796fafe6ae) already
exists, it is expected not to exist.








Re: [Openstack] [Horizon] Horizon responds very slowly

2018-07-19 Thread Eugen Block

Hi,

we also had to deal with a slow dashboard; in our case it was a  
misconfiguration of memcached [0], [1].


Check with your configuration and make sure you use oslo.cache.

Hope this helps!

[0] https://bugs.launchpad.net/keystone/+bug/158
[1]  
https://ask.openstack.org/en/question/102611/how-to-configure-memcache-in-openstack-ha/



Zitat von 高松 :


After killing one node of a three-node cluster,
I found that Horizon, backed by Keystone with the token provider set to  
fernet, responds very slowly.

Admin login takes at least 20 seconds.
And a verbose CLI command shows that authentication is stuck for  
about 5 seconds.

Any help will be appreciated.







Re: [Openstack] How to make Neutron select a specific subnet when boot an instance

2018-07-10 Thread Eugen Block
There has been some work on this [2], but it didn't make it into the  
Kilo release (the blueprint was abandoned), and I don't see it in later releases either.


[2]  
https://blueprints.launchpad.net/nova/+spec/selecting-subnet-when-creating-vm



Zitat von Hang Yang :


Hi there,

I have a question about choosing a specific subnet when booting a VM. My
OpenStack cluster is on Queens and I have multiple subnets in one network.
What I want is that when I issue a boot command, the instance only gets an IP
from one subnet by default. I know there is a way to achieve that by creating a
port with --fixed-ip subnet=xxx and then passing the port ID to the boot command.
I'm wondering if there is another way that does not require manually creating the
port. Is there any configuration to make Neutron pick only one subnet from a
network by default at boot?

Thanks for any help.

Best regards,
Hang







Re: [Openstack] How to make Neutron select a specific subnet when boot an instance

2018-07-10 Thread Eugen Block

Hi,

depending on your workflow there would be a way with scripting [0],  
not sure if this would be a suitable approach for you.
There also has been a blueprint [1] for the selection of subnets  
during instance creation since Juno, but I can't find anything about  
an implementation.


Regards,
Eugen

[0]  
https://ask.openstack.org/en/question/95573/how-to-select-a-subnet-when-booting-an-instance/
[1]  
https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/selecting-subnet-when-creating-vm.html
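For reference, the port-based workaround discussed in this thread looks like this with the openstack CLI (network, subnet, flavor and image names are placeholders for your own resources):

```shell
# create a port with a fixed IP on the desired subnet
PORT_ID=$(openstack port create --network mynet \
          --fixed-ip subnet=mysubnet vm1-port -f value -c id)

# boot the instance attached to that pre-created port
openstack server create --flavor m1.small --image cirros \
          --nic port-id="$PORT_ID" vm1
```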



Zitat von Hang Yang :


Hi there,

I have a question about choosing a specific subnet when booting a VM. My
OpenStack cluster is on Queens and I have multiple subnets in one network.
What I want is that when I issue a boot command, the instance only gets an IP
from one subnet by default. I know there is a way to achieve that by creating a
port with --fixed-ip subnet=xxx and then passing the port ID to the boot command.
I'm wondering if there is another way that does not require manually creating the
port. Is there any configuration to make Neutron pick only one subnet from a
network by default at boot?

Thanks for any help.

Best regards,
Hang







Re: [Openstack] can't resize server

2018-06-15 Thread Eugen Block

Hi,

did you find a solution yet?

If not, I tried to rebuild your situation with a test instance.  
Although the environment and the storage backend are different, I  
believe it still applies to your issue, at least in a general way.

I have an instance booted from volume (size 1 GB). Resizing the  
instance via the Horizon dashboard appears to work: it shows the new  
flavor with a disk size of 8 GB. But the volume has not been resized,  
so the instance won't notice any change.
To accomplish that, I had to shut down the VM, set the volume state to  
available (you can't detach a root disk volume), resize the volume to  
the size of the flavor, and then boot the VM again; now its disk has  
the desired size.


control:~ # openstack server stop test1
control:~ # openstack volume set --state available b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack volume set --size 8 b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack volume set --state in-use b832f798-e0de-4338-836a-07375f3ae3a0
control:~ # openstack server start test1

I should mention that I use live-migration, so during resize of an  
instance it migrates to another compute node.

Hope this helps!

Regards
Eugen


Zitat von Manuel Sopena Ballesteros :


Dear openstack community,

I have a packstack all-in-one environment and I would like to resize  
one of the VMs. It seems the resize process fails due to an  
issue with Cinder.


NOTE: the vm boots from volume and not from image

This is the vm I am trying to resize

[root@openstack ~(keystone_admin)]# openstack server show 7292a929-54d9-4ce6-a595-aaf93a2be320
+--------------------------------------+------------------------+
| Field                                | Value                  |
+--------------------------------------+------------------------+
| OS-DCF:diskConfig                    | MANUAL                 |
| OS-EXT-AZ:availability_zone          | nova                   |
| OS-EXT-SRV-ATTR:host                 | openstack.localdomain  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | openstack.localdomain  |
| OS-EXT-SRV-ATTR:instance_name        | instance-005f          |
| OS-EXT-STS:power_state               | Shutdown               |
| OS-EXT-STS:task_state                | None                   |
| OS-EXT-STS:vm_state                  | error                  |
| OS-SRV-USG:launched_at               | 2018-05-14T07:24:00.00 |
| OS-SRV-USG:terminated_at             | None                   |

Re: [Openstack] Upgrade Nova Ocata -> Pike placement section

2018-06-05 Thread Eugen Block

Hi,

you can check the install guide [1] for further information.

Here's what it says:

---cut here---
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
---cut here---

Both the controller and the compute nodes need that section.
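After filling in the section and restarting the nova services, the placement setup can be sanity-checked from the controller (this command has existed since Ocata and includes a placement API check):

```shell
# run on the controller node
nova-status upgrade check
```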

Regards,
Eugen

[1] https://docs.openstack.org/nova/pike/install/controller-install-obs.html


Zitat von Gregory Orange :


Hi everyone,

I'm looking at upgrading Nova from Ocata to Pike and reading  
https://docs.openstack.org/releasenotes/nova/pike.html: "Ensure the  
[placement] section of nova.conf for the nova-conductor service is  
filled in." Can I get some input on what that means?


That is, I assume it needs to be our nova controllers, not compute  
nodes, and nova.conf [placement] section... but which settings need  
values? All of them?


Thanks,
Greg.








Re: [Openstack] Database (Got timeout reading communication packets)

2018-05-14 Thread Eugen Block
While I was working on something else I remembered the error messages  
you described; I have them, too. It's a lab environment on hardware  
nodes with a sufficient network connection, and since we had to debug  
network issues before, we can rule out network problems in our case.
I found a website [1] on tracking down Galera issues and tried to apply  
those steps; it seems that the OpenStack code doesn't close its  
connections properly, hence the aborted connections.
I'm not sure if this is the correct interpretation, but since I haven't  
faced any problems related to the OpenStack databases, I decided to  
ignore these messages as long as the OpenStack environment works  
properly.


Regards,
Eugen

[1] https://www.fromdual.ch/abbrechende-mariadb-mysql-verbindungen
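If the interpretation above is right and clients simply hold idle connections until the server drops them, the warnings can also be reduced on the server side; a sketch for my.cnf (the values are illustrative, not recommendations):

```ini
[mysqld]
# how long an idle client connection may live before the server drops it
wait_timeout        = 3600
interactive_timeout = 3600
# log aborted connections with more detail (MariaDB)
log_warnings        = 2
```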


Zitat von Torin Woltjer :


are these interruptions occasionally or do they occur all the time? Is
this a new issue or has this happened before?


This is a 3 node Galera cluster on 3 KVM virtual machines. The errors are
constantly printing in the logs, and no node is excluded from receiving the
errors. I don't know whether they had always been there or not, but I
noticed them after an update.


Does the openstack environment work as expected despite these messages
or do you experience interruptions in the services?


The openstack services operate normally, the dashboard is fairly slow, but it
always has been.


I would check the network setup first (I have read about loose cables
in different threads...), maybe run some ping tests between the
machines to see if there's anything weird. Since you mention different
services reporting these interruptions this seems like a network issue
to me.


The hosts are all networked with bonded 10G SFP+ cables networked via a
switch. Pings between the VMs seem fine. If I were to guess, any networking
problem would be between the guest and host due to libvirt. Anything that I
should be looking for there?







Re: [Openstack] Database (Got timeout reading communication packets)

2018-05-14 Thread Eugen Block

Hi,

are these interruptions occasionally or do they occur all the time? Is  
this a new issue or has this happened before?
Does the openstack environment work as expected despite these messages  
or do you experience interruptions in the services?


I would check the network setup first (I have read about loose cables  
in different threads...), maybe run some ping tests between the  
machines to see if there's anything weird. Since you mention different  
services reporting these interruptions this seems like a network issue  
to me.


Regards,
Eugen


Zitat von Torin Woltjer :

Just the other day I noticed a bunch of errors spewing from the  
mysql service. I've spent quite a bit of time trying to track this  
down, and I haven't had any luck figuring out why this is happening.  
The following line is repeatedly spewed in the service's journal.


May 08 11:13:47 UBNTU-DBMQ2 mysqld[20788]: 2018-05-08 11:13:47  
140127545740032 [Warning] Aborted connection 211 to db: 'nova_api'  
user: 'nova' host: '192.168.116.21' (Got timeout reading  
communication packets)


It isn't always nova_api, it's happening with all of the openstack  
projects. And either of the controller node's ip addresses.


The database is a mariadb galera cluster. Removing haproxy has no  
effect. The output only occurs on the node receiving the  
connections; with haproxy it is multiple nodes, otherwise it is  
whatever node I specify as database in my controllers' host file's.







Re: [Openstack] about cloud-init question

2018-04-23 Thread Eugen Block

I'm glad I could help!


Zitat von "Huang, Haibin" <haibin.hu...@intel.com>:


Hi
With the config below I can create both /root/hhb.gz and /home/ubuntu/config.
Thank you very much!

#cloud-config
write_files:
-   encoding: b64
content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==
owner: root:root
path: /root/hhb.gz
permissions: '0644'

runcmd:
-   mkdir -p /home/ubuntu/config
-   mkdir -p /home/ubuntu/config/hhb


-----Original Message-----
From: Eugen Block [mailto:ebl...@nde.ag]
Sent: Monday, April 23, 2018 2:58 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] about cloud-init question

Hi,

we use this piece of script to pass salt data to instances and execute the
respective commands to start the salt-minion:

---cut here---
#cloud-config
write_files:
   # Minion Konfiguration
   - content: |
   master: 
   id: 
[...]
 owner: root:root
 path: /etc/salt/minion.d/init.conf
 permissions: '0644'

   # Minion Private-Key
   - content: |
   -BEGIN RSA PRIVATE KEY-
[...]
   -END RSA PRIVATE KEY-
   owner: root:root
   path: /etc/salt/pki/minion/minion.pem
   permissions: '0400'

[...]

# Enabled und Startet den Minion
runcmd:
   - rm -f /etc/machine-id
   - systemd-machine-id-setup
   - [ systemctl, enable, salt-minion.service ]
   - [ systemctl, start, --no-block, salt-minion.service ]
   - [ systemctl, daemon-reload ]
---cut here---

This both writes the desired files and also executes required  
commands. We use
this on openSUSE machines, I'm not sure if this differs in your  
environment, but

worth a shot, I guess.

Regards,
Eugen


Zitat von "Huang, Haibin" <haibin.hu...@intel.com>:

> Hi All,
>
> I have a problem about cloud-init.
> I want to both transfer files and execute script. So I give below
> script to user-data when I create instance.
> #cloud-config
> write_files:
> -   encoding: b64
> content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==
> owner: root:root
> path: /root/hhb.gz
> permissions: '0644'
>
> #!/bin/bash
> mkdir -p /home/ubuntu/config
>
> but, I can't get /root/hhb.gz and /home/Ubuntu/config.
> If I separate transfer files and execute script. It is ok.
> Any idea?
>
> Below is my debug info
>
> ubuntu@onap-hhb7:~$ sudo cloud-init --version
>
> sudo: unable to resolve host onap-hhb7
>
> cloud-init 0.7.5
>
>
>
> security-groupsubuntu@onap-hhb7:~$ curl
> http://169.254.169.254/2009-04-04/user-data
>
> #cloud-config
>
> write_files:
>
> -   encoding: b64
>
> content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==
>
> owner: root:root
>
> path: /root/hhb.gz
>
> permissions: '0644'
>
>
>
> #!/bin/bash
>
> mkdir -p /home/ubuntu/config
>
>
>
> ubuntu@onap-hhb7:~$ sudo ls /root/ -a
>
> .  ..  .bashrc  .profile  .ssh
>
>
>
> ubuntu@onap-hhb7:/var/lib/cloud/instance$ ls
>
> boot-finished datasource  obj.pkl  sem
> user-data.txt.i  vendor-data.txt.i
>
> cloud-config.txt  handlersscripts  user-data.txt  vendor-data.txt
>
> ubuntu@onap-hhb7:/var/lib/cloud/instance$ sudo cat user-data.txt
>
> sudo: unable to resolve host onap-hhb7
>
> #cloud-config
>
> write_files:
>
> -   encoding: b64
>
> content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==
>
> owner: root:root
>
> path: /root/hhb.gz
>
> permissions: '0644'
>
>
>
> #!/bin/bash
>
> mkdir -p /home/ubuntu/config
>
>
>
> --
> -
> Huang.haibin
> 11628530
> 86+18106533356











Re: [Openstack] about cloud-init question

2018-04-23 Thread Eugen Block

Hi,

we use this piece of script to pass salt data to instances and execute  
the respective commands to start the salt-minion:


---cut here---
#cloud-config
write_files:
  # Minion Konfiguration
  - content: |
  master: 
  id: 
[...]
owner: root:root
path: /etc/salt/minion.d/init.conf
permissions: '0644'

  # Minion Private-Key
  - content: |
  -BEGIN RSA PRIVATE KEY-
[...]
  -END RSA PRIVATE KEY-
  owner: root:root
  path: /etc/salt/pki/minion/minion.pem
  permissions: '0400'

[...]

# Enabled und Startet den Minion
runcmd:
  - rm -f /etc/machine-id
  - systemd-machine-id-setup
  - [ systemctl, enable, salt-minion.service ]
  - [ systemctl, start, --no-block, salt-minion.service ]
  - [ systemctl, daemon-reload ]
---cut here---

This both writes the desired files and also executes required  
commands. We use this on openSUSE machines, I'm not sure if this  
differs in your environment, but worth a shot, I guess.


Regards,
Eugen


Zitat von "Huang, Haibin" :


Hi All,

I have a problem about cloud-init.
I want to both transfer files and execute a script, so I pass the  
script below as user-data when I create an instance.

#cloud-config
write_files:
-   encoding: b64
content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==
owner: root:root
path: /root/hhb.gz
permissions: '0644'

#!/bin/bash
mkdir -p /home/ubuntu/config

But I can't get /root/hhb.gz or /home/ubuntu/config.
If I separate the file transfer and the script execution, it works.
Any idea?

Below is my debug info

ubuntu@onap-hhb7:~$ sudo cloud-init --version

sudo: unable to resolve host onap-hhb7

cloud-init 0.7.5



security-groupsubuntu@onap-hhb7:~$ curl   
http://169.254.169.254/2009-04-04/user-data


#cloud-config

write_files:

-   encoding: b64

content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==

owner: root:root

path: /root/hhb.gz

permissions: '0644'



#!/bin/bash

mkdir -p /home/ubuntu/config



ubuntu@onap-hhb7:~$ sudo ls /root/ -a

.  ..  .bashrc  .profile  .ssh



ubuntu@onap-hhb7:/var/lib/cloud/instance$ ls

boot-finished datasource  obj.pkl  sem 
user-data.txt.i  vendor-data.txt.i


cloud-config.txt  handlersscripts  user-data.txt  vendor-data.txt

ubuntu@onap-hhb7:/var/lib/cloud/instance$ sudo cat user-data.txt

sudo: unable to resolve host onap-hhb7

#cloud-config

write_files:

-   encoding: b64

content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBw==

owner: root:root

path: /root/hhb.gz

permissions: '0644'



#!/bin/bash

mkdir -p /home/ubuntu/config



---
Huang.haibin
11628530
86+18106533356







Re: [Openstack] Domain not found error

2018-04-16 Thread Eugen Block

Is there a way to undo the keystone config and start over again? I want to
start afresh.


The easiest way is probably to drop the keystone database and recreate  
it, then do the bootstrapping again. I believe this should suffice  
since keystone is essential to all other services, so you wouldn't do  
too much damage.
Another way would be to login to your database and change the  
respective values, but since I don't know what exactly the bootstrap  
command does I would not recommend this option.
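A sketch of that reset, assuming a default Ubuntu install with the usual database name and the placeholder passwords from the install guide (this wipes all existing keystone data):

```shell
# WARNING: destroys all keystone data
mysql -u root -p -e "DROP DATABASE keystone; CREATE DATABASE keystone;"
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```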



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi Eugen,
I tried pike initially. When that didn't work, I thought I'll use the
documentation for queens.
Is there a way to undo the keystone config and start over again? I want to
start afresh.

On Mon, Apr 16, 2018 at 3:24 PM, Eugen Block <ebl...@nde.ag> wrote:


Your first email pointed to the Pike install guide, which mentions
admin-url port 35357.

I'm trying to install keystone for my swift cluster.

I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/



So now you're trying to install queens release? You should stay consistent
and use only one guide to follow, although it seems like the ubuntu guide
is wrong at this point. The other guides for Q (RedHat and SUSE) point to
the admin-url port 35357, not port 5000. And the ubuntu guide for Pike
release also points to 35357 again, so this is probably a bug.

You should fix this prior to any further steps.



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:

Here is the documentation page I followed:

https://docs.openstack.org/keystone/queens/install/keystone-
install-ubuntu.html

On Mon, Apr 16, 2018 at 3:14 PM, Shyam Prasad N <nspmangal...@gmail.com>
wrote:

Hi Eugen,


Ignore the different IPs. I had tried keystone install on two different
systems. The old admin-rc script was from the other node.

As per the port numbers, I followed what was in the documentation:
Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

Regards,
Shyam

On Mon, Apr 16, 2018 at 2:57 PM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


I found some differences between your bootstrap command and your
admin-rc
credentials:

export OS_AUTH_URL=http://20.20.20.7:35357/v3


--bootstrap-admin-url http://20.20.20.8:5000/v3/



You use two different IPs for your controller node, this can't work.
Another thing is, you usually have to create one admin endpoint (port
35357) and a public endpoint (port 5000), you use the public port for
both
endpoints. This could work, of course, although not recommended. But
then
you have to change your admin-rc credentials respectively. They should
reflect the configuration you bootstrapped with keystone-manage.

Change your admin-rc to point to the correct IP and the correct port,
then retry the domain list command after sourcing the credentials.



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:

Hi,



Sorry for the late reply. Was out for a while.

# openstack domain list
The request you have made requires authentication. (HTTP 401)
(Request-ID:
req-fd20ec4d-9000-4cfa-9a5c-ba547a11c4c4)

# tail /var/log/keystone/keystone-manage.log
#

# keystone-manage bootstrap --bootstrap-password PASSWORD
--bootstrap-admin-url http://20.20.20.8:5000/v3/
--bootstrap-internal-url
http://20.20.20.8:5000/v3/ --bootstrap-public-url
http://20.20.20.8:5000/v3/
--bootstrap-region-id RegionOne
2018-04-15 22:29:39.456 18518 WARNING keystone.assignment.core [-]
Deprecated: Use of the identity driver config to automatically
configure
the same assignment driver has been deprecated, in the "O" release, the
assignment driver will need to be expicitly configured if different
than
the default (SQL).
2018-04-15 22:29:39.585 18518 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2018-04-15 22:29:39.621 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Project admin
already
exists, skipping creation.
2018-04-15 22:29:39.640 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin
already
exists, skipping creation.
2018-04-15 22:29:39.670 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Role admin
exists,
skipping creation.
2018-04-15 22:29:39.822 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin
already
has
admin on admin.
2018-04-15 22:29:39.827 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Region RegionOne
exists, skipping creation.
2018-04-15 22:29:39.834 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping admin
endpoint as already created
2018-04-15 22:29:39.835 18518

Re: [Openstack] Domain not found error

2018-04-16 Thread Eugen Block
Your first email pointed to the Pike install guide, which mentions  
admin-url port 35357.



I'm trying to install keystone for my swift cluster.
I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/


So now you're trying to install queens release? You should stay  
consistent and use only one guide to follow, although it seems like  
the ubuntu guide is wrong at this point. The other guides for Q  
(RedHat and SUSE) point to the admin-url port 35357, not port 5000.  
And the ubuntu guide for Pike release also points to 35357 again, so  
this is probably a bug.


You should fix this prior to any further steps.


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Here is the documentation page I followed:
https://docs.openstack.org/keystone/queens/install/keystone-install-ubuntu.html

On Mon, Apr 16, 2018 at 3:14 PM, Shyam Prasad N <nspmangal...@gmail.com>
wrote:


Hi Eugen,

Ignore the different IPs. I had tried keystone install on two different
systems. The old admin-rc script was from the other node.

As per the port numbers, I followed what was in the documentation:
Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

Regards,
Shyam

On Mon, Apr 16, 2018 at 2:57 PM, Eugen Block <ebl...@nde.ag> wrote:


Hi,

I found some differences between your bootstrap command and your admin-rc
credentials:

export OS_AUTH_URL=http://20.20.20.7:35357/v3

--bootstrap-admin-url http://20.20.20.8:5000/v3/



You use two different IPs for your controller node, this can't work.
Another thing is, you usually have to create one admin endpoint (port
35357) and a public endpoint (port 5000), you use the public port for both
endpoints. This could work, of course, although not recommended. But then
you have to change your admin-rc credentials respectively. They should
reflect the configuration you bootstrapped with keystone-manage.

Change your admin-rc to point to the correct IP and the correct port,
then retry the domain list command after sourcing the credentials.
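For example, an admin-rc consistent with the bootstrap command you posted would look like this (the password and the domain/project values are the defaults created by keystone-manage bootstrap):

```shell
export OS_AUTH_URL=http://20.20.20.8:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD=PASSWORD
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
```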



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:

Hi,


Sorry for the late reply. Was out for a while.

# openstack domain list
The request you have made requires authentication. (HTTP 401)
(Request-ID:
req-fd20ec4d-9000-4cfa-9a5c-ba547a11c4c4)

# tail /var/log/keystone/keystone-manage.log
#

# keystone-manage bootstrap --bootstrap-password PASSWORD
--bootstrap-admin-url http://20.20.20.8:5000/v3/
--bootstrap-internal-url
http://20.20.20.8:5000/v3/ --bootstrap-public-url
http://20.20.20.8:5000/v3/
--bootstrap-region-id RegionOne
2018-04-15 22:29:39.456 18518 WARNING keystone.assignment.core [-]
Deprecated: Use of the identity driver config to automatically configure
the same assignment driver has been deprecated, in the "O" release, the
assignment driver will need to be expicitly configured if different than
the default (SQL).
2018-04-15 22:29:39.585 18518 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2018-04-15 22:29:39.621 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Project admin
already
exists, skipping creation.
2018-04-15 22:29:39.640 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already
exists, skipping creation.
2018-04-15 22:29:39.670 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Role admin exists,
skipping creation.
2018-04-15 22:29:39.822 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already
has
admin on admin.
2018-04-15 22:29:39.827 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Region RegionOne
exists, skipping creation.
2018-04-15 22:29:39.834 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping admin
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping internal
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping public
endpoint as already created
# tail /var/log/keystone/keystone-manage.log
2018-04-15 22:29:39.456 18518
WARNING keystone.assignment.core [-] Deprecated: Use of the identity
driver
config to automatically configure the same assignment driver has been
deprecated, in the "O" release, the assignment driver will need to be
expicitly configured if different than the default (SQL).
2018-04-15 22:29:39.585 18518 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2018-04-15 22:29:39.621 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80c

Re: [Openstack] Domain not found error

2018-04-16 Thread Eugen Block

Hi,

I found some differences between your bootstrap command and your  
admin-rc credentials:



export OS_AUTH_URL=http://20.20.20.7:35357/v3
--bootstrap-admin-url http://20.20.20.8:5000/v3/


You use two different IPs for your controller node; this can't work.  
Another thing is, you usually have to create one admin endpoint (port  
35357) and a public endpoint (port 5000), you use the public port for  
both endpoints. This could work, of course, although it's not recommended.  
But then you have to change your admin-rc credentials accordingly.  
They should reflect the configuration you bootstrapped with  
keystone-manage.


Change your admin-rc to point to the correct IP and the correct port,  
then retry the domain list command after sourcing the credentials.
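The mismatch is easy to check mechanically. A hedged Python sketch (the URLs are the ones pasted in this thread; the helper itself is illustrative, not an OpenStack tool):

```python
# Hedged sketch: verify that the admin-rc OS_AUTH_URL points at the same
# host and port as the URLs given to `keystone-manage bootstrap`.
from urllib.parse import urlparse

def endpoint(url):
    """Reduce a URL to its host:port part for comparison."""
    return urlparse(url).netloc

bootstrap_urls = {
    "admin":    "http://20.20.20.8:5000/v3/",
    "internal": "http://20.20.20.8:5000/v3/",
    "public":   "http://20.20.20.8:5000/v3/",
}
admin_rc_auth_url = "http://20.20.20.7:35357/v3"   # from the pasted admin-rc

for name, url in bootstrap_urls.items():
    if endpoint(url) != endpoint(admin_rc_auth_url):
        print(f"{name} endpoint {endpoint(url)} does not match "
              f"OS_AUTH_URL {endpoint(admin_rc_auth_url)}")
```

Here every bootstrap endpoint is 20.20.20.8:5000 while the admin-rc points at 20.20.20.7:35357, so all three lines are flagged.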



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,

Sorry for the late reply. Was out for a while.

# openstack domain list
The request you have made requires authentication. (HTTP 401) (Request-ID:
req-fd20ec4d-9000-4cfa-9a5c-ba547a11c4c4)

# tail /var/log/keystone/keystone-manage.log
#

# keystone-manage bootstrap --bootstrap-password PASSWORD
--bootstrap-admin-url http://20.20.20.8:5000/v3/ --bootstrap-internal-url
http://20.20.20.8:5000/v3/ --bootstrap-public-url http://20.20.20.8:5000/v3/
--bootstrap-region-id RegionOne
2018-04-15 22:29:39.456 18518 WARNING keystone.assignment.core [-]
Deprecated: Use of the identity driver config to automatically configure
the same assignment driver has been deprecated, in the "O" release, the
assignment driver will need to be expicitly configured if different than
the default (SQL).
2018-04-15 22:29:39.585 18518 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2018-04-15 22:29:39.621 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Project admin already
exists, skipping creation.
2018-04-15 22:29:39.640 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already
exists, skipping creation.
2018-04-15 22:29:39.670 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Role admin exists,
skipping creation.
2018-04-15 22:29:39.822 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already has
admin on admin.
2018-04-15 22:29:39.827 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Region RegionOne
exists, skipping creation.
2018-04-15 22:29:39.834 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping admin
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping internal
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping public
endpoint as already created
# tail /var/log/keystone/keystone-manage.log
2018-04-15 22:29:39.456 18518
WARNING keystone.assignment.core [-] Deprecated: Use of the identity driver
config to automatically configure the same assignment driver has been
deprecated, in the "O" release, the assignment driver will need to be
expicitly configured if different than the default (SQL).
2018-04-15 22:29:39.585 18518 INFO keystone.cmd.cli [-] Domain default
already exists, skipping creation.
2018-04-15 22:29:39.621 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Project admin already
exists, skipping creation.
2018-04-15 22:29:39.640 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already
exists, skipping creation.
2018-04-15 22:29:39.670 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Role admin exists,
skipping creation.
2018-04-15 22:29:39.822 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] User admin already has
admin on admin.
2018-04-15 22:29:39.827 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Region RegionOne
exists, skipping creation.
2018-04-15 22:29:39.834 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping admin
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping internal
endpoint as already created
2018-04-15 22:29:39.835 18518 INFO keystone.cmd.cli
[req-ed92018e-9fa0-4222-b9ca-6d81d80cbf7f - - - - -] Skipping public
endpoint as already created
#


On Fri, Apr 13, 2018 at 11:54 AM, Eugen Block <ebl...@nde.ag> wrote:


Hi,

the bug I reported is invalid because the keystone-bootstrap command is
supposed to create the default domain. Since we created our cloud in
Liberty release the default domain already existed in our environment.
Well, I guess we're back to square one. ;-)

Can you paste the output of

control:~ # openstack domain list

If the k

Re: [Openstack] Domain not found error

2018-04-13 Thread Eugen Block

Hi,

the bug I reported is invalid because the keystone-bootstrap command  
is supposed to create the default domain. Since we created our cloud  
in Liberty release the default domain already existed in our  
environment. Well, I guess we're back to square one. ;-)


Can you paste the output of

control:~ # openstack domain list

If the keystone bootstrap command worked, it should at least show the  
default domain. If it doesn't take a look into  
/var/log/keystone/keystone-manage.log and check for errors. If this  
doesn't reveal anything try running it again and check the logs again.



Zitat von Eugen Block <ebl...@nde.ag>:

The command has been missing in the Newton, Ocata and Pike releases.  
They fixed it again in Queens.


I filed a bug report: https://bugs.launchpad.net/keystone/+bug/1763297

Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Thanks Eugen. It'll be great if you can do it. (I haven't yet gone through
the bug reporting documentation)
Please add me to the bug's CC list. That way if some info is needed from
me, I can provide it.

Regards,
Shyam

On Thu, Apr 12, 2018 at 12:48 PM, Eugen Block <ebl...@nde.ag> wrote:


I believe there's something missing in Ocata and Pike docs. If you read
Mitaka install guide [1] you'll find the first step to be creating the
default domain before all other steps regarding projects and users.

You should run

openstack domain create --description "Default Domain" default

and then the next steps should work, at least I hope so.

Do you want to report this as a bug? I can also report it, I have already
filed several reports.

Regards


[1] https://docs.openstack.org/mitaka/install-guide-obs/keystone
-users.html



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:

Hi,


Please read my replies inline below...

On Thu, Apr 12, 2018 at 12:10 PM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


can you paste the credentials you're using?

# cat admin-rc

export OS_USERNAME=admin
export OS_PASSWORD=abcdef
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://20.20.20.7:35357/v3
export OS_IDENTITY_API_VERSION=3

The config values (e.g. domain) are case sensitive, the ID of the default
domain is usually "default", its name is "Default". But if you're sourcing
the credentials with ID "Default" this would go wrong, although I'm not
sure if this would be the expected error message.

Just a couple of weeks ago there was someone on ask.openstack.org who
ignored case-sensitive options and failed to operate his cloud.

Did the keystone-manage bootstrap command work?

Yes. It did not throw any errors.




Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,



I'm trying to install keystone for my swift cluster.
I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/

However, I'm getting this error for a command:
# openstack user create --domain default --password-prompt swift
The request you have made requires authentication. (HTTP 401)
(Request-ID:
req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8)

# tail /var/log/keystone/keystone.log
2018-04-11 22:45:10.895 29335 INFO keystone.common.wsgi
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] GET
http://20.20.20.7:35357/v3/
2018-04-11 22:45:10.898 29335 INFO eventlet.wsgi.server
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] 20.20.20.7 - -
[11/Apr/2018 22:45:10] "GET /v3 HTTP/1.1" 200 493 0.062545
2018-04-11 22:45:10.908 29335 INFO keystone.common.wsgi
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] POST
http://20.20.20.7:35357/v3/auth/tokens
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] Could not find
domain:
Default
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers Traceback
(most recent call last):
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line
185,
in _lookup_domain
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
domain_name)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line
124,
in
wrapped
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
__ret_val
= __f(*args, **kwargs)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053,
in
decorate
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
should_cache_fn)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657,
in
get_or_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
async_creator) as value:
2018-04-11 22:45:11.011 29335 

Re: [Openstack] Domain not found error

2018-04-12 Thread Eugen Block
The command has been missing in the Newton, Ocata and Pike releases.  
They fixed it again in Queens.


I filed a bug report: https://bugs.launchpad.net/keystone/+bug/1763297

Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Thanks Eugen. It'll be great if you can do it. (I haven't yet gone through
the bug reporting documentation)
Please add me to the bug's CC list. That way if some info is needed from
me, I can provide it.

Regards,
Shyam

On Thu, Apr 12, 2018 at 12:48 PM, Eugen Block <ebl...@nde.ag> wrote:


I believe there's something missing in Ocata and Pike docs. If you read
Mitaka install guide [1] you'll find the first step to be creating the
default domain before all other steps regarding projects and users.

You should run

openstack domain create --description "Default Domain" default

and then the next steps should work, at least I hope so.

Do you want to report this as a bug? I can also report it, I have already
filed several reports.

Regards


[1] https://docs.openstack.org/mitaka/install-guide-obs/keystone
-users.html



Zitat von Shyam Prasad N <nspmangal...@gmail.com>:

Hi,


Please read my replies inline below...

On Thu, Apr 12, 2018 at 12:10 PM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


can you paste the credentials you're using?

 # cat admin-rc

export OS_USERNAME=admin
export OS_PASSWORD=abcdef
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://20.20.20.7:35357/v3
export OS_IDENTITY_API_VERSION=3

The config values (e.g. domain) are case sensitive, the ID of the default
domain is usually "default", its name is "Default". But if you're sourcing
the credentials with ID "Default" this would go wrong, although I'm not
sure if this would be the expected error message.

Just a couple of weeks ago there was someone on ask.openstack.org who
ignored case-sensitive options and failed to operate his cloud.

Did the keystone-manage bootstrap command work?

Yes. It did not throw any errors.




Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,



I'm trying to install keystone for my swift cluster.
I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/

However, I'm getting this error for a command:
# openstack user create --domain default --password-prompt swift
The request you have made requires authentication. (HTTP 401)
(Request-ID:
req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8)

# tail /var/log/keystone/keystone.log
2018-04-11 22:45:10.895 29335 INFO keystone.common.wsgi
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] GET
http://20.20.20.7:35357/v3/
2018-04-11 22:45:10.898 29335 INFO eventlet.wsgi.server
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] 20.20.20.7 - -
[11/Apr/2018 22:45:10] "GET /v3 HTTP/1.1" 200 493 0.062545
2018-04-11 22:45:10.908 29335 INFO keystone.common.wsgi
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] POST
http://20.20.20.7:35357/v3/auth/tokens
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] Could not find
domain:
Default
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers Traceback
(most recent call last):
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line
185,
in _lookup_domain
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
domain_name)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line
124,
in
wrapped
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
 __ret_val
= __f(*args, **kwargs)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053,
in
decorate
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
should_cache_fn)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657,
in
get_or_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
async_creator) as value:
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158,
in
__enter__
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers return
self._enter()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in
_enter
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
 generated
= self._enter_create(createdtime)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", 

Re: [Openstack] Domain not found error

2018-04-12 Thread Eugen Block
I believe there's something missing in Ocata and Pike docs. If you  
read Mitaka install guide [1] you'll find the first step to be  
creating the default domain before all other steps regarding projects  
and users.


You should run

openstack domain create --description "Default Domain" default

and then the next steps should work, at least I hope so.

Do you want to report this as a bug? I can also report it, I have  
already filed several reports.


Regards


[1] https://docs.openstack.org/mitaka/install-guide-obs/keystone-users.html


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,

Please read my replies inline below...

On Thu, Apr 12, 2018 at 12:10 PM, Eugen Block <ebl...@nde.ag> wrote:


Hi,

can you paste the credentials you're using?


 # cat admin-rc
export OS_USERNAME=admin
export OS_PASSWORD=abcdef
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://20.20.20.7:35357/v3
export OS_IDENTITY_API_VERSION=3

The config values (e.g. domain) are case sensitive, the ID of the default
domain is usually "default", its name is "Default". But if you're sourcing
the credentials with ID "Default" this would go wrong, although I'm not
sure if this would be the expected error message.

Just a couple of weeks ago there was someone on ask.openstack.org who
ignored case-sensitive options and failed to operate his cloud.

Did the keystone-manage bootstrap command work?


Yes. It did not throw any errors.



Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,


I'm trying to install keystone for my swift cluster.
I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/

However, I'm getting this error for a command:
# openstack user create --domain default --password-prompt swift
The request you have made requires authentication. (HTTP 401) (Request-ID:
req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8)

# tail /var/log/keystone/keystone.log
2018-04-11 22:45:10.895 29335 INFO keystone.common.wsgi
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] GET
http://20.20.20.7:35357/v3/
2018-04-11 22:45:10.898 29335 INFO eventlet.wsgi.server
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] 20.20.20.7 - -
[11/Apr/2018 22:45:10] "GET /v3 HTTP/1.1" 200 493 0.062545
2018-04-11 22:45:10.908 29335 INFO keystone.common.wsgi
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] POST
http://20.20.20.7:35357/v3/auth/tokens
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] Could not find
domain:
Default
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers Traceback
(most recent call last):
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line
185,
in _lookup_domain
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
domain_name)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124,
in
wrapped
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
 __ret_val
= __f(*args, **kwargs)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in
decorate
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
should_cache_fn)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in
get_or_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
async_creator) as value:
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in
__enter__
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers return
self._enter()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in
_enter
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
 generated
= self._enter_create(createdtime)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, in
_enter_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers created
=
self.creator()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 625, in
gen_value
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
created_value = creator()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1049, in
creator
2018-04-11 22:45:1

Re: [Openstack] Domain not found error

2018-04-12 Thread Eugen Block

Hi,

can you paste the credentials you're using?
The config values (e.g. domain) are case sensitive, the ID of the  
default domain is usually "default", its name is "Default". But if  
you're sourcing the credentials with ID "Default" this would go wrong,  
although I'm not sure if this would be the expected error message.


Just a couple of weeks ago there was someone on ask.openstack.org who  
ignored case-sensitive options and failed to operate his cloud.
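The lookup failure is a plain string mismatch; a tiny illustration (the mapping is hypothetical, standing in for keystone's by-name domain lookup):

```python
# Illustration only: keystone resolves domains by exact, case-sensitive
# name. The dict below is a hypothetical stand-in for the domain table.
domains_by_name = {"Default": {"id": "default"}}

def lookup(name):
    """Look up a domain by exact name, mimicking a by-name resolver."""
    try:
        return domains_by_name[name]
    except KeyError:
        raise LookupError(f"Could not find domain: {name}")

print(lookup("Default")["id"])   # exact match succeeds
try:
    lookup("default")            # wrong case fails the lookup
except LookupError as err:
    print(err)
```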


Did the keystone-manage bootstrap command work?

Regards


Zitat von Shyam Prasad N <nspmangal...@gmail.com>:


Hi,

I'm trying to install keystone for my swift cluster.
I followed this document for install and configuration:
https://docs.openstack.org/keystone/pike/install/

However, I'm getting this error for a command:
# openstack user create --domain default --password-prompt swift
The request you have made requires authentication. (HTTP 401) (Request-ID:
req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8)

# tail /var/log/keystone/keystone.log
2018-04-11 22:45:10.895 29335 INFO keystone.common.wsgi
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] GET
http://20.20.20.7:35357/v3/
2018-04-11 22:45:10.898 29335 INFO eventlet.wsgi.server
[req-147f239e-2205-40b5-8aea-40604c99b695 - - - - -] 20.20.20.7 - -
[11/Apr/2018 22:45:10] "GET /v3 HTTP/1.1" 200 493 0.062545
2018-04-11 22:45:10.908 29335 INFO keystone.common.wsgi
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] POST
http://20.20.20.7:35357/v3/auth/tokens
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] Could not find domain:
Default
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers Traceback
(most recent call last):
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/auth/controllers.py", line 185,
in _lookup_domain
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
domain_name)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 124, in
wrapped
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers __ret_val
= __f(*args, **kwargs)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1053, in
decorate
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
should_cache_fn)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 657, in
get_or_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
async_creator) as value:
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in
__enter__
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers return
self._enter()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in
_enter
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers generated
= self._enter_create(createdtime)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, in
_enter_create
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers created =
self.creator()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 625, in
gen_value
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
created_value = creator()
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1049, in
creator
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers return
fn(*arg, **kw)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers   File
"/usr/lib/python2.7/dist-packages/keystone/resource/core.py", line 720, in
get_domain_by_name
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers raise
exception.DomainNotFound(domain_id=domain_name)
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
DomainNotFound: Could not find domain: Default
2018-04-11 22:45:11.011 29335 ERROR keystone.auth.controllers
2018-04-11 22:45:11.016 29335 WARNING keystone.common.wsgi
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] Authorization failed.
The request you have made requires authentication. from 20.20.20.7
2018-04-11 22:45:11.018 29335 INFO eventlet.wsgi.server
[req-8f888754-1cf5-4c24-81b6-7481c9c0dfb8 - - - - -] 20.20.20.7 - -
[11/Apr/2018 22:45:11] "POST /v3/auth/tokens HTTP/1.1" 401 425 0.113822

Can someone please tell me what's going on?
Thanks in advance for your r

Re: [Openstack] compiler for heat templates

2018-03-20 Thread Eugen Block
Have you tried the option "--dry-run"? This also provides log output  
and could help you identify issues.
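Besides --dry-run, a rough client-side check of the parsed template can catch this particular type error before submission. A hedged sketch (my own helper, not a Heat feature; the whitelist of string-producing intrinsics is an assumption):

```python
# Hedged sketch: flag OS::Heat::SoftwareConfig resources whose `config`
# property cannot yield a string. Local helper only, not part of Heat;
# the set of string-producing intrinsic functions is an assumption.
STRING_FUNCS = {"str_replace", "get_param", "get_attr", "get_file", "list_join"}

def config_is_stringish(value):
    """True for a literal string or an intrinsic that can return one."""
    if isinstance(value, str):
        return True
    if isinstance(value, dict) and len(value) == 1:
        return next(iter(value)) in STRING_FUNCS
    return False

# Parsed template, abbreviated from the question (as Python dicts).
resources = {
    "some_resource": {
        "type": "OS::Heat::SoftwareConfig",
        "properties": {"config": {"str_replace": {
            "params": {}, "template": "#!/bin/bash\n"}}},
    },
    "list_resource": {
        "type": "OS::Heat::SoftwareConfig",
        "properties": {"config": {"repeat": {}}},   # repeat yields a list
    },
}

for name, res in resources.items():
    if res.get("type") == "OS::Heat::SoftwareConfig":
        if not config_is_stringish(res["properties"].get("config")):
            print(f"{name}: config is not a string")
```

Run over the example above, only list_resource is reported, since str_replace produces a string while repeat produces a list.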



Zitat von Sashan Govender <sash...@gmail.com>:


Hi

Is there a way to check heat templates? At the moment I run one and it
errors at runtime when, for example, something expects a string but gets
a list. For example, in this case of an OS::Heat::SoftwareConfig resource,
the config attribute below expects a string, which is why str_replace works

 some_resource:
  type: OS::Heat::SoftwareConfig
  properties:
config:
  str_replace:
params:
  $repstr$:
list_join: ['-', [ {get_param: cluster_name}, 'xyz']]
template: |
  #!/bin/bash
  echo $repstr$ >> /etc/somefile

According to this
https://docs.openstack.org/heat/latest/template_guide/openstack.html#OS::Heat::SoftwareConfig

the config property expects a string.

If I replace str_replace with something that generates a list (e.g. repeat)
it fails at runtime. Is there a way to type check this? I tried 'heat
template-validate' but it didn't do what I expected...




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Pike] [Nova] Shutting down VMs with Cinder Volumes

2018-03-09 Thread Eugen Block

Hi,

My question is this: Can I shutdown the VMs, rebuild the compute  
nodes, and then relaunch the VMs?


Why shut them down? You could just migrate (cold or live) them to  
other compute nodes and maintain your compute nodes one by one; this  
would be possible without downtime.


Depending on your storage backend (if the disks and volumes do not  
reside on the compute nodes) rebooting instances on upgraded compute  
nodes should be no problem at all. The configuration of the instances  
is in the database, and if the compute nodes don't have existing XML  
files, they will be simply recreated.
Before our live migration worked I had to deal with some compute node  
issues and changed the hosting compute node of some instances directly  
in the database, and the instances came back up. So I don't see an  
issue there, always under the prerequisite that the compute  
configuration is correct and the storage backend is accessible by the  
compute nodes, of course.


Hope this helps!


Zitat von Father Vlasie <fv@spots.school>:


Hello everyone,

I have a couple of compute nodes that need HD upgrades. They are  
running VMs with Cinder volumes.


My question is this: Can I shutdown the VMs, rebuild the compute  
nodes, and then relaunch the VMs?


I am thinking “yes” because the volumes are not ephemeral but I am not sure.

Are there any VM specific data that I need to save from the compute nodes?

Thank you,

FV
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [HA] Upgrade existing environment to HA

2018-03-08 Thread Eugen Block

Hi list,

I have a question regarding high availability.
There's an existing cloud (Ocata) which developed from demo to  
production environment. Now I have to find a way to make it highly  
available, starting with the control node. I've been gathering all  
kinds of information to prepare a migration.


The plan is to leave the existing single-controller up and running  
while I configure two new servers in HA mode with Pike release in the  
meantime. There are two main aspects causing some headaches: database  
and networking. I believe the database part could be tricky but  
manageable: stop mysql at some point, dump the DB, then import it  
to the new control node(s) (maybe on shared storage) and hope that it  
works.
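The dump/import step can be scripted; a hedged sketch of the commands involved (the flags are standard mysqldump/mysql options, but the dump path on shared storage is an assumption from the plan above):

```python
# Hedged sketch: build the dump and restore commands for a consistent
# full-database copy. The dump file path is an illustrative assumption.
DUMP_FILE = "/shared/openstack_dbs.sql"

def dump_command(dump_file=DUMP_FILE):
    # --single-transaction gives a consistent snapshot of InnoDB tables
    # without locking the server for the whole duration of the dump.
    return ["mysqldump", "--all-databases", "--single-transaction",
            "--routines", "--events", "--result-file", dump_file]

def restore_command(dump_file=DUMP_FILE):
    # Replays the dump on the new control node via the mysql client.
    return ["mysql", "-e", f"source {dump_file}"]

print(" ".join(dump_command()))
print(" ".join(restore_command()))
```

Both would still need credentials and a service stop/start around them, so treat this as an outline of the step rather than a runbook.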


But what about neutron and the self-service networks with all the
virtual routers etc.? Is it even possible to recreate the neutron
environment on a different node? I read the guide on how to make
neutron HA when starting from scratch, but is my approach realistically
possible?


I would really appreciate any insights from you guys. Is there maybe
someone who has done this and could comment on my approach?


Regards,
Eugen



Re: [Openstack] [Pike][Neutron] ERROR neutron.plugins.ml2.drivers.agent._common_agent - AgentNotFoundByTypeHost

2018-03-07 Thread Eugen Block
dist-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host\ncontext, constants.AGENT_TYPE_L3, host)\n', u'  File "/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host\nhost=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=UBNTU-OSTACK-COMPUTE1 could not be  
found\n'].
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most  
recent call last):
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 336, in  
treat_devices_removed
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent cfg.CONF.host)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/neutron/agent/rpc.py", line 139,  
in update_device_down
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent  
agent_id=agent_id, host=host)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 162,  
in call
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent return  
self._original_context.call(ctxt, method, **kwargs)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py",  
line 169, in call
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent retry=self.retry)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line  
123, in _send
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent timeout=timeout,  
retry=retry)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",  
line 578, in send
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent retry=retry)
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent   File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",  
line 569, in _send
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent raise result
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent RemoteError: Remote  
error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and  
host=UBNTU-OSTACK-COMPUTE1 could not be found
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent [u'Traceback (most  
recent call last):\n', u'  File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py",  
line 160, in _process_incoming\nres =  
self.dispatcher.dispatch(message)\n', u'  File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",  
line 213, in dispatch\nreturn self._do_dispatch(endpoint,  
method, ctxt, args)\n', u'  File  
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",  
line 183, in _do_dispatch\nresult = func(ctxt, **new_args)\n',  
u'  File  
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py", line  
234, in update_device_down\nn_const.PORT_STATUS_DOWN, host)\n',  
u'  File  
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py", line  
331, in notify_l2pop_port_wiring\n 
l2pop_driver.obj.update_port_down(port_context)\n', u'  File  
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 253, in update_port_down\nadmin_context, agent_host, [port[\'device_id\']]):\n', u'  File "/usr/lib/python2.7/dist-packages/neutron/db/l3_agentschedulers_db.py", line 303, in list_router_ids_on_host\ncontext, constants.AGENT_TYPE_L3, host)\n', u'  File "/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 291, in _get_agent_by_type_and_host\nhost=host)\n', u'AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=UBNTU-OSTACK-COMPUTE1 could not be  
found\n'].
2018-03-06 13:38:58.199 1978 ERROR  
neutron.plugins.ml2.drivers.agent._common_agent
2018-03-06 13:38:59.216 1978 INFO  
neutron.plugins.ml2.drivers.agent._common_agent  
[req-262cb010-9068-4ad9-b93d-bd0875fc66e1 - - - - -] Linux bridge  
agent Agent out of sync with plugin!





Re: [Openstack] [Pike] [Nova] Error : ERROR : MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url

2018-03-06 Thread Eugen Block

Hi,

could you provide more verbose output from nova-api.log (maybe other  
logs, too)?



Quoting Guru Desai <guru...@gmail.com>:


Oh my god!!! Thanks, Navdeep. With this, I'm getting the error below. Is this
known? This command was executed on the compute node:

# openstack compute service list --service nova-compute
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/
and attach the Nova API log if possible.
 (HTTP 500) (Request-ID:
req-3993191e-46ac-4f38-bda9-9b003e6aab1b)



On Tue, Mar 6, 2018 at 10:24 PM, Navdeep Uniyal <
navdeep.uni...@bristol.ac.uk> wrote:


Hi Guru,



It should be auth_url. Please see the highlighted error below.



Regards,

Navdeep



*From:* Guru Desai <guru...@gmail.com>
*Sent:* 06 March 2018 16:41
*To:* OpenStack Mailing List <openstack@lists.openstack.org>
*Subject:* [Openstack] [Pike] [Nova] Error : ERROR :
MissingRequiredOptions: Auth plugin requires parameters which were not
given: auth_url



Hello,



I am setting up the Pike version and facing an issue with nova on the controller.
I see the errors below continuously in nova-api.log, although I have set the auth
parameters in /etc/nova/nova.conf. I

am done installing keystone and glance and am stuck here with nova. I modified
the nova.conf as per the install guide. Please suggest what the issue could
be.









[keystone_authtoken]





auth_uri = http://test_controller:5000

auth_uri = http://test_controller:35357

memcached_servers = test_controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS







Log

====











Re: [Openstack] Keystone Unauthorized: The request you have made requires authentication while creating/starting instance

2018-03-05 Thread Eugen Block

Hi,

you should also check your neutron auth configs and the respective log
files, since nova reports "Instance failed network setup after 1
attempt(s)". Set nova and neutron to debug mode to get more output.
You could also try running different neutron commands with the same
credentials and see whether any errors occur. Narrowing it down to a
specific service will help identify the issue.


Regards


Quoting Andrea Gatta <andrea.ga...@gmail.com>:


Hello there,
as per the subject, I am stuck trying to create/start a cirros image.

At first I didn't notice, but I can now say that while creating the instance
keystone logs the following warning:

/var/log/keystone/keystone.log

2018-03-05 21:02:45.961 2120 INFO keystone.common.wsgi [
req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] POST
http://controller1:35357/v3/auth/tokens
2018-03-05 21:02:46.740 2120 WARNING keystone.common.wsgi
[req-5c4c9e26-dbe2-429f-b414-f6262b451392 - - - - -] Authorization failed.
The request you have mad

at the same time nova throws the following error:

/var/log/nova/nova-compute.log
45ec8a6ff - - -] [instance: 7a789397-8fbd-47a7-a5f6-8b274f77ca72] Creating
image
2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager
[req-b9f9e984-6f5f-4869-9290-63ca145d19e1 e35fc188170d4144a9cd4d30f9eab65c
bad15e4bc5714298b275e2f45e
c8a6ff - - -] Instance failed network setup after 1 attempt(s)
2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Traceback (most
recent call last):
...

2018-03-05 21:26:34.716 1225 ERROR nova.compute.manager Unauthorized: The
request you have made requires authentication. (HTTP 401) (Request-ID:
req-5c4c9e26-dbe2-429f-b414-f6262b451392)
2018-03-05 21:26:34.736 1225 ERROR nova.compute.manager [instance:
7a789397-8fbd-47a7-a5f6-8b274f77ca72] Unauthorized: The request you have
made requires authentication. (HTTP 401) (Request-ID:
req-5c4c9e26-dbe2-429f-b414-f6262b451392)

So basically the compute node sends req-5c4c9e26-dbe2-429f-b414-f6262b451392,
which never gets a successful reply, since keystone on the controller node
denies it (the request IDs match).

To this point I've checked auth_uri and the nova user password in
/etc/nova/nova.conf on both the controller and compute nodes. Moreover, I've
checked the nova openstack user password with the command 'openstack user
password set' (with the appropriate env). Credentials are OK all across the
board.

Here's the [keystone_authtoken] section for both controller and compute
nodes

[keystone_authtoken]

auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 


auth_uri = http://controller1:5000
auth_url = http://controller1:35357
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 
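
Eyeballing two long [keystone_authtoken] blocks for differences is error-prone; a small script can compare them mechanically. A rough sketch using Python's configparser (the file names are placeholders; copy one node's file next to the other's first):

```python
import configparser

def section_diff(file_a, file_b, section="keystone_authtoken"):
    """Return {option: (value_a, value_b)} for options that differ
    between the same section of two config files.

    strict=True (the default) additionally raises DuplicateOptionError
    if an option appears twice in a section, itself a config bug worth
    catching; interpolation=None because OpenStack configs may contain
    raw '%' characters (e.g. in DB connection strings)."""
    a = configparser.ConfigParser(interpolation=None)
    b = configparser.ConfigParser(interpolation=None)
    a.read(file_a)
    b.read(file_b)
    diffs = {}
    for key in sorted(set(a.options(section)) | set(b.options(section))):
        va = a.get(section, key, fallback=None)
        vb = b.get(section, key, fallback=None)
        if va != vb:
            diffs[key] = (va, vb)
    return diffs

# e.g.: print(section_diff("nova-controller.conf", "nova-compute.conf"))
```

An empty dict means the two sections are identical, which rules out config drift between the nodes.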

Thanks in advance for any light you could shed on this.

Regards
Andrea






Re: [Openstack] Can't start instance - "Instance failed network setup after 1 attempt(s)/No valid host was found. There are not enough hosts available"

2018-03-05 Thread Eugen Block

Hi,

my first step would be to enable debug mode for neutron and then review
all of the logs (server, dhcp-agent, linuxbridge-agent, etc.). At least
one of them should also report errors; maybe they point you in the
right direction.


Have you checked 'openstack network agent list'? Are all agents up?
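
To check the agents programmatically rather than by eye, the JSON output of the CLI can be filtered; a small sketch (the set of possible "Alive" values is an assumption about differing client versions, which print booleans or smileys):

```python
import json

# Values the "Alive" column can take across OSC versions (assumption:
# older clients print ":-)"/"XXX", newer ones booleans).
ALIVE = {True, "true", ":-)", "UP"}

def down_agents(agent_list_json):
    """Given the output of `openstack network agent list -f json`,
    return (host, agent type) pairs for agents that are not alive."""
    agents = json.loads(agent_list_json)
    return [(a["Host"], a["Agent Type"])
            for a in agents if a.get("Alive") not in ALIVE]
```

Any pair this returns (e.g. a down linuxbridge agent on the compute node) would directly explain the "No valid host was found" scheduling failure.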

Regards


Quoting Andrea Gatta <andrea.ga...@gmail.com>:


Hello guys,
I am fairly new to Openstack and am building a home lab to experiment with
it at my own pace.

Here's my present setup:

host/hypervisor: vmware workstation 10 (1 xeon 4 cores, 40 GB RAM)
os: Centos 7
openstack release: Newton

Architecture is fairly simple:

1x Controller node (1 vcpu, 4 GB RAM)
1x Compute node (1 vcpu, 4 GB RAM)

After a couple of days of work I now have a working lab, but am stuck,
unable to create and start a basic cirros instance.

The issue has been confirmed using Horizon as well (instance creation fails
with the same errors).


*[root@controller1 nova]# openstack server create --flavor m1.nano --image
cirros --nic net-id=cd37f4c3-7860-4183-8901-deeb48448fe4 --security-group
default --key-name mykey selfservice-instance*

[root@controller1 ~]# openstack server list
+--+--++--++
| ID   | Name | Status |
Networks | Image Name |
+--+--++--++
| 2a100590-6d7c-4d04-aecb-9dc2011252f5 | selfservice-instance | ERROR  |
  | cirros |

*openstack server show selfservice-instance*

fault| {u'message': u'No valid host was
found. There are not enough hosts available.', u'code': 500

*nova-scheduler.log*

Filter results: ['RetryFilter: (start: 1, end: 0)']

['RetryFilter: (start: 1, end: 0)']

As for the installation process I followed the openstack official
documentation at

*https://docs.openstack.org/newton/install-guide-rdo/index.html*

After a bit of digging I've found that the instance had failed network setup

*/var/log/nova/nova-compute.log*

2018-03-05 11:35:08.939 20920 ERROR nova.compute.manager
[req-b252833b-e6b4-43ac-8d95-5ccec002e74c e35fc188170d4144a9cd4d30f9eab65c
bad15e4bc5714298b275e2f45ec8a6ff - - -] *Instance failed network setup
after 1 attempt(s)*

Up to this point I reviewed the whole configuration several times with a
special focus on the nova<>neutron integration, but at present I haven't
been able to figure out what is going on.

Rabbitmq seems to work fine and communications between controller and
compute nodes work as expected (no logs to prove otherwise found).

Just in case, here's the output of 'openstack network list', for anyone
wondering whether OpenStack had interfaces to play with.

I am using QEMU with KVM acceleration.

*[root@controller1 etc]# openstack network list*
+--+-+--+
| ID   | Name| Subnets
|
+--+-+--+
| 982445b2-deb9-4308-8580-9de20992c4dd | provider|
ccd0290f-1640-4354-b56d-1a95c8c19ec0 |
| cd37f4c3-7860-4183-8901-deeb48448fe4 | selfservice |
6096dff6-4567-4666-9e10-6dd718514e86 |
+--+-+--+

Clues anyone ?

Thanks in advance

Cheers
Andrea






Re: [Openstack] Compute Node not mounting disk to VM's

2018-02-28 Thread Eugen Block

Hi,

unfortunately, I don't have an answer for you, but it seems that
you're not alone in this. In the past 10 days or so I have read
about very similar issues multiple times (e.g. [1], [2]). In fact, it
sounds like the update could be responsible for these changes.


Usually, you can change the disk_bus by specifying glance image  
properties, something like this:


openstack image set --property hw_scsi_model=virtio-scsi --property  
hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property  
os_require_quiesce=yes 


But I doubt this will have any effect; there has to be something else telling
libvirt to use scsi instead of virtio. I hope someone else has an idea
where to look, since I don't have this issue and can't reproduce it.


What is your output for

---cut here---
root@compute:~ # grep -A3 virtio-blk  
/usr/lib/udev/rules.d/60-persistent-storage.rules

# virtio-blk
KERNEL=="vd*[!0-9]", ATTRS{serial}=="?*",  
ENV{ID_SERIAL}="$attr{serial}",  
SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}"
KERNEL=="vd*[0-9]", ATTRS{serial}=="?*",  
ENV{ID_SERIAL}="$attr{serial}",  
SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}-part%n"

---cut here---

You could also take a look into  
/etc/glance/metadefs/compute-libvirt-image.json, maybe there is  
something wrong there, but as I said, I can't really reproduce this.


Good luck!

[1]  
https://ask.openstack.org/en/question/112488/libvirt-not-allocating-cpu-and-disk-to-vms-after-the-os-update/

[2] https://bugs.launchpad.net/nova/+bug/1560965


Quoting Yedhu Sastry <yedhusas...@gmail.com>:


Hello,

I have an OpenStack cluster (Newton), which is basically a test cluster.
After the regular OS security update and upgrade on all my compute nodes, I
have a problem with new VMs. While launching new VMs I get the error "ALERT!
LABEL=cloudimg-rootfs does not exist  Dropping to a shell!" in the console
log of the VMs. In Horizon they show as active. I am booting from image, not
from volume. Before the update everything was fine.

Then I checked all the OpenStack-related logs and couldn't find any info
related to this. I spent days on it and found that after the update libvirt
is now using scsi instead of virtio; I don't know why. All the VMs I
created before the update are running fine and use 'virtio'. I then
tried to manually change the libvirt instancexx.xml file to use "
 " and started the VM again using 'virsh
start instancexx'. The VM started and then went to the shutdown state. But in
the console log I can see the VM getting an IP and booting properly without any
error, and then it goes to the poweroff state.


1) Is this issue related to the libvirt update? If so, why is libvirt
not using virtio_blk anymore, only virtio_scsi? Is it possible to make
libvirt use virtio_blk instead of virtio_scsi?

2) I found that the nova package version on the compute nodes is 14.0.10,
while on the controller node it is 14.0.1. Could this mismatch be the
cause of the problem? Would updating the controller node solve the issue?
I am not sure about this.

3) Why is the task status of instancexx shown as Powering Off in Horizon
after 'virsh start instancexx' on the compute node? Why does it not start
the VM with the manually customized libvirt .xml file?


Any help is really appreciated.


--

Thank you for your time and have a nice day,


With kind regards,
Yedhu Sastri






Re: [Openstack] Ocata Created Ports Strange Issue

2018-02-09 Thread Eugen Block

Hi,

my input on this is very limited, but I believe we had a similar issue
in our Ocata cloud. My workaround was like yours: detach the assigned
port, recreate it and attach it again. The only strange thing was that
when I wanted to delete the port, the CLI reported that the port didn't
exist; it had literally disappeared!
I didn't spend much time debugging it because it has not happened since
then. And if I remember correctly, it occurred around our large
migration, where we upgraded our Ceph backend to the latest version,
upgraded the OS of all nodes and also the cloud from Mitaka to Ocata
(via Newton). It could have been a side effect of that; at least that
was my hope.


So as I said, this is not of much help, but I can confirm your
observation, unfortunately without any pointers to the cause. If this
happens again, I will definitely spend more time on debugging! ;-)


Regards,
Eugen


Quoting Georgios Dimitrakakis <gior...@acmac.uoc.gr>:


Dear all,

I have a small Ocata installation (1x controller + 2x compute nodes)
on which I have manually created 5 network ports; each of these ports
is assigned to a specific instance (4 Linux VMs and 1 Windows VM). All
these instances are located on one physical hypervisor (compute node),
while the controller is also the networking node.


The other day we had to do system maintenance, and all hosts (compute
and network/controller) were powered off, but before that we
gracefully shut off all running VMs.


As soon as maintenance finished we powered everything on, and I met
the following strange issue: instances with an attached port were
trying for a very long time to get an IP from the DHCP server. They
all managed to get one eventually, with the exception of the Windows
VM, on which I had to assign it statically. Restarting networking
services on the controller/network and/or compute node didn't make any
difference. On the other hand, all newly spawned instances had no
problem, no matter on which compute node they were spawned; their
only difference was that they were automatically getting ports
assigned. All the above happened on Friday, and today (Monday) people
were complaining that the Linux VMs didn't have network connectivity
(Windows was working...), so I don't know the exact time the issue
occurred. I have tried to access all VMs via the "self-service"
network by spawning a new instance, unfortunately without success. The
instance was successfully spawned and had network connectivity, but
couldn't reach any of the aforementioned VMs.


What I finally did, and what solved the problem, was to detach the
interfaces, delete the ports, re-create new ports with the same IP
addresses etc., and re-attach them to the VMs. As soon as I did that,
network connectivity was back to normal, without even having to restart
the VMs.
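
The detach/recreate/attach workaround can be captured as a short runbook. A sketch that only prints the CLI commands (server, port, network, and IP names are placeholders; run them one by one and check the output of each step):

```python
def rebuild_port_cmds(server, port, network, ip):
    """Commands for the detach/delete/recreate/attach workaround
    described above. All names are placeholders."""
    return [
        f"openstack server remove port {server} {port}",
        f"openstack port delete {port}",
        f"openstack port create --network {network} "
        f"--fixed-ip ip-address={ip} {port}",
        f"openstack server add port {server} {port}",
    ]

for cmd in rebuild_port_cmds("vm01", "port-vm01", "selfservice", "10.0.0.15"):
    print(cmd)
```

Note that `port create` here reuses the old port name and fixed IP, matching the "new ports with the same IP address" step; the new port will still get a fresh UUID and MAC unless you pin those too.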


Unfortunately I couldn't find any helpful information regarding this
in the logs, and I am wondering whether anyone has seen or experienced
something similar.


Best regards,

G.







Re: [Openstack] Rally - problem with some test

2018-01-18 Thread Eugen Block

Hi,

I can't really help you yet; I just started dealing with rally this
week, but I kept your mail in my inbox, just in case ;-)


How did you configure your json file? Obviously, it's nova that is
complaining about the block devices. How are instances usually
created in your environment? If I launch an instance via Horizon, it
has "Yes" preselected for "Create new volume"; I don't know whether this
affects rally, too.


Regards,
Eugen


Quoting Łukasz Chrustek <luk...@chrustek.net>:


Hi,

I have the following problem with the resize-server.json test in rally:

# rally task start resize-server.json

Traceback (most recent call last):
  File  
"/usr/local/lib/python2.7/dist-packages/rally/task/runner.py", line  
71, in _run_scenario_once

getattr(scenario_inst, method_name)(**scenario_kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/servers.py", line 388, in  
run

server = self._boot_server(image, flavor, **kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/rally/task/atomic.py", line  
87, in func_atomic_actions

f = func(self, *args, **kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/utils.py", line 80, in  
_boot_server

server_name, image, flavor, **kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py",  
line 1403, in create

**boot_kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py",  
line 802, in _boot

return_raw=return_raw, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/novaclient/base.py",  
line 361, in _create

resp, body = self.api.client.post(url, body=body)
  File  
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py",  
line 310, in post

return self.request(url, 'POST', **kwargs)
  File  
"/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line  
83, in request

raise exceptions.from_response(resp, body, url, method)
BadRequest: Block Device Mapping is Invalid: You specified more  
local devices than the limit allows (HTTP 400) (Request-ID:  
req-30fa2508-cc8e-45f4-9f1c-86202de111df)



We don't allow ephemeral disks. What options do I need to pass to
rally / the json file to make it work?

regards
Luk








Re: [Openstack] Could not determine a suitable URL for the plugin

2018-01-17 Thread Eugen Block

See, I told you to check your configs ;-)

I'm glad it works now!


Quoting Sashan Govender <sash...@gmail.com>:


Turns out the [neutron] config section in /etc/nova/nova.conf on the compute
node was missing.

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = rootroot

After adding that I could create an instance.

[sashan@controller ~]$ openstack server list
+--+---++-++
| ID   | Name  | Status |
Networks| Image Name |
+--+---++-++
| b9342c83-0c10-4f3e-a3b4-41bc601ea0b1 | provider-instance | ACTIVE |
provider=192.168.10.107 | cirros |
| d03058f3-0009-47c9-8b34-182034398647 | provider-instance | ERROR  |
   | cirros |
| 42adeacf-3027-45ba-a12d-e284995ce3a7 | provider-instance | ERROR  |
   | cirros |
| cfcbde0b-34f3-4ce8-ba37-735a7fa84417 | provider-instance | ERROR  |
   | cirros |
| 9f1481b9-0554-4cec-8cf5-163fb790f463 | provider-instance | ERROR  |
   | cirros |
+--+---++-++
[sashan@controller ~]$
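
A quick way to catch a missing section like [neutron] before restarting services is to validate the config programmatically; a minimal sketch (the option list mirrors the [neutron] block above and is not the exhaustive set of valid options):

```python
import configparser

# Required options per section; extend as needed. This mirrors the
# [neutron] block that fixed the issue above, nothing more.
REQUIRED = {
    "neutron": ["url", "auth_url", "auth_type", "username", "password",
                "project_name", "user_domain_name", "project_domain_name"],
}

def missing_options(path):
    """Return {section: [missing options]} for a nova.conf-style file.
    interpolation=None: values may contain raw '%' (e.g. DB URLs);
    strict=False tolerates duplicated options instead of raising."""
    cfg = configparser.ConfigParser(interpolation=None, strict=False)
    cfg.read(path)
    missing = {}
    for section, opts in REQUIRED.items():
        if not cfg.has_section(section):
            missing[section] = list(opts)   # whole section absent
            continue
        absent = [o for o in opts if not cfg.has_option(section, o)]
        if absent:
            missing[section] = absent
    return missing

# e.g.: print(missing_options("/etc/nova/nova.conf"))
```

An empty dict means every required option is present; anything else pinpoints what to add before restarting nova-compute.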


On Tue, Jan 16, 2018 at 10:10 PM Eugen Block <ebl...@nde.ag> wrote:


> 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension
> vlan-transparent not supported by any of loaded plugins
> 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to
> process extensions (auto-allocated-topology) because the configured
plugins
> do not satisfy their requirements. Some features will not work as
expected.

This sounds like the right place to dig deeper. I would enable debug
logs and see if there are more hints and then try to resolve this.



Quoting Sashan Govender <sash...@gmail.com>:

> On Tue, Jan 16, 2018 at 7:48 PM Eugen Block <ebl...@nde.ag> wrote:
>
> Thanks for the help.
>
> Could you also paste the output of "openstack compute service list"
>> and "openstack network agent list"? I'd like to see if the nova and
>> neutron services are all up and running.
>>
>>
> [sashan@controller ~]$ openstack network agent list
>
+--+++---+---+---+---+
> | ID   | Agent Type | Host
 |
> Availability Zone | Alive | State | Binary|
>
+--+++---+---+---+---+
> | 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute
|
> None  | True  | UP| neutron-linuxbridge-agent |
> | 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller
|
> nova  | True  | UP| neutron-dhcp-agent|
> | 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller
|
> None  | True  | UP| neutron-linuxbridge-agent |
> | c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller
|
> None  | True  | UP| neutron-metadata-agent|
>
+--+++---+---+---+---+
> [sashan@controller ~]$ openstack compute service list
>
++--++--+-+---++
> | ID | Binary   | Host   | Zone | Status  | State |
Updated
> At |
>
++--++--+-+---++
> |  1 | nova-consoleauth | controller | internal | enabled | up|
> 2018-01-16T10:47:22.00 |
> |  2 | nova-conductor   | controller | internal | enabled | up|
> 2018-01-16T10:47:22.00 |
> |  3 | nova-scheduler   | controller | internal | enabled | up|
> 2018-01-16T10:47:27.00 |
> |  6 | nova-compute | compute| nova | enabled | up|
> 2018-01-16T10:47:24.00 |
>
++--++--+-+---++
>
>
>> > I don't think the
>> > warning about the  placement api is relevant. What about the other
one:
>> > Unable to refresh my resource provider record?
>>
>> I'm not sure about that, it is just a warning.
>> Can you confirm that glance is working properly and the image is okay?
>>

Re: [Openstack] Could not determine a suitable URL for the plugin

2018-01-16 Thread Eugen Block

2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension
vlan-transparent not supported by any of loaded plugins
2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to
process extensions (auto-allocated-topology) because the configured plugins
do not satisfy their requirements. Some features will not work as expected.


This sounds like the right place to dig deeper. I would enable debug  
logs and see if there are more hints and then try to resolve this.
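
For reference, enabling debug logging is a single oslo.config flag in the [DEFAULT] section of the service's config (restart the affected services afterwards); a minimal fragment:

```ini
# /etc/neutron/neutron.conf (and analogously /etc/nova/nova.conf for nova)
[DEFAULT]
debug = true
```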




Quoting Sashan Govender <sash...@gmail.com>:


On Tue, Jan 16, 2018 at 7:48 PM Eugen Block <ebl...@nde.ag> wrote:

Thanks for the help.

Could you also paste the output of "openstack compute service list"

and "openstack network agent list"? I'd like to see if the nova and
neutron services are all up and running.



[sashan@controller ~]$ openstack network agent list
+--+++---+---+---+---+
| ID   | Agent Type | Host   |
Availability Zone | Alive | State | Binary|
+--+++---+---+---+---+
| 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute|
None  | True  | UP| neutron-linuxbridge-agent |
| 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller |
nova  | True  | UP| neutron-dhcp-agent|
| 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller |
None  | True  | UP| neutron-linuxbridge-agent |
| c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller |
None  | True  | UP| neutron-metadata-agent|
+--+++---+---+---+---+
[sashan@controller ~]$ openstack compute service list
++--++--+-+---++
| ID | Binary   | Host   | Zone | Status  | State | Updated
At |
++--++--+-+---++
|  1 | nova-consoleauth | controller | internal | enabled | up|
2018-01-16T10:47:22.00 |
|  2 | nova-conductor   | controller | internal | enabled | up|
2018-01-16T10:47:22.00 |
|  3 | nova-scheduler   | controller | internal | enabled | up|
2018-01-16T10:47:27.00 |
|  6 | nova-compute | compute| nova | enabled | up|
2018-01-16T10:47:24.00 |
++--++--+-+---++



> I don't think the
> warning about the  placement api is relevant. What about the other one:
> Unable to refresh my resource provider record?

I'm not sure about that, it is just a warning.
Can you confirm that glance is working properly and the image is okay?
Is the network layout as expected? Any information in other logs like
neutron and glance?



I noticed this error in the neutron logs:

2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension
vlan-transparent not supported by any of loaded plugins
2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to
process extensions (auto-allocated-topology) because the configured plugins
do not satisfy their requirements. Some features will not work as expected.
2018-01-16 21:40:12.559 1090 INFO neutron.quota.resource_registry [-]
Creating instance of TrackedResource for resource:subnet
2018

glance seems fine i.e. no error messages.




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Could not determine a suitable URL for the plugin

2018-01-16 Thread Eugen Block
 provider-instance | ERROR  |
| cirros |
+--+---++--++
[sashan@controller ~]$


Content from nova-compute.log on the compute node. I don't think the
warning about the  placement api is relevant. What about the other one:
Unable to refresh my resource provider record?

[root@compute ~]# tail /var/log/nova/nova-compute.log
2018-01-16 11:16:32.209 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view:
name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB
total_vcpus=2 used_vcpus=0 pci_stats=[]
2018-01-16 11:16:32.236 1435 WARNING nova.scheduler.client.report
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my
resource provider record
2018-01-16 11:16:32.236 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record
updated for compute:compute
2018-01-16 11:17:33.076 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Auditing locally
available compute resources for node compute
2018-01-16 11:17:33.129 1435 WARNING nova.scheduler.client.report
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] No authentication
information found for placement API. Placement is optional in Newton, but
required in Ocata. Please enable the placement service before upgrading.
2018-01-16 11:17:33.130 1435 WARNING nova.scheduler.client.report
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my
resource provider record
2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Total usable vcpus: 2,
total allocated vcpus: 0
2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view:
name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB
total_vcpus=2 used_vcpus=0 pci_stats=[]
2018-01-16 11:17:33.197 1435 WARNING nova.scheduler.client.report
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my
resource provider record
2018-01-16 11:17:33.197 1435 INFO nova.compute.resource_tracker
[req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record
updated for compute:compute
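For the record, those two warnings are related: once nova expects the placement API, nova-compute cannot refresh its resource provider record without credentials for it. The usual fix is a [placement] section in nova.conf on the compute node, plus a restart of nova-compute — assuming the placement endpoint and keystone user already exist. A sketch with placeholder values (hostname, region, domains and password must match the actual deployment):

```ini
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
```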


On Mon, Jan 15, 2018 at 8:05 PM Eugen Block <ebl...@nde.ag> wrote:


Hi,

you should check your config settings again, especially the "auth_url"
settings in the section(s) "[keystone_authtoken]" of all the config
files.
Are all the services up (nova, cinder and neutron) and running? What
is the output of 'nova service-list'?
Have you checked other log files for errors? Is there something
interesting in nova-compute.log?

Regards,
Eugen


Zitat von Sashan Govender <sash...@gmail.com>:

> Hi
>
> I've setup an openstack system based on the instructions here:
>
> https://docs.openstack.org/newton/
>
> I'm trying to launch an instance:
> $ . demo-openrc
> $ openstack server create --flavor m1.nano --image cirros --nic
> net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default
> --key-name mykey provider-instance
>
> but get this error in the nova-conductor log file:
>
> 2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils
> [req-5b47171a-f74e-4e8e-8659-89cce144f284
82858c289ca444bf90fcd41123d069ce
> 61b0b2b23b08419596bd923f2c544956 - - -] [instance:
> e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state.
> 2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils
> [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a
82858c289ca444bf90fcd41123d069ce
> 61b0b2b23b08419596bd923f2c544956 - - -] [instance:
> 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node
> compute): [u'Traceback (most recent call last):\n', u'  File
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in
> _do_build_and_run_instance\nfilter_properties)\n', u'  File
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in
> _build_and_run_instance\ninstance_uuid=instance.uuid,
> reason=six.text_type(e))\n', u'RescheduledException: Build of instance
> 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could
> not determine a suitable URL for the plugin\n']
> 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils
> [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a
82858c289ca444bf90fcd41123d069ce
> 61b0b2b23b08419596bd923f2c544956 - - -] Failed to
> compute_task_build_instances: No valid host was found. There are not
enough
> hosts available.
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> line 199, in inner
> return func(*args, **kwargs)
>
>   File "/usr/lib/python2.7/site-pac

Re: [Openstack] Could not determine a suitable URL for the plugin

2018-01-15 Thread Eugen Block

Hi,

you should check your config settings again, especially the "auth_url"  
settings in the section(s) "[keystone_authtoken]" of all the config  
files.
Are all the services up (nova, cinder and neutron) and running? What  
is the output of 'nova service-list'?
Have you checked other log files for errors? Is there something  
interesting in nova-compute.log?
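One trick that helps with the log hunt: every failed boot carries a request ID (req-...) that is stamped on all related lines across nova-api, nova-scheduler, nova-conductor and nova-compute, so you can follow a single request end to end. A sketch with mocked log lines (on a real deployment grep the files under /var/log/nova/ instead):

```shell
#!/bin/sh
# Mocked nova log excerpt with two different request IDs.
cat > /tmp/nova-sample.log <<'EOF'
2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a ...] Error from last host: compute
2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a ...] Failed to compute_task_build_instances: No valid host was found.
2018-01-15 16:10:02.100 2567 INFO nova.osapi_compute [req-11111111-2222-3333-4444-555555555555 ...] GET /servers/detail
EOF
req='req-afff24dc-1ee0-469f-9d99-2abcb4810c7a'
grep -c "$req" /tmp/nova-sample.log   # how many lines belong to this request -> 2
grep "$req" /tmp/nova-sample.log      # the full story of that one boot attempt
```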


Regards,
Eugen


Zitat von Sashan Govender <sash...@gmail.com>:


Hi

I've setup an openstack system based on the instructions here:

https://docs.openstack.org/newton/

I'm trying to launch an instance:
$ . demo-openrc
$ openstack server create --flavor m1.nano --image cirros --nic
net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default
--key-name mykey provider-instance

but get this error in the nova-conductor log file:

2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils
[req-5b47171a-f74e-4e8e-8659-89cce144f284 82858c289ca444bf90fcd41123d069ce
61b0b2b23b08419596bd923f2c544956 - - -] [instance:
e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state.
2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils
[req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce
61b0b2b23b08419596bd923f2c544956 - - -] [instance:
0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node
compute): [u'Traceback (most recent call last):\n', u'  File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in
_do_build_and_run_instance\nfilter_properties)\n', u'  File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in
_build_and_run_instance\ninstance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could
not determine a suitable URL for the plugin\n']
2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils
[req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce
61b0b2b23b08419596bd923f2c544956 - - -] Failed to
compute_task_build_instances: No valid host was found. There are not enough
hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
line 199, in inner
return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
104, in select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)

  File
"/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line
74, in select_destinations
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils
[req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce
61b0b2b23b08419596bd923f2c544956 - - -] [instance:
0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state.

Any tips how to resolve this?

Thanks




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] Upgrade Mitaka to Pike

2018-01-08 Thread Eugen Block

Hi,

in case you haven't received any answers yet I thought it might still  
help if I share our experiences with the upgrade procedure. It really  
was tricky in some parts, but we managed to get both Ceph and  
OpenStack up and running. We migrated from ceph jewel to luminous and  
OpenStack Mitaka to Ocata, not Pike yet.


Basically, your procedure is correct. This is how we did it:

1. upgraded OS of each ceph server, no issues
2. upgraded ceph packages from jewel to luminous on each ceph server,  
no issues
3. upgraded ceph packages from jewel to luminous on all cloud nodes,  
no problems

4. upgraded controller to Newton, no problems yet
   clients could still work properly if they were in external  
networks, because neutron had to be stopped
5. upgraded the control node OS and went from Newton straight to Ocata  
in one step, because the ceph client caused some trouble

6. upgraded OS of compute nodes last, no issues
   this was quite easy since we could live-migrate instances to other hosts

The biggest trouble was caused by the database migration. We had to  
manipulate the DB ourselves based on the error messages in the logs,  
but eventually it worked.
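The schema migrations are the part that has to be walked one release at a time, since each release's migrations assume the previous release's schema. A sketch of the order meant here — the *-manage commands are the real entry points, everything else is illustrative, and each release's upgrade notes should be checked before running them:

```shell
#!/bin/sh
# For each hop, install that release's packages first, then run the
# per-service DB migrations before starting the services again.
migration_plan() {
  for release in newton ocata pike; do
    echo "== upgrade hop: $release =="
    echo "   nova-manage api_db sync && nova-manage db sync"
    echo "   cinder-manage db sync"
    echo "   neutron-db-manage upgrade heads"
    echo "   glance-manage db sync"
  done
}
migration_plan
```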


After the most important services were back online I started to update  
the configs according to the warnings in the logs for cinder, nova,  
neutron etc.


There were some guides that helped me keep the right order:

[1] https://docs.openstack.org/nova/latest/user/upgrade.html
[2] https://www.rdoproject.org/install/upgrading-rdo/

Unfortunately, I don't have a detailed step-by-step guide about our  
procedures and issues we had to resolve. Since it was kind of time  
critical I was focused on resolving the problems instead of  
documenting everything. ;-)


I hope this helps anyway if you haven't already managed it. We didn't  
have any real downtime of our VMs, at least not the ones in production  
use, since they live in external networks and don't depend on the  
neutron services on the control node. Being able to live-migrate  
instances was also quite helpful. :-)


Regards,
Eugen


Zitat von Sam Huracan <nowitzki.sa...@gmail.com>:


Hi OpenStackers,

I'm planning upgrading my OpenStack System. Currently version is Mitaka, on
Ubuntu 14.04.5.
I want to upgrade to latest version of Pike.

I've read some documents and know that Mitaka does not have Rolling
upgrade, which means there will have downtime in upgrade process.

Our system has 3 HA Controllers, all VMs and Storage were put in Ceph.

At the moment, I can list some step-by-step to upgrade:

   1. Upgrade OS to Ubuntu16.04
   2. Upgrage package in order: Mitaka -> Newton -> Ocata -> Pike
   3. Upgrade DB in order: Mitaka -> Newton -> Ocata -> Pike

Do I lack any step? Could you guys share me some experiences fulfil
solution, to reduce maximum downtime of system?

Thanks in advance.




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] file injection problem

2017-10-26 Thread Eugen Block
as written in source code, "config-drive = true" and file injection  
using personality are mutually exclusive mechanisms.


Interesting, I did it with config-drive=true and it worked for me. But  
it's great that you found a solution.



Zitat von Volodymyr Litovka <doka...@gmx.com>:


Answer is:

as written in source code, "config-drive = true" and file injection  
using personality are mutually exclusive mechanisms.


On 10/25/17 2:14 AM, Volodymyr Litovka wrote:
Also, python-guestfs package installed as well, so Nova is able to  
use it, at least quick check (snipped from Nova sources) passed:


# python2.7
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.

from oslo_utils import importutils
g = importutils.import_module('guestfs')
print g



from eventlet import tpool
t = tpool.Proxy(g.GuestFS())
t.add_drive("/dev/null")
t.launch()
print t



No ideas why I'm facing this problem. Anybody can comment on this?

Thanks again.

On 10/25/17 1:24 AM, Volodymyr Litovka wrote:

Hi colleagues,

it's driving me crazy: how do I get file injection into an instance working?

nova.conf already configured with

==
[DEFAULT]
debug=true

[libvirt]
inject_partition = -1

[guestfs]
debug=true

[quota]
injected_files = 5
injected_file_content_bytes = 10240
injected_file_path_length = 255
===

libguestfs and libguestfs-tools are installed (on host machine):

libguestfs-hfsplus:amd64    1:1.32.2-4ubuntu2
libguestfs-perl 1:1.32.2-4ubuntu2
libguestfs-reiserfs:amd64   1:1.32.2-4ubuntu2
libguestfs-tools    1:1.32.2-4ubuntu2
libguestfs-xfs:amd64    1:1.32.2-4ubuntu2
libguestfs0:amd64   1:1.32.2-4ubuntu2

and, finally,

nova --debug boot --config-drive true --image <image> --flavor  
<flavor> --security-groups <security-group> --key-name <key> --file  
/etc/qqq=/dTest.txt --nic [...] dtest


makes a correct request (note a personality parameter)

REQ: curl -g -i -X POST http://controller:8774/v2.1/servers -H  
"Accept: application/json" -H "User-Agent: python-novaclient" -H  
"OpenStack-API-Version: compute 2.53" -H  
"X-OpenStack-Nova-API-Version: 2.53" -H "X-Auth-Token:  
{SHA1}11e6bac1ea20a124903ff967873c186a179d545e" -H "Content-Type:  
application/json" -d '{"server": {"name": "dtest", "imageRef":  
"12c86830-8d76-4159-a6bc-81966d7a220e", "key_name": "xxx",  
"flavorRef": "d0ff4bc5-df38-4f20-8908-afc516d594e6", "max_count":  
1, "min_count": 1, *"personality": [{"path": "/etc/qqq",  
"contents": "ZG9rYSB0ZXN0CmRva2EgdGVzdApkb2thIHRlc3QK"}]*,  
"networks": [{"uuid": "9cc72002-fe24-44a5-aa04-1ac0470f"}],  
"security_groups": [{"name":  
"dfc7d642-b55f-465c-84c2-9d95c9c565bf"}], "config_drive": true}}'


but nothing everywhere - neither '/etc/qqq' on guest VM nor logs  
(according to guestfs.debug=true) on host machine.


It's Pike on Ubuntu 16.04.3.

What I'm doing wrong?

Thanks.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] file injection problem

2017-10-26 Thread Eugen Block

Hi,

do other injections work with that image, e.g. user-data and ssh-keys?  
Is it a provider (external) network where you try to launch that  
instance? I assume you are using cloud-init for this, which version?


We've had our troubles with cloud-init, especially for external  
networks. I filed a bug report for openSUSE and cloud-init version  
17.1 just this week. The only version of cloud-init we actually could  
use (and still use) was 0.7.8. I just tested file injection with 0.7.8  
and it worked just fine.


Regards,
Eugen


Zitat von Volodymyr Litovka <doka...@gmx.com>:


Hi colleagues,

it's driving me crazy: how do I get file injection into an instance working?

nova.conf already configured with

==
[DEFAULT]
debug=true

[libvirt]
inject_partition = -1

[guestfs]
debug=true

[quota]
injected_files = 5
injected_file_content_bytes = 10240
injected_file_path_length = 255
===

libguestfs and libguestfs-tools are installed (on host machine):

libguestfs-hfsplus:amd64    1:1.32.2-4ubuntu2
libguestfs-perl 1:1.32.2-4ubuntu2
libguestfs-reiserfs:amd64   1:1.32.2-4ubuntu2
libguestfs-tools    1:1.32.2-4ubuntu2
libguestfs-xfs:amd64    1:1.32.2-4ubuntu2
libguestfs0:amd64   1:1.32.2-4ubuntu2

and, finally,

nova --debug boot --config-drive true --image <image> --flavor  
<flavor> --security-groups <security-group> --key-name <key> --file  
/etc/qqq=/dTest.txt --nic [...] dtest


makes a correct request (note a personality parameter)

REQ: curl -g -i -X POST http://controller:8774/v2.1/servers -H  
"Accept: application/json" -H "User-Agent: python-novaclient" -H  
"OpenStack-API-Version: compute 2.53" -H  
"X-OpenStack-Nova-API-Version: 2.53" -H "X-Auth-Token:  
{SHA1}11e6bac1ea20a124903ff967873c186a179d545e" -H "Content-Type:  
application/json" -d '{"server": {"name": "dtest", "imageRef":  
"12c86830-8d76-4159-a6bc-81966d7a220e", "key_name": "xxx",  
"flavorRef": "d0ff4bc5-df38-4f20-8908-afc516d594e6", "max_count": 1,  
"min_count": 1, *"personality": [{"path": "/etc/qqq", "contents":  
"ZG9rYSB0ZXN0CmRva2EgdGVzdApkb2thIHRlc3QK"}]*, "networks": [{"uuid":  
"9cc72002-fe24-44a5-aa04-1ac0470f"}], "security_groups":  
[{"name": "dfc7d642-b55f-465c-84c2-9d95c9c565bf"}], "config_drive":  
true}}'


but nothing everywhere - neither '/etc/qqq' on guest VM nor logs  
(according to guestfs.debug=true) on host machine.


It's Pike on Ubuntu 16.04.3.

What I'm doing wrong?

Thanks.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




[Openstack] [ocata] [cinder] cinder-volume causes high cpu load

2017-10-16 Thread Eugen Block

Hi list,

last week we upgraded our Mitaka cloud to Ocata (via Newton, of  
course) with ceph backend, and also upgraded the cloud nodes from  
openSUSE Leap 42.1 to Leap 42.3. There were some issues as expected,  
but no showstoppers (luckily).
So the cloud is up and working again, but our monitoring shows a high  
CPU load for cinder-volume service on the control node. But since all  
the clients are on the compute nodes we are wondering what cinder  
actually does on the control node except initializing the connections  
of course. I captured a tcpdump on control node and saw a lot of  
connections to the ceph nodes, the data contains all these rbd_header  
files, e.g. rb.0.24d5b04[...]. I expect this kind of traffic on the  
compute nodes, of course, but why does the control node also establish  
so many connections?


I'd appreciate any insight!

Regards,
Eugen

--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] [heat] Heat::SoftwareDeployment not working

2017-06-26 Thread Eugen Block

Hi,

since heat is not really my strong suit I won't be able to help you  
there. Hopefully, someone else with more experience will help you.


Regards,
Eugen


Zitat von Amit Kumar <ebiib...@gmail.com>:


Thanks Eugene and Ignazio for your replies.

Please take note that Heat::SoftwareConfiguration is working perfectly at
my end. Shell script provided with Software configuration works fine.
Problem is occurring only when I am trying to use Heat::SoftwareDeployment
instead of using SoftwareConfig directly. Only motive behind using
SoftwareDeployment is to be able to pass input parameters to shell script.

Adding more to the above, the VM is accessible from the external network
using SSH. A flat network is used to put the VM's eth0 on the external
network, and the assigned IP is fixed via a network port. If the network
is a provider network, does that make any difference for SoftwareConfig
and SoftwareDeployment? curl -v http://169.254.169.254/latest works fine.

Regards,
Amit

On Jun 23, 2017 6:51 PM, "Eugen Block" <ebl...@nde.ag> wrote:

Hi,

it seems like your VM fails to connect to the metadata server, so any
configuration provided by user-data will have failed. Is the VM's network
configured properly? Does it get its IP by DHCP? Is it a provider network
or a self-service network?
If it's a provider network (external router), you'll have to provide the
user-data and network config by config-drive, this way you won't need a
metadata-server. If it's a self-service network, is DHCP enabled? Check
your ip config within your vm. If the ip config is as expected, try to
execute "curl -v http://169.254.169.254/latest", does it timeout?

Are dhcp-server and metadata-server up and running? What's the output of
neutron agent-list

If you launch an instance in the same network without heat, just with the
user-data, does that work? If it does it's probably a heat issue. Have you
checked the heat logs for any hints?

Regards
Eugen


Zitat von Amit Kumar <ebiib...@gmail.com>:

Hi All,


I have installed OpenStack Mitaka using OpenStack-Ansible 13.3.13. Trying
to use Heat::SoftwareDeployment resource similar to as described in:
https://github.com/openstack/heat-templates/blob/master/hot/
software-config/example-templates/example-script-template.yaml
but is not working as expected. SoftwareDeployment resource is always in
progress state once heat stack is created from command line.

Here are the /var/log/cloud-init-output.log:
http://paste.openstack.org/show/613502/
/var/log/os-collect-config.log shows these logs:
http://paste.openstack.org/show/613503/. Can they cause any harm?
/var/run/heat-config/heat-config is showing the script and the input
parameters which I want to run on VM. Here are the logs:
http://paste.openstack.org/show/613504/ but in-spite of script and its
input being here, */var/lib/cloud/instances/i-003a/scripts/userdata*

file is empty.
Here is the /var/lib/cloud/instances/i-003a/user-data.txt:
http://paste.openstack.org/show/613505/

With the help of above logs, please see if you can point out if I am
missing anything here.

Thanks.

Regards,
Amit





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983







--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] [heat] Heat::SoftwareDeployment not working

2017-06-23 Thread Eugen Block

Hi,

it seems like your VM fails to connect to the metadata server, so any  
configuration provided by user-data will have failed. Is the VM's  
network configured properly? Does it get its IP by DHCP? Is it a  
provider network or a self-service network?
If it's a provider network (external router), you'll have to provide  
the user-data and network config by config-drive, this way you won't  
need a metadata-server. If it's a self-service network, is DHCP  
enabled? Check your ip config within your vm. If the ip config is as  
expected, try to execute "curl -v http://169.254.169.254/latest", does  
it timeout?


Are dhcp-server and metadata-server up and running? What's the output of
neutron agent-list

If you launch an instance in the same network without heat, just with  
the user-data, does that work? If it does it's probably a heat issue.  
Have you checked the heat logs for any hints?


Regards
Eugen


Zitat von Amit Kumar <ebiib...@gmail.com>:


Hi All,

I have installed OpenStack Mitaka using OpenStack-Ansible 13.3.13. Trying
to use Heat::SoftwareDeployment resource similar to as described in:
https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-script-template.yaml
but is not working as expected. SoftwareDeployment resource is always in
progress state once heat stack is created from command line.

Here are the /var/log/cloud-init-output.log:
http://paste.openstack.org/show/613502/
/var/log/os-collect-config.log shows these logs:
http://paste.openstack.org/show/613503/. Can they cause any harm?
/var/run/heat-config/heat-config is showing the script and the input
parameters which I want to run on VM. Here are the logs:
http://paste.openstack.org/show/613504/ but in-spite of script and its
input being here, */var/lib/cloud/instances/i-003a/scripts/userdata*
file is empty.
Here is the /var/lib/cloud/instances/i-003a/user-data.txt:
http://paste.openstack.org/show/613505/

With the help of above logs, please see if you can point out if I am
missing anything here.

Thanks.

Regards,
Amit




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] [horizon] How to address new features (wish list)

2017-06-08 Thread Eugen Block

Hi,

thanks for your reply!
Since it would be a really small change I was curious if I couldn't do  
it myself. In the "Launch instance" dialog I wanted to show additional  
information for every image in the "Source" tab, in my case the image  
description. Currently, this dropdown only contains Min disk and Min  
ram.

So for everyone interested in what I did, this is the minor change:

---cut here---
control1:~ #  cat horizon-image-details.patch
--- /srv/www/openstack-dashboard/static/dashboard/project/workflow/launch-instance/source/source-details.html  2017-06-08 13:27:58.570961479 +0200
+++ /srv/www/openstack-dashboard-mod/static/dashboard/project/workflow/launch-instance/source/source-details.html  2017-06-08 12:45:04.176099530 +0200

@@ -15,6 +15,12 @@
 {$ (row.properties ? row.min_ram :  
row.volume_image_metadata.min_ram) || '--' $}

   
 
+
+  Image details
+  
+{$ (row.properties ? row.properties.description :  
row.volume_image_metadata.properties.description) || '--' $}

+  
+
   
---cut here---

You can see the result in the attached screenshot. We find it very  
useful since there can be many similar images in glance and it's  
helpful to be able to understand the differences between these images  
without being forced to exit the dialog.


Maybe this little patch even makes it to upstream ;-)

Regards,
Eugen


Zitat von Itxaka Serrano Garcia <igar...@suse.com>:


Hi!

While I don't know exactly how to deal with a wish for new features,  
you could try to come to the weekly meeting [0] and discuss it with  
the attendees to see if someone is willing to put some time into it,  
or sees it as a viable feature that can be turned into a Blueprint.


Or come into the IRC [1] channel and spam Rob and David, but that may  
not work as expected :D



[0] http://eavesdrop.openstack.org/#Horizon_Team_Meeting

[1] https://wiki.openstack.org/wiki/IRC


On 06/06/17 12:12, Eugen Block wrote:

Hi all,

I would like to ask you if anyone can explain to me how I get to  
wish for new features in OpenStack? I see the Horizon wiki page  
[1], but it only mentions the general workflow and one example. Am  
I supposed to edit the wiki page to bring my wish to the  
developer's attention? Is the page monitored by anyone? I have an  
account and would be able to edit the wiki page, but I don't know  
if this would affect anything.

Or should I ask this question in the developers mailing list?
I asked my question on ask.openstack.org some time ago [2] but  
without any responses. So any input would be appreciated!


Best regards,
Eugen

[1] https://wiki.openstack.org/wiki/Horizon/Wish_List
[2]  
https://ask.openstack.org/en/question/99205/tooltip-for-images-glance-in-dashboard/









--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


[Openstack] [horizon] How to address new features (wish list)

2017-06-06 Thread Eugen Block

Hi all,

I would like to ask if anyone can explain to me how to request new  
features in OpenStack. I see the Horizon wiki page [1], but it only  
mentions the general workflow and one example. Am I supposed to edit  
the wiki page to bring my wish to the developers' attention? Is the  
page monitored by anyone? I have an account and would be able to edit  
the wiki page, but I don't know if this would affect anything.

Or should I ask this question on the developers mailing list?
I asked my question on ask.openstack.org some time ago [2] but got no  
responses, so any input would be appreciated!


Best regards,
Eugen

[1] https://wiki.openstack.org/wiki/Horizon/Wish_List
[2]  
https://ask.openstack.org/en/question/99205/tooltip-for-images-glance-in-dashboard/






Re: [Openstack] cloud-init not start ini ubuntu 17.04

2017-05-09 Thread Eugen Block

Hi,


my problem is that while I build the base image, the cloud-init process won't
run while the base image is booting, there is no log record in /var/log/syslog
(in this case Ubuntu), and it only happens in 17.04.


So are the services up and running within your instances?
Please check with "systemctl status 'cloud-*'" whether all required  
services are up and running. Maybe they have to be enabled and started  
first. Just because you didn't have to do it manually before doesn't  
mean this will never change ;-)
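For reference, checking and enabling the cloud-init stages on a systemd-based image could look roughly like the following sketch (unit names taken from a standard Ubuntu cloud-init package, adjust as needed; this obviously has to run inside the image itself):

```shell
# show the state of all cloud-init related units
systemctl status 'cloud-*' --no-pager

# enable and start the four standard cloud-init stages if they are disabled
for unit in cloud-init-local cloud-init cloud-config cloud-final; do
    systemctl enable "$unit".service
    systemctl start "$unit".service
done

# cloud-init also writes its own log, independent of syslog
tail /var/log/cloud-init.log
```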


Regards,
Eugen


Zitat von Adhi Priharmanto <adhi@gmail.com>:


Hi Eugen,

Based on this guide, just install and reconfigure datasource of cloud-init
if needed.

https://docs.openstack.org/image-guide/ubuntu-image.html

my problem is that while I build the base image, the cloud-init process won't
run while the base image is booting, there is no log record in /var/log/syslog
(in this case Ubuntu), and it only happens in 17.04.

I had built custom images with 16.04 and 16.10 and they were fine; just
installing the required cloud-init package was enough.



On Fri, May 5, 2017 at 1:26 PM, Eugen Block <ebl...@nde.ag> wrote:


You have to make sure that cloud-init is enabled and running in your base
instance. Then snapshot that VM and launch another instance from the new
image, provide some user-data to test it.
Cloud-init is a tool for initial configuration of new instances, that's
why you would have to execute these steps manually if you configure your
first VM to be a new base image. So all the magic will be (hopefully)
visible if you launch a new VM.


Regards,
Eugen


Zitat von Adhi Priharmanto <adhi@gmail.com>:

Hi Bob,


yes, I'm following that tutorial, creating a glance image from an existing
XenServer VM.

   - build from scratch VM using "16.04 template" and "other installation
   media"
   - update & upgrade the VM OS
   - installing cloud-init package, no change of cloud-init configuration
   and using the default setting of cloud-init
   - reboot the VM for testing the cloud-init and no output showing
   cloud-init activity, there is no process associated with cloud-init in
   "/var/log/syslog"
   - export the vdi, compress the VHD, upload to glance
   - start an instance using the custom image; it just gets the IP address. To
   gather instance metadata, "cloud-init init" must be executed manually after
   the instance has completely booted.


On Thu, May 4, 2017 at 11:32 PM, Bob Ball <bob.b...@citrix.com> wrote:

Hi Adhi,




Did you follow a guide, such as http://citrix-openstack.
siteleaf.net/posts/generating-images-for-xenserver-in-openstack/ for
generating the image?  If not, how was the image generated?



What exactly is the output from the 17.04 image you’re using?



Thanks,



Bob



*From:* Adhi Priharmanto [mailto:adhi@gmail.com]
*Sent:* 03 May 2017 16:36
*To:* openstack <openstack@lists.openstack.org>
*Subject:* [Openstack] cloud-init not start ini ubuntu 17.04



hi all,

I just created an Ubuntu 17.04 custom image for working with OpenStack on
XenServer. After installing and updating/upgrading the Ubuntu 17.04 base OS, I
installed cloud-init, then rebooted to test cloud-init, but I can't see a
cloud-init process during the Ubuntu 17.04 OS boot.

Is there anyone who can help or give me a suggestion?


--

Cheers,





*Adhi Priharmanto*

about.me/a_dhi




+62-812-82121584 <+62%20812-8212-1584>



























Re: [Openstack] How to mount ISO file to a openstack instance ?

2017-05-08 Thread Eugen Block

Hi,


But as far as I know, this ISO image cannot be used to boot an instance.


That's not correct, see [1] for details. We use ISOs to install new  
VMs on a regular basis.


To attach an ISO to an instance, you have to create a volume from that  
ISO image first. This volume can then be attached to any instance  
within that project, or if you need that volume in a different  
project, you can transfer it [2].



And can an iso image be shared by multiple instances?


I'm not sure if I understand that correctly. If we talk about the  
first part of my answer (use volumes to attach images to instances)  
and you have created a volume that is already attached to an instance,  
you won't be able to attach the same volume to another instance unless  
you detach it from the first instance. I don't know another way to  
attach an ISO. You would have to create new volumes every time you  
need the ISO as an attachment.
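The volume-based workflow described above could be sketched like this with the openstack CLI (image, volume and instance names are placeholders):

```shell
# upload the ISO to glance
openstack image create --disk-format iso --container-format bare \
    --file ubuntu.iso ubuntu-iso

# create a bootable volume from the ISO image
openstack volume create --image ubuntu-iso --size 5 ubuntu-iso-vol

# attach it to an instance; a volume can only be attached to one
# instance at a time, so create a new volume per instance
openstack server add volume my-instance ubuntu-iso-vol
```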


Regards,
Eugen

[1]  
https://docs.openstack.org/user-guide/cli-nova-launch-instance-using-ISO-image.html

[2] https://docs.openstack.org/user-guide/common/cli-manage-volumes.html


Zitat von "Warad, Manjunath (Nokia - SG/Singapore)"  
<manjunath.wa...@nokia.com>:



Hi,

Yes, Glance can be used to manage ISO files, and these can be  
configured to be shared by multiple instances as is.


But as far as I know, this ISO image cannot be used to boot an instance.

Regards,
Manjunath

From: don...@ahope.com.cn [mailto:don...@ahope.com.cn]
Sent: Sunday, 7 May, 2017 10:12 PM
To: openstack <openstack@lists.openstack.org>
Subject: [Openstack] How to mount ISO file to a openstack instance ?

Hi all,

I want to know how to mount an ISO file to an OpenStack instance. Can  
it be managed by Glance? And can an ISO image be shared by multiple  
instances?



=
Dong Jianhua
Address: New Century Office Building, 3766 Nanhuan Road, Binjiang District, Hangzhou
Postal code: 310053
Mobile: 13857132818
Tel: 0571-28996000
Fax: 0571-28996001
Hotline: 4006728686
Website: www.ahope.com.cn
Email: don...@ahope.com.cn








Re: [Openstack] cloud-init not start ini ubuntu 17.04

2017-05-05 Thread Eugen Block
You have to make sure that cloud-init is enabled and running in your  
base instance. Then snapshot that VM and launch another instance from  
the new image, provide some user-data to test it.
Cloud-init is a tool for initial configuration of new instances,  
that's why you would have to execute these steps manually if you  
configure your first VM to be a new base image. So all the magic will  
be (hopefully) visible if you launch a new VM.
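As a sketch, testing cloud-init this way could look like the following CLI sequence (image, flavor and server names are placeholders):

```shell
# snapshot the prepared base VM into a new image
openstack server image create --name ubuntu-17.04-cloudinit base-vm

# minimal user-data to prove cloud-init runs on first boot
cat > user-data.txt <<'EOF'
#cloud-config
runcmd:
  - touch /tmp/cloud-init-ran
EOF

# boot a fresh instance from the snapshot with that user-data
openstack server create --image ubuntu-17.04-cloudinit \
    --flavor m1.small --user-data user-data.txt test-vm
```

If cloud-init works, /tmp/cloud-init-ran should exist inside the new instance after its first boot.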



Regards,
Eugen


Zitat von Adhi Priharmanto <adhi@gmail.com>:


Hi Bob,

yes, I'm following that tutorial, creating a glance image from an existing
XenServer VM.

   - build from scratch VM using "16.04 template" and "other installation
   media"
   - update & upgrade the VM OS
   - installing cloud-init package, no change of cloud-init configuration
   and using the default setting of cloud-init
   - reboot the VM for testing the cloud-init and no output showing
   cloud-init activity, there is no process associated with cloud-init in
   "/var/log/syslog"
   - export the vdi, compress the VHD, upload to glance
   - start an instance using the custom image; it just gets the IP address. To
   gather instance metadata, "cloud-init init" must be executed manually after
   the instance has completely booted.


On Thu, May 4, 2017 at 11:32 PM, Bob Ball <bob.b...@citrix.com> wrote:


Hi Adhi,



Did you follow a guide, such as http://citrix-openstack.
siteleaf.net/posts/generating-images-for-xenserver-in-openstack/ for
generating the image?  If not, how was the image generated?



What exactly is the output from the 17.04 image you’re using?



Thanks,



Bob



*From:* Adhi Priharmanto [mailto:adhi@gmail.com]
*Sent:* 03 May 2017 16:36
*To:* openstack <openstack@lists.openstack.org>
*Subject:* [Openstack] cloud-init not start ini ubuntu 17.04



hi all,

I just created an Ubuntu 17.04 custom image for working with OpenStack on
XenServer. After installing and updating/upgrading the Ubuntu 17.04 base OS, I
installed cloud-init, then rebooted to test cloud-init, but I can't see a
cloud-init process during the Ubuntu 17.04 OS boot.

Is there anyone who can help or give me a suggestion?

















Re: [Openstack] Snapshot: Cannot determine the parent storage pool

2017-02-21 Thread Eugen Block
Just out of curiosity, how exactly did you manage to delete the base  
images of running instances? I was not able to do that; glance raised  
error messages when I tried it.


I'm not sure if [1] helps in any way, but there someone tried to edit  
the RBD information for specific objects in his (copied) pool. I hope  
there's a better way than this, though...


Regards,
Eugen

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001453.html
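As a side note, the parent relationship that error is about can be inspected directly with the rbd CLI; the pool names and the UUID placeholders below are assumptions based on a typical nova/glance ceph layout:

```shell
# a CoW-cloned instance disk lists its glance base image in the "parent:" field;
# if that line is missing, the base image is gone or the disk was flattened
rbd info vms/<instance-uuid>_disk

# conversely, list all clones hanging off a glance image's snapshot
rbd children images/<image-uuid>@snap
```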

Zitat von John Petrini <jpetr...@coredial.com>:


Hi List,

We're running Mitaka with Ceph. Recently I enabled RBD snapshots by adding
write permissions to the images pool in Ceph. This works perfectly for some
instances but is failing back to standard snapshots for others with the
following error:

Performing standard snapshot because direct snapshot failed: Cannot
determine the parent storage pool for 7a7b5119-
85da-429b-89b5-ad345cfb649e; cannot determine where to store images

Looking at the code here:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py
it appears that it looks for the pool of the base image to determine where
to save the snapshot. I believe the problem I'm encountering is that for
some of our instances the base image no longer exists.

Am I understanding this correctly and is there anyway to explicitly set the
pool to be used for snapshots and bypass this logic?

Thank You,

John Petrini








Re: [Openstack] ephemeral disks location

2017-01-25 Thread Eugen Block
Why would ephemeral instance disks be copied if the backing store is  
a shared system like Ceph.


Sorry for the confusion, I tried to describe the workflow when you use  
local disk storage for the instances. Of course there's no disk  
copying if you use shared storage like ceph.



Zitat von Jay Pipes <jaypi...@gmail.com>:


On 01/25/2017 03:19 AM, Eugen Block wrote:

All these instances are in our ceph cluster.

The instance path is defined in nova.conf:

# Where instances are stored on disk (string value)
instances_path = $state_path/instances

If one compute node fails but it's able to initiate a migration, the
same instance directory is created on the new host and the disks are
copied to its new compute node.


Why would ephemeral instance disks be copied if the backing store is  
a shared system like Ceph. There would be no need to copy a disk  
image since the destination host's /var/lib/nova/instances directory  
is exactly the same as the source's, right?


Best,
-jay









Re: [Openstack] ephemeral disks location

2017-01-25 Thread Eugen Block

Hi


where are the ephemeral disks stored?


if you decide to use local storage, your instance's disk would be stored in

---cut here---
compute1:~ # ls -l /var/lib/nova/instances/
insgesamt 60
drwxr-xr-x 2 nova nova 4096  3. Jan 11:05 14b75237-7619-481f-9636-792b64d1be17
drwxr-xr-x 2 nova nova 4096  9. Jan 14:58 284007bf-cd6b-42ee-9529-274d259e6812
drwxr-xr-x 2 nova nova 4096  9. Jan 16:13 2c408a5b-8f35-4e12-911a-36005ccff067
drwxr-xr-x 2 nova nova 4096 15. Jan 20:49 3d96bceb-4c9b-4e3c-9275-5d0bf119d47a
drwxr-xr-x 2 nova nova 4096  4. Jan 09:30 3ec7f722-12ef-4962-8059-38accf6f9a63
drwxr-xr-x 2 nova nova 4096  9. Jan 10:55 5b02c021-0b94-4d10-afc0-e0f66b492899
drwxr-xr-x 2 nova nova 4096 16. Dez 07:45 69d3e9da-2842-418c-8d62-5d2fbe805df1
drwxr-xr-x 2 nova nova 4096 19. Dez 10:13 6c30a7a8-6115-416a-820f-2bf3f9c7822f
drwxr-xr-x 2 nova nova 4096 27. Okt 15:20 911b252e-c763-4af2-a1d9-70d0880ee380
drwxr-xr-x 2 nova nova 4096 16. Jan 16:43 931f9a1e-2022-4571-909e-6c3f5f8c3ae8
drwxr-xr-x 2 nova nova 4096  4. Jan 11:19 _base
-rw-r--r-- 1 nova nova   30 25. Jan 08:33 compute_nodes
drwxr-xr-x 2 nova nova 4096 18. Jan 08:15 e96c2932-9bef-414e-bf5c-772f2c28613f
drwxr-xr-x 2 nova nova 4096 16. Dez 07:46 f87aadbf-f39d-4349-bca7-c7097e8c456e
drwxr-xr-x 2 nova nova 4096 24. Jan 13:06 locks

compute1:~ # ls -l  
/var/lib/nova/instances/14b75237-7619-481f-9636-792b64d1be17/

insgesamt 2192
-rw-rw 1 nova nova   0  3. Jan 11:05 console.log
-rw-r--r-- 1 nova nova  79  3. Jan 11:05 disk.info
-rw-r--r-- 1 nova nova 2233488  3. Jan 11:05 kernel
-rw-r--r-- 1 nova nova3259 12. Jan 12:54 libvirt.xml

compute1:~ # cat  
/var/lib/nova/instances/14b75237-7619-481f-9636-792b64d1be17/disk.info

{"/var/lib/nova/instances/14b75237-7619-481f-9636-792b64d1be17/kernel": "raw"}
---cut here---

All these instances are in our ceph cluster.

The instance path is defined in nova.conf:

# Where instances are stored on disk (string value)
instances_path = $state_path/instances

If one compute node fails but is able to initiate a migration, the  
same instance directory is created on the new host and the disks are  
copied to the new compute node.



can I have both options?


I haven't tried it explicitly, but you can switch the option  
"images_type" in nova.conf, and every time you restart  
nova-compute.service, new instances would be created either on the  
compute nodes or in your storage (ceph) etc.
But I would not recommend that; the impact on existing instances could  
be large, e.g. if you try to resize or migrate an instance. As I said,  
I didn't try it myself; in my environment we just switched from  
file-based instances to ceph. To get a better understanding of how  
ceph works, I switched between file-based and ceph-based glance  
images, but only temporarily, of course.
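For reference, the relevant nova.conf options for ceph-backed ephemeral disks look roughly like this sketch (pool, user and secret values are site-specific examples, not taken from this thread):

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

With images_type back to its default, new ephemeral disks land under instances_path on the local disk instead.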



Regards,
Eugen


Zitat von Manuel Sopena Ballesteros <manuel...@garvan.org.au>:


Hi,

I have been searching on the internet and could not find an answer  
to this question.


I understand that ephemeral disks live until the VM is destroyed,  
but where are they stored? On the local host running the VM, in  
centralized storage (e.g. Ceph), or can I have both options?


If ephemeral disks can be stored on the same host the instance is  
running on... what would happen if the host fails and the instance is  
migrated to another one? Will the ephemeral disk be moved across?  
Will the data persist?


Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E:  
manuel...@garvan.org.au<mailto:manuel...@garvan.org.au>


NOTICE
Please consider the environment before printing this email. This  
message and any attachments are intended for the addressee named and  
may contain legally privileged/confidential/copyright information.  
If you are not the intended recipient, you should not read, use,  
disclose, copy or distribute this communication. If you have  
received this message in error please notify us at once by return  
email and then delete both messages. We accept no liability for the  
distribution of viruses or similar in electronic communications.  
This notice should not be removed.







Re: [Openstack] Setting up another compute node

2017-01-24 Thread Eugen Block
a1be6e6cf3 1dd7b6481aa34ef7ba105a7336845369 -  
- -] Security group member updated  
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib  
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port  
e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
2017-01-23 14:09:22.058 8097 INFO  
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent  
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] port_unbound():  
net_uuid None not in local_vlan_map
2017-01-23 14:09:22.059 8097 INFO neutron.agent.securitygroups_rpc  
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Remove device  
filter for [u'e1058d22-9a7b-4988-9644-d0f476a01015']



When I attempt to check the status of the port mentioned there, it  
doesn't exist on either compute node.


(neutron) port-show e1058d22-9a7b-4988-9644-d0f476a01015
Unable to find port with name or id 'e1058d22-9a7b-4988-9644-d0f476a01015'


Thank you very much for your input.




Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer  
Plus<https://www.magentocommerce.com/certification/directory/dev/2215598/>

peter.ki...@objectstream.com<mailto:peter.ki...@objectstream.com>

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 1:48 PM, Trinath Somanchi  
<trinath.soman...@nxp.com<mailto:trinath.soman...@nxp.com>> wrote:

This is the error

 port_unbound(): net_uuid None not in local_vlan_map




From: Peter Kirby  
<peter.ki...@objectstream.com<mailto:peter.ki...@objectstream.com>>

Sent: Tuesday, January 24, 2017 12:45:01 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I am using the CLI so I can force the VM to create on the new host.   
Using this command:


openstack server create \
  --image db090568-9500-4092-ac23-364f25940b2f \
  --flavor m1.small \
  --availability-zone nova:vhost2 \
  --nic net-id=ce7f1bf3-b6b3-45c3-8251-2cbcdc9d4595 \
  temptest

If I do not specify vhost2, this command does successfully create  
the VM on vhost1.





On Mon, Jan 23, 2017 at 12:27 PM, Trinath Somanchi  
<trinath.soman...@nxp.com<mailto:trinath.soman...@nxp.com>> wrote:


Can you post how you spawned the VM? I guess the network is not added.


/Trinath


From: Peter Kirby  
<peter.ki...@objectstream.com<mailto:peter.ki...@objectstream.com>>

Sent: Monday, January 23, 2017 9:22:10 PM
To: OpenStack
Subject: [Openstack] Setting up another compute node

Hi,

I'm currently running OpenStack Mitaka on CentOS 7.2 and I'm trying  
to set up another compute node.


I have nova installed and running and the following neutron packages:
openstack-neutron.noarch  1:8.3.0-1.el7 
@openstack-mitaka
openstack-neutron-common.noarch   1:8.3.0-1.el7 
@openstack-mitaka
openstack-neutron-ml2.noarch  1:8.3.0-1.el7 
@openstack-mitaka
openstack-neutron-openvswitch.noarch  1:8.3.0-1.el7 
@openstack-mitaka
python-neutron.noarch 1:8.3.0-1.el7 
@openstack-mitaka
python-neutron-lib.noarch 0.0.3-1.el7   
@openstack-mitaka
python2-neutronclient.noarch  4.1.2-1.el7   
@openstack-mitaka


The neutron-openvswitch-agent is up and running and I can see it and  
nova from the OpenStack commandline.  Neutron agent-list says the  
new host has the openvswitch agent and it's alive.


However, when I try to deploy an instance to this new host, I get  
the following error and the instance fails to deploy:


2017-01-20 10:51:21.132 24644 INFO neutron.agent.common.ovs_lib  
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] Port  
67b72a38-c553-4f06-953c-92f43d5dea60 not present in bridge br-int
2017-01-20 10:51:21.133 24644 INFO  
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent  
[req-2be33822-4a69-4521-9267-a81315b20b6b - - - - -] port_unbound():  
net_uuid None not in local_vlan_map


Here is the output from ovs-vsctl show:
2e5497fc-6f3a-4761-a99b-d4e95d0614f7
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port "eno1"
Interface "eno1"
Port br-ex
Interface br-ex
type: intern

Re: [Openstack] Unable Upload Image

2017-01-23 Thread Eugen Block
Your MariaDB connection id is 3195
Server version: 10.1.18-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input  
statement.


MariaDB [glance]>


==
MariaDB [keystone]> select * from endpoint where  
service_id='ec45b879f9e0449bb72ad8dcd42e075c';

+--++---+--++---+-+---+
| id   | legacy_endpoint_id | interface  
| service_id   | url| extra  
| enabled | region_id |

+--++---+--++---+-+---+
| 4b715873910c4693a8bc80507ff5826d | NULL   | internal   
| ec45b879f9e0449bb72ad8dcd42e075c | http://controller:9292 | {} 
|   1 | RegionOne |
| 4eaf3cff4e7f40c78b0c8e926911dca3 | NULL   | admin  
| ec45b879f9e0449bb72ad8dcd42e075c | http://controller:9292 | {} 
|   1 | RegionOne |
| ca770afcf6984c36bfc5e3cc5927f179 | NULL   | public 
| ec45b879f9e0449bb72ad8dcd42e075c | http://controller:9292 | {} 
|   1 | RegionOne |

+--++---+--++---+-+---+
3 rows in set (0.00 sec)

===

[root@Controller ~]# openstack endpoint list | grep glance
WARNING: openstackclient.common.utils is deprecated and will be  
removed after Jun 2017. Please use osc_lib.utils
| 4b715873910c4693a8bc80507ff5826d | RegionOne | glance   |  
image| True| internal  | http://controller:9292   
  |
| 4eaf3cff4e7f40c78b0c8e926911dca3 | RegionOne | glance   |  
image| True| admin | http://controller:9292   
  |
| ca770afcf6984c36bfc5e3cc5927f179 | RegionOne | glance   |  
image| True| public| http://controller:9292   
  |

==
Please help further. How can I enable debug logs to identify the  
actual issue?


Regards,
B~Mork


On Sun, Jan 22, 2017 at 12:11 PM, Trinath Somanchi  
<trinath.soman...@nxp.com<mailto:trinath.soman...@nxp.com>> wrote:

Hi-

There might be session issues from GUI. Please logout and login again.

If the error still persists, please refresh the cookies. Also check  
your keystone configuration and auth credentials with glance and  
keystone.


/ Trinath

From: Bjorn Mork [mailto:bjron.m...@gmail.com<mailto:bjron.m...@gmail.com>]
Sent: Sunday, January 22, 2017 11:49 AM
To: openstack@lists.openstack.org<mailto:openstack@lists.openstack.org>
Subject: [Openstack] Unable Upload Image

Hi Team,

I need support with my installation; I am unable to upload images in  
my setup. It shows the error "Error: Unable to retrieve the images."  
in the GUI, although on the CLI the command shown below works fine  
and shows that there is already one image. But it is not shown in the  
GUI...



[root@Controller ~]# source /etc/openstack-scripts/admin-openrc
[root@Controller ~]#
[root@Controller ~]# glance image-list
+--++
| ID   | Name   |
+--++
| ade9e725-21bc-4032-90f7-68d517f6106a | cirros |
+--++
[root@Controller ~]#

You are requested to please help me. Thanks

Regards,
B~Mork








Re: [Openstack] nova backup - instances unreachable

2017-01-23 Thread Eugen Block

Have you enabled live snapshots in nova.conf?

The default for this option is "true", so you should check that:

disable_libvirt_livesnapshot = false

Is it really a live snapshot? What's in the nova-compute.log? It should  
say something like


[instance: XXX] Beginning live snapshot process
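The option lives in the [workarounds] section of nova.conf, so the setting to verify would look like this sketch:

```ini
[workarounds]
# must be false (historically the default was true), otherwise nova
# falls back to cold snapshots and shuts the instance down for the
# duration of the snapshot
disable_libvirt_livesnapshot = false
```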


Regards,
Eugen



Zitat von John Petrini <jpetr...@coredial.com>:


Hi All,

Following up after making this change. Adding write permissions to the
images pool in Ceph did the trick and RBD snapshots now work. However the
instance is still paused for the duration of the snapshot. Is it possible
to do a live snapshot without pausing the instance?

Thanks,

John

On Fri, Jan 13, 2017 at 5:49 AM, Eugen Block <ebl...@nde.ag> wrote:


Thanks,

for anyone interested in this issue, I filed a bug report:
https://bugs.launchpad.net/nova/+bug/1656242


Regards,
Eugen


Zitat von Mohammed Naser <mna...@vexxhost.com>:

It is likely because this has been tested with QEMU only. I think you

might want to bring this up with the Nova team.

Sent from my iPhone

On Jan 12, 2017, at 11:28 AM, Eugen Block <ebl...@nde.ag> wrote:


I'm not sure if this is the right spot, but I added some log statements
into driver.py.
First, there's this if-block:

   if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
  MIN_QEMU_LIVESNAPSHOT_VERSION,
  host.HV_DRIVER_QEMU)
and source_type not in ('lvm')
and not CONF.ephemeral_storage_encryption.enabled
and not CONF.workarounds.disable_libvirt_livesnapshot):
   live_snapshot = True
  [...]
   else:
   live_snapshot = False

And I know that it lands in the else-statement. Turns out that
_host.has_min_version is "false", because of host.HV_DRIVER_QEMU. We are
running on Xen hypervisors. So I tried it with host.HV_DRIVER_XEN and now
nova-compute says:

[instance: 14b75237-7619-481f-9636-792b64d1be17] instance snapshotting
[instance: 14b75237-7619-481f-9636-792b64d1be17] Beginning live
snapshot process

Now I'm waiting for the result, but at least the VM is still running, so
it looks quite promising...

And there it is:

[instance: 14b75237-7619-481f-9636-792b64d1be17] Snapshot image upload
complete

I'm testing the image now, and it works!

Now the question is, why is it defaulting to HV_DRIVER_QEMU and is it
really necessary to change this directly in the code? Is there any other
way?
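For illustration only, the decision in the quoted if-block boils down to something like the following simplified sketch; this is not the actual nova code, and the function name and flag arguments are made up:

```python
def use_live_snapshot(min_version_ok, source_type,
                      ephemeral_encryption_enabled,
                      livesnapshot_disabled):
    """Simplified mirror of the quoted check: a live snapshot is only
    chosen when the libvirt/qemu/driver version comparison passes and
    no other condition rules it out."""
    return (min_version_ok
            and source_type != 'lvm'
            and not ephemeral_encryption_enabled
            and not livesnapshot_disabled)

# a Xen host failing the QEMU driver comparison always ends up "cold",
# no matter how the other flags are set
print(use_live_snapshot(False, 'rbd', False, False))
print(use_live_snapshot(True, 'rbd', False, False))
```

This makes the observed behaviour plausible: on a Xen hypervisor the hard-coded host.HV_DRIVER_QEMU comparison fails, so every snapshot falls back to the cold path.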

Regards,
Eugen

Zitat von Eugen Block <ebl...@nde.ag>:

Yes, I truncated the file and uploaded it:


http://dropcanvas.com/ta7nu
(First time I used this service, please give me feedback if this
doesn't work for you)

I see the "Beginning cold snapshot process" message, but I don't know
why. Any help is appreciated!

Regards,
Eugen


Zitat von Mohammed Naser <mna...@vexxhost.com>:

Would you be able to share the logs of a full snapshot run with the

compute node in debug?

Sent from my iPhone

On Jan 12, 2017, at 7:47 AM, Eugen Block <ebl...@nde.ag> wrote:


That's strange, I also searched for this message, but found nothing.
I have debug logs enabled on the compute node but I don't see anything
regarding ceph. No matter what I do, my instance is always shut down before
shutdown before

a snapshot is taken. What else can I try?


Zitat von John Petrini <jpetr...@coredial.com>:

Mohammed,


It looks like you may be right. Just found the permissions issue in
the
nova log on the compute node.

4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage pool
images

I'm going to test the change and will send an update you all with the
results.

Thank You,

___

John Petrini





Yes, we are also running Mitaka and I also read Sebastien Han's

blogs ;-)

our snapshots are not happening at the RBD level,


they are being copied and uploaded to glance which takes up a lot
of space
and is very slow.



Unfortunately, that's what we are experiencing, too. I don't know if
there's something I missed in the nova configs or somewhere else,
but I'm
relieved that I'm not the only one :-)

While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config
options to
set. If you manage to get nova to make rbd snapshots, please let me
know ;-)

Regards,
Eugen



Zitat von John Petrini <jpetr...@coredial.com>:

Hi Eugen,



Thanks for the response! That makes a lot of sense and is what I
figured
was going on but I missed it in the documentation. We use Ceph as
well and
I had considered doing the snapshots at the RBD level but I was
hoping
there was some way to accomplish this via nova. I came across 

Re: [Openstack] [OpenStack] VM start up with no route rules

2017-01-19 Thread Eugen Block
Does your VM's interface also have DHCP enabled? If it's configured to  
have a static address, it won't be changed by dhcp. Have you used the  
image outside of heat and did it work with dhcp for a single VM?
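To narrow this down from inside the VM, checking the lease, the routing table, and the metadata address directly usually shows which piece is missing. A sketch (the stubs only keep it runnable on machines without these tools; 169.254.169.254 is the fixed metadata address):

```shell
# Stubs keep the sketch runnable outside a VM / without these tools.
command -v ip   >/dev/null 2>&1 || ip()   { echo "ip $*"; }
command -v curl >/dev/null 2>&1 || curl() { echo "curl $*"; }

# 1) Did the interface get an address at all (i.e. did DHCP answer)?
ADDRS=$(ip addr 2>&1 || true)
echo "$ADDRS"

# 2) Is there a default route? Empty output here matches the reported problem.
ip route || true

# 3) Can the metadata server be reached? Without a route this will time out.
curl -s --max-time 3 http://169.254.169.254/openstack/latest/meta_data.json \
    || echo "metadata server not reachable"
```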



Zitat von "Xu, Rongjie (Nokia - CN/Hangzhou)" <rongjie...@nokia.com>:


Hi,

I launched a heat stack on top of Mirantis OpenStack Mitaka.
However, I see no route rules (the output of 'ip route' is
empty) inside the VM, which means the VM cannot get the metadata from
the metadata server. Basically, the VM is connected to a management
network (192.168.1.0/24, DHCP enabled).


How can I debug this problem? Is it something wrong with Neutron? Thanks.



Best Regards
Xu Rongjie (Max)
Mobile:18658176819




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova backup - instances unreachable

2017-01-13 Thread Eugen Block

Thanks,

for anyone interested in this issue, I filed a bug report:  
https://bugs.launchpad.net/nova/+bug/1656242


Regards,
Eugen


Zitat von Mohammed Naser <mna...@vexxhost.com>:

It is likely because this has been tested with QEMU only. I think  
you might want to bring this up with the Nova team.


Sent from my iPhone


On Jan 12, 2017, at 11:28 AM, Eugen Block <ebl...@nde.ag> wrote:

I'm not sure if this is the right spot, but I added some log  
statements into driver.py.

First, there's this if-block:

    if (self._host.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
                                   MIN_QEMU_LIVESNAPSHOT_VERSION,
                                   host.HV_DRIVER_QEMU)
            and source_type not in ('lvm')
            and not CONF.ephemeral_storage_encryption.enabled
            and not CONF.workarounds.disable_libvirt_livesnapshot):
        live_snapshot = True
        [...]
    else:
        live_snapshot = False

And I know that it lands in the else-statement. Turns out that  
_host.has_min_version is "false", because of host.HV_DRIVER_QEMU.  
We are running on Xen hypervisors. So I tried it with  
host.HV_DRIVER_XEN and now nova-compute says:


[instance: 14b75237-7619-481f-9636-792b64d1be17] instance snapshotting
[instance: 14b75237-7619-481f-9636-792b64d1be17] Beginning live  
snapshot process


Now I'm waiting for the result, but at least the VM is still  
running, so it looks quite promising...


And there it is:

[instance: 14b75237-7619-481f-9636-792b64d1be17] Snapshot image  
upload complete


I'm testing the image now, and it works!

Now the question is, why is it defaulting to HV_DRIVER_QEMU and is  
it really necessary to change this directly in the code? Is there  
any other way?


Regards,
Eugen

Zitat von Eugen Block <ebl...@nde.ag>:


Yes, I truncated the file and uploaded it:

http://dropcanvas.com/ta7nu
(First time I used this service, please give me feedback if this  
doesn't work for you)


I see the "Beginning cold snapshot process" message, but I don't  
know why. Any help is appreciated!


Regards,
Eugen


Zitat von Mohammed Naser <mna...@vexxhost.com>:

Would you be able to share the logs of a full snapshot run with  
the compute node in debug?


Sent from my iPhone


On Jan 12, 2017, at 7:47 AM, Eugen Block <ebl...@nde.ag> wrote:

That's strange, I also searched for this message, but nothing  
there. I have debug logs enabled on compute node but I don't see  
anything regarding ceph. No matter what I do, my instance is
always shut down before a snapshot is taken. What else can I try?



Zitat von John Petrini <jpetr...@coredial.com>:


Mohammed,

It looks like you may be right. Just found the permissions issue in the
nova log on the compute node.

4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage  
pool images


I'm going to test the change and will send an update you all with the
results.

Thank You,

___

John Petrini





Yes, we are also running Mitaka and I also read Sebastien  
Han's blogs ;-)


our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a  
lot of space

and is very slow.



Unfortunately, that's what we are experiencing, too. I don't know if
there's something I missed in the nova configs or somewhere  
else, but I'm

relieved that I'm not the only one :-)

While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config  
options to
set. If you manage to get nova to make rbd snapshots, please  
let me know ;-)


Regards,
Eugen



Zitat von John Petrini <jpetr...@coredial.com>:

Hi Eugen,


Thanks for the response! That makes a lot of sense and is
what I figured
was going on but I missed it in the documentation. We use  
Ceph as well and

I had considered doing the snapshots at the RBD level but I was hoping
there was some way to accomplish this via nova. I came across this
Sebastien
Han write-up that claims this functionality was added to Mitaka:
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/

We are running Mitaka but our snapshots are not happening at the RBD
level,
they are being copied and uploaded to glance which takes up a  
lot of space

and is very slow.

Have you or anyone else implemented this in Mitaka? Other  
than Sebastian's

blog I haven't found any documentation on this.

Thank You,

___

John Petrini

On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to  
ensure that al

Re: [Openstack] nova backup - instances unreachable

2017-01-12 Thread Eugen Block

Yes, I truncated the file and uploaded it:

http://dropcanvas.com/ta7nu
(First time I used this service, please give me feedback if this  
doesn't work for you)


I see the "Beginning cold snapshot process" message, but I don't know  
why. Any help is appreciated!


Regards,
Eugen


Zitat von Mohammed Naser <mna...@vexxhost.com>:

Would you be able to share the logs of a full snapshot run with the  
compute node in debug?


Sent from my iPhone


On Jan 12, 2017, at 7:47 AM, Eugen Block <ebl...@nde.ag> wrote:

That's strange, I also searched for this message, but nothing  
there. I have debug logs enabled on compute node but I don't see  
anything regarding ceph. No matter what I do, my instance is
always shut down before a snapshot is taken. What else can I try?



Zitat von John Petrini <jpetr...@coredial.com>:


Mohammed,

It looks like you may be right. Just found the permissions issue in the
nova log on the compute node.

4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage pool images

I'm going to test the change and will send an update you all with the
results.

Thank You,

___

John Petrini






Yes, we are also running Mitaka and I also read Sebastien Han's blogs ;-)

our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a  
lot of space

and is very slow.



Unfortunately, that's what we are experiencing, too. I don't know if
there's something I missed in the nova configs or somewhere else, but I'm
relieved that I'm not the only one :-)

While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config options to
set. If you manage to get nova to make rbd snapshots, please let  
me know ;-)


Regards,
Eugen



Zitat von John Petrini <jpetr...@coredial.com>:

Hi Eugen,


Thanks for the response! That makes a lot of sense and is what I figured
was going on but I missed it in the documentation. We use Ceph  
as well and

I had considered doing the snapshots at the RBD level but I was hoping
there was some way to accomplish this via nova. I came across this
Sebastien
Han write-up that claims this functionality was added to Mitaka:
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/

We are running Mitaka but our snapshots are not happening at the RBD
level,
they are being copied and uploaded to glance which takes up a  
lot of space

and is very slow.

Have you or anyone else implemented this in Mitaka? Other than  
Sebastian's

blog I haven't found any documentation on this.

Thank You,

___

John Petrini

On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that all
data is flushed to disk."

So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to
perform
backups of your VMs.
We use Ceph as backend for nova, glance and cinder. Ceph stores the
disks,
images and volumes as Rados block device objects. We have a  
backup script

that creates snapshots of these RBDs, which are exported to our backup
drive. This way the running VM is not stopped or frozen, the user
doesn't
notice any issues. Unlike a nova snapshot, the rbd snapshot is created
immediately within a few seconds. After a successful backup the  
snapshots

are removed.

Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini <jpetr...@coredial.com>:


Hello,



I've just started experimenting with nova backup and discovered that
there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there
any
way to prevent this?

___

John Petrini





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

   Vorsitzende des Aufsichtsrates: Angelika Mozdzen
 Sitz und Registergericht: Hamburg, HRB 90934
 Vorstand: Jens-U. Mozdzen
  USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwi

Re: [Openstack] nova backup - instances unreachable

2017-01-12 Thread Eugen Block
That's strange, I also searched for this message, but nothing there. I  
have debug logs enabled on compute node but I don't see anything  
regarding ceph. No matter what I do, my instance is always shut down
before a snapshot is taken. What else can I try?



Zitat von John Petrini <jpetr...@coredial.com>:


Mohammed,

It looks like you may be right. Just found the permissions issue in the
nova log on the compute node.

4-e8f52e4fbcfb 691caf1c10354efab3e3c8ed61b7d89a
49bc5e5bf2684bd0948d9f94c7875027 - - -] Performing standard snapshot
because direct snapshot failed: no write permission on storage pool images

I'm going to test the change and will send an update you all with the
results.

Thank You,

___

John Petrini






Yes, we are also running Mitaka and I also read Sebastien Han's blogs ;-)

our snapshots are not happening at the RBD level,

they are being copied and uploaded to glance which takes up a lot of space
and is very slow.



Unfortunately, that's what we are experiencing, too. I don't know if
there's something I missed in the nova configs or somewhere else, but I'm
relieved that I'm not the only one :-)

While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config options to
set. If you manage to get nova to make rbd snapshots, please let me know ;-)

Regards,
Eugen



Zitat von John Petrini <jpetr...@coredial.com>:

Hi Eugen,


Thanks for the response! That makes a lot of sense and is what I figured
was going on but I missed it in the documentation. We use Ceph as well and
I had considered doing the snapshots at the RBD level but I was hoping
there was some way to accomplish this via nova. I came across this
Sebastien
Han write-up that claims this functionality was added to Mitaka:
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/

We are running Mitaka but our snapshots are not happening at the RBD
level,
they are being copied and uploaded to glance which takes up a lot of space
and is very slow.

Have you or anyone else implemented this in Mitaka? Other than Sebastian's
blog I haven't found any documentation on this.

Thank You,

___

John Petrini

On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block <ebl...@nde.ag> wrote:

Hi,


this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that all
data is flushed to disk."

So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to
perform
backups of your VMs.
We use Ceph as backend for nova, glance and cinder. Ceph stores the
disks,
images and volumes as Rados block device objects. We have a backup script
that creates snapshots of these RBDs, which are exported to our backup
drive. This way the running VM is not stopped or frozen, the user
doesn't
notice any issues. Unlike a nova snapshot, the rbd snapshot is created
immediately within a few seconds. After a successful backup the snapshots
are removed.

Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini <jpetr...@coredial.com>:


Hello,



I've just started experimenting with nova backup and discovered that
there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there
any
way to prevent this?

___

John Petrini





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983






--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag


Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread Eugen Block

Have you or anyone else implemented this in Mitaka?


Yes, we are also running Mitaka and I also read Sebastien Han's blogs ;-)


our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a lot of space
and is very slow.


Unfortunately, that's what we are experiencing, too. I don't know if  
there's something I missed in the nova configs or somewhere else, but  
I'm relieved that I'm not the only one :-)


While writing this email I searched again and found something:

https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html

https://review.openstack.org/#/c/205282/

It seems to be implemented already, I'm looking for the config options  
to set. If you manage to get nova to make rbd snapshots, please let me  
know ;-)


Regards,
Eugen


Zitat von John Petrini <jpetr...@coredial.com>:


Hi Eugen,

Thanks for the response! That makes a lot of sense and is what I figured
was going on but I missed it in the documentation. We use Ceph as well and
I had considered doing the snapshots at the RBD level but I was hoping
there was some way to accomplish this via nova. I came across this Sebastien
Han write-up that claims this functionality was added to Mitaka:
http://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/

We are running Mitaka but our snapshots are not happening at the RBD level,
they are being copied and uploaded to glance which takes up a lot of space
and is very slow.

Have you or anyone else implemented this in Mitaka? Other than Sebastian's
blog I haven't found any documentation on this.

Thank You,

___

John Petrini

On Wed, Jan 11, 2017 at 3:32 AM, Eugen Block <ebl...@nde.ag> wrote:


Hi,

this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that all
data is flushed to disk."

So if the VM is not shut down, it's frozen to prevent data loss (I
guess). Depending on your storage backend, there are other ways to perform
backups of your VMs.
We use Ceph as backend for nova, glance and cinder. Ceph stores the disks,
images and volumes as Rados block device objects. We have a backup script
that creates snapshots of these RBDs, which are exported to our backup
drive. This way the running VM is not stopped or frozen, the user doesn't
notice any issues. Unlike a nova snapshot, the rbd snapshot is created
immediately within a few seconds. After a successful backup the snapshots
are removed.

Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini <jpetr...@coredial.com>:


Hello,


I've just started experimenting with nova backup and discovered that there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there
any
way to prevent this?

___

John Petrini





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova backup - instances unreachable

2017-01-11 Thread Eugen Block

Hi,

this seems to be expected, the docs say:

"Shut down the source VM before you take the snapshot to ensure that  
all data is flushed to disk."


So if the VM is not shut down, it's frozen to prevent data loss (I  
guess). Depending on your storage backend, there are other ways to  
perform backups of your VMs.
We use Ceph as backend for nova, glance and cinder. Ceph stores the  
disks, images and volumes as Rados block device objects. We have a  
backup script that creates snapshots of these RBDs, which are exported  
to our backup drive. This way the running VM is not stopped or  
frozen, the user doesn't notice any issues. Unlike a nova snapshot,  
the rbd snapshot is created immediately within a few seconds. After a  
successful backup the snapshots are removed.
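The snapshot-export-remove cycle described above can be sketched roughly like this. The pool and image names are hypothetical examples (nova's RBD disks are typically named after the instance UUID with a _disk suffix), and rbd is stubbed with an echo when ceph is not installed, so this is a dry-runnable sketch rather than our actual script:

```shell
# rbd is stubbed with an echo when ceph is absent, making this a dry run.
command -v rbd >/dev/null 2>&1 || rbd() { echo "rbd $*"; }

POOL=vms                        # hypothetical pool holding the nova disks
IMG=instance-uuid_disk          # hypothetical image name
SNAP="backup-$(date +%Y%m%d)"   # date-stamped snapshot name

rbd snap create "$POOL/$IMG@$SNAP"                      # instant, VM keeps running
rbd export "$POOL/$IMG@$SNAP" "/backup/$IMG-$SNAP.img"  # copy to the backup drive
rbd snap rm "$POOL/$IMG@$SNAP"                          # clean up after the export
```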


Hope this helps! If you are interested in Ceph, visit [1].

Regards,
Eugen

[1] http://docs.ceph.com/docs/giant/start/intro/


Zitat von John Petrini <jpetr...@coredial.com>:


Hello,

I've just started experimenting with nova backup and discovered that there
is a period of time during the snapshot where the instance becomes
unreachable. Is this behavior expected during a live snapshot? Is there any
way to prevent this?

___

John Petrini




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] FLOATING IP ISSUE

2016-10-19 Thread Eugen Block
Did you check your security group settings? They block almost all  
traffic by default. Are you able to log in to your VM via SSH, or how do  
you connect?
This is one of the most frequently asked questions, so there should  
already be plenty of answers for all kinds of environments on  
http://ask.openstack.org/.
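For reference, opening ICMP and SSH in the default security group looks roughly like this with the Liberty-era nova client (treat the exact subcommands as a sketch; the client is stubbed with an echo when not installed):

```shell
# The nova client is stubbed with an echo when it is not installed.
command -v nova >/dev/null 2>&1 || nova() { echo "nova $*"; }

# Allow ICMP (ping) and SSH from anywhere for instances in the "default" group.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# Show what the group now permits.
RULES=$(nova secgroup-list-rules default 2>&1 || true)
echo "$RULES"
```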


Regards,
Eugen


Zitat von venkat boggarapu <venkat.boggar...@gmail.com>:


i have deployed openstack liberty on centos 7 with 5 node architecture.

1.controller, 2.compute 3.block 4 object01 5 object02

When I launched a VM I was able to get an internal IP and a floating IP,

but I am unable to connect from outside the network with the floating IP
or to ping google.com from inside the VM.

Please, can someone help with this?


With regards
venkat




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Guest VM IP configuration script

2016-08-25 Thread Eugen Block
I'm not sure if I understand this correctly, but I believe we have a  
similar setup:


You have existing VLANs which have their own DHCP server and existing  
instances running. And you want to add new instances having their  
network interfaces in these VLANs, therefore you intend to disable  
neutron's dhcp server for that network. But your statement



neutron subnet-create  ext_net --gateway 10.35.1.254 10.35.1.0/24 --
--enable_dhcp=True


enables dhcp on that subnet, I think you want to use the  
router:external option and disable dhcp:


neutron subnet-create ext_net --gateway 10.35.1.254 10.35.1.0/24  
--disable-dhcp --router:external True


See [1] for details on external networks.

This way your new instances won't get any ip address and your existing  
instances won't lose their network config.
Now if you have that setup, you can use --config-drive with nova boot,  
the instance will receive its ip address that neutron has created.


I hope I got this right.

[1] http://docs.openstack.org/mitaka/networking-guide/scenario-classic-lb.html

Zitat von Satish Patel <satish@gmail.com>:


I am planning to upgrade liberty to mitaka now, so I hope I will have
the latest and greatest features to solve this problem. Why doesn't the
openstack community have a straightforward solution?

When I was trying to use DHCP in openstack I found that openstack DHCP
started providing IP addresses to my existing LAN machines (we are using
a flat VLAN with neutron); that is why I disabled openstack DHCP. Is this
common or am I doing something wrong?

We have an existing DHCP server in our LAN; I don't want openstack DHCP
to take over and start assigning its own IPs to the provider LAN.

neutron subnet-create  ext_net --gateway 10.35.1.254 10.35.1.0/24 --
--enable_dhcp=True

On Thu, Aug 25, 2016 at 9:26 AM, Satish Patel <satish@gmail.com> wrote:

Eugen,

I think config-drive makes sense when you don't have an initial network in
place. In my case I don't care about a fixed IP for the instance. I only
need to set up the network using whatever IP neutron provides. In that case,
how do I query the neutron port to find out what IP address is available
or what neutron is going to provide, so I can take that information and
pass it to userdata? It sounds tricky; any idea how to do that?

On Thu, Aug 25, 2016 at 2:53 AM, Eugen Block <ebl...@nde.ag> wrote:

Hi,

we've been trying to learn how to feed cloud-init with ip  
addresses, too. If

DHCP is disabled in your network, the instance won't get its eth0
configured and won't be able to query the metadata server.
Creating a port before attaching it to a booting instance also doesn't work
if no dhcp is running on that network, I just tried that to be sure.

I've tried several ways but I only found one working option. For external
networks (or networks without dhcp) we are using config-drive now.  
Depending

on the OpenStack version it could be possible that you'll need
cloud-init-0.7.7, we had to fix two issues ourselves in version  
0.7.6 to get

it working, one of them was a missing default route.

With enabled config-drive the instance doesn't need a configured interface,
it's a temporarily mounted drive from where the required  
information is read

by cloud-init.
You can either add the option "--config-drive true" in your nova boot call
or check the checkbox in Horizon.

To answer your question about ports, you can create a new port either in
Horizon, but there you won't be able to assign a specific ip  
address. If you

want a specific ip address you have to call neutron port-create (port-name
is optional):

   neutron port-create <network> --fixed-ip subnet_id=<subnet-id>,ip_address=<ip-address> --name <port-name>

The resulting ID of that port can be used in nova boot call:

   nova boot --flavor 2 --image <image> --nic port-id=<port-id>


Another way to assign a specific ip address to a booting instance without
port-creation (but DHCP has to be enabled) would be:

   nova boot --flavor 2 --image <image> --nic net-id=<net-id>,v4-fixed-ip=<ip-address> <vm-name>

for example:
   nova boot --flavor 2 --image dc05b777-3122-4021-b7eb-8d96fdab2980 --nic
net-id=4421e160-d675-49f2-8c29-9722aebf03b2,v4-fixed-ip=192.168.124.6 test1
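To later find out which address neutron actually assigned to such a port (for example, to pass it on to userdata), the port can be queried for its fixed_ips field. A sketch; the port ID is a hypothetical placeholder and the neutron client is stubbed with an echo when not installed:

```shell
# The neutron client is stubbed with an echo when it is not installed.
command -v neutron >/dev/null 2>&1 || neutron() { echo "neutron $*"; }

PORT_ID=example-port-id   # hypothetical; use the ID returned by port-create

# fixed_ips holds the subnet id plus the ip_address neutron picked for the port.
FIXED_IPS=$(neutron port-show "$PORT_ID" -c fixed_ips -f value 2>&1 || true)
echo "assigned: $FIXED_IPS"
```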

Hope this helps!


Zitat von Satish Patel <satish@gmail.com>:



My question is how to query ports and pass the info to cloud-init. Is
there any document or API which I can call from a script to set up the
network ifcfg-eth0 file?

On Wed, Aug 24, 2016 at 5:38 PM, Kaustubh Kelkar
<kaustubh.kel...@casa-systems.com> wrote:


You can create the ports beforehand and plug them in while creating the
instance. As for assigning IP addresses, you can query the ports and pass
the information to cloud-init. I am not sure if there is any  
other way to do this.

Even if DHCP is disabled, OpenStack assigns IP information to ports when
a VM is created, and you can see this in your dashboard. The MAC and IP
information is used to configure iptables rules within security  
groups. Here is the archived thread that provides this information:
http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html

Re: [Openstack] Guest VM IP configuration script

2016-08-25 Thread Eugen Block
You can download the tarball for cloud-init-0.7.7 from [1] already.  
You would have to build the packages yourself, I don't know if that's  
an option for you.
Currently, we're in a review process at OBS to upgrade from 0.7.6 to  
0.7.7 based on this new tarball. In case it will be accepted you'll  
have the newest rpm in the official repositories of openSUSE incl. a  
fix for missing default gateway. Please notice that we're running  
Mitaka on openSUSE distro, not Liberty.


Regards,
Eugen

[1] https://launchpad.net/cloud-init/+download


Zitat von Andreas Scheuring <scheu...@linux.vnet.ibm.com>:


We faced a similar issue while doing some tests in the past.
In any case you need to use the config drive. This is the only way how
your instance can access the IP information required.


There seem to be 3 ways for doing the configuration

#1 There is some code for cloud-init in review [1], that would do that.
But that would need to be merged first, a new version release is
required and your distro needs to pick that release. I'm not sure about
the current state, but it seems to be still in review.

#2 Use Glean [2] instead of cloud-init. It's an alternative to
cloud-init. But of course your image needs to have glean installed and
configured.

#3 hack your own solution. We did that for our limited scenario. With
nova file inject we injected a larger python script, doing all the
configuration. We used cloud-init to execute that script (directly
passing that script in with cloud-init was not possible, as it was too
large - that's why this hack was required :P). I would not recommend
going this way, because you need to consider all the things like routes
and so on.


I personally think the best solution is #2 for now.



[1]
https://code.launchpad.net/~harlowja/cloud-init/cloud-init-net-sysconfig
[2] http://docs.openstack.org/infra/glean/



--
-
Andreas
IRC: andreas_s (formerly scheuran)



On Do, 2016-08-25 at 08:53 +0200, Eugen Block wrote:

Hi,

we've been trying to learn how to feed cloud-init with ip addresses,
too. If DHCP is disabled in your network, the instance won't get its
eth0 configured and won't be able to query the metadata server.
Creating a port before attaching it to a booting instance also doesn't
work if no dhcp is running on that network, I just tried that to be
sure.

I've tried several ways but I only found one working option. For
external networks (or networks without dhcp) we are using config-drive
now. Depending on the OpenStack version it could be possible that
you'll need cloud-init-0.7.7, we had to fix two issues ourselves in
version 0.7.6 to get it working, one of them was a missing default
route.

With enabled config-drive the instance doesn't need a configured
interface, it's a temporarily mounted drive from where the required
information is read by cloud-init.
You can either add the option "--config-drive true" in your nova boot
call or check the checkbox in Horizon.
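
For what it's worth, inside the guest the config drive contains JSON
files (e.g. openstack/latest/network_data.json) that cloud-init parses
to configure the network. A minimal sketch of pulling the address and
gateway out of such a structure - the sample content below is invented
for illustration, only the general field layout follows the
network_data format:

```python
import json

# Invented sample of what a config drive's network_data.json can look
# like; a real file contains the data of your neutron port.
sample = """{
  "networks": [{
    "id": "network0", "type": "ipv4", "link": "tap0",
    "ip_address": "192.168.124.6", "netmask": "255.255.255.0",
    "routes": [{"network": "0.0.0.0", "netmask": "0.0.0.0",
                "gateway": "192.168.124.1"}]
  }]
}"""

# Take the first network entry and read the fields a script would need
net = json.loads(sample)["networks"][0]
print(net["ip_address"], net["netmask"], net["routes"][0]["gateway"])
```

A script like this (or cloud-init itself) can then write the values
into the distro's network configuration.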

To answer your question about ports, you can create a new port either
in Horizon, but there you won't be able to assign a specific ip
address. If you want a specific ip address you have to call neutron
port-create (port-name is optional):

neutron port-create  --fixed-ip
subnet_id=,ip_address= --name 

The resulting ID of that port can be used in the nova boot call:

nova boot --flavor 2 --image  --nic port-id=


Another way to assign a specific ip address to a booting instance
without port-creation (but DHCP has to be enabled) would be:

nova boot --flavor 2 --image  --nic
net-id=,v4-fixed-ip= 

for example:
nova boot --flavor 2 --image dc05b777-3122-4021-b7eb-8d96fdab2980
--nic
net-id=4421e160-d675-49f2-8c29-9722aebf03b2,v4-fixed-ip=192.168.124.6
test1

Hope this helps!
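
Regarding the ifcfg-eth0 part of Satish's question: the fixed IP and
MAC that "neutron port-show <port-id>" reports can be rendered into an
ifcfg file by a small helper you ship into the guest (via cloud-init,
for example). A rough sketch - the function name and the sample values
are made up:

```python
# Render a RHEL-style ifcfg-eth0 from values taken out of
# `neutron port-show` (the fixed_ips and mac_address fields).
def render_ifcfg(ip, prefix, gateway, mac, device="eth0"):
    lines = [
        "DEVICE=%s" % device,
        "BOOTPROTO=static",
        "ONBOOT=yes",
        "HWADDR=%s" % mac,
        "IPADDR=%s" % ip,
        "PREFIX=%d" % prefix,
        "GATEWAY=%s" % gateway,
    ]
    return "\n".join(lines) + "\n"

# Example values only, not output of a real port:
print(render_ifcfg("192.168.124.6", 24, "192.168.124.1", "fa:16:3e:00:11:22"))
```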


Quoting Satish Patel <satish@gmail.com>:

> My question is how to query ports and pass info to cloud-init?  is
> there any document or api which i can call using script and setup
> network ifcfg-eth0 file
>
> On Wed, Aug 24, 2016 at 5:38 PM, Kaustubh Kelkar
> <kaustubh.kel...@casa-systems.com> wrote:
>> You can create the ports beforehand and plug them in while creating
>> the instance. As for assigning IP addresses, you can query the
>> ports and pass the information to cloud-init. I am not sure if
>> there is any other way to do this.
>>
>> Even if DHCP is disabled, OpenStack assigns IP information to ports
>> when a VM is created, and you can see this in your dashboard. The
>> MAC and IP information is used to configure iptables rules within
>> security groups. Here is the archived thread that provides this
>> information:
>>  
http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html.

>>
>>
>> -Kaustubh
>>
>>> -Original Message-
>>> From: Satish Patel [mailto:satish@gmail.com]
>>> Sent: Wednesday, August 24, 2016 5:05 PM

Re: [Openstack] Guest VM IP configuration script

2016-08-25 Thread Eugen Block

Hi,

we've been trying to learn how to feed cloud-init with IP addresses,  
too. If DHCP is disabled in your network, the instance won't get its  
eth0 configured and won't be able to query the metadata server.
Creating a port before attaching it to a booting instance also doesn't  
work if no DHCP is running on that network; I just tried that to be  
sure.


I've tried several ways but I only found one working option. For  
external networks (or networks without dhcp) we are using config-drive  
now. Depending on the OpenStack version it could be possible that  
you'll need cloud-init-0.7.7, we had to fix two issues ourselves in  
version 0.7.6 to get it working, one of them was a missing default  
route.


With enabled config-drive the instance doesn't need a configured  
interface, it's a temporarily mounted drive from where the required  
information is read by cloud-init.
You can either add the option "--config-drive true" in your nova boot  
call or check the checkbox in Horizon.


To answer your question about ports, you can create a new port either  
in Horizon, but there you won't be able to assign a specific ip  
address. If you want a specific ip address you have to call neutron  
port-create (port-name is optional):


   neutron port-create  --fixed-ip  
subnet_id=,ip_address= --name 


The resulting ID of that port can be used in the nova boot call:

   nova boot --flavor 2 --image  --nic port-id=  



Another way to assign a specific ip address to a booting instance  
without port-creation (but DHCP has to be enabled) would be:


   nova boot --flavor 2 --image  --nic  
net-id=,v4-fixed-ip= 


for example:
   nova boot --flavor 2 --image dc05b777-3122-4021-b7eb-8d96fdab2980  
--nic  
net-id=4421e160-d675-49f2-8c29-9722aebf03b2,v4-fixed-ip=192.168.124.6  
test1


Hope this helps!


Quoting Satish Patel <satish@gmail.com>:


My question is how to query ports and pass info to cloud-init?  is
there any document or api which i can call using script and setup
network ifcfg-eth0 file

On Wed, Aug 24, 2016 at 5:38 PM, Kaustubh Kelkar
<kaustubh.kel...@casa-systems.com> wrote:
You can create the ports beforehand and plug them in while creating  
the instance. As for assigning IP addresses, you can query the  
ports and pass the information to cloud-init. I am not sure if  
there is any other way to do this.


Even if DHCP is disabled, OpenStack assigns IP information to ports  
when a VM is created, and you can see this in your dashboard. The  
MAC and IP information is used to configure iptables rules within  
security groups. Here is the archived thread that provides this  
information:  
http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html.



-Kaustubh


-Original Message-
From: Satish Patel [mailto:satish@gmail.com]
Sent: Wednesday, August 24, 2016 5:05 PM
To: James Downs <e...@egon.cc>
Cc: openstack <openstack@lists.openstack.org>
Subject: Re: [Openstack] Guest VM IP configuration script

I am using neutron networking with vlan ( its provider VLAN). We  
are not using
DHCP but i need some kind of hack to inject IP address in instance  
using cloud-

init.

We are using cloud-init but i don't know how does it work and get IP from
neutron. I am new with neutron stuff.

On Wed, Aug 24, 2016 at 4:29 PM, James Downs <e...@egon.cc> wrote:
> On Wed, Aug 24, 2016 at 03:25:26PM -0400, Satish Patel wrote:
>> I enabled following in nova.conf on compute node but didn't work :(
>>
>> flat_injected=true
>>
>> Do i need to do anything else?
>
> Are you using flat networking?
> Nova-networks or Neutron?
>
> At this point, if you're not using DHCP, your only option is to  
arrange to feed
the networking information into the metadata for the VM at  
creation time, and

use someting like cloud-init to configure the networking. The ancient
networking injection stuff has either been removed, or been broken  
for years.

>
> Cheers,
> -j
>
> ___
> Mailing list:  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack






--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg  

Re: [Openstack] persistent storage on local disc

2016-08-17 Thread Eugen Block

Hi,


Is it possible to keep vms persistent storage on compute node local


sure, it's the default behaviour of nova if you don't specify other  
backends in the nova.conf.


Regards,
Eugen
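
For reference, a sketch of the nova.conf settings involved - to my
knowledge these are the stock defaults, shown here explicitly (adjust
the path to your installation):

```
[DEFAULT]
# local directory where the ephemeral qcow2 disks are kept by default
instances_path = /var/lib/nova/instances

[libvirt]
# leaving images_type at its default keeps disks as local qcow2 files;
# values like 'rbd' or 'lvm' are what move storage off the local node
images_type = default
```

So doing nothing special already gives you local-disk VM storage.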

Quoting e...@ezit.hu:


Hi,

Is it possible to keep the VMs' persistent storage on the compute  
node's local disk somehow?

I don't need HA and I don't want to spawn those VMs on any other compute node.
I don't want to lose I/O performance with iSCSI LVM volumes;
basically what I need is to simply pass the SSD LVM volume or  
qcow2 image file to the VM.


Regards,
enax



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983




Re: [Openstack] [Keystone] List group members with policy.v3cloudsample.json

2016-08-04 Thread Eugen Block
I just tried to reproduce that with a test domain, but I didn't get  
any errors. Did you make sure that your environment script uses the  
right credentials for (user) domain scope? I've had my share of  
trouble with them a couple of times...



Quoting 林自均 <johnl...@gmail.com>:


Hi Eugen,

I have no problem with the cloud admin, so I guess your workaround doesn't
work for me. What disturbs me is the unexpected behavior of the domain
admin.

John

Eugen Block <ebl...@nde.ag> wrote on Thu, Aug 4, 2016 at 3:34 PM:


Hi,

I had a similar issue recently [1]; I had to adjust my policy file
because for some reason "domain_id:default" was not applied. Instead I
use "user_domain_id:default", which works fine now.

---cut here---
control1:~ # grep "\"cloud_admin\":" /etc/keystone/policy.json
 "cloud_admin": "rule:admin_required and (domain_id:default or
user_domain_id:default)",
---cut here---

And I added it as an OR statement as a workaround to keep the original
statement. Hope this helps!

Regards,
Eugen

[1] http://lists.openstack.org/pipermail/openstack/2016-June/016454.html


Quoting 林自均 <johnl...@gmail.com>:

> Hi all,
>
> My OpenStack version is Mitaka. I updated my /etc/keystone/policy.json to
> policy.v3cloudsample.json
> <
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json
>.
> Most functions work as expected.
>
> However, when I wanted to list members in a group as a domain admin, an
> error occurred: “You are not authorized to perform the requested action:
> identity:list_users_in_group (HTTP 403)”.
>
> The reproduce steps are:
>
>- As cloud admin:
>   - openstack domain create taiwan
>   - openstack user create --domain taiwan --password 5ecret
>   taiwan-president
>   - openstack role add --user taiwan-president --domain taiwan admin
>- As taiwan-president:
>   - openstack group create --domain taiwan indigenous
>   - openstack user create --domain taiwan margaret
>   - openstack group add user --group-domain taiwan indigenous
margaret
>   - openstack user list --group indigenous --domain taiwan
>
> The last command will generate the 403 error.
>
> The rule for identity:list_users_in_group is rule:cloud_admin or
> rule:admin_and_matching_target_group_domain_id. I can successfully list
> group members if I changed it to rule:admin_required.
>
> Am I doing anything wrong? Or did I run into some kind of bug? Thanks for
> the help.
>
> John
> ​



--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

 Vorsitzende des Aufsichtsrates: Angelika Mozdzen
   Sitz und Registergericht: Hamburg, HRB 90934
   Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983











Re: [Openstack] [Keystone] List group members with policy.v3cloudsample.json

2016-08-04 Thread Eugen Block

Hi,

I had a similar issue recently [1]; I had to adjust my policy file  
because for some reason "domain_id:default" was not applied. Instead I  
use "user_domain_id:default", which works fine now.


---cut here---
control1:~ # grep "\"cloud_admin\":" /etc/keystone/policy.json
"cloud_admin": "rule:admin_required and (domain_id:default or  
user_domain_id:default)",

---cut here---

And I added it as an OR statement as a workaround to keep the original  
statement. Hope this helps!


Regards,
Eugen

[1] http://lists.openstack.org/pipermail/openstack/2016-June/016454.html


Quoting 林自均 <johnl...@gmail.com>:


Hi all,

My OpenStack version is Mitaka. I updated my /etc/keystone/policy.json to
policy.v3cloudsample.json
<https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json>.
Most functions work as expected.

However, when I wanted to list members in a group as a domain admin, an
error occurred: “You are not authorized to perform the requested action:
identity:list_users_in_group (HTTP 403)”.

The reproduce steps are:

   - As cloud admin:
  - openstack domain create taiwan
  - openstack user create --domain taiwan --password 5ecret
  taiwan-president
  - openstack role add --user taiwan-president --domain taiwan admin
   - As taiwan-president:
  - openstack group create --domain taiwan indigenous
  - openstack user create --domain taiwan margaret
  - openstack group add user --group-domain taiwan indigenous margaret
  - openstack user list --group indigenous --domain taiwan

The last command will generate the 403 error.

The rule for identity:list_users_in_group is rule:cloud_admin or
rule:admin_and_matching_target_group_domain_id. I can successfully list
group members if I changed it to rule:admin_required.

Am I doing anything wrong? Or did I run into some kind of bug? Thanks for
the help.

John
​






Re: [Openstack] [OpenStack] Glance: Unable to create image.

2016-08-03 Thread Eugen Block

So you're running Juno; I'm not sure I can help here...
Are the other OpenStack services running? What is the output of
nova service-list
cinder service-list
keystone user-list

Can you create an empty volume? If this works, then your cinder service  
is probably configured correctly and glance is not, and you should  
check the docs and your actual configuration again.


Are there any other error messages from other services?


Quoting shivkumar gupta <shivkumar_gupt...@yahoo.com>:


Hello Eugen,
I am following the attached guide.

Regards,
Shiv

On Tuesday, 2 August 2016 6:19 PM, Eugen Block <ebl...@nde.ag> wrote:


 Which guide are you using?
I don't see any domains in your glance-api.conf or 
glance-registry.conf, an excerpt from Mitaka guide:

---cut here---
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
---cut here---

Did you set the environment variables in your scripts correctly 
(OS_IMAGE_API_VERSION=2)? There are several points that could lead to 
the authorization error. I don't use SSL in my test environment, I 
don't know if this is another point to check.

Regards.
Eugen

Quoting shivkumar gupta <shivkumar_gupt...@yahoo.com>:


Thanks Trinath,
I already verified the configuration against the document. Can you please
tell me what exactly I should verify, and also what the authentication
flow is while creating an image in glance?

    On Monday, 1 August 2016 3:09 PM, Trinath Somanchi 
<trinath.soman...@nxp.com> wrote:


Hi Shiv-

The error clearly mentions it's a misconfiguration of keystone.

Reverify your glance configuration for the keystone-glance
authentication credentials - the ones you created while installing and
configuring glance.

/ Trinath

From: shivkumar gupta [mailto:shivkumar_gupt...@yahoo.com]
Sent: Monday, August 01, 2016 2:42 PM
To: OpenStack Mailing List <openstack@lists.openstack.org>
Subject: Re: [Openstack] [OpenStack] Glance: Unable to create image.

Hello Experts,

Please suggest and help to proceed further.

Regards
Shiv

On Sunday, 31 July 2016 5:04 PM, shivkumar gupta
<shivkumar_gupt...@yahoo.com> wrote:

Hello Experts,

I am unable to create an image during glance installation and get the
following error:

glance image-create --name "Cirros" --file
/tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2
--container-format bare --is-public True --progress
[=>] 100%
Request returned failure status. Invalid OpenStack Identity credentials.

From api.log I can see the following errors were present:

2016-07-30 21:36:17.135 7114 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:17.145 7114 WARNING
keystoneclient.middleware.auth_token [-] Retrying on HTTP connection
exception: [Errno 1] _ssl.c:510: error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:17.648 7114 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:17.671 7114 WARNING
keystoneclient.middleware.auth_token [-] Retrying on HTTP connection
exception: [Errno 1] _ssl.c:510: error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:18.673 7114 INFO urllib3.connectionpool [-] Starting
new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:18.686 7114 WARNING
keystoneclient.middleware.auth_token [-] Retrying on HTTP connection
exception: [Errno 1] _ssl.c:510: error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:20.690 7114 INFO urllib3.connectionpool [-] Starting
new HTTPS conne

Re: [Openstack] [OpenStack] Glance: Unable to create image.

2016-08-02 Thread Eugen Block
/images HTTP/1.1" 401 381 3.642616

root@controller:/home/shiv# keystone user-list
| id                               | name   | enabled | email                       |
| 4b5b3e35bf7646c5ab2151f0e641c71e | admin  | True    | shivkumar_gupt...@yahoo.com |
| 823971bd62db441c8aa85726af4e0029 | demo   | True    | d...@example.com            |
| 66694ef0feae472f89ca96416234b48f | glance | True    |                             |

root@controller:/home/shiv# keystone user-role-list
| id                               | name  | user_id                          | tenant_id                        |
| df629a49703241ec88c81c5e756da6f5 | admin | 4b5b3e35bf7646c5ab2151f0e641c71e | a574030c4c104b80aee84478388c42c6 |

root@controller:/home/shiv# keystone tenant-list
| id                               | name    | enabled |
| a574030c4c104b80aee84478388c42c6 | admin   | True    |
| 67aba69109bc4505a119693676324d90 | demo    | True    |
| a7a4647b7a0c459f87cdef7357edf0ba | service | True    |

root@controller:/home/shiv# keystone user-get glance
| Property | Value                            |
| email    |                                  |
| enabled  | True                             |
| id       | 66694ef0feae472f89ca96416234b48f |
| name     | glance                           |
| username | glance                           |

root@controller:/home/shiv# keystone endpoint-list
| id                               | region    | publicurl                   | internalurl                 | adminurl                     | service_id                       |
| 26f2fe4829d847a685519f6a8a24e7e3 | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 56df0a2a6d254c23b50939c747bdad28 |
| 7770dbe6f8b34613a72c2d6a04aef58d | regionOne | http://controller:9292      | http://controller:9292      | http://controller:9292       | b705791276b64727aa598b13457fa847 |

root@controller:/home/shiv# keystone service-list
| id                               | name     | type     | description             |
| b705791276b64727aa598b13457fa847 | glance   | image    | Openstack Image Service |
| 56df0a2a6d254c23b50939c747bdad28 | keystone | identity | Openstack Identity      |

root@controller:/home/shiv# keystone service-get glance
| Property    | Value                            |
| description | Openstack Image Service          |
| enabled     | True                             |
| id          | b705791276b64727aa598b13457fa847 |
| name        | glance                           |
| type        | image                            |

Please suggest. api.conf and registry.conf file is attached.

Regards
Shiv




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Launch an instance from the images tab

2016-08-02 Thread Eugen Block
No, I mean an instance snapshot ;-) Please see the attached screenshot  
of the "launch instance" dialog. I filed a bug report for that:  
https://bugs.launchpad.net/horizon/+bug/1608565


Regards,
Eugen


Quoting Turbo Fredriksson <tu...@bayour.com>:


On Aug 1, 2016, at 3:11 PM, Eugen Block wrote:

Project->Compute->Volumes->Volume Snapshots->[on a  
snapshot]->Launch as Instance


Now I tried launching the instance for all 4 source types (Image,  
Instance snapshot, Volume, Volume snapshot), and 3 of them do  
actually pre-allocate the selected source, just "instance snapshot"  
does not.


Could you verify that, too, please? If you can confirm I'll report a bug.



What do you exactly mean by 'instance snapshot'? If you're meaning
the same as i do above, then it DO work for me!
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.








Re: [Openstack] Launch an instance from the images tab

2016-08-01 Thread Eugen Block
Project->Compute->Volumes->Volume Snapshots->[on a snapshot]->Launch  
as Instance


Now I tried launching the instance for all 4 source types (Image,  
Instance snapshot, Volume, Volume snapshot), and 3 of them do actually  
pre-allocate the selected source, just "instance snapshot" does not.


Could you verify that, too, please? If you can confirm I'll report a bug.


Quoting Turbo Fredriksson <tu...@bayour.com>:


On Aug 1, 2016, at 8:58 AM, Eugen Block wrote:

I found out what it is. If I select a regular image (type "Image")  
I have it pre-selected, too. But if I select a snapshot (type  
"Snapshot") it is not pre-selected. In the "Launch instance" dialog  
I have to select the snapshot from the dropdown again. Can anyone  
confirm this?



Works for me..

  Project->Compute->Volumes->Volume Snapshots->[on a  
snapshot]->Launch as Instance


My 'Source' tab is preselected with

  Select Boot Source: Volume snapshot

and the 'Allocated' below is populated with the image name I gave
when creating the snapshot.
--
System administrators motto:
You're either invisible or in trouble.
- Unknown








Re: [Openstack] Launch an instance from the images tab

2016-08-01 Thread Eugen Block
I found out what it is. If I select a regular image (type "Image") I  
have it pre-selected, too. But if I select a snapshot (type  
"Snapshot") it is not pre-selected. In the "Launch instance" dialog I  
have to select the snapshot from the dropdown again. Can anyone  
confirm this?


Regards,
Eugen


Quoting Turbo Fredriksson <tu...@bayour.com>:


On Jul 29, 2016, at 10:50 AM, Eugen Block wrote:

I'm wondering if anyone else has noticed this, if I'm in the images  
view (Horizon) of a project and click on "Launch" to boot an  
instance from this selected image, the resulting dialog has no  
image pre-selected as it used to have, so I have to select the  
source again.


Works for me (on Mitaka).. I.e., I get the image preselected..
--
Geologists recently discovered that "earthquakes" are
nothing more than Bruce Schneier and Chuck Norris
communicating via a roundhouse kick-based cryptosystem.








[Openstack] Launch an instance from the images tab

2016-07-29 Thread Eugen Block

Hi,

I'm wondering if anyone else has noticed this, if I'm in the images  
view (Horizon) of a project and click on "Launch" to boot an instance  
from this selected image, the resulting dialog has no image  
pre-selected as it used to have, so I have to select the source again.  
I'm not sure if this has changed since or during Mitaka, hard to say.  
Or is there even an existing bug I couldn't find yet?


Regards,
Eugen




Re: [Openstack] [ceilometer] How to retrieve meters from rbd resources

2016-07-01 Thread Eugen Block
Thanks for the explanation, that actually helps a lot (and makes me  
realise that I did that for nothing ;-) )!
I'll be out of office for the next three weeks, so any attempt to  
contribute will start after my vacation ;-)


Regards,
Eugen

Quoting Mehdi Abaakouk <sil...@sileht.net>:


Hi Eugen,

On 2016-06-29 16:30, Eugen Block wrote:

This bug [1] describes the issue, but it seems to be a libvirt issue,
not ceilometer.

According to [2] it should be possible to retrieve those meters.
I followed ceph docs to install the rados-gateway, I integrated
keystone authentication, at least I don't get any errors regarding
authentication and swift command seems to work.
Then I added the meters described in [2] to the  
/etc/ceilometer/pipeline.yaml


Ceph RBD and Ceph RadosGW are different applications (both on top of  
the Ceph librados API).


So from Ceilometer's point of view, meters collected from each  
application are different meters; we currently have:


* instance disk IOPS meters, retrieved by Ceilometer through  
libvirt (and currently broken for rbd-backed instances)
* radosgw meters, retrieved by ceilometer-polling-agent by polling  
the Ceph RadosGW API directly.



I believe I have completed all required steps, but I still get the
libvirt errors in ceilometer (both kvm and xen hypervisor).


If you got this error, that means you did things right, but it's  
bugged due to [1], as you noticed.



Now I'm starting to wonder if rados-gw really is the right choice
here.


Yes, rados-gw meters seem unrelated to your use-case.


Has anybody figured out a way to retrieve rbd meters with ceilometer?


That depends on the use-case. If you want to get:

* instance disk IOPS: ceilometer-compute-agent and libvirt have to  
be improved to retrieve them (that means fixing [1]).
* the raw rbd meters (image size and utilization): a new  
ceilometer-polling-agent plugin needs to be written.
  (A poller that stats all rbd images directly is likely to be  
really, really slow.)


AFAICR, nobody is currently working on those two points, so  
contributions are welcome.


Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht






Re: [Openstack] Policy doesn't allow os_compute_api:servers:create:forced_host to be performed issue ...

2016-06-30 Thread Eugen Block

Hi,

it's your nova.policy file that enables this restriction.

control1:~ # grep -r forced_host /etc/
/etc/nova/policy.json:"compute:create:forced_host": "is_admin:True",
/etc/nova/policy.json:"os_compute_api:servers:create:forced_host":  
"rule:admin_api",


Are you trying to use the nova boot command with --availability-zone  
nova:YOUR_COMPUTE_NODE ?
I assume you either have the admin role in that demo project or do it  
as the admin user. If you simply don't specify the availability zone,  
it should work; nova will choose a fitting hypervisor if it finds one.
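For illustration, the restriction comes from rules like these in nova's policy.json (the same ones the grep above shows). Changing the value to e.g. "rule:admin_or_owner" would lift it for project members, though handing host pinning to regular users is generally discouraged:

```json
{
    "compute:create:forced_host": "is_admin:True",
    "os_compute_api:servers:create:forced_host": "rule:admin_api"
}
```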


Regards,
Eugen


Zitat von Jean-Pierre Ribeauville <jpribeauvi...@axway.com>:


Hi,

I sourced keystonerc_demo and then tried to start an instance for  
a demo project with the key pair feature, by using the nova boot command.


Then  I got this error :

ERROR (Forbidden): Policy doesn't allow  
os_compute_api:servers:create:forced_host to be performed.


Under admin, I'm able to create an instance (with a key pair  
belonging to admin).


Is there a way to start an instance via nova boot as the demo user?  
(i.e. by modifying some policy)



Thx for help.

Regards,

Jean-Pierre RIBEAUVILLE

+33 1 4717 2049

[axway_logo_tagline_87px]




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [ceilometer] How to retrieve meters from rbd resources

2016-06-29 Thread Eugen Block

Hi all,

I have a Mitaka environment where glance, nova and cinder use ceph  
(rbd) as storage backend, that works perfectly fine. Now I'm trying to  
get meters from my rbd pool with ceilometer, but libvirt fails to read  
information, the ceilometer-polling.log says


---cut here---
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk  
[req-823d46da-5a23-45bd-a0e9-596e7d4a9fc3 admin - - - -] Ignoring  
instance instance-02d7 (51d7bfdc-feec-4f13-ad0c-190dcfa2c62d) :  
this function is not supported by the connection driver:  
virDomainGetBlockInfo
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk  
Traceback (most recent call last):
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
 File  
"/usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/disk.py", line  
625, in get_samples
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
   instance,
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
 File  
"/usr/lib/python2.7/site-packages/ceilometer/compute/pollsters/disk.py", line  
567, in _populate_cache
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
   for disk, info in disk_info:
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
 File  
"/usr/lib/python2.7/site-packages/ceilometer/compute/virt/libvirt/inspector.py", line 215, in  
inspect_disk_info
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
   block_info = domain.blockInfo(device)
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
 File "/usr/lib64/python2.7/site-packages/libvirt.py", line 690, in  
blockInfo
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk   
   if ret is None: raise libvirtError ('virDomainGetBlockInfo()  
failed', dom=self)
2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk  
libvirtError: this function is not supported by the connection driver:  
virDomainGetBlockInfo

2016-06-29 15:45:19.711 29483 ERROR ceilometer.compute.pollsters.disk
---cut here---

This bug [1] describes the issue, but it seems to be a libvirt issue,  
not a ceilometer one.


According to [2] it should be possible to retrieve those meters.
I followed the Ceph docs to install the rados-gateway and integrated  
keystone authentication; at least I don't get any errors regarding  
authentication, and the swift command seems to work.

Then I added the meters described in [2] to the /etc/ceilometer/pipeline.yaml

---cut here---
control1:~ # cat /etc/ceilometer/pipeline.yaml
---
sources:
[...]
- name: radosgw_source
  interval: 600
  meters:
  - "radosgw.objects"
  - "radosgw.objects.size"
  - "radosgw.objects.containers"
  - "radosgw.api.request"
  - "radosgw.containers.objects"
  - "radosgw.containers.objects.size"
  sinks:
  - meter_sink
[...]
---cut here---

I believe I have completed all required steps, but I still get the  
libvirt errors in ceilometer (both kvm and xen hypervisor).
Now I'm starting to wonder if rados-gw really is the right choice  
here. Has anybody figured out a way to retrieve rbd meters with  
ceilometer?


Regards,
Eugen

[1] https://bugs.launchpad.net/ceilometer/+bug/1457440
[2]  
http://docs.openstack.org/admin-guide/telemetry-measurements.html#ceph-object-storage



--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Projects deals tricky job

2016-06-27 Thread Eugen Block
Thanks for the information, I'll definitely get to it. But right now  
I'm having some trouble with domain_id in the keystone_policy.json. I  
believe I'm also affected by this bug  
https://bugs.launchpad.net/python-openstackclient/+bug/1538804


I switched to the stable/liberty policy.v3cloudsample.json because the  
value "token.is_admin_project:True or domain_id:admin_domain_id"  
led to errors in authentication. Using "rule:admin_required and  
domain_id:default" works if I use Horizon (I see the output in  
keystone.log), but it fails to authenticate when using the CLI because,  
for some reason, "domain_id" is never read by the client.

As a workaround I changed the rule to

"cloud_admin": "rule:admin_required and (domain_id:default or  
user_domain_id:default)"


That seems to work fine, and I already tried it with user_id instead  
of domain_id, but I can't predict the consequences. What is the  
recommendation here until the CLI client is able to read domain_id?


Regards,
Eugen


Zitat von Timothy Symanczyk <timothy_symanc...@symantec.com>:


We implemented something here at Symantec that sounds very similar to what
you're both talking about. We have three levels of Admin - Cloud, Domain,
and Project. If you're interested in checking it out, we actually
presented on this topic in Austin.

The presentation : https://www.youtube.com/watch?v=v79kNddKbLc

All the referenced files can be found in our github here :
https://github.com/Symantec/Openstack_RBAC

Specifically you may want to check out our keystone policy file that
defines cloud_admin domain_admin and project_admin :
https://github.com/Symantec/Openstack_RBAC/blob/master/keystone/policy.json

Tim

On 6/20/16, 5:17 AM, "Eugen Block" <ebl...@nde.ag> wrote:


I believe you are trying to accomplish the same configuration as I am,
so I think domains are the answer. You can divide your cloud into
different domains and grant admin rights to specific users, which are
not authorized to see the other domains. Although I'm still not sure
if I did it correctly and it's not fully resolved yet, here is a
thread I started a few days ago:

http://lists.openstack.org/pipermail/openstack/2016-June/016454.html

Regards,
Eugen

Zitat von Venkatesh Kotipalli <openstackvenkat...@gmail.com>:


Hi Folks,

Is it possible to create a project admin in OpenStack?

As we identified, whenever we create a project admin, it can see the
entire cloud (like other users and all services, with complete admin
access), but I want it to see only that particular project's users and
admins, and to control all the services within it.

Guys please help me this part. I am really very confused.

Regards,
Venkatesh.k




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] nova service-list error

2016-06-24 Thread Eugen Block

You need a domain in your environment variables, e.g.

export OS_USER_DOMAIN_NAME=
or
export OS_USER_DOMAIN_ID=

Your environment script should provide

 OS_PROJECT_DOMAIN_NAME
 OS_USER_DOMAIN_NAME
 OS_PROJECT_NAME
 OS_USERNAME
 OS_PASSWORD
 OS_AUTH_URL=http://controller:35357/v3 (or OS_AUTH_URL=http://controller:5000/v3)

 OS_IDENTITY_API_VERSION=3
 OS_IMAGE_API_VERSION=2
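Put together, a minimal v3 openrc sketch could look like this (every value below is a placeholder, not taken from your deployment):

```shell
# Hypothetical openrc sketch - all values are placeholders.
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```

Source it (e.g. `. openrc`) before running nova or openstack commands; without the two domain variables, v3 auth fails with exactly the "Expecting to find domain in project" error below.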

Zitat von venkat boggarapu <venkat.boggar...@gmail.com>:


HI All,

When i am trying to run the command nova service-list i am getting the
below error


ERROR (BadRequest): Expecting to find domain in project - the server could
not comply with the request since it is either malformed or otherwise
incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID:
req-78bdca0c-6dda-4f4d-a571-eee336cec053)

can someone help regarding this.


With regards
venkat....




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Networking issues with neutron-linuxbridge-agent

2016-06-24 Thread Eugen Block
If you are using the Neutron API for security groups, then I think  
you need firewall_driver=nova.virt.firewall.NoopFirewallDriver in  
nova.conf - that's what devstack does.


I think this was really the solution! I tried to provoke the  
interruption in three different ways that broke the connection before,  
but I couldn't! I hope this is it, I'll report if the interruptions  
return, but so far thank you very much!!!
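For reference, the change we're talking about boils down to this nova.conf fragment on the compute node (a sketch; option names as they were around Mitaka), followed by a restart of nova-compute:

```ini
[DEFAULT]
# Hand security-group filtering entirely to neutron; nova must not
# program its own iptables rules on top of neutron's.
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
```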



Zitat von Darragh O'Reilly <dara2002-openst...@yahoo.com>:


On Friday, 24 June 2016, 9:15, Eugen Block <ebl...@nde.ag> wrote:

Make sure nova is using the noop driver.



I'm trying to use ceilometer; in that case the docs say to use the
messagingv2 driver, so that's what I did. And until two weeks ago it
worked just fine; I had no networking issues.




Your iptables output is showing entries from both nova-compute and  
neutron. If you are using the Neutron API for security groups, then  
I think you need  
firewall_driver=nova.virt.firewall.NoopFirewallDriver in nova.conf -  
that's what devstack does.




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Networking issues with neutron-linuxbridge-agent

2016-06-24 Thread Eugen Block

Make sure nova is using the noop driver.


I'm trying to use ceilometer; in that case the docs say to use the  
messagingv2 driver, so that's what I did. And until two weeks ago it  
worked just fine; I had no networking issues.



double check your security groups config


The security groups also seem to be fine; my colleague works via ssh  
on those instances. And the interruption can be caused by deleting an  
instance in a different project with its own security groups; it just  
has to run on the same compute node.



Zitat von Darragh O'Reilly <dara2002-openst...@yahoo.com>:

double check your security groups config. Make sure nova is using  
the noop driver.




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid

2016-06-23 Thread Eugen Block

"rbd"?


It's a different storage backend, something like a network RAID. But  
don't mind it right now ;-)


But even after disabling them, they're still show as  
"status=disabled,state=up"


They are running because you didn't stop the services, you just  
disabled them. You could stop them for now if you don't intend to use  
cinder until you get an instance up and running, but I would take care  
of cinder after that. It doesn't affect you while trying to boot an  
instance on local storage, because cinder is not required for that.


From your latest logs I assume that you are still trying to boot from  
volume; I recommend ignoring cinder for now and focusing on launching an  
instance at all. Have you fixed your glance issue? That is  
required, otherwise it won't work at all.



Zitat von Turbo Fredriksson <tu...@bayour.com>:


On Jun 23, 2016, at 12:26 PM, Eugen Block wrote:

/etc/cinder/cinder.conf:enabled_backends = rbd--> that's what I  
use currently


"rbd"?

I'm not sure if it would work, it's been a while since I used local  
storage, but if you just comment the enabled_backend option out and  
restart cinder services, I believe it would create local volumes.


Shouldn't it be enough just to "disable" those services/backends?

I guess I have to, because just commenting that out didn't help, they still
show as enabled and running.

But even after disabling them, they're still show as  
"status=disabled,state=up"

with a "cinder service-list".. ?

Ok, that's different! I'm not running Glance on my Compute, only  
on my Control.


Glance is not supposed to run on a compute node, it runs on a control node.


Ok, good! I thought I missed something fundamental.


What's the output of "openstack endpoint list | grep glance"?


| 57b10556b7bf47eaa019c603a0f6b34f | europe-london | glance | image  
| True | public   | http://10.0.4.1:9292
| 8672f6de1673470d93ab6ccee1c1a2bb | europe-london | glance | image  
| True | internal | http://10.0.4.1:9292
| e45c3e83fe744e7db949cdd89dfe5654 | europe-london | glance | image  
| True | admin| http://10.0.4.1:9292


That's the Control node..


[waited a little while]


How long did you wait?


10-15 seconds perhaps. At least less than (half?) a minute..

 "This section describes how to install and configure the Image  
service, code-named glance, on the controller node."


It is not obvious from that that that (!! :) should only be done on the
Controller! It just says "do this on the controller". It does not make it
clear that you shouldn't do something on the compute as well.

"This section describes how to install and configure the Compute  
service, code-named nova, on the controller node."
"This section describes how to install and configure the Compute  
service on a compute node."


Neither of which distinguishes the different parts - what if I
have/want a separate compute and control node? It does not
make things obvious!


And that's why I have a problem with HOWTOs! They _assume_ too much.
And a _BAD_ HOWTO (which all of them on Openstack are!) doesn't even
attempt to explain the different options you have, so if you deviate
even the very slightest, you're f**ked!

There's a _HUMONGOUS_ difference between a "HOWTO" and "Documentation"!

Timeout problem? Make sure that nothing blocks the requests  
(a proxy?). What response do you get if you execute

control1:~ # curl http://:9292


I was doing that ON the Control. Worked just fine.

And The Control and Compute is on the same switch.
--
Det är när man känner doften av sin egen avföring
som man börjar undra vem man egentligen är.
- Arne Anka


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ceph RadosGW and Object Storage meters: storage.objects.incoming|outgoing.bytes

2016-06-23 Thread Eugen Block
I'm trying to accomplish the same, I use Ceph as storage backend and  
get errors in ceilometer-polling.log like


Cannot inspect data of MemoryUsagePollster for  
7307de53-52a4-4900-9c04-d5fb6c787159, non-fatal reason: Failed to  
inspect memory usage of instance 

Re: [Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid

2016-06-23 Thread Eugen Block
lue, traceback)\n  File  
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line  
2244, in _build_resources\nreason=six.text_type(exc))\n',  
u'created': u'2016-06-22T21:27:28Z'} |

- s n i p -

Ok, that's different! I'm not running Glance on my Compute, only on  
my Control.


Which of these should I run on the Compute and which one on the Control?

The documentation (one of many I follow:  
http://docs.openstack.org/draft/install-guide-debconf/common/get_started_image_service.html) doesn't say. Only which ones to  
install

on the Control.

- s n i p -
bladeA03b:/etc/nova# apt-cache search glance | grep ^glance
glance - OpenStack Image Registry and Delivery Service - Daemons
glance-api - OpenStack Image Registry and Delivery Service - API server
glance-common - OpenStack Image Registry and Delivery Service - common files
glance-glare - OpenStack Artifacts - API server
glance-registry - OpenStack Image Registry and Delivery Service -  
registry server

- s n i p -

Currently, I have all of them only on the Control..


Concerning the flavor, I think the flavor you use should have at least the
same disk size as the image's disk.


Ok, I'll keep that in mind, thanx.


Now, this might be a stupid question, but it actually only occurred  
to me just now when I was looking at that missing net error. I haven't  
really set up my network, just "winged" it. I'm pretty sure it's not  
even close to working (I need to do more studying on the matter - I  
still don't have a clue about how things are supposed to work in/on  
the OpenStack side of things).

I've postponed it because I desperately need ANY success story - creating an
instance, even if it won't technically work would help a lot in  
that. I figured
it should at least TRY to start.. And I _ASSUME_ (!!) that as long as  
the Control
can talk to the Compute and "tell" it what to do (such as "attach  
this volume/image"),
it should at least be able to be created. I'm guessing the  
networking (Neutron)
in OS is for the _instance_, not for administration etc. Or, did I  
misunderstand (the little I've read and actually understood about it :)?
--
Att tänka innan man talar, är som att torka sig i röven
innan man skiter.
- Arne Anka


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] (no subject)

2016-06-23 Thread Eugen Block

Before you can execute any administrative tasks, you have to authenticate.
So according to the docs I use  
(http://docs.openstack.org/mitaka/install-guide-obs/keystone-services.html)  
you need some credentials in your environment, at least


OS_TOKEN (only for initializing the identity service)
OS_URL
OS_IDENTITY_API_VERSION

The example looks like this:

export OS_TOKEN=294a4c8a8a475f9b9836
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

The token is created in a previous step, so make sure you have  
followed the guide you are using, otherwise you won't get very far. ;-)
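For illustration, a sketch of how that bootstrap token is typically generated and exported (the URL is the example value from the guide; the same token must also be configured as admin_token in keystone.conf, which this sketch does not do):

```shell
# Generate a random admin token and set the bootstrap credentials (sketch).
export OS_TOKEN=$(openssl rand -hex 10)
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
```

Once the identity service is initialized, you stop using the token and switch to regular password-based credentials, as in the error output below.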


Regards,
Eugen


Zitat von venkat boggarapu <venkat.boggar...@gmail.com>:


Hi All,

We are getting the below error while installing glance service in our
environment.


[root@controller ~]# openstack user create --domain default
--password-prompt glance
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name.

can some please help regarding this issue.


With regards
venkat




--
Eugen Block voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG  fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail  : ebl...@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
  Sitz und Registergericht: Hamburg, HRB 90934
  Vorstand: Jens-U. Mozdzen
   USt-IdNr. DE 814 013 983


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

