Re: [Openstack] Openstack+KVM+overcommit, VM priority

2017-01-12 Thread Ivan Derbenev
What are the facilities for CPU? That's the initial question - how can I do 
this, if it's possible at all?


Best Regards
Tech-corps IT Engineer
Ivan Derbenev
Phone: +79633431774

-Original Message-
From: James Downs [mailto:e...@egon.cc] 
Sent: Thursday, January 12, 2017 12:56 AM
To: Ivan Derbenev <ivan.derbe...@tech-corps.com>
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Openstack+KVM+overcommit, VM priority

On Wed, Jan 11, 2017 at 09:34:32PM +, Ivan Derbenev wrote:

> if both vms start using all 64gb memory, both of them start using swap

Don't overcommit RAM.

> So, the question is - is it possible to prioritize the 1st vm above the 2nd? so the 
> second one will fail before the 1st, to leave maximum possible performance to 
> the most important one?

Do you mean CPU prioritization? There are facilities to allow one VM or another 
to have CPU priority, but what you're describing - OOM-killing one VM whenever a 
high-priority VM wants RAM - doesn't exist, AFAIK.

Cheers,
-j

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread Ivan Derbenev
no, that's not my question.

I'm already overcommitting, but now I need to prioritize one instance above the 
others in terms of performance.


From: Eugene Nikanorov <enikano...@mirantis.com>
Sent: January 12, 2017, 1:13:27 AM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: Re: [Openstack] Openstack+KVM+overcommit, VM priority

Ivan,

See if this provides an answer: 
https://ask.openstack.org/en/question/55307/overcommitting-value-in-novaconf/

Regards,
Eugene.
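
For reference, the overcommit ratios that thread discusses live in nova.conf on 
the scheduler/compute side; an illustrative sketch (values are examples, not 
recommendations):

```
# /etc/nova/nova.conf
[DEFAULT]
cpu_allocation_ratio = 16.0   # up to 16 vCPUs scheduled per physical core
ram_allocation_ratio = 1.5    # up to 1.5x physical RAM scheduled
```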

On Wed, Jan 11, 2017 at 1:55 PM, James Downs 
<e...@egon.cc<mailto:e...@egon.cc>> wrote:
On Wed, Jan 11, 2017 at 09:34:32PM +0000, Ivan Derbenev wrote:

> if both vms start using all 64gb memory, both of them start using swap

Don't overcommit RAM.

> So, the question is - is it possible to prioritize the 1st vm above the 2nd? so the 
> second one will fail before the 1st, to leave maximum possible performance to 
> the most important one?

Do you mean CPU prioritization? There are facilities to allow one VM or
another to have CPU priority, but what you're describing - OOM-killing one VM
whenever a high-priority VM wants RAM - doesn't exist, AFAIK.

Cheers,
-j



[Openstack] Openstack+KVM+overcommit, VM priority

2017-01-11 Thread Ivan Derbenev
Hello, guys!

Imagine we have a compute node with the KVM hypervisor installed.

It has 64GB of RAM and a quad-core processor.


We create 2 machines in nova on this host - both with 64GB and 4 vCPUs.

If both VMs start using all 64GB of memory, both of them start using swap.

Same for the CPU - they use it equally.


So, the question is - is it possible to prioritize the 1st VM above the 2nd, so 
that the second one fails before the 1st, leaving maximum possible performance 
to the most important one?

like production and secondary services running on the same node.
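
For the CPU half of this, libvirt-based hypervisors expose relative CPU weights 
through flavor extra specs, so a production flavor can be given a larger share 
than a secondary one; a sketch, assuming a Kilo-era novaclient (flavor names are 
examples):

```
# quota:cpu_shares maps to the libvirt/cgroups cpu.shares weight.
# Flavor names are examples; the ratio only matters under CPU contention.
nova flavor-key m1.production set quota:cpu_shares=2048
nova flavor-key m1.secondary  set quota:cpu_shares=512
```

Note this arbitrates CPU time only; there is no equivalent knob that sacrifices 
one guest's RAM for another.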


[Openstack] allow user to run single api call

2016-11-17 Thread Ivan Derbenev
Hello all!

Is it possible to give a user (or role) the ability to run a specific API call? 
It's a monitoring-only user, and I want to give it permissions for all 
%servicename% %itemname%-list calls.
Changing specific policies in policy.json seems to work, but not for things 
like nova/cinder service-list. 
So I can run service-list only when the user is admin (or after I change 
context_is_admin in policy.json).
Can I somehow allow a user to run ONLY nova service-list?
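
One thing that may work, assuming a Kilo/Liberty-era nova policy.json (the 
exact rule name varies by release, so check the file shipped with your 
version): scope the services rule to a dedicated role instead of admin-only, 
e.g.:

```
{
    "compute_extension:services": "rule:admin_api or role:monitoring"
}
```

Then assign the "monitoring" role (a name chosen here for illustration) to the 
monitoring user, and repeat the same pattern in cinder's policy.json for its 
service-list rule.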


[Openstack] instances with nfs-backed cinder volumes hang after network failure

2016-10-26 Thread Ivan Derbenev
Hello guys!
We currently have an installation of OpenStack Liberty+KVM+Ubuntu 14.04, 
mostly default.
Cinder and Nova are backed by qcow2 over NFS; cinder volumes reside on the 
same servers running nova-compute.
The problem is that we have power issues in our building, and we can't always 
shut down servers properly when this happens.
So when the power is down and the network is down, all instances that have 
cinder volumes attached hang.
The NFS share simply becomes unavailable; even if the share is mounted locally 
(from 192.168.1.29 onto 192.168.1.29, for example), I can't even ls there, it 
hangs.
And it never comes back up until the server reboots, even after the network is 
back.
I can't even restart the instance from KVM locally; it says the device is busy.
So every instance with a cinder volume mounted hangs and won't come up until 
the hypervisor is rebooted.

Is there a way to fix this issue?
At least to make instances with local cinder volumes (on the same host where 
nova runs the instance) not hang?
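
One mitigation that may help, at the cost of NFS data-integrity guarantees, is 
mounting the shares soft with a short timeout so that I/O returns an error 
instead of blocking forever. With the cinder NFS driver this can usually be set 
in cinder.conf (values below are illustrative, and soft mounts can corrupt 
in-flight writes, so test carefully):

```
# /etc/cinder/cinder.conf, in the NFS backend section
nfs_mount_options = soft,timeo=30,retrans=3
```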


[Openstack] automatic role assignment

2016-07-11 Thread Ivan Derbenev
Hello!
We are using OpenStack Liberty + Ubuntu LTS.
I configured keystone to use LDAP as the identity backend and SQL as the 
assignment backend.
Is there an automatic, built-in way to assign roles to all members of a 
specific LDAP group?
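
Keystone supports assigning a role to a group rather than to individual users, 
and with an LDAP identity backend that group can be the LDAP group itself, so 
membership changes in LDAP propagate automatically. A sketch with the OpenStack 
CLI (group, project, and role names are examples):

```
# Everyone in the LDAP group "devs" gets the _member_ role on project "demo".
openstack role add --group devs --project demo _member_
```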


Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



Re: [Openstack] resize lvm-backed instance

2016-02-17 Thread Ivan Derbenev
bump

Regards, 
IT engineer
Farheap, Russia
Ivan Derbenev


Re: [Openstack] resize lvm-backed instance

2016-02-14 Thread Ivan Derbenev
Wow, I completely missed this answer, and now I have the same problem.
I looked at the code and found that LVM migration is not implemented; however, 
I don't want to migrate, I want to resize.
Here's the log I'm getting:

2016-02-14 14:38:45.016 25601 INFO nova.compute.resource_tracker 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Total usable vcpus: 8, 
total allocated vcpus: 13
2016-02-14 14:38:45.017 25601 INFO nova.compute.resource_tracker 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Final resource view: 
name=openstack04 phys_ram=32175MB used_ram=32256MB phys_disk=837GB 
used_disk=104GB total_vcpus=8 used_vcpus=13 pci_stats=None
2016-02-14 14:38:45.060 25601 INFO nova.compute.resource_tracker 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Compute_service record 
updated for openstack04:openstack04
2016-02-14 14:38:45.061 25601 DEBUG oslo_concurrency.lockutils 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Lock "compute_resources" 
released by "nova.compute.resource_tracker._update_available_resource" :: held 
0.197s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:265
2016-02-14 14:38:51.528 25601 DEBUG oslo_service.periodic_task 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:213
2016-02-14 14:38:51.529 25601 DEBUG nova.compute.manager 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Starting heal instance 
info cache _heal_instance_info_cache 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:5478
2016-02-14 14:38:51.612 25601 DEBUG oslo_concurrency.lockutils 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Acquired semaphore 
"refresh_cache-3e2836b6-602a-44de-a835-9d1936a53785" lock 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:198
2016-02-14 14:38:51.613 25601 DEBUG nova.objects.instance 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Lazy-loading `flavor' on 
Instance uuid 3e2836b6-602a-44de-a835-9d1936a53785 obj_load_attr 
/usr/lib/python2.7/dist-packages/nova/objects/instance.py:860
2016-02-14 14:38:51.754 25601 DEBUG nova.network.base_api 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] [instance: 
3e2836b6-602a-44de-a835-9d1936a53785] Updating instance_info_cache with 
network_info: [VIF({'profile': None, 'ovs_interfaceid': None, 
'preserve_on_delete': False, 'network': Network({'bridge': u'br1004', 
'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 
u'fixed', 'floating_ips': [], 'address': u'10.1.10.49'})], 'version': 4, 
'meta': {u'dhcp_server': u'10.1.10.3'}, 'dns': [IP({'meta': {}, 'version': 4, 
'type': u'dns', 'address': u'8.8.4.4'}), IP({'meta': {}, 'version': 4, 'type': 
u'dns', 'address': u'10.1.2.100'})], 'routes': [], 'cidr': u'10.1.10.0/24', 
'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': 
u'10.1.10.3'})}), Subnet({'ips': [], 'version': None, 'meta': {u'dhcp_server': 
u'10.1.10.3'}, 'dns': [], 'routes': [], 'cidr': None, 'gateway': IP({'meta': 
{}, 'version': None, 'type': u'gateway', 'address': None})})], 'meta': 
{u'multi_host': True, u'should_create_bridge': True, u'should_create_vlan': 
True, u'bridge_interface': u'eth1', u'tenant_id': 
u'2b1b8aa01abf4db8ab9aa8c22cf861f0', u'vlan': 1004}, 'id': 
u'9540bfd0-e620-4302-bb1e-dfe69ad9bcda', 'label': u'tcdefault-1004'}), 
'devname': None, 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 
'details': {}, 'address': u'fa:16:3e:eb:41:69', 'active': False, 'type': 
u'bridge', 'id': u'd9171467-de05-4a99-aaab-4d0d57343658', 'qbg_params': None})] 
update_instance_cache_with_nw_info 
/usr/lib/python2.7/dist-packages/nova/network/base_api.py:43
2016-02-14 14:38:51.769 25601 DEBUG oslo_concurrency.lockutils 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] Releasing semaphore 
"refresh_cache-3e2836b6-602a-44de-a835-9d1936a53785" lock 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:211
2016-02-14 14:38:51.770 25601 DEBUG nova.compute.manager 
[req-73be063c-747a-4933-bef3-31a19c10f278 - - - - -] [instance: 
3e2836b6-602a-44de-a835-9d1936a53785] Updated the network info_cache for 
instance _heal_instance_info_cache 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:5539
2016-02-14 14:38:53.337 25601 DEBUG oslo_concurrency.lockutils 
[req-8a8069fb-2df1-4683-9e66-e48fbe446c94 Ivan Derbenev 
2b1b8aa01abf4db8ab9aa8c22cf861f0 - - -] Acquired semaphore 
"refresh_cache-3bc4ba06-77bc-4b3c-a857-dd1a2f74ca24" lock 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:198
2016-02-14 14:38:53.387 25601 DEBUG nova.network.base_api 
[req-8a8069fb-2df1-4683-9e66-e48fbe446c94 Ivan Derbenev 
2b1b8aa01abf4db8ab9aa8c22cf861f0 - - -] [instance: 
3bc4ba06-77bc-4b3c-a857-dd1a2f74ca24] Updating instance_info_cache with 
network_info: [VIF({'profile': None, 'ovs_interfaceid': None, 
'prese

Re: [Openstack] [Nova] [Glance] Errors Produced by Nova and Glance Interaction

2015-12-04 Thread Ivan Derbenev
Looks like this is your issue
https://bugs.launchpad.net/glance/+bug/1476770

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Richard Raseley [mailto:rich...@raseley.com]
Sent: Friday, December 4, 2015 5:50 AM
To: openstack@lists.openstack.org
Subject: [Openstack] [Nova] [Glance] Errors Produced by Nova and Glance 
Interaction

I am running OpenStack Juno installed from RDO packages, all version 
2015.1.1-1.el7, on CentOS 7.

I am experiencing an issue whenever Nova has to interact with Glance wherein it 
is unable to do anything other than retrieve the list of available images. Here 
is an example:

```

(openstack)➜  openstack  glance image-list
+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 70bcde32-dcfc-47f6-8596-9d968fae2011 | cirros-0.3.4-x86_64 |
+--------------------------------------+---------------------+

(openstack)➜  openstack  glance image-show 70bcde32-dcfc-47f6-8596-9d968fae2011
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| architecture     | x86_64                               |
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2015-12-03T15:10:22Z                 |
| description      | CirrOS 0.3.4 x86_64                  |
| disk_format      | qcow2                                |
| id               | 70bcde32-dcfc-47f6-8596-9d968fae2011 |
| min_disk         | 1                                    |
| min_ram          | 512                                  |
| name             | cirros-0.3.4-x86_64                  |
| owner            | bd712506c2ba41f2b1d661f97c927f8a     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-12-03T15:10:26Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

(openstack)➜  openstack  nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 70bcde32-dcfc-47f6-8596-9d968fae2011 | cirros-0.3.4-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

(openstack)➜  openstack  nova image-show 70bcde32-dcfc-47f6-8596-9d968fae2011
ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-f27a5253-dc65-4689-9d89-f80ddc873e37)

```

This makes it impossible to launch instances. Whenever this occurs, the Glance 
API logs look clean and I see a 200 response to each request, but the Nova API 
logs show an 'AttributeError' (http://paste.openstack.org/show/480853/) for 
each request. I had originally thought this was related to this bug 
(https://bugs.launchpad.net/glance/+bug/1476770), but I am still experiencing 
it with keystonemiddleware 1.5.3, requests 2.8.1, and urllib3 1.12.

Any assistance would be greatly appreciated, as this is currently a blocking 
issue for me.

Regards,

Richard



[Openstack] change IP of instance

2015-12-04 Thread Ivan Derbenev
Hello!
OpenStack Kilo + Ubuntu 14.04
nova-network (not Neutron), VLAN DHCP manager

Is there a way to change the IP of an instance without database hacking?

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] public snapshots via horizon/nova

2015-11-23 Thread Ivan Derbenev
Hello!
Is there a way to make a public snapshot via Horizon? Kilo, Ubuntu 14.04.

Not by changing metadata on an already-created snapshot, but making it public 
immediately after creation?

Or at least, how can I create a public snapshot via
#nova image-create
(again, not by changing the property in glance, but via nova)?
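
As far as I know there is no nova-side flag for this, so the closest thing is 
scripting the two steps together (image and server names are examples; on the 
Kilo glance v1 CLI the property is --is-public, on v2 it is --visibility):

```
nova image-create --poll myserver myserver-snap
glance image-update --is-public True myserver-snap
```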

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] single dnsmasq instance in cloud

2015-11-05 Thread Ivan Derbenev
Hello, guys!
We have an installation of Kilo + nova-network.

Is it possible to have only one dnsmasq DHCP server, on the controller node, 
instead of one on every node?
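
Running dnsmasq only on the controller corresponds to disabling nova-network's 
multi-host mode; a hedged sketch (multi_host is also a per-network attribute 
recorded when the network is created, so existing networks may need to be 
recreated):

```
# /etc/nova/nova.conf
[DEFAULT]
multi_host = False
```

The tradeoff is that the controller becomes a single point of failure for DHCP 
and gateway traffic.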

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] no dhcp record for new vm

2015-11-05 Thread Ivan Derbenev
Hello guys, we have the newest Kilo release (2015.1.2) with nova-network, and 
we've got a problem with new VMs - they can't get an IP until nova-network is 
restarted.

The /var/lib/nova/networks/nova-br1004.conf file isn't reloaded, so there is no 
new record.
This happens only on compute nodes; on the controller, nova-br1004.conf is 
updated as soon as the instance is booted.

I tried to debug the code to find where this should happen and why it doesn't, 
but got no result.

Help is deeply appreciated, thanks

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] work from home 19.10.2015

2015-10-19 Thread Ivan Derbenev
Due to plumbing work in the building, I have to work from home for the first 
half of the day.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] resize lvm-backed instance

2015-10-13 Thread Ivan Derbenev
Hello!
Is it possible to resize (and possibly migrate) LVM-backed instances in Kilo?
When I try to do so, I get an error:
Instance rollback performed due to: Migration pre-check error: Migration is not 
supported for LVM backed instances

What's the workaround?
Yes, I can go into the DB manually every time, but I don't like doing that.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



Re: [Openstack] Problems with OpenStack and LDAP

2015-08-17 Thread Ivan Derbenev
You need to create the service users in LDAP.
The ADMIN_TOKEN should work for assigning roles.

Regards, 
IT engineer
Farheap, Russia
Ivan Derbenev

-Original Message-
From: Marc Pape [mailto:marc.p...@gmail.com] 
Sent: Monday, August 17, 2015 10:32 AM
To: openstack@lists.openstack.org
Subject: [Openstack] Problems with OpenStack and LDAP

Hello everybody,

I've got some problems with our OpenStack (Juno) and the integrated Identity 
Service over LDAP.
The LDAP connection is read-only, so I configured the [identity], [ldap] and 
[assignment] parts in keystone.conf.
The identity part uses driver =
keystone.identity.backends.ldap.Identity and the assignment driver = 
keystone.assignment.backends.sql.Assignment.
Our goal is user authentication via LDAP and project assignment in the 
internal SQL database. It would be great if the OpenStack service users were 
also stored in SQL, but currently they also live in LDAP.
After restarting the keystone service, authentication via LDAP is possible. The 
user gets the message that no projects are assigned to him.
Now there are two problems: how can you log in as admin to assign projects, and 
keystone says it can't find the service users like ceilometer, neutron and so 
on.
I've followed the instructions on docs.openstack.org for identity management, 
but I didn't find any notes about these problems.

Many greetings and thanks for a possible answer

Marc



Re: [Openstack] New instances booting time

2015-08-14 Thread Ivan Derbenev
Well, thanks. 
Actually, I wanted to know whether this behavior is normal for every 
installation, or whether I have misconfigured something.

Regards, 
IT engineer
Farheap, Russia
Ivan Derbenev

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Friday, August 14, 2015 5:56 PM
To: Jay Pipes jaypi...@gmail.com
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] New instances booting time

On Fri, Aug 14, 2015 at 03:34:09PM +0100, Daniel P. Berrange wrote:
 On Fri, Aug 14, 2015 at 03:10:13PM +0100, Daniel P. Berrange wrote:
  On Fri, Aug 14, 2015 at 09:55:42AM -0400, Jay Pipes wrote:
   On 08/13/2015 11:37 PM, Ivan Derbenev wrote:
   *From:*Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
   *Sent:* Wednesday, August 5, 2015 1:21 PM
   *To:* openstack@lists.openstack.org
   *Subject:* [Openstack] New instances booting time
   
   Hello, guys, I have a question
   
   We now have OS Kilo + KVM+ Ubuntu 14.04
   
   Nova-compute.conf:
   
   [libvirt]
   
   virt_type=kvm
   
   images_type = lvm
   
   images_volume_group =openstack-controller01-ky01-vg
   
   volume_clear = none
   
   the problem is when nova boots a new instance it’s SUPER slow
   
   exactly this step:
   
    Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
    qemu-img convert -O raw
    /var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e
    /dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk
   
   Well, I understand what this step is doing – it’s copying raw image to 
   lvm.
   
   How can we speed it up?
   
   I don’t wanna have instance from 100GB image booted for 4 hours
   
   Don't use base images that are 100G in size. Quite simple, really.
  
  Presumably the actual image is only a few 100 MB in size, but the 
  virtual qcow2 size is 100 GB ?
  
  I guess qemu-img probably has to still write out 100 GB of zeros as 
  it can't assume that the LVM target is pre-zeroed. I do wonder 
  though if there's a way to optimize this so that qemu-img only has 
  to write out the 100 MB of actual data, and not al the zeros.
  
  Perhaps there's scope for a flag to qemu-img to tell it to seek in 
  the target when it sees holes in the source image. If OpenStack then 
  used sparse LVM images, there would be no need to rwite out 100's of 
  GB of zeros.
 
 Speaking with the KVM block layer maintainers, the behaviour seen is 
 not entirely expected. qemu-img wants to ensure no existing data is 
 left behind as that could cause data corruption if the original qcow2 
 file had dropped empty sectors. So it has to make sure it can write 
 out the 100 GB even if that is mostly zeros.
 
 That said, it has support for using discard, and it will use that in 
 preference to writing zeros, which should be really fast. For it to 
 work with block devices though, it seems that we need to disable the 
 I/O cache by passing '-t none' to qemu-img when doing this conversion. 
 This also assumes the underlying storage that the user has configured 
 behind their LVM volume supports discard which may not always be the 
 case.
 
 So it would be interesting to know if changing
 
 $ qemu-img convert -O raw \
 /var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e \
 /dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk
 
 to
 
 $ qemu-img convert -O raw -t none \
 /var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e \
 /dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk
 
 makes any significant difference to the speed.

I filed a bug to track this possible enhancement

  https://bugs.launchpad.net/nova/+bug/1484992

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [Openstack] New instances booting time

2015-08-14 Thread Ivan Derbenev
Well, I have a 4GB qcow2 image,
but it contains a 100GB filesystem,
and the remaining 96GB are still copied.

Regards, 
IT engineer
Farheap, Russia
Ivan Derbenev
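
The underlying point is the gap between a file's apparent size and its 
allocated blocks: a converter that preserves holes only has to move the 
allocated bytes. A small, self-contained Python demonstration of that gap 
(needs a filesystem with sparse-file support, which most Linux filesystems 
have):

```python
import os
import tempfile

# Create a file with a large apparent size but no data blocks written,
# analogous to a qcow2 whose virtual disk is far larger than its payload.
fd, path = tempfile.mkstemp()
os.close(fd)
os.truncate(path, 100 * 1024 * 1024)  # 100 MB apparent size, all holes

st = os.stat(path)
print("apparent bytes: ", st.st_size)          # what a naive copy would write
print("allocated bytes:", st.st_blocks * 512)  # what the filesystem stores

os.remove(path)
```

On a sparse-aware filesystem the allocated size stays near zero, which is why 
a hole-preserving (or discard-based) conversion can be dramatically faster than 
writing every zero byte.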

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, August 14, 2015 4:56 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] New instances booting time

On 08/13/2015 11:37 PM, Ivan Derbenev wrote:
 *From:*Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
 *Sent:* Wednesday, August 5, 2015 1:21 PM
 *To:* openstack@lists.openstack.org
 *Subject:* [Openstack] New instances booting time

 Hello, guys, I have a question

 We now have OS Kilo + KVM+ Ubuntu 14.04

 Nova-compute.conf:

 [libvirt]

 virt_type=kvm

 images_type = lvm

 images_volume_group =openstack-controller01-ky01-vg

 volume_clear = none

 the problem is when nova boots a new instance it's SUPER slow

 exactly this step:

 Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
 qemu-img convert -O raw
 /var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e
 /dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk

 Well, I understand what this step is doing - it's copying raw image to lvm.

 How can we speed it up?

 I don't wanna have instance from 100GB image booted for 4 hours

Don't use base images that are 100G in size. Quite simple, really.

Best,
-jay



Re: [Openstack] New instances booting time

2015-08-13 Thread Ivan Derbenev
bump

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: Wednesday, August 5, 2015 1:21 PM
To: openstack@lists.openstack.org
Subject: [Openstack] New instances booting time

Hello, guys, I have a question
We now have OS Kilo + KVM+ Ubuntu 14.04

Nova-compute.conf:
[libvirt]
virt_type=kvm
images_type = lvm
images_volume_group =openstack-controller01-ky01-vg
volume_clear = none

the problem is when nova boots a new instance it's SUPER slow
exactly this step:
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf qemu-img 
convert -O raw 
/var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e 
/dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk

Well, I understand what this step is doing - it's copying raw image to lvm.

How can we speed it up?
I don't wanna have instance from 100GB image booted for 4 hours

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] cinder and nova volumes at the same place

2015-08-07 Thread Ivan Derbenev
Hello!
We have an installation of OS Kilo + Ubuntu 14.04 + KVM.
Now we use the LVM backend for both Nova and Cinder.
I want to switch to a file backend and store VMs as qcow2 files on an ext4 filesystem.
But is there a way to store Cinder volumes in the same way?
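Cinder has no plain "qcow2 file on ext4" backend, but the closest stock option - a sketch under the assumption that the reference NFS driver shipped with Kilo is acceptable - keeps each volume as a file on a mounted share, which can even be exported from the same host:

```ini
# /etc/cinder/cinder.conf - file-backed volumes via the reference NFS driver
[DEFAULT]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
# create volume files sparse instead of preallocating them
nfs_sparsed_volumes = True
```

Here /etc/cinder/nfs_shares would contain one export per line, e.g. 127.0.0.1:/srv/cinder (a hypothetical path).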

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] New instances booting time

2015-08-05 Thread Ivan Derbenev
Hello, guys, I have a question
We now have OS Kilo + KVM+ Ubuntu 14.04

Nova-compute.conf:
[libvirt]
virt_type=kvm
images_type = lvm
images_volume_group =openstack-controller01-ky01-vg
volume_clear = none

The problem is that when Nova boots a new instance it's SUPER slow,
specifically at this step:
Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf qemu-img 
convert -O raw 
/var/lib/nova/instances/_base/999f7fff2521e4a7243c9e1d21599fd64a19b42e 
/dev/openstack-controller01-ky01-vg/5f831046-435c-4636-8b71-a662327b608c_disk

Well, I understand what this step is doing - it's copying the raw image to LVM.

How can we speed it up?
I don't want an instance from a 100GB image to take 4 hours to boot.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



[Openstack] Adding a volume to existing instance

2015-07-27 Thread Ivan Derbenev
Hello, guys!
This may be a rather simple question for you, but I can't quite figure it out.
For example, we have 2 hardware machines with Ubuntu+KVM+Kilo
Each machine has 2 TB of local storage
Shared storage on freenas can be used ONLY for images, not for VMs

We created a Debian instance on server1, and now we need to attach a 100GB disk 
to it. We want this disk to be created on the same server1. It doesn't 
really matter which backend we use - LVM, file image, anything else.

Well... it can easily be done by KVM itself with virsh attach-device. But how 
can we do this via the OpenStack services? Cinder will use iSCSI even within a 
single machine (I don't know whether that will be slower or not), and we can't 
predict which storage server it will use to create the disk.
And I couldn't find any way to do it inside nova.
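One hedged option: if each hardware machine also runs cinder-volume on its local 2 TB, Cinder's scheduler (since Kilo) ships an InstanceLocalityFilter that places a new volume on the same host as a given instance; the data path is still iSCSI, but it never leaves the box. A sketch:

```ini
# /etc/cinder/cinder.conf on the scheduler node - assumes Kilo's stock filters
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter
```

The volume is then pinned at creation time with a scheduler hint, e.g. cinder create --hint local_to_instance=<instance-uuid> 100, and attached with nova volume-attach as usual.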


Regards,
IT engineer
Farheap, Russia
Ivan Derbenev



Re: [Openstack] XenServer 6.5 or KVM

2015-07-24 Thread Ivan Derbenev
We had a small deployment (2 nodes) of OS Juno + XS 6.5, and after I understood how 
it should work with Xen, everything worked great.
Still, we had to move to KVM because XS with local storage lacked support for 
dynamic volumes via Cinder.

If you plan to use shared storage for VMs, XS is a great choice. It's MUCH 
easier to maintain than KVM, especially for a newbie to such technologies.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Leandro Mendes [mailto:theflock...@gmail.com]
Sent: Friday, July 24, 2015 5:15 PM
To: Mārtiņš Jakubovičs
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] XenServer 6.5 or KVM

Thank you guys.
One last question: this 3-node setup is just a proof of concept for a bigger 
architecture (around 30 nodes). I personally would prefer to use KVM, so we 
can use Kilo and Neutron, but for political reasons we are really being pushed 
to use XenServer (no problem with XS itself, I think it is great), but we 
don't have enough time and hands to make it work.
Thinking about it, I would ask (maybe Bob will give me the best answer): what 
would be the best combination of OpenStack and XenServer versions?
Maybe an older XCP with Juno or Icehouse?
And what about VMware, which, like XenServer, is in Group B?

Thank you again.

On Fri, Jul 24, 2015 at 4:20 AM, Mārtiņš Jakubovičs 
<mart...@vertigs.lv> wrote:
Hello Leandro,

From my experience OpenStack + XenServer is really difficult to build, but it 
works. Newer features like Neutron, advanced Cinder drivers, XenServer pools, 
etc. do not work well, at least with XenServer 6.2 and the Icehouse release. If 
you know both technologies and don't know which to choose, then use KVM.
If you want to use XenServer, then community member Bob Ball 
(bob.b...@citrix.com) can help you more with this.

Best regards,
Martins

On 2015.07.24. 05:03, Leandro Mendes wrote:
Hi Guys,
I'm trying to set up a 3-node (controller, neutron and compute) OpenStack setup, 
but I'm having a lot of trouble with the XenServer (6.5) part.
Looking around the internet for docs about XenServer as a compute node, I've got 
the impression that the main effort of OpenStack goes into KVM rather than 
XenServer, because the docs (and lots of other links out there) are outdated.
May I consider that true? Should I forget about XenServer and move to KVM?
What do you guys think about that?
Thank you.






Re: [Openstack] xenserver and cinder

2015-07-14 Thread Ivan Derbenev
Well, we are migrating from CloudStack now, and it has this feature.

So, do I understand it right that in the current situation I cannot use data 
disks on local storage at all?
The only solution is to add a layer through Cinder + LVM and share 
Cinder volumes over iSCSI?

Do you know what's the best practice for a system like mine?
I mean XS with local storage.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 5:18 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

Hi Ivan,

In that case then no, I don't believe what you are trying to achieve is 
currently possible with OpenStack.  My understanding is that detachable block 
storage is provided only by Cinder and Cinder must be network-attached storage 
as there is no concept of tying the storage to particular VMs.

For example, you create the cinder volumes _before_ you attach it to a virtual 
machine - so there is no knowledge of which VM or which hypervisor will be 
attaching to the volume.

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 14 July 2015 15:10
To: Bob Ball; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

Ok, well...
I don't have a reason to use Cinder specifically.
What I need is the ability to create, delete and attach data volumes to 
instances. As far as I know I can't do that without Cinder.
If you can show me a way to do it using local storage only, that 
would be really great.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 4:58 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

As I understand it Cinder provides volumes that must be accessible to VMs on 
any of the hosts - i.e. must be accessed over the network.  Therefore if you 
want to use Cinder to provide data volumes (or BFV) then you need a VM/physical 
machine/storage array providing the actual storage which is accessible over the 
network.

I'm not aware of a way to use Cinder to create volumes that are only accessible 
to VMs on a single host - which is what you would need in order to use storage 
that is entirely local to the hypervisor.

Just to understand, why are you trying to use Cinder backed by local storage 
only?  What's the use case here?

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 14 July 2015 14:50
To: Bob Ball; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

So, do I understand it right - I can't get rid of the additional layer for 
Cinder volumes if I want to use local storage?

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 12:34 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

Hi Ivan,

XenAPINFSDriver was primarily useful for pooled scenarios (which in turn relied 
on Nova aggregates) - however it's not the easiest way to consume Cinder 
volumes.  The XenServer Nova integration supports BFV and volume attach for 
Cinder volumes presented over iSCSI, so however those are managed by Cinder is 
independent of how XenServer can consume them.

Even when using the deprecated XenAPINFSDriver, the storage had to be remote 
storage (i.e. provided to the host through an NFS server) so that does not let 
you use the hypervisor-local storage as block storage provisionable by Cinder 
and you would have needed a separate VM if you actually need your Cinder 
volumes to be provisioned from local storage.

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 13 July 2015 18:53
To: openstack@lists.openstack.org
Subject: [Openstack] xenserver and cinder


Hello, guys!
We are currently making an installation of XS 6.5 + OS Juno (maybe we'll switch 
to Kilo).
We have 2 servers; the controller has Glance, Nova and Keystone, and the compute 
VMs have only nova-compute installed.
We use local storage on each server, not a shared one (this is important).

We want to implement the Cinder service to manage volumes easily, but I can't 
understand how we can do it. XenAPINFSDriver is deprecated and isn't supported 
any more.
The only solution I have found so far is to create a storage VM, give it some 
space, use that space for Cinder volumes, and then mount them in the VMs as 
iSCSI targets.
At the same time, Nova uses the XenAPI to create and manage volumes when it 
creates instances.

What I want is to make Cinder use XenServer volumes both when Nova 
creates a VM and when I create a volume with Cinder, 
without a second level of abstraction.

Can you tell me the best practices for using Cinder with XenServer on 
local storage?


Regards,
Ivan Derbenev
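For the storage-VM workaround described above, a minimal cinder.conf inside that VM might look like this - a sketch assuming the Juno/Kilo reference LVM/iSCSI driver, with the volume group name and IP address as placeholders:

```ini
# cinder.conf inside the storage VM - reference LVM driver exporting over iSCSI
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes    # hypothetical VG carved from the VM's disk
iscsi_helper = tgtadm
iscsi_ip_address = 10.0.0.5      # hypothetical address of the storage VM
```

The XenServer compute nodes then attach these volumes over iSCSI through the normal Nova/Cinder flow; it does add the second layer of abstraction you mention, which is why shared or NFS-backed storage is usually recommended instead.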

Re: [Openstack] xenserver and cinder

2015-07-14 Thread Ivan Derbenev
So, do I understand it right - I can't get rid of the additional layer for 
Cinder volumes if I want to use local storage?

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 12:34 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

Hi Ivan,

XenAPINFSDriver was primarily useful for pooled scenarios (which in turn relied 
on Nova aggregates) - however it's not the easiest way to consume Cinder 
volumes.  The XenServer Nova integration supports BFV and volume attach for 
Cinder volumes presented over iSCSI, so however those are managed by Cinder is 
independent of how XenServer can consume them.

Even when using the deprecated XenAPINFSDriver, the storage had to be remote 
storage (i.e. provided to the host through an NFS server) so that does not let 
you use the hypervisor-local storage as block storage provisionable by Cinder 
and you would have needed a separate VM if you actually need your Cinder 
volumes to be provisioned from local storage.

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 13 July 2015 18:53
To: openstack@lists.openstack.org
Subject: [Openstack] xenserver and cinder


Hello, guys!
We are currently making an installation of XS 6.5 + OS Juno (maybe we'll switch 
to Kilo).
We have 2 servers; the controller has Glance, Nova and Keystone, and the compute 
VMs have only nova-compute installed.
We use local storage on each server, not a shared one (this is important).

We want to implement the Cinder service to manage volumes easily, but I can't 
understand how we can do it. XenAPINFSDriver is deprecated and isn't supported 
any more.
The only solution I have found so far is to create a storage VM, give it some 
space, use that space for Cinder volumes, and then mount them in the VMs as 
iSCSI targets.
At the same time, Nova uses the XenAPI to create and manage volumes when it 
creates instances.

What I want is to make Cinder use XenServer volumes both when Nova 
creates a VM and when I create a volume with Cinder, 
without a second level of abstraction.

Can you tell me the best practices for using Cinder with XenServer on 
local storage?


Regards,
Ivan Derbenev



Re: [Openstack] xenserver and cinder

2015-07-14 Thread Ivan Derbenev
Ok, well...
I don't have a reason to use Cinder specifically.
What I need is the ability to create, delete and attach data volumes to 
instances. As far as I know I can't do that without Cinder.
If you can show me a way to do it using local storage only, that 
would be really great.

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 4:58 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

As I understand it Cinder provides volumes that must be accessible to VMs on 
any of the hosts - i.e. must be accessed over the network.  Therefore if you 
want to use Cinder to provide data volumes (or BFV) then you need a VM/physical 
machine/storage array providing the actual storage which is accessible over the 
network.

I'm not aware of a way to use Cinder to create volumes that are only accessible 
to VMs on a single host - which is what you would need in order to use storage 
that is entirely local to the hypervisor.

Just to understand, why are you trying to use Cinder backed by local storage 
only?  What's the use case here?

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 14 July 2015 14:50
To: Bob Ball; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

So, do I understand it right - I can't get rid of the additional layer for 
Cinder volumes if I want to use local storage?

Regards,
IT engineer
Farheap, Russia
Ivan Derbenev

From: Bob Ball [mailto:bob.b...@citrix.com]
Sent: Tuesday, July 14, 2015 12:34 PM
To: Ivan Derbenev; openstack@lists.openstack.org
Subject: RE: xenserver and cinder

Hi Ivan,

XenAPINFSDriver was primarily useful for pooled scenarios (which in turn relied 
on Nova aggregates) - however it's not the easiest way to consume Cinder 
volumes.  The XenServer Nova integration supports BFV and volume attach for 
Cinder volumes presented over iSCSI, so however those are managed by Cinder is 
independent of how XenServer can consume them.

Even when using the deprecated XenAPINFSDriver, the storage had to be remote 
storage (i.e. provided to the host through an NFS server) so that does not let 
you use the hypervisor-local storage as block storage provisionable by Cinder 
and you would have needed a separate VM if you actually need your Cinder 
volumes to be provisioned from local storage.

Bob

From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: 13 July 2015 18:53
To: openstack@lists.openstack.org
Subject: [Openstack] xenserver and cinder


Hello, guys!
We are currently making an installation of XS 6.5 + OS Juno (maybe we'll switch 
to Kilo).
We have 2 servers; the controller has Glance, Nova and Keystone, and the compute 
VMs have only nova-compute installed.
We use local storage on each server, not a shared one (this is important).

We want to implement the Cinder service to manage volumes easily, but I can't 
understand how we can do it. XenAPINFSDriver is deprecated and isn't supported 
any more.
The only solution I have found so far is to create a storage VM, give it some 
space, use that space for Cinder volumes, and then mount them in the VMs as 
iSCSI targets.
At the same time, Nova uses the XenAPI to create and manage volumes when it 
creates instances.

What I want is to make Cinder use XenServer volumes both when Nova 
creates a VM and when I create a volume with Cinder, 
without a second level of abstraction.

Can you tell me the best practices for using Cinder with XenServer on 
local storage?


Regards,
Ivan Derbenev



[Openstack] xenserver and cinder

2015-07-13 Thread Ivan Derbenev
Hello, guys!
We are currently making an installation of XS 6.5 + OS Juno (maybe we'll switch 
to Kilo).
We have 2 servers; the controller has Glance, Nova and Keystone, and the compute 
VMs have only nova-compute installed.
We use local storage on each server, not a shared one (this is important).

We want to implement the Cinder service to manage volumes easily, but I can't 
understand how we can do it. XenAPINFSDriver is deprecated and isn't supported 
any more.
The only solution I have found so far is to create a storage VM, give it some 
space, use that space for Cinder volumes, and then mount them in the VMs as 
iSCSI targets.
At the same time, Nova uses the XenAPI to create and manage volumes when it 
creates instances.

What I want is to make Cinder use XenServer volumes both when Nova 
creates a VM and when I create a volume with Cinder, 
without a second level of abstraction.

Can you tell me the best practices for using Cinder with XenServer on 
local storage?


Regards,
Ivan Derbenev
