Re: [Openstack-operators] ethtool with virtual NIC shows nothing - print the current settings of the NIC in OpenStack VM

2016-08-26 Thread Kris G. Lindgren
Assuming you are using paravirtualized NICs, you won't see anything because 
they aren't real network devices.  Additionally, I think exposing those 
settings would fall under leaking physical implementation details to the cloud 
user.
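
For what it's worth, the driver can still be identified from inside the guest; 
a minimal check (interface name assumed):

  # Show driver information; on a paravirtualized NIC the driver line
  # typically reads virtio_net. Speed/duplex stay empty because virtio
  # exposes no physical link to query.
  ethtool -i eth0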

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Lukas Lehner 
Date: Friday, August 26, 2016 at 2:31 PM
To: "openstack-operators@lists.openstack.org" 

Subject: [Openstack-operators] ethtool with virtual NIC shows nothing - print 
the current settings of the NIC in OpenStack VM

Hi

http://unix.stackexchange.com/questions/305638/ethtool-with-virtual-nic-shows-nothing-print-the-current-settings-of-the-nic-i

Lukas


[Openstack-operators] ethtool with virtual NIC shows nothing - print the current settings of the NIC in OpenStack VM

2016-08-26 Thread Lukas Lehner
Hi

http://unix.stackexchange.com/questions/305638/ethtool-with-virtual-nic-shows-nothing-print-the-current-settings-of-the-nic-i

Lukas


Re: [Openstack-operators] Neutron Allow tenants to select Fixed VM IPs

2016-08-26 Thread Kosnik, Lubosz
Lubosz is my first name, not Kosnik :P
You can create a VM from Horizon and specify only the floating IP to be exactly 
that one; with private networks it's not available from Horizon.
Getting the next IP every time is normal: after reaching the top of the 
specified IP range, allocation starts looking for free IPs from the beginning 
of the range.
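
A quick way to see the range being cycled through (subnet ID is a placeholder):

  # The allocation_pools field shows the range that new ports draw from
  neutron subnet-show <subnet-id>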

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Aug 26, 2016, at 11:06 AM, William Josefsson wrote:

Hi Kosnik, thanks. Is there any way in the GUI for the user to do that, or do 
they need to use the CLI ('neutron port-create ...')?
Maybe I can pre-create the fixed IPs as admin, but how does a standard tenant 
user select the ports just created, just as they select the networks/subnets 
during 'Launch an instance'?

I notice while provisioning that the IP address increments every time, even if 
previous instances with lower IPs are deleted. What will happen eventually when 
I reach the last IP: will the lower-numbered IPs be reused, or what would the 
behavior be? thx will



On Thu, Aug 25, 2016 at 10:58 PM, Kosnik, Lubosz wrote:
A VM will always get the same IP from the DHCP server. To prepare a VM with a 
fixed IP you need to use neutron to create a port in the specified network with 
the specified IP; after that, when booting the new VM, you specify not net-id 
but port-id and it's gonna work.
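
As a sketch of that flow (names and IDs are placeholders, Liberty-era CLI):

  # Create a port with the desired fixed IP on the shared provider network
  neutron port-create <network-id> \
    --fixed-ip subnet_id=<subnet-id>,ip_address=10.0.0.50

  # Boot against the pre-created port instead of the network
  nova boot --flavor m1.small --image <image-id> \
    --nic port-id=<port-id> myvm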

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC
lubosz.kos...@intel.com

On Aug 25, 2016, at 9:01 AM, William Josefsson wrote:

Hi,

I wonder if there's any way of allowing my users to select fixed IPs for the 
VMs? I do shared Provider networks, VLAN on Liberty/CentOS.

I know nova boot from the CLI or API has the v4-fixed-ip=ip-addr option; 
however, is there any way in the Dashboard for the user to select a static IP?

I would also appreciate it if anyone could explain the default dnsmasq DHCP 
lease. Will a VM always get the same IP during its lifetime, or may it change? 
thx will





Re: [Openstack-operators] Neutron Allow tenants to select Fixed VM IPs

2016-08-26 Thread William Josefsson
Hi Kosnik, thanks. Is there any way in the GUI for the user to do that, or do
they need to use the CLI ('neutron port-create ...')?
Maybe I can pre-create the fixed IPs as admin, but how does a standard tenant
user select the ports just created, just as they select the
networks/subnets during 'Launch an instance'?

I notice while provisioning that the IP address increments every time,
even if previous instances with lower IPs are deleted. What will happen
eventually when I reach the last IP: will the lower-numbered IPs be reused,
or what would the behavior be? thx will



On Thu, Aug 25, 2016 at 10:58 PM, Kosnik, Lubosz wrote:

> A VM will always get the same IP from the DHCP server. To prepare a VM with
> a fixed IP you need to use neutron to create a port in the specified network
> with the specified IP; after that, when booting the new VM, you specify not
> net-id but port-id and it's gonna work.
>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
> On Aug 25, 2016, at 9:01 AM, William Josefsson wrote:
>
> Hi,
>
> I wonder if there's any way of allowing my users to select fixed IPs for
> the VMs? I do shared Provider networks, VLAN on Liberty/CentOS.
>
> I know nova boot from the CLI or API has the *v4-fixed-ip=ip-addr* option;
> however, is there any way in the Dashboard for the user to select a static
> IP?
>
> I would also appreciate it if anyone could explain the default dnsmasq DHCP
> lease. Will a VM always get the same IP during its lifetime, or may it
> change? thx will


Re: [Openstack-operators] [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Tim Bell

On 26 Aug 2016, at 17:44, Andrew Laski wrote:




On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:


On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski wrote:


On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
> Hi,
> option 1 (=that's what the patches suggest) sounds totally fine.
> Option 3 > Allow block device mappings, when present, to mostly determine
> instance packing
> sounds like option 1 + additional logic (=keyword 'mostly')
> I think I fail to understand the part about 'undermining the purpose of the
> flavor'.
> Why might the new behavior require one more parameter to limit the number of
> instances on a host?
> Won't those VMs be under the control of other flavor constraints anyway,
> such as CPU and RAM, and won't those be the ones controlling 'instance
> packing'?

Yes it is possible that CPU and RAM could be controlling instance
packing. But my understanding is that since those are often
oversubscribed
I don't understand why the oversubscription ratio matters here?


My experience is with environments where oversubscription was used to be a 
little loose with how many vCPUs or how much RAM was allocated, but disk was 
strictly controlled.




while disk is not, it's actually the disk amounts
that control the packing in some environments.
Maybe an explanation of what you mean by "packing" would help here.  Customers 
that I've worked with over the years have used CPU and memory as their levers, 
and those are the main things they care about in terms of how many instances 
go on a node.  I'd like to learn more about why that's wrong and why disk 
space is the mechanism that deployers use for this.


By packing I just mean the various ways that different flavors fit on a host. A 
host may be designed to hold 1 xlarge, or 2 large, or 4 mediums, or 1 large and 
2 mediums, etc... The challenge I see here is that the constraint can be 
managed by using CPU or RAM or disk or some combination of the three. For 
deployers just using disk the above patches will change behavior for them.

It's not wrong to use CPU/RAM, but it's not what everyone is doing. One purpose 
of this email was to gauge if it would be acceptable to only use CPU/RAM for 
packing.




But that is a sub-option
here: just document that disk amounts should not be used to determine
flavor packing on hosts and that CPU and RAM must be used instead.

> Does option 3 cover the case where someone relied on e.g. the flavor root
> disk for an instance booted from volume, and instance packing will now
> change once the patches are implemented?

That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
swap/ephemeral=0 the deployer is stating that they want only 4 instances
on that host.
How do you arrive at that logic?  What if they actually wanted a single 
VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining resources split 
among instances that were all 1 VCPU, 1 GB RAM and a 1 GB root disk?

My example assumes the one stated flavor. But if they have a smaller flavor 
then more than 4 instances would fit.


If there is CPU and RAM oversubscription enabled then by
using volumes a user could end up with more than 4 instances on that
host. So a max_instances=4 setting could solve that. However I don't
like the idea of adding a new config, and I think it's too simplistic to
cover more complex use cases. But it's an option.

I would venture to guess that most operators would be sad to read that.  So 
rather than give them an explicit lever that does exactly what they want, 
clearly and explicitly, we should make it as complex as possible and have it 
be the result of a 4- or 5-variable equation?  Not to mention it's completely 
dynamic (because it seems like lots of clouds have more than one flavor).

Is that lever exactly what they want? That's part of what I'd like to find out 
here. But currently it's possible to set up a situation where 1 large flavor or 
4 small flavors fit on a host. So would the max_instances=4 setting be desired? 
Keep in mind that if the above patches merge, 4 large-flavor instances could be 
put on that host if they only use remote volumes and proper CPU/RAM limits 
aren't in place.

I probably was not clear enough in my original description or made some bad 
assumptions. The concern I have is that if someone is currently relying on disk 
sizes for their instance limits then the above patches change behavior for them 
and affect capacity limits and planning. Is this okay and if not what do we do?


From a single operator's perspective, we'd prefer an option which would allow 
boot from volume with a larger size than the flavour. The quota for volumes 
would avoid abuse.
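
For reference, the user-facing request in question is a boot-from-volume call 
along these lines (IDs are placeholders, novaclient syntax):

  # The root disk comes from a Cinder volume, so the flavor's root_gb no
  # longer describes any local disk actually consumed on the host.
  nova boot --flavor m1.large \
    --block-device source=volume,id=<volume-id>,dest=volume,bootindex=0 \
    bfv-instance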

The use cases we encounter are a standard set of flavors with defined 
core/memory/disk ratios which correspond to the 

Re: [Openstack-operators] [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Andrew Laski



On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:
>
>
> On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski wrote:
>>
>>
>> On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
>> > Hi,
>>  > option 1 (=that's what the patches suggest) sounds totally fine.
>>  > Option 3 > Allow block device mappings, when present, to mostly
>>  > determine instance packing sounds like option 1 + additional logic
>>  > (=keyword 'mostly'). I think I fail to understand the part about
>>  > 'undermining the purpose of the flavor'. Why might the new behavior
>>  > require one more parameter to limit the number of instances on a host?
>>  > Won't those VMs be under the control of other flavor constraints
>>  > anyway, such as CPU and RAM, and won't those be the ones controlling
>>  > 'instance packing'?
>>
>> Yes it is possible that CPU and RAM could be controlling instance
>>  packing. But my understanding is that since those are often
>>  oversubscribed
> I don't understand why the oversubscription ratio matters here?
>

My experience is with environments where oversubscription was used
to be a little loose with how many vCPUs were allocated or how much RAM
was allocated, but disk was strictly controlled.

>
>
>
>> while disk is not, it's actually the disk amounts
>>  that control the packing in some environments.
> Maybe an explanation of what you mean by "packing" would help here.
> Customers that I've worked with over the years have used CPU and memory
> as their levers, and those are the main things they care about in terms
> of how many instances go on a node.  I'd like to learn more about why
> that's wrong and why disk space is the mechanism that deployers use for
> this.
>

By packing I just mean the various ways that different flavors fit on a
host. A host may be designed to hold 1 xlarge, or 2 large, or 4 mediums,
or 1 large and 2 mediums, etc... The challenge I see here is that the
constraint can be managed by using CPU or RAM or disk or some
combination of the three. For deployers just using disk the above
patches will change behavior for them.

It's not wrong to use CPU/RAM, but it's not what everyone is doing. One
purpose of this email was to gauge if it would be acceptable to only use
CPU/RAM for packing.


>
>
>> But that is a sub-option
>>  here: just document that disk amounts should not be used to
>>  determine
>>  flavor packing on hosts and that CPU and RAM must be used instead.
>>
>>  > Does option 3 cover the case where someone relied on e.g. the flavor
>>  > root disk for an instance booted from volume, and instance packing
>>  > will now change once the patches are implemented?
>>
>> That's the goal. In a simple case of having hosts with 16 CPUs,
>> 128GB of
>>  RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB,
>>  root_gb=500GB,
>>  swap/ephemeral=0 the deployer is stating that they want only 4
>>  instances
>>  on that host.
> How do you arrive at that logic?  What if they actually wanted a
> single VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining
> resources split among instances that were all 1 VCPU, 1 GB RAM and a
> 1 GB root disk?

My example assumes the one stated flavor. But if they have a smaller
flavor then more than 4 instances would fit.

>
>> If there is CPU and RAM oversubscription enabled then by
>>  using volumes a user could end up with more than 4 instances on that
>>  host. So a max_instances=4 setting could solve that. However I don't
>>  like the idea of adding a new config, and I think it's too
>>  simplistic to
>>  cover more complex use cases. But it's an option.
>
> I would venture to guess that most operators would be sad to read
> that.  So rather than give them an explicit lever that does exactly
> what they want, clearly and explicitly, we should make it as complex as
> possible and have it be the result of a 4- or 5-variable equation?  Not
> to mention it's completely dynamic (because it seems like
> lots of clouds have more than one flavor).

Is that lever exactly what they want? That's part of what I'd like to
find out here. But currently it's possible to set up a situation where 1
large flavor or 4 small flavors fit on a host. So would the
max_instances=4 setting be desired? Keep in mind that if the above
patches merge, 4 large-flavor instances could be put on that host if
they only use remote volumes and proper CPU/RAM limits aren't in place.

I probably was not clear enough in my original description or made some
bad assumptions. The concern I have is that if someone is currently
relying on disk sizes for their instance limits then the above patches
change behavior for them and affect capacity limits and planning. Is
this okay and if not what do we do?


>
> All I know is that the current state is broken.  It's not just the
> scheduling problem, I could live with that probably since it's too
> hard to fix... but keep in mind that you're reporting completely
> wrong information for the instance in these cases.  My flavor says
> it's 5G, but in 

[Openstack-operators] NYC - Airport

2016-08-26 Thread Edgar Magana
Hello,

Anyone want to share a taxi to the airport? I am leaving at 3:00pm; my flight 
is at 6:00pm.

Edgar


Re: [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Andrew Laski


On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
> Hi, 
> option 1 (=that's what the patches suggest) sounds totally fine.
> Option 3 > Allow block device mappings, when present, to mostly determine
> instance packing 
> sounds like option 1 + additional logic (=keyword 'mostly') 
> I think I fail to understand the part about 'undermining the purpose of the
> flavor'.
> Why might the new behavior require one more parameter to limit the number
> of instances on a host? 
> Won't those VMs be under the control of other flavor constraints anyway,
> such as CPU and RAM, and won't those be the ones controlling 'instance
> packing'?

Yes it is possible that CPU and RAM could be controlling instance
packing. But my understanding is that since those are often
oversubscribed while disk is not, it's actually the disk amounts
that control the packing in some environments.  But that is a sub-option
here: just document that disk amounts should not be used to determine
flavor packing on hosts and that CPU and RAM must be used instead.
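
For context, the oversubscription in play is set per resource in nova.conf; a 
sketch with illustrative values, not recommendations:

  # Scheduler allocation ratios: CPU and RAM oversubscribed, disk kept at
  # 1.0 so local disk sizes effectively bound instance packing.
  cpu_allocation_ratio = 16.0
  ram_allocation_ratio = 1.5
  disk_allocation_ratio = 1.0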

> Does option 3 cover the case where someone relied on e.g. the flavor root
> disk for an instance booted from volume, and instance packing will now
> change once the patches are implemented?

That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
swap/ephemeral=0 the deployer is stating that they want only 4 instances
on that host. If there is CPU and RAM oversubscription enabled then by
using volumes a user could end up with more than 4 instances on that
host. So a max_instances=4 setting could solve that. However I don't
like the idea of adding a new config, and I think it's too simplistic to
cover more complex use cases. But it's an option.
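
To make the packing arithmetic concrete, that example flavor could be defined 
like this (name and ID are assumptions):

  # ram=32768 MB, root disk=500 GB, vcpus=4; with no oversubscription a
  # host with 16 CPUs/128GB RAM/2TB disk fits 16/4 = 128/32 = 2000/500 = 4
  # such instances.
  nova flavor-create m1.quarterhost auto 32768 500 4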

> 
> BR, 
> Konstantin
> 
> > -Original Message-
> > From: Andrew Laski [mailto:and...@lascii.com]
> > Sent: Thursday, August 25, 2016 10:20 PM
> > To: openstack-...@lists.openstack.org
> > Cc: openstack-operators@lists.openstack.org
> > Subject: [Openstack-operators] [Nova] Reconciling flavors and block device
> > mappings
> > 
> > Cross posting to gather some operator feedback.
> > 
> > There have been a couple of contentious patches gathering attention recently
> > about how to handle the case where a block device mapping supersedes flavor
> > information. Before moving forward on either of those I think we should 
> > have a
> > discussion about how best to handle the general case, and how to handle any
> > changes in behavior that results from that.
> > 
> > There are two cases presented:
> > 
> > 1. A user boots an instance using a Cinder volume as a root disk, however 
> > the
> > flavor specifies root_gb = x where x > 0. The current behavior in Nova is 
> > that the
> > scheduler is given the flavor root_gb info to take into account during 
> > scheduling.
> > This may disqualify some hosts from receiving the instance even though that 
> > disk
> > space  is not necessary because the root disk is a remote volume.
> > https://review.openstack.org/#/c/200870/
> > 
> > 2. A user boots an instance and uses the block device mapping parameters to
> > specify a swap or ephemeral disk size that is less than specified on the 
> > flavor.
> > This leads to the same problem as above, the scheduler is provided 
> > information
> > that doesn't match the actual disk space to be consumed.
> > https://review.openstack.org/#/c/352522/
> > 
> > Now the issue: while it's easy enough to provide proper information to the
> > scheduler on what the actual disk consumption will be when using block 
> > device
> > mappings that undermines one of the purposes of flavors which is to control
> > instance packing on hosts. So the outstanding question is to what extent 
> > should
> > users have the ability to use block device mappings to bypass flavor 
> > constraints?
> > 
> > One other thing to note is that while a flavor constrains how much local 
> > disk is
> > used, it does not constrain volume size at all. So a user can specify an
> > ephemeral/swap disk <= what the flavor provides but can have an arbitrarily
> > sized root disk if it's a remote volume.
> > 
> > Some possibilities:
> > 
> > Completely allow block device mappings, when present, to determine instance
> > packing. This is what the patches above propose and there's a strong desire 
> > for
> > this behavior from some folks. But it changes how many instances may fit
> > on a host, which could be undesirable to some.
> > 
> > Keep the status quo. It's clear that this is undesirable based on the bug
> > reports and
> > proposed patches above.
> > 
> > Allow block device mappings, when present, to mostly determine instance
> > packing. By that I mean that the scheduler only takes into account local 
> > disk that
> > would be consumed, but we add additional configuration to Nova which limits
> > the number of instances that can be placed on a host. This is a compromise
> > solution but I fear that a single int 

Re: [Openstack-operators] [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Andrew Laski


On Fri, Aug 26, 2016, at 06:31 AM, Chris Dent wrote:
> On Thu, 25 Aug 2016, Andrew Laski wrote:
> 
> > Allow block device mappings, when present, to mostly determine instance
> > packing. By that I mean that the scheduler only takes into account local
> > disk that would be consumed, but we add additional configuration to Nova
> > which limits the number of instances that can be placed on a host. This
> > is a compromise solution but I fear that a single int value does not
> > meet the needs of deployers wishing to limit instances on a host. They
> > want it to take into account cpu allocations and ram and disk, in short
> > a flavor :)
> 
> When you say "add additional configuration" do you mean "add more
> things to nova.conf"? If so, then please don't do that. There is far
> too much of that.

Yes, that's what I mean. I'm not a big fan of this option either for
that reason.

> 
> -- 
> Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
> freenode: cdent tw: @anticdent


Re: [Openstack-operators] [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Chris Dent

On Thu, 25 Aug 2016, Andrew Laski wrote:


Allow block device mappings, when present, to mostly determine instance
packing. By that I mean that the scheduler only takes into account local
disk that would be consumed, but we add additional configuration to Nova
which limits the number of instances that can be placed on a host. This
is a compromise solution but I fear that a single int value does not
meet the needs of deployers wishing to limit instances on a host. They
want it to take into account cpu allocations and ram and disk, in short
a flavor :)


When you say "add additional configuration" do you mean "add more
things to nova.conf"? If so, then please don't do that. There is far
too much of that.

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Kostiantyn.Volenbovskyi
Hi, 
option 1 (=that's what the patches suggest) sounds totally fine.
Option 3 > Allow block device mappings, when present, to mostly determine 
instance packing 
sounds like option 1 + additional logic (=keyword 'mostly') 
I think I fail to understand the part about 'undermining the purpose of the 
flavor'.
Why might the new behavior require one more parameter to limit the number of 
instances on a host? 
Won't those VMs be under the control of other flavor constraints anyway, such 
as CPU and RAM, and won't those be the ones controlling 'instance packing'?
Does option 3 cover the case where someone relied on e.g. the flavor root disk 
for an instance booted from volume, and instance packing will now change once 
the patches are implemented?

BR, 
Konstantin

> -Original Message-
> From: Andrew Laski [mailto:and...@lascii.com]
> Sent: Thursday, August 25, 2016 10:20 PM
> To: openstack-...@lists.openstack.org
> Cc: openstack-operators@lists.openstack.org
> Subject: [Openstack-operators] [Nova] Reconciling flavors and block device
> mappings
> 
> Cross posting to gather some operator feedback.
> 
> There have been a couple of contentious patches gathering attention recently
> about how to handle the case where a block device mapping supersedes flavor
> information. Before moving forward on either of those I think we should have a
> discussion about how best to handle the general case, and how to handle any
> changes in behavior that results from that.
> 
> There are two cases presented:
> 
> 1. A user boots an instance using a Cinder volume as a root disk, however the
> flavor specifies root_gb = x where x > 0. The current behavior in Nova is 
> that the
> scheduler is given the flavor root_gb info to take into account during 
> scheduling.
> This may disqualify some hosts from receiving the instance even though that 
> disk
> space  is not necessary because the root disk is a remote volume.
> https://review.openstack.org/#/c/200870/
> 
> 2. A user boots an instance and uses the block device mapping parameters to
> specify a swap or ephemeral disk size that is less than specified on the 
> flavor.
> This leads to the same problem as above, the scheduler is provided information
> that doesn't match the actual disk space to be consumed.
> https://review.openstack.org/#/c/352522/
> 
> Now the issue: while it's easy enough to provide proper information to the
> scheduler on what the actual disk consumption will be when using block device
> mappings that undermines one of the purposes of flavors which is to control
> instance packing on hosts. So the outstanding question is to what extent 
> should
> users have the ability to use block device mappings to bypass flavor 
> constraints?
> 
> One other thing to note is that while a flavor constrains how much local
> disk is
> used, it does not constrain volume size at all. So a user can specify an
> ephemeral/swap disk <= what the flavor provides but can have an arbitrarily
> sized root disk if it's a remote volume.
> 
> Some possibilities:
> 
> Completely allow block device mappings, when present, to determine instance
> packing. This is what the patches above propose and there's a strong desire 
> for
> this behavior from some folks. But it changes how many instances may fit on
> a host, which could be undesirable to some.
> 
> Keep the status quo. It's clear that this is undesirable based on the bug
> reports and
> proposed patches above.
> 
> Allow block device mappings, when present, to mostly determine instance
> packing. By that I mean that the scheduler only takes into account local disk 
> that
> would be consumed, but we add additional configuration to Nova which limits
> the number of instances that can be placed on a host. This is a compromise
> solution but I fear that a single int value does not meet the needs of 
> deployers
> wishing to limit instances on a host. They want it to take into account cpu
> allocations and ram and disk, in short a flavor :)
> 
> And of course there may be some other unconsidered solution. That's where
> you, dear reader, come in.
> 
> Thoughts?
> 
> -Andrew
> 
> 


Re: [Openstack-operators] Mitaka live snapshot of instances not working

2016-08-26 Thread Michael Stang
Hi,
 
thank you for the link. I tried this method and got the following result:
 
 


virsh dumpxml --inactive instance-0367 > /var/tmp/instance-0367.xml

virsh blockjob instance-0367 vda --abort
error: Requested operation is not valid: No active operation on device:
drive-virtio-disk0

virsh blockcopy --domain instance-0367 vda
/var/tmp/instance-0367-copy.qcow2 --wait --verbose
error: internal error: unable to execute QEMU command 'drive-mirror': Could not
create file: Permission denied

virsh blockjob instance-0367 vda --abort
error: Requested operation is not valid: No active operation on device:
drive-virtio-disk0

virsh define /var/tmp/instance-0367.xml
Domain instance-0367 defined from /var/tmp/instance-0367.xml

 


The last command I could not do, because I got no image from the 3rd command. I
tried it as a normal user and also as root, and also tried different
directories to write the image to (/var/tmp/, /tmp, ~/).
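
Two things may be worth checking; a sketch, assuming Ubuntu's defaults (guests 
run as the libvirt-qemu user and AppArmor confines qemu) and the Mitaka-era 
nova workaround flag:

  # 1) Can the qemu process user create the blockcopy target at all?
  sudo -u libvirt-qemu touch /var/tmp/instance-0367-copy.qcow2

  # AppArmor may also be denying qemu writes outside the instance directory:
  dmesg | grep -i apparmor

  # 2) Is the live-snapshot workaround flag still forcing cold snapshots?
  grep -r disable_libvirt_livesnapshot /etc/nova/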

 

Kind regards,

Michael

> kostiantyn.volenbovs...@swisscom.com wrote on 25 August 2016 at 14:27:
> 
> 
>  Hi,
> 
>   
> 
>  In my previous mail I indicated a link to Kashyap's website that
> provides the sequence for a cold snapshot, not a live snapshot.
> 
>   
> 
>  Could you try the sequence specified in comment ‘Kashyap Chamarthy (kashyapc)
> https://launchpad.net/~kashyapc wrote on 2014-06-27’ in [1] ?
> 
>  The libvirt API equivalent of virsh managedsave is not something that is
> used in a live snapshot, according to that (I haven't checked the source
> code myself).
> 
>   
> 
>  BR,
> 
>  Konstantin
> 
>  [1] https://bugs.launchpad.net/nova/+bug/1334398
> 
>   
> 
>   
> 
>  From: Michael Stang [mailto:michael.st...@dhbw-mannheim.de]
>  Sent: Thursday, August 25, 2016 8:48 AM
>  To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
> ; Saverio Proto 
>  Cc: openstack-operators@lists.openstack.org
>  Subject: RE: [Openstack-operators] Mitaka live snapshot of instances not
> working
> 
>   
> 
>  Hi Konstantin, hi Saverio
> 
>   
> 
>  thank you for your answers.
> 
>   
> 
>  I checked the version, these are
> 
>   
> 
>  libvirt 1.3.1-1ubuntu10.1~cloud0
> 
>  qemu  1:2.5+dfsg-5ubuntu10.2~cloud0
> 
>   
> 
>  at our installation, system is Ubuntu 14.04.
> 
>   
> 
>   
> 
>  I tried also the following from [2]
> 
>   
> 
>  Command: 
> 
>  nova image-create test "snap_of_test" --poll
> 
>  Result: Server snapshotting... 25% complete
>  ERROR (NotFound): Image not found. (HTTP 404)
> 
>   
> 
>   
> 
>  Then I started trying step by step as in [2] but failed at the first step
> already:
> 
>   
> 
>  Command: 
> 
>  virsh managedsave instance-0367
> 
>  Result:
> 
>  error: Failed to save domain instance-0367 state
>  error: internal error: unable to execute QEMU command 'migrate': Migration
> disabled: failed to allocate shared memory
> 
>   
> 
>  I also checked on the compute nodes the directories:
> 
>  /var/lib/libvirt/qemu/save/
>  /var/lib/nova/instances/snapshots/
> 
>  there is 257G of free space and the instance only has a 1GB root disk, so
> I think it's not a space issue.
> 
>   
> 
>  So is this maybe a problem with qemu? How can I enable 'migrate', and why
> is it disabled?
> 
>   
> 
>  Thank you for your help.
> 
>   
> 
>  Kind regards,
>  Michael
> 
>   
> 
>   
> 
>   
> 
>   
> 
> 
>  > kostiantyn.volenbovs...@swisscom.com wrote on 24 August 2016 at 14:51:
>  >
>  >
>  > Hi,
>  > extract from [1] ((side note: I couldn't find that in config reference for
>  > Mitaka) is:
>  > "disable_libvirt_livesnapshot = True
>  > (BoolOpt) When using libvirt 1.2.2 live snapshots fail intermittently under
>  > load. This config option provides a mechanism to enable live snapshot while
>  > this is resolved. See https://bugs.launchpad.net/nova/+bug/1334398"
>  >
>  > I am not sure if Nova behaves like that in case you have
>  > disable_libvirt_livesnapshot=True (default in Liberty and Mitaka
>  > apparently...)
>  > In case it is not about that, then I would try to do it manually using
>  > something like [2] as guideline to see if it succeeds using Libvirt/QEMU
>  > without Nova.
>  >
>  > BR,
>  > Konstantin
>  > [1]
>  > http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
>  > [2]
>  > https://kashyapc.com/2013/03/11/openstack-nova-image-create-under-the-hood/
>  >
>  >
>  >
>  >
>  >
>  > From: Michael Stang [mailto:michael.st...@dhbw-mannheim.de]
>  > Sent: Wednesday, August 24, 2016 9:55 AM
>  > To: openstack-operators   >