Re: [openstack-dev] [DevStack] fails in lxc running ubuntu trusty amd64

2014-06-19 Thread Jérôme Gallard
Hi Mike,

We worked with DevStack and LXC and hit the same issue (
https://blueprints.launchpad.net/devstack/+spec/lxc-computes ).

The issue seems to be linked to namespaces:
https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00839.html
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855

Hope it helps,
Jérôme



2014-06-19 9:03 GMT+02:00 Mike Spreitzer mspre...@us.ibm.com:

 In my Linux containers running Ubuntu 14.04 64-bit, DevStack fails because
 it cannot install the package named tgt.  The problem is that the install
 script invokes the tgt service's start operation, which launches the daemon
 (tgtd), and the launch fails with RDMA errors.  Has anybody tried
 such a thing?  Any fixes or workarounds?  Any ideas?

 Thanks,
 Mike




Re: [openstack-dev] [nova][scheduler] Availability Zones and Host aggregates..

2014-03-27 Thread Jérôme Gallard
Hi Toan,
Is what you say related to :
https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones?


2014-03-27 10:37 GMT+01:00 Khanh-Toan Tran khanh-toan.t...@cloudwatt.com:



 - Original Message -
  From: Sangeeta Singh sin...@yahoo-inc.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, March 26, 2014 6:54:18 PM
  Subject: Re: [openstack-dev] [nova][scheduler] Availability Zones and
 Host aggregates..
 
 
 
  On 3/26/14, 10:17 AM, Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
  wrote:
 
  
  
  - Original Message -
   From: Sangeeta Singh sin...@yahoo-inc.com
   To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
   Sent: Tuesday, March 25, 2014 9:50:00 PM
   Subject: [openstack-dev] [nova][scheduler] Availability Zones and Host
  aggregates..
  
   Hi,
  
    The availability zone filter states that, theoretically, a compute node
    can be part of multiple availability zones. I have a requirement where
    I need to make a compute node part of 2 AZs. When I try to create a
    host aggregate with an AZ, I cannot add the node to two host aggregates
    that have an AZ defined. However, if I create a host aggregate without
    associating an AZ, then I can add the compute nodes to it. After doing
    that, I can update the host aggregate and associate an AZ. This looks
    like a bug.
  
    I can see the compute node listed in the 2 AZs with the
    availability-zone-list command.
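
For illustration, the sequence described above can be reproduced with
python-novaclient roughly as follows (a sketch only, assuming a Havana-era
v1.1 client; the credentials, endpoint, aggregate name, and host name are
placeholders):

from novaclient.v1_1 import client

# Placeholder credentials and endpoint for illustration.
nova = client.Client("admin", "secret", "admin",
                     "http://controller:5000/v2.0")

# Creating the aggregate without an AZ, adding the host, and only then
# attaching the AZ metadata is the sequence that succeeds even when the
# host already sits in another AZ-backed aggregate -- the suspected bug.
agg = nova.aggregates.create("agg-az2", availability_zone=None)
nova.aggregates.add_host(agg, "node1")
nova.aggregates.set_metadata(agg, {"availability_zone": "az2"})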
  
  
   Yes, it appears to be a bug to me (apparently the AZ metadata insertion
   is treated as normal metadata insertion, so no check is done), and the
   message in the AvailabilityZoneFilter suggests the same. I don't know
   why you need a compute node that belongs to 2 different availability
   zones. Maybe I'm wrong, but to me it's logical that availability zones
   do not share the same compute nodes. The role of availability zones is
   to partition your compute nodes into zones that are physically
   separated (in the strictest sense that would require separate physical
   servers, networking equipment, power sources, etc.). So when a user
   deploys 2 VMs in 2 different zones, he knows that these VMs do not land
   on the same host, and if one zone fails the others continue working, so
   the client will not lose all of his VMs. It's a smaller-scale construct
   than regions, which ensure total separation at the cost of low-layer
   connectivity and central management (e.g. scheduling per region).
   
   See: http://www.linuxjournal.com/content/introduction-openstack
   
   The other purpose, regrouping hosts with the same characteristics, is
   handled by host aggregates.
  
    The problem I have is that I still cannot boot a VM on the compute
    node when I do not specify the AZ in the command, though I have set
    the default availability zone and the default schedule zone in
    nova.conf.
   
    I get the error "ERROR: The requested availability zone is not
    available".
  
    What I am trying to achieve is to have two AZs that the user can
    select during boot, but then have a default AZ which contains the
    hypervisors from both AZ1 and AZ2, so that when the user does not
    specify any AZ in the boot command I scatter my VMs across both AZs
    in a balanced way.
  
  
   I do not understand your goal. When you create two availability zones
   and put ALL of your compute nodes into these AZs, then if you don't
   specify the AZ in your request, the AZFilter will automatically accept
   all hosts. The default weigher (RamWeigher) will then distribute the
   workload fairly among these nodes regardless of the AZ they belong to.
   Maybe that is what you want?
 
  With Havana that does not happen, as there is a concept of
  default_schedule_zone, which is None if not specified, and when we do
  specify one we can only specify a single AZ, whereas in my case I
  basically want both of the 2 AZs that I create to be considered default
  zones if nothing is specified.

 If you look into the code of the AvailabilityZoneFilter, you'll see that
 the filter automatically accepts a host if there is NO availability zone
 in the request, which is the case when the user does not specify an AZ.
 This is exactly what I see on my OpenStack platform (Havana stable). FYI,
 I didn't set up a default AZ in the config. So whenever I create several
 VMs without specifying an AZ, the scheduler spreads the VMs across all
 hosts regardless of their AZ.

 What I think is lacking is that the user cannot currently select a set of
 AZs, as opposed to exactly one or none.
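
The pass-through behaviour described above can be sketched like this (a
simplification of the Havana-era filter for illustration, not the exact
upstream code; host_aggregate_azs stands in for the host's aggregate AZ
metadata lookup):

class AvailabilityZoneFilter(object):
    def host_passes(self, host_state, filter_properties):
        spec = filter_properties.get('request_spec', {})
        props = spec.get('instance_properties', {})
        requested_az = props.get('availability_zone')
        if not requested_az:
            # No AZ in the request: every host passes, and the weighers
            # then spread instances across hosts in all AZs.
            return True
        # Otherwise the host must belong to the requested AZ.
        # (host_aggregate_azs is a hypothetical stand-in for the
        # aggregate metadata lookup.)
        return requested_az in host_state.host_aggregate_azs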

  
   Any pointers.
  
   Thanks,
   Sangeeta
  
 
 
  

Re: [openstack-dev] Combination of ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter

2013-07-11 Thread Jérôme Gallard
Thanks a lot for your answers and for solving the issue.

Regards,
Jérôme

On Mon, Jul 8, 2013 at 3:05 PM, Russell Bryant rbry...@redhat.com wrote:
 On 07/05/2013 08:14 PM, Qiu Yu wrote:
 Russell,

  Should ComputeCapabilitiesFilter also be restricted to using the scoped
  format only? Currently it recognizes and compares BOTH scoped and
  non-scoped keys, which is what causes the conflict.

  I've already submitted a bug report and a patch for review:

 https://bugs.launchpad.net/nova/+bug/1191185
 https://review.openstack.org/#/c/33143/

 But removing non-scoped support breaks backwards compatibility.  We
 should avoid that whenever possible.  In this case, there's a pretty
 easy solution to avoid conflicts while also not breaking backwards
 compatibility.

 --
 Russell Bryant



[openstack-dev] Combination of ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter

2013-07-05 Thread Jérôme Gallard
Hi all,

I'm trying to combine ComputeCapabilitiesFilter and
AggregateInstanceExtraSpecsFilter. However I probably missed
something, because it does not work :-)

Both filters are activated, in the following order:
ComputeCapabilitiesFilter, AggregateInstanceExtraSpecsFilter.

I created a flavor with the following extra_spec:
* capabilities:hypervisor_hostname=node1
* class=good

I created an aggregate containing node1 with an extra_spec:
* class=good

When I start a new instance with the previously created flavor,
ComputeCapabilitiesFilter can't find an available node. I put some
debugging inside the filter. From my understanding, it seems that
ComputeCapabilitiesFilter manages to find the first spec,
capabilities:hypervisor_hostname=node1, in the list of metadata
provided by the host node1: the first iteration of the loop is OK.
Then the filter continues with the class=good spec and, of course,
that fails, so the filter reports that there is no available host.

Do you have an idea about what I'm missing? How can I tell
ComputeCapabilitiesFilter that the class key is not meant for it?

I read the detailed documentation about filter_scheduler (
http://docs.openstack.org/developer/nova/devref/filter_scheduler.html
). But I didn't manage to solve the issue.
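
One way to keep the two filters from stepping on each other's keys is to
scope every extra_spec, so that a filter skips keys addressed to a
different scope. A minimal sketch (the aggregate_instance_extra_specs:
prefix is the scope that later gained upstream support; see the follow-up
thread above):

# Scoped flavor extra_specs: each filter only consumes its own scope.
flavor_extra_specs = {
    'capabilities:hypervisor_hostname': 'node1',
    'aggregate_instance_extra_specs:class': 'good',
}

def specs_for_scope(extra_specs, scope):
    """Return only the specs addressed to the given filter scope."""
    prefix = scope + ':'
    return {key[len(prefix):]: value
            for key, value in extra_specs.items()
            if key.startswith(prefix)}

# ComputeCapabilitiesFilter would then see only its own key:
print(specs_for_scope(flavor_extra_specs, 'capabilities'))
# -> {'hypervisor_hostname': 'node1'}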

Thanks a lot.

Regards,
Jérôme



Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Jérôme Gallard
Hi all,

Russell, I agree with all of your remarks, especially that exposing
placement details to users should be avoided.

However, I see a possible use case for the filter. For instance, if we
consider the blueprint "Support for multiple active scheduler drivers" (
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
): a cloud provider may want to offer a specific class of service (on a
dedicated aggregate) for users who want to ensure that volumes and
instances land on the same host, while using the weight function for all
other hosts.
Does it make sense?

Regards,
Jérôme

On Wed, Jul 3, 2013 at 5:54 PM, Russell Bryant rbry...@redhat.com wrote:
 On 07/03/2013 10:24 AM, Alexey Ovchinnikov wrote:
 Hi everyone,

 for some time I have been working on an implementation of a filter that
 would allow forcing instances onto hosts that contain specific volumes.
 A blueprint can be found here:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
 and an implementation here:
 https://review.openstack.org/#/c/29343/

 The filter works for the LVM driver, and currently it picks either a host
 containing the specified volume or nothing (thus effectively failing
 instance scheduling). It fails primarily when it can't find the volume.
 It has been pointed out to me that sometimes it may be desirable not to
 fail instance scheduling but to run the instance anyway. However, this
 softer behaviour is a better fit for a weigher function, so I have
 registered a blueprint for one:
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function

 I was thinking of the filter and the weigher working together. The former
 could be used when we strongly need the storage space associated with an
 instance to be placed on the same host. The latter could be used when
 having the storage on the same host as the instance is nice to have, but
 not so crucial that the instance should fail to run.

 During review, the question arose whether we need the filter at all, and
 whether things would be better if we removed it and had only the weigher
 function instead. I am not yet convinced that the filter is useless and
 needs to be replaced by the weigher, so I am asking for your opinion on
 this matter. Do you see use cases for the filter, or will the weigher
 answer all needs?

 Thanks for starting this thread.

 I was pushing for the weight function.  It seems much more appropriate
 for a cloud environment than the filter.  It's an optimization that is
 always a good idea, so a weight function that works automatically
 would be good.  It's also transparent to users.

 Some things I don't like about the filter:

  - It requires specifying a scheduler hint.

  - It exposes the concept of co-locating volumes and instances on the
 same host to users.  This isn't applicable to many volume backends.  As
 a result, it violates the principle that users ideally should not need
 to know or care about deployment details.
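
For comparison, a weigher along the lines Alexey proposes might look
roughly like this (a sketch only, not the code under review; the
volume_affinity_id key and get_volume_host() helper are hypothetical
stand-ins, the latter for a Cinder lookup of the volume's backing host):

from nova.scheduler import weights

def get_volume_host(volume_id):
    # Hypothetical helper: look up the host backing the volume,
    # e.g. via python-cinderclient. Hard-coded here for illustration.
    return 'node1'

class VolumeAffinityWeigher(weights.BaseHostWeigher):
    def _weigh_object(self, host_state, weight_properties):
        volume_id = weight_properties.get('volume_affinity_id')
        if not volume_id:
            return 0.0
        # Prefer the host that already holds the volume; all other hosts
        # score neutral, so scheduling never fails outright.
        return 1.0 if host_state.host == get_volume_host(volume_id) else 0.0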

 --
 Russell Bryant
