Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2015-01-09 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-01-09 14:57:21 -0800:
 On 08/01/15 05:39, Anant Patil wrote:
  1. The stack was failing when there were single disjoint resources or
  just one resource in template. The graph did not include this resource
  due to a minor bug in dependency_names(). I have added a test case and
  fix here:
  https://github.com/anantpatil/heat-convergence-prototype/commit/b58abd77cf596475ecf3f19ed38adf8ad3bb6b3b
 
 Thanks, sorry about that! I will push a patch to fix it up.
 
  2. The resource graph is created with keys for both forward-order and
  reverse-order traversal, and the update will finish the forward order and
  then attempt the reverse order. If this is the case, then the
  update-replaced resources will be deleted before the update is complete,
  and if the update fails, the old resource is not available for
  roll-back; a new resource has to be created instead. I have added a test
  case at the above-mentioned location.
 
  In our PoC, the updates (concurrent updates) won't remove an
  update-replaced resource until all the resources are updated and the
  resource clean-up phase has started.
 
 Hmmm, this is a really interesting question actually. That's certainly 
 not how Heat works at the moment; we've always assumed that rollback is 
 best-effort at recovering the exact resources you had before. It would 
 be great to have users weigh in on how they expect this to behave. I'm 
 curious now what CloudFormation does.
 
 I'm reluctant to change it though because I'm pretty sure this is 
 definitely *not* how you would want e.g. a rolling update of an 
 autoscaling group to happen.
 
  It is unacceptable to remove the old
  resource to be rolled back to, since it may have changes which the user
  doesn't want to lose;
 
 If they didn't want to lose it they shouldn't have tried an update that 
 would replace it. If an update causes a replacement or an interruption 
 to service then I consider the same fair game for the rollback - the 
 user has already given us permission for that kind of change. (Whether 
 the user's consent was informed is a separate question, addressed by 
 Ryan's update-preview work.)
 

In the original vision we had for using scaled groups to manage, say,
nova-compute nodes, you definitely can't create new servers, so you
can't just create all the new instances without de-allocating some first.

That said, that's why we are using in-place methods like rebuild.

I think it would be acceptable to have cleanup run asynchronously,
and to have rollback re-create anything that has already been cleaned up.
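
For anyone following along, here is a minimal sketch (not the PoC code;
update_resource and cleanup_resource are placeholder callables) of the
ordering being discussed: run the forward traversal first and defer removal
of update-replaced resources until every node has been visited, so the old
resources are still around if a rollback is needed.

def update_stack(graph, update_resource, cleanup_resource):
    """graph: dict mapping resource name -> iterable of names it depends on
    (assumed acyclic)."""
    done, replaced = set(), []

    def visit(name):
        if name in done:
            return
        for dep in graph.get(name, ()):      # update dependencies first
            visit(dep)
        old = update_resource(name)          # returns the replaced resource, if any
        if old is not None:
            replaced.append(old)             # defer deletion; keep it for rollback
        done.add(name)

    for name in graph:                       # forward (create/update) phase
        visit(name)

    for old in replaced:                     # clean-up phase, only after success
        cleanup_resource(old)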

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron]VIF_VHOSTUSER

2015-01-09 Thread Ian Wells
Once more, I'd like to revisit the VIF_VHOSTUSER discussion [1].  I still
think this is worth getting into Nova's libvirt driver - specifically
because there's actually no way to distribute this as an extension; since
we removed the plugin mechanism for VIF drivers, it absolutely requires a
code change in the libvirt driver.  This means that there's no graceful way
of distributing an aftermarket VHOSTUSER driver for libvirt.

The standing counterargument to adding it is that nothing in the upstream
or 3rd party CI would currently test the VIF_VHOSTUSER code.  I'm not sure
that's a showstopper, given the code is zero risk to anyone when it's not
being used, and clearly is going to be experimental when it's enabled.  So,
Nova cores, would it be possible to incorporate this without a
corresponding driver in base Neutron?

Cheers,
-- 
Ian.

[1] https://review.openstack.org/#/c/96140/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Request Spec Freeze Exception For More Image Properties Support

2015-01-09 Thread Jiang, Yunhong
Hello Nova Community,
Please grant a freeze exception for the nova spec "more image 
properties support" at https://review.openstack.org/#/c/138937/ . 

The potential changes in nova are limited, affecting only the 
corresponding scheduler filters. Its purpose is to ensure and enforce image 
provider hints/recommendations (something the image provider knows best) to 
achieve optimal performance and/or meet compliance or other constraints. 

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] noVNC disabled by default?

2015-01-09 Thread Sean Dague
On 01/09/2015 06:12 PM, Solly Ross wrote:
 Hi,

 I just noticed that noVNC was disabled by default in devstack (the relevant
 change was https://review.openstack.org/#/c/140860/).

 Now, if I understand correctly (based on the short commit message), the
 rationale is that we don't want devstack to rely on non-OpenStack Git
 repos, so that devstack doesn't fail when some external Git hosting
 service (e.g. GitHub) goes down.

Realistically the policy is more about the fact that we should be using
released (and commonly available) versions of dependent software.
Ideally from packages, but definitely not from git trees. We don't want
to be testing everyone else's bleeding edge; there are lots of edges and
pointy parts in OpenStack as it is.

   
   
 This is all fine and dandy (and a decent idea, IMO), but this leaves devstack
 installing a broken installation of Horizon by default -- Horizon still
 attempts to show the noVNC console when you go to the console tab for an
 instance, which is a bit confusing, initially. Now, it wasn't particularly
 hard to track down *why* this happened (hmm... my stackrc seems to be missing
 n-novnc in ENABLED_SERVICES. Go-go-gadget `git blame`), but it strikes me as
 a bit inconsistent and inconvenient.

 Personally, I would like to see noVNC back as a default service, since it
 can be useful when trying to see what your VM is actually doing during
 boot, or if you're having network issues. Is there anything I can do
 as a noVNC maintainer to help?

 We (the noVNC team) do publish releases, and I've been trying to make
 sure that they happen in a more timely fashion. In the past, it was necessary
 to use Git master to ensure that you got the latest version (there was a
 2-year gap between 0.4 and 0.5!), but I'm trying to change that. Currently,
 it would appear that most of the distros are still using the old version (0.4),
 but versions 0.5 and 0.5.1 are up on GitHub as release tarballs (0.5 being
 3 months old and 0.5.1 having been tagged a couple of weeks ago). I will
 attempt to work with distro maintainers to get the packages updated. However,
 in the meantime, is there a place where it would be acceptable to put the
 releases so that devstack can install them?

If you rewrite the noVNC installation in devstack to work from a release
URL that includes the released version in it, I think that would be
sufficient to turn it back on. Again, ideally this should be in distros,
but I think we could work on doing release installs until then,
especially if the install process is crisp.

I am looking at the upstream release tarball right now though, and don't
see any INSTALL instructions in it. So let's see what the devstack patch
would look like to do the install.
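
As a rough illustration of the idea (not the actual devstack change, which
would be shell; the GitHub tarball URL pattern and the NOVNC_VERSION value
here are assumptions), pinning to a versioned release tarball rather than
git master could look like:

import tarfile
import urllib.request

NOVNC_VERSION = "0.5.1"
URL = "https://github.com/kanaka/noVNC/archive/v%s.tar.gz" % NOVNC_VERSION

def install_novnc(dest="/opt/stack"):
    # Download the pinned release instead of cloning git master.
    path, _ = urllib.request.urlretrieve(URL)
    with tarfile.open(path, "r:gz") as tar:
        tar.extractall(dest)   # unpacks e.g. dest/noVNC-0.5.1/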

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 22:22 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 09/01/2015 14:58, Alex Xu a écrit :



 2015-01-09 17:17 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 09/01/2015 09:01, Alex Xu a écrit :

 Hi, All

 There is a bug when running nova with ironic:
 https://bugs.launchpad.net/nova/+bug/1402658

 The case is simple: one baremetal node with 1024MB ram, then boot two
 instances with a 512MB ram flavor.
 Those two instances will be scheduled to the same baremetal node.

 The problem is that on the scheduler side the IronicHostManager will consume
 all the resources for that node regardless of how much resource the instance
 actually uses. But on the compute node side, the ResourceTracker won't
 consume resources like that; it consumes them as it would for a normal
 virtual instance. And the ResourceTracker will update the resource usage
 once the instance's resources are claimed, so the scheduler will see some
 free resources on that node and will try to schedule another new instance
 to it.

 I took a look at this: there is NumInstancesFilter, which will limit how
 many instances can be scheduled to one host. So can we just use this filter
 to reach the goal? The maximum is configured by the option
 'max_instances_per_host'; we can make the virt driver report how many
 instances it supports. The ironic driver can just report
 max_instances_per_host=1, and the libvirt driver can report
 max_instances_per_host=-1, meaning no limit. And then we can just remove the
 IronicHostManager and make the scheduler side simpler. Does that make sense,
 or are there more traps?

  Thanks in advance for any feedback and suggestion.



  Mmm, I think I disagree with your proposal. Let me explain why as best
 I can:

 tl;dr: any proposal that doesn't do the claiming at the scheduler level
 tends to be wrong

 The ResourceTracker should be only a module for providing stats about
 compute nodes to the Scheduler.
 How the Scheduler is consuming these resources for making a decision
 should only be a Scheduler thing.


  agreed, but we can't implement this for now; the reason is as you described
 below.



 Here, the problem is that the decision making is also shared with the
 ResourceTracker because of the claiming system managed by the context
 manager when booting an instance. It means that we have 2 distinct decision
 makers for validating a resource.


  Totally agreed! This is the root cause.


  Let's set realism aside for a moment and discuss what a decision could
 mean for something other than a compute node. OK, let's say a volume.
 Provided that *something* reported the volume statistics to the
 Scheduler, it would be the Scheduler that decides whether a volume
 manager can accept a volume request. There is no sense in re-validating
 the Scheduler's decision on the volume manager, beyond perhaps some
 error handling.

 We know that the current model is kinda racy with Ironic because there is
 a 2-stage validation (see [1]). I'm not in favor of making the model more
 complex, but rather of putting all the claiming logic in the scheduler,
 which is a longer path to win, but a safer one.


  Yeah, I have thought about adding the same resource consumption at the
 compute manager side, but it's ugly because we would be implementing
 ironic's resource-consuming method in two places. If we move the claiming
 into the scheduler the thing becomes easy; we can just provide some
 extension point for different consuming methods (if I understood the
 discussion in IRC right). As gantt will be a standalone service,
 validating a resource shouldn't be spread across different services. So I
 agree with you.

 But for now, as you said, this is a long-term plan. We can't provide
 different resource consumption on the compute manager side now, and we
 also can't move the claiming into the scheduler now. So the method I
 proposed is easier for now; at least we won't have different
 resource-consuming behaviour between the scheduler (IronicHostManager)
 and compute (ResourceTracker) for ironic. And ironic can work fine.

 The method I propose has a little problem. When all the nodes are
 allocated, we can still see some free resources if the flavor's resources
 are less than the baremetal's. But that can be addressed by exposing
 max_instances in the hypervisor API (running instances are already
 exposed), so the user will know why no more instances can be allocated.
 And if we can configure max_instances for each node, that sounds useful
 for operators too :)



 I think that if you don't want to wait for the claiming system to happen
 in the Scheduler, then at least you need to fix the current way of using
 the ResourceTracker, like what Jay Pipes is working on in his spec.


I'm on the same line as you guys now :)





 -Sylvain


 -Sylvain

 [1]  https://bugs.launchpad.net/nova/+bug/1341420

  Thanks
 Alex



Re: [openstack-dev] [nova] reckoning time for nova ec2 stack

2015-01-09 Thread Steven Hardy
On Fri, Jan 09, 2015 at 09:11:50AM -0500, Sean Dague wrote:
 boto 2.35.0 just released, and makes hmac-v4 authentication mandatory
 for EC2 end points (it has been optionally supported for a long time).
 
 Nova's EC2 implementation does not do this.
 
 The short term approach is to pin boto -
 https://review.openstack.org/#/c/146049/, which I think is a fine long
 term fix for stable/, but in master not supporting new boto, which
 people are likely to deploy, doesn't really seem like an option.
 
 https://bugs.launchpad.net/tempest/+bug/1408987 is the bug.
 
 I don't think shipping an EC2 API in Kilo that doesn't work with recent
 boto is a thing Nova should do. Do we have volunteers to step up and fix
 this, or do we need to get more aggressive about deprecating this interface?

I'm not stepping up to maintain the EC2 API, but the auth part of it is
very similar to heat's auth (which does support hmac-v4), so I hacked on
the nova API a bit to align with the way heat does things:

https://review.openstack.org/#/c/146124/ (WIP)

This needs some more work, but AFAICS solves the actual auth part which is
quite simply fixed by reusing some code we have in heat's ec2token middleware.

If this is used, we could extract the common parts and/or use a common auth
middleware in the future, assuming the EC2 implementation as a whole isn't
deemed unmaintained and removed, that is.
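
For anyone unfamiliar with what "hmac-v4" entails, here is a short sketch of
the signing-key derivation that AWS Signature Version 4 requires (this is
just the standard algorithm from the AWS spec, not the nova or heat code):

import hashlib
import hmac

def _hmac(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def sigv4_signing_key(secret_key, date_stamp, region, service):
    # date_stamp like '20150109', region like 'us-east-1', service 'ec2'.
    k_date = _hmac(('AWS4' + secret_key).encode('utf-8'), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, 'aws4_request')

The endpoint then HMAC-SHA256s the request's string-to-sign with this key and
compares the result against the signature boto sends.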

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Russell Bryant
On 01/09/2015 09:50 AM, Thierry Carrez wrote:
 What do you think ? Could that work ? If not, do you have alternate
 suggestions ?

This seems to incorporate more of what people have found incredibly useful
(work sessions) and organizes things in a way that accommodates the
anticipated growth in projects this cycle.  I think this suggestion
sounds like a very nice iteration on the design summit format.  Nice work!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
Hi, All

There is a bug when running nova with ironic:
https://bugs.launchpad.net/nova/+bug/1402658

The case is simple: one baremetal node with 1024MB ram, then boot two
instances with a 512MB ram flavor.
Those two instances will be scheduled to the same baremetal node.

The problem is that on the scheduler side the IronicHostManager will consume
all the resources for that node regardless of how much resource the instance
actually uses. But on the compute node side, the ResourceTracker won't consume
resources like that; it consumes them as it would for a normal virtual
instance. And the ResourceTracker will update the resource usage once the
instance's resources are claimed, so the scheduler will see some free
resources on that node and will try to schedule another new instance to it.

I took a look at this: there is NumInstancesFilter, which will limit how many
instances can be scheduled to one host. So can we just use this filter to
reach the goal? The maximum is configured by the option
'max_instances_per_host'; we can make the virt driver report how many
instances it supports. The ironic driver can just report
max_instances_per_host=1, and the libvirt driver can report
max_instances_per_host=-1, meaning no limit. And then we can just remove the
IronicHostManager and make the scheduler side simpler. Does that make sense,
or are there more traps?
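
To make the idea concrete, here is a purely illustrative sketch of such a
check (the class name and the host_state attributes are hypothetical, not the
real Nova filter API): each virt driver would report a max_instances_per_host
value and a host-level filter would enforce it.

class MaxInstancesFilter(object):
    """Reject hosts that already run as many instances as the driver allows."""

    def host_passes(self, host_state, filter_properties):
        limit = getattr(host_state, 'max_instances_per_host', -1)
        if limit < 0:
            # -1 means "no limit", which is what e.g. libvirt would report.
            return True
        # Ironic would report limit == 1, so a node with one instance is full.
        return host_state.num_instances < limit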

Thanks in advance for any feedback and suggestion.

Thanks
Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-09 Thread Thierry Carrez
Adam Gandelman wrote:
 Flavio has been actively involved in stable branch maintenance for as
 long as I can remember, but it looks like his +2 abilities were removed
 after the organizational changes made to the stable maintenance teams. 
 He has expressed interest in continuing on with general stable
 maintenance and I think his proven understanding of branch policies make
 him a valuable contributor. I propose we add him to the
 stable-maint-core team.

I just added Flavio to stable-maint-core. Welcome!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Request for comments for a possible solution

2015-01-09 Thread Mathieu Rohon
Hi Mike,

after reviewing your latest patch [1], I  think that a possible solution
could be to add a new entry in fdb RPC message.
This entry would specify whether the port is multi-bound or not.
The new fdb message would look like this :
{net_id:
    {port:
        {agent_ip:
            {mac, ip, *multi-bound*}
        }
    },
 network_type:
     vxlan,
 segment_id:
     id
}

When the multi-bound option is set, the ARP responder would be provisioned,
but the underlying module (ovs or kernel vxlan) would be provisioned to flood
the packet to every tunnel concerned by this overlay segment, and not only
the tunnel to the agent that is supposed to host the port.
In the LB world, this means not adding an fdb entry for the MAC of the
multi-bound port, whereas in the OVS world, it means not adding a flow that
sends the traffic matching the MAC of the multi-bound port to only one
tunnel port, but to every tunnel port of this overlay segment.

This way, traffic to a multi-bound port will behave like unknown unicast
traffic. The first packet will be flooded to every tunnel, and the local
bridge will learn the correct tunnel for the following packets based on
which tunnel received the answer.
Once learning occurs with the first ingress packet, the following packets
will be sent to the correct tunnel and not flooded anymore.

I've tested this with linuxbridge and it works fine. Based on a code
overview, this should work correctly with OVS too. I'll test it ASAP.

I know that the DVR team already added such a flag in RPC messages, but they
reverted it in later patches. I would be very interested in having their
opinion on this proposal.
It seems that DVR ports could also use this flag. This would result in
having the ARP responder activated for DVR ports too.

This shouldn't need a bump in RPC versioning since this flag would be
optional, so there shouldn't be any issue with backward compatibility.

Regards,

Mathieu

[1] https://review.openstack.org/#/c/141114/2
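
For illustration, a hypothetical agent-side sketch of consuming such an
optional flag (handler and callback names are made up, not the actual l2pop
code); an older server that never sends the flag still produces a valid
message, which is why no RPC version bump would be needed:

def handle_fdb_entry(entry, setup_unicast_flow, setup_flood_flow):
    mac, ip = entry['mac'], entry['ip']
    if entry.get('multi-bound', False):
        # Port is bound on several hosts (e.g. an HA router port): don't pin
        # the MAC to a single tunnel; flood and let the bridge learn from
        # the replies.
        setup_flood_flow(mac, ip)
    else:
        setup_unicast_flow(mac, ip, entry['agent_ip'])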

On Sun, Dec 21, 2014 at 12:14 PM, Narasimhan, Vivekanandan 
vivekanandan.narasim...@hp.com wrote:

 Hi Mike,

 Just one comment [Vivek]

 -Original Message-
 From: Mike Kolesnik [mailto:mkole...@redhat.com]
 Sent: Sunday, December 21, 2014 11:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Robert Kukura
 Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for
 comments for a possible solution

 Hi Mathieu,

 Comments inline

 Regards,
 Mike

 - Original Message -
  Mike,
 
  I'm not even sure that your solution works without being able to bind
  a router HA port to several hosts.
  What's happening currently is that you:
 
  1. create the router on two l3agents.
  2. those l3agents trigger sync_router() on the l3plugin.
  3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent).
  4. ML2 will bind the port to the host mentioned in the last update_port().
 
  From an l2pop perspective, this will result in creating only one tunnel,
  to the host specified last.
  I can't find any code that forces only the master router to bind
  its router port. So we don't even know if the host which binds the
  router port is hosting the master router or the slave one, and so whether
  l2pop is creating the tunnel to the master or to the slave.
 
  Can you confirm that the above sequence is correct? Or am I missing
  something?

 Are you referring to the alternative solution?

 In that case it seems that you're correct so that there would need to be
 awareness of the master router at some level there as well.
 I can't say for sure as I've been thinking on the proposed solution with
 no FDBs so there would be some issues with the alternative that need to be
 ironed out.

 
  Without the capacity to bind a port to several hosts, l2pop won't be
  able to create tunnels correctly; that's the reason why I was saying
  that a prerequisite for a smart solution would be to first fix the bug:
  https://bugs.launchpad.net/neutron/+bug/1367391
 
  DVR had the same issue. Their workaround was to create a new
  port_binding table that manages the capacity for one DVR port to be
  bound to several hosts.
  As mentioned in bug 1367391, this adds technical debt in ML2,
  which has to be tackled as a priority from my POV.

 I agree that this would simplify work but even without this bug fixed we
 can achieve either solution.

  We already have knowledge of the agents hosting a router, so this is
  completely doable without waiting for a fix for bug 1367391.

 Also from my understanding the bug 1367391 is targeted at DVR only, not at
 HA router ports.

 [Vivek]  Currently yes, but Bob's concept embraces all replicated ports
 and so HA router ports will play into it :)

 --
 Thanks,

 Vivek


 
 
  On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik mkole...@redhat.com
 wrote:
   Hi Mathieu,
  
   Thanks for the quick reply, some comments inline..
  
   Regards,
   Mike
  
   - Original Message -
   Hi mike,
  
   thanks for working on this bug :
  
 

Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-09 Thread Thierry Carrez
Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments should 
 move over to the new package. In some cases this may or may not be as 
 feasible due to requirement changes when using the new middleware package on 
 particularly old deployments (Grizzly, Havana, etc).
 
 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they want 
 the newest keystoneclient lib (regardless of their deployment release). For 
 releases older than Icehouse (EOLd) there is no way to communicate in the 
 repositories/tags a change to require keystonemiddleware.
 
 There are 2 viable options to get to where we only have one version of the 
 keystonemiddleware to maintain (which for a number of reasons, primarily 
 relating to security concerns is important).
 
 1) Work to update Icehouse to include the keystonemiddleware package for the 
 next stable release. Sometime after this stable release remove the auth_token 
 (and other middlewares) from keystoneclient. The biggest downside is this 
 adds new dependencies in an old release, which is poor for packaging and 
 deployers (making sure paste-ini is updated etc).
 
 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. This 
 is a better experience for our deployer base, but does not solve the issues 
 around solid testing with the auth_token middleware from keystoneclient 
 (except for the stable-icehouse devstack-gate jobs).
 
 I am looking for insight, preferences, and other options from the community 
 and the TC. I will propose this topic for the next TC meeting so that we can 
 have a clear view on how to handle this in the most appropriate way that 
 imparts the best balance between maintainability, security, and experience 
 for the OpenStack providers, deployers, and users.

This is probably a very dumb question, but could you explain why
keystoneclient.middleware can't map to keystonemiddleware functions
(adding keystonemiddleware as a dependency of future keystoneclient)? At
first glance that would allow us to remove dead duplicated code while
ensuring compatibility for as long as we need to support those old
releases...
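
For what it's worth, a purely illustrative sketch of the kind of mapping
being asked about: a thin shim at the old import path that delegates to
keystonemiddleware. Whether the dependency direction actually allows this is
exactly the open question; filter_factory here just mirrors the usual paste
entry point rather than any agreed design.

import warnings

from keystonemiddleware import auth_token as _new_auth_token


def filter_factory(global_conf, **local_conf):
    warnings.warn('keystoneclient.middleware.auth_token is deprecated; '
                  'use keystonemiddleware.auth_token instead',
                  DeprecationWarning)
    return _new_auth_token.filter_factory(global_conf, **local_conf)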

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The scope of OpenStack wiki [all]

2015-01-09 Thread Thierry Carrez
Stefano Maffulli wrote:
 The wiki served for many years the purpose of 'poor man CMS' when we
 didn't have an easy way to collaboratively create content. So the wiki
 ended up hosting pages like 'Getting started with OpenStack', demo
 videos, How to contribute, mission, to document our culture / shared
 understandings (4 opens, release cycle, use of blueprints, stable branch
 policy...), to maintain the list of Programs, meetings/teams, blueprints
 and specs, lots of random documentation and more.
 
 Lots of the content originally placed on the wiki was there because
 there was no better place. Now that we have more mature content and
 processes, these are finding their way out of the wiki like: 
 
   * http://governance.openstack.org
   * http://specs.openstack.org
   * http://docs.openstack.org/infra/manual/
 
 Also, the Introduction to OpenStack is maintained on
 www.openstack.org/software/ together with introductory videos and other
 basic material. A redesign of openstack.org/community and the new portal
 groups.openstack.org are making even more wiki pages obsolete.
 
 This makes the wiki very confusing to newcomers and more likely to host
 conflicting information.

One of the issues here is that the wiki also serves as a default
starting page for all things not on www.openstack.org (its main page
is a list of relevant links). So at the same time we are moving
authoritative content out of the wiki to more appropriate,
version-controlled and peer-reviewed sites, we are still relying on the
wiki as a reference catalog or starting point to find those more
appropriate sites. That is IMHO what creates the confusion on where the
authoritative content actually lives.

So we also need to revisit how to make navigation between the various
web properties of OpenStack more seamless and discoverable, so that we
don't rely on the wiki starting page for that important role.

 I would propose to restrict the scope of the wiki to things that
 anything that don't need or want to be peer-reviewed. Things like:
 
   * agendas for meetings, sprints, etc
   * list of etherpads for summits
   * quick prototypes of new programs (mentors, upstream training) before
 they find a stable home (which can still be the wiki)

+1 -- I agree on the end goal... Use the wiki a bit like we use
etherpads or pastebins, and have more appropriate locations for all of
our reference information. It will take some time but we should move
toward that.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-09 Thread Steven Hardy
On Fri, Jan 09, 2015 at 04:07:51PM +1000, Angus Salkeld wrote:
On Fri, Jan 9, 2015 at 3:22 PM, Murugan, Visnusaran
visnusaran.muru...@hp.com wrote:
 
  Steve,
 
  My reasoning to have a '--continue'-like functionality was to run it
  as a periodic task and substitute for a continuous observer for now.
 
I am not in favor of the --continue as an API. I'd suggest responding to
resource timeouts and if there is no response from the task, then re-start
(continue)
the task.

I agree, the --continue API seems unnecessary.

I realized however that my initial remarks were a little unfair, because if
an engine dies, you can't necessarily restart the failed action via
stack-update, because we're in an unknown state.

So what would be useful is persisting sufficient per-resource state that
the following workflow becomes possible:

- User initiates stack-create
- Engine handling the create dies or is killed/restarted
- Another engine detects the failure and puts the stack into a FAILED
  state, or the user has to wait for a lock timeout which does that.
- User can do heat stack-update to continue the failed create

Obviously it'd be best long term if the user-visible restart could be
handled automatically, but the above would be a good first step.
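
As a purely illustrative sketch (field and helper names are hypothetical,
not Heat's actual schema), "sufficient per-resource state" could be as
little as a small record checkpointed at each transition, so that another
engine or a later stack-update can tell what was in flight:

import json

def checkpoint_resource(store, stack_id, name, action, status, physical_id):
    # Persist enough to resume or roll back a half-done action:
    # which resource, what we were doing, how far we got, and the backend id.
    record = {
        'stack_id': stack_id,
        'name': name,
        'action': action,                      # e.g. 'CREATE'
        'status': status,                      # 'IN_PROGRESS', 'COMPLETE', 'FAILED'
        'physical_resource_id': physical_id,   # None until the backend returns one
    }
    store.put('resource/%s/%s' % (stack_id, name), json.dumps(record))
    return record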

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Thierry Carrez
Maru Newby wrote:
 On Jan 8, 2015, at 3:54 PM, Sean Dague s...@dague.net wrote:

 The crux of it comes from the fact that the operator voice (especially
 those folks with large nova-network deploys) wasn't represented there.
 Once we got back from the mid-cycle and brought it to the list, there
 was some very understandable push back on deprecating without a
 migration plan.
 
 I think it’s clear that a migration plan is required.  An automated 
 migration, not so much.

The solution is not black or white.

Yes, operators would generally prefer an instant, automated, no-downtime
hot migration that magically moves them to the new world order. Yes,
developers would generally prefer to just document a general cold
procedure that operators could follow to migrate, warning that their
mileage may vary.

The trade-off solution we came up with last cycle is to have developers
and operators converge on a clear procedure with reasonable/acceptable
downtime, potentially assisted by new features and tools. It's really
not a us vs. them thing. It's a collaborative effort where operators
agree on what level of pain they can absorb and developers help to
reduce that pain wherever reasonably possible.

This convergence effort is currently rebooted because it has stalled. We
still need to agree on the reasonable trade-off procedure. We still need
to investigate if there is any tool or simple feature we can add to
Neutron or Nova to make some parts of that procedure easier and less
painful.

So we are not putting the magic upgrade pony requirement back on the
table. We are just rebooting the effort to come to a reasonable solution
for everyone.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] ironic-discoverd status update

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 08:43 AM, Jerry Xinyu Zhao wrote:

tuskar-ui is supposed to enroll nodes into ironic.

Right. And it has support for discoverd IIRC.



On Thu, Jan 8, 2015 at 4:36 AM, Zhou, Zhenzan zhenzan.z...@intel.com wrote:

Sounds like we could add something new to automate the enrollment of
new nodes:-)
Collecting IPMI info into a csv file is still a trivial job...

BR
Zhou Zhenzan

-Original Message-
From: Dmitry Tantsur [mailto:dtant...@redhat.com]
Sent: Thursday, January 8, 2015 5:19 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update

On 01/08/2015 06:48 AM, Kumar, Om (Cloud OS RD) wrote:
  My understanding of discovery was to get all details for a node
and then register that node to ironic. i.e. Enrollment of the node
to ironic. Pardon me if it was out of line with your understanding
of discovery.
That's why we agreed to use terms inspection/introspection :) sorry
for not being consistent here (name 'discoverd' is pretty old and
hard to change).

discoverd does not enroll nodes. While possible, I'm somewhat
resistant to making it do enrollment, mostly because I want it to be a
user-controlled process.

 
  What I understand from the below mentioned spec is that the Node
is registered, but the spec will help ironic discover other
properties of the node.
that's what discoverd does currently.

 
  -Om
 
  -Original Message-
  From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: 07 January 2015 20:20
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status update
 
  On 01/07/2015 03:44 PM, Matt Keenan wrote:
  On 01/07/15 14:24, Kumar, Om (Cloud OS RD) wrote:
  If it's a separate project, can it be extended to perform out
of band
  discovery too..? That way there will be a single service to perform
  in-band as well as out of band discoveries.. May be it could follow
  driver framework for discovering nodes, where one driver could be
  native (in-band) and other could be iLO specific etc...
 
 
  I believe the following spec outlines plans for out-of-band
discovery:
  https://review.openstack.org/#/c/100951/
  Right, so Ironic will have drivers, one of which (I hope) will be
a driver for discoverd.
 
 
  No idea what the progress is with regard to implementation
within the
  Kilo cycle though.
  For now we hope to get it merged in K.
 
 
  cheers
 
  Matt
 
  Just a thought.
 
  -Om
 
  -Original Message-
   From: Dmitry Tantsur [mailto:dtant...@redhat.com]
   Sent: 07 January 2015 14:34
   To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 
  On 01/07/2015 09:58 AM, Zhou, Zhenzan wrote:
  So is it possible to just integrate this project into ironic?
I mean
  when you create an ironic node, it will start discover in the
  background. So we don't need two services?
  Well, the decision on the summit was that it's better to keep it
  separate. Please see https://review.openstack.org/#/c/135605/ for
  details on future interaction between discoverd and Ironic.
 
  Just a thought, thanks.
 
  BR
  Zhou Zhenzan
 
  -Original Message-
   From: Dmitry Tantsur [mailto:dtant...@redhat.com]
   Sent: Monday, January 5, 2015 4:49 PM
   To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Ironic] ironic-discoverd status
update
 
  On 01/05/2015 09:31 AM, Zhou, Zhenzan wrote:
  Hi, Dmitry
 
  I think this is a good project.
  I got one question: what is the relationship with
ironic-python-agent?
  Thanks.
  Hi!
 
  No relationship right now, but I'm hoping to use IPA as a base for
  introspection ramdisk in the (near?) future.
 
  BR
  Zhou Zhenzan
 
  -Original Message-
   From: Dmitry Tantsur [mailto:dtant...@redhat.com]
  Sent: Thursday, December 11, 2014 10:35 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Ironic] ironic-discoverd status update
 
  Hi all!
 
  As you know I actively promote ironic-discoverd project [1]
as one
  of the means to do hardware inspection for Ironic (see e.g. spec
  [2]), so I decided 

Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Jesse Pretorius
On 9 January 2015 at 02:57, Tom Fifield t...@openstack.org wrote:

 On 09/01/15 08:06, Maru Newby wrote:
  The fact that operators running nova-network would like the upstream
 community to pay for implementing an automated migration solution for them
 is hardly surprising.  It is less clear to me that implementing such a
 solution, with all the attendant cost and risks, should take priority over
 efforts that benefit a broader swath of the community.  Are the operators
 in question so strapped for resources that they are not able to automate
 their migrations themselves, provided a sufficiently detailed plan to do so?

 This effort does benefit a broad swath of the community.


Also, as I recall, CERN and others who you may consider more along the
lines of ops, rather than devs, are committing resources to this. It's just
that those resources are not core devs in either nova or neutron - that's
why Anita was seeking champions in those two areas to assist with moving
things along and with their knowledge of the code-base.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: [devstack] Openstack installation issue.

2015-01-09 Thread Abhishek Shrivastava
Hi Liuxinguo,

Thanks for the suggestion, I'll try and make it work.

On Fri, Jan 9, 2015 at 1:24 PM, liuxinguo liuxin...@huawei.com wrote:

  Hi Abhishek,



 For the error in the first line:

 “mkdir: cannot create directory `/logs': Permission denied”

 and the error at the end:

 “ln: failed to create symbolic link `/logs/screen/screen-key.log': No such
 file or directory”



 The stack user does not have permission on “/”, so it cannot create the
 directory `/logs'.



 Please check the permission.



 liu



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* 9 January 2015 15:26
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] [devstack] Openstack installation issue.



 Hi,



 I'm trying to install *OpenStack* through *devstack master* on my *Ubuntu
 12.04 VM*, but it is failing and generating the following error.



 If anyone can help me resolving this issue please do reply.



 --

 *Thanks & Regards,*

 *Abhishek*

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

*Thanks & Regards,*
*Abhishek*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Request for comments for a possible solution

2015-01-09 Thread Assaf Muller


- Original Message -
 Hi Mike,
 
 after reviewing your latest patch [1], I  think that a possible solution
 could be to add a new entry in fdb RPC message.
 This entry would specify whether the port is multi-bound or not.
 The new fdb message would look like this :
 {net_id:
     {port:
         {agent_ip:
             {mac, ip, multi-bound}
         }
     },
  network_type:
      vxlan,
  segment_id:
      id
 }
 
 When the multi-bound option is set, the ARP responder would be provisioned,
 but the underlying module (ovs or kernel vxlan) would be provisioned to
 flood the packet to every tunnel concerned by this overlay segment, and not
 only the tunnel to the agent that is supposed to host the port.
 In the LB world, this means not adding an fdb entry for the MAC of the
 multi-bound port, whereas in the OVS world, it means not adding a flow that
 sends the traffic matching the MAC of the multi-bound port to only one
 tunnel port, but to every tunnel port of this overlay segment.
 
 This way, traffic to a multi-bound port will behave like unknown unicast
 traffic. The first packet will be flooded to every tunnel, and the local
 bridge will learn the correct tunnel for the following packets based on
 which tunnel received the answer.
 Once learning occurs with the first ingress packet, the following packets
 will be sent to the correct tunnel and not flooded anymore.
 
 I've tested this with linuxbridge and it works fine. Based on a code
 overview, this should work correctly with OVS too. I'll test it ASAP.
 
 I know that the DVR team already added such a flag in RPC messages, but they
 reverted it in later patches. I would be very interested in having their
 opinion on this proposal.
 It seems that DVR ports could also use this flag. This would result in
 having the ARP responder activated for DVR ports too.
 
 This shouldn't need a bump in RPC versioning since this flag would be
 optional, so there shouldn't be any issue with backward compatibility.
 

Mike and I discussed this idea, and our concern was backwards compatibility
because we *need* a solution that could be backported to Juno. If we can
still backport this kind of solution, since as you say an optional new
parameter is indeed backward compatible, and it solves LB as well, that's
a pretty big win!

 Regards,
 
 Mathieu
 
 [1] https://review.openstack.org/#/c/141114/2
 
 On Sun, Dec 21, 2014 at 12:14 PM, Narasimhan, Vivekanandan 
 vivekanandan.narasim...@hp.com  wrote:
 
 
 Hi Mike,
 
 Just one comment [Vivek]
 
 -Original Message-
 From: Mike Kolesnik [mailto: mkole...@redhat.com ]
 Sent: Sunday, December 21, 2014 11:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Robert Kukura
 Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for
 comments for a possible solution
 
 Hi Mathieu,
 
 Comments inline
 
 Regards,
 Mike
 
 - Original Message -
  Mike,
  
  I'm not even sure that your solution works without being able to bind
  a router HA port to several hosts.
  What's happening currently is that you :
  
  1.create the router on two l3agent.
  2. those l3agent trigger the sync_router() on the l3plugin.
  3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent).
  4. ML2 will bind the port to the host mentioned in the last update_port().
  
  From a l2pop perspective, this will result in creating only one tunnel
  to the host lastly specified.
  I can't find any code that forces that only the master router binds
  its router port. So we don't even know if the host which binds the
  router port is hosting the master router or the slave one, and so if
  l2pop is creating the tunnel to the master or to the slave.
  
  Can you confirm that the above sequence is correct? or am I missing
  something?
 
 Are you referring to the alternative solution?
 
 In that case it seems that you're correct so that there would need to be
 awareness of the master router at some level there as well.
 I can't say for sure as I've been thinking on the proposed solution with no
 FDBs so there would be some issues with the alternative that need to be
 ironed out.
 
  
  Without the capacity to bind a port to several hosts, l2pop won't be
  able to create tunnel correctly, that's the reason why I was saying
  that a prerequisite for a smart solution would be to first fix the bug
  :
  https://bugs.launchpad.net/neutron/+bug/1367391
  
  DVR Had the same issue. Their workaround was to create a new
  port_binding tables, that manages the capacity for one DVR port to be
  bound to several host.
  As mentioned in the bug 1367391, this adding a technical debt in ML2,
  which has to be tackle down in priority from my POV.
 
 I agree that this would simplify work but even without this bug fixed we can
 achieve either solution.
 
 We have already knowledge of the agents hosting a router so this is
 completely doable without waiting for fix for bug 1367391.
 
 Also from my understanding the bug 1367391 is targeted at DVR only, not at 

Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Tim Bell

Let's ask the operators' opinions too on the openstack-operators mailing list. 
There was some duplication between the tracks during the summit, but there is 
also a significant operator need outside the pure code area which comes along 
with the big tent tagging for projects. We need to make sure that we reserve 
time to focus on operator needs for

- Packaging
- Monitoring
- Automation
- Configuration
- …

These are areas which are not pure code development and deliverables in the 
classic OpenStack project sense but are pre-reqs for any production deployment.

The cells and nova-network to Neutron migration sessions were good examples of 
how we can agree on the best way forward with shared effort.

For me, the key part is making sure the right combinations of people are 
available in the right sessions (and ideally key topics are discussed in unique 
sessions). I think we're getting very close as we've been doing much mutual 
design in the past couple of summits/mid-cycle meet ups and subsequent *-specs 
reviews.

Tim

On 9 Jan 2015, at 18:57, Jay Pipes jaypi...@gmail.com wrote:

 Huge +1 from me. Thank you, Thierry.
 
 -jay
 
 On 01/09/2015 09:50 AM, Thierry Carrez wrote:
 Hi everyone,
 
 The OpenStack Foundation staff is considering a number of changes to the
 Design Summit format for Vancouver, changes on which we'd very much like
 to hear your feedback.
 
 The problems we are trying to solve are the following:
 - Accommodate the needs of more OpenStack projects
 - Reduce separation and perceived differences between the Ops Summit and
 the Design/Dev Summit
 - Create calm and less-crowded spaces for teams to gather and get more
 work done
 
 While some sessions benefit from large exposure, loads of feedback and
 large rooms, some others are just workgroup-oriented work sessions that
 benefit from smaller rooms, less exposure and more whiteboards. Smaller
 rooms are also cheaper space-wise, so they allow us to scale more easily
 to a higher number of OpenStack projects.
 
 My proposal is the following. Each project team would have a track at
 the Design Summit. Ops feedback is in my opinion part of the design of
 OpenStack, so the Ops Summit would become a track within the
 forward-looking Design Summit. Tracks may use two separate types of
 sessions:
 
 * Fishbowl sessions
 Those sessions are for open discussions where a lot of participation and
 feedback is desirable. Those would happen in large rooms (100 to 300
 people, organized in fishbowl style with a projector). Those would have
 catchy titles and appear on the general Design Summit schedule. We would
 have space for 6 or 7 of those in parallel during the first 3 days of
 the Design Summit (we would not run them on Friday, to reproduce the
 successful Friday format we had in Paris).
 
 * Working sessions
 Those sessions are for a smaller group of contributors to get specific
 work done or prioritized. Those would happen in smaller rooms (20 to 40
 people, organized in boardroom style with loads of whiteboards). Those
 would have a blanket title (like infra team working session) and
 redirect to an etherpad for more precise and current content, which
 should limit out-of-team participation. Those would replace project
 pods. We would have space for 10 to 12 of those in parallel for the
 first 3 days, and 18 to 20 of those in parallel on the Friday (by
 reusing fishbowl rooms).
 
 Each project track would request some mix of sessions (We'd like 4
 fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
 Friday) and the TC would arbitrate how to allocate the limited
 resources. Agenda for the fishbowl sessions would need to be published
 in advance, but agenda for the working sessions could be decided
 dynamically from an etherpad agenda.
 
 By making larger use of smaller spaces, we expect that setup to let us
 accommodate the needs of more projects. By merging the two separate Ops
 Summit and Design Summit events, it should make the Ops feedback an
 integral part of the Design process rather than a second-class citizen.
 By creating separate working session rooms, we hope to evolve the pod
 concept into something where it's easier for teams to get work done
 (less noise, more whiteboards, clearer agenda).
 
 What do you think ? Could that work ? If not, do you have alternate
 suggestions ?
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Patrick East
Hi Eduard,

I am going through the same process of setting up a CI with the same
instructions/tools (migrating from one using jaypipes instructions w/
static slaves). What I learned the other day in IRC is that the gearman
plugin will only register the job if there are build slaves with labels
that can run the jobs. Make sure that your nodepool nodes are being created
and attached to jenkins correctly; if you only see instances with
"template" in the name, my understanding is that those are used for snapshot
images and are not the actual build nodes.

On a related note, I am having issues with the ssh keys. Nodepool is able
to log in to the node to set up the template and create an image from it,
but then fails to log in to a build node. Have you run into any issues with
that?

-Patrick

On Fri, Jan 9, 2015 at 5:14 AM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 Hi all,
 Back with the same error.
 Did a complete (clean) install based on rasselin's tutorial, now i have a
 working jenkins master + a dedicated cloud provider.
 Testing with noop looks ok, but dsvm-tempest-full returns NOT_REGISTERED.

 Here is some debug.log:

 2015-01-09 14:08:06,772 DEBUG zuul.IndependentPipelineManager: Found job
 dsvm-tempest-full for change Change 0x7f8db86278d0 139585,15
 2015-01-09 14:08:06,773 INFO zuul.Gearman: Launch job dsvm-tempest-full
 for change Change 0x7f8db86278d0 139585,15 with dependent changes []
 2015-01-09 14:08:06,773 DEBUG zuul.Gearman: Custom parameter function used
 for job dsvm-tempest-full, change: Change 0x7f8db86278d0 139585,15,
 params: {'BASE_LOG_PATH': '85/139585/15/check', 'ZUUL_PIPELINE': 'check',
 'OFFLINE_NODE_WHEN_COMPLETE': '1', 'ZUUL_UUID':
 'fa4ca39e02b14d1d864725441e301eb0', 'LOG_PATH':
 '85/139585/15/check/dsvm-tempest-full/fa4ca39', 'ZUUL_CHANGE_IDS':
 u'139585,15', 'ZUUL_PATCHSET': '15', 'ZUUL_BRANCH': u'master', 'ZUUL_REF':
 u'refs/zuul/master/Z4efb72c817fb4ab39b67eb93fa8177ea', 'ZUUL_COMMIT':
 u'97c142345b12bdf6a48c89b00f0d4d7811ce4a55', 'ZUUL_URL': u'
 http://10.100.128.3/p/', 'ZUUL_CHANGE': '139585', 'ZUUL_CHANGES':
 u'openstack-dev/sandbox:master:refs/changes/85/139585/15', 'ZUUL_PROJECT':
 'openstack-dev/sandbox'}
 ...
 2015-01-09 14:08:06,837 DEBUG zuul.Gearman: Function
 build:dsvm-tempest-full is not registered
 2015-01-09 14:08:06,837 ERROR zuul.Gearman: Job gear.Job 0x7f8db16e5590
 handle: None name: build:dsvm-tempest-full unique:
 fa4ca39e02b14d1d864725441e301eb0 is not registered with Gearman
 2015-01-09 14:08:06,837 INFO zuul.Gearman: Build gear.Job 0x7f8db16e5590
 handle: None name: build:dsvm-tempest-full unique:
 fa4ca39e02b14d1d864725441e301eb0 complete, result NOT_REGISTERED
 2015-01-09 14:08:06,837 DEBUG zuul.Scheduler: Adding complete event for
 build: Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on
 Worker Unknown
 2015-01-09 14:08:06,837 DEBUG zuul.Scheduler: Done adding complete event
 for build: Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on
 Worker Unknown
 2015-01-09 14:08:06,837 DEBUG zuul.IndependentPipelineManager: Adding
 build Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on
 Worker Unknown of job dsvm-tempest-full to item QueueItem
 0x7f8db17ba310 for Change 0x7f8db86278d0 139585,15 in check

 So it seems that Zuul sees the job, but Gearman reports it is not
 registered.

 Any idea how to register it? I see it in Jenkins GUI.
 The only warning I see in the Jenkins GUI is:
 There’s no slave/cloud that matches this assignment. Did you mean ‘master’
 instead of ‘devstack_slave’?

 On the cloud provider GUI I see instances with names like
 (d-p-c-TIMESTAMP.template.openstack.org) spawning, running, and some
 deleting.

 Thanks,

 Eduard


 On Tue, Jan 6, 2015 at 7:29 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Gearman worker threads is what is needed to actually run the job. You
 need to type ‘status’ to get the results. It shouldn’t be empty since you
 stated the job actually ran (and failed tempest)

 Publishing the result is controlled here in the zuul layout.yaml file
 [1]. Make sure you’re not using the ‘silent’ pipeline which (obviously)
 won’t publish the result. Manual is here [2]

 You’ll need a log server to host the uploaded log files. You can set one
 up like –infra’s using this [3] or WIP [4]



 Ramy



 [1]
 https://github.com/rasselin/os-ext-testing-data/blob/master/etc/zuul/layout.yaml#L22

 [2] http://ci.openstack.org/zuul/index.html

 [3]
 https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_log_server.sh

 [4] https://review.openstack.org/#/c/138913/



 *From:* Punith S [mailto:punit...@cloudbyte.com]
 *Sent:* Monday, January 05, 2015 10:22 PM
 *To:* Asselin, Ramy
 *Cc:* Eduard Matei; OpenStack Development Mailing List (not for usage
 questions)

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need
 help setting up CI



 thanks ramy :)

 I have set up the CI, but our dsvm-tempest-full job is failing due to
 some failures when running tempest.
 but 

Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Sean Roberts
Inline

~sean

 On Jan 9, 2015, at 9:30 AM, Thierry Carrez thie...@openstack.org wrote:
 
 sean roberts wrote:
 I like it. Thank you for coming up with improvements to the
 summit planning. One caveat on the definition of project for summit
 space. Which projects get considered for space is always difficult. Who
 is going to fill the rooms they request or are they going to have them
 mostly empty? I'm sure the TC can figure it out by looking at the number
 of contributors or something like that. I would, however, like to know a
 bit more about your plan for this specific part of the proposal sooner
 rather than later.
 
 That would be any OpenStack project, with the project structure reform
 hopefully completed by then. That would likely let projects that were
 previously in the other projects track have time to apply and to be
 considered full Design Summit citizens. The presence of a busy other
 projects track to cover for unofficial projects in previous summits
 really was an early sign that something was wrong with our definition of
 OpenStack projects anyway :)
Got it. This is going in the right direction. 

 
 Now I expect the TC to split the limited resources following metrics
 like team size and development activity. Small projects might end up
 having just a couple sessions in mostly-empty rooms, yes... still better
 than not giving space at all.
Agreed. I don't want to starve innovation at the summits. 

 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-09 Thread Morgan Fainberg

 On Jan 9, 2015, at 5:28 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Dean Troyer wrote:
 On Fri, Jan 9, 2015 at 4:22 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
This is probably a very dumb question, but could you explain why
keystoneclient.middleware can't map to keystonemiddleware functions
(adding keystonemiddleware as a dependency of future keystoneclient)? At
first glance that would allow to remove dead duplicated code while
ensuring compatibility for as long as we need to support those old
releases...
 
  Part of the reason for moving keystonemiddleware out of
  keystoneclient was to do the reverse: have a keystoneclient install
 NOT bring in auth_token.  I doubt there will be anything other than
 servers that need keystonemiddleware installed whereas quite a few
 clients will not want it at all.
 
 Sure, that should clearly still be the end goal... The idea would be to
 keep deprecated functions in the client lib until we consider those
 releases that need them truly dead. Not saying it's the best option
 ever, was just curious why it was not listed in the proposed options :)
 
 I'm on the fence about changing stable requirements...if we imagine
 keystonemiddleware is not an OpenStack project this wouldn't be the
 first time we've had to do that when things change out from under us. 
 But I hate doing that to ourselves...
 
 This is not about changing stable requirements: havana servers would
 still depend on python-keystoneclient like they always did. If you use
 an old version of that you are covered, and if you use a new version of
 that, *that* would pull keystonemiddleware as a new requirement. So this
 is about temporarily changing future keystoneclient requirements to
 avoid double maintenance of code while preserving compatibility.
 

There is a simple and more technical reason keystonemiddleware cannot be 
imported by keystoneclient: It would be a circular dependency. 
Keystonemiddleware makes use of keystoneclient cms (PKI token 
encoding/decoding) code, the keystoneclient Session object, and the 
auth_plugins. Simply put, without further splitting apart keystoneclient into 
sub-modules (again requiring stable changes, and a lot more upheaval), the 
libraries do not lend themselves to cross import.

Dean’s point is accurate, the goal is to make it so that keystoneclient doesn’t 
need to pull in all of the server requirements that middleware needs to run 
(the whole reason for the split).

Cheers,
—Morgan
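
To make the rejected option concrete: the compatibility shim Thierry asked about would amount to something like the sketch below (purely illustrative, not code from either library), and the circular import Morgan describes is exactly what kills it, because keystonemiddleware itself imports keystoneclient for the Session object, CMS code and auth plugins.

    # Hypothetical shim inside keystoneclient.middleware.auth_token
    # (illustration only; never proposed as an actual patch): re-export the
    # maintained implementation from the new package. It fails in practice
    # because keystonemiddleware already imports keystoneclient, so the two
    # packages would then import each other.
    try:
        from keystonemiddleware.auth_token import AuthProtocol    # noqa: F401
        from keystonemiddleware.auth_token import filter_factory  # noqa: F401
    except ImportError:
        raise ImportError(
            "keystoneclient.middleware.auth_token has moved to the "
            "keystonemiddleware package; please install it")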


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Request for Spec Freeze Exception for: Quota Management in Nested Projects

2015-01-09 Thread Sajeesh Cimson Sasi
Hi
   We'd like to ask for a spec freeze exception for the blueprint on
Quota Management in Nested Projects; kindly see:
https://review.openstack.org/#/c/129420/
The required keystone related bits are already in Kilo, and this proposal will 
allow people to actually exploit this new feature and make it useful. Nested 
projects are very important for large organizations like CERN who are waiting 
for this code to be released, and many people have already given a +1 to this 
proposal.
The code is ready and can be provided immediately. The nested quota driver is
built by extending the current DbQuotaDriver, and it can support one to N levels
of projects. Therefore, there is no issue with backward compatibility. It can
work with hierarchical as well as non-hierarchical projects.
Kindly consider this blueprint for inclusion in Kilo.

  best regards,
   Sajeesh
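
As a purely illustrative sketch of the approach described above (extending nova's existing DbQuotaDriver), the skeleton below shows the general shape; the parent-project helper is a made-up placeholder rather than the proposed code, which lives in the review linked above.

    # Illustrative skeleton only: a nested quota driver layered on nova's
    # existing DbQuotaDriver. The parent-project helper is hypothetical; in
    # the real proposal that information comes from keystone's hierarchical
    # projects support.
    from nova import quota


    class NestedDbQuotaDriver(quota.DbQuotaDriver):

        def _parent_project_id(self, context, project_id):
            # Hypothetical placeholder for a lookup of the parent project in
            # the keystone hierarchy.
            raise NotImplementedError

        def get_project_quotas(self, context, resources, project_id, **kwargs):
            # Delegate to the flat driver; a real nested driver would also
            # account for quota delegated to or consumed by sub-projects.
            return super(NestedDbQuotaDriver, self).get_project_quotas(
                context, resources, project_id, **kwargs)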
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack-dev topics now work correctly

2015-01-09 Thread Stefano Maffulli
Dear all,

if you've tried the topics on this mailing list and haven't received
emails, well... we had a problem on our side: the topics were not set up
correctly.

Luigi Toscano helped isolate the problem and point to the solution[1].
He noticed that only the QA topic was working, and that's the only one
defined with a single regular expression, while all the others used
multi-line regexps.

I corrected the regexp as described in the mailman FAQ and tested that
the delivery works correctly. If you want to subscribe only to some
topics now you can. Thanks again to Luigi for the help. 

Cheers,
stef

[1] http://wiki.list.org/pages/viewpage.action?pageId=8683547
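
For illustration, the consolidation described above boils down to expressing a topic as a single-line alternation instead of a multi-line pattern; Mailman matches topic regexps against the Subject: and Keywords: headers. The tag names below are examples only, not the actual list topics.

    # Illustrative only: a single-line alternation of the kind that works for
    # Mailman topic filters (the tags are examples, not the real topics).
    import re

    single_line_pattern = r"\[(nova|neutron|cinder)\]"

    subject = "[openstack-dev] [nova] Request for Spec Freeze Exception"
    print(bool(re.search(single_line_pattern, subject)))  # True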



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Michael Dorman
(X-posted to -operators.)

Any thoughts on how the ops track spaces would be requested, since there 
is not a real ‘operators project’, PTL, etc.?

I assume this would come from the operators group as a whole, so probably 
something we should put on the agenda at the ops meet up in March.  (I’ve 
added it to the etherpad.)

Mike





On 1/9/15, 2:50 PM, Thierry Carrez thie...@openstack.org wrote:

Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the
Design Summit format for Vancouver, changes on which we'd very much like
to hear your feedback.

The problems we are trying to solve are the following:
- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and
the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more
work done

While some sessions benefit from large exposure, loads of feedback and
large rooms, some others are just workgroup-oriented work sessions that
benefit from smaller rooms, less exposure and more whiteboards. Smaller
rooms are also cheaper space-wise, so they allow us to scale more easily
to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at
the Design Summit. Ops feedback is in my opinion part of the design of
OpenStack, so the Ops Summit would become a track within the
forward-looking Design Summit. Tracks may use two separate types of
sessions:

* Fishbowl sessions
Those sessions are for open discussions where a lot of participation and
feedback is desirable. Those would happen in large rooms (100 to 300
people, organized in fishbowl style with a projector). Those would have
catchy titles and appear on the general Design Summit schedule. We would
have space for 6 or 7 of those in parallel during the first 3 days of
the Design Summit (we would not run them on Friday, to reproduce the
successful Friday format we had in Paris).

* Working sessions
Those sessions are for a smaller group of contributors to get specific
work done or prioritized. Those would happen in smaller rooms (20 to 40
people, organized in boardroom style with loads of whiteboards). Those
would have a blanket title (like infra team working session) and
redirect to an etherpad for more precise and current content, which
should limit out-of-team participation. Those would replace project
pods. We would have space for 10 to 12 of those in parallel for the
first 3 days, and 18 to 20 of those in parallel on the Friday (by
reusing fishbowl rooms).

Each project track would request some mix of sessions (We'd like 4
fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
Friday) and the TC would arbitrate how to allocate the limited
resources. Agenda for the fishbowl sessions would need to be published
in advance, but agenda for the working sessions could be decided
dynamically from an etherpad agenda.

By making larger use of smaller spaces, we expect that setup to let us
accommodate the needs of more projects. By merging the two separate Ops
Summit and Design Summit events, it should make the Ops feedback an
integral part of the Design process rather than a second-class citizen.
By creating separate working session rooms, we hope to evolve the pod
concept into something where it's easier for teams to get work done
(less noise, more whiteboards, clearer agenda).

What do you think ? Could that work ? If not, do you have alternate
suggestions ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread sean roberts
I like it. Thank you for coming up with improvements to the
summit planning. One caveat on the definition of project for summit
space. Which projects get considered for space is always difficult. Who is
going to fill the rooms they request or are they going to have them mostly
empty? I'm sure the TC can figure it out by looking at the number of
contributors or something like that. I would however, like to know a bit
more of your plan for this specific part of the proposal sooner than
later.

On Friday, January 9, 2015, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,

 The OpenStack Foundation staff is considering a number of changes to the
 Design Summit format for Vancouver, changes on which we'd very much like
 to hear your feedback.

 The problems we are trying to solve are the following:
 - Accommodate the needs of more OpenStack projects
 - Reduce separation and perceived differences between the Ops Summit and
 the Design/Dev Summit
 - Create calm and less-crowded spaces for teams to gather and get more
 work done

 While some sessions benefit from large exposure, loads of feedback and
 large rooms, some others are just workgroup-oriented work sessions that
 benefit from smaller rooms, less exposure and more whiteboards. Smaller
 rooms are also cheaper space-wise, so they allow us to scale more easily
 to a higher number of OpenStack projects.

 My proposal is the following. Each project team would have a track at
 the Design Summit. Ops feedback is in my opinion part of the design of
 OpenStack, so the Ops Summit would become a track within the
 forward-looking Design Summit. Tracks may use two separate types of
 sessions:

 * Fishbowl sessions
 Those sessions are for open discussions where a lot of participation and
 feedback is desirable. Those would happen in large rooms (100 to 300
 people, organized in fishbowl style with a projector). Those would have
 catchy titles and appear on the general Design Summit schedule. We would
 have space for 6 or 7 of those in parallel during the first 3 days of
 the Design Summit (we would not run them on Friday, to reproduce the
 successful Friday format we had in Paris).

 * Working sessions
 Those sessions are for a smaller group of contributors to get specific
 work done or prioritized. Those would happen in smaller rooms (20 to 40
 people, organized in boardroom style with loads of whiteboards). Those
 would have a blanket title (like infra team working session) and
 redirect to an etherpad for more precise and current content, which
 should limit out-of-team participation. Those would replace project
 pods. We would have space for 10 to 12 of those in parallel for the
 first 3 days, and 18 to 20 of those in parallel on the Friday (by
 reusing fishbowl rooms).

 Each project track would request some mix of sessions (We'd like 4
 fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
 Friday) and the TC would arbitrate how to allocate the limited
 resources. Agenda for the fishbowl sessions would need to be published
 in advance, but agenda for the working sessions could be decided
 dynamically from an etherpad agenda.

 By making larger use of smaller spaces, we expect that setup to let us
 accommodate the needs of more projects. By merging the two separate Ops
 Summit and Design Summit events, it should make the Ops feedback an
 integral part of the Design process rather than a second-class citizen.
 By creating separate working session rooms, we hope to evolve the pod
 concept into something where it's easier for teams to get work done
 (less noise, more whiteboards, clearer agenda).

 What do you think ? Could that work ? If not, do you have alternate
 suggestions ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
~sean
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Asselin, Ramy

Regarding SSH Keys and logging into nodes, you need to set the NODEPOOL_SSH_KEY 
variable

1.   I documented my notes here 
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample#L48

2.   This is also documented ‘officially’ here: 
https://github.com/openstack-infra/nodepool/blob/master/README.rst

3.   Also, I had an issue getting puppet to do the right thing with keys, 
so it gets forced here: 
https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L197


Ramy

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Friday, January 09, 2015 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Thanks Patrick,
Indeed it seems the cloud provider was setting up vms on a bridge whose eth was 
DOWN so the vms could not connect to the outside world so the prepare script 
was failing.
Looking into that.

Thanks,

Eduard

On Fri, Jan 9, 2015 at 6:44 PM, Patrick East 
patrick.e...@purestorage.com wrote:
Ah yea, sorry, should have specified; I am having it run the 
prepare_node_devstack.sh from the infra repo. I see it adding the same public 
key to the user specified in my nodepool.yaml. The strange part (and I need to 
double check.. feel like it can't be right) is that on my master node the 
nodepool users id_rsa changed at some point in the process.


-Patrick

On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley 
fu...@yuggoth.org wrote:
On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
[...]
 On a related note, I am having issues with the ssh keys. Nodepool
 is able to log in to the node to set up the template and create an
 image from it, but then fails to log in to a build node. Have you
 run into any issues with that?

Your image build needs to do _something_ to make SSH into the
resulting nodes possible. We accomplish that by applying a puppet
manifest which sets up an authorized_keys file for the account we
want it to use, but there are countless ways you could go about it
in your environment.
--
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.ma...@cloudfounders.com

CloudFounders, The Private Cloud Software Company



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Thierry Carrez
sean roberts wrote:
 I like it. Thank you for coming up with improvements to the
 summit planning. One caveat on the definition of project for summit
 space. Which projects get considered for space is always difficult. Who
 is going to fill the rooms they request or are they going to have them
 mostly empty? I'm sure the TC can figure it out by looking at the number
 of contributors or something like that. I would however, like to know a
 bit more of your plan for this specific part of the proposal sooner than
 later.   

That would be any OpenStack project, with the project structure reform
hopefully completed by then. That would likely let projects that were
previously in the other projects track have time to apply and to be
considered full Design Summit citizens. The presence of a busy other
projects track to cover for unofficial projects in previous summits
really was an early sign that something was wrong with our definition of
OpenStack projects anyway :)

Now I expect the TC to split the limited resources following metrics
like team size and development activity. Small projects might end up
having just a couple sessions in mostly-empty rooms, yes... still better
than not giving space at all.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Jeremy Stanley
On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
[...]
 On a related note, I am having issues with the ssh keys. Nodepool
 is able to log in to the node to set up the template and create an
 image from it, but then fails to log in to a build node. Have you
 run into any issues with that?

Your image build needs to do _something_ to make SSH into the
resulting nodes possible. We accomplish that by applying a puppet
manifest which sets up an authorized_keys file for the account we
want it to use, but there are countless ways you could go about it
in your environment.
-- 
Jeremy Stanley
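
For concreteness, a minimal sketch of the kind of image-build step being described, i.e. installing an authorized_keys entry for the account nodepool will log in as. This is illustrative only (infra does the equivalent via a puppet manifest), and reading the public key from a NODEPOOL_SSH_KEY environment variable is just one way it might be passed in.

    # Illustrative only: append a public key to the build user's
    # authorized_keys so nodepool can SSH into nodes booted from the image.
    import os
    import stat


    def install_authorized_key(pubkey, home=None):
        home = home or os.path.expanduser("~")
        ssh_dir = os.path.join(home, ".ssh")
        if not os.path.isdir(ssh_dir):
            os.makedirs(ssh_dir, 0o700)
        auth_file = os.path.join(ssh_dir, "authorized_keys")
        with open(auth_file, "a") as f:
            f.write(pubkey.strip() + "\n")
        os.chmod(auth_file, stat.S_IRUSR | stat.S_IWUSR)  # mode 0600


    if __name__ == "__main__":
        key = os.environ.get("NODEPOOL_SSH_KEY")
        if key:
            install_authorized_key(key)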

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-09 Thread Erlon Cruz
Hi Ivan, thanks !!

On Fri, Jan 9, 2015 at 10:42 AM, Ivan Kolodyazhny e...@e0ne.info wrote:

 Hi Erlon,

 We've got a thread mailing-list [1] for it and some details in wiki [2].
 Anyway, need to get confirmation from our core devs and/or Mike.

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html
 [2]
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Testing_requirements_for_Kilo_release_and_beyond

 Regards,
 Ivan Kolodyazhny

 On Fri, Jan 9, 2015 at 2:26 PM, Erlon Cruz sombra...@gmail.com wrote:

 Hi all, hi cinder core devs,

  I have read IRC discussions about a deadline for driver vendors to
  have their CI running and voting by kilo-2, but I didn't find any post
 on this list to confirm this. Can anyone confirm this?

 Thanks,
 Erlon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Patrick East
Ah yea, sorry, should have specified; I am having it run
the prepare_node_devstack.sh from the infra repo. I see it adding the same
public key to the user specified in my nodepool.yaml. The strange part (and
I need to double-check; it feels like it can't be right) is that on my master
node the nodepool user's id_rsa changed at some point in the process.


-Patrick

On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
 [...]
  On a related note, I am having issues with the ssh keys. Nodepool
  is able to log in to the node to set up the template and create an
  image from it, but then fails to log in to a build node. Have you
  run into any issues with that?

 Your image build needs to do _something_ to make SSH into the
 resulting nodes possible. We accomplish that by applying a puppet
 manifest which sets up an authorized_keys file for the account we
 want it to use, but there are countless ways you could go about it
 in your environment.
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Armando M.

 If we were standing at a place with a detailed manual upgrade document
 that explained how to do minimal VM downtime, that a few ops had gone
 through and proved out, that would be one thing. And we could figure out
 which parts made sense to put tooling around to make this easier for
 everyone.

 But we seem far from there.

 My suggestion is to start with a detailed document, figure out that it
 works, and build automation around that process.


The problem is that whatever documented solution we come up with is
going to be so opinionated as to be of hardly any use in general terms, let
alone worth automating. Furthermore, its lifespan is going to be fairly
limited, which to me doesn't seem to justify the engineering cost,
and it's not as though we haven't been trying...

I am not suggesting we give up entirely, but perhaps we should look at the
operator cases individually (for those who cannot afford cold migrations, or
who more simply stand up a new cloud to run side-by-side with the old one and
leave the old one running until it drains). This means having someone
technical, with deep insight into these operators' environments, lead
the development effort required to adjust the open source components to
accommodate whatever migration process makes sense to them. Having someone
championing a general effort from the 'outside' does not sound like an
efficient use of anyone's time.

So this goes back to the question: who can effectively lead the technical
effort? I personally don't think Neutron or Nova cores can lead this effort
effectively if they don't have direct access to, and knowledge of, these
cloud platforms and everything that pertains to them.

Armando


 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Eduard Matei
Thanks Patrick,
Indeed, it seems the cloud provider was setting up VMs on a bridge whose eth
interface was DOWN, so the VMs could not connect to the outside world and the
prepare script was failing.
Looking into that.

Thanks,

Eduard

On Fri, Jan 9, 2015 at 6:44 PM, Patrick East patrick.e...@purestorage.com
wrote:

 Ah yea, sorry, should have specified; I am having it run
 the prepare_node_devstack.sh from the infra repo. I see it adding the same
 public key to the user specified in my nodepool.yaml. The strange part (and
 I need to double check.. feel like it can't be right) is that on my master
 node the nodepool users id_rsa changed at some point in the process.


 -Patrick

 On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
 [...]
  On a related note, I am having issues with the ssh keys. Nodepool
  is able to log in to the node to set up the template and create an
  image from it, but then fails to log in to a build node. Have you
  run into any issues with that?

 Your image build needs to do _something_ to make SSH into the
 resulting nodes possible. We accomplish that by applying a puppet
 manifest which sets up an authorized_keys file for the account we
 want it to use, but there are countless ways you could go about it
 in your environment.
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The scope of OpenStack wiki [all]

2015-01-09 Thread Stefano Maffulli
On Fri, 2015-01-09 at 10:35 +0100, Thierry Carrez wrote:
 One of the issues here is that the wiki also serves as a default
 starting page for all things not on www.openstack.org (its main page
 is a list of relevant links). So at the same time we are moving
 authoritative content out of the wiki to more appropriate,
 version-controlled and peer-reviewed sites, we are still relying on the
 wiki as a reference catalog or starting point to find those more
 appropriate sites. That is IMHO what creates the confusion on where the
 authoritative content actually lives.

There is an intention to redesign the Community page
http://www.openstack.org/community/: maybe this can be used as a
starting point for discovery of governance, specs, infra manual,
contributor docs, etc? 

The wiki may need a new category for 'stackforge' projects but probably
it makes sense to wait until the new programs.yaml and some tags are
set. Eventually we may match those tags in wiki [[Category:]]...
something for the future. 

Between redesigning www/Community and docs/developers(contributors) I'm
quite confident that most personas currently served by the wiki will
have their interests served better elsewhere.

To answer Carol's comment: no content will be pushed out of the wiki
without a proper migration and redirection. We're discussing the scope
of the wiki so that we can communicate more clearly what content should
be expected to be in the wiki and what we should plan on putting
elsewhere (since so much redesigning is going on).

 So we also need to revisit how to make navigation between the various
 web properties of OpenStack more seamless and discoverable, so that we
 don't rely on the wiki starting page for that important role.

Indeed. The new www.openstack.org has a better navigation bar, IMHO, and
the wiki pages (and mailing list archives and other sites) have been
left behind. It shouldn't be too hard to sync the navigation across the
sites and offer a common top-level path at least.

/stef


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [qa] EC2 status and call for assistance

2015-01-09 Thread Matt Riedemann



On 1/8/2015 1:28 PM, Matt Riedemann wrote:



On 4/25/2014 4:13 AM, Alexandre Levine wrote:

Joe,

In regard to your first question - yes we'll be going in this direction
very soon. It's being discussed with Randy now.
As for the second question - we'd love to participate in fixing it (in
fact we've done it for OCS already) and probably maintaining it but I'm
not sure what it takes and means to commit to this - we'll discuss it as
well.

Best regards,
   Alex Levine

24.04.2014 23:33, Joe Gordon wrote:




On Thu, Apr 24, 2014 at 10:10 AM, Alexandre Levine
alev...@cloudscaling.com wrote:

Cristopher,

FYI in regard to 


Its the sort of direction that we tried to steer the GCE
API folks in I
cehouse, though I don't know what they ended up doing



We ended up perfectly ok. The project is on Stackforge for some
time https://github.com/stackforge/gce-api. It works.
I believe that this is exactly what should be done with EC2 as
well. We even considered and tried to estimate it once.

I can tell you even more that we do have lots of AWS Tempest tests
specifically to check various compatibility issues in OpenStack.
And we've created a number of fixes for proprietary implementation
of a cloud based on OpenStack. Some of them are in EC2 layer, some
are in nova core.


Any plans to contribute this to the community?


But anyways, I'm completely convinced that:

1. Any further improvements to EC2 layer should be done after its
separation from nova.


So the fundamental problem we are having with Nova's EC2
implementation is that no one is maintaining it upstream.  If pulling
EC2 out of nova into its own repo solves this problem then wonderful.
But the status quo is untenable, Nova does not want to ship code that
we know to be broken, so we need folks interested in it to help fix it.

2. EC2 should still somehow be supported by OpenStack because as
far as I know lots of people use euca2ools to access it.


Best regards,
  Alex Levine

24.04.2014 19:24, Christopher Yeoh wrote:

On Thu, 24 Apr 2014 09:10:19 +1000
Michael Still mi...@stillhq.com
wrote:

These seem like the obvious places to talk to people about
helping us
get this code maintained before we're forced to drop it.
Unfortunately
we can't compel people to work on things, but we can make
it in their
best interests.

A followup question as well -- there's a proposal to
implement the
Nova v2 API on top of the v3 API. Is something similar
possible with
EC2? Most of the details of EC2 have fallen out of my
brain, but I'd
be very interested in if such a thing is possible.

So there's sort of a couple of ways we suggested doing a V2
API on top
of V3 long term. The current most promising proposal (and I
think
Kenichi has covered this a bit in another email) is a very
thin layer
inside the Nova API code. This works well because the V2 and
V3 APIs in
many areas are very closely related anyway - so emulation is
straightforward.

However there is another alternative (which I don't think is
necessary
for V2) and that is to have a more fuller fledged type proxy
where
translation is say done between receiving V2 requests and
translating
them to native V3 API requests. Responses are similarly
translated but
in reverse. Its the sort of direction that we tried to steer
the GCE
API folks in Icehouse, though I don't know what they ended up
doing -
IIRC I think they said it would be possible.

Longer term I suspect its something we should consider if we
could do
something like that for the EC2 API and then be able to rip
out the
ec2 API specific code from the nova API part of tree. The
messiness of
any UUID or state map translation perhaps could then be
handled in a
very isolated manner from the core Nova code (though I won't
pretend to
understand the specifics of what is required here). I guess the
critical question will be if the emulation of the EC2 API is
good
enough, but as Sean points out - there are lots of existing
issues
already so it may end up not perfect, but still much better
than what we
have now.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___

Re: [openstack-dev] [Glance] IRC logging

2015-01-09 Thread Joshua Harlow
So the only comment I'll put in is one that I know not everyone agrees 
with, but I might as well throw it out there.


http://freenode.net/channel_guidelines.shtml (this page has a bunch of 
useful advice IMHO).


From that, something useful to look at / think over, at least...


If you're considering publishing channel logs, think it through. The 
freenode network is an interactive environment. Even on public channels, 
most users don't weigh their comments with the idea that they'll be 
enshrined in perpetuity. For that reason, few participants publish logs.


If you're publishing logs on an ongoing basis, your channel topic should 
reflect that fact. Be sure to provide a way for users to make comments 
without logging, and get permission from the channel owners before you 
start. If you're thinking of anonymizing your logs (removing 
information that identifies the specific users), be aware that it's 
difficult to do it well—replies and general context often provide 
identifying information which is hard to filter.


If you just want to publish a single conversation, be careful to get 
permission from each participant. Provide as much context as you can. 
Avoid the temptation to publish or distribute logs without permission in 
order to portray someone in a bad light. The reputation you save will 
most likely be your own.



Brian Rosmaita wrote:

The response on the review is overwhelmingly positive (or, strictly
speaking, unanimously non-negative).

If anyone has an objection, could you please register it before 12:00
UTC on Monday, January 12?

https://review.openstack.org/#/c/145025/

thanks,
brian

*From:* David Stanek [dsta...@dstanek.com]
*Sent:* Wednesday, January 07, 2015 4:43 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Glance] IRC logging

It's also important to remember that IRC channels are typically not
private and are likely already logged by dozens of people anyway.

On Tue, Jan 6, 2015 at 1:22 PM, Christopher Aedo ca...@mirantis.com
wrote:

On Tue, Jan 6, 2015 at 2:49 AM, Flavio Percoco fla...@redhat.com
wrote:
  Fully agree... I don't see how enabling logging should be a limitation
  for freedom of thought. We've used it in Zaqar since day 0 and it's
  been of great help for all of us.

  The logging does not remove the need of meetings where decisions and
  more relevant/important topics are discussed.

Wanted to second this as well. I'm strongly in favor of logging -
looking through backlogs of chats on other channels has been very
helpful to me in the past, and it's sure to help others in the future.
I don't think there is danger of anyone pointing to a logged IRC
conversation in this context as some statement of record.

-Christopher

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Jay Pipes

Huge +1 from me. Thank you, Thierry.

-jay

On 01/09/2015 09:50 AM, Thierry Carrez wrote:

Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the
Design Summit format for Vancouver, changes on which we'd very much like
to hear your feedback.

The problems we are trying to solve are the following:
- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and
the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more
work done

While some sessions benefit from large exposure, loads of feedback and
large rooms, some others are just workgroup-oriented work sessions that
benefit from smaller rooms, less exposure and more whiteboards. Smaller
rooms are also cheaper space-wise, so they allow us to scale more easily
to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at
the Design Summit. Ops feedback is in my opinion part of the design of
OpenStack, so the Ops Summit would become a track within the
forward-looking Design Summit. Tracks may use two separate types of
sessions:

* Fishbowl sessions
Those sessions are for open discussions where a lot of participation and
feedback is desirable. Those would happen in large rooms (100 to 300
people, organized in fishbowl style with a projector). Those would have
catchy titles and appear on the general Design Summit schedule. We would
have space for 6 or 7 of those in parallel during the first 3 days of
the Design Summit (we would not run them on Friday, to reproduce the
successful Friday format we had in Paris).

* Working sessions
Those sessions are for a smaller group of contributors to get specific
work done or prioritized. Those would happen in smaller rooms (20 to 40
people, organized in boardroom style with loads of whiteboards). Those
would have a blanket title (like infra team working session) and
redirect to an etherpad for more precise and current content, which
should limit out-of-team participation. Those would replace project
pods. We would have space for 10 to 12 of those in parallel for the
first 3 days, and 18 to 20 of those in parallel on the Friday (by
reusing fishbowl rooms).

Each project track would request some mix of sessions (We'd like 4
fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
Friday) and the TC would arbitrate how to allocate the limited
resources. Agenda for the fishbowl sessions would need to be published
in advance, but agenda for the working sessions could be decided
dynamically from an etherpad agenda.

By making larger use of smaller spaces, we expect that setup to let us
accommodate the needs of more projects. By merging the two separate Ops
Summit and Design Summit events, it should make the Ops feedback an
integral part of the Design process rather than a second-class citizen.
By creating separate working session rooms, we hope to evolve the pod
concept into something where it's easier for teams to get work done
(less noise, more whiteboards, clearer agenda).

What do you think ? Could that work ? If not, do you have alternate
suggestions ?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reckoning time for nova ec2 stack

2015-01-09 Thread Tim Bell

Let's not forget that, from my memory of the user survey, more than a quarter 
of OpenStack production deployments are using the EC2 interface. Our experience 
is that the basic functionality is OK but you need to keep to the appropriate 
subset.

Any plans for deprecation of the EC2 API inside Nova need to factor in the 
real-life usage in production and have a validation window to confirm that an 
alternative solution not only provides better and compatible functionality 
at scale but also is packaged, can be configured with puppet/chef/..., and can 
be monitored/metered, rate limited, …

No problem with starting the effort if there is full agreement that this is 
the way to go, but this is not a trivial migration.

Tim

On 9 Jan 2015, at 17:17, Steven Hardy sha...@redhat.com wrote:

 On Fri, Jan 09, 2015 at 09:11:50AM -0500, Sean Dague wrote:
 boto 2.35.0 just released, and makes hmac-v4 authentication mandatory
 for EC2 end points (it has been optionally supported for a long time).
 
 Nova's EC2 implementation does not do this.
 
 The short term approach is to pin boto -
 https://review.openstack.org/#/c/146049/, which I think is a fine long
 term fix for stable/, but in master not supporting new boto, which
 people are likely to deploy, doesn't really seem like an option.
 
 https://bugs.launchpad.net/tempest/+bug/1408987 is the bug.
 
 I don't think shipping an EC2 API in Kilo that doesn't work with recent
 boto is a thing Nova should do. Do we have volunteers to step up and fix
 this, or do we need to get more aggressive about deprecating this interface?
 
 I'm not stepping up to maintain the EC2 API, but the auth part of it is
 very similar to heat's auth (which does support hmac-v4), so I hacked on
 the nova API a bit to align with the way heat does things:
 
 https://review.openstack.org/#/c/146124/ (WIP)
 
 This needs some more work, but AFAICS solves the actual auth part which is
 quite simply fixed by reusing some code we have in heat's ec2token middleware.
 
 If this is used, we could extract the common parts and/or use a common auth
 middleware in future, assuming the EC2 implementation as a whole isn't
 deemed unmaintained and removed that is.
 
 Steve
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
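
As background to the thread above: the hmac-v4 scheme that boto 2.35.0 now requires is AWS Signature Version 4, whose core is a chained HMAC-SHA256 key derivation. The sketch below shows that generic derivation only; it is not the Heat ec2token or Nova EC2 middleware code, and building the canonical request and string-to-sign is omitted.

    # Illustrative sketch of the AWS Signature Version 4 ("hmac-v4") signing
    # key derivation and final signature. See the AWS SigV4 documentation for
    # how the canonical request and string-to-sign are constructed.
    import hashlib
    import hmac


    def _hmac(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


    def sigv4_signature(secret_key, date_stamp, region, service, string_to_sign):
        # date_stamp is of the form YYYYMMDD
        k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
        k_region = _hmac(k_date, region)
        k_service = _hmac(k_region, service)
        k_signing = _hmac(k_service, "aws4_request")
        return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                        hashlib.sha256).hexdigest()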


Re: [openstack-dev] [Glance] IRC logging

2015-01-09 Thread Brian Rosmaita
The response on the review is overwhelmingly positive (or, strictly speaking, 
unanimously non-negative).

If anyone has an objection, could you please register it before 12:00 UTC on 
Monday, January 12?

https://review.openstack.org/#/c/145025/

thanks,
brian

From: David Stanek [dsta...@dstanek.com]
Sent: Wednesday, January 07, 2015 4:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] IRC logging

It's also important to remember that IRC channels are typically not private and 
are likely already logged by dozens of people anyway.

On Tue, Jan 6, 2015 at 1:22 PM, Christopher Aedo 
ca...@mirantis.com wrote:
On Tue, Jan 6, 2015 at 2:49 AM, Flavio Percoco 
fla...@redhat.com wrote:
 Fully agree... I don't see how enabling logging should be a limitation
 for freedom of thought. We've used it in Zaqar since day 0 and it's
 been of great help for all of us.

 The logging does not remove the need of meetings where decisions and
 more relevant/important topics are discussed.

Wanted to second this as well.  I'm strongly in favor of logging -
looking through backlogs of chats on other channels has been very
helpful to me in the past, and it's sure to help others in the future.
I don't think there is danger of anyone pointing to a logged IRC
conversation in this context as some statement of record.

-Christopher

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Patrick East
Thanks for the links!

After digging around in my configs I figured out the issue: I had a typo in
my JENKINS_SSH_PUBLIC_KEY_NO_WHITESPACE (a copy-paste cut off a
character...). But I managed to put the right one in the key for nova to
use, so it was able to log in to set up the instance, but didn't end up with
the right thing in the NODEPOOL_SSH_KEY variable.

-Patrick

On Fri, Jan 9, 2015 at 9:25 AM, Asselin, Ramy ramy.asse...@hp.com wrote:



 Regarding SSH Keys and logging into nodes, you need to set the
 NODEPOOL_SSH_KEY variable

 1.   I documented my notes here
 https://github.com/rasselin/os-ext-testing-data/blob/master/etc/nodepool/nodepool.yaml.erb.sample#L48

 2.   This is also documented ‘officially’ here:
 https://github.com/openstack-infra/nodepool/blob/master/README.rst

 3.   Also, I had an issue getting puppet to do the right thing with
 keys, so it gets forced here:
 https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L197



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, January 09, 2015 8:58 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Thanks Patrick,

 Indeed it seems the cloud provider was setting up vms on a bridge whose
 eth was DOWN so the vms could not connect to the outside world so the
 prepare script was failing.

 Looking into that.



 Thanks,



 Eduard



 On Fri, Jan 9, 2015 at 6:44 PM, Patrick East patrick.e...@purestorage.com
 wrote:

  Ah yea, sorry, should have specified; I am having it run
 the prepare_node_devstack.sh from the infra repo. I see it adding the same
 public key to the user specified in my nodepool.yaml. The strange part (and
 I need to double check.. feel like it can't be right) is that on my master
 node the nodepool users id_rsa changed at some point in the process.




   -Patrick



 On Fri, Jan 9, 2015 at 8:38 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-01-09 08:28:39 -0800 (-0800), Patrick East wrote:
 [...]
  On a related note, I am having issues with the ssh keys. Nodepool
  is able to log in to the node to set up the template and create an
  image from it, but then fails to log in to a build node. Have you
  run into any issues with that?

 Your image build needs to do _something_ to make SSH into the
 resulting nodes possible. We accomplish that by applying a puppet
 manifest which sets up an authorized_keys file for the account we
 want it to use, but there are countless ways you could go about it
 in your environment.
 --
 Jeremy Stanley


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 --

 *Eduard Biceri Matei, Senior Software Developer*

  www.cloudfounders.com

  | eduard.ma...@cloudfounders.com







  *CloudFounders, The Private Cloud Software Company*





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Swift object-updater and container-updater

2015-01-09 Thread Jay S. Bryant

Minwoo,

It is important to understand that Icehouse has gone into security-fixes-only 
mode. It is too late in the stable process to be making notable changes for 
anything other than security issues.


The patch for the fork-bomb-like problem in the object-auditor is in 
Icehouse: https://review.openstack.org/#/c/126371/ So, we do not need 
to worry about that one. The other two problems are not really security 
problems; they cause the object-updater and container-updater to throw 
an exception and exit. The behavior is irritating but not a security risk.


So, I think the fix that you are really asking for in Icehouse has 
already merged. I will propose the other fixes back to stable/juno but 
don't feel they warrant a change in Icehouse.


I hope this clarifies the situation.

Jay

On 01/08/2015 09:21 AM, Minwoo Bae wrote:

Hi, to whom it may concern:


Jay Bryant and I would like to have the fixes for the Swift 
object-updater (https://review.openstack.org/#/c/125746/) and the 
Swift container-updater 
(https://review.openstack.org/#/q/I7eed122bf6b663e6e7894ace136b6f4653db4985,n,z) 
backported to Juno and then to Icehouse soon if possible. It's been in 
the queue for a while now, so we were wondering if we could have an 
estimated time for delivery?


Icehouse is in security-only mode, but the container-updater issue may 
potentially be used as a fork-bomb, which presents security concerns. 
To further justify the fix, a problem of similar nature 
https://review.openstack.org/#/c/126371/ (regarding the object-auditor) 
was successfully fixed in stable/icehouse.


The object-updater issue may potentially have some security 
implications as well.



Thank you very much!

Minwoo


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Swift object-updater and container-updater

2015-01-09 Thread Jay S. Bryant

Minwoo,

The cherry-picks for the container-updater and object-updater back to 
stable/juno are now available for review: 
https://review.openstack.org/146211 and https://review.openstack.org/134082


Jay

On 01/08/2015 09:21 AM, Minwoo Bae wrote:

Hi, to whom it may concern:


Jay Bryant and I would like to have the fixes for the Swift 
object-updater (https://review.openstack.org/#/c/125746/) and the 
Swift container-updater 
(https://review.openstack.org/#/q/I7eed122bf6b663e6e7894ace136b6f4653db4985,n,z) 
backported to Juno and then to Icehouse soon if possible. It's been in 
the queue for a while now, so we were wondering if we could have an 
estimated time for delivery?


Icehouse is in security-only mode, but the container-updater issue may 
potentially be used as a fork-bomb, which presents security concerns. 
To further justify the fix, a problem of similar nature 
https://review.openstack.org/#/c/126371/ (regarding the object-auditor) 
was successfully fixed in stable/icehouse.


The object-updater issue may potentially have some security 
implications as well.



Thank you very much!

Minwoo


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-09 Thread Sean Dague
On 01/09/2015 01:59 AM, Morgan Fainberg wrote:
 That was a copy paste error. The response was meant to be:
 
 Yes, that is the issue, unbounded version on the stable branches. 

So if that is the root issue, there are other fixes. Our current policy
of keeping clients uncapped on stable branches is due to trying to do a
few too many things at once in the test environment.

Some history:

There are basically no explicit live tests for client libraries / CLIs
at all. It remains a hole. As a project we've fallen back to secondary
testing of those clients through the fact that OpenStack services
implicitly use them to talk to each other (and they used to be inside a
small amount of tempest code, but that's been removed).

As a policy we expect:

CLI/LIBS to work with both newer and older clouds, which totally makes
sense, especially when you think about a use case like nodepool where
you have one process that talks to two clouds at different levels of
OpenStack at the same time.

Barring any real testing of the CLI/LIBS, we install the latest libs
in stable/juno stacks as a fallback.

But... what that actually creates is a stable/icehouse env with the
latest client code being used to talk between components (i.e. from nova
to cinder), which I am sure is how basically zero people in the world
have their clouds. You don't go randomly upping libraries in these
environments unless you have to. It also actually allows backports to
require newer library features which wouldn't exist there.


So... we could (and probably should) cap libraries on stable branches.
There are definitely a few dragons there, at least one of which Doug
discovered in that you can't really do this for only a slice of the
libraries, as if any are allowed to roll forward they can stick you in
conflicting requirements land. We know they all worked at a revision set
when we released; we need to capture that and move on.

This would allow the keystone team to drop having to carry that dead code.

Clearly we also need to actually have the clients test themselves
against OpenStack explicitly, not just by accident. But that's a bigger
challenge to overcome.

-Sean

 
 --Morgan
 
 Sent via mobile
 
 On Jan 8, 2015, at 22:57, Morgan Fainberg morgan.fainb...@gmail.com wrote:



 On Jan 8, 2015, at 16:10, Sean Dague s...@dague.net wrote:

 On 01/08/2015 07:01 PM, Morgan Fainberg wrote:

 On Jan 8, 2015, at 3:56 PM, Sean Dague s...@dague.net wrote:

 On 01/08/2015 06:29 PM, Morgan Fainberg wrote:
 As of Juno all projects are using the new keystonemiddleware package for 
 auth_token middleware. Recently we’ve been running into issues with 
 maintenance of the now frozen (and deprecated) 
 keystoneclient.middleware.auth_token code. Ideally all deployments 
 should move over to the new package. In some cases this may or may not 
 be as feasible due to requirement changes when using the new middleware 
 package on particularly old deployments (Grizzly, Havana, etc).

 The Keystone team is looking for the best way to support our deployer 
 community. In a perfect world we would be able to convert icehouse 
 deployments to the new middleware package and instruct deployers to use 
 either an older keystoneclient or convert to keystonemiddleware if they 
 want the newest keystoneclient lib (regardless of their deployment 
 release). For releases older than Icehouse (EOLd) there is no way to 
 communicate in the repositories/tags a change to require 
 keystonemiddleware.

 There are 2 viable options to get to where we only have one version of 
 the keystonemiddleware to maintain (which for a number of reasons, 
 primarily relating to security concerns is important).

 1) Work to update Icehouse to include the keystonemiddleware package for 
 the next stable release. Sometime after this stable release remove the 
 auth_token (and other middlewares) from keystoneclient. The biggest 
 downside is this adds new dependencies in an old release, which is poor 
 for packaging and deployers (making sure paste-ini is updated etc).

 2) Plan to remove auth_token from keystoneclient once icehouse hits EOL. 
 This is a better experience for our deployer base, but does not solve 
 the issues around solid testing with the auth_token middleware from 
 keystoneclient (except for the stable-icehouse devstack-gate jobs).

 I am looking for insight, preferences, and other options from the 
 community and the TC. I will propose this topic for the next TC meeting 
 so that we can have a clear view on how to handle this in the most 
 appropriate way that imparts the best balance between maintainability, 
 security, and experience for the OpenStack providers, deployers, and 
 users.

 So, ignoring the code a bit for a second, what are the interfaces which
 are exposed that we're going to run into a breaking change here?

   -Sean


 There are some configuration options provided by auth_token middleware and 
 the paste-ini files load keystoneclient.middleware.auth_token to 
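
For reference, the paste-ini change in question is essentially a one-line
filter_factory switch (section name illustrative, module paths as shipped by
the two packages):

    [filter:authtoken]
    # old, deprecated location:
    #paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # new package:
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory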

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2015-01-09 Thread Zane Bitter

On 08/01/15 05:39, Anant Patil wrote:

1. The stack was failing when there were single disjoint resources or
just one resource in template. The graph did not include this resource
due to a minor bug in dependency_names(). I have added a test case and
fix here:
https://github.com/anantpatil/heat-convergence-prototype/commit/b58abd77cf596475ecf3f19ed38adf8ad3bb6b3b


Thanks, sorry about that! I will push a patch to fix it up.


2. The resource graph is created with keys in both forward order
traversal and reverse order traversal and the update will finish the
forward order and attempt the reverse order. If this is the case, then
the update-replaced resources will be deleted before the update is
complete and if the update fails, the old resource is not available for
roll-back; a new resource has to be created then. I have added a test
case at the above mentioned location.

In our PoC, the updates (concurrent updates) won't remove a
update-replaced resource until all the resources are updated, and
resource clean-up phase is started.


Hmmm, this is a really interesting question actually. That's certainly 
not how Heat works at the moment; we've always assumed that rollback is 
best-effort at recovering the exact resources you had before. It would 
be great to have users weigh in on how they expect this to behave. I'm 
curious now what CloudFormation does.


I'm reluctant to change it though because I'm pretty sure this is 
definitely *not* how you would want e.g. a rolling update of an 
autoscaling group to happen.



It is unacceptable to remove the old
resource to be rolled-back to since it may have changes which the user
doesn't want to loose;


If they didn't want to lose it they shouldn't have tried an update that 
would replace it. If an update causes a replacement or an interruption 
to service then I consider the same fair game for the rollback - the 
user has already given us permission for that kind of change. (Whether 
the user's consent was informed is a separate question, addressed by 
Ryan's update-preview work.)



and that's why probably they use the roll-back
flag.


I don't think there's any basis for saying that. People use the rollback 
flag because they want the stack left in a consistent state even if an 
error occurs.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] noVNC disabled by default?

2015-01-09 Thread Solly Ross
Hi, 
I just noticed that noVNC was disabled by default in devstack (the relevant 
change was  
https://review.openstack.org/#/c/140860/).  

Now, if I understand correctly (based on the short commit message), the
rationale is that we don't want devstack to rely on non-OpenStack Git
repos, so that devstack doesn't fail when some external Git hosting
service (e.g. GitHub) goes down.

This is all fine and dandy (and a decent idea, IMO), but it leaves devstack
installing a broken installation of Horizon by default -- Horizon still
attempts to show the noVNC console when you go to the console tab for an
instance, which is a bit confusing at first.  Now, it wasn't particularly
hard to track down *why* this happened (hmm... my stackrc seems to be
missing n-novnc in ENABLED_SERVICES.  Go-go-gadget `git blame`), but it
strikes me as a bit inconsistent and inconvenient.
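
(For anyone who just wants the old behaviour back locally, re-enabling the
service in local.conf is enough -- roughly:

    [[local|localrc]]
    enable_service n-novnc

-- but that obviously doesn't help the out-of-the-box experience.)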

Personally, I would like to see noVNC back as a default service, since it   
can be useful when trying to see what your VM is actually doing during  
boot, or if you're having network issues.  Is there anything I can do   
as a noVNC maintainer to help?  

We (the noVNC team) do publish releases, and I've been trying to make
sure that they happen in a more timely fashion.  In the past, it was necessary
to use Git master to ensure that you got the latest version (there was a
2-year gap between 0.4 and 0.5!), but I'm trying to change that.  Currently,
it would appear that most of the distros are still using the old version (0.4),
but versions 0.5 and 0.5.1 are up on GitHub as release tarballs (0.5 being
3 months old and 0.5.1 having been tagged a couple of weeks ago).  I will
attempt to work with distro maintainers to get the packages updated.  However,
in the meantime, is there a place where it would be acceptable to put the
releases so that devstack can install them?

Best Regards,   
Solly Ross 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Neutron ML2][VMWare]NetworkNotFoundForBridge: Network could not be found for bridge br-int

2015-01-09 Thread Foss Geek
Dear All,

I am trying to integrate OpenStack + vCenter + Neutron + the VMware dvSwitch
ML2 mechanism driver.

I deployed a two-node OpenStack environment (controller + compute with KVM)
with Neutron VLAN + KVM using Fuel 5.1. I then installed nova-compute
using yum on the controller node and configured that nova-compute to
point at vCenter. I am also using Neutron VLAN with the VMware dvSwitch ML2
mechanism driver. My vCenter is properly configured as suggested by the
doc:
https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/

I am able to create network from Horizon and I can see the same network
created in vCenter. When I try to create a VM I am getting the below error
in Horizon.

Error: Failed to launch instance test-01: Please try again later [Error:
No valid host was found. ].

Here is the error message from Instance Overview tab:

Instance Overview

Info
  Name: test-01
  ID: 309a1f47-83b6-4ab4-9d71-642a2000c8a1
  Status: Error
  Availability Zone: nova
  Created: Jan. 9, 2015, 8:16 p.m.
  Uptime: 0 minutes

Fault
  Message: No valid host was found.
  Code: 500
  Details: File /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py,
    line 108, in schedule_run_instance raise exception.NoValidHost(reason=)
  Created: Jan. 9, 2015, 8:16 p.m.

Getting the below error in nova-all.log:


183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.135 31870 DEBUG
keystoneclient.middleware.auth_token
[req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Authenticating user token
__call__
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:676
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.136 31870 DEBUG
keystoneclient.middleware.auth_token
[req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Removing headers from request
environment:
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
_remove_auth_headers
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:733
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.137 31870 DEBUG
keystoneclient.middleware.auth_token
[req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Returning cached token
_cache_get
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1545
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.138 31870 DEBUG
keystoneclient.middleware.auth_token
[req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Storing token in cache store
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1460
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.139 31870 DEBUG
keystoneclient.middleware.auth_token
[req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Received request from user:
4564fea80fa14e1daed160afa074d389 with project_id :
dd32714d9009495bb51276e284380d6a and roles: admin,_member_
 _build_user_headers
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:996
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.141 31870 DEBUG
routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Matched GET
/dd32714d9009495bb51276e284380d6a/servers/309a1f47-83b6-4ab4-9d71-642a2000c8a1
__call__ /usr/lib/python2.6/site-packages/routes/middleware.py:100
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG
routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Route path:
'/{project_id}/servers/:(id)', defaults: {'action': u'show', 'controller':
nova.api.openstack.wsgi.Resource object at 0x43e2550} __call__
/usr/lib/python2.6/site-packages/routes/middleware.py:102
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG
routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Match dict:
{'action': u'show', 'controller': nova.api.openstack.wsgi.Resource object
at 0x43e2550, 'project_id': u'dd32714d9009495bb51276e284380d6a', 'id':
u'309a1f47-83b6-4ab4-9d71-642a2000c8a1'} __call__
/usr/lib/python2.6/site-packages/routes/middleware.py:103
183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.143 31870 DEBUG
nova.api.openstack.wsgi [req-05089e83-e4c1-4d90-b7c5-065226e55d91 None]
Calling method 'bound method Controller.show of
nova.api.openstack.compute.servers.Controller object at 0x4204290'
(Content-type='None', Accept='application/json') _process_stack
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:945
183Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.170 29111
DEBUG nova.virt.vmwareapi.network_util
[req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None] Network br-int not found on
host! get_network_with_the_name
/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/network_util.py:80
179Jan  9 20:16:23 node-18 nova-compute 2015-01-09 20:16:23.171 29111
ERROR nova.compute.manager [req-27cf4cd7-9184-4d7e-b57a-19ef3caeef26 None]
[instance: 309a1f47-83b6-4ab4-9d71-642a2000c8a1] Instance failed to spawn

[openstack-dev] [api] API Definition Formats

2015-01-09 Thread Everett Toews
One thing that has come up in the past couple of API WG meetings [1] is just 
how useful a proper API definition would be for the OpenStack projects.

By API definition I mean a format like Swagger, RAML, API Blueprint, etc. These
formats are a machine- and human-readable way of describing your API. Ideally they
drive the implementation of both the service and the client, rather than
treating the format like documentation, where it’s produced as a by-product of
the implementation.
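
As a tiny illustration (fragment only, all names hypothetical), a Swagger 2.0
description looks something like:

    swagger: "2.0"
    info:
      title: Example Compute API
      version: "2.1"
    paths:
      /servers/{server_id}:
        get:
          summary: Show a single server
          responses:
            "200":
              description: the server representation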

I think this blog post [2] does an excellent job of summarizing the role of API 
definition formats.

Some of the other benefits include validation of requests/responses, easier 
review of API design/changes, more consideration given to client design, 
generating some portion of your client code, generating documentation, mock 
testing, etc. 

If you have experience with an API definition format, how has it benefitted 
your prior projects?

Do you think it would benefit your current OpenStack project?

Thanks,
Everett

[1] https://wiki.openstack.org/wiki/Meetings/API-WG
[2] 
http://apievangelist.com/2014/12/21/making-sure-the-most-important-layers-of-api-space-stay-open/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Sylvain Bauza


Le 09/01/2015 09:01, Alex Xu a écrit :

Hi, All

There is bug when running nova with ironic 
https://bugs.launchpad.net/nova/+bug/1402658


The case is simple: one baremetal node with 1024MB ram, then boot two 
instances with 512MB ram flavor.

Those two instances will be scheduling to same baremetal node.

The problem is at scheduler side the IronicHostManager will consume 
all the resources for that node whatever
how much resource the instance used. But at compute node side, the 
ResourceTracker won't consume resources
like that, just consume like normal virtual instance. And 
ResourceTracker will update the resource usage once the
instance resource claimed, then scheduler will know there are some 
free resource on that node, then will try to

schedule other new instance to that node.

I take look at that, there is NumInstanceFilter, it will limit how 
many instance can schedule to one host. So can
we just use this filter to finish the goal? The max instance is 
configured by option 'max_instances_per_host', we
can make the virt driver to report how many instances it supported. 
The ironic driver can just report max_instances_per_host=1.
And libvirt driver can report max_instance_per_host=-1, that means no 
limit. And then we can just remove the
IronicHostManager, then make the scheduler side is more simpler. Does 
make sense? or there are more trap?


Thanks in advance for any feedback and suggestion.




Mmm, I think I disagree with your proposal. Let me explain as best I
can why:


tl;dr: any proposal that doesn't do the claiming at the scheduler level
tends to be wrong

The ResourceTracker should only be a module that provides stats about
compute nodes to the Scheduler.
How the Scheduler consumes these resources to make a decision
should only be a Scheduler thing.


Here, the problem is that the decision making is also shared with the 
ResourceTracker because of the claiming system managed by the context 
manager when booting an instance. It means that we have 2 distinct 
decision makers for validating a resource.


Let's stop being realistic for a moment and discuss what a decision
could mean for something other than a compute node. OK, let's say a
volume.
Provided that *something* reported volume statistics to the Scheduler,
it would be the Scheduler that decided whether a volume manager could
accept a volume request. There is no sense in validating the Scheduler's
decision on the volume manager, beyond maybe doing some error handling.


We know that the current model is kinda racy with Ironic because there
is a 2-stage validation (see [1]). I'm not in favor of complexifying the
model, but rather of putting all the claiming logic in the scheduler,
which is a longer path to win, but a safer one.


-Sylvain

[1]  https://bugs.launchpad.net/nova/+bug/1341420


Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Stable branch status and proposed stable-maint members mentoring

2015-01-09 Thread Thierry Carrez
Hi stable-maint people,

We seem to still have a number of issues with stable branch in the gate,
both in Icehouse and Juno. I'd like to help where I can but I have a bit
of a hard time tracking down the remaining failures and things that have
already been worked on (we really need a dashboard there...)

Ihar  Adam: as Icehouse and Juno champions, could you post a quick
status update here, and let us know where the rest of us can help ?

In the future, how could we better communicate on that ? Should we make
more use of #openstack-stable to communicate on issues and progress ?
Should we set up a Stable status wiki page ?

Another topic is the mentoring of newly proposed $PROJECT-stable-maint
members. We have a number of proposed people to contact and introduce
the stable branch policy to (before we add them to the group):

Erno Kuvaja (glance-stable-maint)
Amrith Kumar (trove-stable-maint)
Lin-Hua Cheng (horizon-stable-maint)

Is someone in stable-maint-core interested in reaching out to them ? If
not I'll probably handle that.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] serial-console *replaces* console-log file?

2015-01-09 Thread Sahid Orentino Ferdjaoui
On Fri, Jan 09, 2015 at 09:15:39AM +0800, Lingxian Kong wrote:
 There is an excellent post describing this, for your information:
 http://blog.oddbit.com/2014/12/22/accessing-the-serial-console-of-your-nova-servers/

Good reference, you can also get some information here:

  https://review.openstack.org/#/c/132269/

 2015-01-07 22:38 GMT+08:00 Markus Zoeller mzoel...@de.ibm.com:
  The blueprint serial-ports introduced a serial console connection
  to an instance via websocket. I'm wondering
  * why enabling the serial console *replaces* writing into log file [1]?
  * how one is supposed to retrieve the boot messages *before* one connects?

The nice thing about using the serial console is that with a few lines
of Python you can create an interactive console to debug your virtual
machine.
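
Something along these lines is enough (sketch only -- assumes the
websocket-client package and a ws:// URL fetched via
`nova get-serial-console <server>`):

    import sys
    import websocket  # pip install websocket-client

    url = sys.argv[1]  # the ws:// URL returned by nova get-serial-console
    ws = websocket.create_connection(url, subprotocols=['binary', 'base64'])
    ws.send(b'\r')      # poke the console so the guest prints something
    print(ws.recv())    # read whatever comes back
    ws.close()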

s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] packaging problem production build question

2015-01-09 Thread Matthias Runge
On 08/01/15 23:46, Matthew Farina wrote:
 Thanks for humoring me as I ask these questions. I'm just trying to
 connect the dots.
 
 How would system packages work in practice? For example, when it comes
 to ubuntu lucid (10.04 LTS) there is no system package meeting the
 jQuery requirement and for precise (12.04 LTS) you need
 precise-backports. This is for the most popular JavaScript library.
 There is only an angular package for trusty (14.04 LTS) and the version
 is older than the horizon minimum.
 
 private-bower would be a nice way to have a private registry. But, bower
 packages aren't packages in the same sense as system or pypi packages.
 If I understand it correctly, when bower downloads something it doesn't
 get it from the registry (bower.io http://bower.io or private-bower).
 Instead it goes to the source (e.g., Github) to download the code.
 private-bower isn't a package mirror but instead a private registry (of
 location). How could private-bower be used to negate network effects if
 you still need to go out to the Internet to get the packages?
 
 
For a deployment, you want updates, often installed automatically.

Your repository providing your horizon package needs to provide required
dependencies as well.

I wouldn't recommend using bower. In some environments, using third-party
repositories is not allowed at all. A test environment should match a
possible production environment where it can, and this one is quite easy.

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable branch status and proposed stable-maint members mentoring

2015-01-09 Thread Ihar Hrachyshka

On 01/09/2015 11:44 AM, Thierry Carrez wrote:

Hi stable-maint people,

We seem to still have a number of issues with stable branch in the gate,
both in Icehouse and Juno. I'd like to help where I can but I have a bit
of a hard time tracking down the remaining failures and things that have
already been worked on (we really need a dashboard there...)

Ihar  Adam: as Icehouse and Juno champions, could you post a quick
status update here, and let us know where the rest of us can help ?


Icehouse:

current:
- tempest, nova failures due to fresh boto release.
To be fixed with: https://review.openstack.org/#/c/146049/ (needs 
backports to Juno and Icehouse, and merge of openstack bot patches to 
nova and tempest repos)


recently fixed:
- keystone failed before due to new pip, fixed with 
I3e0f1c2d9a859f276f74cb1d1477f92fe8a7524e.
- nova failed before due to huge logs filled with DeprecationWarning 
warnings (fixed with latest stevedore release).


Other than that, there were some failures due to pypi mirror issues.



In the future, how could we better communicate on that ? Should we make
more use of #openstack-stable to communicate on issues and progress ?
Should we set up a Stable status wiki page ?
I think we should have some common document (etherpad?) with branch 
status and links.


Another topic is the mentoring of newly proposed $PROJECT-stable-maint
members. We have a number of proposed people to contact and introduce
the stable branch policy to (before we add them to the group):

Erno Kuvaja (glance-stable-maint)
Amrith Kumar (trove-stable-maint)
Lin-Hua Cheng (horizon-stable-maint)

Is someone in stable-maint-core interested in reaching out to them ? If
not I'll probably handle that.

I would expect stable liaisons to handle that. Don't we have those 
assigned to the projects?


/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Driver modes, share-servers, and clustered backends

2015-01-09 Thread Deepak Shetty
Some of my comments inline prefixed with deepakcs

On Fri, Jan 9, 2015 at 6:43 AM, Li, Chen chen...@intel.com wrote:

 Thanks for the explanations!
 Really helpful.

 My questions are added in line.

 Thanks.
 -chen

 -Original Message-
 From: Ben Swartzlander [mailto:b...@swartzlander.org]
 Sent: Friday, January 09, 2015 6:02 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Manila] Driver modes, share-servers, and
 clustered backends

 There has been some confusion on the topic of driver modes and
 share-server, especially as they related to storage controllers with
 multiple physical nodes, so I will try to clear up the confusion as much as
 I can.

 Manila has had the concept of share-servers since late icehouse. This
 feature was added to solve 3 problems:
 1) Multiple drivers were creating storage VMs / service VMs as a side
 effect of share creation and Manila didn't offer any way to manage or even
 know about these VMs that were created.
 2) Drivers needed a way to keep track of (persist) what VMs they had
 created

== so, a corresponding relationship does exist between share servers and
virtual machines.


deepakcs: I also have the same Q: is there a relation between a share server
and a service VM or not? Is there any other way you can implement a share
server w/o creating a service VM?
IIUC, some may say that the vserver created in the case of netapp storage is
equivalent to a share server? If this is true, then we should have a notion of
whether the share server is within Manila or outside Manila too, no? If this
is not true, then does the netapp cluster_mode driver get classified as a
single_svm mode driver?



 3) We wanted to standardize across drivers what these VMs looked like to
 Manila so that the scheduler and share-manager could know about them

==Q, why do the scheduler and share-manager need to know about them?


deepakcs: I guess because these service VMs will be managed by Manila, hence
they need to know about them.



 It's important to recognize that from Manila's perspective, all a
 share-server is is a container for shares that's tied to a share network
 and it also has some network allocations. It's also important to know that
 each share-server can have zero, one, or multiple IP addresses and can
 exist on an arbitrary large number of physical nodes, and the actual form
 that a share-server takes is completely undefined.


deepakcs: I am confused about `can exist on an arbitrary large number of
physical nodes` - how is this true in the case of the generic driver, where
the service VM is just a VM on one node? What does a large number of physical
nodes mean? Can you provide a real-world example to help understand this, please?



 During Juno, drivers that didn't explicity support the concept of
 share-servers basically got a dummy share server created which acted as a
 giant container for all the shares that backend created. This worked okay,
 but it was informal and not documented, and it made some of the things we
 want to do in kilo impossible.

== Q, what things are impossible?  The dummy share server solution makes sense
to me.


deepakcs: I looked at the stable/juno branch and I am not sure exactly which
part of the code you refer to as the dummy server. Can you pinpoint it, please,
so that it's clear for all? Are you referring to the ability of a driver
to handle setup_server as a dummy server creation? For example, in the glusterfs
case setup_server is a no-op and I don't see how a dummy share server
(meaning a service VM) is getting created from the code.




 To solve the above problem I proposed driver modes. Initially I proposed
 3 modes:
 1) single_svm
 2) flat_multi_svm
 3) managed_multi_svm

 Mode (1) was supposed to correspond to driver that didn't deal with share
 servers, and modes (2) and (3) were for drivers that did deal with share
 servers, where the difference between those 2 modes came down to networking
 details. We realized that (2) can be implemented as a special case of (3)
 so we collapsed the modes down to 2 and that's what's merged upstream now.

== driver that didn't deal with share servers
  =
https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
  = This is where I get totally lost.
  = Because the generic driver does not create and delete share
servers and their related networks, but it would still use a share server (the
service VM).
  = The share (the cinder volume) needs to attach to an instance no matter
what the driver mode is.
  = I think 'using' one is some kind of 'dealing with' it too.


deepakcs: I partly agree with Chen above. If (1) doesn't deal with share
servers, why even have 'svm' in it? Also, in *_multi_svm mode, what does
'multi' mean? IIRC we provide the ability to manage share servers, 1 per
tenant, so how does multi fit into the 1-share-server-per-tenant notion? Or
am I completely wrong about it?



 The specific names we settled on (single_svm and multi_svm) were perhaps
 poorly chosen, because svm is not a term we've used 

[openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-09 Thread Erlon Cruz
Hi all, hi cinder core devs,

I have read IRC discussions about a deadline for driver vendors to have
their CI running and voting by kilo-2, but I didn't find any post on
this list confirming it. Can anyone confirm this?

Thanks,
Erlon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Murray, Paul (HP Cloud)
There is bug when running nova with ironic 
https://bugs.launchpad.net/nova/+bug/1402658

I filed this bug – it has been a problem for us.

The problem is at scheduler side the IronicHostManager will consume all the 
resources for that node whatever
how much resource the instance used. But at compute node side, the 
ResourceTracker won't consume resources
like that, just consume like normal virtual instance. And ResourceTracker will 
update the resource usage once the
instance resource claimed, then scheduler will know there are some free 
resource on that node, then will try to
schedule other new instance to that node

You have summed up the problem nicely – i.e.: the resource availability is 
calculated incorrectly for ironic nodes.

I take look at that, there is NumInstanceFilter, it will limit how many 
instance can schedule to one host. So can
we just use this filter to finish the goal? The max instance is configured by 
option 'max_instances_per_host', we
can make the virt driver to report how many instances it supported. The ironic 
driver can just report max_instances_per_host=1.
And libvirt driver can report max_instance_per_host=-1, that means no limit. 
And then we can just remove the
IronicHostManager, then make the scheduler side is more simpler. Does make 
sense? or there are more trap?


Makes sense, but it solves the wrong problem. The problem is what you said above –
i.e.: the resource availability is calculated incorrectly for ironic nodes.
The right solution would be to fix the resource tracker. The ram resource on an
ironic node has different allocation behavior to a regular node. The test to
see if a new instance fits is the same, but instead of deducting the requested
amount to get the remaining availability it should simply return 0. This should
be dealt with in the new resource objects ([2] below), either by having a
different version of the resource object for ironic nodes (certainly doable and
the most sensible option – resources should be presented according to the
resources on the host), or by having the ram resource object cater for
the difference in its calculations.
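In pseudo-code the difference is roughly this (illustrative only, not the
actual resource tracker code):

    def remaining_ram_mb(free_mb, requested_mb, is_ironic_node):
        # The fit test is identical in both cases.
        if requested_mb > free_mb:
            raise Exception('instance does not fit')
        if is_ironic_node:
            # A bare metal node is consumed whole: nothing is left over
            # for further placements, whatever the flavor size.
            return 0
        return free_mb - requested_mb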
I have a local fix for this that I was too shy to propose upstream because it’s 
a bit hacky and will hopefully be obsolete soon. I could share it if you like.
Paul
[2] https://review.openstack.org/#/c/127609/


From: Sylvain Bauza sba...@redhat.commailto:sba...@redhat.com
Date: 9 January 2015 at 09:17
Subject: Re: [openstack-dev] [Nova][Ironic] Question about scheduling two 
instances to same baremetal node
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org


Le 09/01/2015 09:01, Alex Xu a écrit :
Hi, All

There is bug when running nova with ironic 
https://bugs.launchpad.net/nova/+bug/1402658

The case is simple: one baremetal node with 1024MB ram, then boot two instances 
with 512MB ram flavor.
Those two instances will be scheduling to same baremetal node.

The problem is at scheduler side the IronicHostManager will consume all the 
resources for that node whatever
how much resource the instance used. But at compute node side, the 
ResourceTracker won't consume resources
like that, just consume like normal virtual instance. And ResourceTracker will 
update the resource usage once the
instance resource claimed, then scheduler will know there are some free 
resource on that node, then will try to
schedule other new instance to that node.

I take look at that, there is NumInstanceFilter, it will limit how many 
instance can schedule to one host. So can
we just use this filter to finish the goal? The max instance is configured by 
option 'max_instances_per_host', we
can make the virt driver to report how many instances it supported. The ironic 
driver can just report max_instances_per_host=1.
And libvirt driver can report max_instance_per_host=-1, that means no limit. 
And then we can just remove the
IronicHostManager, then make the scheduler side is more simpler. Does make 
sense? or there are more trap?

Thanks in advance for any feedback and suggestion.


Mmm, I think I disagree with your proposal. Let me explain by the best I can 
why :

tl;dr: Any proposal unless claiming at the scheduler level tends to be wrong

The ResourceTracker should be only a module for providing stats about compute 
nodes to the Scheduler.
How the Scheduler is consuming these resources for making a decision should 
only be a Scheduler thing.

Here, the problem is that the decision making is also shared with the 
ResourceTracker because of the claiming system managed by the context manager 
when booting an instance. It means that we have 2 distinct decision makers for 
validating a resource.

Let's stop to be realistic for a moment and discuss about what could mean a 
decision for something else than a compute node. Ok, let say a volume.
Provided that *something* would report the volume statistics to the Scheduler, 
that would be the 

[openstack-dev] [nova] reckoning time for nova ec2 stack

2015-01-09 Thread Sean Dague
boto 2.35.0 was just released, and it makes hmac-v4 authentication mandatory
for EC2 endpoints (it has been optionally supported for a long time).

Nova's EC2 implementation does not do this.

The short term approach is to pin boto -
https://review.openstack.org/#/c/146049/ - which I think is a fine long
term fix for stable/, but in master, not supporting new boto (which
people are likely to deploy) doesn't really seem like an option.
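
(The pin itself is presumably just an upper bound in the requirements list,
something like the following; see the review for the exact bounds.)

    boto<2.35.0  # 2.35.0 makes hmac-v4 auth mandatory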

https://bugs.launchpad.net/tempest/+bug/1408987 is the bug.

I don't think shipping an EC2 API in Kilo that doesn't work with recent
boto is a thing Nova should do. Do we have volunteers to step up and fix
this, or do we need to get more aggressive about deprecating this interface?

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][client][IMPORTANT] Making Fuel Client a separate project

2015-01-09 Thread Roman Prykhodchenko
Hi folks,

according to the Fuel client refactoring plan [1] it’s necessary to move it out 
to a separate repository on Stackforge.

The process of doing that consists of three major steps:
- Landing a patch [2] to project-config for creating a new Stackforge project
- Creating an initial core group for python-fuelclient
- Moving all un-merged patches from fuel-web to python-fuelclient gerrit repo

The first step of this process has already been started, so I kindly ask all
Fuelers to NOT MERGE any new patches to fuel-web IF THEY touch the fuelclient
folder.
After the project is set up I will let everyone know and explain what to do
next, so I encourage all interested people to check this thread
once in a while.


# References:

1. Re-thinking Fuel Client https://review.openstack.org/#/c/145843 
https://review.openstack.org/#/c/145843
2. Add python-fuelclient to Stackforge https://review.openstack.org/#/c/145843 
https://review.openstack.org/#/c/145843


- romcheg___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Re: [Neutron][L3] Stop agent scheduling without stopping services

2015-01-09 Thread Jay Pipes

Adding [api] topic.

On 01/08/2015 07:47 PM, Kevin Benton wrote:

Is there another openstack service that allows this so we can make the
API consistent between the two when this change is made?


Kevin, thank you VERY much for asking the above question and caring 
about consistency in the APIs!


There was a discussion on the ML about this very area of the APIs, and 
how there is current inconsistency to resolve:


http://openstack-dev.openstack.narkive.com/UbM1J7dH/horizon-all-status-vs-state

You were involved in that thread, so I know you're very familiar with 
the problem domain :)


In the above thread, I mentioned that this really was something that the 
API WG should tackle, and this here ML thread should be a catalyst for 
getting that done.


What we need is a patch proposed to the openstack/api-wg that proposes 
some guidelines around the REST API structure for disabling a 
resource for administrative purposes, with some content that discusses 
the semantic differences between state and status, and makes 
recommendations on the naming of resource attributes that indicate an 
administrative state.
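
As a strawman for the kind of distinction I mean, take a Neutron router, which
already carries both attributes:

    PUT /v2.0/routers/{router_id}
    {
        "router": {
            "admin_state_up": false
        }
    }

Here "admin_state_up" records the administrator's intent, while the separate
read-only "status" field reports what the service actually observes (e.g.
ACTIVE or DOWN).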


Of course, this doesn't really address Jack M's question about whether 
there should be a separate mode (in Jack's terms) to indicate that 
some resource can only be manually assigned and not automatically 
assigned. Personally, I don't feel there is a need for another mode. I 
think that if something has been administratively disabled, an 
administrator should still be able to manually alter that thing.


All the best,
-jay


On Thu, Jan 8, 2015 at 3:09 PM, Carl Baldwin c...@ecbaldwin.net
mailto:c...@ecbaldwin.net wrote:

I added a link to @Jack's post to the ML to the bug report [1].  I am
willing to support @Itsuro with reviews of the implementation and am
willing to consult if you need and would like to ping me.

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1408488

On Thu, Jan 8, 2015 at 7:49 AM, McCann, Jack jack.mcc...@hp.com
mailto:jack.mcc...@hp.com wrote:
  +1 on need for this feature
 
  The way I've thought about this is we need a mode that stops the
*automatic*
  scheduling of routers/dhcp-servers to specific hosts/agents,
while allowing
  manual assignment of routers/dhcp-servers to those hosts/agents,
and where
  any existing routers/dhcp-servers on those hosts continue to
operate as normal.
 
  The maintenance use case was mentioned: I want to evacuate
routers/dhcp-servers
  from a host before taking it down, and having the scheduler add
new routers/dhcp
  while I'm evacuating the node is a) an annoyance, and b) causes a
service blip
  when I have to right away move that new router/dhcp to another host.
 
  The other use case is adding a new host/agent into an existing
environment.
  I want to be able to bring the new host/agent up and into the
neutron config, but
  I don't want any of my customers' routers/dhcp-servers scheduled
there until I've
  had a chance to assign some test routers/dhcp-servers and make
sure the new server
  is properly configured and fully operational.
 
  - Jack
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Sylvain Bauza


Le 09/01/2015 15:07, Murray, Paul (HP Cloud) a écrit :


There is bug when running nova with ironic 
https://bugs.launchpad.net/nova/+bug/1402658


I filed this bug – it has been a problem for us.

The problem is at scheduler side the IronicHostManager will consume 
all the resources for that node whatever


how much resource the instance used. But at compute node side, the 
ResourceTracker won't consume resources


like that, just consume like normal virtual instance. And 
ResourceTracker will update the resource usage once the


instance resource claimed, then scheduler will know there are some 
free resource on that node, then will try to


schedule other new instance to that node

You have summed up the problem nicely – i.e.: the resource 
availability is calculated incorrectly for ironic nodes.


I take look at that, there is NumInstanceFilter, it will limit how 
many instance can schedule to one host. So can


we just use this filter to finish the goal? The max instance is 
configured by option 'max_instances_per_host', we


can make the virt driver to report how many instances it supported. 
The ironic driver can just report max_instances_per_host=1.


And libvirt driver can report max_instance_per_host=-1, that means no 
limit. And then we can just remove the


IronicHostManager, then make the scheduler side is more simpler. Does 
make sense? or there are more trap?


Makes sense, but solves the wrong problem. The problem is what you 
said above – i.e.: the resource availability is calculated incorrectly 
for ironic nodes.


The right solution would be to fix the resource tracker. The ram 
resource on an ironic node has different allocation behavior to a 
regular node. The test to see if a new instance fits is the same, but 
instead of deducting the requested amount to get the remaining 
availability it should simply return 0. This should be dealt with in 
the new resource objects ([2] below) by either having different 
version of the resource object for ironic nodes (certainly doable and 
the most sensible option – resources should be presented according to 
the resources on the host). Alternatively the ram resource object 
should cater for the difference in its calculations.


I have a local fix for this that I was too shy to propose upstream 
because it’s a bit hacky and will hopefully be obsolete soon. I could 
share it if you like.


Paul

[2] https://review.openstack.org/#/c/127609/



Agreed, I think that [2] will help a lot. Until it's done, are we really 
sure we want to fix the bug? It can be worked around by creating flavors 
that take at least half of a compute node's resources, and I really would 
like to avoid adding more tech debt.


-Sylvain


From: *Sylvain Bauza* sba...@redhat.com mailto:sba...@redhat.com
Date: 9 January 2015 at 09:17
Subject: Re: [openstack-dev] [Nova][Ironic] Question about scheduling 
two instances to same baremetal node
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
mailto:openstack-dev@lists.openstack.org


Le 09/01/2015 09:01, Alex Xu a écrit :

Hi, All

There is bug when running nova with ironic
https://bugs.launchpad.net/nova/+bug/1402658

The case is simple: one baremetal node with 1024MB ram, then boot
two instances with 512MB ram flavor.

Those two instances will be scheduling to same baremetal node.

The problem is at scheduler side the IronicHostManager will
consume all the resources for that node whatever

how much resource the instance used. But at compute node side, the
ResourceTracker won't consume resources

like that, just consume like normal virtual instance. And
ResourceTracker will update the resource usage once the

instance resource claimed, then scheduler will know there are some
free resource on that node, then will try to

schedule other new instance to that node.

I take look at that, there is NumInstanceFilter, it will limit how
many instance can schedule to one host. So can

we just use this filter to finish the goal? The max instance is
configured by option 'max_instances_per_host', we

can make the virt driver to report how many instances it
supported. The ironic driver can just report max_instances_per_host=1.

And libvirt driver can report max_instance_per_host=-1, that means
no limit. And then we can just remove the

IronicHostManager, then make the scheduler side is more simpler.
Does make sense? or there are more trap?

Thanks in advance for any feedback and suggestion.

Mmm, I think I disagree with your proposal. Let me explain by the best 
I can why :


tl;dr: Any proposal unless claiming at the scheduler level tends to be 
wrong


The ResourceTracker should be only a module for providing stats about 
compute nodes to the Scheduler.
How the Scheduler is consuming these resources for making a decision 
should only be a Scheduler thing.


Here, the problem is that the decision making is also 

Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Sylvain Bauza


Le 09/01/2015 14:58, Alex Xu a écrit :



2015-01-09 17:17 GMT+08:00 Sylvain Bauza sba...@redhat.com 
mailto:sba...@redhat.com:



Le 09/01/2015 09:01, Alex Xu a écrit :

Hi, All

There is bug when running nova with ironic
https://bugs.launchpad.net/nova/+bug/1402658

The case is simple: one baremetal node with 1024MB ram, then boot
two instances with 512MB ram flavor.
Those two instances will be scheduling to same baremetal node.

The problem is at scheduler side the IronicHostManager will
consume all the resources for that node whatever
how much resource the instance used. But at compute node side,
the ResourceTracker won't consume resources
like that, just consume like normal virtual instance. And
ResourceTracker will update the resource usage once the
instance resource claimed, then scheduler will know there are
some free resource on that node, then will try to
schedule other new instance to that node.

I take look at that, there is NumInstanceFilter, it will limit
how many instance can schedule to one host. So can
we just use this filter to finish the goal? The max instance is
configured by option 'max_instances_per_host', we
can make the virt driver to report how many instances it
supported. The ironic driver can just report
max_instances_per_host=1.
And libvirt driver can report max_instance_per_host=-1, that
means no limit. And then we can just remove the
IronicHostManager, then make the scheduler side is more simpler.
Does make sense? or there are more trap?

Thanks in advance for any feedback and suggestion.




Mmm, I think I disagree with your proposal. Let me explain by the
best I can why :

tl;dr: Any proposal unless claiming at the scheduler level tends
to be wrong

The ResourceTracker should be only a module for providing stats
about compute nodes to the Scheduler.
How the Scheduler is consuming these resources for making a
decision should only be a Scheduler thing.


agreed, but we can't implement this for now, for the reason you 
described below.



Here, the problem is that the decision making is also shared with
the ResourceTracker because of the claiming system managed by the
context manager when booting an instance. It means that we have 2
distinct decision makers for validating a resource.


Totally agreed! This is the root cause.

Let's stop to be realistic for a moment and discuss about what
could mean a decision for something else than a compute node. Ok,
let say a volume.
Provided that *something* would report the volume statistics to
the Scheduler, that would be the Scheduler which would manage if a
volume manager could accept a volume request. There is no sense to
validate the decision of the Scheduler on the volume manager, just
maybe doing some error management.

We know that the current model is kinda racy with Ironic because
there is a 2-stage validation (see [1]). I'm not in favor of
complexifying the model, but rather put all the claiming logic in
the scheduler, which is a longer path to win, but a safer one.


Yeah, I have thought about adding the same resource consumption at the 
compute manager side, but it's ugly because we would implement ironic's 
resource consuming method in two places. If we move the claiming into 
the scheduler the thing becomes easy: we can just provide some extension 
point for different consuming methods (if I understand the discussion 
in IRC right). As gantt will be a standalone service, validating a 
resource shouldn't be spread across different services. So I agree with you.


But for now, as you said, this is a long term plan. We can't provide 
different resource consuming at the compute manager side now, and we 
also can't move the claiming into the scheduler now. So the method I 
proposed is easier for now; at least we won't have a different resource 
consuming way between the scheduler (IronicHostManager) and compute 
(ResourceTracker) for ironic, and ironic can work fine.


The method I propose has a little problem. When all the nodes are 
allocated, we can still see some free resources if the flavor's 
resources are less than the baremetal's resources. But that can be 
addressed by exposing max_instances through the hypervisor API (running 
instances are already exposed), so the user will know why no more 
instances can be allocated. And if we can configure max_instances for 
each node, that sounds useful for operators too :)


I think that if you don't want to wait for the claiming system to happen 
in the Scheduler, then at least you need to fix the current way of using 
the ResourceTracker, like what Jay Pipes is working on in his spec.



-Sylvain



-Sylvain

[1] https://bugs.launchpad.net/nova/+bug/1341420


Thanks
Alex


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
mailto:OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Andreas Jaeger
On 01/09/2015 02:25 PM, Ihar Hrachyshka wrote:
 Hi all,
 
 I assumed that we still support py26 for clients, but then I saw [1]
 that removed corresponding tox environment from ironic client.
 
 What's our take on that? Shouldn't clients still support Python 2.6?
 
 [1]:
 https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1

Indeed, clients are supposed to continue supporting 2.6 as mentioned here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049111.html

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Ihar Hrachyshka

On 01/09/2015 02:33 PM, Andreas Jaeger wrote:

On 01/09/2015 02:25 PM, Ihar Hrachyshka wrote:

Hi all,

I assumed that we still support py26 for clients, but then I saw [1]
that removed corresponding tox environment from ironic client.

What's our take on that? Shouldn't clients still support Python 2.6?

[1]:
https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1

Indeed, clients are supposed to continue supporting 2.6 as mentioned here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049111.html

Andreas


OK, thanks. Reverting: https://review.openstack.org/#/c/146083/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The state of nova-network to neutron migration

2015-01-09 Thread Sean Dague
On 01/09/2015 05:04 AM, Thierry Carrez wrote:
 Maru Newby wrote:
 On Jan 8, 2015, at 3:54 PM, Sean Dague s...@dague.net wrote:

 The crux of it comes from the fact that the operator voice (especially
 those folks with large nova-network deploys) wasn't represented there.
 Once we got back from the mid-cycle and brought it to the list, there
 was some very understandable push back on deprecating without a
 migration plan.

 I think it’s clear that a migration plan is required.  An automated 
 migration, not so much.
 
 The solution is not black or white.
 
 Yes, operators would generally prefer an instant, automated, no-downtime
 hot migration that magically moves them to the new world order. Yes,
 developers would generally prefer to just document a general cold
 procedure that operators could follow to migrate, warning that their
 mileage may vary.
 
 The trade-off solution we came up with last cycle is to have developers
 and operators converge on a clear procedure with reasonable/acceptable
 downtime, potentially assisted by new features and tools. It's really
 not a us vs. them thing. It's a collaborative effort where operators
 agree on what level of pain they can absorb and developers help to
 reduce that pain wherever reasonably possible.
 
 This convergence effort is currently rebooted because it has stalled. We
 still need to agree on the reasonable trade-off procedure. We still need
 to investigate if there is any tool or simple feature we can add to
 Neutron or Nova to make some parts of that procedure easier and less
 painful.
 
 So we are not bringing back the magic upgrade pony requirement on the
 table. We are just rebooting the effort to come to a reasonable solution
 for everyone.

If we were standing at a place with a detailed manual upgrade document
that explained how to do minimal VM downtime, that a few ops had gone
through and proved out, that would be one thing. And we could figure out
which parts made sense to put tooling around to make this easier for
everyone.

But we seem far from there.

My suggestion is to start with a detailed document, figure out that it
works, and build automation around that process.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Dmitry Tantsur

On 01/09/2015 02:37 PM, Ihar Hrachyshka wrote:

On 01/09/2015 02:33 PM, Andreas Jaeger wrote:

On 01/09/2015 02:25 PM, Ihar Hrachyshka wrote:

Hi all,

I assumed that we still support py26 for clients, but then I saw [1]
that removed corresponding tox environment from ironic client.

What's our take on that? Shouldn't clients still support Python 2.6?

[1]:
https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1


Indeed, clients are supposed to continue supporting 2.6 as mentioned
here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049111.html


Andreas


OK, thanks. Reverting: https://review.openstack.org/#/c/146083/
Thank you for your time folks, but this is not a client :) It's an 
alternative ramdisk for Ironic.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

2015-01-09 Thread Zane Bitter

On 09/01/15 01:07, Angus Salkeld wrote:

I am not in favor of the --continue as an API. I'd suggest responding to
resource timeouts and if there is no response from the task, then
re-start (continue)
the task.


Yeah, I am not in favour of a new API either. In fact, I believe we 
already have this functionality: if you do another update with the same 
template and parameters then it will break the lock and continue the 
update if the engine running the previous update has failed. And when we 
switch over to convergence it will still do the Right Thing without any 
extra implementation effort.


There is one improvement we can make to the API though: in Juno, Ton 
added a PATCH method to stack update such that you can reuse the 
existing parameters without specifying them again. We should extend this 
to the template also, so you wouldn't have to supply any data to get 
Heat to start another update with the same template and parameters.
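
To illustrate the end state I have in mind, a fresh update re-using everything
Heat has stored could then be kicked off with an empty PATCH (a sketch of the
*proposed* behaviour -- today the body still has to carry a template;
endpoint/token names below are placeholders):

    import requests

    requests.patch(
        '%s/stacks/%s/%s' % (heat_endpoint, stack_name, stack_id),
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/json'},
        data='{}')  # no template, no parameters: reuse what is stored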


I'm not sure if there is a blueprint for this already; co-ordinate with 
Ton if you are planning to work on it.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 17:17 GMT+08:00 Sylvain Bauza sba...@redhat.com:


 Le 09/01/2015 09:01, Alex Xu a écrit :

 Hi, All

  There is bug when running nova with ironic
 https://bugs.launchpad.net/nova/+bug/1402658

  The case is simple: one baremetal node with 1024MB ram, then boot two
 instances with 512MB ram flavor.
 Those two instances will be scheduling to same baremetal node.

  The problem is at scheduler side the IronicHostManager will consume all
 the resources for that node whatever
 how much resource the instance used. But at compute node side, the
 ResourceTracker won't consume resources
 like that, just consume like normal virtual instance. And ResourceTracker
 will update the resource usage once the
 instance resource claimed, then scheduler will know there are some free
 resource on that node, then will try to
 schedule other new instance to that node.

  I take look at that, there is NumInstanceFilter, it will limit how many
 instance can schedule to one host. So can
 we just use this filter to finish the goal? The max instance is configured
 by option 'max_instances_per_host', we
 can make the virt driver to report how many instances it supported. The
 ironic driver can just report max_instances_per_host=1.
 And libvirt driver can report max_instance_per_host=-1, that means no
 limit. And then we can just remove the
 IronicHostManager, then make the scheduler side is more simpler. Does make
 sense? or there are more trap?

  Thanks in advance for any feedback and suggestion.



 Mmm, I think I disagree with your proposal. Let me explain by the best I
 can why :

 tl;dr: Any proposal unless claiming at the scheduler level tends to be
 wrong

 The ResourceTracker should be only a module for providing stats about
 compute nodes to the Scheduler.
 How the Scheduler is consuming these resources for making a decision
 should only be a Scheduler thing.


Agreed, but we can't implement this for now; the reason is the one you
describe below.



 Here, the problem is that the decision making is also shared with the
 ResourceTracker because of the claiming system managed by the context
 manager when booting an instance. It means that we have 2 distinct decision
 makers for validating a resource.


Totally agreed! This is the root cause.


  Let's stop being realistic for a moment and discuss what a decision could
  mean for something other than a compute node. OK, let's say a volume.
  Provided that *something* reported the volume statistics to the
  Scheduler, it would be the Scheduler that decides whether a volume
  manager can accept a volume request. There is no sense in re-validating
  the Scheduler's decision on the volume manager, beyond perhaps some
  error handling.

  We know that the current model is kinda racy with Ironic because there is
  a 2-stage validation (see [1]). I'm not in favor of complicating the
  model, but rather of putting all the claiming logic in the scheduler, which
  is a longer path to win, but a safer one.


Yeah, I have thought about doing the same resource consumption on the compute
manager side, but it's ugly because we would implement ironic's
resource-consuming method in two places. If we move the claiming into the
scheduler the thing becomes easy; we can just provide an extension point for
different consuming methods (if I understood the IRC discussion correctly).
Since gantt will be a standalone service, validating a resource shouldn't be
spread across different services. So I agree with you.

But for now, as you said, this is a long-term plan. We can't provide a
different resource-consumption model on the compute manager side yet, and we
can't move the claiming into the scheduler yet either. So the method I
proposed is easier for now; at least we won't have different
resource-consuming behaviour between the scheduler (IronicHostManager) and
the compute node (ResourceTracker) for ironic, and ironic will work fine.

The method I propose has one small problem: when all the nodes are allocated,
we will still see some resources reported as free if the flavor's resources
are smaller than the baremetal node's. But that can be addressed by exposing
max_instances through the hypervisor API (running instances are already
exposed), so users will know why they can't allocate more instances. And if
we can configure max_instances per node, that sounds useful for operators
too :)
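
To make that concrete, here is a rough sketch of a NumInstancesFilter-style
filter where the limit comes from what the virt driver reports rather than a
global config option. The max_instances_per_host attribute on host_state is
hypothetical -- it is not something Nova exposes today:

    from nova.scheduler import filters


    class DriverNumInstancesFilter(filters.BaseHostFilter):
        """Reject hosts that already run as many instances as the
        (hypothetical) driver-reported limit allows."""

        def host_passes(self, host_state, filter_properties):
            # Hypothetical driver-reported limit: -1 means "no limit"
            # (libvirt), while the ironic driver would report 1.
            limit = getattr(host_state, 'max_instances_per_host', -1)
            if limit < 0:
                return True
            return host_state.num_instances < limit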



 -Sylvain

 [1]  https://bugs.launchpad.net/nova/+bug/1341420

  Thanks
 Alex





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] offlist: The scope of OpenStack wiki [all]

2015-01-09 Thread Anne Gentle
Oh hi list!

Feel free to discuss the "a project just getting started" content that ends
up on the wiki -- where would that go?

Anne

On Thu, Jan 8, 2015 at 9:51 PM, Anne Gentle a...@openstack.org wrote:

 Hi Stef, thanks for writing this up. One aspect this proposal doesn't
 address is the ungoverned content for projects that are either in
 stackforge, pre-stackforge, incubating, or have no intention of any
 governance but want to use the openstack wiki. What can we do if one of
 those groups raises the issue? We can talk more about it tomorrow but it's
 problematic. Not unsolvable but lack of governance is one reason to be on
 the wiki.
 Anne

 On Thu, Jan 8, 2015 at 12:31 PM, Stefano Maffulli stef...@openstack.org
 wrote:

 hello folks,

  TL;DR: Many wiki pages and categories are now maintained elsewhere, and to
  avoid confusing newcomers we need to agree on a new scope for the
  wiki. The suggestion below is to limit its scope to content that doesn't
  need/want peer review and is not hosted elsewhere (no duplication).

  The wiki served for many years the purpose of a 'poor man's CMS' when we
 didn't have an easy way to collaboratively create content. So the wiki
 ended up hosting pages like 'Getting started with OpenStack', demo
 videos, How to contribute, mission, to document our culture / shared
 understandings (4 opens, release cycle, use of blueprints, stable branch
 policy...), to maintain the list of Programs, meetings/teams, blueprints
 and specs, lots of random documentation and more.

 Lots of the content originally placed on the wiki was there because
 there was no better place. Now that we have more mature content and
 processes, these are finding their way out of the wiki like:

   * http://governance.openstack.org
   * http://specs.openstack.org
   * http://docs.openstack.org/infra/manual/

 Also, the Introduction to OpenStack is maintained on
 www.openstack.org/software/ together with introductory videos and other
 basic material. A redesign of openstack.org/community and the new portal
 groups.openstack.org are making even more wiki pages obsolete.

 This makes the wiki very confusing to newcomers and more likely to host
 conflicting information.

  I would propose to restrict the scope of the wiki to anything that
  doesn't need or want to be peer-reviewed. Things like:

   * agendas for meetings, sprints, etc
   * list of etherpads for summits
   * quick prototypes of new programs (mentors, upstream training) before
 they find a stable home (which can still be the wiki)

 Also, documentation for contributors and users should not be on the
 wiki, but on docs.openstack.org (where it can be found more easily).

 If nobody objects, I'll start by proposing a new home page design and
 start tagging content that may be moved elsewhere.

 /stef


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-09 Thread Ivan Kolodyazhny
Hi Erlon,

We've got a mailing-list thread [1] about it and some details in the wiki [2].
Anyway, need to get confirmation from our core devs and/or Mike.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html
[2]
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Testing_requirements_for_Kilo_release_and_beyond

Regards,
Ivan Kolodyazhny

On Fri, Jan 9, 2015 at 2:26 PM, Erlon Cruz sombra...@gmail.com wrote:

 Hi all, hi cinder core devs,

 I have read on IRC discussions about a deadline for drivers vendors to
 have their CI running and voting until kilo-2, but I didn't find any post
 on this list to confirm this. Can anyone confirm this?

 Thanks,
 Erlon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] python 2.6 for clients

2015-01-09 Thread Ihar Hrachyshka

Hi all,

I assumed that we still support py26 for clients, but then I saw [1], which
removed the corresponding tox environment from the ironic client.


What's our take on that? Shouldn't clients still support Python 2.6?

[1]: 
https://github.com/openstack/ironic-python-agent/commit/d95a99d5d1a62ef5c085ce20ec07d960a3f23ac1


/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Using DevStack for multi-node setup

2015-01-09 Thread Mathieu Rohon
Hi Danny,

If you're using Neutron, you can use the option:

NEUTRON_CREATE_INITIAL_NETWORKS=False

in your local.conf.
This way no router or network is created. You have to create them manually,
and of course you can do that once every Neutron agent is up.
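
For example, a minimal python-neutronclient sketch of doing it by hand
(credentials, names and CIDR below are just placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # External network + subnet (what devstack would otherwise create)
    net = neutron.create_network(
        {'network': {'name': 'public', 'router:external': True}})['network']
    neutron.create_subnet(
        {'subnet': {'network_id': net['id'], 'ip_version': 4,
                    'cidr': '172.24.4.0/24', 'enable_dhcp': False}})

    # Router with its gateway on the external network
    router = neutron.create_router({'router': {'name': 'router1'}})['router']
    neutron.add_gateway_router(router['id'], {'network_id': net['id']})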

Mathieu

On Thu, Jan 8, 2015 at 3:46 PM, Kashyap Chamarthy kcham...@redhat.com
wrote:

 On Mon, Jan 05, 2015 at 08:20:48AM -0500, Sean Dague wrote:
  On 01/03/2015 04:41 PM, Danny Choi (dannchoi) wrote:
   Hi,
  
   I’m using DevStack to deploy OpenStack on a multi-node setup:
   Controller, Network, Compute as 3 separate nodes
  
   Since the Controller node is stacked first, during which the Network
   node is not yet ready, it fails to create the router instance and the
   public network.
   Both have to be created manually.
  
   Is this the expected behavior?  Is there a workaround to have DevStack
   create them?
 
  The only way folks tend to run multinode devstack is Controller +
  Compute nodes. And that sequence of creating an all in one controller,
  plus additional compute nodes later, works.

 Sean, I wonder if you have a pointer to an example CI gate job (assuming
 there's one) for the above with Neutron networking?


 --
 /kashyap

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-09 Thread Dean Troyer
On Fri, Jan 9, 2015 at 4:22 AM, Thierry Carrez thie...@openstack.org
wrote:

 This is probably a very dumb question, but could you explain why
 keystoneclient.middleware can't map to keystonemiddleware functions
 (adding keystonemiddleware as a dependency of future keystoneclient)? At
 first glance that would allow to remove dead duplicated code while
 ensuring compatibility for as long as we need to support those old
 releases...


Part of the reason for moving keystonemiddleware out of keystoneclient was
to do the reverse: have a keystoneclient install NOT bring in
auth_token.  I doubt there will be anything other than servers that need
keystonemiddleware installed whereas quite a few clients will not want it
at all.

I'm on the fence about changing stable requirements...if we imagine
keystonemiddleware is not an OpenStack project this wouldn't be the first
time we've had to do that when things change out from under us.  But I hate
doing that to ourselves...

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-09 Thread Eduard Matei
Hi all,
Back with the same error.
I did a complete (clean) install based on rasselin's tutorial, and now I have a
working Jenkins master + a dedicated cloud provider.
Testing with noop looks OK, but dsvm-tempest-full returns NOT_REGISTERED.

Here is some debug.log:

2015-01-09 14:08:06,772 DEBUG zuul.IndependentPipelineManager: Found job
dsvm-tempest-full for change Change 0x7f8db86278d0 139585,15
2015-01-09 14:08:06,773 INFO zuul.Gearman: Launch job dsvm-tempest-full for
change Change 0x7f8db86278d0 139585,15 with dependent changes []
2015-01-09 14:08:06,773 DEBUG zuul.Gearman: Custom parameter function used
for job dsvm-tempest-full, change: Change 0x7f8db86278d0 139585,15,
params: {'BASE_LOG_PATH': '85/139585/15/check', 'ZUUL_PIPELINE': 'check',
'OFFLINE_NODE_WHEN_COMPLETE': '1', 'ZUUL_UUID':
'fa4ca39e02b14d1d864725441e301eb0', 'LOG_PATH':
'85/139585/15/check/dsvm-tempest-full/fa4ca39', 'ZUUL_CHANGE_IDS':
u'139585,15', 'ZUUL_PATCHSET': '15', 'ZUUL_BRANCH': u'master', 'ZUUL_REF':
u'refs/zuul/master/Z4efb72c817fb4ab39b67eb93fa8177ea', 'ZUUL_COMMIT':
u'97c142345b12bdf6a48c89b00f0d4d7811ce4a55', 'ZUUL_URL': u'
http://10.100.128.3/p/', 'ZUUL_CHANGE': '139585', 'ZUUL_CHANGES':
u'openstack-dev/sandbox:master:refs/changes/85/139585/15', 'ZUUL_PROJECT':
'openstack-dev/sandbox'}
...
2015-01-09 14:08:06,837 DEBUG zuul.Gearman: Function
build:dsvm-tempest-full is not registered
2015-01-09 14:08:06,837 ERROR zuul.Gearman: Job gear.Job 0x7f8db16e5590
handle: None name: build:dsvm-tempest-full unique:
fa4ca39e02b14d1d864725441e301eb0 is not registered with Gearman
2015-01-09 14:08:06,837 INFO zuul.Gearman: Build gear.Job 0x7f8db16e5590
handle: None name: build:dsvm-tempest-full unique:
fa4ca39e02b14d1d864725441e301eb0 complete, result NOT_REGISTERED
2015-01-09 14:08:06,837 DEBUG zuul.Scheduler: Adding complete event for
build: Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on
Worker Unknown
2015-01-09 14:08:06,837 DEBUG zuul.Scheduler: Done adding complete event
for build: Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on
Worker Unknown
2015-01-09 14:08:06,837 DEBUG zuul.IndependentPipelineManager: Adding build
Build fa4ca39e02b14d1d864725441e301eb0 of dsvm-tempest-full on Worker
Unknown of job dsvm-tempest-full to item QueueItem 0x7f8db17ba310 for
Change 0x7f8db86278d0 139585,15 in check

So it seems that Zuul sees the job, but Gearman reports it as not registered.
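
For reference, a small sketch of checking what is actually registered with
gearman, using its plain-text admin protocol on the default port (the same
'status' query Ramy mentions below, just scripted):

    import socket

    sock = socket.create_connection(('127.0.0.1', 4730), timeout=5)
    sock.sendall(b'status\n')

    data = b''
    while not data.endswith(b'.\n'):   # admin responses end with a lone '.'
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    sock.close()

    # Each line: <function> <total> <running> <available workers>
    for line in data.decode().splitlines():
        if line.startswith('build:'):
            print(line)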

Any idea how to register it? I can see it in the Jenkins GUI.
The only warning I see in the Jenkins GUI is:
There’s no slave/cloud that matches this assignment. Did you mean ‘master’
instead of ‘devstack_slave’?

In the cloud provider GUI I see instances with names like
d-p-c-TIMESTAMP.template.openstack.org spawning and running, and some being
deleted.

Thanks,

Eduard


On Tue, Jan 6, 2015 at 7:29 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Gearman worker threads are what is needed to actually run the job. You
 need to type ‘status’ to get the results. It shouldn’t be empty since you
 stated the job actually ran (and failed tempest).

 Publishing the result is controlled here in the zuul layout.yaml file [1].
 Make sure you’re not using the ‘silent’ pipeline which (obviously) won’t
 publish the result. Manual is here [2]

 You’ll need a log server to host the uploaded log files. You can set one
 up like –infra’s using this [3] or WIP [4]



 Ramy



 [1]
 https://github.com/rasselin/os-ext-testing-data/blob/master/etc/zuul/layout.yaml#L22

 [2] http://ci.openstack.org/zuul/index.html

 [3]
 https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_log_server.sh

 [4] https://review.openstack.org/#/c/138913/



 *From:* Punith S [mailto:punit...@cloudbyte.com]
 *Sent:* Monday, January 05, 2015 10:22 PM
 *To:* Asselin, Ramy
 *Cc:* Eduard Matei; OpenStack Development Mailing List (not for usage
 questions)

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



  Thanks Ramy :)

  I have set up the CI, but our dsvm-tempest-full job is failing due to some
  failures when running tempest.
  But how do we publish these failures to the sandbox project?
  My gearman service is not showing any worker threads:

 root@cimaster:/# telnet 127.0.0.1 4730

 Trying 127.0.0.1...

 Connected to 127.0.0.1.

 Escape character is '^]'.



 thanks



 On Sun, Jan 4, 2015 at 10:23 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Did you try asking the friendly folks on IRC freenode #openstack-infra?



 You can also try:

 Rebooting.

 Delete all the Jenkins jobs and reloading them.



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, December 26, 2014 1:30 AM
 *To:* Punith S
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 Asselin, Ramy


 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 @Asselin:

  Regarding the few items you can try: I tried everything, the job still
 appears NOT_REGISTERED.

 I'll see next week if i can do a 

Re: [openstack-dev] [stable] Stable branch status and proposed stable-maint members mentoring

2015-01-09 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 On 01/09/2015 11:44 AM, Thierry Carrez wrote:
 Another topic is the mentoring of new propose $PROJECT-stable-maint
 members. We have a number of proposed people to contact and introduce
 the stable branch policy to (before we add them to the group):

 Erno Kuvaja (glance-stable-maint)
 Amrith Kumar (trove-stable-maint)
 Lin-Hua Cheng (horizon-stable-maint)

 Is someone in stable-maint-core interested in reaching out to them ? If
 not I'll probably handle that.

 I would expect stable liaisons to handle that. Don't we have those
 assigned to the projects?

Well, some of them are the proposed liaisons :) Also I think we should
not dilute the message too much and, as guardians of the policy, apply
the brainwashing directly and get a clear feel of how well it is
understood. I don't mind doing it (it's just an email with offer for QA
on IRC) if nobody wants it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][tc] Removal Plans for keystoneclient.middleware.auth_token

2015-01-09 Thread Thierry Carrez
Dean Troyer wrote:
 On Fri, Jan 9, 2015 at 4:22 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:
 
 This is probably a very dumb question, but could you explain why
 keystoneclient.middleware can't map to keystonemiddleware functions
 (adding keystonemiddleware as a dependency of future keystoneclient)? At
 first glance that would allow to remove dead duplicated code while
 ensuring compatibility for as long as we need to support those old
 releases...
 
  Part of the reason for moving keystonemiddleware out of
  keystoneclient was to do the reverse: have a keystoneclient install
 NOT bring in auth_token.  I doubt there will be anything other than
 servers that need keystonemiddleware installed whereas quite a few
 clients will not want it at all.

Sure, that should clearly still be the end goal... The idea would be to
keep deprecated functions in the client lib until we consider those
releases that need them truly dead. Not saying it's the best option
ever, was just curious why it was not listed in the proposed options :)
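
Just to illustrate the idea, a hypothetical shim at
keystoneclient/middleware/auth_token.py that re-exports the
keystonemiddleware implementation under the old name (a sketch, not an
actual proposed patch):

    # keystoneclient/middleware/auth_token.py (hypothetical shim)
    import warnings

    from keystonemiddleware import auth_token as _auth_token

    warnings.warn(
        'keystoneclient.middleware.auth_token is deprecated; '
        'use keystonemiddleware.auth_token instead.',
        DeprecationWarning)

    # Keep existing imports and paste pipelines working.
    AuthProtocol = _auth_token.AuthProtocol
    filter_factory = _auth_token.filter_factory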

 I'm on the fence about changing stable requirements...if we imagine
 keystonemiddleware is not an OpenStack project this wouldn't be the
 first time we've had to do that when things change out from under us. 
 But I hate doing that to ourselves...

This is not about changing stable requirements: havana servers would
still depend on python-keystoneclient like they always did. If you use
an old version of that you are covered, and if you use a new version of
that, *that* would pull keystonemiddleware as a new requirement. So this
is about temporarily changing future keystoneclient requirements to
avoid double maintenance of code while preserving compatibility.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] packaging problem production build question

2015-01-09 Thread Sullivan, Jon Paul
 -Original Message-
 From: Jeremy Stanley [mailto:fu...@yuggoth.org]
 Sent: 08 January 2015 22:26
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [horizon] packaging problem production
 build question
 
 On 2015-01-08 15:11:24 -0700 (-0700), David Lyle wrote:
 [...]
  For those running CI environments, remote access will likely be
  required for bower to work. Although, it seems something like
  private-bower [1] could be utilized to leverage a local mirror where
  access or network performance are issues.
 [...]
 
 There's a very good chance we'll want to do something similar for the
 official OpenStack CI jobs as well. We already go to extreme lengths to
 pre-cache and locally mirror things which software would otherwise try
 to retrieve from random parts of the Internet during setup for tests. If
 your software retrieves files from 10 random places over the network,
 the chances of your job failing because of one of them being offline is
 multiplied by 10. As that number grows, so grows your lack of
 testability.

Local mirrors are also used to control the version of software included in a 
build, so that builds can be repeatable independently of changes to external 
sources.  Is this supported by bower?

 --
 Jeremy Stanley
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vancouver Design Summit format changes

2015-01-09 Thread Thierry Carrez
Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the
Design Summit format for Vancouver, changes on which we'd very much like
to hear your feedback.

The problems we are trying to solve are the following:
- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and
the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more
work done

While some sessions benefit from large exposure, loads of feedback and
large rooms, some others are just workgroup-oriented work sessions that
benefit from smaller rooms, less exposure and more whiteboards. Smaller
rooms are also cheaper space-wise, so they allow us to scale more easily
to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at
the Design Summit. Ops feedback is in my opinion part of the design of
OpenStack, so the Ops Summit would become a track within the
forward-looking Design Summit. Tracks may use two separate types of
sessions:

* Fishbowl sessions
Those sessions are for open discussions where a lot of participation and
feedback is desirable. Those would happen in large rooms (100 to 300
people, organized in fishbowl style with a projector). Those would have
catchy titles and appear on the general Design Summit schedule. We would
have space for 6 or 7 of those in parallel during the first 3 days of
the Design Summit (we would not run them on Friday, to reproduce the
successful Friday format we had in Paris).

* Working sessions
Those sessions are for a smaller group of contributors to get specific
work done or prioritized. Those would happen in smaller rooms (20 to 40
people, organized in boardroom style with loads of whiteboards). Those
would have a blanket title (like infra team working session) and
redirect to an etherpad for more precise and current content, which
should limit out-of-team participation. Those would replace project
pods. We would have space for 10 to 12 of those in parallel for the
first 3 days, and 18 to 20 of those in parallel on the Friday (by
reusing fishbowl rooms).

Each project track would request some mix of sessions (We'd like 4
fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
Friday) and the TC would arbitrate how to allocate the limited
resources. Agenda for the fishbowl sessions would need to be published
in advance, but agenda for the working sessions could be decided
dynamically from an etherpad agenda.

By making larger use of smaller spaces, we expect that setup to let us
accommodate the needs of more projects. By merging the two separate Ops
Summit and Design Summit events, it should make the Ops feedback an
integral part of the Design process rather than a second-class citizen.
By creating separate working session rooms, we hope to evolve the pod
concept into something where it's easier for teams to get work done
(less noise, more whiteboards, clearer agenda).

What do you think ? Could that work ? If not, do you have alternate
suggestions ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Ironic] Question about scheduling two instances to same baremetal node

2015-01-09 Thread Alex Xu
2015-01-09 22:07 GMT+08:00 Murray, Paul (HP Cloud) pmur...@hp.com:

   There is bug when running nova with ironic
 https://bugs.launchpad.net/nova/+bug/1402658



 I filed this bug – it has been a problem for us.



 The problem is at scheduler side the IronicHostManager will consume all
 the resources for that node whatever

 how much resource the instance used. But at compute node side, the
 ResourceTracker won't consume resources

 like that, just consume like normal virtual instance. And ResourceTracker
 will update the resource usage once the

 instance resource claimed, then scheduler will know there are some free
 resource on that node, then will try to

 schedule other new instance to that node



 You have summed up the problem nicely – i.e.: the resource availability is
 calculated incorrectly for ironic nodes.



 I take look at that, there is NumInstanceFilter, it will limit how many
 instance can schedule to one host. So can

 we just use this filter to finish the goal? The max instance is
 configured by option 'max_instances_per_host', we

 can make the virt driver to report how many instances it supported. The
 ironic driver can just report max_instances_per_host=1.

 And libvirt driver can report max_instance_per_host=-1, that means no
 limit. And then we can just remove the

 IronicHostManager, then make the scheduler side is more simpler. Does
 make sense? or there are more trap?





 Makes sense, but solves the wrong problem. The problem is what you said
 above – i.e.: the resource availability is calculated incorrectly for
 ironic nodes.

 The right solution would be to fix the resource tracker. The ram resource
 on an ironic node has different allocation behavior to a regular node. The
 test to see if a new instance fits is the same, but instead of deducting
 the requested amount to get the remaining availability it should simply
 return 0. This should be dealt with in the new resource objects ([2] below)
 by either having different version of the resource object for ironic nodes
 (certainly doable and the most sensible option – resources should be
 presented according to the resources on the host). Alternatively the ram
 resource object should cater for the difference in its calculations.

 Dang it, I reviewed that spec... why didn't I find that :( Totally beat
me!

  I have a local fix for this that I was too shy to propose upstream
 because it’s a bit hacky and will hopefully be obsolete soon. I could share
 it if you like.

 Paul

 [2] https://review.openstack.org/#/c/127609/
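
A tiny sketch of the behaviour described above: the fit test stays the same,
but consuming the resource on an ironic node leaves nothing free. Class and
method names are illustrative only, not the API proposed in [2]:

    class RamResource(object):
        """Normal (e.g. libvirt) behaviour: deduct what was requested."""

        def __init__(self, total_mb):
            self.free_mb = total_mb

        def fits(self, requested_mb):
            return requested_mb <= self.free_mb

        def consume(self, requested_mb):
            self.free_mb -= requested_mb


    class IronicRamResource(RamResource):
        """A baremetal node is all-or-nothing: once an instance lands on
        it, nothing is left regardless of the flavor size."""

        def consume(self, requested_mb):
            self.free_mb = 0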





 From: *Sylvain Bauza* sba...@redhat.com
 Date: 9 January 2015 at 09:17
 Subject: Re: [openstack-dev] [Nova][Ironic] Question about scheduling two
 instances to same baremetal node
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org



 Le 09/01/2015 09:01, Alex Xu a écrit :

  Hi, All



 There is bug when running nova with ironic
 https://bugs.launchpad.net/nova/+bug/1402658



 The case is simple: one baremetal node with 1024MB ram, then boot two
 instances with 512MB ram flavor.

 Those two instances will be scheduling to same baremetal node.



 The problem is at scheduler side the IronicHostManager will consume all
 the resources for that node whatever

 how much resource the instance used. But at compute node side, the
 ResourceTracker won't consume resources

 like that, just consume like normal virtual instance. And ResourceTracker
 will update the resource usage once the

 instance resource claimed, then scheduler will know there are some free
 resource on that node, then will try to

 schedule other new instance to that node.



 I take look at that, there is NumInstanceFilter, it will limit how many
 instance can schedule to one host. So can

 we just use this filter to finish the goal? The max instance is configured
 by option 'max_instances_per_host', we

 can make the virt driver to report how many instances it supported. The
 ironic driver can just report max_instances_per_host=1.

 And libvirt driver can report max_instance_per_host=-1, that means no
 limit. And then we can just remove the

 IronicHostManager, then make the scheduler side is more simpler. Does make
 sense? or there are more trap?



 Thanks in advance for any feedback and suggestion.





 Mmm, I think I disagree with your proposal. Let me explain by the best I
 can why :

 tl;dr: Any proposal unless claiming at the scheduler level tends to be
 wrong

 The ResourceTracker should be only a module for providing stats about
 compute nodes to the Scheduler.
 How the Scheduler is consuming these resources for making a decision
 should only be a Scheduler thing.

 Here, the problem is that the decision making is also shared with the
 ResourceTracker because of the claiming system managed by the context
 manager when booting an instance. It means that we have 2 distinct decision
 makers for validating a resource.

 Let's stop to be realistic for a moment and discuss about what 

Re: [openstack-dev] offlist: The scope of OpenStack wiki [all]

2015-01-09 Thread Thierry Carrez
Anne Gentle wrote:
 Oh hi list!
 
 Feel free to discuss the a project just getting started content that
 ends up on the wiki -- where would that go?

I think it's still fine for nascent projects to use the wiki as a
poor man's CMS. That hardly qualifies as authoritative content and falls
more into the 'quick prototypes' category that Stefano mentioned as
appropriate for the long-term wiki.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The scope of OpenStack wiki [all]

2015-01-09 Thread Barrett, Carol L
I understand that you're moving content out of the wiki, which I think will be 
fine, as long as the wiki provides links to the new content location. Is that 
the intention?
Carol

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Friday, January 09, 2015 1:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] The scope of OpenStack wiki [all]

Stefano Maffulli wrote:
 The wiki served for many years the purpose of a 'poor man's CMS' when we 
 didn't have an easy way to collaboratively create content. So the wiki 
 ended up hosting pages like 'Getting started with OpenStack', demo 
 videos, How to contribute, mission, to document our culture / shared 
 understandings (4 opens, release cycle, use of blueprints, stable 
 branch policy...), to maintain the list of Programs, meetings/teams, 
 blueprints and specs, lots of random documentation and more.
 
 Lots of the content originally placed on the wiki was there because 
 there was no better place. Now that we have more mature content and 
 processes, these are finding their way out of the wiki like:
 
   * http://governance.openstack.org
   * http://specs.openstack.org
   * http://docs.openstack.org/infra/manual/
 
 Also, the Introduction to OpenStack is maintained on 
 www.openstack.org/software/ together with introductory videos and 
 other basic material. A redesign of openstack.org/community and the 
 new portal groups.openstack.org are making even more wiki pages obsolete.
 
 This makes the wiki very confusing to newcomers and more likely to 
 host conflicting information.

One of the issues here is that the wiki also serves as a default starting page 
for all things not on www.openstack.org (its main page is a list of relevant 
links). So at the same time we are moving authoritative content out of the wiki 
to more appropriate, version-controlled and peer-reviewed sites, we are still 
relying on the wiki as a reference catalog or starting point to find those more 
appropriate sites. That is IMHO what creates the confusion on where the 
authoritative content actually lives.

So we also need to revisit how to make navigation between the various web 
properties of OpenStack more seamless and discoverable, so that we don't rely 
on the wiki starting page for that important role.

 I would propose to restrict the scope of the wiki to anything that
 doesn't need or want to be peer-reviewed. Things like:
 
   * agendas for meetings, sprints, etc
   * list of etherpads for summits
   * quick prototypes of new programs (mentors, upstream training) 
 before they find a stable home (which can still be the wiki)

+1 -- I agree on the end goal... Use the wiki a bit like we use
etherpads or pastebins, and have more appropriate locations for all of our 
reference information. It will take some time but we should move toward that.

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev