Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
 Let me elaborate a little on my thoughts about software orchestration, and 
 respond to the recent mails from Zane and Debo.  I have expanded my 
 picture at 
 https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
  
 and added a companion picture at 
 https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
  
 that shows an alternative.
 
 One of the things I see going on is discussion about better techniques for 
 software orchestration than are supported in plain CFN.  Plain CFN allows 
 any script you want in userdata, and prescription of certain additional 
 setup elsewhere in cfn metadata.  But it is all mixed together and very 
 concrete.  I think many contributors would like to see something with more 
 abstraction boundaries, not only within one template but also the ability 
 to have modular sources.
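To make the "all mixed together and very concrete" point tangible: a hypothetical, minimal CFN-style template sketched as a Python dict. The resource names and values are invented for illustration; note how the software configuration (the shell script in UserData) is embedded verbatim inside the concrete infrastructure description, with no abstraction boundary between the two.

```python
import json

# Hypothetical minimal CFN-style template: the software setup script lives
# inline in UserData, right next to the infrastructure properties.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "F17-x86_64-cfntools",
                "InstanceType": "m1.small",
                "UserData": "#!/bin/bash\nyum install -y httpd\n"
                            "systemctl start httpd\n",
            },
        }
    },
}

rendered = json.dumps(template, indent=2)
print(rendered)
```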
 

Yes please. Orchestrate things, don't configure them. That is what
configuration tools are for.

There is a third stealth-objective that CFN has caused to linger in
Heat. That is packaging cloud applications. By allowing the 100%
concrete CFN template to stand alone, users can ship the template.

IMO this marrying of software assembly, config, and orchestration is a
concern unto itself, and best left outside of the core infrastructure
orchestration system.

 I work closely with some colleagues who have a particular software 
 orchestration technology they call Weaver.  It takes as input for one 
 deployment not a single monolithic template but rather a collection of 
 modules.  Like higher level constructs in programming languages, these 
 have some independence and can be re-used in various combinations and 
 ways.  Weaver has a compiler that weaves together the given modules to 
 form a monolithic model.  In fact, the input is a modular Ruby program, 
 and the Weaver compiler is essentially running that Ruby program; this 
 program produces the monolithic model as a side effect.  Ruby is a pretty 
 good language in which to embed a domain-specific language, and my 
 colleagues have done this.  The modular Weaver input mostly looks 
 declarative, but you can use Ruby to reduce the verboseness of, e.g., 
 repetitive stuff --- as well as plain old modularity with abstraction.  We 
 think the modular Weaver input is much more compact and better for human 
 reading and writing than plain old CFN.  This might not be obvious when 
 you are doing the hello world example, but when you get to realistic 
 examples it becomes clear.
 
 The Weaver input discusses infrastructure issues, in the rich way Debo and 
 I have been advocating, as well as software.  For this reason I describe 
 it as an integrated model (integrating software and infrastructure 
 issues).  I hope for HOT to evolve to be similarly expressive to the 
 monolithic integrated model produced by the Weaver compiler.
 

Indeed, we're dealing with this very problem in TripleO right now. We need
to be able to compose templates that vary slightly for various reasons.

A ruby DSL is not something I think is ever going to happen in
OpenStack. But python has its advantages for DSL as well. I have been
trying to use clever tricks in yaml for a while, but perhaps we should
just move to a client-side python DSL that pushes the compiled yaml/json
templates into the engine.
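The client-side DSL idea can be sketched as follows. Everything here (the Stack/Instance classes, the compile method) is hypothetical, not an existing Heat API: declarative-looking Python objects "compile" into a concrete JSON/YAML-style template that would then be pushed to the engine, with Python's loops and functions absorbing the repetition.

```python
import json

# Hypothetical client-side DSL: Python objects that compile to a template.
class Instance:
    def __init__(self, name, image, flavor):
        self.name, self.image, self.flavor = name, image, flavor

    def to_resource(self):
        return {self.name: {"Type": "AWS::EC2::Instance",
                            "Properties": {"ImageId": self.image,
                                           "InstanceType": self.flavor}}}

class Stack:
    def __init__(self, *resources):
        self.resources = resources

    def compile(self):
        # Produce the concrete, monolithic template from modular inputs.
        out = {"Resources": {}}
        for r in self.resources:
            out["Resources"].update(r.to_resource())
        return json.dumps(out, indent=2)

# A loop replaces three near-identical template stanzas.
stack = Stack(*[Instance("web%d" % i, "F17-x86_64", "m1.small")
                for i in range(3)])
print(stack.compile())
```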

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Climate] bp:configurable-ip-allocation

2013-09-25 Thread Nikolay Starodubtsev
Hi, all.
I want to start this conversation again. Have you made any progress?


On Fri, Aug 23, 2013 at 11:17 AM, Nikolay Starodubtsev 
nstarodubt...@mirantis.com wrote:

 Mark,
 Thank you for fast answer! We'll wait for it.


 On Thu, Aug 22, 2013 at 5:14 PM, Mark McClain mark.mccl...@dreamhost.com wrote:

 Nikolay-

 Expect updated code to be posted soon for Havana.

 mark

 On Aug 22, 2013, at 12:47 AM, Nikolay Starodubtsev 
 nstarodubt...@mirantis.com wrote:

 Hi, everyone!
 We are working on Climate, and we are interested in
 https://blueprints.launchpad.net/neutron/+spec/configurable-ip-allocation
 I see two changes connected with this bp, but they were both abandoned at
 the beginning of the year. Can anyone give me an answer about
 implementation progress?


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-25 Thread Christopher Yeoh
On Mon, Sep 23, 2013 at 10:56 PM, Russell Bryant rbry...@redhat.com wrote:

 On 09/21/2013 12:02 AM, Pádraig Brady wrote:
  On 09/20/2013 10:47 PM, Michael Still wrote:
  Before https://review.openstack.org/#/c/46867/ if file injection of a
  mandatory file fails, nova just silently ignores the failure, which is
  clearly wrong.
 
  For reference, the original code you're adjusting is
  https://review.openstack.org/#/c/18900
  BTW, I'm not sure of your adjustments but that's beside the point
  and best left for discussion at the above review.
 
  However, that review now can't land because its
  revealed another failure in the file injection code via tempest, which
  is...
 
  Should file injection work for instances which are boot from volume?
 
  For consistency probably yes.
 
  Now that we actually notice injection failures we're now failing to
  boot such instances as file injection for them doesn't work.
 
  I'm undecided though -- should file injection work for boot from
  volume at all? Or should we just skip file injection for instances
  like this? I'd prefer to see us just support config drive and metadata
  server for these instances, but perhaps I am missing something really
  important.
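For reference, the config-drive/metadata-server alternative mentioned here hands the guest a JSON document instead of writing into its filesystem from the host. A sketch of what consuming it looks like; the sample document below is invented for illustration, while the real one is served inside the guest at http://169.254.169.254/openstack/latest/meta_data.json (or read from the config drive):

```python
import json

# Invented sample of a meta_data.json document as seen from inside a guest.
sample = """
{
  "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
  "name": "test-server",
  "meta": {"role": "webserver"},
  "files": [{"path": "/etc/motd", "content_path": "/content/0000"}]
}
"""

meta = json.loads(sample)
# Personality files are listed here for the guest to fetch itself,
# instead of being injected into the image from the host side.
for f in meta["files"]:
    print(f["path"], "->", f["content_path"])
```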
 
  Now I wouldn't put too much effort into new file injection mechanisms,
  but in this case it might be easy enough to support injection to volumes.
  In fact there was already an attempt made at:
  https://review.openstack.org/#/c/33221/

 I agree with Monty and Thierry that ideally file injection should DIAF
 everywhere.  On that note, have we done anything with that in the v3
 API?  I propose we remove it completely.


It was separated from core as the os-personalities extension, so it's very
easy to drop completely from the V3 API if we want to. Do you want me to
submit a changeset to do this now (given the feature freeze), or wait
until Icehouse?

Regards,

Chris


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Clint Byrum cl...@fewbar.com wrote on 25.09.2013 08:46:57:
 From: Clint Byrum cl...@fewbar.com
 To: openstack-dev openstack-dev@lists.openstack.org,
 Date: 25.09.2013 08:48
 Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
 together for Icehouse (now featuring software orchestration)

 Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
  Let me elaborate a little on my thoughts about software orchestration,
  and respond to the recent mails from Zane and Debo.  I have expanded my
  picture at
  https://docs.google.com/drawings/d/1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
  and added a companion picture at
  https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-GQQ1bRVgBpJdstpu0lH_TONw6g
  that shows an alternative.
 
  One of the things I see going on is discussion about better techniques
  for software orchestration than are supported in plain CFN.  Plain CFN
  allows any script you want in userdata, and prescription of certain
  additional setup elsewhere in cfn metadata.  But it is all mixed
  together and very concrete.  I think many contributors would like to
  see something with more abstraction boundaries, not only within one
  template but also the ability to have modular sources.
 
 Yes please. Orchestrate things, don't configure them. That is what
 configuration tools are for.
 
 There is a third stealth-objective that CFN has caused to linger in
 Heat. That is packaging cloud applications. By allowing the 100%
 concrete CFN template to stand alone, users can ship the template.
 
 IMO this marrying of software assembly, config, and orchestration is a
 concern unto itself, and best left outside of the core infrastructure
 orchestration system.
 
  I work closely with some colleagues who have a particular software
  orchestration technology they call Weaver.  It takes as input for one
  deployment not a single monolithic template but rather a collection of
  modules.  Like higher level constructs in programming languages, these
  have some independence and can be re-used in various combinations and
  ways.  Weaver has a compiler that weaves together the given modules to
  form a monolithic model.  In fact, the input is a modular Ruby program,
  and the Weaver compiler is essentially running that Ruby program; this
  program produces the monolithic model as a side effect.  Ruby is a
  pretty good language in which to embed a domain-specific language, and
  my colleagues have done this.  The modular Weaver input mostly looks
  declarative, but you can use Ruby to reduce the verboseness of, e.g.,
  repetitive stuff --- as well as plain old modularity with abstraction.
  We think the modular Weaver input is much more compact and better for
  human reading and writing than plain old CFN.  This might not be
  obvious when you are doing the hello world example, but when you get
  to realistic examples it becomes clear.
 
  The Weaver input discusses infrastructure issues, in the rich way Debo
  and I have been advocating, as well as software.  For this reason I
  describe it as an integrated model (integrating software and
  infrastructure issues).  I hope for HOT to evolve to be similarly
  expressive to the monolithic integrated model produced by the Weaver
  compiler.

I don't fully get this idea of HOT consuming a monolithic model produced by
some compiler - be it Weaver or anything else.
I thought the goal was to develop HOT in a way that users can actually
write HOT, as opposed to having to use some compiler to produce some
useful model.
So wouldn't it make sense to make sure we add the right concepts to HOT to
make sure we are able to express what we want to express and have things
like composability, re-use, substitutability?

 

 Indeed, we're dealing with this very problem in TripleO right now. We
 need to be able to compose templates that vary slightly for various
 reasons.
 
 A ruby DSL is not something I think is ever going to happen in
 OpenStack. But python has its advantages for DSL as well. I have been
 trying to use clever tricks in yaml for a while, but perhaps we should
 just move to a client-side python DSL that pushes the compiled yaml/json
 templates into the engine.

As said in my comment above, I would like to see us focusing on the
agreement of one language - HOT - instead of yet another DSL.
There are things out there that are well established (like chef or puppet),
and HOT should be able to efficiently and intuitively use those things and
orchestrate components built using those things.

Anyway, this might be off the track that was originally discussed in this
thread (i.e. holistic scheduling and so on) ...



Re: [openstack-dev] [Nova][libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread P Balaji-B37839
Hi,

If anyone is already working on the support described below for Nova, please let us know.

Regards,
Balaji.P

-Original Message-
From: P Balaji-B37839 
Sent: Tuesday, September 24, 2013 4:10 PM
To: openstack-dev@lists.openstack.org
Cc: Addepalli Srini-B22160; Mannidi Purandhar Sairam-B39209; Lingala Srikanth 
Kumar-B37208; Somanchi Trinath-B39208; B Veera-B37207
Subject: [openstack-dev][Nova] Virtio-Serial support for Nova libvirt driver

Hi,

Virtio-serial interface support for Nova's libvirt driver is not available now.
Some VMs that need to access the host, for example to run qemu-guest-agent or
proprietary software, want to use this mode of communication with the host.

Qemu-GA uses virtio-serial communication.
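For illustration, a virtio-serial channel is expressed in libvirt domain XML as a <channel> device with a virtio target. A sketch of generating that XML; the socket path is an assumption, and the channel name shown is the conventional qemu-guest-agent one (an application would use its own name):

```python
import xml.etree.ElementTree as ET

# Sketch of the <channel> device XML a virtio-serial blueprint would have
# the libvirt driver emit.  Path and channel name are illustrative.
channel = ET.Element("channel", type="unix")
ET.SubElement(channel, "source", mode="bind",
              path="/var/lib/libvirt/qemu/org.qemu.guest_agent.0.sock")
ET.SubElement(channel, "target", type="virtio",
              name="org.qemu.guest_agent.0")

xml = ET.tostring(channel, encoding="unicode")
print(xml)
```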

We want to propose a blueprint on this for the Icehouse release.

Is anybody interested in this?

Regards,
Balaji.P




[openstack-dev] [keystone] Case sensitivity backend databases

2013-09-25 Thread Henry Nash
Hi

Do we specify somewhere whether text-field matching in the API is case-sensitive
or case-insensitive?  I'm thinking about filters, as well as user and
domain names in authentication.  I think our current implementation will always
be case-sensitive for filters (since we do that in Python and do not, yet,
pass the filters to the backends), while authentication will reflect the case
sensitivity, or lack thereof, of the underlying database.  I believe that MySQL
is case-insensitive by default, while PostgreSQL, SQLite and others are
case-sensitive by default.  If using an LDAP backend, then I think matching is
case-sensitive.

The above seems inconsistent.  It might become even more so when we pass
the filters to the backend.  Given that other projects already pass filters to
the backend, we may also have inter-project inconsistencies that bleed through
to the user experience.  Should we at least recommend that the
backend be case-sensitive (you can configure MySQL to be so)?  Insist on
it?  Or ignore it and keep things as they are?
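The inconsistency can be illustrated in a few lines. The function names below are mine, and lower-casing only approximates a real case-insensitive collation such as MySQL's default, but it shows how the same filter gives different answers depending on where matching happens:

```python
users = [{"name": "Alice"}, {"name": "bob"}]

def filter_python(users, name):
    # Filtering done in Python today: exact, case-sensitive match.
    return [u for u in users if u["name"] == name]

def filter_mysql_like(users, name):
    # Rough approximation of a case-insensitive database collation.
    return [u for u in users if u["name"].lower() == name.lower()]

print(filter_python(users, "alice"))      # no match
print(filter_mysql_like(users, "alice"))  # matches Alice
```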

Henry




Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread balaji patnala
Hi Haomai,

Thanks for your interest on this.

The code check-ins done against the below bp are more specific to Qemu
Guest Agent.

 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


Our requirement is to enable a virtio-serial interface for the applications
running in the VM.

Do you have the same requirement?

We will share the draft BP on this.


Any comments on this approach will be helpful.

Regards,
Balaji.P


On Tue, Sep 24, 2013 at 8:10 PM, Haomai Wang hao...@unitedstack.com wrote:


 On Sep 24, 2013, at 6:40 PM, P Balaji-B37839 b37...@freescale.com wrote:

  Hi,
 
   Virtio-serial interface support for Nova's libvirt driver is not available
  now.  Some VMs that need to access the host, for example to run
  qemu-guest-agent or proprietary software, want to use this mode of
  communication with the host.
 
  Qemu-GA uses virtio-serial communication.
 
  We want to propose a blue-print on this for IceHouse Release.
 
  Anybody interested on this.

 Great! We have common interest and I hope we can promote it for IceHouse.

  BTW, do you have an initial plan or description for it?
 
  And I think this bp may be relevant:
  https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support

 
  Regards,
  Balaji.P
 
 

 Best regards,
 Haomai Wang, UnitedStack Inc.





Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread Thierry Carrez
Joshua Harlow wrote:
 +2
 
 I think we need to as a community figure out why this is the case and
 figure out ways to make it not the case.
 
 Is it education around what a PTL is? Is it lack of time? Is it something
 else?

In my view the PTL handles three roles: final decider on
program-specific issues, release management liaison (for programs
containing an integrated project) and program ambassador (natural point
of contact). Note that the last two roles can be delegated.

If you don't delegate anything then it's a lot of work, especially for
programs with large integrated projects -- so if the current PTL does a
great job and runs for election again, I suspect everyone else doesn't
feel the urge to run against him.

FWIW I don't think established PTLs mind being challenged at all. If
anything, in the past this served to identify people interested in
project management who could help in the PTL role and serve in a
succession strategy. So you shouldn't fear to piss off the established
PTL by challenging them :)

-- 
Thierry Carrez (ttx)



[openstack-dev] [Ceilometer] Canceling meeting for Wed Sep 25th at 2100 UTC

2013-09-25 Thread Julien Danjou
Hi fellow developers,

Taking a look at our agenda, we don't have much to discuss on Ceilometer
this week I guess. RC1 is on its way, and should be our priority for
now. Status can be tracked at:
  https://launchpad.net/ceilometer/+milestone/havana-rc1

If nobody objects, our next meeting will be on 3rd October.

Cheers,
-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info




Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-25 Thread Thierry Carrez
Christopher Yeoh wrote:
 On Mon, Sep 23, 2013 at 10:56 PM, Russell Bryant rbry...@redhat.com wrote:
 I agree with Monty and Thierry that ideally file injection should DIAF
 everywhere.  On that note, have we done anything with that in the v3
 API?  I propose we remove it completely.
 
  It was separated from core as the os-personalities extension, so it's very
  easy to drop completely from the V3 API if we want to. Do you want me to
  submit a changeset to do this now (given the feature freeze), or wait
  until Icehouse?

I actually would like to have a discussion at next summit of how to
bring Nova's security to the next step. This will involve getting rid of
risky operations when they are not so needed (like injecting files into
mounted image filesystems), but we need to have an overall view (no
point in removing that specific weak chain link if another remains as
weak) to see where we can actually improve things significantly.

So I would wait for icehouse to do anything. If it's separated from the
core V3 API already, I guess it's still easy to get rid of it in
icehouse if that's the outcome of that discussion session.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread P Balaji-B37839
Hi Wangpan,
Thanks for the information and suggestions.
We want to have a generic virtio-serial interface in the libvirt driver that
applications can use irrespective of the QEMU guest agent in the VM.
As suggested, Daniel can throw some light on this and help us.
Regards,
Balaji.P



From: Wangpan [mailto:hzwang...@corp.netease.com]
Sent: Wednesday, September 25, 2013 3:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova 
libvirt driver

Hi all,

I'm the owner of this bp:
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support
Daniel Berrange gave me lots of help implementing it, and the original
idea of mine is the same as yours.
So I think Daniel's opinion will be very useful.

2013-09-25

Wangpan

From: balaji patnala patnala...@gmail.com
Sent: 2013-09-25 22:36
Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt
driver
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc:

Hi Haomai,

Thanks for your interest on this.

The code check-ins done against the below bp are more specific to Qemu Guest 
Agent.

 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


Our requirement is to enable Virtio-Serial Interface to the applications 
running in VM.

Do you have the same requirement?

We will share the draft BP on this.


Any comments on this approach will be helpful.

Regards,
Balaji.P

On Tue, Sep 24, 2013 at 8:10 PM, Haomai Wang hao...@unitedstack.com wrote:

On Sep 24, 2013, at 6:40 PM, P Balaji-B37839 b37...@freescale.com wrote:

 Hi,

 Virtio-serial interface support for Nova's libvirt driver is not available now.
 Some VMs that need to access the host, for example to run qemu-guest-agent
 or proprietary software, want to use this mode of communication with the host.

 Qemu-GA uses virtio-serial communication.

 We want to propose a blue-print on this for IceHouse Release.

 Anybody interested on this.
Great! We have common interest and I hope we can promote it for IceHouse.

BTW, do you have an initial plan or description for it?

And I think this bp may be relevant:
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 Regards,
 Balaji.P


Best regards,
Haomai Wang, UnitedStack Inc.





Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-25 Thread Daniel P. Berrange
On Wed, Sep 25, 2013 at 12:02:03PM +0200, Thierry Carrez wrote:
 Christopher Yeoh wrote:
  On Mon, Sep 23, 2013 at 10:56 PM, Russell Bryant rbry...@redhat.com wrote:
  I agree with Monty and Thierry that ideally file injection should DIAF
  everywhere.  On that note, have we done anything with that in the v3
  API?  I propose we remove it completely.
  
   It was separated from core as the os-personalities extension, so it's very
   easy to drop completely from the V3 API if we want to. Do you want me to
   submit a changeset to do this now (given the feature freeze), or wait
   until Icehouse?
 
 I actually would like to have a discussion at next summit of how to
 bring Nova's security to the next step. This will involve getting rid of
 risky operations when they are not so needed (like injecting files into
 mounted image filesystems), but we need to have an overall view (no
 point in removing that specific weak chain link if another remains as
 weak) to see where we can actually improve things significantly.

NB file injection is only insecure if you're using the impl that mounts
stuff on the host. The libguestfs impl of file injection does all
its work inside a single-use, throwaway VM instance to confine any
possible exploits.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] Scheduler meeting minutes

2013-09-25 Thread Gary Kotton
Hi,
The minutes from the meeting are below.
Minutes:
http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-09-24-15.03.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-09-24-15.03.txt
Log:
http://eavesdrop.openstack.org/meetings/scheduling/2013/scheduling.2013-09-24-15.03.log.html
Thanks
Gary


Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread Flavio Percoco

On 25/09/13 11:29 +0200, Thierry Carrez wrote:

Joshua Harlow wrote:

+2

I think we need to as a community figure out why this is the case and
figure out ways to make it not the case.

Is it education around what a PTL is? Is it lack of time? Is it something
else?


In my view the PTL handles three roles: final decider on
program-specific issues, release management liaison (for programs
containing an integrated project) and program ambassador (natural point
of contact). Note that the last two roles can be delegated.

If you don't delegate anything then it's a lot of work, especially for
programs with large integrated projects -- so if the current PTL does a
great job and runs for election again, I suspect everyone else doesn't
feel the urge to run against him.

FWIW I don't think established PTLs mind being challenged at all. If
anything, in the past this served to identify people interested in
project management who could help in the PTL role and serve in a
succession strategy. So you shouldn't fear to piss off the established
PTL by challenging them :)



I agree with Thierry here.

The PTL role takes time and dedication, which is the first thing people
must be aware of before submitting their candidacy. I'm very happy
with the job current PTLs have done, although I certainly don't have a
360-degree view. This should also be taken into consideration: before
submitting a PTL candidacy, I expect people to ask themselves, and
then share with others, what their plan is for the next development
cycle, how they can improve the project they want to run for, etc.

IMHO, the fact that there haven't been many candidacies means that
folks are happy with the work current PTLs have done and would love to
have them around for another release cycle. However, this doesn't mean
that the folks who have submitted their candidacy are unhappy with the
current PTL, and I'm very happy to see other folks willing to run for
the PTL position.

I also think that PTLs have integrated the community at large in their
PTL role and this has definitely helped folks to participate in the
decision process. I've never thought about PTLs as final deciders but
as the ones responsible for leading the team towards a decision that
reflects the best interest of the project.

That being said, I wouldn't worry that much about not seeing many
candidacies. I think this fits into the Lazy Consensus concept.

Cheers,
FF

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-25 Thread Ladislav Smola

Hello,
thank you very much for the feedback.

Ok, I have tried to go through the triggers and events.

Let me summarize, so we can confirm I understand it correctly:
==

The trigger:
--

- has a pattern: something like a starting condition that leads to creating
  a trigger
- has a criteria: the condition that has to be fulfilled within some given
  timeout after the trigger is created:
  --- then it can run some action (the triggered notification pipeline) and
      it saves the trigger (so it has a query-able history of the triggers)
  --- or it runs the timeout action (the optional expiration notification
      pipeline).

Questions
-

1. In the example in
https://blueprints.launchpad.net/ceilometer/+spec/notifications-triggers
the pattern and criteria are conditions that check for the appearance of
specific events.

What are the options for the conditions? What is the querying API?

2. Can the conditions be tied only to events? Or also to samples and
statistics, so I can build similar queries and conditions to those that
alarms have?

3. If I have set e.g. a trigger for measuring the health of my baremetals
(checking disk failures), I could just set both conditions (pattern,
criteria) to observe the same events marking a disk failure, right?

If there are disk failures, it would create a trigger for each disk-failure
notification, right? So I could then browse the triggers to check which
resources had disk failures.

What are the querying options over the triggers? E.g. I would like to get
the number of triggers of some type on some resource_ids, from last month,
grouped by project.

Summary
==

If the trigger pattern and criteria support a general condition like
Alarms do, I believe this could work, yes.

Otherwise it seems we should use Alarms (and Alarm Groups) for checking
sample-based alerts, and Triggers for checking event (notification) based
alerts. So e.g. the health of hardware would likely be computed from a
combination of Alarms and Triggers.


On 09/24/2013 03:48 PM, Thomas Maddox wrote:

I think Dragon's BP for notification triggers would solve this problem.

Instead of looking at it as applying a single alarm to several resources,
you could instead leverage the similarities of the resources:
https://blueprints.launchpad.net/ceilometer/+spec/notifications-triggers.

Compound that with configurable events:
https://blueprints.launchpad.net/ceilometer/+spec/configurable-event-definitions

-Thomas

On 9/24/13 7:46 AM, Julien Danjou jul...@danjou.info wrote:


On Tue, Sep 24 2013, Ladislav Smola wrote:


Yes it would be good if something like this would be supported. -
relation of alarm to multiple entities, that
are result of sample-api query. Could it be worth creating a BP?

Probably indeed.

--
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info







Re: [openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-25 Thread Ladislav Smola

On 09/25/2013 01:51 AM, Gabriel Hurley wrote:

4. There is a thought about tagging the alarms by user defined tag, so
user can easily group alarms together and then watch them together
based on their tag.

The alarm API doesn't provide that directly, but you can imagine some sort of
filter based on description matching some text.

I'd love to see this as an extension to the alarm API. I think tracking 
metadata about alarms (e.g. tags or arbitrary key-value pairs) would be 
tremendously useful.


Yes, that sounds like a very good idea.
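Until the alarm API grows real metadata, the description-based workaround mentioned above could look roughly like this. The "tags:a,b" convention and the parsing helper are entirely hypothetical:

```python
# Hypothetical convention: tags smuggled into the free-text description.
def parse_tags(description):
    for part in description.split():
        if part.startswith("tags:"):
            return set(part[len("tags:"):].split(","))
    return set()

alarms = [
    {"name": "cpu_high",  "description": "CPU over 90% tags:prod,web"},
    {"name": "disk_full", "description": "Disk above 95% tags:prod,db"},
]

# Group alarms by a user-defined tag so they can be watched together.
prod = [a["name"] for a in alarms if "prod" in parse_tags(a["description"])]
print(prod)  # ['cpu_high', 'disk_full']
```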


5. There is a thought about generating default alarms that could observe
the most important things (verifying good behaviour, showing bad
behaviour).

Does anybody have an idea which alarms would be the most important and
usable for everybody?

I'm not sure you want to create alarms by default; alarms are resources, and I
don't think we should create resources without the user asking for it.

Seconded.


Continues as Alarms Groups or Triggers conversation in this thread.


Maybe you were talking about generating alarm template? You could start
with things like CPU usage staying at 90% for more than 1 hour, and having
an action that alerts the user via mail.
Same for disk usage.

We already do this kind of templating for common user tasks with security
group rules. The same concept applies to alarms.
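As an illustration, the "CPU at 90% for an hour" template could boil down to a canned set of threshold-alarm parameters. The field names below approximate the shape of Ceilometer threshold alarms but are an illustrative assumption, not the exact API schema:

```python
# Sketch: a canned alarm "template" a dashboard could offer.
# Field names approximate Ceilometer threshold alarms and are
# illustrative, not the exact API schema.

def cpu_overload_template(user_email, threshold=90.0, hours=1):
    period = 3600                      # evaluate over 1-hour windows
    return {
        "name": "cpu-overload",
        "meter_name": "cpu_util",
        "comparison_operator": "gt",
        "threshold": threshold,        # percent
        "period": period,              # seconds per evaluation window
        "evaluation_periods": hours * 3600 // period,
        "alarm_actions": ["mailto:%s" % user_email],
    }

alarm = cpu_overload_template("ops@example.com")
print(alarm["name"])  # cpu-overload
```

A "disk usage" template would differ only in the meter name and threshold, which is what makes the template approach attractive.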



Ok, will check this out.

Thank you for the feedback,
Ladislav




Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Chris Jones
Hi

On 25 September 2013 04:15, Robert Collins robe...@robertcollins.net wrote:

 E.g. for any node I should be able to ask:
 - what failure domains is this in? [e.g. power-45, switch-23, ac-15,
 az-3, region-1]
 - what locality-of-reference features does this have? [e.g. switch-23,
 az-3, region-1]
 - where is it [e.g. DC 2, pod 4, enclosure 2, row 5, rack 3, RU 30,
 cartridge 40].




 So, what do you think?


As a recovering data-centre person, I love the idea of being able to map a
given thing to not only its physical location, but its failure domain. +1
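The kind of query Robert describes can be sketched with nodes carrying explicit failure-domain lists; fault aggregation is then just an inverted index. The domain names (power-45, switch-23, az-3) are taken from his example, and the data model itself is an assumption for illustration:

```python
# Sketch: answer "what failure domains is this node in?" and invert
# the mapping to aggregate faults ("which nodes share power-45?").

nodes = {
    "node-1": {"failure_domains": ["power-45", "switch-23", "az-3"]},
    "node-2": {"failure_domains": ["power-45", "switch-24", "az-3"]},
    "node-3": {"failure_domains": ["power-46", "switch-23", "az-3"]},
}

def domains_of(node_id):
    """Failure domains a given node belongs to."""
    return nodes[node_id]["failure_domains"]

def nodes_in(domain):
    """All nodes sharing a failure domain, for fault aggregation."""
    return sorted(n for n, meta in nodes.items()
                  if domain in meta["failure_domains"])

# If power-45 fails, investigate one PDU, not each node separately:
print(nodes_in("power-45"))  # ['node-1', 'node-2']
```

This is the "16 down hypervisors vs. one dead rack PDU" case from Robert's mail in miniature.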

-- 
Cheers,

Chris


Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-25 Thread Jaromir Coufal

Hi Rob,

thank you very much for such valuable feedback; I really appreciate it. 
A few comments follow inline.


On 2013/25/09 03:50, Robert Collins wrote:

A few quick notes.

Flavors need their extra attributes to be editable (e.g. architecture,
raid config etc) - particularly in the undercloud, but it's also
relevant for overcloud : If we're creating flavors for deployers, we
need to expose the full capabilities of flavors.

Secondly, if we're creating flavors for deployers, the UI should
reflect that: is it directly editing flavors, or editing the inputs to
the algorithm that creates flavors.

ad extra attributes:
Until now, for the POC, we were dealing only with flavor definition values and 
no extra attributes. Can you please be a bit more descriptive about the 
extra attributes, or at least point me to some documentation for them (for 
undercloud as well as overcloud flavors)?


Flavors in the Tuskar UI are for deployers; they are the definition for the 
algorithm that registers flavors in Nova after a machine is provisioned.



We seemed to have consensus @ the sprint in Seattle that Racks aren't
really Racks : that is that Rack as currently defined is more of a
logical construct (and some clouds may have just one), vs the actual
physical 'This is a Rack in a DC' aspect. If there is a physical thing
that the current logical Rack maps to, perhaps we should use that as
the top level UI construct?
Well, in the ideal case we represent a physical thing with a logical grouping 
of nodes; that way we can operate the hardware in the most efficient way. Of 
course we might find that this is not the reality, and then we will need to 
support that in the UI as well. But I don't think I follow the idea of rack 
being the top-level UI construct. This depends on your point of view on the 
deployment. I think there are two ways a deployer wants to see his deployment:
1) Hardware focus. The deployer is interested in whether his hardware is 
running fine and everything is working correctly. In this case you are right: 
the top level should be the rack, and it is at the moment.
2) Service focus. The deployer is interested in what service he is providing, 
how much capacity he has available or left, in capacity planning, etc. For 
this purpose we have resource classes, which define what service you (as 
deployer) provide to your customers/users.



The related thing is we need to expose the other things that also tend
to map failure domains - shared switches, power bars, A/C - but that
is future work I believe.
In general I don't think it is a good idea to replicate existing 
applications for DC monitoring; we would only duplicate their effort. 
I mean, if we can get general information about switches, etc., that 
would be great, but I would recommend making a distinction between 
deployment management and DC monitoring.



The 'add rack' thing taking a list of MACs seems odd : a MAC address
isn't enough to deploy even a hardware inventory image to (you need
power management details + CPU arch + one-or-more MACs to configure
TFTP and PXE enough to do a detailed automatic inventory). Long term
I'd like to integrate with the management network switch, so we can
drive the whole thing automatically, but in the short term, I think we
want to drive folk to use the API for mass enrollment. What do you
think?
So for the short term we were counting on some sort of auto-discovery, 
meaning that with minimal input from the user we PXE-boot some minimal image, 
do the introspection of the machine, and fill in all the details for the 
user. But you are right that a MAC address alone isn't enough. What I think 
will be needed are power management credentials (currently we support IPMI, 
so the MAC address (or IP), IPMI username and IPMI password). I believe all 
other information can be introspected (in the short term). What do you think?
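A minimal enrollment record along these lines might look as follows. The field names are an assumption for illustration, not Tuskar's actual schema:

```python
# Sketch: the minimal data needed to enroll a node for auto-discovery.
# A MAC alone is not enough; IPMI power-management credentials are
# required too. Field names are illustrative, not Tuskar's real schema.

from dataclasses import dataclass, field

@dataclass
class NodeEnrollment:
    macs: list                 # one or more NIC MAC addresses
    ipmi_address: str          # power management IP
    ipmi_username: str
    ipmi_password: str
    # Everything else (CPU arch, RAM, disks) is introspected later.
    introspected: dict = field(default_factory=dict)

    def ready_for_discovery(self):
        """True once we have enough to PXE-boot and power-cycle it."""
        return bool(self.macs and self.ipmi_address
                    and self.ipmi_username and self.ipmi_password)

node = NodeEnrollment(macs=["52:54:00:12:34:56"],
                      ipmi_address="10.0.0.5",
                      ipmi_username="admin",
                      ipmi_password="secret")
print(node.ready_for_discovery())  # True
```

Anything beyond these four fields could be filled in by the introspection pass rather than asked of the user.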



Regardless, the node list will need to deal with nodes having N MAC
addresses and management credentials, not just a management IP.
Lastly, what's the node name for? Instances [may] have names, but I don't
see any reason for nodes to have a name.
I believe the node name will be the MAC address by default (in the majority 
of cases). There was an idea about giving deployers the possibility to 
rename a node if they need better recognition. Imagine that we have a rack 
with mixed hardware, each node running different services; if we rename 
those few nodes with, for example, the name of the services they are 
running (or whatever purpose they serve), then at first glance I as a 
deployer have a better overview of what my rack contains and where things 
are located. Do you see the use case there?



Similarly, it's a little weird that racks would have names.

Similar situation as nodes above.


CSV uploads stand out to me as an odd thing: JSON is the standard
serialisation format we use, does using CSV really make sense? Tied
into that is the question above - does it make sense to put bulk

Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-25 Thread Jaromir Coufal

Hi Gabriel,

thanks for following this thread and having a look at the wireframes. 
Regarding the term 'resource class', the naming is what we arrived at 
during our initial attempts. It's not the final version, so if there are 
concerns, there is no problem with finding a more accurate one (we just 
couldn't find anything better). As for the resource class definition, I 
tried to explain it a bit more in my reply to Rob's mail (in this thread), 
so if you read that one, I hope it will help answer and explain the 
concept of classes a little bit more.


If you still have any concerns, let me know and I will try to be more explicit.
-- Jarda

On 2013/25/09 02:03, Gabriel Hurley wrote:


Really digging a lot of that. Particularly the inter-rack/inter-node 
communication stuff around page 36ish or so.


I’m concerned about using the term “Class”. Maybe it’s just me as a 
developer, but I couldn’t think of a more generic, less inherently 
meaningful word there. I read through it and I still only vaguely 
understand what a “Class” is in this context. We either need better 
terminology or some serious documentation/user education on that one.


Also, I can’t quite say how, but I feel like the “Class” stuff ought 
to be meshed with the Resource side of things. The separation seems 
artificial and based more on the API structure (presumably?) than on 
the most productive user flow when interacting with that system. Maybe 
start with the question “if the system were empty, what would I need 
to do and how would I find it?”


Very cool though.

-Gabriel

*From:*Jaromir Coufal [mailto:jcou...@redhat.com]
*Sent:* Tuesday, September 24, 2013 2:04 PM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

 Hey folks,

I want to introduce our direction of Tuskar UI, currently described 
with POC wireframes. Keep in mind, that wireframes which I am sending 
were made for purpose of proof of concept (which was built and 
released in August) and there are various changes since then, which 
were already adopted. However, basic concepts are staying similar. Any 
updates for wireframes and future direction will be sent here to the 
dev-list for feedback and reviews.


http://people.redhat.com/~jcoufal/openstack/tuskar/2013-07-11_tuskar_poc_wireframes.pdf 


Just quick description of what is happening there:
* 1st step implementation - Layouts (page 2)
- just showing that we are re-using all Horizon components and layouts
* Where we are heading - Layouts (page 8)
- possible smaller improvements to Horizon concepts
- mostly just smaller CSS changes within the POC timeframe scope
* Resource Management - Flavors (page 15) - ALREADY REMOVED
- these were templates for flavors, which were part of selection 
in resource class creation process
- currently the whole flavor definition moved under compute 
resource class completely (templates are no longer used)

* Resource Management - Resources (page 22)
- this is rack management
- creation workflow was based on currently obsolete data (settings 
are going to be changed a bit)
- upload rack needs an agreed standard CSV 
file format (can we specify one?)
- detail page of rack and node, which are going through 
enhancement process

* Resource Management - Classes (page 40)
- resource class management
- few changes will happen here as well regarding creation workflow
- detail page is going through enhancements as well as racks/nodes 
detail pages

* Graphic Design
- just showing the very similar look and feel as OpenStack Dashboard

If you have any further questions, just follow this thread, I'll be 
very happy to answer as much as possible.


Cheers,
-- Jarda





Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Tomas Sedovic

On 09/25/2013 05:15 AM, Robert Collins wrote:

One of the major things Tuskar does is model a datacenter - which is
very useful for error correlation, capacity planning and scheduling.

Long term I'd like this to be held somewhere where it is accessible
for schedulers and ceilometer etc. E.g. network topology + switch
information might be held by neutron where schedulers can rely on it
being available, or possibly held by a unified topology db with
scheduler glued into that, but updated by neutron / nova / cinder.
Obviously this is a) non-trivial and b) not designed yet.

However, the design of Tuskar today needs to accomodate a few things:
  - multiple reference architectures for clouds (unless there really is
one true design)
  - the fact that today we don't have such an integrated vertical scheduler.

So the current Tuskar model has three constructs that tie together to
model the DC:
  - nodes
  - resource classes (grouping different types of nodes into service
offerings - e.g. nodes that offer swift, or those that offer nova).
  - 'racks'

AIUI the initial concept of Rack was to map to a physical rack, but
this rapidly got shifted to be 'Logical Rack' rather than physical
rack, but I think of Rack as really just a special case of a general
modelling problem.


Yeah. Eventually, we settled on Logical Rack meaning a set of nodes on 
the same L2 network (in a setup where you would group nodes into 
isolated L2 segments). Which kind of suggests we come up with a better name.


I agree there's a lot more useful stuff to model than just racks (or 
just L2 node groups).





From a deployment perspective, if you have two disconnected

infrastructures, thats two AZ's, and two underclouds : so we know that
any one undercloud is fully connected (possibly multiple subnets, but
one infrastructure). When would we want to subdivide that?

One case is quick fault aggregation: if a physical rack loses power,
rather than having 16 NOC folk independently investigating the same 16
down hypervisors, one would prefer to identify that the power to the
rack has failed (for non-HA powered racks); likewise if a single
switch fails (for non-HA network topologies) you want to identify that
that switch is down rather than investigating all the cascaded errors
independently.

A second case is scheduling: you may want to put nova instances on the
same switch as the cinder service delivering their block devices, when
possible, or split VM's serving HA tasks apart. (We currently do this
with host aggregates, but being able to do it directly would be much
nicer).

Lastly, if doing physical operations like power maintenance or moving
racks around in a datacentre, being able to identify machines in the
same rack can be super useful for planning, downtime announcements, 
or
host evacuation, and being able to find a specific machine in a DC is
also important (e.g. what shelf in the rack, what cartridge in a
chassis).


I agree. However, we should take care not to commit ourselves to 
building a DCIM just yet.




Back to 'Logical Rack' - you can see then that having a single
construct to group machines together doesn't really support these use
cases in a systematic fashion: physical rack modelling supports only a
subset of the location/performance/failure use cases, and Logical rack
doesn't support them at all: we're missing all the rich data we need
to aggregate faults rapidly : power, network, air conditioning - and
these things cover both single machine/groups of machines/racks/rows
of racks scale (consider a networked PDU with 10 hosts on it - thats a
fraction of a rack).

So, what I'm suggesting is that we model the failure and performance
domains directly, and include location (which is the incremental data
racks add once failure and performance domains are modelled) too. We
can separately noodle on exactly what failure domain and performance
domain modelling looks like - e.g. the scheduler focus group would be
a good place to have that discussion.


Yeah I think it's pretty clear that the current Tuskar concept where 
Racks are the first-class objects isn't going to fly. We should switch 
our focus on the individual nodes and their grouping and metadata.


I'd like to start with something small and simple that we can improve 
upon, though. How about just going with freeform tags and key/value 
metadata for the nodes?


We can define some well-known tags and keys to begin with (rack, 
l2-network, power, switch, etc.), it would be easy to iterate and once 
we settle on the things we need, we can solidify them more.
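Tomas's proposal could start as simply as this: freeform tags plus key/value metadata per node, with a filter that any UI view can build on. The well-known keys (rack, power, switch, ...) are the ones he lists; the data model itself is a sketch, not Tuskar's API:

```python
# Sketch: freeform tags + key/value metadata per node, filterable.
# Well-known keys (rack, l2-network, power, switch) are just
# conventions; deployers can add their own criteria.

nodes = [
    {"id": "n1", "tags": ["ha"], "meta": {"rack": "r1", "power": "pdu-a"}},
    {"id": "n2", "tags": [],     "meta": {"rack": "r1", "power": "pdu-b"}},
    {"id": "n3", "tags": ["ha"], "meta": {"rack": "r2", "power": "pdu-a"}},
]

def filter_nodes(nodes, tag=None, **meta):
    """Return ids of nodes matching a tag and/or metadata key/values."""
    result = []
    for n in nodes:
        if tag is not None and tag not in n["tags"]:
            continue
        if all(n["meta"].get(k) == v for k, v in meta.items()):
            result.append(n["id"])
    return result

print(filter_nodes(nodes, rack="r1"))                # ['n1', 'n2']
print(filter_nodes(nodes, tag="ha", power="pdu-a"))  # ['n1', 'n3']
```

A "rack" view is then just one particular filter over the same data, which keeps the API architecture-neutral.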


In the meantime, we have the API flexible enough to handle whatever 
architectures we end up supporting and the UI can provide the 
appropriate views into the data.


And this would allow people to add their own criteria that we didn't 
consider.




E.g. for any node I should be able to ask:
- what failure domains is this in? [e.g. power-45, 

Re: [openstack-dev] [Ironic] PTL nomination

2013-09-25 Thread Chris K
+1
Devananda is a great PTL. It is his vision that has driven and is driving Ironic's
rapid development.


Chris Krelle


On Tue, Sep 24, 2013 at 11:04 AM, Devananda van der Veen 
devananda@gmail.com wrote:

 Hi!

 I would like to nominate myself for the OpenStack Bare Metal Provisioning
 (Ironic) PTL position.

 I have been working with OpenStack for over 18 months, and was a
 scalability and performance consultant at Percona for four years prior.
 Since '99, I have worked as a developer, team lead, database admin, and
 linux systems architect for a variety of companies.

 I am the current PTL of the Bare Metal Provisioning (Ironic) program,
 which began incubation during Havana. In collaboration with many fine folks
 from HP, NTT Docomo, USC/ISI, and VirtualTech, I worked extensively on the
 Nova Baremetal driver during the Grizzly cycle. I also helped start the
 TripleO program, which relies heavily on the baremetal driver to achieve
 its goals. During the Folsom cycle, I led the effort to improve Nova's DB
 API layer and added devstack support for the OpenVZ driver. Through that
 work, I became a member of nova-core for a time, though my attention has
 shifted away from Nova more recently.

 Once I had seen nova-baremetal and TripleO running in our test environment
 and began to assess our longer-term goals (eg, HA, scalability, integration
 with other OpenStack services), I felt very strongly that bare metal
 provisioning was a separate problem domain from Nova and would be best
 served with a distinct API service and a different HA framework than what
 is provided by Nova. I circulated this idea during the last summit, and
 then proposed it to the TC shortly thereafter.

 During this development cycle, I feel that Ironic has made significant
 progress. Starting from the initial git bisect to retain the history of
 the baremetal driver, I added an initial service and RPC framework,
 implemented some architectural pieces, and left a lot of #TODO's. Today,
 with commits from 10 companies during Havana (*) and integration already
 underway with devstack, tempest, and diskimage-builder, I believe we will
 have a functional release within the Icehouse time frame.

 I feel that a large part of my role as PTL has been - and continues to be
 - to gather ideas from a wide array of individuals and companies interested
 in bare metal provisioning, then translate those ideas into a direction for
 the program that fits within the OpenStack ecosystem. Additionally, I am
 often guiding compromise between the long-term goals, such as firmware
 management, and the short-term needs of getting the project to a
 fully-functional state. To that end, here is a brief summary of my goals
 for the project in the Icehouse cycle.

 * API service and client library (likely finished before the summit)
 * Nova driver (blocked, depends on ironic client library)
 * Finish RPC bindings for power and deploy management
 * Finish merging bm-deploy-helper with Ironic's PXE driver
 * PXE boot integration with Neutron
 * Integrate with TripleO / TOCI for automated testing
 * Migration script for existing deployments to move off the nova-baremetal
 driver
 * Fault tolerance of the ironic-conductor nodes
 * Translation support
 * Docs, docs, docs!

 Beyond this, there are many long-term goals which I would very much like
 to facilitate, such as:

 * hardware discovery
 * better integration with SDN capable hardware
 * pre-provisioning tools, eg. management of bios, firmware, and raid
 config, hardware burn-in, etc.
 * post-provisioning tools, eg. secure-erase
 * boot from network volume
 * secure boot (protect deployment against MITM attacks)
 * validation of signed firmware (protect tenant against prior tenant)

 Overall, I feel honored to be working with so many talented individuals
 across the OpenStack community, and know that there is much more to learn
 as a developer, and as a program lead.

 (*)

 http://www.stackalytics.com/?release=havana&metric=commits&project_type=All&module=ironic

 http://russellbryant.net/openstack-stats/ironic-reviewers-30.txt
 http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt

 --
 Devananda



Re: [openstack-dev] [Ironic] PTL nomination

2013-09-25 Thread Wentian Jiang
+1 Devananda

On Wed, Sep 25, 2013 at 10:44 PM, Chris K nobody...@gmail.com wrote:
 +1
 Devananda is a great PTL. It is his vision that has and is driving Ironic's
 rapid development.


 Chris Krelle


 On Tue, Sep 24, 2013 at 11:04 AM, Devananda van der Veen
 devananda@gmail.com wrote:

 [...]




-- 
Wentian Jiang
UnitedStack Inc.



Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread Ravi Chunduru
I am working on this generic virtio-serial interface for appliances. To
start with, I experimented with the hw_qemu_guest_agent feature Wangpan
added. I am preparing to propose a blueprint to modify it for generic use
and am open to collaboration.

I could bring up a VM with a generic source path (say /tmp/appliance_port)
and target name (appliance_port). But I see qemu listening on the unix
socket on the host as soon as I start the VM. If we want to have our own
server program on the host listening, that should not happen. How do I
overcome that?
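For what it's worth, in libvirt's channel XML the source element's mode attribute controls exactly this: mode='bind' makes QEMU create and listen on the socket, while mode='connect' makes QEMU connect to a socket that your own host-side server already listens on. A sketch generating the latter, using the path and name from the message above:

```python
# Sketch: build libvirt channel XML for a virtio-serial port where
# QEMU *connects* to a pre-existing host socket (mode='connect')
# instead of creating and listening on it (mode='bind').

def virtio_serial_channel(path, name, mode="connect"):
    """Return a libvirt <channel> XML fragment for a unix-socket port."""
    assert mode in ("bind", "connect")
    return (
        "<channel type='unix'>\n"
        "  <source mode='%s' path='%s'/>\n"
        "  <target type='virtio' name='%s'/>\n"
        "</channel>" % (mode, path, name)
    )

xml = virtio_serial_channel("/tmp/appliance_port", "appliance_port")
print(xml)
```

With mode='connect', the host-side server must be listening on /tmp/appliance_port before the VM starts.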

Thanks,
-Ravi.



On Wed, Sep 25, 2013 at 3:01 AM, P Balaji-B37839 b37...@freescale.com wrote:

 

 Hi Wangpan,

 Thanks for Information and suggestions.

 We want to have a generic virtio-serial interface for libvirt, so that
 applications can use it irrespective of the QEMU Guest Agent in the VM.

 As suggested, Daniel can throw some light on this and help us.

 Regards,

 Balaji.P


 *From:* Wangpan [mailto:hzwang...@corp.netease.com]
 *Sent:* Wednesday, September 25, 2013 3:24 PM
 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for
 Nova libvirt driver


 Hi all,

  

 I'm the owner of this bp
 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support

 and Daniel Berrange gave me lots of help with implementing this bp; my
 original idea was the same as yours.

 So I think the opinion of Daniel will be very useful.

  

 2013-09-25
  --

 Wangpan
   --

 *From:* balaji patnala patnala...@gmail.com

 *Sent:* 2013-09-25 22:36

 *Subject:* Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for
 Nova libvirt driver

 *To:* OpenStack Development Mailing List
 openstack-dev@lists.openstack.org

 *Cc:*

  

 Hi Haomai, 


 Thanks for your interest on this.


 The code check-ins done against the below bp are more specific to Qemu
 Guest Agent.


  https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 Our requirement is to enable Virtio-Serial Interface to the applications
 running in VM.


 Do you have the same requirement?


 We will share the draft BP on this.


 Any comments on this approach will be helpful.


 Regards,

 Balaji.P


 On Tue, Sep 24, 2013 at 8:10 PM, Haomai Wang hao...@unitedstack.com
 wrote:


 On Sep 24, 2013, at 6:40 PM, P Balaji-B37839 b37...@freescale.com wrote:

  Hi,
 
  Virtio-serial interface support for Nova - libvirt is not available now.
 Some VMs that want to access the host may need it, e.g. when running
 qemu-guest-agent, or proprietary software may want to use this mode of
 communication with the host.
 
  Qemu-GA uses virtio-serial communication.
 
  We want to propose a blue-print on this for IceHouse Release.
 
  Anybody interested on this.

 Great! We have a common interest and I hope we can promote it for Icehouse.

 BTW, do you have an initial plan or description of it?

 And I think this bp may be relevant:
 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 
  Regards,
  Balaji.P
 
 

 Best regards,
 Haomai Wang, UnitedStack Inc.









-- 
Ravi


Re: [openstack-dev] [savanna-dashboard] version 0.3 - updated UI mockups for EDP workflow

2013-09-25 Thread Nikita Konovalov
Hi Liz,

Thank you for your comments.

Your concern about the case when more than three tabs appear sounds reasonable. 
When we started to build a dashboard for Savanna, we wanted it to be easy to 
install into an existing environment, so a separate tab fitted nicely. We were 
thinking of moving our panels to a “Project” tab, but it seems like there will 
be too many menu items in that tab. Anyway, it will be great if a new navigation 
mechanism appears in Icehouse Horizon, and if it does we will adapt our 
dashboard as soon as possible.

The consistency of the workflows is very important, and the mockups are 
not to be considered a final design; we just want to show what functions 
will be available to the user through the UI workflow. Of course the 
implementation should look consistent with existing flows, with descriptions 
and hints on the right side and all input fields aligned correctly. The same 
applies to action buttons; it's important that the user can find the 
"Create" and "Delete" buttons where they are supposed to be.

Best Regards,
Nikita Konovalov
Mirantis, Inc

On Sep 25, 2013, at 00:14 , Liz Blanchard lsure...@redhat.com wrote:

 Chad  Nadya,
 
 Sorry for the late reply on this, I've been meaning to send some thoughts and 
 finally had some time today to pull it all together :)
 
 First off, an additional place that you should feel free to post 
 wireframes/design ideas and have a discussion is on our G+ OpenStack UX page:
 https://plus.google.com/u/0/communities/100954512393463248122
 
 We are actually hoping to move this to use Askbot in the future, but for now 
 this community is still very active and you could get some reviewers there 
 who might not have seen this e-mail.
 
 As for these designs, I have a bit of feedback:
 1) The addition of a high level tab for the main section of Savanna 
 features might introduce some complications. I've been seeing a lot of 
 developers adding tabs here which work if they are the only additional tab in 
 addition to Project and Admin, but it doesn't scale well if there are 
 more than three tabs. We are trying to address this in a navigation 
 enhancement blueprint for Horizon:
 https://blueprints.launchpad.net/horizon/+spec/navigation-enhancement
 
 Hopefully in the Icehouse release, it will be much easier to scale out and 
 add new sections at the top level, but I wonder if this would make more sense 
 as a new Panel which would sit at the same level as Manage Compute under 
 the current Project tab. Just an idea!
 
 2) Currently in Horizon, there are a few Create modal windows where the 
 modal is labeled with the action such as Launch Instance and the user is 
 given one or more tabs of fields to fill out. The first tab is typically the 
 Details section with the general fields that need to be filled out. There 
 could be more tabs for more groups of fields as needed. If you take a look at 
 the way the launch Instance modal works, I think the Job Launch/Creation 
 modals that are being designed for Savanna could be more consistent with this 
 design. This includes things like the Add button next to some of the 
 fields. Here is a screenshot of the Launch Instance Details tab for 
 reference:
 http://people.redhat.com/~lsurette/OpenStack/Launch%20Instance.png
 
 3) This is a small point, but I just want to be sure that in the table 
 designs it is expected that there would be some overall table level buttons 
 for Launch and Terminate that would allow the user to click the check box 
 for multiple items and select this action. I see in one of the mockups that 
 the checkbox is selected, but I didn't see any buttons on top of the table, 
 so I figured I would mention it.
 
 Hopefully this helps! I'm happy to chat more about these designs in detail 
 and help move them forward too. Let me know if you have any questions on my 
 thoughts here.
 
 Thanks,
 Liz
 
 On Sep 10, 2013, at 12:46 PM, Nadya Privalova nprival...@mirantis.com wrote:
 
 Hi all,
 
 I've created a temporary page for UIs mockups. Please take a look:
 https://wiki.openstack.org/wiki/Savanna/UIMockups/JobCreationProposal
 
 Chad, these are just pictures demonstrating how we see the dependencies in the 
 UI. It's not a final decision.
 Guys, feel free to comment on this. I think it's time to start discussions.
 
 Regards,
 Nadya
 
 
 On Mon, Sep 9, 2013 at 10:19 PM, Chad Roberts crobe...@redhat.com wrote:
 Updated UI mockups for savanna dashboard EDP.
 https://wiki.openstack.org/wiki/Savanna/UIMockups/JobCreation
 
 Regards,
 Chad
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Mike Spreitzer
I agree that such a thing is useful for scheduling.  I see a bit of a 
tension here: for software engineering reasons we want some independence, 
but we also want to avoid wasteful duplication.

I think we are collectively backing into the problem of metamodeling for 
datacenters, and establishing one or more software thingies that will 
contain/communicate datacenter models.  A collection of nodes annotated 
with tags is a metamodel.  You could define a graph-based metamodel 
without mandating any particular graph shape.  You could be more 
prescriptive and mandate a tree shape as a good compromise between 
flexibility and making something that is reasonably easy to process.  We 
can debate what the metamodel should be, but that is different from 
debating whether there is a metamodel.
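
To make that concrete, here is a rough, illustrative sketch -- class names, tag keys, and the parent/child edge set are invented for illustration, not taken from Tuskar or any OpenStack API -- of how a tag-annotated node collection doubles as a graph metamodel and supports quick fault aggregation:

```python
from collections import defaultdict

class Node:
    def __init__(self, name, **tags):
        self.name = name
        self.tags = tags  # e.g. rack="rack-1", pdu="pdu-a", switch="sw-7"

class Topology:
    """A graph metamodel: tagged nodes plus explicit parent/child edges.

    Restricting each node to one parent would yield the tree-shaped
    compromise; allowing many parents gives the general graph.
    """
    def __init__(self):
        self.nodes = {}
        self.children = defaultdict(set)  # parent name -> child names

    def add(self, node, parent=None):
        self.nodes[node.name] = node
        if parent is not None:
            self.children[parent].add(node.name)

    def group_by_tag(self, key):
        """Fault aggregation: which nodes share e.g. a PDU or switch."""
        groups = defaultdict(list)
        for node in self.nodes.values():
            if key in node.tags:
                groups[node.tags[key]].append(node.name)
        return dict(groups)

topo = Topology()
topo.add(Node("rack-1"))
for i in range(3):
    topo.add(Node("host-%d" % i, rack="rack-1", pdu="pdu-a"), parent="rack-1")

# If pdu-a loses power, one lookup replaces three independent investigations.
print(topo.group_by_tag("pdu"))  # {'pdu-a': ['host-0', 'host-1', 'host-2']}
```

The point is only that the same data answers both grouping queries (tags) and containment queries (edges); which shapes to mandate is the debate above.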

Regards,
Mike



From:   Tomas Sedovic tsedo...@redhat.com
To: openstack-dev@lists.openstack.org, 
Date:   09/25/2013 10:37 AM
Subject:Re: [openstack-dev] [TripleO] Generalising racks :- 
modelling a datacentre



On 09/25/2013 05:15 AM, Robert Collins wrote:
 One of the major things Tuskar does is model a datacenter - which is
 very useful for error correlation, capacity planning and scheduling.

 Long term I'd like this to be held somewhere where it is accessible
 for schedulers and ceilometer etc. E.g. network topology + switch
 information might be held by neutron where schedulers can rely on it
 being available, or possibly held by a unified topology db with
 scheduler glued into that, but updated by neutron / nova / cinder.
 Obviously this is a) non-trivial and b) not designed yet.

 However, the design of Tuskar today needs to accommodate a few things:
   - multiple reference architectures for clouds (unless there really is
 one true design)
   - the fact that today we don't have such an integrated vertical 
scheduler.

 So the current Tuskar model has three constructs that tie together to
 model the DC:
   - nodes
   - resource classes (grouping different types of nodes into service
 offerings - e.g. nodes that offer swift, or those that offer nova).
   - 'racks'

 AIUI the initial concept of Rack was to map to a physical rack, but
 this rapidly got shifted to be 'Logical Rack' rather than physical
 rack, but I think of Rack as really just a special case of a general
 modelling problem..

Yeah. Eventually, we settled on Logical Rack meaning a set of nodes on 
the same L2 network (in a setup where you would group nodes into 
isolated L2 segments). Which kind of suggests we come up with a better 
name.

I agree there's a lot more useful stuff to model than just racks (or 
just L2 node groups).


From a deployment perspective, if you have two disconnected
 infrastructures, thats two AZ's, and two underclouds : so we know that
 any one undercloud is fully connected (possibly multiple subnets, but
 one infrastructure). When would we want to subdivide that?

 One case is quick fault aggregation: if a physical rack loses power,
 rather than having 16 NOC folk independently investigating the same 16
 down hypervisors, one would prefer to identify that the power to the
 rack has failed (for non-HA powered racks); likewise if a single
 switch fails (for non-HA network topologies) you want to identify that
 that switch is down rather than investigating all the cascaded errors
 independently.

 A second case is scheduling: you may want to put nova instances on the
 same switch as the cinder service delivering their block devices, when
 possible, or split VM's serving HA tasks apart. (We currently do this
 with host aggregates, but being able to do it directly would be much
 nicer).

 Lastly, if doing physical operations like power maintenance or moving
 racks around in a datacentre, being able to identify machines in the
 same rack can be super useful for planning, downtime announcements, 
or
 host evacuation, and being able to find a specific machine in a DC is
 also important (e.g. what shelf in the rack, what cartridge in a
 chassis).

I agree. However, we should take care not to commit ourselves to 
building a DCIM just yet.


 Back to 'Logical Rack' - you can see then that having a single
 construct to group machines together doesn't really support these use
cases in a systematic fashion:- Physical rack modelling supports only a
 subset of the location/performance/failure use cases, and Logical rack
 doesn't support them at all: we're missing all the rich data we need
 to aggregate faults rapidly : power, network, air conditioning - and
 these things cover both single machine/groups of machines/racks/rows
 of racks scale (consider a networked PDU with 10 hosts on it - thats a
 fraction of a rack).

 So, what I'm suggesting is that we model the failure and performance
 domains directly, and include location (which is the incremental data
 racks add once failure and performance domains are modelled) too. We
 can separately noodle on 

[openstack-dev] Renaming PROJECT-core teams in Launchpad, deleting core PPAs

2013-09-25 Thread Thierry Carrez
Hi everyone,

We have been maintaining the core developers groups membership directly
on Gerrit for quite some time now. That said, the Launchpad PROJECT-core
teams continued to exist in Launchpad... and the lack of synchronization
between the two (similarly-named) teams created the confusion described
in https://bugs.launchpad.net/openstack-ci/+bug/1160277 .

There are two blockers: the VMT uses those teams as a convenience to
give key developers access to embargoed security bugs, and established
PPAs prevent the pure removal of the teams.

The proposed fix is to rename the PROJECT-core teams to PROJECT-coresec
and use them as convenient security contacts for embargoed security
fixes (the groups will have to be cleaned up at some point to only
contain people actually interested in helping in that area).

In order to do that, we have to delete the *PPAs* associated with those
core teams in Launchpad. Those should have been unused for a few years
now, but I figured I would give everyone a heads-up before going on
this clean-up rampage.

I'll proceed in a few days if nobody objects to this plan.

-- 
Thierry Carrez (ttx)



[openstack-dev] [Tuskar] States Discussion - YouTube stream

2013-09-25 Thread Jaromir Coufal

Hey folks,

couple of minutes ago we had conversation about Tuskar states, which was 
based on previous brainstorming in etherpad 
(https://etherpad.openstack.org/tuskar-states) - already updated version.


YouTube stream of discussion:
http://www.youtube.com/watch?v=H6N0h-D2nr4&feature=youtu.be

After the discussion, we also decided to delete the provisioned and 
unprovisioned-error states at the Node level, because they can be represented 
by other states (operational and error).


I will follow with summary of the video call and detailed description of 
each state soon.


If you have any ideas, edge cases or thoughts, they are all very welcome.

Thanks for contributing
-- Jarda


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Mike Spreitzer
Clint wrote:

 There is a third stealth-objective that CFN has caused to linger in
 Heat. That is packaging cloud applications. By allowing the 100%
 concrete CFN template to stand alone, users can ship the template.
 
 IMO this marrying of software assembly, config, and orchestration is a
 concern unto itself, and best left outside of the core infrastructure
 orchestration system.

I favor separation of concerns.  I do not follow what you are suggesting 
about how to separate these particular concerns.  Can you elaborate?

Clint also wrote:

 A ruby DSL is not something I think is ever going to happen in
 OpenStack.

Ruby is particularly good when the runtime scripting is done through chef 
or puppet, which are based on Ruby.  For example, Weaver supports chef 
based scripting, and integrates in a convenient way.

A distributed system does not all have to be written in the same language.

Thomas wrote:

 I don't fully get this idea of HOT consuming a monolithic model produced 
by
 some compiler - be it Weaver or anything else.
 I thought the goal was to develop HOT in a way that users can actually
 write HOT, as opposed to having to use some compiler to produce some
 useful model.
 So wouldn't it make sense to make sure we add the right concepts to HOT 
to
 make sure we are able to express what we want to express and have things
 like composability, re-use, substitutability?

I am generally suspicious of analogies, but let me offer one here.  In the 
realm of programming languages, many have great features for modularity 
within one source file.  These features are greatly appreciated and used. 
But that does not stop people from wanting to maintain sources factored 
into multiple files.

Back to the world at hand, I do not see a conflict between (1) making a 
language for monoliths with sophisticated internal structure and (2) 
defining one or more languages for non-monolithic sources.

Thomas wrote:
 As said in my comment above, I would like to see us focusing on the
 agreement of one language - HOT - instead of yet another DSL.
 There are things out there that are well established (like chef or 
puppet),
 and HOT should be able to efficiently and intuitively use those things 
and
 orchestrate components built using those things.

Yes, it may be that our best tactic at this point is to allow multiple 
(2), some or all not defined through the OpenStack Foundation, while 
agreeing here on (1).

Thomas wrote:
 Anyway, this might be off the track that was originally discussed in 
this
 thread (i.e. holistic scheduling and so on) ...

We are engaged in a boundary-drawing and relationship-drawing exercise.  I 
brought up this idea of a software orchestration compiler to show why I 
think the software orchestration preparation stage is best done earlier 
rather than later.

Regards,
Mike


Re: [openstack-dev] [keystone] Case sensitivity backend databases

2013-09-25 Thread Dolph Mathews
On Wed, Sep 25, 2013 at 3:45 AM, Henry Nash hen...@linux.vnet.ibm.com wrote:

 Hi

 Do we specify somewhere whether text field matching in the API is
 case-sensitive or case-insensitive?  I'm thinking about filters, as well as user and
 domain names in authentication.  I think our current implementation will
 always be case sensitive for filters  (since we do that in python and do
 not, yet, pass the filters to the backends), while authentication will
 reflect the case sensitivity or lack thereof of the underlying database.
  I believe that MySQL is case-insensitive by default, while Postgres,
 SQLite, and others are case-sensitive by default.  If using an LDAP
 backend, then I think this is case-sensitive.

 The above seems to be inconsistent.  It might become even more so when we
 pass the filters to the backend.  Given that other projects already pass
 filters to the backend, we may also have inter-project inconsistencies that
 bleed through to the user experience.  Should we make at least a
 recommendation that the backend should be case-sensitive (you can configure
 MySQL to be so)?  Insist on it? Ignore it and keep things as they are?


Relevant bug: https://bugs.launchpad.net/keystone/+bug/1229093

I've never heard anyone complain that keystone is case sensitive, nor would I
expect it to be case-insensitive. There is a clear expectation /
assumption that we are case sensitive, and I'd like to assert that to be
true in general, across driver implementations.

However, there are times when you might want to make case-insensitive
comparisons, but that should be hidden from the user. For example:

  create_user(name='dolph', password='asdf') # should succeed
  create_user(name='Dolph', password='asdf') # should fail, because a
case-insensitive equivalent already exists
  authenticate(name='Dolph', password='asdf') # should fail, because the
user that exists was created with a lowercase name
  get_user_by_name(name='Dolph') # should fail, because the user that
exists was created with a lowercase name (until we get into
case-sensitivity hints for the driver)

We don't provide for the behavior illustrated in the second line above in
keystone today, but I think it's worth considering (it's not trivial with
UTF-8 support, though).
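
For illustration only -- this is not keystone code, and the class and helper names are invented -- the behavior above can be sketched as a store that enforces case-insensitive uniqueness at create time while matching names case-sensitively everywhere else:

```python
class DuplicateUser(Exception):
    pass

class UserStore:
    """Toy sketch: caseless uniqueness, case-sensitive lookup/auth."""

    def __init__(self):
        self._users = {}  # exact stored name -> password

    def create_user(self, name, password):
        # Reject 'Dolph' if 'dolph' exists: compare case-insensitively.
        # (For real UTF-8 support, str.casefold() is closer to correct
        # than lower(), and even that is not the whole story.)
        if any(existing.lower() == name.lower() for existing in self._users):
            raise DuplicateUser(name)
        self._users[name] = password

    def authenticate(self, name, password):
        # Exact, case-sensitive match against the stored name.
        return self._users.get(name) == password

store = UserStore()
store.create_user('dolph', 'asdf')      # succeeds
store.authenticate('dolph', 'asdf')     # True
store.authenticate('Dolph', 'asdf')     # False: wrong case
try:
    store.create_user('Dolph', 'asdf')  # fails: caseless duplicate exists
except DuplicateUser:
    pass
```

A get_user_by_name would use the same exact-match rule as authenticate, matching the fourth line of the example above.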


 Henry






-- 

-Dolph


Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Keith Basil
On Sep 25, 2013, at 10:36 AM, Tomas Sedovic wrote:

 On 09/25/2013 05:15 AM, Robert Collins wrote:
 One of the major things Tuskar does is model a datacenter - which is
 very useful for error correlation, capacity planning and scheduling.

Tuskar was designed for general infrastructure modeling within the scope of 
OpenStack.

Yes, Tuskar could be used to model a datacenter but that was not its original 
design goal.  This is not to say that modeling a datacenter wouldn't be useful 
and some of the points, concepts and ideas later in the post are very good.

But in terms of an MVP, we were focused on providing an easy approach for cloud 
operators wishing to deploy OpenStack.  What we're seeing is a use case where 
deployments are fairly small (small being 2-30 racks of gear).


 Long term I'd like this to be held somewhere where it is accessible
 for schedulers and ceilometer etc. E.g. network topology + switch
 information might be held by neutron where schedulers can rely on it
 being available, or possibly held by a unified topology db with
 scheduler glued into that, but updated by neutron / nova / cinder.
 Obviously this is a) non-trivial and b) not designed yet.
 
 However, the design of Tuskar today needs to accommodate a few things:
  - multiple reference architectures for clouds (unless there really is
 one true design)

  - the fact that today we don't have such an integrated vertical scheduler.

+1 to both, but recognizing that these are long term asks. 

 So the current Tuskar model has three constructs that tie together to
 model the DC:
  - nodes
  - resource classes (grouping different types of nodes into service
 offerings - e.g. nodes that offer swift, or those that offer nova).
  - 'racks'
 
 AIUI the initial concept of Rack was to map to a physical rack, but
 this rapidly got shifted to be 'Logical Rack' rather than physical
 rack, but I think of Rack as really just a special case of a general
 modelling problem..
 
 Yeah. Eventually, we settled on Logical Rack meaning a set of nodes on the 
 same L2 network (in a setup where you would group nodes into isolated L2 
 segments). Which kind of suggests we come up with a better name.
 
 I agree there's a lot more useful stuff to model than just racks (or just L2 
 node groups).

Indeed.  We chose the label rack because most folk understand it.  When 
generating a bill of materials for cloud gear for example, people tend to think 
in rack elevations, etc.  The rack model breaks down a bit when you start to 
consider things like system on chip solutions like Moonshot with the 
possibility of a number of chassis within a physical rack.  This  prompted 
further refinement of the concept.  And as Tomas mentioned, we have shifted to 
logical racks based on L2 binding between nodes.  Better, more fitting naming 
ideas here are welcome. 


 From a deployment perspective, if you have two disconnected
 infrastructures, thats two AZ's, and two underclouds : so we know that
 any one undercloud is fully connected (possibly multiple subnets, but
 one infrastructure). When would we want to subdivide that?
 
 One case is quick fault aggregation: if a physical rack loses power,
 rather than having 16 NOC folk independently investigating the same 16
 down hypervisors, one would prefer to identify that the power to the
 rack has failed (for non-HA powered racks); likewise if a single
 switch fails (for non-HA network topologies) you want to identify that
 that switch is down rather than investigating all the cascaded errors
 independently.
 
 A second case is scheduling: you may want to put nova instances on the
 same switch as the cinder service delivering their block devices, when
 possible, or split VM's serving HA tasks apart. (We currently do this
 with host aggregates, but being able to do it directly would be much
 nicer).
 
 Lastly, if doing physical operations like power maintenance or moving
 racks around in a datacentre, being able to identify machines in the
 same rack can be super useful for planning, downtime announcements, 
  or
 host evacuation, and being able to find a specific machine in a DC is
 also important (e.g. what shelf in the rack, what cartridge in a
 chassis).
 
 I agree. However, we should take care not to commit ourselves to building a 
 DCIM just yet.
 
 
 Back to 'Logical Rack' - you can see then that having a single
 construct to group machines together doesn't really support these use
 cases in a systematic fashion:- Physical rack modelling supports only a
 subset of the location/performance/failure use cases, and Logical rack
 doesn't support them at all: we're missing all the rich data we need
 to aggregate faults rapidly : power, network, air conditioning - and
 these things cover both single machine/groups of machines/racks/rows
 of racks scale (consider a networked PDU with 10 hosts on it - thats a
 fraction of a rack).
 
 So, what I'm suggesting 

[openstack-dev] [Heat] Meeting agenda for Wed Sept 25th at 2000 UTC

2013-09-25 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed Sept 25th at 2000 UTC

Current topics for discussion:
* Review last week's actions
* RC1 bug status
* Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve



[openstack-dev] [nova] automatically evacuate instances on compute failure

2013-09-25 Thread Chris Friesen
I'm interested in automatically evacuating instances in the case of a 
failed compute node.  I found the following blueprint that covers 
exactly this case:


https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

However, the comments there seem to indicate that the code that 
orchestrates the evacuation shouldn't go into nova (referencing the 
Havana design summit).


Why wouldn't this type of behaviour belong in nova?  (Is there a summary 
of discussions at the summit?)  Is there a recommended place where this 
sort of thing should go?


Thanks,
Chris



Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread Joshua Harlow
I agree with all that u guys are saying and I think that the current PTL's
have done a great job and I know that there is a lot to take under
consideration when submitting a potential PTL candidacy and that its all
about delegating, integrating, publicizing.

I don't think any of that is in question.

I am just more concerned about the 'diversity' issue, which looking at
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates is imho
lacking (1 person elections aren't really elections). Now of course this
may not be an immediate problem, but it does not seem to be the ideal
situation a community would be in; I just imagine a community that has a
multi-person elections (those multi-people don't need to be at each others
throats, or even competitors, or any of that) and which thrives off the
diversity of those different people.

It just seems like something we can work on as a community, to ensure that
there is diversity.

-Josh

On 9/25/13 4:31 AM, Flavio Percoco fla...@redhat.com wrote:

On 25/09/13 11:29 +0200, Thierry Carrez wrote:
Joshua Harlow wrote:
 +2

 I think we need to as a community figure out why this is the case and
 figure out ways to make it not the case.

 Is it education around what a PTL is? Is it lack of time? Is it
something
 else?

In my view the PTL handles three roles: final decider on
program-specific issues, release management liaison (for programs
containing an integrated project) and program ambassador (natural point
of contact). Note that the last two roles can be delegated.

If you don't delegate anything then it's a lot of work, especially for
programs with large integrated projects -- so if the current PTL does a
great job and runs for election again, I suspect everyone else doesn't
feel the urge to run against him.

FWIW I don't think established PTLs mind being challenged at all. If
anything, in the past this served to identify people interested in
project management that could help in the PTL role and serve in a
succession strategy. So you shouldn't fear to piss of the established
PTL by challenging them :)


I agree with Thierry here.

The PTL role takes time and dedication which is the first thing people
must be aware of before submitting their candidacy. I'm very happy
with the job current PTLs have done, although I certainly don't have a
360 view. This should also be taken under consideration, before
submitting a PTL candidacy, I expect people to ask themselves - and
then share with others - what their plan is for the next development
cycle, how they can improve the project they want to run for, etc.

IMHO, the fact that there hasn't been many candidacies means that
folks are happy with the work current PTLs have done and would love to
have them around for another release cycle. However, this doesn't mean
that folks that have submitted their candidacy are not happy with the
current PTL and I'm very happy to see other folks willing to run for
the PTL position.

I also think that PTLs have integrated the community at large in their
PTL role and this has definitely helped folks to participate in the
decision process. I've never thought about PTLs as final deciders but
as the ones responsible for leading the team towards a decision that
reflects the best interest of the project.

That being said, I wouldn't worry that much for not seeing so many
candidacies. I think this fits into the Lazy Consensus concept.

Cheers,
FF

-- 
@flaper87
Flavio Percoco





Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread Tim Bell

It would be interesting to compare candidates with who stood previously. I also 
welcome elections but if the previous PTL is
standing down and there is only one new candidate, that is more healthy than a 
single person consistently in the same position
(although maybe not ideal).

Are there any statistics as to a change of PTL (even if there is only one 
candidate standing) ?

Tim

 -Original Message-
 From: Joshua Harlow [mailto:harlo...@yahoo-inc.com]
 Sent: 25 September 2013 20:15
 To: OpenStack Development Mailing List; Flavio Percoco
 Subject: Re: [openstack-dev] Current list of confirmed PTL Candidates
 
 I agree with all that u guys are saying and I think that the current PTL's 
 have done a great job and I know that there is a lot to
take under
 consideration when submitting a potential PTL candidacy and that its all 
 about delegating, integrating, publicizing.
 
 I don't think any of that is in question.
 
 I am just more concerned about the 'diversity' issue, which looking at
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates
 is imho lacking (1 person elections aren't really elections). Now of course 
 this may not be an immediate problem, but it does not
seem to
 be the ideal situation a community would be in; I just imagine a community 
 that has a multi-person elections (those multi-people
don't
 need to be at each others throats, or even competitors, or any of that) and 
 which thrives off the diversity of those different
people.
 
 It just seems like something we can work on as a community, to ensure that 
 there is diversity.
 
 -Josh
 





Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread John Griffith
On Wed, Sep 25, 2013 at 12:15 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 I agree with all that u guys are saying and I think that the current PTL's
 have done a great job and I know that there is a lot to take under
 consideration when submitting a potential PTL candidacy and that its all
 about delegating, integrating, publicizing.

 I don't think any of that is in question.

 I am just more concerned about the 'diversity' issue, which looking at
 https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates is imho
 lacking (1 person elections aren't really elections). Now of course this
 may not be an immediate problem, but it does not seem to be the ideal
 situation a community would be in; I just imagine a community that has a
 multi-person elections (those multi-people don't need to be at each others
 throats, or even competitors, or any of that) and which thrives off the
 diversity of those different people.

 It just seems like something we can work on as a community, to ensure that
 there is diversity.

 -Josh

 On 9/25/13 4:31 AM, Flavio Percoco fla...@redhat.com wrote:

 On 25/09/13 11:29 +0200, Thierry Carrez wrote:
 Joshua Harlow wrote:
  +2
 
  I think we need to as a community figure out why this is the case and
  figure out ways to make it not the case.
 
  Is it education around what a PTL is? Is it lack of time? Is it
 something
  else?
 
 In my view the PTL handles three roles: final decider on
 program-specific issues, release management liaison (for programs
 containing an integrated project) and program ambassador (natural point
 of contact). Note that the last two roles can be delegated.
 
 If you don't delegate anything then it's a lot of work, especially for
 programs with large integrated projects -- so if the current PTL does a
 great job and runs for election again, I suspect everyone else doesn't
 feel the urge to run against him.
 
 FWIW I don't think established PTLs mind being challenged at all. If
 anything, in the past this served to identify people interested in
 project management that could help in the PTL role and serve in a
 succession strategy. So you shouldn't fear to piss of the established
 PTL by challenging them :)
 
 
 I agree with Thierry here.
 
 The PTL role takes time and dedication which is the first thing people
 must be aware of before submitting their candidacy. I'm very happy
 with the job current PTLs have done, although I certainly don't have a
 360 view. This should also be taken under consideration, before
 submitting a PTL candidacy, I expect people to ask themselves - and
 then share with others - what their plan is for the next development
 cycle, how they can improve the project they want to run for, etc.
 
 IMHO, the fact that there hasn't been many candidacies means that
 folks are happy with the work current PTLs have done and would love to
 have them around for another release cycle. However, this doesn't mean
 that folks that have submitted their candidacy are not happy with the
 current PTL and I'm very happy to see other folks willing to run for
 the PTL position.
 
 I also think that PTLs have integrated the community at large in their
 PTL role and this has definitely helped folks to participate in the
 decision process. I've never thought about PTLs as final deciders but
 as the ones responsible for leading the team towards a decision that
 reflects the best interest of the project.
 
 That being said, I wouldn't worry that much for not seeing so many
 candidacies. I think this fits into the Lazy Consensus concept.
 
 Cheers,
 FF
 
 --
 @flaper87
 Flavio Percoco
 




I've put a request to all those in the Cinder team meeting this morning
that if they have any interest/desire that they should freely submit their
candidacy today (I've even advised some folks that I felt they would make
good candidates).  Other than openly encouraging others to run for the
position I'm not quite sure what folks would like to propose with respect
to this thread and the concerns that they have raised.  I've also had
conversations in IRC with multiple cinder-core team members to the same
effect.

The fact is you can't force people to run for the position, however you
can make it clear that it's an open process and encourage folks that have
interest.  I think we've always done that, and I think now even more than
before we've made it explicit.

Thanks,
John


Re: [openstack-dev] [TripleO] Generalising racks :- modelling a datacentre

2013-09-25 Thread Narayan Desai
We're interested in topology support in order to support placement locality
(to optimize task placement in ensembles). Vish and I started talking about
what would be needed a few months ago, and came up with two approaches that
would work:
 - modelling the system as a full graph (ie rich enough topology
information in order to represent orthogonal concerns, like power and
network, for example)
 - a limited approach where location was described through a feature vector
that could be used for determining group diameter, which could in turn be
used to compute group affinity and dispersion.

We're also beginning to think about trying to expose network topology
upwards for scheduling as well. When you are interested in full topology,
you can't take any shortcuts, so I think that we're stuck with a full graph
for this.

It definitely makes sense to have a well maintained, flexible single
definition of this data that can be used everywhere.
 -nld


[openstack-dev] [savanna] sessions for design summit

2013-09-25 Thread Sergey Lukjanov
Hi folks,

Savanna has been accepted for incubation, so we have 4 slots at the 
Icehouse Design Summit, which is awesome.

Please feel free to suggest sessions about Savanna using topic savanna at 
http://summit.openstack.org/ and comment on existing sessions.

You have until mid-October to suggest sessions. Suggested sessions will be 
reviewed/merged/scheduled after that.

More information about the Icehouse Design Summit can be found at 
https://wiki.openstack.org/wiki/Summit

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.




[openstack-dev] [Heat] PTL candidacy

2013-09-25 Thread Clint Byrum
I would like to formally announce my candidacy for Heat PTL.

It has been my extreme pleasure to work with the fine Heat development
community since this past December. My interest in Heat started with using
it as one of the many cogs in the TripleO deployment machine. However,
I have grown to really appreciate just how much work it takes to ship
a stable orchestration engine.

I see PTL as a communication focal point and an enabling position for
the development community around Heat. As a good friend once described
a similar position to me, the PTL is the servant of the servants.
Of course, one duty of that servant is to relieve the others of turmoil
when there is need for a PTL hammer swing. I have clashed at times with
the other Heat developers, but I think that is a sign of a healthy dialog,
and not a sign of trouble at all. We are all expressing our opinions in
a healthy manner, and that should lead to a better code base and more
rational decisions.

In Icehouse, I see Heat becoming more user driven. We have seen several
real users pop up here late in the Havana cycle, and I think we will see
quite a few more over the next cycle. I'd like to address any concerns
they have while deploying Havana, and make Icehouse the most boring,
and thus useful release of Heat ever. Will we ship features? Oh yes of
course we will ship features. We are integrating Heat with all of the
other relevant projects and there will have to be a feature explosion to
accommodate all of that. But primarily, we need to take care of the users,
and ship a stable, useful Orchestration system and supporting tools.

So, I humbly prostrate myself upon the Heat developers and ask that you
consider me for such a position.

Some facts for those who are data driven:

* http://russellbryant.net/openstack-stats/heat-reviewers-90.txt shows
  me as 4th in review volume over the last 90 days.
* The same link also shows that if you sort by ascending +/-% of core
  devs, I am by far the meanest core reviewer. More time spent in release
  meetings as PTL probably means you will have to deal with less of my
  incessant nit picking.
* 
https://bugs.launchpad.net/heat/+bugs?search=Search&field.bug_reporter=clint-fewbar
  I have opened 22 bugs that are still open (and many that are closed.) I
  have quite a bit of motivation to aid the Heat development community
  in getting not only my bugs, but as many of the bugs in Heat addressed
  as possible.
* My name is not, in fact, Steve. You may consider this a pro, or a con.

Clint Byrum
Senior Server/Cloud Software Engineer - HP



Re: [openstack-dev] Current list of confirmed PTL Candidates

2013-09-25 Thread David Kranz

On 09/25/2013 02:15 PM, Joshua Harlow wrote:

I agree with all that you guys are saying and I think that the current PTLs
have done a great job and I know that there is a lot to take under
consideration when submitting a potential PTL candidacy and that it's all
about delegating, integrating, publicizing.

I don't think any of that is in question.

I am just more concerned about the 'diversity' issue, which looking at
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013#Candidates is imho
lacking (1 person elections aren't really elections). Now of course this
may not be an immediate problem, but it does not seem to be the ideal
situation a community would be in; I just imagine a community that has a
multi-person elections (those multi-people don't need to be at each others
throats, or even competitors, or any of that) and which thrives off the
diversity of those different people.
These are all legitimate concerns, but I am more grateful that the 
number of PTL volunteers in each project is non-zero than concerned 
that it is only one. IMO, the feel of the community is more like a 
volunteer civic or religious organization, where a single candidate for 
leadership positions that involve a lot of work is the norm, and some 
kind of rotation may also occur.


 -David





It just seems like something we can work on as a community, to ensure that
there is diversity.

-Josh

On 9/25/13 4:31 AM, Flavio Percoco fla...@redhat.com wrote:


On 25/09/13 11:29 +0200, Thierry Carrez wrote:

Joshua Harlow wrote:

+2

I think we need to as a community figure out why this is the case and
figure out ways to make it not the case.

Is it education around what a PTL is? Is it lack of time? Is it
something
else?

In my view the PTL handles three roles: final decider on
program-specific issues, release management liaison (for programs
containing an integrated project) and program ambassador (natural point
of contact). Note that the last two roles can be delegated.

If you don't delegate anything then it's a lot of work, especially for
programs with large integrated projects -- so if the current PTL does a
great job and runs for election again, I suspect everyone else doesn't
feel the urge to run against him.

FWIW I don't think established PTLs mind being challenged at all. If
anything, in the past this served to identify people interested in
project management that could help in the PTL role and serve in a
succession strategy. So you shouldn't fear pissing off the established
PTL by challenging them :)


I agree with Thierry here.

The PTL role takes time and dedication, which is the first thing people
must be aware of before submitting their candidacy. I'm very happy
with the job current PTLs have done, although I certainly don't have a
360 view. This should also be taken into consideration. Before
submitting a PTL candidacy, I expect people to ask themselves - and
then share with others - what their plan is for the next development
cycle, how they can improve the project they want to run for, etc.

IMHO, the fact that there haven't been many candidacies means that
folks are happy with the work current PTLs have done and would love to
have them around for another release cycle. However, this doesn't mean
that folks that have submitted their candidacy are not happy with the
current PTL, and I'm very happy to see other folks willing to run for
the PTL position.

I also think that PTLs have integrated the community at large in their
PTL role and this has definitely helped folks to participate in the
decision process. I've never thought about PTLs as final deciders but
as the ones responsible for leading the team towards a decision that
reflects the best interest of the project.

That being said, I wouldn't worry that much about not seeing so many
candidacies. I think this fits into the Lazy Consensus concept.

Cheers,
FF

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread Ravi Chunduru
I got this working after I made the guest behave as a serial device and the
host-side program act as a unix-socket-based client.
Now I'm all set to collaborate on the BP with the use case.

Thanks,
-Ravi.


On Wed, Sep 25, 2013 at 8:09 AM, Ravi Chunduru ravi...@gmail.com wrote:

 I am working on this generic virtio-serial interface for appliances. To
 start with, I experimented with Wangpan's existing hw_qemu_guest_agent
 feature. I am preparing to propose a blueprint to modify it for generic
 use and am open to collaboration.

 I could bring up a VM with a generic source path (say /tmp/appliance_port)
 and target name (appliance_port). But I see qemu listening on the unix
 socket on the host as soon as I start the VM. If we want to have our own
 server program on the host listening, that should not happen. How do I
 overcome that?
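On the qemu-listening question: in libvirt's domain XML, a unix-type channel source carries a mode attribute; as far as I can tell, mode='bind' makes qemu create and listen on the socket, while mode='connect' makes qemu connect to a socket a host-side server already listens on. A sketch of building such an element (the path and channel name are just the examples from this thread, and this is illustrative, not Nova code):

```python
# Sketch: building a libvirt <channel> element for a virtio-serial port.
# With mode='connect', qemu connects to an existing host-side socket
# (your server listens); with mode='bind', qemu creates and listens on it.
import xml.etree.ElementTree as ET

def make_channel(path, name, qemu_listens=False):
    channel = ET.Element("channel", type="unix")
    ET.SubElement(channel, "source",
                  mode="bind" if qemu_listens else "connect",
                  path=path)
    ET.SubElement(channel, "target", type="virtio", name=name)
    return ET.tostring(channel, encoding="unicode")

print(make_channel("/tmp/appliance_port", "appliance_port"))
```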

 Thanks,
 -Ravi.



 On Wed, Sep 25, 2013 at 3:01 AM, P Balaji-B37839 b37...@freescale.comwrote:

 

 Hi Wangpan,

 Thanks for Information and suggestions.

 We want to have a generic virtio-serial interface for libvirt, and
 applications can use it irrespective of the QEMU guest agent in the VM.

 As suggested, Daniel can throw some light on this and help us.

 Regards,

 Balaji.P


 From: Wangpan [mailto:hzwang...@corp.netease.com]
 Sent: Wednesday, September 25, 2013 3:24 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support
 for Nova libvirt driver

 ** **

 Hi all,

  

 I'm the owner of this bp
 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support

 and Daniel Berrange gave me lots of help about implementing this bp, and
 the original idea of mine is the same as yours.

 So I think the opinion of Daniel will be very useful.

  

 2013-09-25
 Wangpan

 From: balaji patnala patnala...@gmail.com
 Sent: 2013-09-25 22:36
 Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for
 Nova libvirt driver
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Cc:
  

 Hi Haomai, 


 Thanks for your interest on this.


 The code check-ins done against the below bp are more specific to Qemu
 Guest Agent.


  https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 Our requirement is to enable Virtio-Serial Interface to the applications
 running in VM.


 Do you have the same requirement?


 We will share the draft BP on this.


 Any comments on this approach will be helpful.


 Regards,

 Balaji.P


 On Tue, Sep 24, 2013 at 8:10 PM, Haomai Wang hao...@unitedstack.com
 wrote:


 On Sep 24, 2013, at 6:40 PM, P Balaji-B37839 b37...@freescale.com
 wrote:

  Hi,
 
   Virtio-Serial interface support for Nova - Libvirt is not available
  now. Some VMs that want to access the host may need it, e.g. for running
  qemu-guest-agent or any proprietary software that wants to use this mode
  of communication with the host.
 
  Qemu-GA uses virtio-serial communication.
 
  We want to propose a blue-print on this for IceHouse Release.
 
  Anybody interested on this.

 Great! We have common interest and I hope we can promote it for IceHouse.

 BTW, do you have a initial plan or description about it.

  And I think this bp may be relevant:
  https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 
  Regards,
  Balaji.P
 
 

 Best regards,
 Haomai Wang, UnitedStack Inc.









 --
 Ravi




-- 
Ravi


[openstack-dev] [savanna] team meeting reminder September 26

2013-09-25 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in the #openstack-meeting-alt 
channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_September.2C_26

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20130926T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



[openstack-dev] [qa] Consider rechecking your patches

2013-09-25 Thread David Kranz
There was a gate failure yesterday. A lot of tempest patches have a -1 
from jenkins and I suspect a lot of them were victims. It would be a 
good idea to look at any such patches that belong to you and do a 
'recheck bug 1229797' (or 'reverify' as the case may be). That is the 
tracking bug for this issue: 
https://bugs.launchpad.net/openstack-ci/+bug/1229797


 -David



Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

2013-09-25 Thread Gabriel Hurley
After reading your description in the other email, I might suggest the term 
“hardware profile” instead of “class”. Just a thought.


-  Gabriel

From: Jaromir Coufal [mailto:jcou...@redhat.com]
Sent: Wednesday, September 25, 2013 6:11 AM
To: OpenStack Development Mailing List
Cc: Gabriel Hurley
Subject: Re: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

Hi Gabriel,

thanks for following this thread and having a look at the wireframes. Regarding 
the term 'resource class', the naming is what we arrived at during our initial 
attempts. It's not the final version, so if there are concerns, there is no 
problem in finding a more accurate one (we just couldn't find better). As for the 
resource class definition, I tried to explain it a bit more in my reply to Rob's 
mail (in this thread), so if you get that one, I hope it will help to answer and 
explain the concept of classes a little bit more.

If you still have any concerns, let me know I will try to be more explicit.
-- Jarda
On 2013/25/09 02:03, Gabriel Hurley wrote:
Really digging a lot of that. Particularly the inter-rack/inter-node 
communication stuff around page 36ish or so.

I’m concerned about using the term “Class”. Maybe it’s just me as a developer, 
but I couldn’t think of a more generic, less inherently meaningful word there. 
I read through it and I still only vaguely understand what a “Class” is in this 
context. We either need better terminolody or some serious documentation/user 
education on that one.

Also, I can’t quite say how, but I feel like the “Class” stuff ought to be 
meshed with the Resource side of things. The separation seems artificial and 
based more on the API structure (presumably?) than on the most productive user 
flow when interacting with that system. Maybe start with the question “if the 
system were empty, what would I need to do and how would I find it?”

Very cool though.


-  Gabriel

From: Jaromir Coufal [mailto:jcou...@redhat.com]
Sent: Tuesday, September 24, 2013 2:04 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Tuskar] [UI] Introducing POC Wireframes

 Hey folks,

I want to introduce our direction of Tuskar UI, currently described with POC 
wireframes. Keep in mind, that wireframes which I am sending were made for 
purpose of proof of concept (which was built and released in August) and there 
are various changes since then, which were already adopted. However, basic 
concepts are staying similar. Any updates for wireframes and future direction 
will be sent here to the dev-list for feedback and reviews.

http://people.redhat.com/~jcoufal/openstack/tuskar/2013-07-11_tuskar_poc_wireframes.pdf

Just quick description of what is happening there:
* 1st step implementation - Layouts (page 2)
- just showing that we are re-using all Horizon components and layouts
* Where we are heading - Layouts (page 8)
- possible smaller improvements to Horizon concepts
- majority just smaller CSS changes in POC timeframe scope
* Resource Management - Flavors (page 15) - ALREADY REMOVED
- these were templates for flavors, which were part of selection in 
resource class creation process
- currently the whole flavor definition moved under compute resource class 
completely (templates are no longer used)
* Resource Management - Resources (page 22)
- this is rack management
- creation workflow was based on currently obsolete data (settings are 
going to be changed a bit)
- upload rack needs to make sure that we know some standard csv file format 
(can we specify some?)
- detail page of rack and node, which are going through enhancement process
* Resource Management - Classes (page 40)
- resource class management
- few changes will happen here as well regarding creation workflow
- detail page is going through enhancements as well as racks/nodes detail 
pages
* Graphic Design
- just showing the very similar look and feel as OpenStack Dashboard
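On the rack-upload CSV question above, no standard format seems to exist yet; a purely hypothetical minimal format (all column names are invented here, not a Tuskar specification) could be parsed like this:

```python
# Hypothetical rack-upload CSV format; column names are invented for
# illustration, not a Tuskar specification.
import csv
import io

SAMPLE = """rack,node,mac_address,cpu,memory_mb,local_gb
rack1,node-1,00:16:3e:aa:bb:01,8,16384,500
rack1,node-2,00:16:3e:aa:bb:02,8,16384,500
"""

def parse_rack_csv(text):
    """Group node names by the rack they belong to."""
    racks = {}
    for row in csv.DictReader(io.StringIO(text)):
        racks.setdefault(row["rack"], []).append(row["node"])
    return racks

print(parse_rack_csv(SAMPLE))  # {'rack1': ['node-1', 'node-2']}
```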

If you have any further questions, just follow this thread, I'll be very happy 
to answer as much as possible.

Cheers,
-- Jarda






Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2013-09-25 00:59:44 -0700:
 Clint Byrum cl...@fewbar.com wrote on 25.09.2013 08:46:57:
  From: Clint Byrum cl...@fewbar.com
  To: openstack-dev openstack-dev@lists.openstack.org,
  Date: 25.09.2013 08:48
  Subject: Re: [openstack-dev] [heat] [scheduler] Bringing things
  together for Icehouse (now featuring software orchestration)
 
  Excerpts from Mike Spreitzer's message of 2013-09-24 22:03:21 -0700:
   Let me elaborate a little on my thoughts about software orchestration,
 and
   respond to the recent mails from Zane and Debo.  I have expanded my
   picture at
   https://docs.google.com/drawings/d/
  1Y_yyIpql5_cdC8116XrBHzn6GfP_g0NHTTG_W4o0R9U
   and added a companion picture at
   https://docs.google.com/drawings/d/1TCfNwzH_NBnx3bNz-
  GQQ1bRVgBpJdstpu0lH_TONw6g
   that shows an alternative.
  
   One of the things I see going on is discussion about better techniques
 for
   software orchestration than are supported in plain CFN.  Plain CFN
 allows
   any script you want in userdata, and prescription of certain additional
 
   setup elsewhere in cfn metadata.  But it is all mixed together and very
 
   concrete.  I think many contributors would like to see something with
 more
   abstraction boundaries, not only within one template but also the
 ability
   to have modular sources.
  
 
  Yes please. Orchestrate things, don't configure them. That is what
  configuration tools are for.
 
  There is a third stealth-objective that CFN has caused to linger in
  Heat. That is packaging cloud applications. By allowing the 100%
  concrete CFN template to stand alone, users can ship the template.
 
  IMO this marrying of software assembly, config, and orchestration is a
  concern unto itself, and best left outside of the core infrastructure
  orchestration system.
 
   I work closely with some colleagues who have a particular software
   orchestration technology they call Weaver.  It takes as input for one
   deployment not a single monolithic template but rather a collection of
   modules.  Like higher level constructs in programming languages, these
   have some independence and can be re-used in various combinations and
   ways.  Weaver has a compiler that weaves together the given modules to
   form a monolithic model.  In fact, the input is a modular Ruby program,
 
   and the Weaver compiler is essentially running that Ruby program; this
   program produces the monolithic model as a side effect.  Ruby is a
 pretty
   good language in which to embed a domain-specific language, and my
   colleagues have done this.  The modular Weaver input mostly looks
   declarative, but you can use Ruby to reduce the verboseness of, e.g.,
   repetitive stuff --- as well as plain old modularity with abstraction.
 We
   think the modular Weaver input is much more compact and better for
 human
   reading and writing than plain old CFN.  This might not be obvious when
 
   you are doing the hello world example, but when you get to realistic
   examples it becomes clear.
  
   The Weaver input discusses infrastructure issues, in the rich way Debo
 and
   I have been advocating, as well as software.  For this reason I
 describe
   it as an integrated model (integrating software and infrastructure
   issues).  I hope for HOT to evolve to be similarly expressive to the
   monolithic integrated model produced by the Weaver compiler.
 
 I don't fully get this idea of HOT consuming a monolithic model produced by
 some compiler - be it Weaver or anything else.
 I thought the goal was to develop HOT in a way that users can actually
 write HOT, as opposed to having to use some compiler to produce some
 useful model.
 So wouldn't it make sense to make sure we add the right concepts to HOT to
 make sure we are able to express what we want to express and have things
 like composability, re-use, substitutability?
 

We saw this in the history of puppet in fact, where the DSL was always the
problem when trying to make less-than-obvious components, and eventually
puppet had to grow a full ruby dsl to avoid those mistakes and keep up
with Chef's language-first approach.

  
 
  Indeed, we're dealing with this very problem in TripleO right now. We
 need
  to be able to compose templates that vary slightly for various reasons.
 
  A ruby DSL is not something I think is ever going to happen in
  OpenStack. But python has its advantages for DSL as well. I have been
  trying to use clever tricks in yaml for a while, but perhaps we should
  just move to a client-side python DSL that pushes the compiled yaml/json
  templates into the engine.
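As a toy illustration of the client-side DSL idea just described (a pure sketch: the class names and template shape below are invented, not Heat's), resources could be composed in Python and compiled into the monolithic JSON the engine actually consumes:

```python
# Toy client-side DSL sketch: compose resources in Python, emit the
# monolithic JSON template the engine consumes. Class and property
# names here are invented, not Heat's.
import json

class Resource:
    def __init__(self, name, rtype, **props):
        self.name, self.rtype, self.props = name, rtype, props

class Template:
    def __init__(self, *resources):
        self.resources = resources

    def compile(self):
        return json.dumps({
            "Resources": {
                r.name: {"Type": r.rtype, "Properties": r.props}
                for r in self.resources
            }
        }, indent=2, sort_keys=True)

def web_tier(n):
    """Reusable module: n identically configured servers."""
    return [Resource("web%d" % i, "OS::Nova::Server", flavor="m1.small")
            for i in range(n)]

print(Template(*web_tier(2)).compile())
```

The point of the sketch: the re-usable pieces (like web_tier) live client-side in an ordinary language, while the engine only ever sees the compiled, concrete template.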
 
 As said in my comment above, I would like to see us focusing on the
 agreement of one language - HOT - instead of yet another DSL.
 There are things out there that are well established (like chef or puppet),
 and HOT should be able to efficiently and intuitively use those things and
 orchestrate components built using those things.
 
 Anyway, this might be off the 

Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-25 Thread Mike Spreitzer
Debo, Yathi: I have read 
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit?pli=1
 
and most of the referenced materials, and I have a couple of big-picture 
questions.  That document talks about making Nova call out to something 
that makes the sort of smart decisions you and I favor.  As far as I know, 
Nova is still scheduling one thing at a time.  How does that smart 
decision maker get a look at the whole pattern/template/topology as soon 
as it is needed?  I think you intend the smart guy gets it first, before 
Nova starts getting individual VM calls, right?  How does this picture 
grow to the point where the smart guy is making joint decisions about 
compute, storage, and network?  I think the key idea has to be that the 
smart guy gets a look at the whole problem first, and makes its decisions, 
before any individual resources are requested from 
nova/cinder/neutron/etc.  I think your point about non-disruptive, works 
with the current nova architecture is about solving the problem of how 
the smart guy's decisions get into nova.  Presumably this problem will 
occur for cinder and so on, too.  Have I got this right?

There is another way, right?  Today Nova accepts an 'availability zone' 
argument whose value can specify a particular host.  I am not sure about 
Cinder, but you can abuse volume types to get this job done.
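The 'availability zone' trick mentioned here refers to Nova's zone:host syntax, where a value like nova:host12 pins an instance to a specific host. A small illustrative parser for that hint (the helper is invented for illustration, not actual Nova code):

```python
# Sketch of Nova's "az:host" availability-zone hint: a value like
# "nova:host12" pins the instance to a specific host. This helper is
# illustrative, not actual Nova code.
def parse_az_hint(value):
    """Split an availability-zone value into (zone, forced_host)."""
    zone, sep, host = value.partition(":")
    return zone, (host if sep else None)

print(parse_az_hint("nova:host12"))  # ('nova', 'host12')
print(parse_az_hint("nova"))         # ('nova', None)
```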

Thanks,
Mike


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-25 Thread Thomas Maddox
Hey Steven!

Sorry for missing my chance to chat on IRC today. We have our next weekly 
meeting on October 3rd (next Thursday) at 15:00 UTC  (10:00 CST). That's 
probably the best chance at getting us all in a room, since we're pretty well 
distributed around the planet. I definitely agree with you that we're working 
on very similar goals and ought to communicate about what we're trying to 
accomplish and how we can help each other. Feel free to ping us on 
#openstack-metering; also you sit a few desks away from me, feel free to come 
by and chat. =]

Cheers!

-Thomas

On 9/24/13 2:10 PM, Steven Gonzales steven.gonza...@rackspace.com wrote:

Ceilometer Team,

I am a developer on the Project Meniscus team.  I noticed the conversation on 
adding ElasticSearch and Kibana to Ceilometer and thought I would share some 
information regarding our project.  I would love to discuss a way our projects 
could work together on some of these common goals and possibly collaborate.

Project Meniscus is an open-source Python logging-as-a-service solution.  The 
multi-tenant service will allow the dispatch of log messages to sinks such as 
ElasticSearch, Swift, and HDFS.  Our initial implementation is defaulting to 
ElasticSearch.

The system was designed with the intention to scale and to be resilient to 
failure.  We have written a tcp server for receiving syslog messages from 
standard syslog servers/daemons such as RSYSLOG and SYSLOG-NG.  The server 
receives syslog messages over long-lived tcp connections and parses individual 
log messages into json documents.  The server uses the tornado tcp server, and 
the parser itself is written in C and uses Cython bindings.
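To make the parse step concrete, here is a toy version of it: one RFC 3164-style syslog line turned into a JSON-ready dict. This is only an editorial sketch of the transformation's shape; Meniscus's actual parser is written in C with Cython bindings and handles far more.

```python
# Toy parser: one RFC 3164-style syslog line -> JSON-ready dict.
# The PRI value <34> encodes facility*8 + severity.
import re

LINE = "<34>Oct 11 22:14:15 mymachine su: 'su root' failed on /dev/pts/8"
PATTERN = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) "
    r"(?P<host>\S+) "
    r"(?P<message>.*)")

def parse_syslog(line):
    m = PATTERN.match(line)
    if not m:
        raise ValueError("not a syslog line")
    doc = m.groupdict()
    pri = int(doc.pop("pri"))
    doc["facility"], doc["severity"] = divmod(pri, 8)
    return doc

print(parse_syslog(LINE)["severity"])  # 34 = 4*8 + 2 -> 2
```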

We have implemented features such as normalization of log data by writing a 
python library that binds to liblognorm, a C library for log processing.

In our very early alpha implementation we have been able to process about 30-40 
GB of syslog messages per day on a single worker
node with a very small amount of load on the server.  Our current worker nodes 
are 8GB RAM Virtual Machines running on Nova.


Currently we are working on:
 1. load balancing for syslog messages after parsing(since syslog servers 
transmit using long lived tcp connections)
 2. Implementing keystone authentication into Kibana 3
 3. Building a proxy in front of ElasticSearch to limit queries by tenant.

Our project page is http://projectmeniscus.org/
Our repo is located at: https://github.com/ProjectMeniscus

The repo contains the main code base and all supporting projects, including our 
chef repository.

We would love to discuss a way our projects could work together on some of 
these common goals and possibly collaborate.  Would it be possible to set up a 
time for us talk briefly?

Steven Gonzales
Software Developer
Rackspace Hosting
steven.gonza...@rackspace.com




Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-25 Thread Thomas Spatzier
Excerpt from Clint's mail on 25.09.2013 22:23:07:


 I think we already have some summit suggestions for discussing HOT,
 it would be good to come prepared with some visions for the future
 of HOT so that we can hash these things out, so I'd like to see this
 discussion continue.

Absolutely! Can those involved in the discussion check whether this seems to be
covered in one of the session proposals that I or others posted recently, and if
not, raise another proposal? This is a good one to have.







[openstack-dev] [savanna] unit test failures/errors

2013-09-25 Thread Jon Maron
Hi,

  I can't seem to get a clean run from the py27 unit tests.  On the surface it 
doesn't seem that my current commit has anything to do with the code paths 
tested.  I've tried rebuilding my virtual env as well as rebasing but the issue 
hasn't been resolved.  I've also cloned the repository into a different 
directory and ran the tests with the same failure results (further proving my 
commit has nothing to do with the failures). I've started debugging thru this 
code to try to ascertain the issue, but if anyone can comment on what may be 
the underlying issue I would appreciate it.

==
ERROR: test_cluster_create_cluster_tmpl_node_group_mixin 
(savanna.tests.unit.service.validation.test_cluster_create_validation.TestClusterCreateFlavorValidation)
--
Traceback (most recent call last):
  File 
/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/test_cluster_create_validation.py,
 line 206, in setUp
api.plugin_base.setup_plugins()
  File /Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py, line 
197, in setup_plugins
PLUGINS = PluginManager()
  File /Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py, line 
110, in __init__
self._load_all_plugins()
  File /Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py, line 
129, in _load_all_plugins
self.plugins[plugin_name] = self._get_plugin_instance(plugin_name)
  File /Users/jmaron/dev/workspaces/savanna/savanna/plugins/base.py, line 
148, in _get_plugin_instance
plugin_path = CONF['plugin:%s' % plugin_name].plugin_class
  File 
/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1645, in __getitem__
return self.__getattr__(key)
  File 
/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/oslo/config/cfg.py,
 line 1641, in __getattr__
raise NoSuchOptError(name)
NoSuchOptError: no such option: plugin:vanilla
  begin captured logging  
savanna.plugins.base: DEBUG: List of requested plugins: []
-  end captured logging  -

==
FAIL: test_cluster_create_v_cluster_configs 
(savanna.tests.unit.service.validation.test_cluster_create_validation.TestClusterCreateValidation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/test_cluster_create_validation.py", line 153, in test_cluster_create_v_cluster_configs
    self._assert_cluster_configs_validation(True)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 329, in _assert_cluster_configs_validation
    Plugin's applicable target 'HDFS' doesn't
  File "/Users/jmaron/dev/workspaces/savanna/.tox/py27/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 227, in _assert_create_object_validation
    self._assert_calls(bad_req, bad_req_i)
  File "/Users/jmaron/dev/workspaces/savanna/savanna/tests/unit/service/validation/utils.py", line 211, in _assert_calls
    self.assertEqual(mock.call_args[0][0].message, call_info[2])
AssertionError: Plugin doesn't contain applicable target 'HDFS' != Plugin's applicable target 'HDFS' doesn't contain config with name 's'
'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'' = '%s != 
%s' % (safe_repr(Plugin doesn't contain applicable target 'HDFS'), 
safe_repr(Plugin's applicable target 'HDFS' doesn't contain config with name 
's'))
'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'' = 
self._formatMessage('Plugin doesn\'t contain applicable target \'HDFS\' != 
Plugin\'s applicable target \'HDFS\' doesn\'t contain config with name 
\'s\'', 'Plugin doesn\'t contain applicable target \'HDFS\' != Plugin\'s 
applicable target \'HDFS\' doesn\'t contain config with name \'s\'')
  raise self.failureException('Plugin doesn\'t contain applicable target 
 \'HDFS\' != Plugin\'s applicable target \'HDFS\' doesn\'t contain config 
 with name \'s\'')

-------------------- >> begin captured logging << --------------------
savanna.plugins.base: DEBUG: List of requested plugins: ['vanilla', 'hdp']
savanna.plugins.base: INFO: Plugin 'vanilla' defined and loaded
savanna.plugins.base: INFO: Plugin 'hdp' defined and loaded
--------------------- >> end captured logging << ---------------------

======================================================================
FAIL: 

[openstack-dev] Anyone involved with Horizon going to attend or present at O'Reilly Fluent?

2013-09-25 Thread Mark Atwood
Hi!

Is anyone involved in Horizon planning on attending O'Reilly Fluent in
San Francisco March 11th?

The people I know at O'Reilly have been asking me to encourage
OpenStackers to apply to present at all the appropriate conferences.
The Fluent conference is about HTML/CSS/JS web user interfaces, and
thus would be on point for Horizon.

The call for presentations closes on Sept 30th, in just 5 days.

..m

--
Mark Atwood mark.atw...@hp.com
Director of Open Source Engagement for HP Cloud & OpenStack
+1-206-473-7118

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-25 Thread Steven Gonzales
Hi Thomas,

I will definitely hop into the next meeting, and I will definitely swing by 
your desk to chat!

-Steven

From: Thomas Maddox thomas.mad...@rackspace.com
Reply-To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: Wednesday, September 25, 2013 3:59 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + 
ElasticSearch Integration

Hey Steven!

Sorry for missing my chance to chat on IRC today. We have our next weekly 
meeting on October 3rd (next Thursday) at 15:00 UTC  (10:00 CST). That's 
probably the best chance at getting us all in a room, since we're pretty well 
distributed around the planet. I definitely agree with you that we're working 
on very similar goals and ought to communicate about what we're trying to 
accomplish and how we can help each other. Feel free to ping us on 
#openstack-metering; also you sit a few desks away from me, feel free to come 
by and chat. =]

Cheers!

-Thomas

On 9/24/13 2:10 PM, Steven Gonzales steven.gonza...@rackspace.com wrote:

Ceilometer Team,

I am a developer on the Project Meniscus team.  I noticed the conversation on 
adding ElasticSearch and Kibana to Ceilometer and thought I would share some 
information regarding our project.  I would love to discuss a way our projects 
could work together on some of these common goals and possibly collaborate.

Project Meniscus is an open-source Python logging-as-a-service solution.  The 
multi-tenant service will allow the dispatch of log messages to sinks such as 
ElasticSearch, Swift, and HDFS.  Our initial implementation is defaulting to 
ElasticSearch.

The system was designed with the intention to scale and to be resilient to 
failure.  We have written a tcp server for receiving syslog messages from 
standard syslog servers/daemons such as RSYSLOG and SYSLOG-NG.  The server 
receives syslog messages over long-lived tcp connections and parses individual 
log messages into json documents.  The server uses the tornado tcp server, and 
the parser itself is written in C and uses Cython bindings.

We have implemented features such as normalization of log data by writing a 
python library that binds to liblognorm, a C library for log processing.

In our very early alpha implementation we have been able to process about 30-40 
GB of syslog messages per day on a single worker
node with a very small amount of load on the server.  Our current worker nodes 
are 8GB RAM Virtual Machines running on Nova.


Currently we are working on:
 1. Load balancing for syslog messages after parsing (since syslog servers 
transmit using long-lived tcp connections)
 2. Implementing keystone authentication into Kibana 3
 3. Building a proxy in front of ElasticSearch to limit queries by tenant.
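Item 3 usually amounts to the proxy wrapping whatever query the caller submits in a filter the proxy controls. A hedged sketch of that rewrite; the tenant_id field name and the ES 0.90/1.x-era 'filtered' query shape are assumptions, not Meniscus code:

```python
def scope_query_to_tenant(user_query, tenant_id):
    """Wrap an arbitrary ElasticSearch query body so it can only match
    the calling tenant's documents."""
    return {
        'query': {
            # 'filtered' is the ES 0.90/1.x construct current in 2013;
            # the 'tenant_id' field name is assumed for illustration.
            'filtered': {
                'query': user_query.get('query', {'match_all': {}}),
                'filter': {'term': {'tenant_id': tenant_id}},
            }
        }
    }

scoped = scope_query_to_tenant({'query': {'match': {'msg': 'error'}}}, 'acme')
print(scoped['query']['filtered']['filter'])
```

Because the filter is injected server-side from the authenticated token, a tenant cannot widen the query to see another tenant's log documents.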

Our project page is http://projectmeniscus.org/
Our repo is located at: https://github.com/ProjectMeniscus

The repo contains the main code base and all supporting projects, including our 
chef repository.

We would love to discuss a way our projects could work together on some of 
these common goals and possibly collaborate.  Would it be possible to set up a 
time for us talk briefly?

Steven Gonzales
Software Developer
Rackspace Hosting
steven.gonza...@rackspace.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Is barbican suitable/ready for production deployment?

2013-09-25 Thread Pathik Solanki
Hi Barbican Team,
My team here at salesforce.com is evaluating Barbican for our use case of
managing secrets. The git repository indicates that Barbican is still in
development and not ready for production deployment. I vaguely remember
from the presentation at OpenStack Summit that cloudkeep/barbican has
production ready code too. Please correct me if I am wrong and if there is
some production ready instance then please point me to it.

Thanks,
Pathik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-25 Thread Joe Gordon
Hi All,

TL;DR: We will be automatically identifying your flaky tempest runs, so you
just have to confirm that you hit bug x, not identify which bug you hit.


http://status.openstack.org/rechecks/ is a great tool to identify which
bugs are causing our gating to be flaky allowing for better prioritization
of bug fixing. But as many of you have noticed hunting down which bug to
use for your recheck can be tedious, and using 'recheck no bug' just kicks
the problem down the road for someone else to deal with.

To address this issue, Matthew Treinish, Clark Boylan, myself, and others
have started elastic-recheck [
https://github.com/openstack-infra/elastic-recheck] to classify
tempest-devstack failures using ElasticSearch [http://logstash.openstack.org].
 When we hit a new bug, we use http://logstash.openstack.org to manually
find an ElasticSearch fingerprint for it
(https://github.com/openstack-infra/elastic-recheck/blob/master/queries.json).
 And every time we see a new tempest-devstack failure we try to classify it,
and report back to review.openstack.org so the patch author can confirm
that was the bug they saw and run a recheck.
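The classification loop can be pictured like this: a simplified sketch in which plain substring matching stands in for a real ElasticSearch query, and the bug numbers and fingerprints are made up for illustration:

```python
# Each entry pairs a Launchpad bug with a log fingerprint, loosely
# mirroring elastic-recheck's queries.json (real entries hold
# ElasticSearch query strings, not plain substrings; these bug
# numbers are invented).
QUERIES = [
    {'bug': 1100001, 'fingerprint': 'Console output was empty'},
    {'bug': 1100002, 'fingerprint': 'Lock wait timeout exceeded'},
]

def classify(log_text):
    """Return bug numbers whose fingerprint matches the failed job's
    logs; a recheck comment then asks the author to confirm the hit."""
    return [q['bug'] for q in QUERIES if q['fingerprint'] in log_text]

hits = classify('... InternalError: Lock wait timeout exceeded; try ...')
print(hits)
```

The real system runs the fingerprint queries against logstash.openstack.org's indexed job logs rather than scanning raw text, but the matching idea is the same.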

We are in the middle of rolling this out, and you can expect to see
elastic-recheck commenting on your failed tempest jobs in the next few days.

Gotchas:

* Identifying which bugs are frequent is only the first step, we still need
to fix them.  Otherwise tempest will stay flaky.  We have about a 25%
failure rate in the gate pipeline, as of the most recent numbers.
* ElasticSearch is currently slow, and although we are fixing that, it may
take a few hours before elastic-recheck can classify your failures.
* We have more work to do on this, so help welcome!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-25 Thread Anita Kuno

Nice work on this. It will be great to see this in action and watch it grow.

Way to go!

Anita.

On 09/25/2013 08:56 PM, Joe Gordon wrote:

Hi All,

TL;DR: We will be automatically identifying your flaky tempest runs, 
so you just have to confirm that you hit bug x, not identify which bug 
you hit.



http://status.openstack.org/rechecks/ is a great tool to identify 
which bugs are causing our gating to be flaky allowing for better 
prioritization of bug fixing. But as many of you have noticed hunting 
down which bug to use for your recheck can be tedious, and using 
'recheck no bug' just kicks the problem down the road for someone else 
to deal with.


To address this issue, Matthew Treinish, Clark Boylan, myself, and 
others have started elastic-recheck 
[https://github.com/openstack-infra/elastic-recheck] to classify 
tempest-devstack failures using ElasticSearch 
[http://logstash.openstack.org].  When we hit a new bug, we use 
http://logstash.openstack.org to manually find an ElasticSearch 
fingerprint for it 
(https://github.com/openstack-infra/elastic-recheck/blob/master/queries.json). 
 And every time we see a new tempest-devstack failure we try to 
classify it, and report back to review.openstack.org so the patch 
author can confirm that was the bug they saw and run a recheck.


We are in the middle of rolling this out, and you can expect to see 
elastic-recheck commenting on your failed tempest jobs in the next few 
days.


Gotchas:

* Identifying which bugs are frequent is only the first step, we still 
need to fix them.  Otherwise tempest will stay flaky.  We have about a 
25% failure rate in the gate pipeline, as of the most recent numbers.
* ElasticSearch is currently slow, and although we are fixing that, it 
may take a few hours before elastic-recheck can classify your failures.

* We have more work to do on this, so help welcome!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] ./run_tests.sh issues on Ubuntu 12.04 LTS

2013-09-25 Thread Neil Zhao

Hi, All,

I'm new to OpenStack, and I followed the Quick Start at: 
http://docs.openstack.org/developer/horizon/quickstart.html


I thought the Horizon ./run_tests.sh would run successfully since I'm 
using the popular Ubuntu, and that after the script finished I'd have an 
out-of-the-box env to run Horizon, but I was wrong.


I found issues and had to fix them manually with: sudo apt-get install 
xxx.


I've noted some of them below; I think somebody should test the 
script more, :)



1) I think the following dependencies should be added/checked in the script:
python2.7-dev
libxml2-dev
libxslt1-dev

2) And should the script be run as root/a sudoer?  As it'll do something 
in my /usr/lib/python2.7/ directory.



BTW, should I file some bug for these?  And how?


Thank you,
Neil Zhao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] ./run_tests.sh issues on Ubuntu 12.04 LTS

2013-09-25 Thread Neil Zhao


Thank you, Noorul.   I didn't know about devstack.

And, I just re-read the Horizon Quick Start, and found there's actually 
a NOTE on this, :)


Yet I think it's better to move the Note to the beginning of the Quick 
Start.



Note

The DevStack project (http://devstack.org/) can be used to install an 
OpenStack development environment from scratch.






On 09/26/2013 09:22 AM, Noorul Islam Kamal Malmiyoda wrote:



On Sep 26, 2013 6:44 AM, Neil Zhao neilch...@gmail.com 
mailto:neilch...@gmail.com wrote:


 Hi, All,

 I'm new to openstack, and I followed the Quick Start at : 
http://docs.openstack.org/developer/horizon/quickstart.html


 I thought the Horizon ./run_tests.sh should run successfully since 
I'm using the popular Ubuntu.  And after the script finished, I'll 
have a out-of-box env to run Horizon, but I was wrong.



I think the document says that you need other components. How did you 
bootstrap those? Did you use devstack?


Thanks and Regards
Noorul

 I found issues, and have to fix them manually with: sudo apt-get 
install xxx.


 I've noted some of them as following, I think somebody should test 
the script more, :)



 1) I think the following dependencies should be added/checked in the 
script:

 python2.7-dev
 libxml2-dev
 libxslt1-dev

 2) And the script should be ran as root/sudoers?   As it'll do 
something in my /usr/lib/python2.7/ directory.



 BTW, should I file some bug for these?  And how?


 Thank you,
 Neil Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 
mailto:OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards,
Neil Zhao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow's meeting at 2000 UTC

2013-09-25 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is tomorrow, 
2013-09-26!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, problems, open-reviews, issues, solutions, 
questions (and more).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] ./run_tests.sh issues on Ubuntu 12.04 LTS

2013-09-25 Thread Kieran Spear
Hi Neil,

On 26 September 2013 11:13, Neil Zhao neilch...@gmail.com wrote:
 Hi, All,

 I'm new to openstack, and I followed the Quick Start at :
 http://docs.openstack.org/developer/horizon/quickstart.html

 I thought the Horizon ./run_tests.sh should run successfully since I'm using
 the popular Ubuntu.  And after the script finished, I'll have a out-of-box
 env to run Horizon, but I was wrong.

 I found issues, and have to fix them manually with: sudo apt-get install
 xxx.

 I've noted some of them as following, I think somebody should test the
 script more, :)


 1) I think the following dependencies should be added/checked in the script:
 python2.7-dev
 libxml2-dev
 libxslt1-dev

 2) And the script should be ran as root/sudoers?   As it'll do something in
 my /usr/lib/python2.7/ directory.

The script creates a virtualenv in .venv and installs all the python
dependencies there. They're not available system-wide.

Running anything as root is out of scope for run_tests.sh, but we
should definitely list the required C libraries somewhere in the
documentation like Nova does:

http://docs.openstack.org/developer/nova/devref/development.environment.html#linux-systems
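To illustrate the isolation Kieran describes: run_tests.sh builds a virtualenv and pip-installs the Python dependencies into it, so nothing lands system-wide. A sketch of the same idea using today's stdlib venv module (the actual script uses virtualenv and a requirements file; with_pip=False just keeps this sketch offline-friendly):

```python
import os
import tempfile
import venv

# Build an isolated environment, as run_tests.sh does with .venv
# (illustrative path; the real script also pip-installs requirements).
env_dir = os.path.join(tempfile.mkdtemp(), '.venv')
venv.EnvBuilder(with_pip=False).create(env_dir)

bindir = 'Scripts' if os.name == 'nt' else 'bin'
env_python = os.path.join(env_dir, bindir, 'python')

# The env carries its own interpreter; packages installed into it
# never touch /usr/lib/python2.7 or any other system-wide directory.
print(os.path.exists(env_python))
```

Only the C-level libraries (libxml2-dev and friends) have to come from apt, because pip compiles Python extensions against them; that's the gap the docs should list.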



 BTW, should I file some bug for these?  And how?

Please do:

https://bugs.launchpad.net/horizon/+filebug

...you're welcome to fix it too :)

https://wiki.openstack.org/wiki/How_To_Contribute


Cheers,
Kieran



 Thank you,
 Neil Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Configuration API BP

2013-09-25 Thread Craig Vyvial
So we have a blueprint for this and there are a couple things to point out
that have changed since the inception of this BP.

https://blueprints.launchpad.net/trove/+spec/configuration-management

This is an overview of the API calls for

POST /configurations - create config
GET  /configurations - list all configs
PUT  /configurations/{id} - update all the parameters
GET  /configurations/{id} - get details on a single config
GET  /configurations/{id}/{key} - get single parameter value that was
set for the configuration
PUT  /configurations/{id}/{key} - update/insert a single parameter
DELETE  /configurations/{id}/{key} - delete a single parameter
GET  /configurations/{id}/instances - list of instances the config is
assigned to
GET  /configurations/parameters - list of all configuration parameters
GET  /configurations/parameters/{key} - get details on a configuration parameter


There has been talk about using the PATCH HTTP method instead of PUT for
the update of individual parameter(s).

PUT /configurations/{id}/{key} - update/insert a single parameter
and/or
PATCH /configurations/{id} - update/insert parameter(s)

I am not sold on the idea of using PATCH unless it's widely used in other
projects across OpenStack. What does everyone think about this?
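For anyone weighing the two: the usual semantic distinction is that PUT carries a complete replacement representation, while PATCH carries only the keys to change (RFC 5789). A toy sketch over a configuration-as-dict, not Trove code:

```python
def put(resource, body):
    """PUT semantics: the body is the complete new representation;
    any parameter not sent is gone afterwards."""
    resource.clear()
    resource.update(body)
    return resource

def patch(resource, body):
    """PATCH semantics: only the submitted keys change."""
    resource.update(body)
    return resource

config = {'max_connections': 100, 'wait_timeout': 120}
print(patch(dict(config), {'wait_timeout': 300}))  # both keys survive
print(put(dict(config), {'wait_timeout': 300}))    # max_connections dropped
```

Under strict PUT semantics, updating one parameter of a configuration would require resending every parameter, which is the behavior the PATCH proposal avoids.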

If there are any concerns around this please let me know.

Thanks,
Craig Vyvial
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] PTL nomination

2013-09-25 Thread Jing CDL Sun
+1!





Best Regards,
-
Sun Jing(孙靖, sjing)





Wentian Jiang went...@unitedstack.com 
2013/09/25 23:02
Please respond to
OpenStack Development Mailing List openstack-dev@lists.openstack.org


To
OpenStack Development Mailing List openstack-dev@lists.openstack.org, 
cc

Subject
Re: [openstack-dev] [Ironic] PTL nomination






+1 Devananda

On Wed, Sep 25, 2013 at 10:44 PM, Chris K nobody...@gmail.com wrote:
 +1
 Devananda is a great PTL. It is his vision that has driven and is driving
 Ironic's rapid development.


 Chris Krelle


 On Tue, Sep 24, 2013 at 11:04 AM, Devananda van der Veen
 devananda@gmail.com wrote:

 Hi!

 I would like to nominate myself for the OpenStack Bare Metal 
Provisioning
 (Ironic) PTL position.

 I have been working with OpenStack for over 18 months, and was a
 scalability and performance consultant at Percona for four years prior.
 Since '99, I have worked as a developer, team lead, database admin, and
 linux systems architect for a variety of companies.

 I am the current PTL of the Bare Metal Provisioning (Ironic) program,
 which began incubation during Havana. In collaboration with many fine 
folks
 from HP, NTT Docomo, USC/ISI, and VirtualTech, I worked extensively on 
the
 Nova Baremetal driver during the Grizzly cycle. I also helped start the
 TripleO program, which relies heavily on the baremetal driver to 
achieve its
 goals. During the Folsom cycle, I led the effort to improve Nova's DB 
API
 layer and added devstack support for the OpenVZ driver. Through that 
work, I
 became a member of nova-core for a time, though my attention has 
shifted
 away from Nova more recently.

 Once I had seen nova-baremetal and TripleO running in our test 
environment
 and began to assess our longer-term goals (eg, HA, scalability, 
integration
 with other OpenStack services), I felt very strongly that bare metal
 provisioning was a separate problem domain from Nova and would be best
 served with a distinct API service and a different HA framework than 
what is
 provided by Nova. I circulated this idea during the last summit, and 
then
 proposed it to the TC shortly thereafter.

 During this development cycle, I feel that Ironic has made significant
 progress. Starting from the initial git bisect to retain the history 
of
 the baremetal driver, I added an initial service and RPC framework,
 implemented some architectural pieces, and left a lot of #TODO's. 
Today,
 with commits from 10 companies during Havana (*) and integration 
already
 underway with devstack, tempest, and diskimage-builder, I believe we 
will
 have a functional release within the Icehouse time frame.

 I feel that a large part of my role as PTL has been - and continues to 
be
 - to gather ideas from a wide array of individuals and companies 
interested
 in bare metal provisioning, then translate those ideas into a direction 
for
 the program that fits within the OpenStack ecosystem. Additionally, I 
am
 often guiding compromise between the long-term goals, such as firmware
 management, and the short-term needs of getting the project to a
 fully-functional state. To that end, here is a brief summary of my 
goals for
 the project in the Icehouse cycle.

 * API service and client library (likely finished before the summit)
 * Nova driver (blocked, depends on ironic client library)
 * Finish RPC bindings for power and deploy management
 * Finish merging bm-deploy-helper with Ironic's PXE driver
 * PXE boot integration with Neutron
 * Integrate with TripleO / TOCI for automated testing
 * Migration script for existing deployments to move off the 
nova-baremetal
 driver
 * Fault tolerance of the ironic-conductor nodes
 * Translation support
 * Docs, docs, docs!

 Beyond this, there are many long-term goals which I would very much 
like
 to facilitate, such as:

 * hardware discovery
 * better integration with SDN capable hardware
 * pre-provisioning tools, eg. management of bios, firmware, and raid
 config, hardware burn-in, etc.
 * post-provisioning tools, eg. secure-erase
 * boot from network volume
 * secure boot (protect deployment against MITM attacks)
 * validation of signed firmware (protect tenant against prior tenant)

 Overall, I feel honored to be working with so many talented individuals
 across the OpenStack community, and know that there is much more to 
learn as
 a developer, and as a program lead.

 (*)

 
http://www.stackalytics.com/?release=havana&metric=commits&project_type=All&module=ironic


 http://russellbryant.net/openstack-stats/ironic-reviewers-30.txt
 http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt

 --
 Devananda


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-25 Thread P Balaji-B37839
Hi Ravi,

We did this as part of PoC few months back.

Daniel can give us more comments on this as he is the lead for Libvirt support 
in Nova.

Regards,
Balaji.P



From: Ravi Chunduru [mailto:ravi...@gmail.com]
Sent: Thursday, September 26, 2013 12:35 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova 
libvirt driver

I got this working after I made guest to behave as serial device and host side 
program as unix socket based client.
Now all set to collaborate the BP  with the use case.

Thanks,
-Ravi.
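For reference, the arrangement Ravi describes (qemu's chardev binds and listens on the unix socket as soon as the VM starts, so the host-side program must connect as a client) looks roughly like this; a thread stands in for qemu, and the socket path is just an example:

```python
import os
import socket
import tempfile
import threading

SOCK_PATH = os.path.join(tempfile.mkdtemp(), 'appliance_port')

# Stand-in for qemu's chardev: qemu binds and listens on the unix
# socket when the VM starts, so the host cannot also listen there.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(1)

def fake_qemu_side():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(64).upper())   # echo back, uppercased
    conn.close()

t = threading.Thread(target=fake_qemu_side)
t.start()

# The host-side program is the unix-socket *client*: it connects to
# the socket qemu already opened instead of trying to listen itself.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.sendall(b'ping from host')
reply = cli.recv(64)
cli.close()
t.join()
srv.close()
print(reply)
```

(Inside the guest, the same channel appears as a virtio-serial port the appliance software reads and writes like a character device.)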

On Wed, Sep 25, 2013 at 8:09 AM, Ravi Chunduru ravi...@gmail.com wrote:
I am working on this generic virtio-serial interface for appliances. To start 
with, I experimented with the existing hw_qemu_guest_agent feature that Wangpan 
added. I am preparing to propose a blueprint to modify it for generic use and 
am open to collaboration.

I could bring up the VM with a generic source path (say /tmp/appliance_port) and 
target name (appliance_port). But I see qemu listening on the unix socket in 
host as soon as I start the VM. If we want to have our server program on host 
listening, that should not happen. How do I overcome that?

Thanks,
-Ravi.


On Wed, Sep 25, 2013 at 3:01 AM, P Balaji-B37839 b37...@freescale.com wrote:
Hi Wangpan,
Thanks for Information and suggestions.
We want to have generic virtio-serial interface for Libvirt  and applications 
can use this irrespective of Qemu Guest Agent in VM.
As suggested, Daniel can throw some light on this and help us.
Regards,
Balaji.P



From: Wangpan [mailto:hzwang...@corp.netease.com]
Sent: Wednesday, September 25, 2013 3:24 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova 
libvirt driver

Hi all,

I'm the owner of this bp 
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support
and Daniel Berrange gave me lots of help about implementing this bp, and the 
original idea of mine is the same as yours.
So I think the opinion of Daniel will be very useful.

2013-09-25

Wangpan

From: balaji patnala patnala...@gmail.com
Sent: 2013-09-25 22:36
Subject: Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt 
driver
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc:

Hi Haomai,

Thanks for your interest on this.

The code check-ins done against the below bp are more specific to Qemu Guest 
Agent.

 https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


Our requirement is to enable Virtio-Serial Interface to the applications 
running in VM.

Do you have the same requirement?

We will share the draft BP on this.


Any comments on this approach will be helpful.

Regards,
Balaji.P

On Tue, Sep 24, 2013 at 8:10 PM, Haomai Wang hao...@unitedstack.com wrote:

On Sep 24, 2013, at 6:40 PM, P Balaji-B37839 b37...@freescale.com wrote:

 Hi,

 Virtio-Serial interface support for Nova - Libvirt is not available now. Some 
 VMs who wants to access the Host may need like running qemu-guest-agent or 
 any proprietary software want to use this mode of communication with Host.

 Qemu-GA uses virtio-serial communication.

 We want to propose a blue-print on this for IceHouse Release.

 Anybody interested on this.
Great! We have a common interest and I hope we can promote it for Icehouse.

BTW, do you have an initial plan or description of it?

And I think this bp may be relevant: 
https://blueprints.launchpad.net/nova/+spec/qemu-guest-agent-support


 Regards,
 Balaji.P


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Best regards,
Haomai Wang, UnitedStack Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Ravi



--
Ravi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] re: Is barbican suitable/ready for production deployment?

2013-09-25 Thread John Wood
Hello Pathik,

We are preparing the application for production usage, but it is not yet ready 
to go. We could speak further with you about your production needs if that 
would be of interest.

For evaluation purposes, we have stood up an integration environment 
(https://github.com/cloudkeep/barbican/wiki/Integration-Environment). 
You could also stand up a local instance of Barbican 
(https://github.com/cloudkeep/barbican/wiki/Developer-Guide). The PKCS 
based HSM plugin may be used with a SafeNet HSM as well.

Thanks,
John
-
john.w...@rackspace.com


From: Pathik Solanki [psola...@salesforce.com]
Sent: Wednesday, September 25, 2013 6:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Is barbican suitable/ready for production 
deployment?

Hi Barbican Team,
My team here at salesforce.com is evaluating Barbican 
for our use case of managing secrets. The git repository indicates that 
Barbican is still in development and not ready for production deployment. I 
vaguely remember from the presentation at OpenStack Summit that 
cloudkeep/barbican has production ready code too. Please correct me if I am 
wrong and if there is some production ready instance then please point me to it.

Thanks,
Pathik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-25 Thread Clint Byrum
Excerpts from Joe Gordon's message of 2013-09-25 17:56:15 -0700:
 Hi All,
 
 TL;DR: We will be automatically identifying your flaky tempest runs, so you
 just have to confirm that you hit bug x, not identify which bug you hit.
 

\o/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev