Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Macdonald-Wallace, Matthew
Hi Cristian,

The functionality already exists within OpenStack (certainly it's there in
Nova); it's just not very well documented (something I keep meaning to do!)

Basically you need to add the following to your nova.conf file:

log_config=/etc/nova/logging.conf

And then create /etc/nova/logging.conf with the configuration you want to use,
based on the Python logging module's INI configuration format.
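
For reference, here is a minimal logging.conf sketch in that INI format
(the syslog handler, port and format below are just placeholders - point
them at whatever logstash/syslog/Sentry setup you use):

[loggers]
keys = root

[handlers]
keys = syslog

[formatters]
keys = default

[logger_root]
level = INFO
handlers = syslog

[handler_syslog]
class = handlers.SysLogHandler
level = INFO
formatter = default
# Assumed local syslog endpoint that feeds logstash
args = (('localhost', 514),)

[formatter_default]
format = %(asctime)s %(process)d %(levelname)s %(name)s %(message)s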

Hope that helps,

Matt

 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 29 January 2014 17:57
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards
 
 Hi Matthew,
 I'm interested in helping with this switch to the Python logging framework for
 shipping to logstash/etc. Are you working on a blueprint for this?
 Cheers,
 
 Cristian
 
 On 27/01/14 11:07, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:
 
 Hi Sean,
 
 I'm currently working on moving away from the built-in logging to use
 log_config=filename and the python logging framework so that we can
 start shipping to logstash/sentry/insert other useful tool here.
 
 I'd be very interested in getting involved in this, especially from a
 "why do we have log messages that are split across multiple lines?"
 perspective!
 
 Cheers,
 
 Matt
 
 P.S. FWIW, I'd also welcome details on what the Audit level gives us
 that the others don't... :)
 
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 27 January 2014 13:08
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] Proposed Logging Standards
 
 Back at the beginning of the cycle, I pushed for the idea of doing some
 log harmonization, so that the OpenStack logs, across services, made
 sense. I've pushed proposed changes to Nova and Keystone over the past
 couple of days.

 This is going to be a long process, so right now I want to just focus
 on making INFO level sane, because as someone that spends a lot of
 time staring at logs in test failures, I can tell you it currently
 isn't.

 https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written down so far, comments welcomed.

 We kind of need to solve this set of recommendations once and for all
 up front, because negotiating each change, with each project, isn't
 going to work (e.g.
 https://review.openstack.org/#/c/69218/)

 What I'd like to find out now:

 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for various
 log levels?
 3) who's interested in helping get these kinds of patches into
 various projects in OpenStack?
 4) which projects are interested in participating (i.e. interested in
 prioritizing landing these kinds of UX improvements)?

 This is going to be progressive and iterative. And will require lots
 of folks involved.
 
 -Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] - Cloud federation on top of the Apache

2014-01-30 Thread Marek Denis

On 29.01.2014 17:06, Adam Young wrote:


We had a team member looking into SAML, but I don't know if he
made that distinction.


Do you think he would be willing to give a helping hand and share his 
expertise? Any possibility to contact your colleague? Without ECP/http 
client extensions I think the federation is only 50% useful (because 
eventually you somehow need to log in and obtain the SAML assertion 
manually, with your browser).



Is there anything that would prevent us from having a solution that
supported both, based on the requirements of the implementer?



mod_shib passes SAML assertion parameters into discrete environment 
variables. I am now looking at the mod_mellon README file and it looks 
like mellon's behaviour is pretty much the same. So, if there are any 
implementation differences, they are minor ones and we basically start on 
the same page.



From https://modmellon.googlecode.com/svn/trunk/mod_mellon2/README :

===
 Using mod_auth_mellon
===

After you have set up mod_auth_mellon, you should be able to visit (in our
example) https://example.com/secret/, and be redirected to the IdP's login
page. After logging in you should be returned to
https://example.com/secret/, and get the contents of that page.

When authenticating a user, mod_auth_mellon will set some environment
variables to the attributes it received from the IdP. The name of the
variables will be MELLON_<attribute name>. If you have specified a
different name with the MellonSetEnv or MellonSetEnvNoPrefix configuration
directive, then that name will be used instead. In the case of MellonSetEnv,
the name will still be prefixed by 'MELLON_'.

The value of the attribute will be base64 decoded.

mod_auth_mellon supports multivalued attributes with the following format:
<base64 encoded value>_<base64 encoded value>_<base64 encoded value>...

If an attribute has multiple values, then they will be stored as
MELLON_name_0, MELLON_name_1, MELLON_name_2, ...

Since mod_auth_mellon doesn't know which attributes may have multiple
values, it will store every attribute at least twice. Once named
MELLON_name, and once named MELLON_name_0.

In the case of multivalued attributes MELLON_name will contain the first
value.


The following code is a simple php-script which prints out all the
variables:

<?php
header('Content-Type: text/plain');

foreach($_SERVER as $key => $value) {
  if(substr($key, 0, 7) == 'MELLON_') {
    echo($key . '=' . $value . "\r\n");
  }
}
?>

--
Marek Denis
[marek.de...@cern.ch]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [TripleO] mid-cycle meetup?

2014-01-30 Thread Gregory Haynes
I would love to attend and March should work fine for me.



--
Gregory Haynes
g...@greghaynes.net





On Mon, Jan 27, 2014, at 01:53 AM, James Polley wrote:

I'd love to come, but at this stage I won't be able to travel before about March
 3, which sounds like it's a few weeks later than most people were thinking of.


-- Forwarded message --
From: Robert Collins robe...@robertcollins.net
Date: 24 January 2014 08:47
Subject: [TripleO] mid-cycle meetup?
To: openstack-operat...@lists.openstack.org


Hi, sorry for proposing this at *cough* the mid-way point [Christmas
shutdown got in the way of internal acks...], but who would come if
there was a mid-cycle meetup? I'm thinking the HP Sunnyvale office as
a venue.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-30 Thread Khanh-Toan Tran
There is an unexpected line break in the middle of the link, so I post it
again:

https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

 -Message d'origine-
 De : Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
 Envoyé : mercredi 29 janvier 2014 13:25
 À : 'OpenStack Development Mailing List (not for usage questions)'
 Objet : [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
Solver
 Scheduler

 Dear all,

 As promised in the Scheduler/Gantt meeting, here is our analysis on the
 connection between Policy Based Scheduler and Solver Scheduler:

 https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bq
 olOri
 IQB2Y

 This document briefs the mechanism of the two schedulers and the
possibility of
 cooperation. It is my personal point of view only.

 In a nutshell, Policy Based Scheduler allows admin to define policies
for different
 physical resources (an aggregate, an availability-zone, or all
 infrastructure) or different (classes of) users. Admin can modify
 (add/remove/modify) any policy in runtime, and the modification effect
is only
 in the target (e.g. the aggregate, the users) that the policy is defined
to. Solver
 Scheduler solves the placement of groups of instances simultaneously by
 putting all the known information into an integer linear system and uses
 an Integer Program solver to solve the latter. Thus relations between VMs
 and between VMs and computes are all accounted for.
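
 To make the integer-program idea concrete, here is a toy sketch (purely
 illustrative, not the Solver Scheduler code; it assumes the PuLP library
 only to show the shape of the model):

 import pulp

 # Toy model: place each VM on exactly one host, respecting free RAM,
 # while minimising an arbitrary per-host cost.
 vms = {'vm1': 2048, 'vm2': 4096}          # RAM demand (MB)
 hosts = {'host1': 4096, 'host2': 8192}    # free RAM (MB)
 cost = {'host1': 1, 'host2': 2}           # example cost weights

 prob = pulp.LpProblem("placement", pulp.LpMinimize)
 x = pulp.LpVariable.dicts("x", (vms, hosts), cat="Binary")

 # Objective: total cost of the chosen placements.
 prob += pulp.lpSum(cost[h] * x[v][h] for v in vms for h in hosts)

 # Each VM is placed exactly once.
 for v in vms:
     prob += pulp.lpSum(x[v][h] for h in hosts) == 1

 # Hosts cannot be overcommitted on RAM.
 for h in hosts:
     prob += pulp.lpSum(vms[v] * x[v][h] for v in vms) <= hosts[h]

 prob.solve()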

 If working together, Policy Based Scheduler can supply the filters and
 weighers following the policy rules defined for different computes.
 These filters and weighers can be converted into constraints & a cost
 function for Solver Scheduler to solve. More details will be found in the doc.

 I look forward to comments and hope that we can work it out.

 Best regards,

 Khanh-Toan TRAN


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Jesse Pretorius

 Tasks which we've identified need to be done:

 1) Convert existing networks
 2) Convert existing port allocations
 3) Convert existing security groups
 4) Convert existing security rules
 5) Convert existing floating IP allocations


Additional tasks:

6) Create routers for each network, where applicable
7) Set the gateway for the routers
8) Create internal interfaces for the routers on the instance subnets
9) Convert the nova-network DNS server entries to DNS entries for the
Neutron Subnets, or set some default DNS entries for those that don't have
any. Perhaps there should also be an option to override any existing DNS
entries with the defaults.

 The conversion approach could either be to directly manipulate the
database tables or to use the APIs. My thinking is that using the APIs
for the Neutron activities would be better as it would provide better
backwards/forwards compatibility, whereas directly accessing the database
for the source data will be a suitable approach. Thoughts?
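
As a rough illustration of the API-driven approach, a sketch using
python-neutronclient (the nova-network column names and the field mapping
below are assumptions, not a finished migration tool):

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://keystone:5000/v2.0')


def convert_network(nova_net):
    # nova_net is a row read from the nova-network 'networks' table
    # (the source data); the keys used here are illustrative.
    net = neutron.create_network(
        {'network': {'name': nova_net['label'],
                     'tenant_id': nova_net['project_id']}})['network']
    neutron.create_subnet(
        {'subnet': {'network_id': net['id'],
                    'ip_version': 4,
                    'cidr': nova_net['cidr'],
                    'gateway_ip': nova_net['gateway'],
                    'dns_nameservers': [nova_net['dns1'], nova_net['dns2']]}})
    return net
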
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler] Policy Based Scheduler needs review approvals

2014-01-30 Thread Khanh-Toan Tran
Dear Stackers,

The Icehouse-3 deadline is approaching quickly and we still need review
approvals for Policy Based Scheduler! So I kindly ask for your attention
to this blueprint.

The purpose of this blueprint is to manage the scheduling process by
policy. With it, admin can define scheduling rules per group of physical
resources (an aggregate, an availability-zone, or the whole
infrastructure), or per (classes of) users. For instance, admin can define
a policy of Load Balancing (distribute workload evenly among the servers)
in some aggregates, and Consolidation (concentrate workloads in a minimal
number of servers so that the others can be hibernated) in other aggregates.
Admin can also change the policies at runtime and the changes will
immediately take effect.

Among the use cases would be Pclouds:
   https://blueprints.launchpad.net/nova/+spec/whole-host-allocation
where we need a scheduling configuration/decision per Pcloud. It can be
done easily by defining a policy for each Pcloud. Future development of
the policy system will even allow users to define their own rules in their
Pclouds!

Best regards,

Khanh-Toan Tran

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-01-30 Thread Dmitry Mescheryakov
I agree with Andrew. I see no value in letting users select how their
cluster is provisioned; it will only make the interface a little bit more
complex.

Dmitry


2014/1/30 Andrew Lazarev alaza...@mirantis.com

 Alexander,

 What is the purpose of exposing this to the user side? Both engines must do
 exactly the same thing, and they exist at the same time only for a transition
 period until the heat engine is stabilized. I don't see any value in the
 proposed option.

 Andrew.


 On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov 
 aigna...@mirantis.comwrote:

 Today Savanna has two provisioning engines, heat and old one known as
 'direct'.
 Users can choose which engine will be used by setting special parameter
 in 'savanna.conf'.

 I have an idea to give users the ability to define the provisioning engine
 not only when savanna is started but also when a new cluster is launched. The
 idea is simple.
 We will just add a new field 'provisioning_engine' to the 'cluster' and
 'cluster_template' objects. And the benefit is obvious: users can easily
 switch from one engine to another without restarting the savanna service.
 Of course, this parameter can be omitted and the default value from
 'savanna.conf' will be applied.

 Is this viable? What do you think?

 Regards,
 Alexander Ignatov




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler] Will the Scheduler use Nova Objects?

2014-01-30 Thread Murray, Paul (HP Cloud Services)
Hi,

I have heard a couple of conflicting comments about the scheduler and nova 
objects that I would like to clear up. In one scheduler/gantt meeting, Gary 
Kotton offered to convert the scheduler to use Nova objects. In another I heard 
that with the creation of Gantt, the scheduler would avoid using any Nova 
specific features including Nova objects.

I can see that these things are evolving at the same time, so it makes sense 
that plans or opinions might change. But I am at a point where it would be nice 
to know.

Which way should this go?

Paul.

Paul Murray
HP Cloud Services
+44 117 312 9309

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as HP CONFIDENTIAL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Jesse Pretorius
Applicable Blueprints:
https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
https://blueprints.launchpad.net/nova/+spec/deprecate-nova-network
https://blueprints.launchpad.net/neutron/+spec/nova-network-to-neutron-recipes

Previous Discussions that relate:
http://www.gossamer-threads.com/lists/openstack/dev/18373
http://www.gossamer-threads.com/lists/openstack/operators/31312
http://www.gossamer-threads.com/lists/openstack/operators/22837
http://www.gossamer-threads.com/lists/openstack/dev/30102
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova style cleanups with associated hacking check addition

2014-01-30 Thread Daniel P. Berrange
On Wed, Jan 29, 2014 at 01:22:59PM -0500, Joe Gordon wrote:
 On Tue, Jan 28, 2014 at 4:45 AM, John Garbutt j...@johngarbutt.com wrote:
  On 27 January 2014 10:10, Daniel P. Berrange berra...@redhat.com wrote:
  On Fri, Jan 24, 2014 at 11:42:54AM -0500, Joe Gordon wrote:
  On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange 
  berra...@redhat.comwrote:
 
   Periodically I've seen people submit big coding style cleanups to Nova
   code. These are typically all good ideas / beneficial, however, I have
   rarely (perhaps even never?) seen the changes accompanied by new hacking
   check rules.
  
   The problem with not having a hacking check added *in the same commit*
   as the cleanup is two-fold
  
- No guarantee that the cleanup has actually fixed all violations
  in the codebase. Have to trust the thoroughness of the submitter
  or do a manual code analysis yourself as reviewer. Both suffer
  from human error.
  
- Future patches will almost certainly re-introduce the same style
  problems again and again and again and again and again and again
  and again and again and again I could go on :-)
  
   I don't mean to pick on one particular person, since it isn't their
   fault that reviewers have rarely/never encouraged people to write
  hacking rules, but to show one example. The following recent change
   updates all the nova config parameter declarations cfg.XXXOpt(...) to
   ensure that the help text was consistently styled:
  
 https://review.openstack.org/#/c/67647/
  
   One of the things it did was to ensure that the help text always started
   with a capital letter. Some of the other things it did were more subtle
   and hard to automate a check for, but an 'initial capital letter' rule
   is really straightforward.
  
   By updating nova/hacking/checks.py to add a new rule for this, it was
   found that there were another 9 files which had incorrect capitalization
   of their config parameter help. So the hacking rule addition clearly
   demonstrates its value here.
 
  This sounds like a rule that we should add to
  https://github.com/openstack-dev/hacking.git.
 
  Yep, it could well be added there. I figure rules added to Nova can
  be upstreamed to the shared module periodically.
 
  +1
 
  I worry about diverging, but I guess thats always going to happen here.
 
   I will concede that documentation about /how/ to write hacking checks
   is not entirely awesome. My current best advice is to look at how some
   of the existing hacking checks are done - find one that is checking
   something that is similar to what you need and adapt it. There are a
   handful of Nova specific rules in nova/hacking/checks.py, and quite a
   few examples in the shared repo
   https://github.com/openstack-dev/hacking.git
   see the file hacking/core.py. There's some very minimal documentation
   about variables your hacking check method can receive as input
   parameters
   https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst
  
  
   In summary, if you are doing a global coding style cleanup in Nova for
   something which isn't already validated by pep8 checks, then I strongly
   encourage additions to nova/hacking/checks.py to validate the cleanup
   correctness. Obviously with some style cleanups, it will be too complex
   to write logic rules to reliably validate code, so this isn't a code
   review point that must be applied 100% of the time. Reasonable personal
   judgement should apply. I will try to comment on any style cleanups I see
   where I think it is practical to write a hacking check.
  
 
  I would take this even further, I don't think we should accept any style
  cleanup patches that can be enforced with a hacking rule and aren't.
 
  IMHO that would mostly just serve to discourage people from submitting
  style cleanup patches because it is too much stick, not enough carrot.
  Realistically for some types of style cleanup, the effort involved in
  writing a style checker that does not have unacceptable false positives
  will be too high to justify. So I think a pragmatic approach to enforcement
  is more suitable.
 
  +1
 
  I would love to enforce it 100% of the time, but sometimes it's hard to
  write the rules for what is still a useful cleanup. Let's see how it goes I
  guess.
 
 I am wary of adding any new style rules that have to be manually
 enforced by human reviewers; we already have a lot of other items to
 cover in a review.

A recent style cleanup was against config variable help strings.
One of the rules used was "Write complete sentences". This is a
perfectly reasonable style cleanup, but I challenge anyone to write
a hacking check that validates "Write complete sentences" in an
acceptable amount of code. Being pragmatic about when hacking checks
are needed is the only practical approach.
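
For what it's worth, the capitalization rule is the easy kind; a rough
sketch of what such a check could look like (the name, error code and
regex are illustrative, not the actual nova/hacking/checks.py code):

import re

cfg_opt_help_re = re.compile(r"cfg\.\w*Opt\(.*help=['\"](?P<first>[a-z])")


def check_opt_help_capitalized(logical_line):
    """Nxxx - config option help text should start with a capital letter."""
    match = cfg_opt_help_re.search(logical_line)
    if match:
        yield (match.start('first'),
               "Nxxx: config option help should start with a capital letter")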

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- 

Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-30 Thread Sylvain Bauza
Hi Khanh-Toan,

I only have one comment on your proposal: why are you proposing something
new for overcommitments with aggregates while the AggregateCoreFilter [1]
and AggregateRAMFilter [2] already exist, which AIUI provide the same feature?


I'm also concerned about the scope of changes for the scheduler, as Gantt is
currently trying to replace it. Can we imagine such big changes being
committed on the Nova side, while it's planned to have a Scheduler service
in the near future?

-Sylvain


[1]
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L74
[2]
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py#L75





2014-01-30 Khanh-Toan Tran khanh-toan.t...@cloudwatt.com

 There is an unexpected line break in the middle of the link, so I post it
 again:

 https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

  -Message d'origine-
  De : Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
  Envoyé : mercredi 29 janvier 2014 13:25
  À : 'OpenStack Development Mailing List (not for usage questions)'
  Objet : [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
 Solver
  Scheduler
 
  Dear all,
 
  As promised in the Scheduler/Gantt meeting, here is our analysis on the
  connection between Policy Based Scheduler and Solver Scheduler:
 
  https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bq
  olOri
  IQB2Y
 
  This document briefs the mechanism of the two schedulers and the
 possibility of
  cooperation. It is my personal point of view only.
 
  In a nutshell, Policy Based Scheduler allows admin to define policies
 for different
  physical resources (an aggregate, an availability-zone, or all
  infrastructure) or different (classes of) users. Admin can modify
  (add/remove/modify) any policy in runtime, and the modification effect
 is only
  in the target (e.g. the aggregate, the users) that the policy is defined
 to. Solver
  Scheduler solves the placement of groups of instances simultaneously by
 putting
  all the known information into a integer linear system and uses Integer
 Program
  solver to solve the latter. Thus relation between VMs and between VMs-
  computes are all accounted for.
 
  If working together, Policy Based Scheduler can supply the filters and
 weighers
  following the policies rules defined for different computes.
  These filters and weighers can be converted into constraints  cost
 function for
  Solver Scheduler to solve. More detailed will be found in the doc.
 
  I look forward for comments and hope that we can work it out.
 
  Best regards,
 
  Khanh-Toan TRAN
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tomas Sedovic

Hi all,

I've seen some confusion regarding the homogenous hardware support as
the first step for the TripleO UI. I think it's time to make sure we're
all on the same page.


Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogenous 
hardware in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and do that (may or 
may not happen within Icehouse)


The first option implies having a single nova flavour that will match 
all the boxes we want to work with. It may or may not be surfaced in the 
UI (I think that depends on our undercloud installation story).


Now, someone (I don't honestly know who or when) proposed a slight step 
up from point #1 that would allow people to try the UI even if their 
hardware varies slightly:


1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't 
do a strict match on the hardware in Ironic. E.g. if our baremetal 
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB 
ram or 1.5TB disk.
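
In scheduler terms, think of something like the following filter sketch
(illustrative only, written against the standard Nova host filter
interface rather than copied from an existing filter):

from nova.scheduler import filters


class LowestCommonDenominatorFilter(filters.BaseHostFilter):
    """Pass nodes that meet or exceed the flavour's RAM and disk."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type', {})
        # Accept nodes at least as big as the flavour asks for, rather
        # than requiring an exact match on the Ironic node properties.
        return (host_state.total_usable_ram_mb >=
                instance_type.get('memory_mb', 0) and
                host_state.total_usable_disk_gb >=
                instance_type.get('root_gb', 0))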


The UI would still assume homogenous hardware and treat it as such. It's 
just that we would allow for small differences.


This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM 
when the flavour says 32. We would treat the flavour as a lowest common 
denominator.


Nor is this an alternative to a full heterogenous hardware support. We 
need to do that eventually anyway. This is just to make the first MVP 
useful to more people.


It's an incremental step that would affect neither point 1. (strict 
homogenous hardware) nor point 2. (full heterogenous hardware support).


If some of these assumptions are incorrect, please let me know. I don't 
think this is an insane U-turn from anything we've already agreed to do, 
but it seems to confuse people.


At any rate, this is not a huge deal and if it's not a good idea, let's 
just drop it.


Tomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Cumulative metrics resetting

2014-01-30 Thread Julien Danjou
On Thu, Jan 30 2014, Adrian Turjak wrote:

 example:
 10min pipeline interval, a reset/shutdown happens 7 mins after the last
 poll. The data for those 7 mins is gone. Even terminating a VM will mean we
 lose the data in that last interval.

If the shutdown is done properly, the nova notifier plugin that we
provide in Ceilometer does a last poll of the instance before shutting
it down, so you don't lose anything.

OTOH if the compute node crashes, it's true that you lose 7 mins of
data. I guess you also have bigger problems then.

 On the other hand, would it be possible to setup a notification based metric
 that updates cumulative metrics, or triggers a poll right before the
 reset/shutdown/suspension/terminate, so we have an entry right before it
 resets and don't lose any data? This would pretty much solve the issue, and
 as long as it is documented that the cumulative metrics reset, this would
 solve most problems.

Yes, we have the nova notifier plugin doing exactly that. :)

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] LBaaS subteam meeting 30.01.2014 14-00 UTC

2014-01-30 Thread Eugene Nikanorov
Hi neutrons,

Let's keep our usual weekly meeting at #openstack-meeting at 14-00 UTC
We'll discuss current status of main features on Icehouse agenda:
 - SSL
 - LB instance
 - L7 rules

Meeting page: https://wiki.openstack.org/wiki/Network/LBaaS

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Securing RPC channel between the server and the agent

2014-01-30 Thread Eugene Nikanorov
Hi Russel,

Thank for your input.
I'll look into that.

Thanks,
Eugene.


On Mon, Jan 27, 2014 at 6:57 PM, Russell Bryant rbry...@redhat.com wrote:

 On 01/27/2014 09:37 AM, Eugene Nikanorov wrote:
  Hi folks,
 
  As we are going to add ssl implementation to lbaas which would be based
  on well-known haproxy+stunnel combination, there is one problem that we
  need to solve: securing communication channel between neutron-server and
  the agent.
 
  I see several approaches here:
  1) Rely on secure messaging as described here:
 
 http://docs.openstack.org/security-guide/content/ch038_transport-security.html
 
  pros: no or minor additional things to care of on neutron-server side
  and client side
  cons: might be more complex to test. Also I'm not sure testing
  infrastructure uses that.
  We'll need to state that lbaas ssl is only secure when transport
  security is enabled.
 
  2) Provide neutron server/agent with certificate for encrypting
  keys/certificates that are dedicated to loadbalancers.
 
  pros: doesn't depend on cloud-wide messaging security. We can say that
  'ssl works' in any case.
  cons: more to implement, more complex deployment.
 
  Unless I've missed some other obvious solution what do you think is the
  best approach here?
  (I'm not considering the usage of external secure store like barbican at
  this point)
 
  What do you think?

 Using existing available transport security is a good start (SSL to your
 amqp broker).

 For a step beyond that, we really need to look at a solution that
 applies across all of OpenStack, as this is a very general problem that
 needs to be solved across many components.

 There was a proposal a while back:

 https://wiki.openstack.org/wiki/MessageSecurity

 This has since been moving forward.  Utilizing it has been blocked on
 getting KDS in Keystone.  IIRC, KDS should be implemented in Icehouse,
 so we can start utilizing it in other services in the Juno cycle.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Thomas Herve
Hi all,

While talking to Zane yesterday, he raised an interesting question about 
whether or not we want to keep a LaunchConfiguration object for the native 
autoscaling resources.

The LaunchConfiguration object basically holds properties to be able to fire 
new servers in a scaling group. In the new design, we will be able to start 
arbitrary resources, so we can't keep a strict LaunchConfiguration object as it 
exists, as we can have arbitrary properties.

It may still be interesting to store it separately to be able to reuse it
between groups.

So either we do this:

group:
  type: OS::Heat::ScalingGroup
  properties:
scaled_resource: OS::Nova::Server
resource_properties:
  image: my_image
  flavor: m1.large 

Or:

group:
  type: OS::Heat::ScalingGroup
  properties:
scaled_resource: OS::Nova::Server
launch_configuration: server_config
server_config:
  type: OS::Heat::LaunchConfiguration
  properties:
image: my_image
flavor: m1.large 

(Not sure we can actually define dynamic properties, in which case it'd be 
behind a top property.)

Thoughts?

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Thomas Spatzier
Hi Thomas,

I haven't looked at the details of the autoscaling design for a while, but
the first option looks more intuitive to me.
It seems to cover the same content as LaunchConfiguration, but it is
generic and therefore would provide one common approach for all kinds
of resources.

Regards,
Thomas

Thomas Herve thomas.he...@enovance.com wrote on 30/01/2014 12:01:38:
 From: Thomas Herve thomas.he...@enovance.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 30/01/2014 12:06
 Subject: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

 Hi all,

 While talking to Zane yesterday, he raised an interesting question
 about whether or not we want to keep a LaunchConfiguration object
 for the native autoscaling resources.

 The LaunchConfiguration object basically holds properties to be able
 to fire new servers in a scaling group. In the new design, we will
 be able to start arbitrary resources, so we can't keep a strict
 LaunchConfiguration object as it exists, as we can have arbitrary
properties.

 It may be still be interesting to store it separately to be able to
 reuse it between groups.

 So either we do this:

 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 resource_properties:
   image: my_image
   flavor: m1.large

 Or:

 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 launch_configuration: server_config
 server_config:
   type: OS::Heat::LaunchConfiguration
   properties:
 image: my_image
 flavor: m1.large

 (Not sure we can actually define dynamic properties, in which case
 it'd be behind a top property.)

 Thoughts?

 --
 Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Jiří Stránský

On 01/30/2014 11:26 AM, Tomas Sedovic wrote:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
ram or 1.5TB disk.

The UI would still assume homogenous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.

Nor is this an alternative to a full heterogenous hardware support. We
need to do that eventually anyway. This is just to make the first MVP
useful to more people.

It's an incremental step that would affect neither point 1. (strict
homogenous hardware) nor point 2. (full heterogenous hardware support).

If some of these assumptions are incorrect, please let me know. I don't
think this is an insane U-turn from anything we've already agreed to do,
but it seems to confuse people.


I think having this would allow users with almost-homogenous hardware to use
TripleO. If someone already has precisely homogenous hardware, they
won't notice a difference.


So I'm +1 for this idea. The condition should be that it's easy to
implement, because imho it's something that will get dropped when
support for fully heterogenous hardware is added.


Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-30 Thread Khanh-Toan Tran
Hi Sylvain,



1) Some Filters such as AggregateCoreFilter, AggregateRAMFilter can change
its parameters for aggregates. But what if admin wants to change for all
hosts in an availability-zone? Does he have to rewrite all the parameters
in all aggregates? Or should we create a new AvailabilityZoneCoreFilter?



The Policy Based Scheduler (PBS) blueprint separates the effect (filter
according to Core) from its target (all hosts in an aggregate, or in an
availability-zone). It will benefit all filters, not just CoreFilter or
RAMFilter, so that we can avoid creating, for each filter XFilter, the
AggregateXFilter and AvailabilityZoneXFilter from now on. Besides, if admin
wants to apply a filter to some aggregates (or availability-zones) and
not the others (not call the filters at all, not just modify parameters), he
can do it. It helps us avoid running all filters on all hosts.



2) In fact, we are also preparing for a separate scheduler, of which PBS is a
very first step; that's why we purposely separate the Policy Based
Scheduler from the Policy Based Scheduling Module (PBSM) [1], which is the core
of our architecture. If you look at our code, you will see that
Policy_Based_Scheduler.py is only slightly different from the Filter
Scheduler. That is because we just want a link from Nova-scheduler to
PBSM. We're trying to push some more management into the scheduler without
causing too much modification, as you can see in the patch.



Thus I'm very happy that Gantt is proposed. As I see it, Gantt is based on
the Nova-scheduler code, with the plan of replacing nova-scheduler in J.
The separation from Nova will be complicated, but not on the scheduling part.
Thus integrating PBS and PBSM into Gantt would not be a problem.



Best regards,



[1]
https://docs.google.com/document/d/1gr4Pb1ErXymxN9QXR4G_jVjLqNOg2ij9oA0JrLwMVRA



Toan



De : Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Envoyé : jeudi 30 janvier 2014 11:16
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
Solver Scheduler



Hi Khanh-Toan,



I only have one comment on your proposal : why are you proposing something
new for overcommitments with aggregates while the AggregateCoreFilter [1]
and AggregateRAMFilter [2]already exist, which AIUI provide same feature ?





I'm also concerned about the scope of changes for scheduler, as Gantt is
currently trying to replace it. Can we imagine such big changes to be
committed on the Nova side, while it's planned to have a Scheduler service
in the next future ?



-Sylvain





[1]
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_
filter.py#L74

[2]
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_f
ilter.py#L75









2014-01-30 Khanh-Toan Tran khanh-toan.t...@cloudwatt.com

There is an unexpected line break in the middle of the link, so I post it
again:

https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

 -Message d'origine-
 De : Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
 Envoyé : mercredi 29 janvier 2014 13:25
 À : 'OpenStack Development Mailing List (not for usage questions)'
 Objet : [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and

Solver
 Scheduler

 Dear all,

 As promised in the Scheduler/Gantt meeting, here is our analysis on the
 connection between Policy Based Scheduler and Solver Scheduler:

 https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bq
 olOri
 IQB2Y

 This document briefs the mechanism of the two schedulers and the
possibility of
 cooperation. It is my personal point of view only.

 In a nutshell, Policy Based Scheduler allows admin to define policies
for different
 physical resources (an aggregate, an availability-zone, or all
 infrastructure) or different (classes of) users. Admin can modify
 (add/remove/modify) any policy in runtime, and the modification effect
is only
 in the target (e.g. the aggregate, the users) that the policy is defined
to. Solver
 Scheduler solves the placement of groups of instances simultaneously by
putting
 all the known information into a integer linear system and uses Integer
Program
 solver to solve the latter. Thus relation between VMs and between VMs-
 computes are all accounted for.

 If working together, Policy Based Scheduler can supply the filters and
weighers
 following the policies rules defined for different computes.
 These filters and weighers can be converted into constraints  cost
function for
 Solver Scheduler to solve. More detailed will be found in the doc.

 I look forward for comments and hope that we can work it out.

 Best regards,

 Khanh-Toan TRAN


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Ladislav Smola

On 01/30/2014 12:39 PM, Jiří Stránský wrote:

On 01/30/2014 11:26 AM, Tomas Sedovic wrote:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
ram or 1.5TB disk.

The UI would still assume homogenous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.

Nor is this an alternative to a full heterogenous hardware support. We
need to do that eventually anyway. This is just to make the first MVP
useful to more people.

It's an incremental step that would affect neither point 1. (strict
homogenous hardware) nor point 2. (full heterogenous hardware support).

If some of these assumptions are incorrect, please let me know. I don't
think this is an insane U-turn from anything we've already agreed to do,
but it seems to confuse people.


I think having this would allow users with almost-homogeous hardware 
use TripleO. If someone already has precisely homogenous hardware, 
they won't notice a difference.


So i'm +1 for this idea. The condition should be that it's easy to 
implement, because imho it's something that will get dropped when 
support for fully heterogenous hardware is added.


Jirka



Hello,

I am for implementing support for heterogeneous hardware properly;
lifeless should post what he recommends soon, so I would rather discuss
that. We should be able to do a simple version in I.


Lowest common denominator doesn't solve storage vs. compute nodes. If we
really have similar hardware whose differences we don't care about, we can
just fill the nova-baremetal/ironic specs the same as the flavor.
Why would we want to see in the UI that the hardware is different, when we
can't really determine what goes where?
And as you say, assume homogenous hardware and treat it as such. So
showing in the UI that the hardware is different doesn't make any sense then.

So the solution for similar hardware is already there.

I don't see this as an incremental step, but as an ugly hack that is not
placed anywhere on the roadmap.


Regards,
Ladislav


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-30 Thread Robert Li (baoli)
Ian,

I hope that you guys are in agreement on this. But take a look at the wiki: 
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support and see if it has 
any difference from your proposals.  IMO, it's the critical piece of the 
proposal, and hasn't been specified in exact term yet. I'm not sure about 
vif_attributes or vif_stats, which I just heard from you. In any case, I'm not 
convinced with the flexibility and/or complexity, and so far I haven't seen a 
use case that really demands it. But I'd be happy to see one.

thanks,
Robert

On 1/29/14 4:43 PM, Ian Wells 
ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk wrote:

My proposals:

On 29 January 2014 16:43, Robert Li (baoli) 
ba...@cisco.commailto:ba...@cisco.com wrote:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If that's
possible, how is that done?

When nova-compute starts up, it requests the VIF attributes that the schedulers 
need.  (You could have multiple schedulers; they could be in disagreement; it 
picks the last answer.)  It returns pci_stats by the selected combination of 
VIF attributes.

When nova-scheduler starts up, it sends an unsolicited cast of the attributes.  
nova-compute updates the attributes, clears its pci_stats and recreates them.

If nova-scheduler receives pci_stats with incorrect attributes it discards them.

(There is a row from nova-compute summarising devices for each unique 
combination of vif_stats, including 'None' where no attribute is set.)

I'm assuming here that the pci_flavor_attrs are read on startup of 
nova-scheduler and could be re-read and different when nova-scheduler is reset. 
 There's a relatively straightforward move from here to an API for setting it 
if this turns out to be useful, but firstly I think it would be an uncommon 
occurrence and secondly it's not something we should implement now.
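
Purely as an illustration of the shape of that data, a pci_stats pool keyed
by the selected attributes might look something like this (the attribute
names are examples, not a settled schema):

pci_stats = [
    {'vendor_id': '8086', 'product_id': '10ed',
     'net-group': 'physnet1', 'count': 14},
    {'vendor_id': '15b3', 'product_id': '1004',
     'net-group': None, 'count': 6},
]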

2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
flavor is defined with a matching expression in the form of attr1 = val11
[| val12 ...], [attr2 = val21 [| val22 ...]], .... And this expression is used
to match one or more PCI stats groups until a free PCI device is located.
In this case, both attr1 and attr2 can have multiple values, and both
attributes need to be satisfied. Please confirm this understanding is
correct.

This looks right to me as we've discussed it, but I think we'll be wanting 
something that allows a top level AND.  In the above example, I can't say an 
Intel NIC and a Mellanox NIC are equally OK, because I can't say (intel + 
product ID 1) AND (Mellanox + product ID 2).  I'll leave Yunhong to decide how 
the details should look, though.

3. I'd like to see an example that involves multiple attributes. let's say
pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
know how PCI stats groups are formed on compute nodes based on that, and
how many of PCI stats groups are there? What's the reasonable guidelines
in defining the PCI flavors.

I need to write up the document for this, and it's overdue.  Leave it with me.
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Will the Scheduler use Nova Objects?

2014-01-30 Thread Gary Kotton
Hi,
I started to do the work – https://review.openstack.org/#/c/65691/. From the 
comments on the review it did not seem the right way to go. So I gave up on it. 
Sorry not to have updated. I personally think that the scheduler should use 
objects; the reasons for this are as follows:

 1.  One of the aims of the objects is to enable seamless upgrades. If we have 
this in Gantt, which starts with using only the nova database, then we can 
upgrade to using another database. The object interface will do the translations.
 2.  We may be able to leverage objects to interface with different types of 
services. That will enable us to provide cross-service features far quicker.

Thanks
Gary

From: Murray, Paul (HP Cloud Services) 
pmur...@hp.commailto:pmur...@hp.com
Date: Thursday, January 30, 2014 11:41 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Cc: Administrator gkot...@vmware.commailto:gkot...@vmware.com, Dan Smith 
d...@danplanet.commailto:d...@danplanet.com
Subject: [Nova][Scheduler] Will the Scheduler use Nova Objects?

Hi,

I have heard a couple of conflicting comments about the scheduler and nova 
objects that I would like to clear up. In one scheduler/gantt meeting, Gary 
Kotton offered to convert the scheduler to use Nova objects. In another I heard 
that with the creation of Gantt, the scheduler would avoid using any Nova 
specific features including Nova objects.

I can see that these things are evolving at the same time, so it makes sense 
that plans or opinions might change. But I am at a point where it would be nice 
to know.

Which way should this go?

Paul.

Paul Murray
HP Cloud Services
+44 117 312 9309

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as HP CONFIDENTIAL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Cumulative metrics resetting

2014-01-30 Thread Adrian Turjak


On 30/01/2014 23:31, Julien Danjou wrote:


On the other hand, would it be possible to setup a notification based metric
that updates cumulative metrics, or triggers a poll right before the
reset/shutdown/suspension/terminate, so we have an entry right before it
resets and don't lose any data? This would pretty much solve the issue, and
as long as it is documented that the cumulative metrics reset, this would
solve most problems.

Yes, we have the nova notifier plugin doing exactly that. :)



Awesome! But where in the source are they? The only two notification 
plugins that seem to exist on the compute agent are for cpu and 
instance. Is there meant to be one that handles 
network.outgoing/incoming updates on those VM state changes?


On Havana I know those network notifier plugins aren't there, so I 
assume master then and possibly going towards icehouse.


I'll have a play with it again, but last time I did a clean devstack off 
of master, notification plugins weren't working at all for me. Has 
anyone else had this issue?


Sorry for all the questions, just poking at ceilometer and trying to get 
data from it that I can be sure I can trust, and use for billing. I 
don't mind digging in the source, or even extending/fixing things, just 
need to know where to look.


Cheers,
-Adrian Turjak

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Cumulative metrics resetting

2014-01-30 Thread Julien Danjou
On Thu, Jan 30 2014, Adrian Turjak wrote:

 Awesome! But where in the source are they? As the only two notification
 plugins that seem to exist on the compute agent are for cpu and instance. Is
 there meant to be one that handles network.outgoing/incoming updates on
 those VM state changes?

This does not rely on notifications but on polling. The notifier is in
ceilometer.compute.nova_notifier.

 On Havana I know those network notifier plugins aren't there, so I assume
 master then and possibly going towards icehouse.

That's not based on notification, that's based on polling.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-30 Thread Robert Kukura
On 01/30/2014 01:42 AM, Irena Berezovsky wrote:
 Please see inline
 
  
 
 *From:*Ian Wells [mailto:ijw.ubu...@cack.org.uk]
 *Sent:* Thursday, January 30, 2014 1:17 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on
 Jan. 29th
 
  
 
 On 29 January 2014 23:50, Robert Kukura rkuk...@redhat.com
 mailto:rkuk...@redhat.com wrote:
 
 On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
  Hi Bob,
 
  that's a good find. profileid as part of IEEE 802.1br needs to be in
  binding:profile, and can be specified by a normal user, and later
 possibly
  the pci_flavor. Would it be wrong to say something as in below in the
  policy.json?
   create_port:binding:vnic_type: rule:admin_or_network_owner
   create_port:binding:profile:profileid:
 rule:admin_or_network_owner
 
 Maybe, but a normal user that owns a network has no visibility into the
 underlying details (such as the providernet extension attributes).
 
  
 
 I'm with Bob on this, I think - I would expect that vnic_type is passed
 in by the user (user readable, and writeable, at least if the port is
 not attached) and then may need to be reflected back, if present, in the
 'binding' attribute via the port binding extension (unless Nova can just
 go look for it - I'm not clear on what's possible here).
 
 [IrenaB] I would prefer not to add a new extension for vnic_type. I
 think it fits well into the port binding extension, and it may be reasonable
 to follow the policy rules as Robert suggested. The way the user specifies
 the vnic_type via the nova API is currently left out for the short term. Based
 on previous PCI meeting discussions, it was raised by John that a regular
 user may be required to set a vNIC flavor, but he is definitely not expected
 to manage 'driver' level details of the way to connect the vNIC.
 
 For me it looks like the neutron port can handle vnic_type via port
 binding, and the question is whether it is a standalone attribute on port
 binding or a key/value pair in binding:profile.

I do not think we should try to associate different access policies with
different keys within the binding:profile attribute (or any other
dictionary attribute). We could consider changing the policy for
binding:profile itself, but I'm not in favor of that because I strongly
feel normal cloud users should not be exposed to any of these internal
details of the deployment. If vnic_type does need to be accessed by
normal users, I believe it should be a top-level attribute or a
key/value pair within a user-accessible top-level attribute.
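
To make that concrete, if vnic_type were user-accessible, a port create
request could look roughly like this (the attribute names and values are
taken from this thread's proposals, not from an implemented API):

# 'neutron' is assumed to be a python-neutronclient Client instance.
net_id = 'NET_UUID'  # placeholder
port = neutron.create_port(
    {'port': {
        'network_id': net_id,
        'binding:vnic_type': 'direct',                  # e.g. an SR-IOV VF
        'binding:profile': {'profileid': 'profile-1'},  # 802.1br profile id
    }})['port']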

-Bob

 
 
  
 
 Also, would a normal cloud user really know what pci_flavor to use?
 Isn't all this kind of detail hidden from a normal user within the nova
 VM flavor (or host aggregate or whatever) pre-configured by the admin?
 
  
 
 Flavors are user-visible, analogous to Nova's machine flavors, they're
 just not user editable.  I'm not sure where port profiles come from.
 -- 
 
 Ian.
 
  
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Christopher Yeoh
On Thu, Jan 30, 2014 at 2:08 PM, Michael Still mi...@stillhq.com wrote:

 On Thu, Jan 30, 2014 at 2:29 PM, Christopher Yeoh cbky...@gmail.com
 wrote:

  So if nova-network doesn't go away this has implications for the V3 API
 as
  it currently doesn't support
  nova-network. I'm not sure that we have time to add support for it in
  icehouse now, but if nova-network is
  not going to go away then we need to add it to the V3 API or we will be
  unable to ever deprecate the
  V2 API.

 Is the problem here getting the code written, or getting it through
 reviews? i.e. How can I re-prioritise work to help you here?


So I think it's a combination of both. There are probably around 10 extensions
from V2 that would need looking at to port. There are some cases where the API
supported both nova-network and neutron, proxying in the latter case, and
others where only nova-network was supported. So we'll need to make a decision
pretty quickly about whether we present a unified networking interface (e.g.
proxy for neutron) or have some interfaces which you only use when you use
nova-network. There's a bit of work either way. Also, given how long we'll
have V3 for, we want to take the opportunity to clean up the APIs we do port.
And the feature proposal deadline is now less than 3 weeks away, so combined
with the already existing work we have for i-3 it is going to be a little
tight.

The other issue is we have probably at least 50 or so V3 API related
changesets in the queue at the moment, plus obviously more coming over the
next few weeks. So I'm a bit wary of how much extra review attention we can
realistically expect.

The two problems together make me think that although it's not impossible,
there's a reasonable level of risk that we wouldn't get it all done AND
merged in i-3. And I think we want to avoid the situation where we have
some of the things currently in the queue merged and some of, say, the
nova-network patches done, but neither complete. More people contributing
patches and core review cycles will of course help though, so any help is
welcome :-)

This is all dependent on nova-network never going away. If the intent is
that it would eventually be deprecated - say, in the same timeframe as the
V2 API - then I don't think it's worth the extra effort/risk of putting it
into the V3 API in Icehouse.

Regards,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Assigning a floating IP to an internal network

2014-01-30 Thread Ofer Barkai
Hi all,

During the implementation of:
https://blueprints.launchpad.net/neutron/+spec/floating-ip-extra-route

which suggests allowing assignment of a floating IP to an internal address
not directly connected to the router, provided there is a route configured
on the router to that internal address.

In: https://review.openstack.org/55987

There seem to be 2 possible approaches for finding an appropriate
router for a floating IP assignment, while considering extra routes:

1. Use the first router that has a route matching the internal address
which is the target of the floating IP.

2. Use the first router that has a matching route, _and_ verify that
there exists a path of connected devices to the network object to
which the internal address belongs.
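
For concreteness, approach 1 boils down to something like the following
(purely an illustrative sketch, not the code in the review; it assumes each
router carries its extra routes as a list of {'destination': CIDR,
'nexthop': IP} entries):

    import netaddr

    def find_router_for_floating_ip(routers, internal_ip):
        # Approach 1: return the first router that has an extra route whose
        # destination CIDR covers the internal (fixed) IP address.
        ip = netaddr.IPAddress(internal_ip)
        for router in routers:
            for route in router.get('routes', []):
                if ip in netaddr.IPNetwork(route['destination']):
                    return router
        return None

Approach 2 would add, after the route match, a walk over the connected ports
and subnets to confirm there is a path to the internal network, which is
where the extra DB lookups discussed below come from.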

The first approach solves the simple case of a gateway on a compute
host that protects an internal network (which is the motivation for
this enhancement).

However, if the same (or overlapping) addresses are assigned to
different internal networks, there is a risk that the first approach
might find the wrong router.

Still, the second approach might force many DB lookups to trace the path from
the router to the internal network. This overhead might not be
desirable if the use case does not (at least, initially) appear in the
real world.

Patch set 6 presents the first, lightweight approach, and Patch set 5
presents the second, more accurate approach.

I would appreciate the opportunity to get more points of view on this subject.

Thanks,

-Ofer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Clint Byrum
Excerpts from Thomas Herve's message of 2014-01-30 03:01:38 -0800:
 Hi all,
 
 While talking to Zane yesterday, he raised an interesting question about 
 whether or not we want to keep a LaunchConfiguration object for the native 
 autoscaling resources.
 
 The LaunchConfiguration object basically holds properties to be able to fire 
 new servers in a scaling group. In the new design, we will be able to start 
 arbitrary resources, so we can't keep a strict LaunchConfiguration object as 
 it exists, as we can have arbitrary properties.
 

IIRC, LaunchConfiguration is just part of the API for AWS's separate
auto scaling service.

Since we're auto scaling in Heat we have template intelligence in our
auto scaler and thus we shouldn't need two resources.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Sanchez, Cristian A
Hi Matt, 
What about the rest of the components? Do they also have this capability?
Thanks

Cristian

On 30/01/14 04:59, Macdonald-Wallace, Matthew
matthew.macdonald-wall...@hp.com wrote:

Hi Cristian,

The functionality already exists within Openstack (certainly it's there
in Nova) it's just not very well documented (something I keep meaning to
do!)

Basically you need to add the following to your nova.conf file:

log_config=/etc/nova/logging.conf

And then create /etc/nova/logging.conf with the configuration you want to
use based on the Python Logging Module's ini configuration format.
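
For reference, a minimal /etc/nova/logging.conf in that ini format could look
something like this (the logger names, handler, file path and format string
below are only an example):

    [loggers]
    keys = root, nova

    [handlers]
    keys = logfile

    [formatters]
    keys = default

    [logger_root]
    level = WARNING
    handlers = logfile

    [logger_nova]
    level = INFO
    handlers = logfile
    qualname = nova
    propagate = 0

    [handler_logfile]
    class = FileHandler
    args = ('/var/log/nova/nova.log',)
    formatter = default

    [formatter_default]
    format = %(asctime)s %(process)d %(levelname)s %(name)s %(message)s

Pointing the handler at syslog instead (class = handlers.SysLogHandler) is
the usual first step if you want to ship logs off to logstash or similar.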

Hope that helps,

Matt

 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 29 January 2014 17:57
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards
 
 Hi Matthew,
 I'm interested to help in this switch to python logging framework for
shipping to
 logstash/etc. Are you working on a blueprint for this?
 Cheers,
 
 Cristian
 
 On 27/01/14 11:07, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:
 
 Hi Sean,
 
 I'm currently working on moving away from the built-in logging to use
 log_config=filename and the python logging framework so that we can
 start shipping to logstash/sentry/insert other useful tool here.
 
 I'd be very interested in getting involved in this, especially from a
 why do we have log messages that are split across multiple lines
 perspective!
 
 Cheers,
 
 Matt
 
 P.S. FWIW, I'd also welcome details on what the Audit level gives us
 that the others don't... :)
 
  -Original Message-
  From: Sean Dague [mailto:s...@dague.net]
  Sent: 27 January 2014 13:08
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] Proposed Logging Standards
 
  Back at the beginning of the cycle, I pushed for the idea of doing
 some log  harmonization, so that the OpenStack logs, across services,
 made sense.
 I've
  pushed a proposed changes to Nova and Keystone over the past couple
 of days.
 
  This is going to be a long process, so right now I want to just focus
 on making  INFO level sane, because as someone that spends a lot of
 time staring at logs in  test failures, I can tell you it currently
 isn't.
 
  https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written  down so far, comments welcomed.
 
  We kind of need to solve this set of recommendations once and for all
 up front,  because negotiating each change, with each project, isn't
 going to work (e.g -
  https://review.openstack.org/#/c/69218/)
 
  What I'd like to find out now:
 
  1) who's interested in this topic?
  2) who's interested in helping flesh out the guidelines for various
 log levels?
  3) who's interested in helping get these kinds of patches into
 various projects in  OpenStack?
  4) which projects are interested in participating (i.e. interested in
 prioritizing  landing these kinds of UX improvements)
 
  This is going to be progressive and iterative. And will require lots
 of folks  involved.
 
-Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Will the Scheduler use Nova Objects?

2014-01-30 Thread Andrew Laski

On 01/30/14 at 04:13am, Gary Kotton wrote:

Hi,
I started to do the work – https://review.openstack.org/#/c/65691/. From the 
comments on the review it did not seem the right way to go. So I gave up on it. 
Sorry to not have updated. I personally think that the scheduler should use 
objects, the reason for this is as follows:

1.  One of the aims of the objects is to enable seamless upgrades. If we have 
this in Gantt, which starts out using only the nova database, then we can 
later upgrade to using another database. The object interface will do the 
translations.
2.  We may be able to leverage objects to interface with different types of 
services. That will enable us to provide cross-service features far more 
quickly.


I'm of the opinion that the scheduler should use objects, for all the 
reasons that Nova uses objects, but that they should not be Nova 
objects.  Ultimately what the scheduler needs is a concept of capacity, 
allocations, and locality of resources.  But the way those are modeled 
doesn't need to be tied to how Nova does it, and once the scope expands 
to include Cinder it may quickly turn out to be limiting to hold onto 
Nova objects.




Thanks
Gary

From: Murray, Paul (HP Cloud Services) pmur...@hp.com
Date: Thursday, January 30, 2014 11:41 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Administrator gkot...@vmware.com, Dan Smith d...@danplanet.com
Subject: [Nova][Scheduler] Will the Scheduler use Nova Objects?

Hi,

I have heard a couple of conflicting comments about the scheduler and nova 
objects that I would like to clear up. In one scheduler/gantt meeting, Gary 
Kotton offered to convert the scheduler to use Nova objects. In another I heard 
that with the creation of Gantt, the scheduler would avoid using any Nova 
specific features including Nova objects.

I can see that these things are evolving at the same time, so it makes sense 
that plans or opinions might change. But I am at a point where it would be nice 
to know.

Which way should this go?

Paul.

Paul Murray
HP Cloud Services
+44 117 312 9309

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments to it are 
confidential and may be legally privileged. If you have received this message in error, 
you should delete it from your system immediately and advise the sender. To any recipient 
of this message within HP, unless otherwise stated you should consider this message and 
attachments as HP CONFIDENTIAL.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Russell Bryant
On 01/30/2014 08:10 AM, Christopher Yeoh wrote:
 On Thu, Jan 30, 2014 at 2:08 PM, Michael Still mi...@stillhq.com
 mailto:mi...@stillhq.com wrote:
 
 On Thu, Jan 30, 2014 at 2:29 PM, Christopher Yeoh cbky...@gmail.com
 mailto:cbky...@gmail.com wrote:
 
  So if nova-network doesn't go away this has implications for the
 V3 API as
  it currently doesn't support
  nova-network. I'm not sure that we have time to add support for it in
  icehouse now, but if nova-network is
  not going to go away then we need to add it to the V3 API or we
 will be
  unable to ever deprecate the
  V2 API.
 
 Is the problem here getting the code written, or getting it through
 reviews? i.e. How can I re-prioritise work to help you here?
 
 
 So I think its a combination of both. There's probably around 10
 extensions from V2 that would need looking at to port from V2. There's
 some cases where the API supported both nova network and neutron,
 proxying in the latter case and others where only nova network was
 supported. So we'll need to make a decision pretty quickly around
 whether we present a unified networking interface (eg proxy for neutron)
 or have some interfaces which you only use when you use nova-network.
 There's a bit of work either way. Also given how long we'll have V3 for
 want to take the opportunity to cleanup the APIs we do port. And feature
 proposal deadline is now less than 3 weeks away so combined with the
 already existing work we have for i-3 it is going to be a little tight.
 
 The other issue is we have probably at least 50 or so V3 API related
 changesets in the queue at the moment, plus obviously more coming over
 the next few weeks. So I'm a bit a wary of how much extra review
 attention we can realistically expect.
 
 The two problems together make me think that although its not
 impossible, there's a reasonable level of risk that we wouldn't get it
 all done AND merged in i-3. And I think we want to avoid the situation
 where we have some of the things currently in the queue merged and some
 of say the nova-network patches done, but not complete with either. More
 people contributing patches and core review cycles will of course help
 though so any help is welcome :-)
 
 This is all dependent on nova-network never going away. If the intent is
 that it would eventually be deprecated - say in the same timeframe as
 the V2 API then I don't think its worth the extra effort/risk putting it
 in the V3 API in icehouse.

I can't say with any sort of confidence that I think nova-network will go
away in the foreseeable future.  Yes, this has an unfortunate big impact
on our original plan for the v3 API.  :-(

However, I'm also not sure about the status of v3 in Icehouse, anyway.
One of the key things I want to see in before we freeze the API is the
tasks work.  AFAIK, there hasn't been any design review on this, much
less code review.  It seems incredibly unlikely that it will be done for
Icehouse at this point.  Andrew, thoughts?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Macdonald-Wallace, Matthew
No idea, I only really work on Nova, but as this is in Oslo I expect so!

Matt

 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 30 January 2014 13:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards
 
 Hi Matt,
 What about the rest of the components? Do they also have this capability?
 Thanks
 
 Cristian
 
 On 30/01/14 04:59, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:
 
 Hi Cristian,
 
 The functionality already exists within Openstack (certainly it's there
 in Nova) it's just not very well documented (something I keep meaning
 to
 do!)
 
 Basically you need to add the following to your nova.conf file:
 
 log_config=/etc/nova/logging.conf
 
 And then create /etc/nova/logging.conf with the configuration you want
 to use based on the Python Logging Module's ini configuration format.
 
 Hope that helps,
 
 Matt
 
  -Original Message-
  From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
  Sent: 29 January 2014 17:57
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] Proposed Logging Standards
 
  Hi Matthew,
 I'm interested to help in this switch to python logging framework for
 shipping to  logstash/etc. Are you working on a blueprint for this?
  Cheers,
 
  Cristian
 
  On 27/01/14 11:07, Macdonald-Wallace, Matthew
  matthew.macdonald-wall...@hp.com wrote:
 
  Hi Sean,
  
  I'm currently working on moving away from the built-in logging to
  use log_config=filename and the python logging framework so that
  we can start shipping to logstash/sentry/insert other useful tool here.
  
  I'd be very interested in getting involved in this, especially from
  a why do we have log messages that are split across multiple lines
  perspective!
  
  Cheers,
  
  Matt
  
  P.S. FWIW, I'd also welcome details on what the Audit level gives
  us that the others don't... :)
  
   -Original Message-
   From: Sean Dague [mailto:s...@dague.net]
   Sent: 27 January 2014 13:08
   To: OpenStack Development Mailing List
   Subject: [openstack-dev] Proposed Logging Standards
  
   Back at the beginning of the cycle, I pushed for the idea of doing
  some log  harmonization, so that the OpenStack logs, across
  services, made sense.
  I've
   pushed a proposed changes to Nova and Keystone over the past
  couple of days.
  
   This is going to be a long process, so right now I want to just
  focus on making  INFO level sane, because as someone that spends a
  lot of time staring at logs in  test failures, I can tell you it
  currently isn't.
  
   https://wiki.openstack.org/wiki/LoggingStandards is a few things
  I've written  down so far, comments welcomed.
  
   We kind of need to solve this set of recommendations once and for
  all up front,  because negotiating each change, with each project,
  isn't going to work (e.g -
   https://review.openstack.org/#/c/69218/)
  
   What I'd like to find out now:
  
   1) who's interested in this topic?
   2) who's interested in helping flesh out the guidelines for
  various log levels?
   3) who's interested in helping get these kinds of patches into
  various projects in  OpenStack?
   4) which projects are interested in participating (i.e. interested
  in prioritizing  landing these kinds of UX improvements)
  
   This is going to be progressive and iterative. And will require
  lots of folks  involved.
  
   -Sean
  
   --
   Sean Dague
   Samsung Research America
   s...@dague.net / sean.da...@samsung.com http://dague.net
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Russell Bryant
On 01/29/2014 05:24 PM, Kyle Mestery wrote:
 
 On Jan 29, 2014, at 12:04 PM, Russell Bryant rbry...@redhat.com wrote:
 
 On 01/29/2014 12:45 PM, Daniel P. Berrange wrote:
 I was thinking of an upgrade path more akin to what users got when we
 removed the nova volume driver, in favour of cinder.

  https://wiki.openstack.org/wiki/MigrateToCinder

 ie no guest visible downtime / interuption of service, nor running of
 multiple Nova instances in parallel.

 Yeah, I'd love to see something like that.  I would really like to see
 more effort in this area.  I honestly haven't been thinking about it
 much in a while personally, because the rest of the make it work gaps
 have still been a work in progress.

 There's a bit of a bigger set of questions here, too ...

 Should nova-network *ever* go away?  Or will there always just be a
 choice between the basic/legacy nova-network option, and the new fancy
 SDN-enabling Neutron option?  Is the Neutron team's time better spent on
 OpenDaylight integration than the existing open source plugins?

 This point about OpenDaylight vs. existing open source plugins is something
 which some of us have talked about for a while now. I’ve spent a lot of time
 with the OpenDaylight team over the last 2 months, and I believe once we
 get that ML2 MechanismDriver upstreamed (it is waiting on third-party testing
 and reviews [1]), perhaps we can at least remove some pressure agent-wise. The
 current OpenDaylight driver doesn’t use a compute agent. And future iterations
 will hopefully remove the need for an L3 agent as well, maybe even DHCP.
 Since a lot of the gate issues seem to revolve around those things, my hope
 is this approach can simplify some code and lead to more stability. But we’ll
 see, we’re very early here at the moment.

I think this point is really important and I'd love to see more input
from others on the Neutron side.

There's the long term view: where are we headed?  What's the
nova-network/Nova+neutron end game?

There are also the short-term issues: Neutron reliability has been causing
massive pain in the OpenStack gate for months.  Are we going through
this pain for no good reason if we expect these plugins to go away
before becoming a viable production option?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Sean Dague
For all projects that use oslo logging (which is currently everything
except swift), this works.

-Sean

On 01/30/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
 No idea, I only really work on Nova, but as this is in Oslo I expect so!
 
 Matt
 
 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 30 January 2014 13:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards

 Hi Matt,
 What about the rest of the components? Do they also have this capability?
 Thanks

 Cristian

 On 30/01/14 04:59, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:

 Hi Cristian,

 The functionality already exists within Openstack (certainly it's there
 in Nova) it's just not very well documented (something I keep meaning
 to
 do!)

 Basically you need to add the following to your nova.conf file:

 log_config=/etc/nova/logging.conf

 And then create /etc/nova/logging.conf with the configuration you want
 to use based on the Python Logging Module's ini configuration format.

 Hope that helps,

 Matt

 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 29 January 2014 17:57
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards

 Hi Matthew,
 I'm interested to help in this switch to python logging framework for
 shipping to  logstash/etc. Are you working on a blueprint for this?
 Cheers,

 Cristian

 On 27/01/14 11:07, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:

 Hi Sean,

 I'm currently working on moving away from the built-in logging to
 use log_config=filename and the python logging framework so that
 we can start shipping to logstash/sentry/insert other useful tool here.

 I'd be very interested in getting involved in this, especially from
 a why do we have log messages that are split across multiple lines
 perspective!

 Cheers,

 Matt

 P.S. FWIW, I'd also welcome details on what the Audit level gives
 us that the others don't... :)

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 27 January 2014 13:08
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Proposed Logging Standards

 Back at the beginning of the cycle, I pushed for the idea of doing
 some log  harmonization, so that the OpenStack logs, across
 services, made sense.
 I've
 pushed a proposed changes to Nova and Keystone over the past
 couple of days.

 This is going to be a long process, so right now I want to just
 focus on making  INFO level sane, because as someone that spends a
 lot of time staring at logs in  test failures, I can tell you it
 currently isn't.

 https://wiki.openstack.org/wiki/LoggingStandards is a few things
 I've written  down so far, comments welcomed.

 We kind of need to solve this set of recommendations once and for
 all up front,  because negotiating each change, with each project,
 isn't going to work (e.g -
 https://review.openstack.org/#/c/69218/)

 What I'd like to find out now:

 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for
 various log levels?
 3) who's interested in helping get these kinds of patches into
 various projects in  OpenStack?
 4) which projects are interested in participating (i.e. interested
 in prioritizing  landing these kinds of UX improvements)

 This is going to be progressive and iterative. And will require
 lots of folks  involved.

  -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Christopher Yeoh
On Fri, Jan 31, 2014 at 12:27 AM, Russell Bryant rbry...@redhat.com wrote:

 On 01/30/2014 08:10 AM, Christopher Yeoh wrote:
  On Thu, Jan 30, 2014 at 2:08 PM, Michael Still mi...@stillhq.com
  mailto:mi...@stillhq.com wrote:
 
  On Thu, Jan 30, 2014 at 2:29 PM, Christopher Yeoh cbky...@gmail.com
  mailto:cbky...@gmail.com wrote:
 
   So if nova-network doesn't go away this has implications for the
  V3 API as
   it currently doesn't support
   nova-network. I'm not sure that we have time to add support for it
 in
   icehouse now, but if nova-network is
   not going to go away then we need to add it to the V3 API or we
  will be
   unable to ever deprecate the
   V2 API.
 
  Is the problem here getting the code written, or getting it through
  reviews? i.e. How can I re-prioritise work to help you here?
 
 
  So I think its a combination of both. There's probably around 10
  extensions from V2 that would need looking at to port from V2. There's
  some cases where the API supported both nova network and neutron,
  proxying in the latter case and others where only nova network was
  supported. So we'll need to make a decision pretty quickly around
  whether we present a unified networking interface (eg proxy for neutron)
  or have some interfaces which you only use when you use nova-network.
  There's a bit of work either way. Also given how long we'll have V3 for
  want to take the opportunity to cleanup the APIs we do port. And feature
  proposal deadline is now less than 3 weeks away so combined with the
  already existing work we have for i-3 it is going to be a little tight.
 
  The other issue is we have probably at least 50 or so V3 API related
  changesets in the queue at the moment, plus obviously more coming over
  the next few weeks. So I'm a bit a wary of how much extra review
  attention we can realistically expect.
 
  The two problems together make me think that although its not
  impossible, there's a reasonable level of risk that we wouldn't get it
  all done AND merged in i-3. And I think we want to avoid the situation
  where we have some of the things currently in the queue merged and some
  of say the nova-network patches done, but not complete with either. More
  people contributing patches and core review cycles will of course help
  though so any help is welcome :-)
 
  This is all dependent on nova-network never going away. If the intent is
  that it would eventually be deprecated - say in the same timeframe as
  the V2 API then I don't think its worth the extra effort/risk putting it
  in the V3 API in icehouse.

 I can't say in any sort of confidence that I think nova-network will go
 away in the foreseeable future.  Yes, this has an unfortunate big impact
 on our original plan for the v3 API.  :-(

 However, I'm also not sure about the status of v3 in Icehouse, anyway.
 One of the key things I want to see in before we freeze the API is the
 tasks work.  AFAIK, there hasn't been any design review on this, much
 less code review.  It seems incredibly unlikely that it will be done for
 Icehouse at this point.  Andrew, thoughts?


I don't think the lack of the tasks API being merged should stop us from
releasing the V3 API (it perhaps means there is one less significant reason
for people to move from the V2 API). Releasing the V3 API doesn't stop us
from adding tasks to it at a later stage, as they could be a simple
additional way to interact, and in practice I'd imagine there will be a
gradual increase in support for doing things in a task-oriented way rather
than a big-bang "everything now uses tasks" approach.

And the sooner we can release the V3 API, the sooner we can put the V2 API
into maintenance mode and avoid the overhead of having every new feature
written for both.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-01-30 Thread Matthew Farrellee
i imagine this is something that can be useful in a development and 
testing environment, especially during the transition period from direct 
to heat. so having the ability is not unreasonable, but i wouldn't 
expose it to users via the dashboard (maybe not even directly in the cli)


generally i want to reduce the number of parameters / questions the user 
is asked


best,


matt

On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote:

I agree with Andrew. I see no value in letting users select how their
cluster is provisioned, it will only make the interface a little bit more
complex.

Dmitry


2014/1/30 Andrew Lazarev alaza...@mirantis.com
mailto:alaza...@mirantis.com

Alexander,

What is the purpose of exposing this to the user side? Both engines must
do exactly the same thing, and they exist at the same time only for a
transition period until the heat engine is stabilized. I don't see any
value in the proposed option.

Andrew.


On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov
aigna...@mirantis.com mailto:aigna...@mirantis.com wrote:

Today Savanna has two provisioning engines: heat and the old one
known as 'direct'.
Users can choose which engine will be used by setting a special
parameter in 'savanna.conf'.

I have an idea to give users the ability to define the
provisioning engine
not only when savanna is started but also when a new cluster is
launched. The idea is simple.
We will just add a new field 'provisioning_engine' to the 'cluster'
and 'cluster_template'
objects. And the benefit is obvious: users can easily switch from one
engine to another without
restarting the savanna service. Of course, this parameter can be
omitted and the default value
from 'savanna.conf' will be applied.

Is this viable? What do you think?

Regards,
Alexander Ignatov




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Sean Dague
On 01/27/2014 11:03 AM, Macdonald-Wallace, Matthew wrote:
 I've also noticed just now that we appear to be re-inventing some parts of
 the logging framework (openstack.common.log.WriteableLogger for example
 appears to be a catchall when we should just be handing off to the default
 logger and letting the python logging framework decide what to do IMHO).

 WriteableLogger exists for a very specific reason: eventlet. Eventlet 
 assumes a
 file object for logging, not a python logger.

 I've proposed a change for that -
 https://github.com/eventlet/eventlet/pull/75 - but it's not yet upstream.
 
 Thanks for clearing that up, makes a lot more sense now!
 
 So when the change is merged upstream we can get rid of that in our code as 
 well?

I'm pretty sure that's the only place it's used, so my hope is yes.
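
For anyone following along, that wrapper is essentially a file-like shim
around a standard logger; roughly this shape (a sketch only, the real
openstack.common.log code differs in detail):

    import logging

    class WritableLogger(object):
        """File-like object that forwards write() calls to a Python logger.

        Eventlet expects a file object for its wsgi access log, so this
        adapter lets it write through the normal logging framework instead.
        """

        def __init__(self, logger, level=logging.INFO):
            self.logger = logger
            self.level = level

        def write(self, msg):
            # eventlet writes complete lines, trailing newline included.
            self.logger.log(self.level, msg.rstrip())

It gets plugged in roughly as eventlet.wsgi.server(sock, app,
log=WritableLogger(LOG)), which is why teaching eventlet to accept a logger
directly would let us drop it.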

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-30 Thread CARVER, PAUL
Vishvananda Ishaya wrote:

In testing I have been unable to saturate a 10g link using a single VM. Even 
with multiple streams,
the best I have been able to do (using virtio and vhost_net) is about 7.8g.

Can you share details about your hardware and vSwitch config (possibly off list 
if that isn't a valid openstack-dev topic)?

I haven't been able to spend any time on serious performance testing, but just 
doing preliminary testing on a HP BL460cG8 and Virtual Connect I haven't been 
able to push more than about 1Gbps using a pretty vanilla Havana install with 
OvS and VLANs (no GRE).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Sean Dague
On 01/28/2014 06:28 PM, Scott Devoid wrote:
 
 A big part of my interest here is to make INFO a useful informational
 level for operators. That means getting a bunch of messages out of it
 that don't belong.
 
 
 +1 to that! How should I open / tag bugs for this?

I'm thinking right now we should probably call out specifically
unhelpful messages in the wiki -
https://wiki.openstack.org/wiki/LoggingStandards  (possibly create a new
page? https://wiki.openstack.org/wiki/LoggingStandardsBadLogMessages ?)

With a suggestion on whether we should either:
 * fix the message to be useful (the secret decoder ring problem)
 * push it to DEBUG

Right now straight-out deleting messages is not my intent; it's to make
them useful, or put them to DEBUG. We'll audit DEBUG later.

I am very much interested in getting feedback from large operators like
yourself on this, as I think that's a really important voice in this
discussion.

 We should be logging user / tenant on every wsgi request, so that should
 be parsable out of INFO. If not, we should figure out what is falling
 down there.
 
 
 At the moment we're not automatically parsing logs (just collecting via
 syslog and logstash).

Well for logstash purposes, the standard format should give you user /
tenant

 
 Follow on question: do you primarily use the EC2 or OSAPI? As there are
 some current short comings on the EC2 logging, and figuring out
 normalizing those would be good as well.
 
  
 Most of our users work through Horizon or the nova CLI. Good to know
 about the EC2 issues though.

Thanks for the feedback.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Russell Bryant
On 01/30/2014 09:13 AM, Christopher Yeoh wrote:
 
 On Fri, Jan 31, 2014 at 12:27 AM, Russell Bryant rbry...@redhat.com
 I can't say in any sort of confidence that I think nova-network will go
 away in the foreseeable future.  Yes, this has an unfortunate big impact
 on our original plan for the v3 API.  :-(
 
 However, I'm also not sure about the status of v3 in Icehouse, anyway.
 One of the key things I want to see in before we freeze the API is the
 tasks work.  AFAIK, there hasn't been any design review on this, much
 less code review.  It seems incredibly unlikely that it will be done for
 Icehouse at this point.  Andrew, thoughts?
 
 
 I don't think the lack the tasks api being merged should stop us from
 releasing the V3 API (it perhaps means there is one less significant
 reason for people to move from the V2 API). Releasing the v3 API doesn't
 stop us from adding tasks at a later stage to the V3 API as it could be
 a simple additional way to interact and in practice I'd imagine there
 will be gradual increase in support of doing things in a tasks oriented
 way rather than a big bang everything now uses tasks approach.
 
 And the sooner we can release the V3 API, the sooner we can put the V2
 API into maintenance mode and avoid the overhead of having every new
 feature having to be written for both.

Well, it depends.

If the tasks API is going to purely be an add-on, then sure, I agree.
If it's a fundamental shift to the existing API, including changing how
we respond to things like creating a server, then I think it has to wait.

We really need to have some rough design agreed upon to make this call
effectively.  In the absence of that, I think the right thing to do is
to proceed with v3 as it stands, which will put some limitations on how
drastic the tasks addition can be.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] New date for closest team meeting

2014-01-30 Thread Dina Belova
Hi, folks!

We need to choose a new date for our next weekly team meeting due to an
emergency - at least two of our core members (Sylvain and Swann), who are
usually active meeting participants, will be travelling to FOSDEM during
tomorrow's 15:00 UTC slot. So they'll have no opportunity to participate.

Have you any preferences on what date and time to choose?

Cheers,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Matt Wagner
On 1/30/14, 5:26 AM, Tomas Sedovic wrote:
 Hi all,
 
 I've seen some confusion regarding the homogenous hardware support as
 the first step for the tripleo UI. I think it's time to make sure we're
 all on the same page.
 
 Here's what I think is not controversial:
 
 1. Build the UI and everything underneath to work with homogenous
 hardware in the Icehouse timeframe
 2. Figure out how to support heterogenous hardware and do that (may or
 may not happen within Icehouse)
 
 The first option implies having a single nova flavour that will match
 all the boxes we want to work with. It may or may not be surfaced in the
 UI (I think that depends on our undercloud installation story).
 
 Now, someone (I don't honestly know who or when) proposed a slight step
 up from point #1 that would allow people to try the UI even if their
 hardware varies slightly:
 
 1.1 Treat similar hardware configuration as equal
 
 The way I understand it is this: we use a scheduler filter that wouldn't
 do a strict match on the hardware in Ironic. E.g. if our baremetal
 flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
 ram or 1.5TB disk.
 
 The UI would still assume homogenous hardware and treat it as such. It's
 just that we would allow for small differences.
 
 This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
 when the flavour says 32. We would treat the flavour as a lowest common
 denominator.

Does Nova already handle this? Or is it built on exact matches?

I guess my question is -- what is the benefit of doing this? Is it just
so people can play around with it? Or is there a lasting benefit
long-term? I can see one -- match to the closest, but be willing to give
me more than I asked for if that's all that's available. Is there any
downside to this being permanent behavior?

I think the lowest-common-denominator match will be familiar to
sysadmins, too. Want to do RAID striping across a 500GB and a 750GB
disk? You'll get a striped 500GB volume.

-- 
Matt Wagner
Software Engineer, Red Hat



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] New date for closest team meeting

2014-01-30 Thread Dina Belova
I'm ok with Tuesday 1500 UTC. We may go to #openstack-meeting-3 if we need
to. I'm pretty sure it'll be free. Horizon folks have their meeting Tuesday
at 1600 UTC, but I'm sure 1 hour will be enough for us :)

Folks, any other ideas?

Cheers,
Dina


On Thu, Jan 30, 2014 at 6:42 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  On 30/01/2014 15:33, Dina Belova wrote:

  Hi, folks!

  We need to choose new date for our next weekly teem meeting due to some
 emergency - at least two our core members (Sylvain and Swann) and usually
 active meeting participants will be travelling to FOSDEM tomorrow 15:00
 UTC. So they'll have no opportunity to participate.

  Have you any preferences on what date and time to choose?


 Tuesday 1500 UTC would get my preference (I do have some family concerns
 for Monday)
 IIRC, #openstack-meeting will probably be busy at this time, but we can go
 to other rooms if necessary.

 -Sylvain



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Andrew Laski

On 01/30/14 at 09:31am, Russell Bryant wrote:

On 01/30/2014 09:13 AM, Christopher Yeoh wrote:


On Fri, Jan 31, 2014 at 12:27 AM, Russell Bryant rbry...@redhat.com
I can't say in any sort of confidence that I think nova-network will go
away in the foreseeable future.  Yes, this has an unfortunate big impact
on our original plan for the v3 API.  :-(

However, I'm also not sure about the status of v3 in Icehouse, anyway.
One of the key things I want to see in before we freeze the API is the
tasks work.  AFAIK, there hasn't been any design review on this, much
less code review.  It seems incredibly unlikely that it will be done for
Icehouse at this point.  Andrew, thoughts?


I don't think the lack the tasks api being merged should stop us from
releasing the V3 API (it perhaps means there is one less significant
reason for people to move from the V2 API). Releasing the v3 API doesn't
stop us from adding tasks at a later stage to the V3 API as it could be
a simple additional way to interact and in practice I'd imagine there
will be gradual increase in support of doing things in a tasks oriented
way rather than a big bang everything now uses tasks approach.

And the sooner we can release the V3 API, the sooner we can put the V2
API into maintenance mode and avoid the overhead of having every new
feature having to be written for both.


Well, it depends.

If the tasks API is going to purely be an add-on, then sure, I agree.
If it's a fundamental shift to the existing API, including changing how
we respond to things like creating a server, then I think it has to wait.

We really need to have some rough design agreed upon to make this call
effectively.  In the absence of that, I think the right thing to do is
to proceed with v3 as it stands, which will put some limitations on how
drastic the tasks addition can be.


I just recently had a chance to put some serious effort into this and 
should have something together for discussion and design soon.  It's 
unfortunate that it's happening this late though.


Based on what I've done so far, the main change from what the APIs are 
doing now is a new Location header and a task object in the response for 
POST requests.  For a server create this is a bigger change than server 
actions because the task would replace the server in the response.
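
To make that concrete, the kind of interaction being described would look 
roughly like the sketch below; none of the field names are settled, it is 
purely illustrative of "Location header plus a task object instead of a 
server":

    POST /v3/servers        (request body unchanged)

    HTTP/1.1 202 Accepted
    Location: http://nova.example.com/v3/tasks/TASK_UUID

    {"task": {"id": "TASK_UUID",
              "action": "create_server",
              "state": "pending",
              "server_id": null}}
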


If necessary the tasks work could be done solely as an extension, but I 
would really prefer to avoid that so I'll get this ball rolling quickly.




--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [climate] New date for closest team meeting

2014-01-30 Thread Swann Croiset
Hi,

I won't be at the office before Wednesday the 5th.

No need to reschedule on my account; Sylvain will speak on my behalf.

FYI, I'll continue the Belgium tour after Brussels: Gent for
http://cfgmgmtcamp.eu/



2014-01-30 Dina Belova dbel...@mirantis.com

 I'm ok with Tuesday 1500 UTC. We may go to #openstack-meeting-3 if we'll
 need. I'm pretty sure it'll be free. Horizon folks have there meeting
 Tuesday 1600 UTC, but I'm sure 1 hour will be enough for us :)

 Folks, any other ideas?

 Cheers,
 Dina


 On Thu, Jan 30, 2014 at 6:42 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

  On 30/01/2014 15:33, Dina Belova wrote:

  Hi, folks!

  We need to choose new date for our next weekly teem meeting due to some
 emergency - at least two our core members (Sylvain and Swann) and usually
 active meeting participants will be travelling to FOSDEM tomorrow 15:00
 UTC. So they'll have no opportunity to participate.

  Have you any preferences on what date and time to choose?


 Tuesday 1500 UTC would get my preference (I do have some family concerns
 for Monday)
 IIRC, #openstack-meeting will probably be busy at this time, but we can
 go to other rooms if necessary.

 -Sylvain



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-01-30 Thread sahid
  Greetings,

A blueprint is being discussed about the disk resize down feature of the 
libvirt driver.
  https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down

The current implementation does not handle disk resize down and just skips the
step during a resize down of the instance. I'm really convinced we can
implement this feature by reusing the good work on disk resize down in the
xenapi driver.

Criteria for allowing disk resize down:
  + The disk must have one partition
  + The fs must be ext3 or ext4
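
In other words, the eligibility check would be something along these lines
(a sketch only; the helper and the partition format are assumptions based on
the xenapi code mentioned below):

    def can_resize_down(partitions):
        # partitions: whatever virt.disk.utils.get_partitions() ends up
        # returning, assumed here to include a filesystem type per partition.
        if len(partitions) != 1:
            return False
        fs_type = partitions[0][-1]
        return fs_type in ('ext3', 'ext4')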

The implementation will be separated in several commits:
  + Move shared utility methods to a common module:
- virt.xenapi.vm_utils._get_partitions to virt.disk.utils.get_partitions
- virt.libvirt.utils.copy_image to virt.disk.utils.copy_image
- virt.xenapi.vm_utils._repair_filesystem to 
virt.disk.utils.repair_filesystem
  + Disk resize down implementation

Notes:
  - Another point we have to discuss is that the current implementation just
skips the fs resize if it is not supported. Is that a good choice? Should we
instead raise an exception to inform the user that it is not possible to
resize the instance? (If we have to raise an exception, a task will be added
to the TODO to handle this case for resize up before working on resize down.)
  - The current workflow for a user is to confirm the resize when the state
of the instance is VERIFY_RESIZE. I think we probably have to add a checklist
of good practices for how to verify a resize in the manual:
  http://docs.openstack.org/user-guide/content/nova_cli_resize.html

Thanks a lot,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-30 Thread Gil Rapaport
Hi all,

Excellent definition of the issue at hand.
The recent blueprints of policy-based-scheduler and solver-scheduler 
indeed highlight a possible weakness in the current design, as despite 
their completely independent contributions (i.e. which filters to apply 
per request vs. how to compute a valid placement) their implementation as 
drivers makes combining them non-trivial.

As Alex Glikson hinted a couple of weekly meetings ago, our approach to 
this is to think of the driver's work as split between two entities:
-- A Placement Advisor, that constructs placement problems for scheduling 
requests (filter-scheduler and policy-based-scheduler)
-- A Placement Engine, that solves placement problems (HostManager in 
get_filtered_hosts() and solver-scheduler with its LP engine).

Such modularity should allow developing independent mechanisms that can be 
combined seamlessly through a unified, well-defined protocol based on 
constructing placement problem objects in the placement advisor and then 
passing them to the placement engine, which returns the solution. The 
protocol can be orchestrated by the scheduler manager. 
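
As a rough illustration of the split (the interface names here are made up 
for the example, they are not from any existing code):

    class PlacementAdvisor(object):
        # Builds a placement problem from a scheduling request, e.g. the
        # filter-scheduler or policy-based-scheduler logic would sit here.
        def build_problem(self, request_spec, filter_properties):
            raise NotImplementedError()

    class PlacementEngine(object):
        # Solves a placement problem, e.g. HostManager.get_filtered_hosts()
        # or the solver-scheduler's LP engine would sit here.
        def solve(self, problem, hosts):
            raise NotImplementedError()

    def schedule(advisor, engine, request_spec, filter_properties, hosts):
        # Orchestration as the scheduler manager would do it.
        problem = advisor.build_problem(request_spec, filter_properties)
        return engine.solve(problem, hosts)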

As can be seen at this point already, the policy-based-scheduler blueprint 
can now be positioned as an improvement of the placement advisor. 
Similarly, the solver-scheduler blueprint can be positioned as an 
improvement of the placement engine.

I'm working on a wiki page that will get into the details.
Would appreciate your initial thoughts on this approach.

Regards,
Gil



From:   Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
To: OpenStack Development Mailing List \(not for usage questions\) 
openstack-dev@lists.openstack.org, 
Date:   01/30/2014 01:43 PM
Subject:Re: [openstack-dev] [Nova][Scheduler] Policy Based 
Scheduler and   Solver Scheduler



Hi Sylvain,
 
1) Some filters, such as AggregateCoreFilter and AggregateRAMFilter, can change 
their parameters per aggregate. But what if the admin wants to change them for 
all hosts in an availability zone? Does he have to rewrite all the parameters 
in all aggregates? Or should we create a new AvailabilityZoneCoreFilter?
 
The Policy Based Scheduler (PBS) blueprint separates the effect (filter 
according to cores) from its target (all hosts in an aggregate, or in an 
availability zone). It will benefit all filters, not just CoreFilter or 
RAMFilter, so that we can avoid creating, for each filter XFilter, an 
AggregateXFilter and an AvailabilityZoneXFilter from now on. Besides, if the 
admin wants to apply a filter to some aggregates (or availability zones) and 
not others (not calling the filter at all, rather than just modifying its 
parameters), he can do that. This helps us avoid running all filters on all hosts.
 
2) In fact, we are also preparing for a separate scheduler, of which PBS is 
a very first step; that’s why we purposely separate the Policy Based 
Scheduler from the Policy Based Scheduling Module (PBSM) [1], which is the 
core of our architecture. If you look at our code, you will see that 
Policy_Based_Scheduler.py is only slightly different from the Filter 
Scheduler. That is because we just want a link from Nova-scheduler to 
PBSM. We’re trying to push some more management into the scheduler without 
causing too much modification, as you can see in the patch.
 
Thus I was very happy when Gantt was proposed. As I see it, Gantt is based on 
Nova-scheduler code, with the plan of replacing nova-scheduler in J. 
The separation from Nova will be complicated, but not on the scheduling part. 
Thus integrating PBS and PBSM into Gantt would not be a problem.
 
Best regards,
 
[1] 
https://docs.google.com/document/d/1gr4Pb1ErXymxN9QXR4G_jVjLqNOg2ij9oA0JrLwMVRA
 
Toan
 
From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com] 
Sent: Thursday, 30 January 2014 11:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and 
Solver Scheduler
 
Hi Khanh-Toan,
 
I only have one comment on your proposal: why are you proposing something 
new for overcommitments with aggregates while the AggregateCoreFilter [1] 
and AggregateRAMFilter [2] already exist, which AIUI provide the same feature?
 
 
I'm also concerned about the scope of changes for the scheduler, as Gantt is 
currently trying to replace it. Can we imagine such big changes being 
committed on the Nova side, while it's planned to have a Scheduler service 
in the near future?
 
-Sylvain
 
 
[1] 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L74
[2] 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/ram_filter.py#L75
 
 
 
 
2014-01-30 Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
There is an unexpected line break in the middle of the link, so I post it
again:

https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

 -Original Message-
 From: Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
 Sent: Wednesday, 29 January 2014 13:25
 To: 'OpenStack Development 

Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tomas Sedovic

On 30/01/14 15:53, Matt Wagner wrote:

On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

Hi all,

I've seen some confusion regarding the homogenous hardware support as
the first step for the tripleo UI. I think it's time to make sure we're
all on the same page.

Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogenous
hardware in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and do that (may or
may not happen within Icehouse)

The first option implies having a single nova flavour that will match
all the boxes we want to work with. It may or may not be surfaced in the
UI (I think that depends on our undercloud installation story).

Now, someone (I don't honestly know who or when) proposed a slight step
up from point #1 that would allow people to try the UI even if their
hardware varies slightly:

1.1 Treat similar hardware configuration as equal

The way I understand it is this: we use a scheduler filter that wouldn't
do a strict match on the hardware in Ironic. E.g. if our baremetal
flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
ram or 1.5TB disk.

The UI would still assume homogenous hardware and treat it as such. It's
just that we would allow for small differences.

This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
when the flavour says 32. We would treat the flavour as a lowest common
denominator.


Does Nova already handle this? Or is it built on exact matches?


It's doing an exact match as far as I know. This would likely involve 
writing a custom filter for nova scheduler and updating nova.conf 
accordingly.
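
The custom filter itself would be small; roughly the following, assuming the 
current filter interface (this is only a sketch of the idea, not an existing 
Nova filter):

    from nova.scheduler import filters

    class LowestCommonDenominatorFilter(filters.BaseHostFilter):
        """Pass hosts that meet or exceed the flavour, instead of an
        exact hardware match (hypothetical filter for the MVP case)."""

        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type') or {}
            wanted_ram_mb = instance_type.get('memory_mb', 0)
            wanted_disk_gb = instance_type.get('root_gb', 0)

            # Treat the flavour as a lower bound rather than an exact match.
            return (host_state.free_ram_mb >= wanted_ram_mb and
                    host_state.free_disk_mb >= wanted_disk_gb * 1024)

nova.conf would then list it in scheduler_default_filters in place of the 
strict RAM/disk checks.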




I guess my question is -- what is the benefit of doing this? Is it just
so people can play around with it? Or is there a lasting benefit
long-term? I can see one -- match to the closest, but be willing to give
me more than I asked for if that's all that's available. Is there any
downside to this being permanent behavior?


Absolutely not a long term thing. This is just to let people play around 
with the MVP until we have the proper support for heterogenous hardware in.


It's just an idea that would increase the usefulness of the first 
version and should be trivial to implement and take out.


If neither is the case, or if we do in fact manage to have proper 
heterogenous hardware support early (in Icehouse), it doesn't make any 
sense to do this.




I think the lowest-common-denominator match will be familiar to
sysadmins, too. Want to do RAID striping across a 500GB and a 750GB
disk? You'll get a striped 500GB volume.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-01-30 Thread Stefano Maffulli
On Wed 29 Jan 2014 01:49:04 PM CET, Swapnil Kulkarni wrote:
 Getting lauchpad bus in ask.openstack would really help people and
 this looks really nice.(just saw some question-answers) I was not able
 to search for questions though

Yes, we know, search is disabled on staging.

  and  (answered/unanswered) questions

This should work: what do you mean exactly? What sort of filter did you 
apply and what were you expecting?

 filters are not working. Just one small question: how will the import
 happen for future launchpad questions? Or will launchpad questions be
 disabled, making ask.openstack the default for openstack questions and answers?

We should completely disable Launchpad answers and remove them from 
visible archives (this should also answer Scott's comment on the same 
topic --good idea to add the lp-imported tag)

/stef

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread James Slagle
devtest, our TripleO setup, has been rapidly evolving. We've added a
fair amount of configuration options for stuff like using actual
baremetal, and (soon) HA deployments by default. Also, the scripts
(which the docs are generated from) are being used for both CD and CI.

This is all great progress.

However, due to these changes, I think that devtest no longer works
great as a tripleo developer setup. You haven't been able to complete
a setup following our docs for a week now. The patches to fix that are
in review, and they need to be properly reviewed; I'm not saying they
should be rushed, just that it's another aspect of the problem of
trying to use devtest for both CI/CD and a dev setup.

I think it might be time to have a developer setup vs. devtest, which
is more of a documented tripleo setup at this point.

In IRC earlier this week (sorry if I'm misquoting the intent here), I
saw mention of making setup easier by just using a seed to deploy an
overcloud.  I think that's a great idea.  We are all probably already
doing it :). Why not document that in some fashion?

There would be some initial trade-offs, around folks not necessarily
understanding the full devtest process. But you don't necessarily
need to understand all of that to hack on the upgrade story, or
tuskar, or ironic.

These are just some additional thoughts around the process and mail I
sent earlier this week:
http://lists.openstack.org/pipermail/openstack-dev/2014-January/025726.html
But, I thought this warranted a broader discussion.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Zane Bitter

On 30/01/14 06:01, Thomas Herve wrote:

Hi all,

While talking to Zane yesterday, he raised an interesting question about 
whether or not we want to keep a LaunchConfiguration object for the native 
autoscaling resources.

The LaunchConfiguration object basically holds properties to be able to fire 
new servers in a scaling group. In the new design, we will be able to start 
arbitrary resources, so we can't keep a strict LaunchConfiguration object as it 
exists, as we can have arbitrary properties.

 It may still be interesting to store it separately to be able to reuse it 
 between groups.

So either we do this:

group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 resource_properties:
   image: my_image
   flavor: m1.large


The main advantages of this that I see are:

* It's one less resource.
* We can verify properties against the scaled_resource at the place the 
LaunchConfig is defined. (Note: in _both_ models these would be verified 
at the same place the _ScalingGroup_ is defined.)



Or:

group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 launch_configuration: server_config
server_config:
   type: OS::Heat::LaunchConfiguration
   properties:
 image: my_image
 flavor: m1.large



I favour this one for a few reasons:

* A single LaunchConfiguration can be re-used by multiple scaling 
groups. Reuse is good, and is one of the things we have been driving 
toward with e.g. software deployments.
* Assuming the Autoscaling API and Resources use the same model (as they 
should), in this model the Launch Configuration can be defined in a 
separate template to the scaling group, if the user so chooses. Or it 
can even be defined outside Heat and passed in as a parameter.
* We can do the same with the LaunchConfiguration for the existing 
AWS-compatibility resources. That will allow us to fix the current 
broken implementation that goes magically fishing in the local stack for 
launch configs[1]. If we pick a model that is strictly less powerful 
than stuff we already know we have to support, we will likely be stuck 
with broken hacks forever :(



(Not sure we can actually define dynamic properties, in which case it'd be 
behind a top property.)


(This part is just a question of how the resource would look in Heat, 
and the answer would not really affect the API.)


I think this would be possible, but it would require working around the 
usual code we have for managing/validating properties. Probably not a 
show-stopper, but it is more work. If we can do this there are a couple 
more benefits to this way:


* Extremely deeply nested structures are unwieldy to deal with, both for 
us as developers[2] and for users writing templates; shallower 
hierarchies are better.
* You would be able to change an OS::Nova::Server resource into a 
LaunchConfiguration, in most cases, just by changing the resource type. 
(This also opens up the possibility of switching between them using the 
environment, although I don't know how useful that would be.)


cheers,
Zane.

[1] https://etherpad.openstack.org/p/icehouse-summit-heat-exorcism
[2] 
https://github.com/openstack/heat/blob/master/contrib/rackspace/heat/engine/plugins/auto_scale.py




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Speaking at PyCON US 2014 about OpenStack?

2014-01-30 Thread Stefano Maffulli
If you're going to talk about anything related to OpenStack at PyCON
US/Canada this year, please let me know. We're collecting the list of
talks related to the project.

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-01-30 Thread Jay Pipes
On Thu, 2014-01-30 at 14:59 +, sahid wrote:
 snip
 The implementation will be separated in several commits:
   + Move shared utility methods to a common module:
 - virt.xenapi.vm_utils._get_partitions to virt.disk.utils.get_partitions
 - virt.libvirt.utils.copy_image to virt.disk.utils.copy_image
 - virt.xenapi.vm_utils._repair_filesystem to 
 virt.disk.utils.repair_filesystem
   + Disk resize down implementation

Above looks like a good plan, and +1 for pulling useful generic code out
of a particular driver into a reusable library.

 Notes:
   - Another point we have to discuss, is that the current implementation just 
 skips
 the fs resize if not supported, is it a good choice? 

For metering/usage purposes, does the old size of ephemeral disk
continue to be shown in usage records, or does the size of the disk in
the newly-selected instance type (flavor) get used? If the former, then
this would be an avenue for users to get more disk space than they are 
paying for. Something to look into...

Best,
-jay

 Should we have
 to raise an exception to inform the user that it is not possible to resize
 the instance? (if we have to raise an exception, a task will be added to 
 the
 TODO to handle this case for resize up before working on resize down.)
   - The current workflow for a user is to confirm the resize when the state
 of the instance is VERIFY_RESIZE, I think we probably have to add a
  checklist of good practices on how to verify a resize in the manual:
   http://docs.openstack.org/user-guide/content/nova_cli_resize.html
 
 Thanks a lot,
 s.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-01-30 Thread Jon Bernard
* sahid sahid.ferdja...@cloudwatt.com wrote:
   Greetings,
 
 A blueprint is being discussed about the disk resize down feature of libvirt 
 driver.
   https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
 
 The current implementation does not handle disk resize down and just skips the
 step during a resize down of the instance. I'm really convinced we can
 implement this feature by reusing the disk resize down work already done in
 the xenapi driver.

In case it hasn't been considered yet, shrinking a filesystem can result
in terrible fragmentation.  The block allocator in resize2fs does not do
a great job of handling this case.  The result will be a very
non-optimal file layout and measurably worse performance, especially for
drives with a relatively high average seek time.

-- 
Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-01-30 Thread Trevor McKay
My mistake, it's already there.  I missed the distinction between set on
startup and set per cluster.

Trev

On Thu, 2014-01-30 at 10:50 -0500, Trevor McKay wrote:
 +1
 
 How about an undocumented config?
 
 Trev
 
 On Thu, 2014-01-30 at 09:24 -0500, Matthew Farrellee wrote:
  i imagine this is something that can be useful in a development and 
  testing environment, especially during the transition period from direct 
  to heat. so having the ability is not unreasonable, but i wouldn't 
  expose it to users via the dashboard (maybe not even directly in the cli)
  
  generally i want to reduce the number of parameters / questions the user 
  is asked
  
  best,
  
  
  matt
  
  On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote:
   I agree with Andrew. I see no value in letting users select how their
   cluster is provisioned, it will only make interface a little bit more
   complex.
  
   Dmitry
  
  
   2014/1/30 Andrew Lazarev alaza...@mirantis.com
   mailto:alaza...@mirantis.com
  
   Alexander,
  
   What is the purpose of exposing this to user side? Both engines must
   do exactly the same thing and they exist in the same time only for
   transition period until heat engine is stabilized. I don't see any
   value in proposed option.
  
   Andrew.
  
  
   On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov
   aigna...@mirantis.com mailto:aigna...@mirantis.com wrote:
  
   Today Savanna has two provisioning engines, heat and old one
   known as 'direct'.
   Users can choose which engine will be used by setting special
   parameter in 'savanna.conf'.
  
   I have an idea to give an ability for users to define
   provisioning engine
   not only when savanna is started but when new cluster is
   launched. The idea is simple.
   We will just add new field 'provisioning_engine' to 'cluster'
   and 'cluster_template'
   objects. And profit is obvious, users can easily switch from one
   engine to another without
   restarting savanna service. Of course, this parameter can be
   omitted and the default value
   from the 'savanna.conf' will be applied.
  
   Is this viable? What do you think?
  
   Regards,
   Alexander Ignatov
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Choosing provisioning engine during cluster launch

2014-01-30 Thread Trevor McKay
+1

How about an undocumented config?

Trev

On Thu, 2014-01-30 at 09:24 -0500, Matthew Farrellee wrote:
 i imagine this is something that can be useful in a development and 
 testing environment, especially during the transition period from direct 
 to heat. so having the ability is not unreasonable, but i wouldn't 
 expose it to users via the dashboard (maybe not even directly in the cli)
 
 generally i want to reduce the number of parameters / questions the user 
 is asked
 
 best,
 
 
 matt
 
 On 01/30/2014 04:42 AM, Dmitry Mescheryakov wrote:
  I agree with Andrew. I see no value in letting users select how their
  cluster is provisioned, it will only make interface a little bit more
  complex.
 
  Dmitry
 
 
  2014/1/30 Andrew Lazarev alaza...@mirantis.com
  mailto:alaza...@mirantis.com
 
  Alexander,
 
  What is the purpose of exposing this to user side? Both engines must
  do exactly the same thing and they exist in the same time only for
  transition period until heat engine is stabilized. I don't see any
  value in proposed option.
 
  Andrew.
 
 
  On Wed, Jan 29, 2014 at 8:44 PM, Alexander Ignatov
  aigna...@mirantis.com mailto:aigna...@mirantis.com wrote:
 
  Today Savanna has two provisioning engines, heat and old one
  known as 'direct'.
  Users can choose which engine will be used by setting special
  parameter in 'savanna.conf'.
 
  I have an idea to give an ability for users to define
  provisioning engine
  not only when savanna is started but when new cluster is
  launched. The idea is simple.
  We will just add new field 'provisioning_engine' to 'cluster'
  and 'cluster_template'
  objects. And profit is obvious, users can easily switch from one
  engine to another without
  restarting savanna service. Of course, this parameter can be
  omitted and the default value
  from the 'savanna.conf' will be applied.
 
  Is this viable? What do you think?
 
  Regards,
  Alexander Ignatov
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] PXE driver deploy issues

2014-01-30 Thread Rohan Kanade
Hi,

I have been trying to use the PXE driver along with an unreleased power and
vendor passthru driver from SeaMicro to provision a node in Ironic.

Has anyone successfully used the PXE driver to get the deploy image onto
the node and then actually completing the deployment?

I have created a deployment kernel and ramdisk using the
diskimage-builder's deploy-ironic and ubuntu elements.

After calling /v1/nodes/my_node_uuid/states/provision, my node's
provision_state is stuck at deploying, and I can also see that there is
no way to actually call the pass_deploy_info PXE vendor passthru, which
dd's the user image to the server.

The tftp logs and the /tftpboot directory show tokens and images which the
Ironic conductor wants to use as the deployment image.

Am I missing some steps here?

Regards,
Rohan Kanade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Speaking at PyCON US 2014 about OpenStack?

2014-01-30 Thread Anita Kuno
On 01/30/2014 08:42 AM, Stefano Maffulli wrote:
 If you're going to talk about anything related to OpenStack at PyCON
 US/Canada this year, please let me know. We're collecting the list of
 talks related to the project.
 
 Cheers,
 stef
 
Would it be possible to start an etherpad for this? I am considering
offering a workshop or lab of some sort (if I haven't missed the
deadline for that) but don't want to be stepping on toes if someone else
is already covering that material.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-30 Thread Dan Smith
 If necessary the tasks work could be done solely as an extension, but I
 would really prefer to avoid that so I'll get this ball rolling quickly.

I agree that doing it as a bolt-on to v3 would be significantly less
favorable than making it an integrated feature of the API. IMHO, if a
server create operation doesn't return a task, then we failed, as that
is (to me) one of the primary cases where a task object is important.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Will the Scheuler use Nova Objects?

2014-01-30 Thread Dan Smith
 I'm of the opinion that the scheduler should use objects, for all the
 reasons that Nova uses objects, but that they should not be Nova
 objects.  Ultimately what the scheduler needs is a concept of capacity,
 allocations, and locality of resources.  But the way those are modeled
 doesn't need to be tied to how Nova does it, and once the scope expands
 to include Cinder it may quickly turn out to be limiting to hold onto
 Nova objects.

Yeah, my response to the original question was going to be something like:

If the scheduler was staying in Nova, it would use NovaObjects like the
rest of Nova. Long-term Gantt should use whatever it wants and the API
between it and Nova will be something other than RPC and thus something
other than NovaObject anyway.

I think the point you're making here is that the models used for
communication between Nova and Gantt should be objecty, regardless of
what the backing implementation is on either side. I totally agree with
that.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Speaking at PyCON US 2014 about OpenStack?

2014-01-30 Thread Doug Hellmann
On Thu, Jan 30, 2014 at 11:14 AM, Anita Kuno ante...@anteaya.info wrote:

 On 01/30/2014 08:42 AM, Stefano Maffulli wrote:
  If you're going to talk about anything related to OpenStack at PyCON
  US/Canada this year, please let me know. We're collecting the list of
  talks related to the project.
 
  Cheers,
  stef
 
 Would it be possible to start an etherpad for this? I am considering
 offering a workshop or lab of some sort (if I haven't missed the
 deadline for that) but don't want to be stepping on toes if someone else
 is already covering that material.


The deadline for formal conference talks and tutorials has passed [1], but
you could still schedule an open space room on site [2].

[1] https://us.pycon.org/2014/speaking/cfp/
[2] https://us.pycon.org/2014/community/openspaces/

Doug




 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Edmund Troche

I favor the second option for the same reasons Zane described, but I also
don't think we need a LaunchConfiguration resource. How about just adding an
attribute to the resource so that the engine knows it is not meant to be
handled in the usual way, and is instead really a template (sorry for
the overloaded term) used in a scaling group. For example:

group:
type: OS::Heat::ScalingGroup
properties:
  scaled_resource: server_for_scaling

server_for_scaling:
use_for_scaling: true ( the name of this attribute is
clearly up for discussion ;-) )
type: OS::Nova::Server
properties:
  image: my_image
  flavor: m1.large

When the engine sees use_for_scaling set to true, it does not
call things like handle_create. Anyway, that's the general idea. I'm sure
there are many other ways to achieve a similar effect.


Edmund Troche



From:   Zane Bitter zbit...@redhat.com
To: openstack-dev@lists.openstack.org,
Date:   01/30/2014 09:43 AM
Subject: Re: [openstack-dev] [Heat] About LaunchConfiguration and
Autoscaling



On 30/01/14 06:01, Thomas Herve wrote:
 Hi all,

 While talking to Zane yesterday, he raised an interesting question about
whether or not we want to keep a LaunchConfiguration object for the native
autoscaling resources.

 The LaunchConfiguration object basically holds properties to be able to
fire new servers in a scaling group. In the new design, we will be able to
start arbitrary resources, so we can't keep a strict LaunchConfiguration
object as it exists, as we can have arbitrary properties.

 It may be still be interesting to store it separately to be able to reuse
it between groups.

 So either we do this:

 group:
type: OS::Heat::ScalingGroup
properties:
  scaled_resource: OS::Nova::Server
  resource_properties:
image: my_image
flavor: m1.large

The main advantages of this that I see are:

* It's one less resource.
* We can verify properties against the scaled_resource at the place the
LaunchConfig is defined. (Note: in _both_ models these would be verified
at the same place the _ScalingGroup_ is defined.)

 Or:

 group:
type: OS::Heat::ScalingGroup
properties:
  scaled_resource: OS::Nova::Server
  launch_configuration: server_config
 server_config:
type: OS::Heat::LaunchConfiguration
properties:
  image: my_image
  flavor: m1.large


I favour this one for a few reasons:

* A single LaunchConfiguration can be re-used by multiple scaling
groups. Reuse is good, and is one of the things we have been driving
toward with e.g. software deployments.
* Assuming the Autoscaling API and Resources use the same model (as they
should), in this model the Launch Configuration can be defined in a
separate template to the scaling group, if the user so chooses. Or it
can even be defined outside Heat and passed in as a parameter.
* We can do the same with the LaunchConfiguration for the existing
AWS-compatibility resources. That will allow us to fix the current
broken implementation that goes magically fishing in the local stack for
launch configs[1]. If we pick a model that is strictly less powerful
than stuff we already know we have to support, we will likely be stuck
with broken hacks forever :(

 (Not sure we can actually define dynamic properties, in which case it'd
be behind a top property.)

(This part is just a question of how the resource would look in Heat,
and the answer would not really affect the API.)

I think this would be possible, but it would require working around the
usual code we have for managing/validating properties. Probably not a
show-stopper, but it is more work. If we can do this there are a couple
more benefits to this way:

* Extremely deeply nested structures are unwieldy to deal with, both for
us as developers[2] and for users writing templates; shallower
hierarchies are better.
* You would be able to change an OS::Nova::Server resource into a
LaunchConfiguration, in most cases, just by changing the resource type.
(This also opens up the possibility of switching between them using the
environment, although I don't know how useful that would be.)

cheers,
Zane.

[1] https://etherpad.openstack.org/p/icehouse-summit-heat-exorcism
[2]
https://github.com/openstack/heat/blob/master/contrib/rackspace/heat/engine/plugins/auto_scale.py




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Speaking at PyCON US 2014 about OpenStack?

2014-01-30 Thread Anita Kuno
On 01/30/2014 09:51 AM, Doug Hellmann wrote:
 On Thu, Jan 30, 2014 at 11:14 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 01/30/2014 08:42 AM, Stefano Maffulli wrote:
 If you're going to talk about anything related to OpenStack at PyCON
 US/Canada this year, please let me know. We're collecting the list of
 talks related to the project.

 Cheers,
 stef

 Would it be possible to start an etherpad for this? I am considering
 offering a workshop or lab of some sort (if I haven't missed the
 deadline for that) but don't want to be stepping on toes if someone else
 is already covering that material.

 
 The deadline for formal conference talks and tutorials has passed [1], but
 you could still schedule an open space room on site [2].
 
 [1] https://us.pycon.org/2014/speaking/cfp/
 [2] https://us.pycon.org/2014/community/openspaces/
 
 Doug
Thanks Doug, I thought I had seen something fly past me that
mentioned something about offering a workshop, but it appears
that was back in September - too late.

So ignore my request and thanks anyway,
Anita.
 
 
 

 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-01-30 07:38:38 -0800:
 On 30/01/14 06:01, Thomas Herve wrote:
  Hi all,
 
  While talking to Zane yesterday, he raised an interesting question about 
  whether or not we want to keep a LaunchConfiguration object for the native 
  autoscaling resources.
 
  The LaunchConfiguration object basically holds properties to be able to 
  fire new servers in a scaling group. In the new design, we will be able to 
  start arbitrary resources, so we can't keep a strict LaunchConfiguration 
  object as it exists, as we can have arbitrary properties.
 
  It may be still be interesting to store it separately to be able to reuse 
  it between groups.
 
  So either we do this:
 
  group:
 type: OS::Heat::ScalingGroup
 properties:
   scaled_resource: OS::Nova::Server
   resource_properties:
 image: my_image
 flavor: m1.large
 
 The main advantages of this that I see are:
 
 * It's one less resource.
 * We can verify properties against the scaled_resource at the place the 
 LaunchConfig is defined. (Note: in _both_ models these would be verified 
 at the same place the _ScalingGroup_ is defined.)
 
  Or:
 
  group:
 type: OS::Heat::ScalingGroup
 properties:
   scaled_resource: OS::Nova::Server
   launch_configuration: server_config
  server_config:
 type: OS::Heat::LaunchConfiguration
 properties:
   image: my_image
   flavor: m1.large
 
 
 I favour this one for a few reasons:
 
 * A single LaunchConfiguration can be re-used by multiple scaling 
 groups. Reuse is good, and is one of the things we have been driving 
 toward with e.g. software deployments.

I agree with the desire for re-use. In fact I am somewhat desperate to
have it as we try to write templates which allow assembling different
topologies of OpenStack deployment.

I would hope we would solve that at a deeper level, rather than making
resources for the things we think will need re-use. I think nested stacks
allow this level of re-use already anyway. Software config just allows
sub-resource composition.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack python clients libraries release process

2014-01-30 Thread Thierry Carrez
Tiago Mello wrote:
 Thanks for the answer! We are working on the
 https://blueprints.launchpad.net/python-glanceclient/+spec/cross-service-request-id
 and we were wondering what is the timing for getting a new version of
 the client and bump the version in nova requirements.txt...

In that case I'd approach the Glance PTL (Mark Washenberger) and ask him
when he'd like to cut the next release of python-glanceclient.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-01-30 Thread Jeremy Stanley
It's also worth pointing out a related side effect of that choice...

URL: 
http://logs.openstack.org/14/68714/3/gate/gate-ceilometer-python27/dc7e987/console.html#_2014-01-30_15_57_30_413
 

Uploads of branch tarballs are not stable and they're also not
atomic. If your job tries to retrieve that tarball at the same time
that it's being updated from a post-merge branch-tarball job, you
will end up with a truncated file and your job will fail. The larger
and more complex the tarball (for example nova's), the greater
chance you have to catch it at just the wrong moment.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Randall Burt
On Jan 30, 2014, at 12:09 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Zane Bitter's message of 2014-01-30 07:38:38 -0800:
 On 30/01/14 06:01, Thomas Herve wrote:
 Hi all,
 
 While talking to Zane yesterday, he raised an interesting question about 
 whether or not we want to keep a LaunchConfiguration object for the native 
 autoscaling resources.
 
 The LaunchConfiguration object basically holds properties to be able to 
 fire new servers in a scaling group. In the new design, we will be able to 
 start arbitrary resources, so we can't keep a strict LaunchConfiguration 
 object as it exists, as we can have arbitrary properties.
 
 It may be still be interesting to store it separately to be able to reuse 
 it between groups.
 
 So either we do this:
 
 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 resource_properties:
   image: my_image
   flavor: m1.large
 
 The main advantages of this that I see are:
 
 * It's one less resource.
 * We can verify properties against the scaled_resource at the place the 
 LaunchConfig is defined. (Note: in _both_ models these would be verified 
 at the same place the _ScalingGroup_ is defined.)

This looks a lot like OS::Heat::ResourceGroup, which I believe already 
addresses some of Zane's concerns around dynamic property validation.

 
 Or:
 
 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 launch_configuration: server_config
 server_config:
   type: OS::Heat::LaunchConfiguration
   properties:
 image: my_image
 flavor: m1.large
 
 
 I favour this one for a few reasons:
 
 * A single LaunchConfiguration can be re-used by multiple scaling 
 groups. Reuse is good, and is one of the things we have been driving 
 toward with e.g. software deployments.
 
 I agree with the desire for re-use. In fact I am somewhat desperate to
 have it as we try to write templates which allow assembling different
 topologies of OpenStack deployment.
 
 I would hope we would solve that at a deeper level, rather than making
 resources for the things we think will need re-use. I think nested stacks
 allow this level of re-use already anyway. Software config just allows
 sub-resource composition.

Agreed. Codifying re-use inside specific resource types is a game of catch-up I 
don't think we can win in the end.

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-30 Thread Vishvananda Ishaya

On Jan 30, 2014, at 6:26 AM, CARVER, PAUL pc2...@att.com wrote:

 Vishvananda Ishaya wrote:
  
 In testing I have been unable to saturate a 10g link using a single VM. Even 
 with multiple streams,
 the best I have been able to do (using virtio and vhost_net) is about 7.8g.
  
 Can you share details about your hardware and vSwitch config (possibly off 
 list if that isn’t a valid openstack-dev topic)
  
 I haven’t been able to spend any time on serious performance testing, but 
 just doing preliminary testing on a HP BL460cG8 and Virtual Connect I haven’t 
 been able to push more than about 1Gbps using a pretty vanilla Havana install 
 with OvS and VLANs (no GRE).

This is using nova-network mode. There are likely new software bottlenecks 
introduced by Neutron/OVS, but a huge amount of performance tweaking is around 
jumbo frames and vhost-net. This article has a bunch of excellent suggestions:

http://buriedlede.blogspot.com/2012/11/driving-100-gigabit-network-with.html

Vish

  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-30 Thread Vipin Balachandran
This library is highly specific to VMware drivers in OpenStack and not a
generic VMware API client. As Doug mentioned, this library won't be useful
outside OpenStack. Also, it has some dependencies on openstack.common code
as well. Therefore it makes sense to make this code part of Oslo.

 

By the way, a work-in-progress review has been posted for the VMware
cinder driver integration with the Oslo common code
(https://review.openstack.org/#/c/70108/). The nova integration is
currently in progress.

 

Thanks,

Vipin

 

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com] 
Sent: Wednesday, January 29, 2014 4:06 AM
To: Donald Stufft
Cc: OpenStack Development Mailing List (not for usage questions); Vipin
Balachandran
Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
straight to oslo.vmware

 

 

 

On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jan 28 2014, Doug Hellmann wrote:

 There are several reviews related to adding VMware interface code to
the
 oslo-incubator so it can be shared among projects (start at
 https://review.openstack.org/#/c/65075/7 if you want to look at the
code).

 I expect this code to be fairly stand-alone, so I wonder if we would be
 better off creating an oslo.vmware library from the beginning, instead
of
 bringing it through the incubator.

 Thoughts?

 This sounds like a good idea, but it doesn't look OpenStack specific, so
 maybe building a non-oslo library would be better.

 Let's not zope it! :)

+1 on not making it an oslo library.

 

Given the number of issues we've seen with stackforge libs in the gate,
I've changed my default stance on this point.

 

It's not clear from the code whether Vipin et al expect this library to be
useful for anyone not working with both OpenStack and VMware. Either way,
I anticipate that having the library under the symmetric gating rules and
managed by one of the OpenStack teams (oslo, nova, cinder?) and VMware
contributors should make life easier in the long run.

 

As far as the actual name goes, I'm not set on oslo.vmware it was just a
convenient name for the conversation.

 

Doug

 

 



 --
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
DCFA

 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-30 Thread Henry Nash
Vish,

Excellent idea to discuss this more widely.  To your point about domains not 
being well understood and most policy files being just "admin or not", the 
exception here is, of course, keystone itself, where we can use domains to 
support various levels of cloud/domain/project-level admin capability via the 
policy file.  Although the default policy file we supply is a bit like the 
"admin or not" versions, we also supply a much richer sample for those who 
want to do admin delegation via domains:

https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json

The other point is that one thing we did introduce in Havana was the concept of 
domain inheritance (where a role assigned to a domain could be specified to be 
inherited by all projects within that domain).  This was an attempt to provide 
a rudimentary multi-ownership capability (within our current token formats 
and policy capabilities).

https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-inherit-ext.md
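
For anyone who hasn't tried it, assigning such an inherited role boils down
to a single call against that extension. A minimal sketch using requests (the
endpoint path follows the spec linked above; the keystone URL and token below
are illustrative assumptions):

# Sketch only: grant user_id a role on domain_id that is inherited by all
# projects within that domain, via the OS-INHERIT extension.
import requests

KEYSTONE = "http://keystone.example.com:5000/v3"  # illustrative endpoint
TOKEN = "ADMIN_TOKEN"                             # illustrative admin token

def grant_inherited_role(domain_id, user_id, role_id):
    path = ("/OS-INHERIT/domains/%s/users/%s/roles/%s/inherited_to_projects"
            % (domain_id, user_id, role_id))
    resp = requests.put(KEYSTONE + path, headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()  # the extension replies 204 No Content on success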

I'm not suggesting these solve all the issues, just that we should be aware of 
these in the upcoming discussions.

Henry
On 28 Jan 2014, at 18:35, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hi Everyone,
 
 I apologize for the obtuse title, but there isn't a better succinct term to 
 describe what is needed. OpenStack has no support for multiple owners of 
 objects. This means that a variety of private cloud use cases are simply not 
 supported. Specifically, objects in the system can only be managed on the 
 tenant level or globally.
 
 The key use case here is to delegate administration rights for a group of 
 tenants to a specific user/role. There is something in Keystone called a 
 “domain” which supports part of this functionality, but without support from 
 all of the projects, this concept is pretty useless.
 
 In IRC today I had a brief discussion about how we could address this. I have 
 put some details and a straw man up here:
 
 https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
 I would like to discuss this strawman and organize a group of people to get 
 actual work done by having an irc meeting this Friday at 1600UTC. I know this 
 time is probably a bit tough for Europe, so if we decide we need a regular 
 meeting to discuss progress then we can vote on a better time for this 
 meeting.
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
 Please note that this is going to be an active team that produces code. We 
 will *NOT* spend a lot of time debating approaches, and instead focus on 
 making something that works and learning as we go. The output of this team 
 will be a MultiTenant devstack install that actually works, so that we can 
 ensure the features we are adding to each project work together.
 
 Vish
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Fox, Kevin M
The reuse case can be handled by using a nested stack. The scaled_resource 
type property would allow that to happen in the first arrangement. I don't 
think you can specify a resource type/nested stack with a LaunchConfig, which 
makes it much less preferable, I think. So it's less flexible and more verbose.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, January 30, 2014 9:09 AM
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

Excerpts from Zane Bitter's message of 2014-01-30 07:38:38 -0800:
 On 30/01/14 06:01, Thomas Herve wrote:
  Hi all,
 
  While talking to Zane yesterday, he raised an interesting question about 
  whether or not we want to keep a LaunchConfiguration object for the native 
  autoscaling resources.
 
  The LaunchConfiguration object basically holds properties to be able to 
  fire new servers in a scaling group. In the new design, we will be able to 
  start arbitrary resources, so we can't keep a strict LaunchConfiguration 
  object as it exists, as we can have arbitrary properties.
 
  It may be still be interesting to store it separately to be able to reuse 
  it between groups.
 
  So either we do this:
 
  group:
 type: OS::Heat::ScalingGroup
 properties:
   scaled_resource: OS::Nova::Server
   resource_properties:
 image: my_image
 flavor: m1.large

 The main advantages of this that I see are:

 * It's one less resource.
 * We can verify properties against the scaled_resource at the place the
 LaunchConfig is defined. (Note: in _both_ models these would be verified
 at the same place the _ScalingGroup_ is defined.)

  Or:
 
  group:
 type: OS::Heat::ScalingGroup
 properties:
   scaled_resource: OS::Nova::Server
   launch_configuration: server_config
  server_config:
 type: OS::Heat::LaunchConfiguration
 properties:
   image: my_image
   flavor: m1.large


 I favour this one for a few reasons:

 * A single LaunchConfiguration can be re-used by multiple scaling
 groups. Reuse is good, and is one of the things we have been driving
 toward with e.g. software deployments.

I agree with the desire for re-use. In fact I am somewhat desperate to
have it as we try to write templates which allow assembling different
topologies of OpenStack deployment.

I would hope we would solve that at a deeper level, rather than making
resources for the things we think will need re-use. I think nested stacks
allow this level of re-use already anyway. Software config just allows
sub-resource composition.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Devananda van der Veen
As far as nova-scheduler and Ironic go, I believe this is a solved problem.
Steps are:
- enroll hardware with proper specs (CPU, RAM, disk, etc)
- create flavors based on hardware specs
- scheduler filter matches requests exactly
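
As a concrete illustration of the second and third steps, a hedged
python-novaclient sketch (credentials, flavor name and the cpu_arch extra
spec below are examples only, not taken from any real deployment):

# Sketch: create a baremetal flavor that mirrors one enrolled hardware class
# exactly, so that the exact-match scheduler filters can key on it.
from novaclient.v1_1 import client

nova = client.Client("admin", "password", "admin",
                     "http://keystone.example.com:5000/v2.0")

# Flavor for nodes enrolled with 8 CPUs, 16 GB RAM and a 1 TB disk.
flavor = nova.flavors.create(name="bm.16g-1t", ram=16384, vcpus=8, disk=1000)
flavor.set_keys({"cpu_arch": "x86_64"})  # extra spec for capability matching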

There are, I suspect, three areas where this would fall short today:
- exposing to the user when certain flavors shouldn't be picked, because
there is no more hardware available which could match it
- ensuring that hardware is enrolled with the proper specs //
trouble-shooting when it is not
- a UI that does these well

If I understand your proposal correctly, you're suggesting that we
introduce non-deterministic behavior. If the scheduler filter falls back to
larger hardware when an exact match for $flavor is not available, even if
the search is in ascending order and upper-bounded by some percentage, the
user is still likely to get
something other than what they requested. From a utilization and
inventory-management standpoint, this would be a headache, and from a user
standpoint, it would be awkward. Also, your proposal is only addressing the
case where hardware variance is small; it doesn't include a solution for
deployments with substantially different hardware.

I don't think introducing a non-deterministic hack when the underlying
services already work, just to provide a temporary UI solution, is
appropriate. But that's just my opinion.

Here's an alternate proposal to support same-arch but different
cpu/ram/disk hardware environments:
- keep the scheduler filter doing an exact match
- have the UI only allow the user to define one flavor, and have that be
the lowest common denominator of available hardware
- assign that flavor's properties to all nodes -- basically lie about the
hardware specs when enrolling them
- inform the user that, if they have heterogeneous hardware, they will get
randomly chosen nodes from their pool, and that scheduling on heterogeneous
hardware will be added in a future UI release

This will allow folks who are using TripleO at the commandline to take
advantage of their heterogeneous hardware, instead of crippling
already-existing functionality, while also allowing users who have slightly
(or wildly) different hardware specs to still use the UI.
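
To make the lowest-common-denominator flavor concrete, it could be derived
from the enrolled node specs roughly like this (the node data is made up
purely for illustration):

# Sketch: compute the single flavor the UI would offer as the lowest common
# denominator of the available (illustrative) hardware.
nodes = [
    {'cpus': 8,  'memory_mb': 16384, 'local_gb': 1000},
    {'cpus': 12, 'memory_mb': 24576, 'local_gb': 1500},
]

lcd_flavor = {
    'vcpus':   min(n['cpus'] for n in nodes),
    'ram_mb':  min(n['memory_mb'] for n in nodes),
    'disk_gb': min(n['local_gb'] for n in nodes),
}
# Every node is then enrolled with (or advertised as) exactly these values,
# so the strict exact-match filter is satisfied by any node in the pool.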


Regards,
Devananda



On Thu, Jan 30, 2014 at 7:14 AM, Tomas Sedovic tsedo...@redhat.com wrote:

 On 30/01/14 15:53, Matt Wagner wrote:

 On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

 Hi all,

 I've seen some confusion regarding the homogenous hardware support as
 the first step for the tripleo UI. I think it's time to make sure we're
 all on the same page.

 Here's what I think is not controversial:

 1. Build the UI and everything underneath to work with homogenous
 hardware in the Icehouse timeframe
 2. Figure out how to support heterogenous hardware and do that (may or
 may not happen within Icehouse)

 The first option implies having a single nova flavour that will match
 all the boxes we want to work with. It may or may not be surfaced in the
 UI (I think that depends on our undercloud installation story).

 Now, someone (I don't honestly know who or when) proposed a slight step
 up from point #1 that would allow people to try the UI even if their
 hardware varies slightly:

 1.1 Treat similar hardware configuration as equal

 The way I understand it is this: we use a scheduler filter that wouldn't
 do a strict match on the hardware in Ironic. E.g. if our baremetal
 flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
 ram or 1.5TB disk.

 The UI would still assume homogenous hardware and treat it as such. It's
 just that we would allow for small differences.

 This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
 when the flavour says 32. We would treat the flavour as a lowest common
 denominator.


 Does Nova already handle this? Or is it built on exact matches?


 It's doing an exact match as far as I know. This would likely involve
 writing a custom filter for nova scheduler and updating nova.conf
 accordingly.



 I guess my question is -- what is the benefit of doing this? Is it just
 so people can play around with it? Or is there a lasting benefit
 long-term? I can see one -- match to the closest, but be willing to give
 me more than I asked for if that's all that's available. Is there any
 downside to this being permanent behavior?


 Absolutely not a long term thing. This is just to let people play around
 with the MVP until we have the proper support for heterogenous hardware in.

 It's just an idea that would increase the usefulness of the first version
 and should be trivial to implement and take out.

 If neither is the case or if we will in fact manage to have a proper
 heterogenous hardware support early (in Icehouse), it doesn't make any
 sense to do this.


 I think the lowest-common-denominator match will be familiar to
 sysadmins, too. Want to do RAID striping across a 500GB and a 750GB
 disk? You'll get a striped 500GB volume.



 

Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-30 Thread Yathiraj Udupi (yudupi)
It is really good that we are reviving the conversation we started during the 
last summit in Hong Kong, in one of the scheduler sessions, called “Smart 
resource placement”.  This is the document we discussed during the session; 
you may have seen it before:
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit

The idea is to separate the logic of the placement decision engine from the 
actual request and the final provisioning phase.  The placement engine itself 
can be pluggable, and the solver scheduler blueprint shows how it fits inside 
of Nova.

The discussions at the summit and in our weekly scheduler meetings led to us 
starting the “Smart resource placement” idea inside of Nova, with the plan to 
then take it to a unified global level spanning services such as cinder and 
neutron.

As you point out, I do agree with the two entities of placement advisor and 
placement engine, but I think there should be a third one – the provisioning 
engine, which should be responsible for whatever it takes to finally create 
the instances after the placement decision has been taken.
It is good to take incremental approaches, hence we should try to get patches 
like these accepted first within nova, and then slowly split up the logic 
into separate entities.

Thanks,
Yathi.





On 1/30/14, 7:14 AM, Gil Rapaport g...@il.ibm.com wrote:

Hi all,

Excellent definition of the issue at hand.
The recent blueprints of policy-based-scheduler and solver-scheduler indeed 
highlight a possible weakness in the current design, as despite their 
completely independent contributions (i.e. which filters to apply per request 
vs. how to compute a valid placement) their implementation as drivers makes 
combining them non-trivial.

As Alex Glikson hinted a couple of weekly meetings ago, our approach to this is 
to think of the driver's work as split between two entities:
-- A Placement Advisor, that constructs placement problems for scheduling 
requests (filter-scheduler and policy-based-scheduler)
-- A Placement Engine, that solves placement problems (HostManager in 
get_filtered_hosts() and solver-scheduler with its LP engine).

Such modularity should allow developing independent mechanisms that can be 
combined seamlessly through a unified, well-defined protocol: the placement 
advisor constructs placement problem objects and passes them to the placement 
engine, which returns the solution. The protocol can be orchestrated by the 
scheduler manager.
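
A rough sketch of that protocol, with every name below a placeholder rather
than existing code:

# Placeholder sketch of the advisor/engine split described above.
class PlacementProblem(object):
    """Constraints and objective for one scheduling request."""
    def __init__(self, request_spec, constraints, objective=None):
        self.request_spec = request_spec
        self.constraints = constraints
        self.objective = objective


class PlacementAdvisor(object):
    """Builds a PlacementProblem from a request (e.g. policy-based)."""
    def build_problem(self, request_spec, hosts):
        raise NotImplementedError


class PlacementEngine(object):
    """Solves a PlacementProblem (filters+weighers, or an LP solver)."""
    def solve(self, problem, hosts):
        raise NotImplementedError


def schedule(advisor, engine, request_spec, hosts):
    # Orchestration as it could be done by the scheduler manager.
    problem = advisor.build_problem(request_spec, hosts)
    return engine.solve(problem, hosts)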

As can be seen at this point already, the policy-based-scheduler blueprint can 
now be positioned as an improvement of the placement advisor. Similarly, the 
solver-scheduler blueprint can be positioned as an improvement of the placement 
engine.

I'm working on a wiki page that will get into the details.
Would appreciate your initial thoughts on this approach.

Regards,
Gil



From: Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 01/30/2014 01:43 PM
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler




Hi Sylvain,

1) Some filters such as AggregateCoreFilter and AggregateRAMFilter can change 
their parameters per aggregate. But what if an admin wants to change them for 
all hosts in an availability zone? Does he have to rewrite all the parameters 
in all aggregates? Or should we create a new AvailabilityZoneCoreFilter?

The Policy Based Scheduler (PBS) blueprint separates the effect (filtering 
according to cores) from its target (all hosts in an aggregate, or in an 
availability zone). It will benefit all filters, not just CoreFilter or 
RAMFilter, so that from now on we can avoid creating, for each filter XFilter, 
an AggregateXFilter and an AvailabilityZoneXFilter. Besides, if an admin wants 
to apply a filter to some aggregates (or availability zones) and not others 
(not calling the filter at all, rather than just modifying parameters), he can 
do it. It helps us avoid running all filters on all hosts.
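
Purely as an illustration of separating the effect from its target, a policy
entry could look something like this (format and keys invented for the sake
of discussion, not taken from the blueprint):

# Invented example: bind a filtering "effect" and its parameters to a target
# scope (an aggregate or an availability zone) instead of adding new filter
# classes per scope.
policy_rule = {
    'target': {'availability_zone': 'az-1'},      # where the effect applies
    'filters': ['CoreFilter', 'RamFilter'],       # which effects to run there
    'parameters': {'cpu_allocation_ratio': 8.0},  # per-target overrides
}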

2) In fact, we are also preparing for a separate scheduler, of which PBS is a 
very first step; that's why we purposely separate the Policy Based Scheduler 
from the Policy Based Scheduling Module (PBSM) [1], which is the core of our 
architecture. If you look at our code, you will see that 
Policy_Based_Scheduler.py is only slightly different from the Filter Scheduler. 
That is because we just want a link from Nova-scheduler to PBSM. We're trying 
to push some more management into the scheduler without causing too much 
modification, as you can see in the patch.

Thus I'm very happy that Gantt has been proposed. As I see it, Gantt is based 
on the Nova-scheduler code, with the plan of replacing nova-scheduler in J. The 

Re: [openstack-dev] Proposed Logging Standards

2014-01-30 Thread Sanchez, Cristian A
Is there any technical reason why Swift does not use oslo logging?
If not, I can work on incorporating that into Swift.

Thanks

Cristian

On 30/01/14 11:12, Sean Dague s...@dague.net wrote:

For all projects that use oslo logging (which is currently everything
except swift), this works.

   -Sean

On 01/30/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
 No idea, I only really work on Nova, but as this is in Oslo I expect so!
 
 Matt
 
 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 30 January 2014 13:44
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards

 Hi Matt,
 What about the rest of the components? Do they also have this
capability?
 Thanks

 Cristian

 On 30/01/14 04:59, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:

 Hi Cristian,

 The functionality already exists within Openstack (certainly it's
there
 in Nova) it's just not very well documented (something I keep meaning
 to
 do!)

 Basically you need to add the following to your nova.conf file:

 log_config=/etc/nova/logging.conf

 And then create /etc/nova/logging.conf with the configuration you want
 to use based on the Python Logging Module's ini configuration
format.

 Hope that helps,

 Matt

 -Original Message-
 From: Sanchez, Cristian A [mailto:cristian.a.sanc...@intel.com]
 Sent: 29 January 2014 17:57
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Proposed Logging Standards

 Hi Matthew,
 I'm interested to help in this switch to python logging framework for
 shipping to  logstash/etc. Are you working on a blueprint for this?
 Cheers,

 Cristian

 On 27/01/14 11:07, Macdonald-Wallace, Matthew
 matthew.macdonald-wall...@hp.com wrote:

 Hi Sean,

 I'm currently working on moving away from the built-in logging to
 use log_config=filename and the python logging framework so that
 we can start shipping to logstash/sentry/insert other useful tool
here.

 I'd be very interested in getting involved in this, especially from
 a why do we have log messages that are split across multiple lines
 perspective!

 Cheers,

 Matt

 P.S. FWIW, I'd also welcome details on what the Audit level gives
 us that the others don't... :)

 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: 27 January 2014 13:08
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Proposed Logging Standards

 Back at the beginning of the cycle, I pushed for the idea of doing
 some log  harmonization, so that the OpenStack logs, across
 services, made sense.
 I've
 pushed a proposed changes to Nova and Keystone over the past
 couple of days.

 This is going to be a long process, so right now I want to just
 focus on making  INFO level sane, because as someone that spends a
 lot of time staring at logs in  test failures, I can tell you it
 currently isn't.

 https://wiki.openstack.org/wiki/LoggingStandards is a few things
 I've written  down so far, comments welcomed.

 We kind of need to solve this set of recommendations once and for
 all up front,  because negotiating each change, with each project,
 isn't going to work (e.g -
 https://review.openstack.org/#/c/69218/)

 What I'd like to find out now:

 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for
 various log levels?
 3) who's interested in helping get these kinds of patches into
 various projects in  OpenStack?
 4) which projects are interested in participating (i.e. interested
 in prioritizing  landing these kinds of UX improvements)

 This is going to be progressive and iterative. And will require
 lots of folks  involved.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tzu-Mainn Chen
Wouldn't lying about the hardware specs when registering nodes be problematic 
for upgrades? Users would have 
to re-register their nodes. 

One reason why a custom filter feels attractive is that it provides us with a 
clear upgrade path: 

Icehouse 
* nodes are registered with correct attributes 
* create a custom scheduler filter that allows any node to match (see the 
sketch after this list) 
* users are informed that for this release, Tuskar will not differentiate 
between heterogeneous hardware 

J-Release 
* implement the proper use of flavors within Tuskar, allowing Tuskar to work 
with heterogeneous hardware 
* work with nova regarding scheduler filters (if needed) 
* remove the custom scheduler filter 
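
For reference, here is a minimal sketch of what that interim filter could
look like, assuming the Icehouse nova filter interface (filters.BaseHostFilter
and its host_passes hook); the class name is made up and this is only a
sketch, not an agreed implementation:

from nova.scheduler import filters


class AnyNodePassesFilter(filters.BaseHostFilter):
    """Interim filter: let every enrolled node satisfy every flavor."""

    def host_passes(self, host_state, filter_properties):
        # Deliberately ignore flavor extra_specs and node properties for
        # the Icehouse UI; exact matching would return in the J release.
        return True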

Mainn 

- Original Message -

 As far as nova-scheduler and Ironic go, I believe this is a solved problem.
 Steps are:
 - enroll hardware with proper specs (CPU, RAM, disk, etc)
 - create flavors based on hardware specs
 - scheduler filter matches requests exactly

 There are, I suspect, three areas where this would fall short today:
 - exposing to the user when certain flavors shouldn't be picked, because
 there is no more hardware available which could match it
 - ensuring that hardware is enrolled with the proper specs //
 trouble-shooting when it is not
 - a UI that does these well

 If I understand your proposal correctly, you're suggesting that we introduce
 non-deterministic behavior. If the scheduler filter falls back to $flavor
 when $flavor is not available, even if the search is in ascending order and
 upper-bounded by some percentage, the user is still likely to get something
 other than what they requested. From a utilization and inventory-management
 standpoint, this would be a headache, and from a user standpoint, it would
 be awkward. Also, your proposal is only addressing the case where hardware
 variance is small; it doesn't include a solution for deployments with
 substantially different hardware.

 I don't think introducing a non-deterministic hack when the underlying
 services already work, just to provide a temporary UI solution, is
 appropriate. But that's just my opinion.

 Here's an alternate proposal to support same-arch but different cpu/ram/disk
 hardware environments:
 - keep the scheduler filter doing an exact match
 - have the UI only allow the user to define one flavor, and have that be the
 lowest common denominator of available hardware
 - assign that flavor's properties to all nodes -- basically lie about the
 hardware specs when enrolling them
 - inform the user that, if they have heterogeneous hardware, they will get
 randomly chosen nodes from their pool, and that scheduling on heterogeneous
 hardware will be added in a future UI release

 This will allow folks who are using TripleO at the commandline to take
 advantage of their heterogeneous hardware, instead of crippling
 already-existing functionality, while also allowing users who have slightly
 (or wildly) different hardware specs to still use the UI.

 Regards,
 Devananda

 On Thu, Jan 30, 2014 at 7:14 AM, Tomas Sedovic tsedo...@redhat.com wrote:

  On 30/01/14 15:53, Matt Wagner wrote:

   On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

    Hi all,

    I've seen some confusion regarding the homogenous hardware support as
    the first step for the tripleo UI. I think it's time to make sure we're
    all on the same page.

    Here's what I think is not controversial:

    1. Build the UI and everything underneath to work with homogenous
    hardware in the Icehouse timeframe
    2. Figure out how to support heterogenous hardware and do that (may or
    may not happen within Icehouse)

    The first option implies having a single nova flavour that will match
    all the boxes we want to work with. It may or may not be surfaced in the
    UI (I think that depends on our undercloud installation story).

    Now, someone (I don't honestly know who or when) proposed a slight step
    up from point #1 that would allow people to try the UI even if their
    hardware varies slightly:

    1.1 Treat similar hardware configuration as equal

    The way I understand it is this: we use a scheduler filter that wouldn't
    do a strict match on the hardware in Ironic. E.g. if our baremetal
    flavour said 16GB ram and 1TB disk, it would also match a node with 24GB
    ram or 1.5TB disk.

    The UI would still assume homogenous hardware and treat it as such. It's
    just that we would allow for small differences.

    This *isn't* proposing we match ARM to x64 or offer a box with 24GB RAM
    when the flavour says 32. We would treat the flavour as a lowest common
    denominator.

   Does Nova already handle this? Or is it built on exact matches?

  It's doing an exact match as far as I know.

Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-30 Thread Soren Hansen
2100 UTC is 1 PM Pacific. :-)
On 29/01/2014 17.01, Vishvananda Ishaya vishvana...@gmail.com wrote:

 I apologize for the confusion. The Wiki time of 2100 UTC is the correct
 time (Noon Pacific time). We can move the next meeting to a different
 day/time that is more convenient for Europe.

 Vish


 On Jan 29, 2014, at 1:56 AM, Florent Flament 
 florent.flament-...@cloudwatt.com wrote:

  Hi Vishvananda,
 
  I would be interested in such a working group.
  Can you please confirm the meeting hour for this Friday ?
  I've seen 1600 UTC in your email and 2100 UTC in the wiki (
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting). 
 As I'm in Europe I'd prefer 1600 UTC.
 
  Florent Flament
 
  - Original Message -
  From: Vishvananda Ishaya vishvana...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Tuesday, January 28, 2014 7:35:15 PM
  Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
 
  Hi Everyone,
 
  I apologize for the obtuse title, but there isn't a better succinct term
 to describe what is needed. OpenStack has no support for multiple owners of
 objects. This means that a variety of private cloud use cases are simply
 not supported. Specifically, objects in the system can only be managed on
 the tenant level or globally.
 
  The key use case here is to delegate administration rights for a group
 of tenants to a specific user/role. There is something in Keystone called a
 “domain” which supports part of this functionality, but without support
 from all of the projects, this concept is pretty useless.
 
  In IRC today I had a brief discussion about how we could address this. I
 have put some details and a straw man up here:
 
  https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
  I would like to discuss this strawman and organize a group of people to
 get actual work done by having an irc meeting this Friday at 1600UTC. I
 know this time is probably a bit tough for Europe, so if we decide we need
 a regular meeting to discuss progress then we can vote on a better time for
 this meeting.
 
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
  Please note that this is going to be an active team that produces code.
 We will *NOT* spend a lot of time debating approaches, and instead focus on
 making something that works and learning as we go. The output of this team
 will be a MultiTenant devstack install that actually works, so that we can
 ensure the features we are adding to each project work together.
 
  Vish
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-30 Thread Vishvananda Ishaya
Thanks Soren, you are correct! Yay Timezones

Vish

On Jan 30, 2014, at 10:39 AM, Soren Hansen so...@linux2go.dk wrote:

 2100 UTC is 1 PM Pacific. :-)
 
 On 29/01/2014 17.01, Vishvananda Ishaya vishvana...@gmail.com wrote:
 I apologize for the confusion. The Wiki time of 2100 UTC is the correct time 
 (Noon Pacific time). We can move the next meeting to a different day/time 
 that is more convenient for Europe.
 
 Vish
 
 
 On Jan 29, 2014, at 1:56 AM, Florent Flament 
 florent.flament-...@cloudwatt.com wrote:
 
  Hi Vishvananda,
 
  I would be interested in such a working group.
  Can you please confirm the meeting hour for this Friday ?
  I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
  https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting 
  ). As I'm in Europe I'd prefer 1600 UTC.
 
  Florent Flament
 
  - Original Message -
  From: Vishvananda Ishaya vishvana...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Tuesday, January 28, 2014 7:35:15 PM
  Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
 
  Hi Everyone,
 
  I apologize for the obtuse title, but there isn't a better succinct term to 
  describe what is needed. OpenStack has no support for multiple owners of 
  objects. This means that a variety of private cloud use cases are simply 
  not supported. Specifically, objects in the system can only be managed on 
  the tenant level or globally.
 
  The key use case here is to delegate administration rights for a group of 
  tenants to a specific user/role. There is something in Keystone called a 
  “domain” which supports part of this functionality, but without support 
  from all of the projects, this concept is pretty useless.
 
  In IRC today I had a brief discussion about how we could address this. I 
  have put some details and a straw man up here:
 
  https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
  I would like to discuss this strawman and organize a group of people to get 
  actual work done by having an irc meeting this Friday at 1600UTC. I know 
  this time is probably a bit tough for Europe, so if we decide we need a 
  regular meeting to discuss progress then we can vote on a better time for 
  this meeting.
 
  https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
  Please note that this is going to be an active team that produces code. We 
  will *NOT* spend a lot of time debating approaches, and instead focus on 
  making something that works and learning as we go. The output of this team 
  will be a MultiTenant devstack install that actually works, so that we can 
  ensure the features we are adding to each project work together.
 
  Vish
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-01-30 Thread sahid
 For metering/usage purposes, does the old size of ephemeral disk
 continue to be shown in usage records, or does the size of the disk in
 the newly-selected instance type (flavor) get used? If the former, then
 this would be an avenue for users to Get more disk space than they are
 paying for. Something to look into...

Actually yes: the instance's status reports the new flavor's disk size, while
the space really allocated for the instance stays the same.

We probably need to raise a ResizeError exception; also, to keep good backward
compatibility, we could add a config option like libvirt.use_strong_resize=True
or something similar.
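
As an illustration only, such a flag could be registered with oslo.config
roughly like this (the option name, group and help text are just the
suggestion above, not an agreed or existing API):

from oslo.config import cfg

libvirt_resize_opts = [
    cfg.BoolOpt('use_strong_resize',
                default=False,
                help='If True, raise ResizeError when the target flavor '
                     'asks for a smaller disk than is currently allocated, '
                     'instead of silently keeping the old size.'),
]

CONF = cfg.CONF
CONF.register_opts(libvirt_resize_opts, group='libvirt')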

Regards,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-30 Thread David Stanek
That's why I love this site:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140130T2100


On Thu, Jan 30, 2014 at 1:46 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 Thanks Soren, you are correct! Yay Timezones

 Vish

 On Jan 30, 2014, at 10:39 AM, Soren Hansen so...@linux2go.dk wrote:

 2100 UTC is 1 PM Pacific. :-)
 On 29/01/2014 17.01, Vishvananda Ishaya vishvana...@gmail.com wrote:

 I apologize for the confusion. The Wiki time of 2100 UTC is the correct
  time (Noon Pacific time). We can move the next meeting to a different
  day/time that is more convenient for Europe.

 Vish


 On Jan 29, 2014, at 1:56 AM, Florent Flament 
 florent.flament-...@cloudwatt.com wrote:

  Hi Vishvananda,
 
  I would be interested in such a working group.
  Can you please confirm the meeting hour for this Friday ?
  I've seen 1600 UTC in your email and 2100 UTC in the wiki (
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting). 
 As I'm in Europe I'd prefer 1600 UTC.
 
  Florent Flament
 
  - Original Message -
  From: Vishvananda Ishaya vishvana...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Tuesday, January 28, 2014 7:35:15 PM
  Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
 
  Hi Everyone,
 
  I apologize for the obtuse title, but there isn't a better succinct
 term to describe what is needed. OpenStack has no support for multiple
 owners of objects. This means that a variety of private cloud use cases are
 simply not supported. Specifically, objects in the system can only be
 managed on the tenant level or globally.
 
  The key use case here is to delegate administration rights for a group
 of tenants to a specific user/role. There is something in Keystone called a
 domain which supports part of this functionality, but without support
 from all of the projects, this concept is pretty useless.
 
  In IRC today I had a brief discussion about how we could address this.
 I have put some details and a straw man up here:
 
  https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
  I would like to discuss this strawman and organize a group of people to
 get actual work done by having an irc meeting this Friday at 1600UTC. I
 know this time is probably a bit tough for Europe, so if we decide we need
 a regular meeting to discuss progress then we can vote on a better time for
 this meeting.
 
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
  Please note that this is going to be an active team that produces code.
 We will *NOT* spend a lot of time debating approaches, and instead focus on
 making something that works and learning as we go. The output of this team
 will be a MultiTenant devstack install that actually works, so that we can
 ensure the features we are adding to each project work together.
 
  Vish
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-01-30 Thread sahid
 In case it hasn't been considered yet, shrinking a filesystem can result
 in terrible fragmentation.  The block allocator in resize2fs does not do
 a great job of handling this case.  The result will be a very
 non-optimal file layout and measurably worse performance, especially for
 drives with a relatively high average seek time.

This is an interesting point and I really want to get more information
about it; I did some searching in the resize2fs manual but found nothing.
Also, what do you think about using freezero after the resize, if it is
available on the host? Could that fix this kind of problem?

Best,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread Ben Nemec

On 2014-01-30 09:28, James Slagle wrote:

devtest, our TripleO setup, has been rapidly evolving. We've added a
fair amount of configuration options for stuff like using actual
baremetal, and (soon) HA deployments by default. Also, the scripts
(which the docs are generated from) are being used for both CD and CI.

This is all great progress.

However, due to these changes,  I think that devtest no longer works
great as a tripleo developer setup. You haven't been able to complete
a setup following our docs for 1 week now. The patches are in review
to fix that, and they need to be properly reviewed and I'm not saying
they should be rushed. Just that it's another aspect of the problem of
trying to use devtest for CI/CD and a dev setup.

I think it might be time to have a developer setup vs. devtest, which
is more of a documented tripleo setup at this point.

In irc earlier this week (sorry if I'm misquoting the intent here), I
saw mention of getting setup easier by just using a seed to deploy an
overcloud.  I think that's a great idea.  We are all already probably
doing it :). Why not document that in some sort of fashion?

There would be some initial trade offs, around folks not necessarily
understanding the full devtest process. But, you don't necessarily
need to understand all of that to hack on the upgrade story, or
tuskar, or ironic.

These are just some additional thoughts around the process and mail I
sent earlier this week:
http://lists.openstack.org/pipermail/openstack-dev/2014-January/025726.html
But, I thought this warranted a broader discussion.


Another aspect I've noticed lately is that there has been discussion 
around making sure the defaults in devtest are production-ready, which 
seems to contradict both parts of the name devtest. :-)


I haven't really thought through how we would go about splitting things 
up, but I agree that it's a discussion we need to be having.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-30 Thread Prasad Vellanki
Zane
Thanks for putting this together. This will guide us as we develop some
resources in Heat.
As Chmouel said, it would be great if this could be converted into a blog article.

thanks
prasadv


On Wed, Jan 29, 2014 at 11:09 PM, Chmouel Boudjnah chmo...@enovance.comwrote:

 Zane Bitter zbit...@redhat.com writes:

  As I said, figuring this all out is really hard to do, and the
  existing resources in Heat are by no means perfect (we even had a
  session at the Design Summit devoted to fixing some of them[1]). If
  anyone has a question about a specific model, feel free to ping me or
  add me to the review and I will do my best to help.

 Thanks for writing this up Zane, I have been often confused with the
 modeling system of Heat, it may be worthwhile to store this in
 documentation or a blog article.

 Cheers,
 Chmouel.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-01-30 11:08:44 -0800:
 On 2014-01-30 09:28, James Slagle wrote:
  devtest, our TripleO setup, has been rapidly evolving. We've added a
  fair amount of configuration options for stuff like using actual
  baremetal, and (soon) HA deployments by default. Also, the scripts
  (which the docs are generated from) are being used for both CD and CI.
  
  This is all great progress.
  
  However, due to these changes,  I think that devtest no longer works
  great as a tripleo developer setup. You haven't been able to complete
  a setup following our docs for 1 week now. The patches are in review
  to fix that, and they need to be properly reviewed and I'm not saying
  they should be rushed. Just that it's another aspect of the problem of
  trying to use devtest for CI/CD and a dev setup.
  
  I think it might be time to have a developer setup vs. devtest, which
  is more of a documented tripleo setup at this point.
  
  In irc earlier this week (sorry if I'm misquoting the intent here), I
  saw mention of getting setup easier by just using a seed to deploy an
  overcloud.  I think that's a great idea.  We are all already probably
  doing it :). Why not document that in some sort of fashion?
  
  There would be some initial trade offs, around folks not necessarily
  understanding the full devtest process. But, you don't necessarily
  need to understand all of that to hack on the upgrade story, or
  tuskar, or ironic.
  
  These are just some additional thoughts around the process and mail I
  sent earlier this week:
  http://lists.openstack.org/pipermail/openstack-dev/2014-January/025726.html
  But, I thought this warranted a broader discussion.
 
 Another aspect I've noticed lately is that there has been discussion 
 around making sure the defaults in devtest are production-ready, which 
 seems to contradict both parts of the name devtest. :-)
 

Hm, can you point to some of those discussions? We want the defaults in
all of OpenStack to be production ready, and we want devtest to work in
that way when it doesn't put undue burden on development or testing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-30 07:28:01 -0800:
 devtest, our TripleO setup, has been rapidly evolving. We've added a
 fair amount of configuration options for stuff like using actual
 baremetal, and (soon) HA deployments by default. Also, the scripts
 (which the docs are generated from) are being used for both CD and CI.
 
 This is all great progress.
 
 However, due to these changes,  I think that devtest no longer works
 great as a tripleo developer setup. You haven't been able to complete
 a setup following our docs for 1 week now. The patches are in review
 to fix that, and they need to be properly reviewed and I'm not saying
 they should be rushed. Just that it's another aspect of the problem of
 trying to use devtest for CI/CD and a dev setup.
 

I wonder, if we have a gate which runs through devtest entirely, would
that reduce the instances where we've broken everybody? Seems like it
would, but the gate isn't going to read the docs, it is going to run the
script, so maybe it will still break sometimes.

BTW I do think those patches should be first priority.

 I think it might be time to have a developer setup vs. devtest, which
 is more of a documented tripleo setup at this point.


What if we just focus on breaking devtest less often? Seems like that is
achievable and then we don't diverge from CI.

 In irc earlier this week (sorry if i misquoting the intent here), I
 saw mention of getting setup easier by just using a seed to deploy an
 overcloud.  I think that's a great idea.  We are all already probably
 doing it :). Why not document that in some sort of fashion?


+1. I think a note at the end of devtest_seed which basically says "If
you are not interested in testing HA baremetal, set these variables like
so and skip to devtest_overcloud" would help. Great idea actually, as that's
what I do often when I know I'll be tearing down my setup later.

 There would be some initial trade offs, around folks not necessarily
 understanding the full devtest process. But, you don't necessarily
 need to understand all of that to hack on the upgrade story, or
 tuskar, or ironic.
 

Agreed totally. The processes are similar enough that when the time
comes that a user needs to think about working on things which impact
the undercloud they can back up to seed and then do that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-01-30 Thread Trevor McKay

I was playing with alembic migration and discovered that
op.drop_column() doesn't work with sqlite.  This is because sqlite
doesn't support dropping a column (broken imho, but that's another
discussion).  Sqlite throws a syntax error.

To make this work with sqlite, you have to copy the table to a temporary
table that excludes the column(s) you don't want, drop the old table, and
then rename the new table to the original name.
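
For illustration, the copy/drop/rename dance could be wrapped in a helper
along these lines (the helper, table and column names are made up; note that
CREATE TABLE ... AS SELECT also loses constraints and indexes, which would
have to be recreated explicitly):

from alembic import op


def sqlite_drop_columns(table, keep_columns):
    """Poor man's DROP COLUMN for sqlite: copy, drop, then rename."""
    cols = ', '.join(keep_columns)
    op.execute('CREATE TABLE %s_tmp AS SELECT %s FROM %s'
               % (table, cols, table))
    op.execute('DROP TABLE %s' % table)
    op.execute('ALTER TABLE %s_tmp RENAME TO %s' % (table, table))


# e.g. inside upgrade():
#     sqlite_drop_columns('clusters', ['id', 'name', 'status'])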

The existing 002 migration uses op.drop_column(), so I'm assuming it's
broken, too (I need to check what the migration test is doing).  I was
working on an 003.

How do we want to handle this?  Three good options I can think of:

1) don't support migrations for sqlite (I think no, but maybe)

2) Extend alembic so that op.drop_column() does the right thing (more
open-source contributions for us, yay :) )

3) Add our own wrapper in savanna so that we have a drop_column() method
that wraps copy/rename.

Ideas, comments?

Best,

Trevor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-01-30 Thread Jay Pipes
On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
 I was playing with alembic migration and discovered that
 op.drop_column() doesn't work with sqlite.  This is because sqlite
 doesn't support dropping a column (broken imho, but that's another
 discussion).  Sqlite throws a syntax error.
 
 To make this work with sqlite, you have to copy the table to a temporary
 excluding the column(s) you don't want and delete the old one, followed
 by a rename of the new table.
 
 The existing 002 migration uses op.drop_column(), so I'm assuming it's
 broken, too (I need to check what the migration test is doing).  I was
 working on an 003.
 
 How do we want to handle this?  Three good options I can think of:
 
 1) don't support migrations for sqlite (I think no, but maybe)
 
 2) Extend alembic so that op.drop_column() does the right thing (more
 open-source contributions for us, yay :) )
 
 3) Add our own wrapper in savanna so that we have a drop_column() method
 that wraps copy/rename.
 
 Ideas, comments?

Migrations should really not be run against SQLite at all -- only on the
databases that would be used in production. I believe the general
direction of the contributor community is to be consistent around
testing of migrations and to not run migrations at all in unit tests
(which use SQLite).

Boris (cc'd) may have some more to say on this topic.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes Jan 30

2014-01-30 Thread Alexander Ignatov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-30-18.05.html
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-01-30-18.05.log.html

Regards,
Alexander Ignatov




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Robert Collins
On 30 January 2014 23:26, Tomas Sedovic tsedo...@redhat.com wrote:
 Hi all,

 I've seen some confusion regarding the homogenous hardware support as the
 first step for the tripleo UI. I think it's time to make sure we're all on
 the same page.

 Here's what I think is not controversial:

 1. Build the UI and everything underneath to work with homogenous hardware
 in the Icehouse timeframe
 2. Figure out how to support heterogenous hardware and do that (may or may
 not happen within Icehouse)

 The first option implies having a single nova flavour that will match all
 the boxes we want to work with. It may or may not be surfaced in the UI (I
 think that depends on our undercloud installation story).

I don't agree that (1) implies a single nova flavour. In the context
of the discussion it implied avoiding doing our own scheduling, and
due to the many moving parts we never got beyond that.

My expectation is that (argh naming of things) a service definition[1]
will specify a nova flavour, right from the get go. That gives you
homogeneous hardware for any service
[control/network/block-storage/object-storage].

Jaromir's wireframes include the ability to define multiple such
definitions, so two definitions for compute, for instance (e.g. one
might be KVM, one Xen, or one w/GPUs and the other without, with a
different host aggregate configured).

As long as each definition has a nova flavour, users with multiple
hardware configurations can just create multiple definitions, done.

That is not entirely policy driven, so for the longer term you want to be
able to say 'flavour X *or* Y can be used for this', but as an early
iteration it seems very straightforward to me.

 Now, someone (I don't honestly know who or when) proposed a slight step up
 from point #1 that would allow people to try the UI even if their hardware
 varies slightly:

 1.1 Treat similar hardware configuration as equal

I think this is a problematic idea, because of the points raised
elsewhere in the thread.

But more importantly, it's totally unnecessary. If one wants to handle
minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
them as being identical, with the lowest common denominator - Nova
will then treat them as equal.
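
A trivial illustration of that lowest-common-denominator registration, in
plain Python with made-up numbers:

# Real specs of two slightly different boxes...
nodes = [
    {"cpus": 8, "memory_mb": 16384, "local_gb": 1000},
    {"cpus": 8, "memory_mb": 24576, "local_gb": 1100},
]

# ...enrolled with the minimum of each property, so a single flavour built
# from these values exact-matches every node.
lcd = dict((key, min(node[key] for node in nodes)) for key in nodes[0])
print(lcd)  # cpus 8, memory_mb 16384, local_gb 1000 (key order may vary)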

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Speaking at PyCON US 2014 about OpenStack?

2014-01-30 Thread Jarret Raim
Paul Kehrer and I are talking about the state of crypto in Python. The
talk isn't specifically about OpenStack, but we will be talking about
various OpenStack related issues including the Barbican service.


Thanks,
Jarret




On 1/30/14, 11:05 AM, Anita Kuno ante...@anteaya.info wrote:

On 01/30/2014 09:51 AM, Doug Hellmann wrote:
 On Thu, Jan 30, 2014 at 11:14 AM, Anita Kuno ante...@anteaya.info
wrote:
 
 On 01/30/2014 08:42 AM, Stefano Maffulli wrote:
 If you're going to talk about anything related to OpenStack at PyCON
 US/Canada this year, please let me know. We're collecting the list of
 talks related to the project.

 Cheers,
 stef

 Would it be possible to start an etherpad for this? I am considering
 offering a workshop or lab of some sort (if I haven't missed the
 deadline for that) but don't want to be stepping on toes if someone
else
 is already covering that material.

 
 The deadline for formal conference talks and tutorials has passed [1],
but
 you could still schedule an open space room on site [2].
 
 [1] https://us.pycon.org/2014/speaking/cfp/
 [2] https://us.pycon.org/2014/community/openspaces/
 
 Doug
Thanks Doug, I thought I had seen something fly past me that
mentioned offering a workshop, but it appears
that was over in September - too late.

So ignore my request and thanks anyway,
Anita.
 
 
 

 Thanks,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-30 Thread Mark Washenberger
On Wed, Jan 29, 2014 at 5:03 PM, Zane Bitter zbit...@redhat.com wrote:

 On 29/01/14 19:40, Jay Pipes wrote:

 On Wed, 2014-01-29 at 18:55 -0500, Zane Bitter wrote:

 I've noticed a few code reviews for new Heat resource types -
 particularly Neutron resource types - where folks are struggling to find
 the appropriate way to model the underlying API in Heat. This is a
 really hard problem, and is often non-obvious even to Heat experts, so
 here are a few tips that might help.

 Resources are nouns, they model Things. Ideally Things that have UUIDs.
 The main reason to have a resource is so you can reference its UUID (or
 some attribute) and pass it to another resource or to the user via an
 output.

 If two resources _have_ to be used together, they're really only one
 resource. Don't split them up - especially if the one whose UUID other
 resources depend on is the first to be created but not the only one
 actually required by the resource depending on it.


 Right. The above is precisely why I raised concerns about the image
 import/upload tasks work ongoing in Glance.

 https://wiki.openstack.org/wiki/Glance-tasks-import#
 Initial_Import_Request


 At least the dependencies there would be in the right order:

   ImportTask - Image - Server

 but if you were to model this in Heat, there should just be an Image
 resource that does the importing internally.

 (I'm not touching the question of whether Heat should have a Glance Image
 resource at all, which I'm deeply ambivalent about.)


Maybe I'm just missing the use case, but it seems like modeling anything
Glance-y in Heat doesn't quite make sense. If at all, the dependency would
run the other way (model heat-things in glance, just as we presently model
nova-things in glance). So I think we're in agreement.



 cheers,
 Zane.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-01-30 Thread Robert Li (baoli)
Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. A future release 
may make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.
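
Purely as an illustration of the agreed attribute (it is not in any released
API yet), a port create request carrying it might look roughly like this via
python-neutronclient; the credentials and the network UUID are placeholders:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

port_req = {'port': {'network_id': 'REPLACE-WITH-NETWORK-UUID',
                     'binding:vnic_type': 'direct'}}
neutron.create_port(port_req)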

Please correct any misstatements in the above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual mechanism drivers that support SRIOV? What would be included in 
this sriov utils/driver? I'm thinking that a candidate would be the helper 
functions that interpret the pci_slot, which is proposed as a string (see the 
parsing sketch after these issues). Anything else on your mind?

  -- what should mechanism drivers put in binding:vif_details, and how would 
nova use this information? As far as I can see from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port).
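
As an example of what such a shared helper might cover, here is a sketch of
parsing the proposed pci_slot string, assuming the standard
domain:bus:slot.function notation; the function and field names are made up:

import re

PCI_SLOT_RE = re.compile(
    r'^(?P<domain>[0-9a-fA-F]{4}):(?P<bus>[0-9a-fA-F]{2}):'
    r'(?P<slot>[0-9a-fA-F]{2})\.(?P<function>[0-7])$')


def parse_pci_slot(pci_slot):
    """Split a pci_slot string like '0000:08:00.2' into its components."""
    match = PCI_SLOT_RE.match(pci_slot)
    if match is None:
        raise ValueError('malformed pci_slot: %r' % pci_slot)
    return match.groupdict()


print(parse_pci_slot('0000:08:00.2'))
# {'domain': '0000', 'bus': '08', 'slot': '00', 'function': '2'}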

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?

 -- is a neutron agent making decisions based on the binding:vif_type? In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.

Thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] devtest thoughts

2014-01-30 Thread Ben Nemec

On 2014-01-30 13:32, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2014-01-30 11:08:44 -0800:

On 2014-01-30 09:28, James Slagle wrote:
 devtest, our TripleO setup, has been rapidly evolving. We've added a
 fair amount of configuration options for stuff like using actual
 baremetal, and (soon) HA deployments by default. Also, the scripts
 (which the docs are generated from) are being used for both CD and CI.

 This is all great progress.

 However, due to these changes,  I think that devtest no longer works
 great as a tripleo developer setup. You haven't been able to complete
 a setup following our docs for 1 week now. The patches are in review
 to fix that, and they need to be properly reviewed and I'm not saying
 they should be rushed. Just that it's another aspect of the problem of
 trying to use devtest for CI/CD and a dev setup.

 I think it might be time to have a developer setup vs. devtest, which
 is more of a documented tripleo setup at this point.

 In irc earlier this week (sorry if I'm misquoting the intent here), I
 saw mention of getting setup easier by just using a seed to deploy an
 overcloud.  I think that's a great idea.  We are all already probably
 doing it :). Why not document that in some sort of fashion?

 There would be some initial trade offs, around folks not necessarily
 understanding the full devtest process. But, you don't necessarily
 need to understand all of that to hack on the upgrade story, or
 tuskar, or ironic.

 These are just some additional thoughts around the process and mail I
 sent earlier this week:
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/025726.html
 But, I thought this warranted a broader discussion.

Another aspect I've noticed lately is that there has been discussion
around making sure the defaults in devtest are production-ready, which
seems to contradict both parts of the name devtest. :-)



Hm, can you point to some of those discussions? We want the defaults in
all of OpenStack to be production ready, and we want devtest to work in
that way when it doesn't put undue burden on development or testing.



I'm thinking of things like 
http://eavesdrop.openstack.org/irclogs/%23tripleo/%23tripleo.2014-01-20.log starting at 2014-01-20T19:17:56.


I realize that wasn't saying we wouldn't have dev/CI options available, 
but having devtest default to production settings causes cognitive 
dissonance for me.  Maybe it's simply a naming thing - if it were called 
tripleo-deploy.sh or something it would make more sense to me.  And even 
if that happened, I still think we need some developer-specific 
documentation to cover things like skipping the undercloud and deploying 
seed-overcloud.  I have enough issues if I skip a single step from 
devtest because I think it isn't necessary - half the time I'm wrong and 
something breaks later on.  I would never have even tried completely 
skipping the undercloud on my own.


There are also things like pypi-openstack and pip-cache (which are at 
least mentioned in devtest) and James's new local image support that can 
be huge timesavers for developers, but that we can't/won't use by default.  
Having a developer best-practice guide that could say "Set FOO=bar 
before starting devtest to take advantage of better squid caching" and 
"Skip the entire undercloud page if you're working only on the 
overcloud" would be very helpful IMHO.


On a related note, I would point out 
https://review.openstack.org/#/c/67557/ which is something that I would 
find useful as a developer and has two -1's for no reason other than it 
isn't useful outside of development, at least as I see it.  I'm not sure 
this one is directly production vs. development, but I think it's an 
example of how devtest isn't especially developer-oriented at this 
point.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Tzu-Mainn Chen
- Original Message -
 On 30 January 2014 23:26, Tomas Sedovic tsedo...@redhat.com wrote:
  Hi all,
 
  I've seen some confusion regarding the homogenous hardware support as the
  first step for the tripleo UI. I think it's time to make sure we're all on
  the same page.
 
  Here's what I think is not controversial:
 
  1. Build the UI and everything underneath to work with homogenous hardware
  in the Icehouse timeframe
  2. Figure out how to support heterogenous hardware and do that (may or may
  not happen within Icehouse)
 
  The first option implies having a single nova flavour that will match all
  the boxes we want to work with. It may or may not be surfaced in the UI (I
  think that depends on our undercloud installation story).
 
 I don't agree that (1) implies a single nova flavour. In the context
 of the discussion it implied avoiding doing our own scheduling, and
 due to the many moving parts we never got beyond that.
 
 My expectation is that (argh naming of things) a service definition[1]
 will specify a nova flavour, right from the get go. That gives you
 homogeneous hardware for any service
 [control/network/block-storage/object-storage].
 
 Jaromir's wireframes include the ability to define multiple such
 definitions, so two definitions for compute, for instance (e.g. one
 might be KVM, one Xen, or one w/GPUs and the other without, with a
 different host aggregate configured).
 
 As long as each definition has a nova flavour, users with multiple
 hardware configurations can just create multiple definitions, done.
 
 That is not entirely policy driven, so for longer term you want to be
 able to say 'flavour X *or* Y can be used for this', but as a early
 iteration it seems very straight forward to me.
 
  Now, someone (I don't honestly know who or when) proposed a slight step up
  from point #1 that would allow people to try the UI even if their hardware
  varies slightly:
 
  1.1 Treat similar hardware configuration as equal
 
 I think this is a problematic idea, because of the points raised
 elsewhere in the thread.
 
 But more importantly, it's totally unnecessary. If one wants to handle
 minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
 them as being identical, with the lowest common denominator - Nova
 will then treat them as equal.

Thanks for the reply!  So if I understand correctly, the proposal is for:

Icehouse: one flavor per service role, so nodes are homogeneous per role
J: multiple flavors per service role

That sounds reasonable; the part that gives me pause is when you talk about
handling variations in hardware by registering the nodes as equal.  If those
differences vanish, then won't there be problems in the future when we might
be able to properly handle those variations?

Or do you propose that we only allow minor variations to be registered as 
equal, so
that the UI has to understand the concept of minor variances?

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Jay Dobies

Wouldn't lying about the hardware specs when registering nodes be
problematic for upgrades?  Users would have
to re-register their nodes.


This was my first impression too, specifically the line "basically lie about 
the hardware specs when enrolling them". It feels more wrong to have the 
user provide false data than it does to ignore that data for Icehouse. 
I'd rather have the data correct now and ignore it than tell users that when 
they upgrade to Juno they have to re-enter all of their node data.


It's not heterogenous v. homogeneous support. It's whether or not we use 
the data. We can capture it now and not provide the user the ability to 
differentiate what something is deployed on. That's still a heterogeneous 
environment, just with a lack of fine-grained control over where the 
instances fall.


And all of this is simply for the time constraints of Icehouse's first 
pass. A known, temporary limitation.




One reason why a custom filter feels attractive is that it provides us
with a clear upgrade path:

Icehouse
   * nodes are registered with correct attributes
   * create a custom scheduler filter that allows any node to match
   * users are informed that for this release, Tuskar will not
differentiate between heterogeneous hardware

J-Release
   * implement the proper use of flavors within Tuskar, allowing Tuskar
to work with heterogeneous hardware
   * work with nova regarding scheduler filters (if needed)
   * remove the custom scheduler filter


Mainn



As far as nova-scheduler and Ironic go, I believe this is a solved
problem. Steps are:
- enroll hardware with proper specs (CPU, RAM, disk, etc)
- create flavors based on hardware specs
- scheduler filter matches requests exactly

There are, I suspect, three areas where this would fall short today:
- exposing to the user when certain flavors shouldn't be picked,
because there is no more hardware available which could match it
- ensuring that hardware is enrolled with the proper specs //
trouble-shooting when it is not
- a UI that does these well

If I understand your proposal correctly, you're suggesting that we
introduce non-deterministic behavior. If the scheduler filter falls
back to $flavor when $flavor is not available, even if the search
is in ascending order and upper-bounded by some percentage, the user
is still likely to get something other than what they requested.
 From a utilization and inventory-management standpoint, this would
be a headache, and from a user standpoint, it would be awkward.
Also, your proposal is only addressing the case where hardware
variance is small; it doesn't include a solution for deployments
with substantially different hardware.

I don't think introducing a non-deterministic hack when the
underlying services already work, just to provide a temporary UI
solution, is appropriate. But that's just my opinion.

Here's an alternate proposal to support same-arch but different
cpu/ram/disk hardware environments:
- keep the scheduler filter doing an exact match
- have the UI only allow the user to define one flavor, and have
that be the lowest common denominator of available hardware
- assign that flavor's properties to all nodes -- basically lie
about the hardware specs when enrolling them
- inform the user that, if they have heterogeneous hardware, they
will get randomly chosen nodes from their pool, and that scheduling
on heterogeneous hardware will be added in a future UI release

This will allow folks who are using TripleO at the commandline to
take advantage of their heterogeneous hardware, instead of crippling
already-existing functionality, while also allowing users who have
slightly (or wildly) different hardware specs to still use the UI.


Regards,
Devananda



On Thu, Jan 30, 2014 at 7:14 AM, Tomas Sedovic tsedo...@redhat.com
mailto:tsedo...@redhat.com wrote:

On 30/01/14 15:53, Matt Wagner wrote:

On 1/30/14, 5:26 AM, Tomas Sedovic wrote:

Hi all,

I've seen some confusion regarding the homogenous
hardware support as
the first step for the tripleo UI. I think it's time to
make sure we're
all on the same page.

Here's what I think is not controversial:

1. Build the UI and everything underneath to work with
homogenous
hardware in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and
do that (may or
may not happen within Icehouse)

The first option implies having a single nova flavour
that will match
all the boxes we want to work with. It may or may not be
surfaced in the

[openstack-dev] [oslo] starting work on oslo.test graduation

2014-01-30 Thread Doug Hellmann
I've started working on moving some of our test code out of the incubator
and into a library. I started with the test base classes and fixtures
because they're at the bottom of the dependency graph, and moving anything
else out was going to require copying the test stuff from the incubator,
and that seemed like it would just cause more cleanup work later.

The blueprint is at
https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-test and the
scratch repository where I've done the git filter-branch work is at
https://github.com/dhellmann/oslo.test. I'd like a few people (esp. Monty,
since he is listed as the maintainer of the fixture stuff) to take a look
at the results before I submit the change request to -infra to import the
repository into our git server next week.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] How to model resources in Heat

2014-01-30 Thread Mark Washenberger
On Thu, Jan 30, 2014 at 1:54 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mark Washenberger's message of 2014-01-30 12:41:40 -0800:
  On Wed, Jan 29, 2014 at 5:03 PM, Zane Bitter zbit...@redhat.com wrote:
   (I'm not touching the question of whether Heat should have a Glance
 Image
   resource at all, which I'm deeply ambivalent about.)
  
 
  Maybe I'm just missing the use case, but it seems like modeling anything
  Glance-y in Heat doesn't quite make sense. If at all, the dependency
 would
  run the other way (model heat-things in glance, just as we presently
 model
  nova-things in glance). So I think we're in agreement.
 

 I'm pretty sure it is useful to model images in Heat.

 Consider this scenario:


 resources:
   build_done_handle:
     type: AWS::CloudFormation::WaitConditionHandle
   build_done:
     type: AWS::CloudFormation::WaitCondition
     properties:
       handle: {Ref: build_done_handle}
   build_server:
     type: OS::Nova::Server
     properties:
       image: build-server-image
       userdata:
         join [ "",
           - "#!/bin/bash\n"
           - "build_an_image\n"
           - "cfn-signal -s SUCCESS "
           - {Ref: build_done_handle}
           - "\n" ]
   built_image:
     type: OS::Glance::Image
     depends_on: build_done
     properties:
       fetch_url: join [ "", ["http://", {get_attribute: [ build_server, fixed_ip ]}, "/image_path"]]
   actual_server:
     type: OS::Nova::Server
     properties:
       image: {Ref: built_image}


 Anyway, seems rather useful. Maybe I'm reaching.


Perhaps I am confused. It would be good to resolve that.

I think this proposal makes sense but is distinct from modeling the image
directly. Would it be fair to say that above you are modeling an
image-build process, and the image id/url is an output of that process?
Maybe the distinction I'm making is too fine. The difference is that once
an Image exists, you can pretty much just *download* it, you can't really
do dynamic stuff to it like you can with a nova server instance.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Devananda van der Veen
I was responding based on "Treat similar hardware configuration as equal". When
there is a very minor difference in hardware (eg, 1TB vs 1.1TB disks),
enrolling them with the same spec (1TB disk) is sufficient to solve all
these issues and mask the need for multiple flavors, and the hardware
wouldn't need to be re-enrolled. My suggestion does not address the desire
to support significant variation in hardware specs, such as 8GB RAM vs 64GB
RAM; in that case there is no situation in which I think those
differences should be glossed over, even as a short-term hack in Icehouse.

"if our baremetal flavour said 16GB ram and 1TB disk, it would also match a
node with 24GB ram or 1.5TB disk."

I think this will lead to a lot of confusion, and difficulty with inventory
/ resource management. I don't think it's suitable even as a
first-approximation.

Put another way, I dislike the prospect of removing currently-available
functionality (an exact-match scheduler and support for multiple flavors)
to enable ease-of-use in a UI. Not that I dislike UIs or anything... it
just feels like two steps backwards. If the UI is limited to homogeneous
hardware, accept that; don't take away heterogeneous hardware support from
the rest of the stack.


Anyway, it sounds like Robert has a solution in mind, so this is all moot :)

Cheers,
Devananda



On Thu, Jan 30, 2014 at 1:30 PM, Jay Dobies jason.dob...@redhat.com wrote:

  Wouldn't lying about the hardware specs when registering nodes be
 problematic for upgrades?  Users would have
 to re-register their nodes.


 This was my first impression too, specifically the line "basically lie
 about the hardware specs when enrolling them". It feels more wrong to have
 the user provide false data than it does to ignore that data for Icehouse.
 I'd rather have the data correct now and ignore it than tell users that,
 when they upgrade to Juno, they have to re-enter all of their node data.

 It's not heterogeneous vs. homogeneous support; it's whether or not we use
 the data. We can capture it now without providing the user the ability to
 differentiate what something is deployed on. That's still a heterogeneous
 environment, just with a lack of fine-grained control over where the
 instances fall.

 And all of this is simply for the time constraints of Icehouse's first
 pass. A known, temporary limitation.


 One reason why a custom filter feels attractive is that it provides us
 with a clear upgrade path:

 Icehouse
* nodes are registered with correct attributes
   * create a custom scheduler filter that allows any node to match
     (sketched below)
* users are informed that for this release, Tuskar will not
 differentiate between heterogeneous hardware

 J-Release
* implement the proper use of flavors within Tuskar, allowing Tuskar
 to work with heterogeneous hardware
* work with nova regarding scheduler filters (if needed)
* remove the custom scheduler filter
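
 For illustration, a minimal sketch of what such an allow-anything filter
 could look like against the Nova scheduler filter interface of this era
 (the class name is made up; this is not an agreed design):

   from nova.scheduler import filters


   class AnyNodeFilter(filters.BaseHostFilter):
       """Accept every host, regardless of the requested flavor's specs."""

       def host_passes(self, host_state, filter_properties):
           # Deliberately skip any comparison against the flavor's RAM,
           # CPU or disk values: every enrolled node matches every flavor.
           return True

 The filter would be listed in scheduler_default_filters in nova.conf for
 Icehouse and dropped again in the J release.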


 Mainn

 


 As far as nova-scheduler and Ironic go, I believe this is a solved
 problem. Steps are:
 - enroll hardware with proper specs (CPU, RAM, disk, etc)
 - create flavors based on hardware specs
 - scheduler filter matches requests exactly
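
 As a concrete illustration of those steps (not from the original mail),
 creating an exact-match flavor for one class of enrolled nodes might look
 roughly like this with python-novaclient; the credentials, endpoint and
 names below are hypothetical:

   from novaclient.v1_1 import client

   # Sketch only: credentials and endpoint are made up.
   nova = client.Client("admin", "secret", "admin",
                        "http://keystone.example.com:5000/v2.0")

   # Flavor whose RAM/vCPUs/disk mirror the enrolled hardware exactly, so
   # an exact-match scheduler filter can pair requests with nodes.
   flavor = nova.flavors.create(name="baremetal-16g-8c-1t",
                                ram=16384, vcpus=8, disk=1024)

   # Architecture extra spec, consumed by arch-aware filters.
   flavor.set_keys({"cpu_arch": "x86_64"})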

 There are, I suspect, three areas where this would fall short today:
 - exposing to the user when certain flavors shouldn't be picked,
 because there is no more hardware available which could match it
 - ensuring that hardware is enrolled with the proper specs //
 trouble-shooting when it is not
 - a UI that does these well

 If I understand your proposal correctly, you're suggesting that we
 introduce non-deterministic behavior. If the scheduler filter falls
 back to a larger flavor when the requested $flavor is not available,
 even if the search is in ascending order and upper-bounded by some
 percentage, the user is still likely to get something other than what
 they requested. From a utilization and inventory-management
 standpoint, this would be a headache, and from a user standpoint, it
 would be awkward. Also, your proposal only addresses the case where
 hardware variance is small; it doesn't include a solution for
 deployments with substantially different hardware.

 I don't think introducing a non-deterministic hack when the
 underlying services already work, just to provide a temporary UI
 solution, is appropriate. But that's just my opinion.

 Here's an alternate proposal to support same-arch but different
 cpu/ram/disk hardware environments:
 - keep the scheduler filter doing an exact match
 - have the UI only allow the user to define one flavor, and have
 that be the lowest common denominator of available hardware
 - assign that flavor's properties to all nodes -- basically lie
 about the hardware specs when enrolling them
 - inform the user that, if they have heterogeneous hardware, they
 will get randomly chosen nodes from 

Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-01-30 Thread Jordan OMara

On 30/01/14 16:14 -0500, Tzu-Mainn Chen wrote:


Thanks for the reply!  So if I understand correctly, the proposal is for:

Icehouse: one flavor per service role, so nodes are homogeneous per role
J: multiple flavors per service role

That sounds reasonable; the part that gives me pause is when you talk about
handling variations in hardware by registering the nodes as equal.  If those
differences vanish, then won't there be problems in the future when we might
be able to properly handle those variations?

Or do you propose that we only allow minor variations to be registered as 
equal, so
that the UI has to understand the concept of minor variances?



Back to your original point, the idea of "are we going to allow" seems
fraught with peril. Do we have some kind of tolerance for what hardware the
user is allowed to register after they register their first one? This
sounds like a recipe for user frustration.


Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-30 Thread Doug Hellmann
On Thu, Jan 30, 2014 at 12:38 PM, Vipin Balachandran 
vbalachand...@vmware.com wrote:

 This library is highly specific to the VMware drivers in OpenStack and not
 a generic VMware API client. As Doug mentioned, this library won't be
 useful outside OpenStack. It also has some dependencies on
 openstack.common code. Therefore it makes sense to make this code part of
 Oslo.


I think we have consensus that, assuming you are committing to API
stability, this set of code does not need to go through the incubator
before becoming a library. How stable is the current API?

If it is stable and is not going to be useful to anyone outside of
OpenStack, we can create an oslo.vmware library for it. I can start working
with -infra next week to set up the repository.

We will need someone on your team to be designated as the lead maintainer,
to coordinate with the Oslo PTL for release management issues and bug
triage. Is that you, Vipin?

We will also need to have a set of reviewers for the new repository. I'll
add oslo-core, but it will be necessary for a few people familiar with the
code to also be included. If you have anyone from nova or cinder who should
be a reviewer, we can add them, too. Please send me a list of names and the
email addresses used in gerrit so I can add them to the reviewer list when
the repository is created.

Doug





 By the way, a work-in-progress review has been posted for the VMware
 cinder driver's integration with the Oslo common code
 (https://review.openstack.org/#/c/70108/). The nova integration is
 currently in progress.



 Thanks,

 Vipin



 *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 *Sent:* Wednesday, January 29, 2014 4:06 AM
 *To:* Donald Stufft
 *Cc:* OpenStack Development Mailing List (not for usage questions); Vipin
 Balachandran
 *Subject:* Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
 straight to oslo.vmware







 On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


 On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

  On Tue, Jan 28 2014, Doug Hellmann wrote:
 
  There are several reviews related to adding VMware interface code to the
  oslo-incubator so it can be shared among projects (start at
  https://review.openstack.org/#/c/65075/7 if you want to look at the
 code).
 
  I expect this code to be fairly stand-alone, so I wonder if we would be
  better off creating an oslo.vmware library from the beginning, instead
 of
  bringing it through the incubator.
 
  Thoughts?
 
  This sounds like a good idea, but it doesn't look OpenStack specific, so
  maybe building a non-oslo library would be better.
 
  Let's not zope it! :)

 +1 on not making it an oslo library.



 Given the number of issues we've seen with stackforge libs in the gate,
 I've changed my default stance on this point.



 It's not clear from the code whether Vipin et al. expect this library to
 be useful for anyone not working with both OpenStack and VMware. Either
 way, I anticipate that having the library under the symmetric gating
 rules, and managed by one of the OpenStack teams (oslo, nova, cinder?)
 together with the VMware contributors, should make life easier in the long
 run.



 As far as the actual name goes, I'm not set on oslo.vmware; it was just a
 convenient name for the conversation.



 Doug






 
  --
  Julien Danjou
  # Free Software hacker # independent consultant
  # http://julien.danjou.info

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 -
 Donald Stufft
 PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ha][agents] host= parameter

2014-01-30 Thread Itsuro ODA
Hi,

 I haven't found any documentation about it. As far as I discovered,
 it's being used to provide active/passive replication of agents, since
 you can make agents on different hosts register with neutron under the
 same ID (of course, *never* at the same time).

Yes. We use the host= parameter for the purpose described above.
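
As a minimal illustration (the hostname below is made up), both the active
and the standby machine would carry the same value in their neutron
configuration, and only one of the two agents may run at any given time:

  # /etc/neutron/neutron.conf on BOTH machines of the pair
  [DEFAULT]
  host = l3-agent-pair-1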

Thanks.
-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

