Re: [openstack-dev] [Group-Based-Policy] Use cases for External Policy chains

2015-03-22 Thread Igor Cardoso
Ivar,

With the currently supported set of services I agree as well, but as more
services are supported in the future, or as we hand that choice over to
tenants' VMs, a generic model that does not restrict whether EPs can
provide service chains starts to be justified.

That said, I do not yet fully understand the fundamental difference
between having providers/consumers and simply changing the policy
classifiers' directions. For instance, I could have a provider PTG and
a consumer EP with the traffic headed towards the PTG redirecting to some
chain. Likewise, the PTG could be the consumer and the EP the provider and
still have traffic headed towards the PTG redirecting to some chain.
I would very much appreciate a simple clarification on this point.

For the use cases, and now looking at providers/consumers as more than
traffic directionality, an EP could be created as an external source
of video which would then pass through a video transcoding service
function (possibly deployed as a Nova instance) before reaching the
consuming PTGs (which could aggregate users of a telecommunications
service provider, per home, region, subscription type, etc.,
in the context of NFV). It's just an example off the top of my head.

Cheers,

On Thu, Mar 19, 2015 at 10:28 PM Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Hello,

 As a follow up on [0] I have a question for the community.
 There are multiple use cases for a PTG *providing* a ServiceChain which is
 *consumed* by an External Policy (think about LB/FW/IDS and so forth).
 However, given the current set of services we support, I don't see any use
 case for having the External Policy as the provider on a PRS with a chain.

 Am I missing something? And if not, how should we manage a REDIRECT action
 provided by an External Policy? We could either ignore, validate or treat
 that particular action just like a normal ALLOW.

 Thanks for your feedback,
 Ivar.

 [0] https://bugs.launchpad.net/group-based-policy/+bug/1432779
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] git review

2015-03-22 Thread Gary Kotton
Hi,
Any idea when this will be up and running again:


gkotton@ubuntu:~/nova$ git review

Problem running 'git remote update gerrit'

Fetching gerrit

ssh: connect to host review.openstack.org port 29418: Network is unreachable

fatal: Could not read from remote repository.


Please make sure you have the correct access rights

and the repository exists.

error: Could not fetch gerrit

Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] git review

2015-03-22 Thread Anita Kuno
On 03/22/2015 03:33 AM, Gary Kotton wrote:
 Hi,
 Any idea when this will be up and running again:
 
 
 gkotton@ubuntu:~/nova$ git review
 
 Problem running 'git remote update gerrit'
 
 Fetching gerrit
 
 ssh: connect to host review.openstack.org port 29418: Network is unreachable
 
 fatal: Could not read from remote repository.
 
 
 Please make sure you have the correct access rights
 
 and the repository exists.
 
 error: Could not fetch gerrit
 
 Thanks
 Gary
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Hi Gary:

We updated the OS that Gerrit runs on yesterday, changing the server to
do so.

This post explains the details:
http://lists.openstack.org/pipermail/openstack-infra/2015-February/002425.html

If you are behind a firewall and haven't changed the firewall rules you
may be experiencing a problem.

We posted the email almost 6 weeks ago to give folks time to change
their firewall rules so as to avoid issues.
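
If you want to check quickly whether this is a reachability problem from your
side, something like the following should work (substitute your own Gerrit
username; exact output will vary):

$ nc -zv review.openstack.org 29418
$ ssh -p 29418 <your-gerrit-username>@review.openstack.org gerrit version

If these hang or are refused while HTTPS to the same host works, outbound TCP
to port 29418 (or to the server's new IP address) is most likely still being
blocked by the firewall.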

Let us know if what you are experiencing is not explained by the above
linked mailing list post.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron extensions

2015-03-22 Thread Gary Kotton
Hi,
Regarding setting the MTU at the API level: this was my bad. This is a
read-only value. The MTU is learnt from the plugin.
Thanks
Gary

From: Salvatore Orlando sorla...@nicira.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Saturday, March 21, 2015 at 12:49 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Neutron extensions

It's good to have trackers for cleaning up the implementation of these
features. I hope we will soon be able to provide more details about what
cleanup activities should happen here.

Core vs extension is often misunderstood as essential vs optional. In my
opinion this classification is wrong. Indeed, I believe there are a lot of
people who consider the network entity optional (why not just create
subnets?), whereas a lot of people consider the floating IP concept essential
(ever tried to run any sort of app on OpenStack clouds without them?).
The API extensions serve two purposes: flexibility and evolution of the API.
While they've been successful at the former (even too much), they've been a bit
of a failure at the latter. This is why, as Akihiro suggested, we would like to
move away from extensions as a mechanism for evolving the API, through a
process which starts with condensing a lot of them into a super-core or
unified API.
On the other hand, it is also important to note that this plan has not been
approved anywhere, so reviewers, in theory, should stick to the "everything is
an extension" model.

Now everything I wrote in the last paragraphs is of little to no importance in 
the present situation.
I think the answers we want are:
1) Does it make sense to present these attributes to *all* users?
2) If the answer to #1 is yes, what should be the correct behaviour for plugins 
which are unable to understand these new attributes?


I think this is probably a good point to separate the discussion between the 
MTU extension and the vlan_transparent one.

The MTU issue has been a long-standing problem for neutron users. What this
extension does is simply, in my opinion, enable API control over an
aspect users were previously dealing with through custom-made scripts.
It solves a fundamental issue, and it seems to do so in a reasonable
way. The implementation is also complete, in the sense that there is a
component shipped with neutron which supports it - the DHCP agent. It's not
merely a management-layer change that somehow, something, will eventually implement.

My personal opinion is that I welcome proper MTU management. It does not
expose details of the backend technology, as it's the tenant network MTU we
are talking about.
If a plugin does not support explicitly setting the MTU parameter, I would
raise a 500 NotImplemented error. This will probably create a precedent, but as
I have also stated in the past, I tend to believe this might actually be better
than the hide & seek game we play with extensions.
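
For illustration only - this is not actual Neutron code, and the class and
flag names below are made up - the behaviour I'm describing would look roughly
like this inside a plugin:

    class ExamplePlugin(object):
        # Assumed capability flag: whether the backend can honour a
        # caller-supplied MTU.
        supports_network_mtu = False

        def create_network(self, context, network):
            net = network['network']
            mtu = net.get('mtu')
            if mtu is not None and not self.supports_network_mtu:
                # Fail loudly instead of silently accepting an attribute
                # the backend cannot honour.
                raise NotImplementedError(
                    "this plugin cannot set a per-network MTU")
            # ... real network creation would happen here ...
            return net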

The vlan_transparent feature serves a specific purpose for a class of
applications - NFV apps.
The requirement is well understood, and the goal of this feature is to support
exactly that. It was debated during the review process whether this was
actually a provider network attribute; in theory it is something that
characterises how the network should be implemented in the backend. However, it
was not possible to make this an admin attribute because non-admins
might also require a vlan_transparent network. Proper RBAC might allow us to expose
this attribute only to a specific class of users, but Neutron does not yet have
RBAC [1].

Because of its nature, vlan_transparent is an attribute that several
plugins will probably not be able to understand. Regardless of what the community
decides regarding extension vs non-extension, the code as it is implies that
this flag is present in every request - defaulting to False. This can lead to a
somewhat confusing situation, because users can set it to True and get a 200
response. As a user, I would think that Neutron has prepared a nice
vlan-transparent network for me... but if Neutron is running any plugin which
does not support this extension I would be in for a huge disappointment when
I discover my network is not vlan transparent at all!
Users need feedback, and they need consistency between the desired state they
express in the API and the actual state implemented in the control
plane. I think this is something which should be fixed in the cleanup
activities.
I reckon that perhaps, as a short-term measure, the configuration flag Armando
mentioned might be used to completely hide the API attribute if a deployer
chooses not to support vlan transparent networks.
Anyway, since I did not follow the review of the implementation I'll leave 

Re: [openstack-dev] [ceilometer] Pipeline for notifications does not seem to work - SOLVED

2015-03-22 Thread Tim Bell
I found a way to do it using the documentation at 
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-pipeline-transformers.html#d6e11759.

The trick was to use the arithmetic transformer and $(cpu).resource_metadata.vcpus.

- name: hs06_sink
  transformers:
      - name: arithmetic
        parameters:
            target:
                name: hs06
                unit: HS06
                type: gauge
                expr: "$(cpu).resource_metadata.vcpus*500*0.98"
  publishers:
      - notifier://
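
For anyone reproducing this: after restarting the ceilometer agents the new
meter should show up with something like the following (assuming the standard
python-ceilometerclient CLI):

    $ ceilometer meter-list | grep hs06
    $ ceilometer sample-list -m hs06 -l 5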

Thanks for the various suggestions,

Tim

 -Original Message-
 From: Igor Degtiarov [mailto:idegtia...@mirantis.com]
 Sent: 21 March 2015 16:30
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications does not
 seem to work
 
  I am just curious: have you restarted the ceilometer services after
  pipeline.yaml was changed?
 Igor Degtiarov
 Software Engineer
 Mirantis Inc
 www.mirantis.com
 
 
 On Sat, Mar 21, 2015 at 1:21 PM, Tim Bell tim.b...@cern.ch wrote:
  No errors in the notification logs.
 
 
 
  Should this work with the default ceilometer.conf file or do I need to
  enable anything ?
 
 
 
  I’ve also tried using arithmetic. When I have a meter like “cpu” for
  the source, this fires the expression evaluation without problems.
  However, I can’t find a good way of doing the appropriate calculations
  using the number of cores. Sample calculation is below
 
 
 
  expr: $(cpu)*0.98+$(vcpus)*10.0
 
 
 
  I have tried $(cpu.resource_metdata.vcpus) and
  $(cpu.resource_metdata.cpu_number) also. Any suggestions on an
  alternative approach that could work ?
 
 
 
  Any suggestions for the variable name to get at the number of cores
  when I’m evaluating an expression fired by the cpu time ?
 
 
 
  Tim
 
 
 
  From: gordon chung [mailto:g...@live.ca]
  Sent: 20 March 2015 20:55
  To: OpenStack Development Mailing List not for usage questions
 
 
  Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications
  does not seem to work
 
 
 
  i can confirm it works for me as well... are there any noticeable
  errors in the ceilometer-agent-notifications log? the snippet below
  looks sane to me though.
 
  cheers,
  gord
 
  From: idegtia...@mirantis.com
  Date: Fri, 20 Mar 2015 18:35:56 +0200
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [ceilometer] Pipeline for notifications
  does not seem to work
 
  Hi Tim
 
   I've checked your case on my devstack, and I've received the new hs06 meter
   in my meter list.
  
   So something is wrong with your local env.
 
 
  Cheers,
  Igor D.
  Igor Degtiarov
  Software Engineer
  Mirantis Inc
  www.mirantis.com
 
 
  On Fri, Mar 20, 2015 at 5:40 PM, Tim Bell tim.b...@cern.ch wrote:
  
  
   I’m running Juno with ceilometer and trying to produce a new meter
   which is based on vcpus * F (where F is a constant that is
   different for each hypervisor).
  
  
  
   When I create a VM, I get a new sample for vcpus.
  
  
  
   However, it does not appear to fire the transformer.
  
  
  
   The same approach using “cpu” works OK but this one is polling on a
   regular interval rather than a one off notification when the VM is
   created.
  
  
  
    Any suggestions or alternative approaches for how to get a sample
    based on the number of cores scaled by a fixed constant?
  
  
  
   Tim
  
  
  
    In my pipeline.yaml sources,
   
    - name: vcpu_source
      interval: 180
      meters:
          - vcpus
      sinks:
          - hs06_sink
   
    In my transformers, I have
   
    - name: hs06_sink
      transformers:
          - name: unit_conversion
            parameters:
                target:
                    name: hs06
                    unit: HS06
                    type: gauge
                    scale: 47.0
      publishers:
          - notifier://
  
  
  
  
  
  
  
  
  
  

Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-22 Thread Jay Pipes

On 03/20/2015 05:16 PM, Kevin Benton wrote:

To clarify a bit, we obviously divide lots of things by tenant (quotas,
network listing, etc). The difference is that we have nothing right now
that has to be unique within a tenant. Are there objects that are
uniquely scoped to a tenant in Nova/Glance/etc?


Yes. Virtually everything is :)

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Re: Why we didn't use k8s in kolla?

2015-03-22 Thread Steven Dake (stdake)
FenghuaFeng,

CCing openstack-dev

1. Kubernetes doesn’t offer a control or integration point.  We have that now 
with docker-compose.
2. Kubernetes doesn’t offer super privileged containers.  We need that in order
to operate an OpenStack environment (a rough sketch below illustrates what that means).
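
To illustrate point 2, here is a rough sketch (not Kolla's actual compose file;
the service name, image and mounts are made up) of what a super privileged
container looks like in docker-compose terms:

    nova-compute:
        image: example/nova-compute
        privileged: true
        net: host
        pid: host
        volumes:
            - /run:/run
            - /var/lib/nova:/var/lib/nova

Privileged mode, host network/PID namespaces and host bind mounts like these
are the kind of settings point 2 refers to.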

Regards
-steve

From: 449171342 449171...@qq.com
Date: Sunday, March 22, 2015 at 1:47 AM
To: Steven Dake std...@cisco.com
Subject: Why we didn't use k8s in kolla?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci-passthrough] Error: An object of type PciDevicePoolList is required here

2015-03-22 Thread Moshe Levi
Hi,

In the latest master nova code I keep getting the error: An object of type
PciDevicePoolList is required here.

My nova.conf contains pci_passthrough_whitelist.

When I tried to launch a VM after the devstack installation, the VM booted
successfully.
When I restart the compute node and then try to launch a VM, I get a failure
due to the error: An object of type PciDevicePoolList is required here. (It
doesn't matter whether it is a VM with a normal port or a VM with a direct port.)

In the debugger I can see that one of the resources sent to the scheduler
is pci_device_pools, which is a list, for example: 'pci_device_pools':
[{'count': 7, 'vendor_id': u'15b3', 'product_id': u'1004', 'tags':
{u'numa_node': None, u'physical_network': u'physnet1'}}].
When this resource is saved into the database I get the above error.
Please note I can reproduce this issue only after I restart the compute node.
Removing the pci_device_pools key from the resources (removing it from
self.compute_node in the resource_tracker) fixes this issue, but I am not sure
that it is the correct way to go.

Has anyone else seen this issue?
Should pci_device_pools be sent to the scheduler?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][cinder][nova][neutron] going forward to oslo-config-generator ...

2015-03-22 Thread Doug Hellmann
Excerpts from Gary Kotton's message of 2015-03-21 16:36:07 +:
 Hi,
 One of the issues that we had in Nova was that when we moved to the oslo
 libraries, configuration options supported by the libraries were no longer
 present in the generated configuration file. Is this something that is
 already supported or planned (sorry for being a little ignorant here)?

The new config generator uses entry points declared in the libraries to
discover their options. This is one of the main reasons for moving away
from the old generator that was scanning code in the local directory --
that code no longer contains all of the options.

 In neutron things may be a little more challenging as there are many
 different plugins, and with the decomposition there may be additional
 challenges. The configuration binding is done via the external decomposed
 code and not in the neutron code base. So it is not clear how that code
 may be parsed to generate the sample configuration.

The new generator expects the code to be installed. You then call it
with a list of namespaces to check for options. This lets you add
libraries, as mentioned above, as well as partitioning the options
within the app based on which service actually uses them. You could, for
example, have a different namespace for neutron.api and
neutron.agent and generate different sample configs for those 2
daemons.
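
To make that concrete, a rough sketch of the wiring (the module paths, option
list names and namespaces below are illustrative, not real project code):

    # setup.cfg
    [entry_points]
    oslo.config.opts =
        neutron.api = neutron.opts:list_api_opts
        neutron.agent = neutron.opts:list_agent_opts

    # neutron/opts.py
    def list_api_opts():
        return [('DEFAULT', API_OPTS)]

    def list_agent_opts():
        return [('AGENT', AGENT_OPTS)]

    # generate a sample config from whatever namespaces are installed
    $ oslo-config-generator --namespace neutron.api \
          --namespace oslo.messaging \
          --output-file etc/neutron/api.conf.sample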

Doug

 Thanks
 Gary
 
 On 3/21/15, 12:01 AM, Jay S. Bryant jsbry...@electronicjungle.net
 wrote:
 
 All,
 
 Let me start with the TLDR;
 
 Cinder, Nova and Neutron have lots of configuration options that need to
 be processed by oslo-config-generator to create the
 project.conf.sample file.  There are a couple of different ways this
 could be done.  I have one proposal out, which has raised concerns,
 there is a second approach that could be taken which I am proposing
 below.  Please read on if you have a strong opinion on the precedent we
 will try to set in Cinder.  :-)
 
 We discussed in the oslo meeting a couple of weeks ago a plan for how
 Cinder was going to blaze a trail to the new oslo-config-generator.  The
 result of that discussion and work is here:  [1]  It needs some more
 work but has the bare bones pieces there to move to using
 oslo-config-generator.
 
 With the change I have written extensive hacking checks that ensure that
 any lists that are registered with register_opts() are included in the
 base cinder/opts.py file that is then a single entry point that pulls
 all of the options together to generate the cinder.conf.sample file.
 This has raised concern, however, that whenever a developer adds a new
 list of configuration options, they are going to have to know to go back
 to cinder/opts.py and add their module and option list there.  The
 hacking check should catch this before code is submitted, but we are
 possibly setting ourselves up for cases where the patch will fail in the
 gate because updates are not made in all the correct places and because
 pep8 isn't run before the patch is pushed.
 
  It is important to note that this will not happen every time a
  configuration option is changed or added, as was the case with the old
  check-uptodate.sh script.  Only when a new list of configuration options
  is added, which is a much less likely occurrence.  To avoid this
 happening at all it was proposed by the Cinder team that we use the code
 I wrote for the hacking checks to dynamically go through the files and
 create cinder/opts.py whenever 'tox -egenconfig' is run.  Doing this
 makes me uncomfortable as it is not consistent with anything else I am
 familiar with in OpenStack and is not consistent with what other
 projects are doing to handle this problem.  In discussion with Doug
 Hellman, the approach also seemed to cause him concern.  So, I don't
 believe that is the right solution.
 
 An alternative that may be a better solution was proposed by Doug:
 
 We could even further reduce the occurrence of such issues by moving the
 list_opts() function down into each driver and have an entry point for
 oslo.config.opts in setup.cfg for each of the drivers.  As with the
 currently proposed solution, the developer doesn't have to edit a top
 level file for a new configuration option.  This solution adds that the
 developer doesn't have to edit a top level file to add a new
 configuration item list to their driver.  With this approach the change
 would happen in the driver's list_opts() function, rather than in
  cinder/opts.py.  The only time that setup.cfg would need to be edited is
 when a new package is added or when a new driver is added.  This would
 reduce some of the already minimal burden on the developer.  We,
 however, would need to agree upon some method for aggregating together
 the options lists on a per package (i.e. cinder.scheduler, cinder.api)
 level.  This approach, however, also has the advantage of providing a
 better indication in the sample config file of where the options are
 coming from.  That is an improvement over 

Re: [openstack-dev] [heat][congress] Stack lifecycle plugpoint as an enabler for cloud provider's services

2015-03-22 Thread VACHNIS, AVI (AVI)
Thanks Zane. Please see inline.

-Avi

 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Friday, 20 March 2015 22:13
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [heat][congress] Stack lifecycle plugpoint
 as an enabler for cloud provider's services
 
 On 19/03/15 06:17, VACHNIS, AVI (AVI) wrote:
  Hi,
 
  I'm looking at this interesting blueprint
 https://blueprints.launchpad.net/heat/+spec/stack-lifecycle-plugpoint
 and I hope you can easily clarify some things to me.
  I see the following statements related to this BP:
  * [in problem description section]: There are at least two primary
 use cases. (1) Enabling holistic (whole-pattern) scheduling of the
 virtual resources in a template instance (stack) prior to creating or
 deleting them. This would usually include making decisions about where
 to host virtual resources in the physical infrastructure to satisfy
 policy requirements. 
  * [in proposed change section]: Pre and post operation methods
 should not modify the parameter stack(s). Any modifications would be
 considered to be a bug. 
  * [in Patch set 7 comment by Thomas]: Are the plug-ins allowed to
 modify the stack? If yes, what parts can they modify? If not, should
 some code be added to restrict modifications?
  * [in Patch set 8 comment by Bill] : @Thomas Spatzier, The cleanest
 approach would be to not allow changes to the stack parameter(s). Since
 this is cloud-provider-supplied code, I think that it is reasonable to
 not enforce this restriction, and to treat any modifications of the
 stack parameter(s) as a defect in the cloud provider's extension code.
 
 I think you're asking the wrong question here; it isn't a question of
 what is _allowed_. The plugin runs in the same memory space as
 everything else in Heat. It's _allowed_ to do anything that is possible
 from Python code. The question for anyone writing a plugin is whether
 that's smart.

Right, of course we can do anything in Python :). By _allowed_ I meant whether
it would be wise and future-proof.
  
 
 In terms of guarantees, we can't offer any at all since we don't have
 any plugins in-tree and participating in continuous integration
 testing.
 
 The plugin interface itself should be considered stable (i.e. there
 would be a long deprecation cycle for any backward-incompatible
 changes), and if anyone brought an accidental breakage to our attention
 I think it would be cause for a revert or fix.
 
 The read-only behaviour of the arguments passed to the plugin as
 parameters (e.g. the Stack object) is not considered stable. In
 practice it tends to change relatively slowly, but there's much less
 attention paid to not breaking this for lifecycle plugins than there is
 e.g. for Resource plugins.
 
 Finally, the behaviour on write of the arguments is not only not
 considered stable, but support for it even working once is explicitly
 disclaimed. You are truly on your own if you try this.

I totally understand and agree with this statement... unless a more
sophisticated construct (as you said below) is provided and stitched to a
specific argument.
Just for illustration, today a user can bind a server's property to an output
of another server/SoftwareConfig. It doesn't feel so odd, since the user has
constructed a dynamic value using an official types model, a.k.a. a stable API.
Now let's assume we came to the conclusion that a Placement resource type is
something we want in HOT (we would have to thoroughly justify why, but let's
assume it for now); then a user would bind this type's output to a certain
stack resource property. I guess this is still the same as with any other type
dependency (one output affects another property).
Now, since we have the cloud provider plug-point and a Resource Placement Type
(assume we have one ;), I thought it would be appealing for cloud providers to
provide their own implementation (e.g. override the default) of the Resource
Placement Type interface.
I can think of some alternative implementations, but I just wanted to bring the
concept to the team for review as early as possible.
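
Just to make the plug-point part concrete, a very rough sketch (the
do_pre_op/do_post_op names are the ones referenced later in this thread; the
signatures, and everything else here, are my assumptions rather than the real
interface):

    import logging

    LOG = logging.getLogger(__name__)

    class ExamplePlacementPlugin(object):
        # Illustrative lifecycle plugin skeleton only.

        def do_pre_op(self, cnxt, stack, action=None):
            # Read-only inspection: e.g. hand the stack's resources to an
            # external placement/policy engine before anything is created.
            # Mutating 'stack' here is exactly what is warned against above.
            LOG.info("pre-op for stack %s (action=%s)", stack, action)

        def do_post_op(self, cnxt, stack, action=None, is_stack_failure=False):
            LOG.info("post-op for stack %s (failure=%s)", stack, is_stack_failure)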

 
   From the problem description one might understand it's desired to
 allow modification of resource placement (i.e. making decisions where
 to host...) by cloud provider plug-point code. Does the "should not modify
 the parameter stack" statement block this desired capability? Or is it just a
 rule not to touch the original parameters' values but still allow
 modifying, let's say, the availability_zone property as long as it's not
 affected by stack parameters?
 
 I don't think the word 'parameter' there refers to the user-supplied
 template parameters, it refers to the formal parameter of the plugin's
 do_p[re|ost]_op() method named 'stack'.
 
 On the availability zone thing specifically, I think the way forward is
 to give cloud operators a more sophisticated way of selecting the AZ
 when the user doesn't specify one (i.e. just requests the default).
 That could happen inside Heat, but it would probably be 

Re: [openstack-dev] [Neutron][IPAM] Uniqueness of subnets within a tenant

2015-03-22 Thread Ian Wells
On 22 March 2015 at 07:48, Jay Pipes jaypi...@gmail.com wrote:

 On 03/20/2015 05:16 PM, Kevin Benton wrote:

 To clarify a bit, we obviously divide lots of things by tenant (quotas,
 network listing, etc). The difference is that we have nothing right now
 that has to be unique within a tenant. Are there objects that are
 uniquely scoped to a tenant in Nova/Glance/etc?


 Yes. Virtually everything is :)


Everything is owned by a tenant.  Very few things are one per tenant, which
is where this feels like it's leading.

Seems to me that an address pool corresponds to a network area that you can
route across (because routing only works over a network with unique
addresses and that's what an address pool does for you).  We have those
areas and we use NAT to separate them (setting aside the occasional
isolated network area with no external connections).  But NAT doesn't
separate tenants, it separates externally connected routers: one tenant can
have many of those routers, or one router can be connected to networks in
both tenants.  We just happen to frequently use the one external router per
tenant model, which is why address pools *appear* to be one per tenant.  I
think, more accurately, an external router should be given an address pool,
and tenants have nothing to do with it.
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] PTL elections

2015-03-22 Thread John Griffith
On Fri, Mar 20, 2015 at 4:21 AM, Sergey Lukjanov slukja...@mirantis.com
wrote:

 Hi John,

 Murano isn't official project and so we've started the election process
 earlier, you could see dates in the first email in this thread. There was
 only one candidate, so, voting itself was bypassed.

 till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
 March 17, 2015 - 1300 UTC March 24, 2015: PTL elections

 The link [1] was an example of how it was going a year ago (April 2014),
 probably I've used bad wording :(

 The another link in my initial mail specifies the time frame for current
 Murano PTL election:

 https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty

 Thanks.

 On Fri, Mar 20, 2015 at 7:01 AM, John Griffith john.griffi...@gmail.com
 wrote:



 On Wed, Mar 18, 2015 at 6:59 AM, Serg Melikyan smelik...@mirantis.com
 wrote:

 Thank you!

 On Wed, Mar 18, 2015 at 8:28 AM, Sergey Lukjanov slukja...@mirantis.com
  wrote:

 The PTL candidacy proposal time frame ended and we have only one
 candidate.

 So, Serg Melikyan, my congratulations!

 Results documented in
 https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty#PTL

 On Wed, Mar 11, 2015 at 2:04 AM, Sergey Lukjanov 
 slukja...@mirantis.com wrote:

 Hi folks,

 due to the requirement to have officially elected PTL, we're running
 elections for the Murano PTL for Kilo and Liberty cycles. Schedule
 and policies are fully aligned with official OpenStack PTLs elections.

 You can find more info in official elections wiki page [0] and the same
 page for Murano elections [1], additionally some more info in the past
 official nominations opening email [2].

 Timeline:

 till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
 March 17, 2015 - 1300 UTC March 24, 2015: PTL elections

 To announce your candidacy please start a new openstack-dev at
 lists.openstack.org mailing list thread with the following subject:
 [murano] PTL Candidacy.

 [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
 [1] https://wiki.openstack.org/wiki/Murano/PTL_Elections_Kilo_Liberty
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html

 Thank you.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ​
 Certainly not disputing/challenging this, but I'm slightly confused;
 isn't the proposal deadline April 4?  You referenced it yourself in the
 link here: [1].  Or is there some special process unique to Murano?


 [1] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
 ​


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Thanks for clarifying.

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] updated Fedora Atomic image available - needs testing

2015-03-22 Thread Steven Dake (stdake)


On 3/22/15, 8:10 AM, Adrian Otto adrian.o...@rackspace.com wrote:

Also,

Please track any progress in this task:

https://bugs.launchpad.net/magnum/+bug/1434468

If any Magnum updates are needed, link them to that bug ticket, please.
Also, it would be nice to see an update on this here on this thread as
well.

Adrian

Folks,

My workstation is busted ATM so I am unable to test this, and Hongbin's
networking is malfunctioning so he wasn't able to confirm this solution.

If a couple cores could test the change that would be fantastic.

https://review.openstack.org/#/c/12/



 On Mar 20, 2015, at 8:33 AM, Steven Dake (stdake) std...@cisco.com
wrote:
 
 Hey folks,
 
 I have manually updated the Fedora 21 Atomic image via rpm-ostree
upgrade.  This image includes kubernetes 0.11 which some people have
said is required to use kubectl with current Magnum master.  I don't
have time for the next week to heavily test, but if someone could run
this image through testing with Magnum, I'd appreciate it.
 
 https://fedorapeople.org/groups/heat/kolla/fedora-21-atomic-2.qcow2
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!

2015-03-22 Thread shihanzhang
+1 to deprecate this option


At 2015-03-21 02:57:09, Assaf Muller amul...@redhat.com wrote:
Hello everyone,

The use_namespaces option in the L3 and DHCP Neutron agents controls whether you
can have multiple routers and DHCP networks managed by a single L3/DHCP agent,
or whether the agent manages only a single resource.
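
For reference, this is the option in question (shown for the L3 agent; the DHCP
agent has the same flag):

    # /etc/neutron/l3_agent.ini
    [DEFAULT]
    # True (the default): one agent hosts many routers, each in its own
    # network namespace. False: the agent manages a single resource without
    # namespaces - the mode proposed for deprecation here.
    use_namespaces = True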

Are there setups out there *not* using the use_namespaces option? I'm curious as
to why, and whether it would be difficult to migrate such a setup to use namespaces.

I'm asking because use_namespaces complicates Neutron code for what I gather
is an option that has not been relevant for years. I'd like to deprecate the 
option
for Kilo and remove it in Liberty.


Assaf Muller, Cloud Networking Engineer
Red Hat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Is it possible to do instance scale for a node in a living cluster ?

2015-03-22 Thread Li, Chen
Hi Sahara,

Recently, I learned that Sahara supports scaling nodes up and down, which means
scaling is limited to the number of nodes.
Is it possible to scale an individual instance?

For example, I build a small cluster at first: several slave nodes (running
datanode & nodemanager) and a single master node (running all other processes).
I keep increasing the number of slave nodes, and at some point my master node
becomes the performance bottleneck of the whole cluster.

In this case, I would like to do several things, such as:

1.   Resize the master node, using a command like nova resize to add more
CPU, memory and other resources to this single instance (a rough example
follows below).

2.   Split the processes on the master node across several nodes.
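
As a rough illustration of case 1 (the instance and flavor names are made up,
and this assumes the cloud allows resize):

    $ nova resize hadoop-master m1.xlarge
    # once the instance reaches VERIFY_RESIZE:
    $ nova resize-confirm hadoop-master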

I think this makes sense to users.
Or would the whole performance-bottleneck scenario never happen in the real
world?

Another example: I already have a big cluster, and now I want to enable a new
service on it.

1.   I would like to start a new node for the new service and add the new
node to my cluster.

2.   Or just start the new service on a node which is already in the cluster.

Currently, Sahara starts clusters and manages all nodes based on templates, so
everything in a running cluster has to be pre-defined.
The scenarios above break that pre-defined model.

So, my questions here are:
 Is it possible for Sahara to do things like that?

 Would Sahara want to support things like this?
 If yes, are there any plans, past or future?
 If not, are there any special reasons?


Looking forward to your reply.

Thanks.
-chen



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Multi Region Designate

2015-03-22 Thread Anik
Hi,
Are there any plans to have a multi-region DNS service through Designate?
For example, if a tenant has projects in multiple regions and wants to use a
single (flat) external domain name space for floating IPs, what is the proposed
solution for such a use case using Designate?
Regards,
Anik
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] 'Chassis' element in Ironic

2015-03-22 Thread Ganapathy, Sandhya
Dear All,

In one of the Ironic IRC meetings, a discussion came up on whether or not to
retain the 'Chassis' element in Ironic.
I am interested to know whether this discussion is still open and whether a
decision is yet to be made on retaining the component.

Thanks,
Sandhya
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Updating 'scheduled_at' field of nova instances in the database.

2015-03-22 Thread Deepthi Dharwar
All the VM information is stored in the instances table.
This includes all the time-related fields like scheduled_at, launched_at, etc.

After upgrading to Juno, I have noticed that my 'scheduled_at' field
is not getting set at all in the database. I do see my VMs
being spawned and running just fine. However, the 'launched_at' time
does get set correctly.


MariaDB [nova]> select created_at, deleted_at, host, scheduled_at, launched_at from instances;
+---------------------+---------------------+-----------+--------------+---------------------+
| created_at          | deleted_at          | host      | scheduled_at | launched_at         |
+---------------------+---------------------+-----------+--------------+---------------------+
| 2015-03-09 20:00:41 | 2015-03-10 17:12:11 | localhost | NULL         | 2015-03-09 20:01:30 |
| 2015-03-11 05:53:13 | NULL                | localhost | NULL         | 2015-03-18 19:48:12 |
+---------------------+---------------------+-----------+--------------+---------------------+


Can anyone let me know if this is a genuine issue, or has there been
a recent change with regard to updating this field?

I am basically trying to find out how long a particular VM has been running on
a given host.
I was using current time - scheduled time for this.
Is there a better way to get this value?
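
For example, would something like the following (using launched_at, which does
get set correctly) be a reasonable way to do it? The uuid/display_name columns
are assumed from the standard instances schema, and the host value is just an
example:

    SELECT uuid, display_name,
           TIMESTAMPDIFF(SECOND, launched_at, UTC_TIMESTAMP()) AS uptime_seconds
    FROM instances
    WHERE host = 'localhost'
      AND deleted_at IS NULL
      AND launched_at IS NOT NULL;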

Regards,


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev