Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-01-07 21:26:44 -0800:
 On Wed, 2014-01-08 at 17:20 +1300, Robert Collins wrote:
  On 8 January 2014 12:18, James Slagle james.sla...@gmail.com wrote:
   Sure, the crux of the problem was likely that versions in the distro
   were too old and they needed to be updated.  But unless we take on
   building the whole OS from source/git/whatever every time, we're
   always going to have that issue.  So, an additional benefit of
   packages is that you can install a known good version of an OpenStack
   component that is known to work with the versions of dependent
   software you already have installed.
  
  The problem is that OpenStack is building against newer stuff than is
  in distros, so folk building on a packaging toolchain are going to
  often be in catchup mode - I think we need to anticipate package based
  environments running against releases rather than CD.
 
 It's about matching the expectations of the end user (deployer). If they
 lean towards a CD model, then git based OpenStack deployments are
 clearly a necessity -- otherwise we'd need to maintain a set of package
 archives built (and tested!) for every project and every commit to
 master.
 
 If they lean towards a more traditional release model, then OpenStack
 packages maintained (and tested!) by the distros are a better fit for
 the end user.
 
 However...
 
 Both camps should be able to experience the benefits of having an
 OpenStack undercloud building an OpenStack overcloud, without forcing
 the end user to adopt a methodology they may not yet be comfortable
 with.
 

Jay's statements highlight that I have not been clear during this
thread. I don't mean to force any of the methods on anyone, and agree
with Jay that we should be able to leverage OpenStack to deploy OpenStack
in many different ways.

My reason for questioning the value of packages is that we have limited
resources, and I don't want to direct them toward an effort solely because
that's what we've always done. I'm a packager myself, so there is this
desire, sometimes, to just go back to the tools I'm comfortable with.

But if there are developers who want and/or need to put time into using
packages then by all means, I welcome you with open arms. I think the
thread has proven to me that there is value in giving people the option
to use TripleO's tools with packages.

 As an aside, Fuel has an overcloud builder that uses Puppet to deploy
 OpenStack with packages [1]. I understand the Fuel dev team at Mirantis
 is keen to join forces with the Triple-O contributor community and
 reduce duplicative efforts. This may be the perfect place to pull some
 practices from Fuel to enable some flexibility in how
 tripleo-image-elements constructs things.
 
 If you want a good indication of how much overlap there is, just look at
 the list of puppet modules in Fuel [2] vs. the list of elements in t-i-e
 [3].
 

Nice comparison.

 Sure, t-i-e is Bash scripts and Fuel is Puppet manifests, but they are
 trying to do the same fundamental thing: produce an Overcloud entirely
 through automation. There are definite advantages to both approaches.
 Would be great to put our heads together and get the best of both
 worlds. I think the end user would benefit.
 

Right, we are quite well aligned there. Where we run afoul of each
other is that Puppet is a religion, and thus Fuel's Puppet pieces
alienate the Chef believers. We'd rather not do that in TripleO.
Reminding myself of this is how I realize, above, that 'not packages'
is also a religion, so I need to make sure I remain secular.

I think we should actually use this as an example of where to plug
Puppet in on top of the TripleO native tools which are emphatically not
ever going to be a replacement for Puppet beyond the minimal needed to
get a running, testable cloud.

And then perhaps the Chef clergy can do the same, and we'll have a nice
example of how to layer tools on top of TripleO's modular design.

 
 [1] Just one example... here is the Glance installation by package:
 https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/glance/manifests/registry.pp#L76
 [2]
 https://github.com/stackforge/fuel-library/tree/master/deployment/puppet
 [3]
 https://github.com/openstack/tripleo-image-elements/tree/master/elements
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi,

I've started to work on the idea of supporting a kind of tenant/project-based
configuration for Ceilometer. Unfortunately I haven't yet reached the point of
having a blueprint that could be registered. I do not have deep knowledge of
the collector and compute agent services, but this feature would require some
deep changes for sure. Currently there are pipelines for data collection and
transformation, in which the counters can be specified, i.e. which data should
be collected, the time interval for data collection, and so on. These pipelines
can currently be configured only globally, in the pipeline.yaml file, which is
stored right next to the Ceilometer configuration files.
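For context, the global pipeline definition being discussed looks roughly like this (an illustrative excerpt; the keys shown are typical of the Havana-era pipeline.yaml and may differ between releases):

```yaml
---
-
    name: cpu_pipeline
    interval: 600            # collection period, in seconds
    counters:
        - "cpu"
    transformers:
        - name: "rate_of_change"
          parameters:
              target:
                  name: "cpu_util"
                  unit: "%"
                  type: "gauge"
                  scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
    publishers:
        - rpc://
```

A per-project configuration would need some way to scope entries like this to a tenant rather than applying them globally.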

In my view, we could keep the dynamic meter configuration blueprint,
considering extending it to dynamic configuration of Ceilometer as a whole
(not just the meters), and we could have a separate blueprint for the
project-based configuration of meters.

If that is OK for you, I will register the blueprint for these per-project
tenant settings, with some details, once I'm finished with the initial design
of how this feature could work.

Best Regards,
Ildiko

-Original Message-
From: Neal, Phil [mailto:phil.n...@hp.com] 
Sent: Tuesday, January 07, 2014 11:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

For multi-node deployments, implementing something like inotify would allow
administrators to push configuration changes out to multiple targets using
puppet/chef/etc. and have the daemons pick them up without a restart. Thumbs
up to that.
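The reload-without-restart idea can be sketched as follows. This is a simplified stand-in that polls the file's mtime, where a real deployment would use inotify, and the reload callback is hypothetical:

```python
import os
import time


def watch_config(path, on_change, poll=1.0, max_iters=None):
    """Invoke on_change(path) whenever the file's mtime changes.

    A simplified stand-in for inotify: a real daemon would subscribe to
    inotify events (or be poked by inotifywait) instead of polling, and
    on_change would re-read pipeline.yaml and rebuild the meters.
    """
    last = os.stat(path).st_mtime
    iters = 0
    while max_iters is None or iters < max_iters:
        time.sleep(poll)
        iters += 1
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            on_change(path)
```

The daemon keeps running and only re-reads configuration when the file actually changes, which is the behaviour being asked for here.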

As Tim Bell suggested, API-based enabling/disabling would allow users to update
meters via script, but then there's the question of how to work out global vs.
per-project/tenant settings... right now we collect the specified meters for
all available projects, and the API returns whatever data is stored, minus
filtered values. Maybe I'm missing something in the suggestion, but turning off
collection for an individual project seems like it'd require some deep changes.
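A toy sketch of what such per-project enabling/disabling might look like internally. All names here are hypothetical, not Ceilometer's actual code:

```python
class MeterRegistry:
    """Tracks which meters are disabled, per project (hypothetical)."""

    def __init__(self):
        self._disabled = {}  # project_id -> set of disabled meter names

    def disable(self, project_id, meter):
        self._disabled.setdefault(project_id, set()).add(meter)

    def enable(self, project_id, meter):
        self._disabled.get(project_id, set()).discard(meter)

    def should_collect(self, project_id, meter):
        # consulted by the collector before a sample is kept
        return meter not in self._disabled.get(project_id, set())
```

An API call would mutate this registry; the deep change Phil anticipates is plumbing the per-project check into the collection path itself.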

Vijay, I'll repeat dhellmann's request: do you have more detail in another doc? 
:-)

-   Phil

 -Original Message-
 From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo) 
 [mailto:vijayakumar.kodam@nsn.com]
 Sent: Tuesday, January 07, 2014 2:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: chmo...@enovance.com
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
 Sent: Monday, January 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 
 
 
 On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata 
 Consultancy Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote:
 
 In this case, simply changing the meter properties in a configuration
 file should be enough. There should be an inotify signal notifying
 ceilometer of changes in the config file. Then ceilometer should
 automatically update the meters without restarting.
 
 
 
 Why can't this be something configured by the admin with the
 inotifywait(1) command?
 
 
 
 Or this can be an API call for enabling/disabling meters which could 
 be more useful without having to change the config files.
 
 
 
 Chmouel.
 
 
 
 I haven't tried inotifywait() in this implementation. I need to check 
 if it will be useful for the current implementation.
 
 Yes, an API call could be more useful than changing the config files manually.
 
 
 
 Thanks,
 
 VijayKumar



[openstack-dev] Need your help to restore one commit.

2014-01-08 Thread Qing Xin Meng


Dear PTL,

I am working on a commit, 'Deploy VMware vCenter templates', which received a
-2 in Havana due to feature freeze. Could you please help remove the -2 so we
can move on?
https://review.openstack.org/#/c/34903/

Thanks!


Best Regards
---
Meng Qing Xin


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Gary Kotton
Hi,
In order for the VM to be booted, the image needs to be on a datastore
accessible by the host. By default the datastore will not have the image; it is
copied from Glance to the datastore, and this is most probably where the
problem is. The copy may take a while depending on the connectivity between the
OpenStack setup and your backend datastore. Once it has been done you will see
a directory on the datastore called vmware_base, which will contain that image.
From then on it should be smooth sailing.
Please note that we are working on a number of things to improve this:

 1.  Image cache aging (the blueprint is implemented and pending review)
 2.  Adding a VMware Glance datastore, which will greatly improve the copy
process described above

Thanks
Gary

From: Ray Sun xiaoq...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, January 8, 2014 4:30 AM
To: OpenStack Dev
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

Stackers,
I tried to create a new VM using the VMwareVCDriver driver, but creation is
very slow: for example, a 7 GB Windows image took 3 hours.

Then I tried to use curl to upload an ISO to vCenter directly.

curl -H Expect: -v --insecure --upload-file windows2012_server_cn_x64.iso 
'https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2'

The average speed is 0.8 MB/s.

Finally, I tried to use the vSphere web client to upload it; that managed only 250 KB/s.

I am not sure whether there are any special configurations for the vCenter web
interface. Please help.

Best Regards
-- Ray


[openstack-dev] [nova] Discuss the option delete_on_termination

2014-01-08 Thread 黎林果
Hi All,

   When attaching a volume while creating a server, the API request contains
'block_device_mapping', such as:
"block_device_mapping": [
    {
        "volume_id": VOLUME_ID,
        "device_name": "/dev/vdc",
        "delete_on_termination": true
    }
]

The API accepts the option 'delete_on_termination', but in the code it is
hardcoded to True. Why?

   Another situation: when attaching a volume to an existing server, there is
no 'delete_on_termination' option at all.

  Should we add 'delete_on_termination' when attaching a volume to an
existing server, and take its value from the request parameters instead of
hardcoding it?
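For reference, the boot-with-volume request body being discussed can be built like this (a sketch; the IDs, image, and flavor references are placeholders):

```python
import json

# placeholder, not a real volume
VOLUME_ID = "11111111-2222-3333-4444-555555555555"

body = {
    "server": {
        "name": "vm-with-volume",
        "imageRef": "IMAGE_ID_PLACEHOLDER",
        "flavorRef": "1",
        "block_device_mapping": [
            {
                "volume_id": VOLUME_ID,
                "device_name": "/dev/vdc",
                # the user asks for False here, but per this thread the
                # current code path overrides it to True
                "delete_on_termination": False,
            }
        ],
    }
}

payload = json.dumps(body)
```

The complaint is exactly that the flag in this entry is accepted by the API but ignored by the implementation.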


  See also:
https://blueprints.launchpad.net/nova/+spec/add-delete-on-termination-option


Best regards!



Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2014-01-08 Thread Erik Moe
I feel that we are getting quite far away from supporting my use case. Use
case: a VM wants to connect to different 'normal' Neutron networks from one
VNIC. VLANs are proposed in the blueprint since they are a common way to
separate 'networks'. This is just a way to connect to different Neutron
networks; it does not put requirements on the method used for tenant
separation in Neutron. The ability for the user to specify the VID is there
because, for this use case, the service would be used by normal tenants,
preferably without exposing Neutron internals (which might not use VLANs at
all for tenant separation). Also, several VMs could specify the same VID for
connecting to different Neutron networks, to avoid dependencies between
tenants.

We would like to have this functionality close to the VNIC, without requiring
an extra 'hop' in the network, both for latency and throughput and for fault
management. The strange optimizations are there because of this.
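The per-port VID mapping described above (the same tenant-facing VID usable by different VMs for different Neutron networks) can be sketched as a toy model:

```python
class TrunkPort:
    """Toy model of a VLAN-aware VNIC: VIDs are scoped to the port,
    so two VMs can reuse VID 100 for different Neutron networks."""

    def __init__(self):
        self._map = {}  # tenant-facing VID -> neutron network id

    def attach(self, vid, network_id):
        if vid in self._map:
            raise ValueError("VID %d already in use on this port" % vid)
        self._map[vid] = network_id

    def network_for(self, vid):
        return self._map[vid]
```

The point of the sketch is only that the mapping is keyed per port, not globally, which is what removes the dependency between tenants.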

Also, for this use case, the APIs from a user perspective could be cleaner.

Maybe we should break out this use case from the L2-gateway?

/Erik



On Mon, Dec 23, 2013 at 10:09 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 I think we have two different cases here - one where a 'trunk' network
 passes all VLANs, which is potentially supportable by anything that's not
 based on VLANs for separation, and one where a trunk can't feasibly do that
 but where we could make it pass a restricted set of VLANs by mapping.

 In the former case, obviously we need no special awareness of the nature
 of the network to implement an L2 gateway.

 In the latter case, we're looking at a specialisation of networks, one
 where you would first create them with a set of VLANs you wanted to pass
 (and, presumably, the driver would say 'ah, I must allocate multiple
 VLANs to this network rather than just one'). You've jumped in with two
 optimisations on top of that:

 - we can precalculate the VLANs the network needs to pass in some cases,
 because it's the sum of VLANs that L2 gateways on that network know about
 - we can use L2 gateways to make the mapping from 'tenant' VLANs to
 'overlay' VLANs

 They're good ideas but they add some limitations to what you can do with
 trunk networks that aren't actually necessary in a number of solutions.

 I wonder if we should try the general case first with e.g. a
 Linuxbridge/GRE based infrastructure, and then add the optimisations
 afterwards.  If I were going to do that optimisation I'd start with the
 capability mechanism and add the ability to let the tenant specify the
 specific VLAN tags which must be passed (as you normally would on a
 physical switch). I'd then have two port types - a user-facing one that
 ensures the entry and exit mapping is made on the port, and an
 administrative one which exposes that mapping internally and lets the
 client code (e.g. the L2 gateway) do the mapping itself.  But I think it
 would be complicated, and maybe even has more complexity than is
 immediately apparent (e.g. we're effectively allocating a cluster-wide
 network to get backbone segmentation IDs for each VLAN we pass, which is
 new and different) hence my thought that we should start with the easy case
 first just to have something working, and see how the tenant API feels.  We
 could do this with a basic bit of gateway code running on a system using
 Linuxbridge + GRE, I think - the key seems to be avoiding VLANs in the
 overlay and then the problem is drastically simplified.
 --
 Ian.


 On 21 December 2013 23:00, Erik Moe emoe...@gmail.com wrote:

 Hi Ian,

 I think your VLAN trunking capability proposal can be a good thing, so
 the user can request a Neutron network that can trunk VLANs without caring
 about detailed information regarding which VLANs to pass. This could be
 used for use cases where the user wants to pass VLANs between endpoints on
 an L2 network, etc.

 For the use case where a VM wants to connect to several normal Neutron
 networks using VLANs, I would prefer a solution that does not require a
 Neutron trunk network. Possibly by connecting an L2 gateway directly to the
 Neutron 'vNic' port, or some other solution. IMHO it would be good to map
 a VLAN to a Neutron network as early as possible.

 Thanks,
 Erik



 On Thu, Dec 19, 2013 at 2:15 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 19 December 2013 06:35, Isaku Yamahata isaku.yamah...@gmail.com wrote:


 Hi Ian.

 I can't see your proposal. Can you please make it public viewable?


 Crap, sorry - fixed.


  Even before I read the document I could list three use cases.  Eric's
  covered some of them himself.

 I'm not against trunking.
 I'm trying to understand what requirements need a trunk network in
 figure 1, in addition to an L2 gateway directly connected to the VM via
 a trunk port.


 No problem, just putting the information there for you.

 --
 Ian.


[openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
Greetings,

I have a question related to cold migration.

Now in OpenStack nova, we support live migration, cold migration and resize.

For live migration, we do not need to confirm after live migration finished.

For resize, we need to confirm, as we want to give end user an opportunity
to rollback.

The problem is cold migration: because cold migration and resize share the
same code path, once I submit a cold migration request and the cold migration
finishes, the VM goes to the verify_resize state and I need to confirm the
resize. I am a bit confused by this: why do I need to verify a resize for a
cold migration operation? Why not reset the VM to its original state directly
after cold migration?

Also, I think we probably need to split compute.api.resize() into two APIs:
one for resize and the other for cold migration.

1) The VM state can be either ACTIVE or STOPPED for a resize operation.
2) The VM state must be STOPPED for a cold migrate operation.

Any comments?

Thanks,

Jay


Re: [openstack-dev] [infra] Unit tests, gating, and real databases

2014-01-08 Thread Roman Podoliaka
Hi Ivan,

Indeed, nodepool nodes have MySQL and PostgreSQL installed and
running. There are databases you can access from your tests
(mysql://openstack_citest:openstack_citest@localhost/openstack_citest
and postgresql://openstack_citest:openstack_citest@localhost/openstack_citest).
[1] is a great example of how they are actually used for running
backend-specific DB test cases in oslo-incubator.

Besides, the openstack_citest user in PostgreSQL is allowed to create/drop
databases, which enables us to implement a slightly different approach to
running DB tests [2]. This can be very useful when you need more than one
DB schema (e.g. to run tests concurrently).
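The two points above can be sketched with small helpers. The URLs reproduce the nodepool credentials quoted in this thread; the throwaway-name scheme is an assumption about how concurrent tests might avoid colliding, given that openstack_citest may CREATE/DROP DATABASE on PostgreSQL:

```python
import uuid

CITEST = "openstack_citest"


def citest_url(backend):
    """Connection URL for the CI-provided database servers."""
    if backend not in ("mysql", "postgresql"):
        raise ValueError("unsupported backend: %s" % backend)
    return "%s://%s:%s@localhost/%s" % (backend, CITEST, CITEST, CITEST)


def throwaway_db_name(prefix="citest"):
    """A unique database name per test worker, for concurrent DB tests."""
    return "%s_%s" % (prefix, uuid.uuid4().hex[:12])
```

A test would open citest_url(...) directly for the shared-database approach, or create/drop a throwaway_db_name() database around each worker for the concurrent approach.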

Thanks,
Roman

[1] https://review.openstack.org/#/c/54375/
[2] https://review.openstack.org/#/c/47818/

On Fri, Jan 3, 2014 at 9:17 PM, Ivan Melnikov
imelni...@griddynamics.com wrote:
 Hi there,

 As far as I understand, the slaves that run gate-*-python27 and python26
 jobs have MySQL and Postgres servers installed and running, so we can
 test migrations and do functional testing for database-related code.
 I wanted to use this to improve TaskFlow gating, but I failed to find
 docs about it, and could not work out from the nova and oslo.db test
 code how these database instances should be used.

 Can anyone give some hints or pointers on where should I get
 connection config and what can I do with those database servers in
 unit and functional tests?

 --
 WBR,
 Ivan A. Melnikov



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Derek Higgins
On 08/01/14 05:07, Clint Byrum wrote:
 Excerpts from Fox, Kevin M's message of 2014-01-07 16:27:35 -0800:
 Another piece of the conversation, I think, is update philosophy. If
 you are always going to require a new image, and never any customization
 after build, the messiness that source installs usually cause in the
 filesystem image really doesn't matter. The package system allows you to
 easily update, add, and remove package bits at runtime, cleanly. In
 our experimenting with OpenStack, it's becoming hard to determine
 which philosophy is better. Golden images for some things make a lot
 of sense. For other random services, the maintenance of the golden
 image seems to be too much to bother with, and just installing a few
 packages after image start is preferable. I think both approaches are
 valuable. This may not directly relate to what is best for TripleO
 elements, but since we are talking philosophy anyway...

 
 The golden image approach should be identical to the package approach if
 you are doing any kind of testing work-flow.
 
 Just install a few packages is how you end up with, as Robert said,
 snowflakes. The approach we're taking with diskimage-builder should
 result in that image building extremely rapidly, even if you compiled
 those things from source.

This is the part of your argument I don't understand: creating images
with packages is no more likely to result in snowflakes than creating
images from sources in git.

You would build an image using packages and at the end of the build
process you can lock the package versions. Regardless of how the image
is built you can consider it a golden image. This image is then deployed
to your hosts and not changed.

We would still be using diskimage-builder; the main difference in the
whole process is that we would end up with an image that has more packages
installed and no virtualenvs.

 
 What I'm suggesting is that you still need to test everything every
 change you make, so you should just use the same work flow for
 everything.
 
 Again though, I think if you wish to make the argument that packages
 are undesirable, then ALL packages are probably undesirable for the same
 reasons. Right? Why not make elements for all dependencies, instead
 of using distro packages to get you 90% of the way there and then
 using source just for the OpenStack bits? If you always want the
 newest, latest, greatest Neutron, don't you want the newest vSwitch
 too? I'd argue, though, that there is a point of diminishing returns
 with source that packages fill. Then the argument is over where that
 point is. Some folks think the point is all the way over at 'just use
 packages for everything'.

 
 I've already stated that the distro is great for utilities, libraries,
 drivers, kernels, etc. These are platform things, not application
 things. OpenVSwitch _is_ part of the application, and I would entertain
 building it for images if we found ourselves in need of hyper-advanced
 features.
 
 What I've been suggesting is that if I'm going to do the work to get
 the latest OpenVSwitch, and I do it in a source-image way, I don't have
 to repeat it for every distribution's packaging tool chain.
 


Re: [openstack-dev] [Ceilometer] Sharing the load test result

2014-01-08 Thread Swann Croiset
Hi,

Your result is interesting, and not surprising given the different designs
you have described.
The Ceilometer team will work on the improvements, IIUC.
I found two relevant links: [1] [2].

@jay: the first case seems impossible to scale, so I bet on the latter :)

@June Yi:
I am curious to know how you generated load on Ceilometer with Ganglia.

What was the system usage of your servers during the two tests (CPU,
memory, I/O)?
What were the response times for alarm evaluations in Ceilometer, about 50
seconds on average?

By the way, thank you for sharing your tests.

[1] https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
[2]
https://etherpad.openstack.org/p/icehouse-summit-ceilometer-future-of-alarming



2014/1/7 Jay Pipes jaypi...@gmail.com

 On Mon, 2014-01-06 at 00:14 +, Deok-June Yi wrote:
  Hi, Ceilometer team.
 
  I'm writing to share my load test result and ask you for advice about
  Ceilometer.
 
  Before starting, for those who don't know Synaps [1]: Synaps is a
  'monitoring as a service' project that provides an AWS
  CloudWatch-compatible API. It was discussed for merging with the
  Ceilometer project during the Grizzly design phase, but the Synaps
  developers could not join the project at that time. And now Ceilometer
  has its own alarming feature.
 
  A few days ago, I installed Ceilometer and Synaps on my test
  environment and ran load test for over 2 days to evaluate their
  alarming feature in the aspect of real-time requirement. Here I attached
  test environment diagram and test result. The load model was as below.
  1.  Create 5,000 alarms
  2.  [Every 1 minute] Create 5,000 samples
 
  As a result, alarm evaluation time of Ceilometer was not predictable,
  whereas Synaps evaluated all alarms within 2 seconds every minute.
 
  This comes from two different design decisions for alarm evaluation
  in Ceilometer and Synaps: Synaps does not read the database, but
  evaluates in-memory, in-stream data for alarming, while Ceilometer's
  evaluation involves database read operations through REST API calls.
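A minimal sketch of such in-stream evaluation (illustrative only, not Synaps code): alarms are held in memory, keyed by meter, and each sample is checked as it flows past, with no database read on the hot path.

```python
class Alarm:
    def __init__(self, name, meter, threshold):
        self.name, self.meter, self.threshold = name, meter, threshold
        self.state = "ok"


def index_alarms(alarms):
    """Key alarms by meter so each sample only touches relevant alarms."""
    by_meter = {}
    for a in alarms:
        by_meter.setdefault(a.meter, []).append(a)
    return by_meter


def evaluate(by_meter, sample):
    """Update alarm states from one in-flight sample; return transitions."""
    fired = []
    for alarm in by_meter.get(sample["meter"], ()):
        new_state = "alarm" if sample["value"] > alarm.threshold else "ok"
        if new_state != alarm.state:
            alarm.state = new_state
            fired.append((alarm.name, new_state))
    return fired
```

Since evaluation cost is bounded by the alarms registered for that meter, latency stays predictable, which is the property the test result highlights.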

 So you are saying that the Synaps server is storing 14,400,000 samples
 in memory (2 days of 5000 samples per minute)? Or are you saying that
 Synaps is storing just the 5000 alarm records in memory and then
 processing (determining if the alarm condition was met) the samples as
 they pass through to a backend data store? I think it is the latter but
 I just want to make sure :)

 Best,
 -jay

  I think Ceilometer is better for creating alarms with more complex
  queries on metrics. However, Synaps is better if we have a real-time
  requirement for alarm evaluation.

  So, how about re-opening the blueprint cw-publish [2]? It was
  discussed and designed [3] at the start of the Grizzly development
  cycle, but it has not been implemented, and now I would like to work
  on it. Or is there any good way to fulfill the real-time requirement
  with Ceilometer?
 
  Please, don't hesitate in contacting me.
 
  Thank you,
  June Yi
 
  [1] https://wiki.openstack.org/Synaps
  [2] https://blueprints.launchpad.net/ceilometer/+spec/cw-publish
  [3]
 https://wiki.openstack.org/wiki/Ceilometer/blueprints/multi-publisher


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread David Xie
In nova/compute/api.py#2289, in the resize() function, there is a parameter
named flavor_id; if it is None, the request is treated as a cold migration.
In that case nova should skip the resize verification, but it doesn't.

Like Jay said, we should skip this step during cold migration. Does that make
sense?
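The proposed behaviour can be sketched as follows (illustrative logic only, not the actual nova code; the function name and states are simplified):

```python
def finish_resize(previous_state, flavor_id=None):
    """Sketch: flavor_id=None means cold migration, and the proposal is
    to return the VM to its previous state instead of VERIFY_RESIZE."""
    cold_migration = flavor_id is None
    # ... the instance is moved to another host here (elided) ...
    if cold_migration:
        return previous_state    # proposed: no confirm step needed
    return "VERIFY_RESIZE"       # a real resize still asks the user to
                                 # confirm or revert
```

Today both branches end in VERIFY_RESIZE; the thread argues only the flavor-changing one should.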


On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:

 Greetings,

 I have a question related to cold migration.

 Now in OpenStack nova, we support live migration, cold migration and
 resize.

 For live migration, we do not need to confirm after live migration
 finished.

 For resize, we need to confirm, as we want to give end user an opportunity
 to rollback.

 The problem is cold migration: because cold migration and resize share the
 same code path, once I submit a cold migration request and the cold
 migration finishes, the VM goes to the verify_resize state and I need to
 confirm the resize. I am a bit confused by this: why do I need to verify a
 resize for a cold migration operation? Why not reset the VM to its original
 state directly after cold migration?

 Also, I think we probably need to split compute.api.resize() into two
 APIs: one for resize and the other for cold migration.

 1) The VM state can be either ACTIVE or STOPPED for a resize operation.
 2) The VM state must be STOPPED for a cold migrate operation.

 Any comments?

 Thanks,

 Jay





-- 

Best Regards,
David Xie
Founder of ScriptFan technology community - http://scriptfan.com
Manager of Xi'an GDG (Google Developer Group)
http://about.me/davidx

-
Everything happens for a reason. Attitude determines everything!


Re: [openstack-dev] Tempest Testcases for ML2 Mechanism drivers - Help

2014-01-08 Thread Eugene Nikanorov
Trinath,

Tempest tests should be backend-agnostic, so specific tests for your
mechanism driver are not needed.
Instead, you need a specific testing environment that runs the tempest tests
against a deployment with the mechanism drivers you want to test. See also:
http://ci.openstack.org/third_party.html

Thanks,
Eugene.


On Wed, Jan 8, 2014 at 10:56 AM, trinath.soman...@freescale.com wrote:

  Hi  –



 With respect to writing test cases with Tempest for Neutron,



 I have a Freescale Mechanism Driver which support a Cloud Resource
 Discovery (CRD) Service .



 The complete data flow is shown in the attached diagram (not reproduced
 here).

 The FSL mechanism driver depends partly on the ML2 plug-in (the
 _precommit and _postcommit definitions) and partly on the CRD service
 (with CRD client calls in the _postcommit definitions).



 I want to write Tempest test cases for the FSL Mechanism driver.



 Kindly help me with how to write Tempest test cases for this kind of
 mechanism driver.



 Also, please advise on how to submit the blueprint document, code base,
 and test cases for review.



 Thanking you for the help.



 --

 Trinath Somanchi - B39208

 trinath.soman...@freescale.com | extn: 4048





Re: [openstack-dev] [Neutron] Availability of external testing logs

2014-01-08 Thread Rossella Sblendido
Hi all,

going back to the original topic of making the logs public, I have a
question:

how long should the logs be kept? One week? One month?

cheers,

Rossella


On Tue, Jan 7, 2014 at 1:22 PM, Torbjorn Tornkvist kruska...@gmail.com wrote:

 My problem seems to be the same as the one reported here:


 https://bitbucket.org/pypa/setuptools/issue/129/assertionerror-egg-info-pkg-info-is-not-a

 Not quite sure, however, how to bring the fix into my setup.

 Cheers, Tobbe


 On 2014-01-07 10:38, Torbjorn Tornkvist wrote:

 Hi,

 Sorry for the problems.
 I missed the direct mails to me (I'm drowning in OpenStack mail...).
 I will make sure our Jenkins setup won't be left unattended in the future.

 How can I remove those '-1' votes?

 It seems that from:  Jan 2, 2014 5:46:26 PM
 after change: https://review.openstack.org/#/c/64696/

 something happened that makes my tox run crash with a traceback.
 I'll include the traceback below in case someone can give some help.
 (I'm afraid I don't know anything about python...)
 ---
 vagrant@quantal64:~/neutron$ sudo tox -e py27 -r --
 neutron.tests.unit.ml2.test_mechanism_ncs

 GLOB sdist-make: /home/vagrant/neutron/setup.py
 py27 create: /home/vagrant/neutron/.tox/py27
 ERROR: invocation failed, logfile:
 /home/vagrant/neutron/.tox/py27/log/py27-0.log
 ERROR: actionid=py27
 msg=getenv
 cmdargs=['/usr/bin/python2.7',
 '/usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv.py',
 '--setuptools', '--python', '/usr/bin/python2.7', 'py27']
 env={'LC_NUMERIC': 'sv_SE.UTF-8', 'LOGNAME': 'root', 'USER': 'root',
 'HOME': '/home/vagrant', 'LC_PAPER': 'sv_SE.UTF-8', 'PATH':
 '/home/vagrant/neutron/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'DISPLAY': 'localhost:10.0', 'LANG': 'en_US.utf8', 'TERM':
 'xterm-256color', 'SHELL': '/bin/bash', 'LANGUAGE': 'en_US:',
 'LC_MEASUREMENT': 'sv_SE.UTF-8', 'SUDO_USER': 'vagrant', 'USERNAME':
 'root', 'LC_IDENTIFICATION': 'sv_SE.UTF-8', 'LC_ADDRESS': 'sv_SE.UTF-8',
 'SUDO_UID': '1000', 'VIRTUAL_ENV': '/home/vagrant/neutron/.tox/py27',
 'SUDO_COMMAND': '/usr/local/bin/tox -e py27 -r --
 neutron.tests.unit.ml2.test_mechanism_ncs', 'SUDO_GID': '1000',
 'LC_TELEPHONE': 'sv_SE.UTF-8', 'LC_MONETARY': 'sv_SE.UTF-8', 'LC_NAME':
 'sv_SE.UTF-8', 'MAIL': '/var/mail/root', 'LC_TIME': 'sv_SE.UTF-8',
 'LS_COLORS': '(long terminal colour map elided)'}
 Already using interpreter /usr/bin/python2.7
 New python executable in py27/bin/python2.7
 Also creating executable in py27/bin/python
 Installing setuptools, pip...
   Complete output from command
 /home/vagrant/neutron/.tox/py27/bin/python2.7 -c import sys, pip;
 pip...ll\] + sys.argv[1:]) setuptools pip:
   Traceback (most recent call last):
   File string, line 1, in module
   File
 /usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py,
 line 9, in module
   File
 /usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py,
 line 8, in module
   File
 /usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py,
 line 2696, in module
   File
 /usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py,
 line 429, in __init__
   File
 /usr/local/lib/python2.7/dist-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py,
 line 443, in add_entry
   File
 

Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Jan Provaznik

On 01/07/2014 09:01 PM, James Slagle wrote:

Hi,

I'd like to discuss some possible ways we could install the OpenStack
components from packages in tripleo-image-elements.  As most folks are
probably aware, there is a fork of tripleo-image-elements called
tripleo-puppet-elements which does install using packages, but it does
so using Puppet to do the installation and for managing the
configuration of the installed components.  I'd like to kind of set
that aside for a moment and just discuss how we might support
installing from packages using tripleo-image-elements directly and not
using Puppet.

One idea would be to add support for a new type (or likely 2 new
types: rpm and dpkg) to the source-repositories element.
source-repositories already knows about the git, tar, and file types,
so it seems somewhat natural to have additional types for rpm and
dpkg.

A complication with that approach is that the existing elements assume
they're setting up everything from source.  So, if we take a look at
the nova element, and specifically install.d/74-nova, that script does
stuff like install a nova service, adds a nova user, creates needed
directories, etc.  All of that wouldn't need to be done if we were
installing from rpm or dpkg, b/c presumably the package would take
care of all that.

We could fix that by making the install.d scripts only run if you're
installing a component from source.  In that sense, it might make
sense to add a new hook, source-install.d and only run those scripts
if the type is a source type in the source-repositories configuration.
  We could then have a package-install.d to handle the installation
from the packages type.   The install.d hook could still exist to do
things that might be common to the 2 methods.
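A rough sketch of that hook-selection logic (the hook directory names follow the proposal above; the element layout, function names, and the assumption that anything other than git/tar/file is a package type are hypothetical):

```python
import os
import subprocess

# Types source-repositories already knows about; "rpm"/"dpkg" would be the
# proposed package types (assumption: any non-source type is a package type).
SOURCE_TYPES = {"git", "tar", "file"}

def hooks_to_run(element_dir, install_type):
    """Pick the hook directories that apply to one element.

    install.d always runs (steps common to both methods); source-install.d
    runs only for source installs, package-install.d only for rpm/dpkg.
    """
    hooks = ["install.d"]
    hooks.append("source-install.d" if install_type in SOURCE_TYPES
                 else "package-install.d")
    return [os.path.join(element_dir, h) for h in hooks]

def run_hooks(element_dir, install_type):
    """Run every script in each applicable hook directory, in sorted order."""
    for hook_dir in hooks_to_run(element_dir, install_type):
        if not os.path.isdir(hook_dir):
            continue
        for script in sorted(os.listdir(hook_dir)):
            subprocess.check_call([os.path.join(hook_dir, script)])
```

Under this split, 74-nova's useradd/service/directory setup would move into source-install.d, while the rpm/dpkg path would only need a package-install.d script that invokes the package manager.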

Thoughts on that approach or other ideas?

I'm currently working on a patchset I can submit to help prove it out.
  But, I'd like to start discussion on the approach now to see if there
are other ideas or major opposition to that approach.



Hi James,
I think it would be really nice to be able to install openstack+deps from 
packages, and many users (and cloud providers) would appreciate it.


Among other things, with packages provided by a distro you get more 
stability compared to installing openstack from git repos and fetching 
the newest possible dependencies from pypi.


In a real deployment setup I don't want to use newer packages/dependencies 
than necessary when building images - taking an example from the last few 
days, I wouldn't have had to bother with the newer pip package which 
breaks image building.


Jan



Re: [openstack-dev] Novnc switch to sockjs-client instead of websockify

2014-01-08 Thread Daniel P. Berrange
On Wed, Jan 01, 2014 at 02:33:09PM +0800, Thomas Goirand wrote:
 Hi,
 
 I was wondering if it would be possible for NoVNC to switch from
 websockify to sockjs-client, which is available here:
 
 https://github.com/sockjs/sockjs-client
 
 This has the advantage of not using flash at all (pure javascript), and
 continuing to work on all browsers, with a much cleaner licensing.

What is the problem with licensing of websockify ?

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron] Multiple config files for neutron server

2014-01-08 Thread Sean Dague
On 01/06/2014 02:58 PM, Jay Pipes wrote:
 On Mon, 2014-01-06 at 23:45 +0400, Eugene Nikanorov wrote:
 Hi folks,


 Recently we had a discussion with Sean Dague on the matter.
 Currently Neutron server has a number of configuration files used for
 different purposes:
  - neutron.conf - main configuration parameters, plugins, db and mq
 connections
  - plugin.ini - plugin-specific networking settings
  - conf files for ml2 mechanisms drivers (AFAIK to be able to use
 several mechanism drivers we need to pass all of these conf files to
 neutron server)
  - services.conf - recently introduced conf-file to gather
 vendor-specific parameters for advanced services drivers.
 Particularly, services.conf was introduced to avoid polluting
 'generic' neutron.conf with vendor parameters and sections.


 The discussion with Sean was about whether to add services.conf to
 neutron-server launching command in devstack
 (https://review.openstack.org/#/c/64377/ ). services.conf would be 3rd
 config file that is passed to neutron-server along with neutron.conf
 and plugin.ini.


 Sean has an argument that providing many conf files in a command line
 is not a good practice, suggesting setting up configuration directory
 instead. There is no such capability in neutron right now so I'd like
 to hear opinions on this before putting more efforts in resolving this
 in with other approach than used in the patch on review.
 
 I'd say just put the additional conf file on the command line for now.
 Adding in support to oslo.cfg for a config directory can come later.
 
 Just my 2 cents,

So the net of that is that in a production environment, in order to
change some services, you'd be expected to change the init scripts to
list the right config files.

That seems *really* weird, and also really different from the rest of
OpenStack services. It also means you can't use the oslo config
generator to generate documented samples.

If neutron had been running a grenade job, it would have blocked this
attempted change, because it would require adding config files between
releases.

So this all smells pretty bad to me. Especially in the context of
migration paths from nova (which handles this very differently) to neutron.
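For reference, the semantics being argued over are simply "later config sources override earlier ones"; a config directory is the same mechanism with the file list discovered by a sorted glob instead of listed on the command line. A stdlib sketch (the file and option names here are made up, and oslo.config's real option handling is richer than ConfigParser):

```python
import configparser
import glob
import os
import tempfile

# Hypothetical config fragments standing in for neutron.conf and services.conf.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "10-neutron.conf"), "w") as f:
    f.write("[DEFAULT]\ncore_plugin = ml2\nverbose = false\n")
with open(os.path.join(tmp, "20-services.conf"), "w") as f:
    f.write("[DEFAULT]\nverbose = true\n")

# Whether the list comes from repeated --config-file options or from a
# sorted glob over a config directory, read() applies files in order and
# later files override earlier ones.
cfg = configparser.ConfigParser()
cfg.read(sorted(glob.glob(os.path.join(tmp, "*.conf"))))

print(cfg["DEFAULT"]["core_plugin"], cfg["DEFAULT"]["verbose"])  # ml2 true
```

The only difference between the two approaches is who maintains the list: the init script (command line) or the filesystem (directory).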

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





[openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-08 Thread Nir Yechiel
Hi Dong, 

Can you please clarify this blueprint? Currently in Neutron, If an instance has 
a floating IP, then that will be used for both inbound and outbound traffic. If 
an instance does not have a floating IP, it can make connections out using the 
gateway IP (SNAT using PAT/NAT Overload). Is the idea in this blueprint to 
implement PAT in both directions using only the gateway IP? Also, did you see 
this one [1]? 

Thanks, 
Nir 

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Sean Dague
On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
 On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:
 Should we rather revert patch to make gate working?

 
 I think it is always good to have test packages reside in
 test-requirements.txt. So -1 on reverting that patch.
 
 Here [1] is a temporary solution.
 
 Regards,
 Noorul
 
 [1] https://review.openstack.org/65414

If Solum is trying to be on the road to being an OpenStack project, why
would it go out of its way to introduce an incompatibility in the way
all the actual OpenStack packages work in the gate?

Seems very silly to me, because you'll have to add oslo.sphinx back into
test-requirements.txt the second you want to be considered for incubation.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





[openstack-dev] Cinder unit test failing

2014-01-08 Thread iKhan
Hi,

I am trying to run cinder unit tests via run_tests.sh since my tox has some
issues. Following is  the error I am getting while running run_tests.sh

error in setup command: Error parsing /RMCUT/cinder/havana/setup.cfg:
OSError: [Errno 2] No such file or directory

Running ` python setup.py testr --testr-args='--subunit --concurrency 1  '`

error in setup command: Error parsing /RMCUT/cinder/havana/setup.cfg:
OSError: [Errno 2] No such file or directory


Can anyone help me out?

-- 
Thanks,
Ibad Khan
9686594607


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Ray Sun
Gary,
Thanks. Currently, is our upload speed in the normal range?

Best Regards
-- Ray


On Wed, Jan 8, 2014 at 4:31 PM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 In order for the VM to be booted the image needs to be on a datastore
 accessible by the host. By default the datastore will not have the image.
 This is copied from glance to the datastore. This is most probably where
 the problem is. This may take a while depending on the connectivity between
 the openstack setup and your backend datastore. Once you have done this
 you will see a directory on the datastore called vmware_base. This will
 contain that image. From then on it should be smooth sailing.
 Please note that we are working on a number of things to improve this:

1. Image cache aging (blueprint is implemented and pending review)
    2. Adding a VMware glance datastore, which will greatly improve the
    copy process described above

 Thanks
 Gary

 From: Ray Sun xiaoq...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Wednesday, January 8, 2014 4:30 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Nova][Vmware]Bad Performance when creating a
 new VM

 Stackers,
 I tried to create a new VM using the driver VMwareVCDriver, but I found
 it very slow; for example, a 7GB Windows image took 3 hours.

 Then I tried to use curl to upload a iso to vcenter directly.

 curl -H Expect: -v --insecure --upload-file
 windows2012_server_cn_x64.iso 
 https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2

 The average speed is 0.8 MB/s.

 Finally, I tried to use vSpere web client to upload it, it's only 250 KB/s.

 I am not sure if there any special configurations for web interface for
 vcenter. Please help.

 Best Regards
 -- Ray





Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:

 On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
  On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
  gokrokvertsk...@mirantis.com wrote:
  Should we rather revert patch to make gate working?
 
 
  I think it is always good to have test packages reside in
  test-requirements.txt. So -1 on reverting that patch.
 
  Here [1] is a temporary solution.
 
  Regards,
  Noorul
 
  [1] https://review.openstack.org/65414

 If Solum is trying to be on the road to being an OpenStack project, why
 would it go out of it's way to introduce an incompatibility in the way
 all the actual OpenStack packages work in the gate?

 Seems very silly to me, because you'll have to add oslo.sphinx back into
 test-requirements.txt the second you want to be considered for incubation.


I am not sure why it seems silly to you. We are not removing
oslo.sphinx from the repository in any way. We are just removing it before
installing the packages from test-requirements.txt in the devstack gate.
How does that affect incubation? Am I missing something?

Regards,
Noorul

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net





Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-01-07 07:16, Doug Hellmann wrote:




 On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin michael.ker...@hp.comwrote:

  I have been seeing this problem also.

 My problem is actually with oslo.sphinx. I ran sudo pip install -r
 test-requirements.txt in cinder so that I could run the tests there, which
 installed oslo.sphinx.

 Strange thing is that oslo.sphinx installed a directory called oslo
 in /usr/local/lib/python2.7/dist-packages with no __init__.py file. With
 this package installed like so I get the same error you get with
 oslo.config.


  The oslo libraries use python namespace packages, which manifest
 themselves as a directory in site-packages (or dist-packages) with
 sub-packages but no __init__.py(c). That way oslo.sphinx and oslo.config
 can be packaged separately, but still installed under the oslo directory
 and imported as oslo.sphinx and oslo.config.

 My guess is that installing oslo.sphinx globally (with sudo) set up 2
 copies of the namespace package (one in the global dist-packages and
 presumably one in the virtualenv being used for the tests).
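To make the mechanics concrete, the sketch below shows a namespace package split across two path entries. It uses Python 3's implicit namespace packages (PEP 420) and a made-up ns_demo namespace; the oslo libraries of this era used the older pkg_resources/pkgutil style, but the lookup behaviour under discussion is the same.

```python
import os
import sys
import tempfile

# Two independent "distributions", each shipping one sub-module under the
# same top-level namespace directory. Neither copy of ns_demo contains an
# __init__.py, just like the oslo directory described above.
root = tempfile.mkdtemp()
for dist, module in [("dist_a", "config"), ("dist_b", "sphinx")]:
    pkg = os.path.join(root, dist, "ns_demo")
    os.makedirs(pkg)
    with open(os.path.join(pkg, module + ".py"), "w") as f:
        f.write("NAME = %r\n" % module)
    sys.path.insert(0, os.path.join(root, dist))

# The namespace spans every sys.path entry, so both sub-modules resolve
# even though they live in different directories.
import ns_demo.config
import ns_demo.sphinx

print(ns_demo.config.NAME, ns_demo.sphinx.NAME)  # config sphinx
```

If one copy of the namespace directory is consulted without the machinery that extends the search across all copies (say, a venv copy containing only sphinx), sub-packages living in the other copy become unimportable, which matches the missing-module errors reported in this thread.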

   Actually I think it may be the opposite problem, at least where I'm
 currently running into this.  oslo.sphinx is only installed in the venv and
 it creates a namespace package there.  Then if you try to load oslo.config
 in the venv it looks in the namespace package, doesn't find it, and bails
 with a missing module error.

 I'm personally running into this in tempest - I can't even run pep8 out of
 the box because the sample config check fails due to missing oslo.config.
 Here's what I'm seeing:

 In the tox venv:
 (pep8)[fedora@devstack site-packages]$ ls oslo*
 oslo.sphinx-1.1-py2.7-nspkg.pth

 oslo:
 sphinx

 oslo.sphinx-1.1-py2.7.egg-info:
 dependency_links.txt  namespace_packages.txt  PKG-INFO top_level.txt
 installed-files.txt   not-zip-safeSOURCES.txt


 And in the system site-packages:
 [fedora@devstack site-packages]$ ls oslo*
 oslo.config.egg-link  oslo.messaging.egg-link


 Since I don't actually care about oslo.sphinx in this case, I also found
 that deleting it from the venv fixes the problem, but obviously that's just
 a hacky workaround.  My initial thought is to install oslo.sphinx in
 devstack the same way as oslo.config and oslo.messaging, but I assume
 there's a reason we didn't do it that way in the first place so I'm not
 sure if that will work.

 So I don't know what the proper fix is, but I thought I'd share what I've
 found so far.  Also, I'm not sure if this even relates to the ceilometer
 issue since I wouldn't expect that to be running in a venv, but it may have
 a similar issue.


I wonder if the issue is actually that we're using pip install -e for
oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
things work properly if those packages are installed to the global
site-packages from PyPI instead? We don't want to change the way devstack
installs them, but it would give us another data point.

Another solution is to have a list of dependencies needed for building
documentation, separate from the tests, since oslo.sphinx isn't needed for
the tests.

Doug




 -Ben






[openstack-dev] [Neutron][LBaaS] Weekly meeting Thursday 09.01.2014

2014-01-08 Thread Eugene Nikanorov
Hi neutrons,

Lets continue keeping our regular lbaas meetings. Let's gather on
#openstack-meeting at 14-00 UTC on this Thursday, 09.01.2014.

We'll discuss our progress and future plans.

Thanks,
Eugene.


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Sergey Skripnick





I'd like to explore whether the paramiko team will accept this code (or
something like it). This seems like a perfect opportunity for us to
contribute upstream.


+1

The patch is not big and the code seems simple and reasonable enough
to live within paramiko.

Cheers,
FF




I sent a pull request [0] but there are two things:

 - nobody knows when (and if) it will be merged
 - it is still a bit low-level, unlike a patch in oslo

About spur: spur looks ok, but it is a bit complicated inside (it uses
separate threads for non-blocking stdin/stderr reading [1]) and I don't
know how it would work with eventlet.

[0] https://github.com/paramiko/paramiko/pull/245
[1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22
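For anyone curious what "separate threads for non-blocking reading" means in practice, here is a minimal version of the pattern (not spur's actual code): a daemon thread drains the child's stdout into a queue so the caller never blocks on read(). Under eventlet, native threads like this are exactly the interaction Sergey is questioning.

```python
import queue
import subprocess
import threading

def stream_reader(stream, out_queue):
    # Drain the stream line by line; iter() stops at EOF (empty bytes).
    for line in iter(stream.readline, b""):
        out_queue.put(line)
    stream.close()

proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
lines = queue.Queue()
reader = threading.Thread(target=stream_reader, args=(proc.stdout, lines),
                          daemon=True)
reader.start()
proc.wait()
reader.join()

first = lines.get(timeout=5)
print(first.decode().strip())  # hello
```

The caller polls the queue at its leisure; the thread absorbs the blocking read, which is what keeps large outputs from deadlocking the pipe.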

--
Regards,
Sergey Skripnick



Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Matt Riedemann



On Tuesday, January 07, 2014 4:53:01 PM, Michael Still wrote:

Hi. Thanks for reaching out about this.

It seems this patch has now passed turbo hipster, so I am going to
treat this as a more theoretical question than perhaps you intended. I
should note though that Joshua Hesketh and I have been trying to read
/ triage every turbo hipster failure, but that has been hard this week
because we're both at a conference.

The problem this patch faced is that we are having trouble defining
what is a reasonable amount of time for a database migration to run
for. Specifically:

2014-01-07 14:59:32,012 [output] 205 - 206...
2014-01-07 14:59:32,848 [heartbeat]
2014-01-07 15:00:02,848 [heartbeat]
2014-01-07 15:00:32,849 [heartbeat]
2014-01-07 15:00:39,197 [output] done

So applying migration 206 took slightly over a minute (67 seconds).
Our historical data (mean + 2 standard deviations) says that this
migration should take no more than 63 seconds. So this only just
failed the test.
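The threshold rule as described ("mean + 2 standard deviations") is easy to reproduce. The timings below are invented, and whether turbo hipster uses the sample or population standard deviation is not stated; population standard deviation is assumed here:

```python
import statistics

# Invented historical runtimes (seconds) for a single migration number.
history = [52, 55, 57, 58, 60, 54, 56, 59]

# The rule as described: flag any run slower than mean + 2 * stddev.
threshold = statistics.mean(history) + 2 * statistics.pstdev(history)
print(round(threshold, 1))  # 61.4
```

With these invented numbers the limit comes out around 61 seconds, so a 67-second run would fail, mirroring the narrow miss (67 s against a 63 s limit) described above.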

However, we know there are issues with our methodology -- we've tried
normalizing for disk IO bandwidth and it hasn't worked out as well as
we'd hoped. This week's plan is to try to use mysql performance schema
instead, but we have to learn more about how it works first.

I apologise for this mis-vote.

Michael

On Wed, Jan 8, 2014 at 1:44 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:



On 12/30/2013 6:21 AM, Michael Still wrote:


Hi.

The purpose of this email is to apologise for some incorrect -1 review
scores which turbo hipster sent out today. I think it's important when
a third party testing tool is new to not have flaky results as people
learn to trust the tool, so I want to explain what happened here.

Turbo hipster is a system which takes nova code reviews, and runs
database upgrades against them to ensure that we can still upgrade for
users in the wild. It uses real user datasets, and also times
migrations and warns when they are too slow for large deployments. It
started voting on gerrit in the last week.

Turbo hipster uses zuul to learn about reviews in gerrit that it
should test. We run our own zuul instance, which talks to the
openstack.org zuul instance. This then hands out work to our pool of
testing workers. Another thing zuul does is it handles maintaining a
git repository for the workers to clone from.

This is where things went wrong today. For reasons I can't currently
explain, the git repo on our zuul instance ended up in a bad state (it
had a patch merged to master which wasn't in fact merged upstream
yet). As this code is stock zuul from openstack-infra, I have a
concern this might be a bug that other zuul users will see as well.

I've corrected the problem for now, and kicked off a recheck of any
patch with a -1 review score from turbo hipster in the last 24 hours.
I'll talk to the zuul maintainers tomorrow about the git problem and
see what we can learn.

Thanks heaps for your patience.

Michael



How do I interpret the warning and -1 from turbo-hipster on my patch here
[1] with the logs here [2]?

I'm inclined to just do 'recheck migrations' on this since this patch
doesn't have anything to do with this -1 as far as I can tell.

[1] https://review.openstack.org/#/c/64725/4/
[2]
https://ssl.rcbops.com/turbo_hipster/logviewer/?q=/turbo_hipster/results/64/64725/4/check/gate-real-db-upgrade_nova_mysql_user_001/5186e53/user_001.log

--

Thanks,

Matt Riedemann








Another question.  This patch [1] failed turbo-hipster after it was 
approved but I don't know if that's a gating or just voting job, i.e. 
should someone do 'reverify migrations' on that patch or just let it 
sit and ignore turbo-hipster?


[1] https://review.openstack.org/#/c/59824/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Russell Bryant
On 01/08/2014 04:52 AM, Jay Lau wrote:
 Greetings,
 
 I have a question related to cold migration.
 
 Now in OpenStack nova, we support live migration, cold migration and resize.
 
 For live migration, we do not need to confirm after live migration finished.
 
 For resize, we need to confirm, as we want to give end user an
 opportunity to rollback.
 
 The problem is cold migration, because cold migration and resize share
 same code path, so once I submit a cold migration request and after the
 cold migration finished, the VM will go to the verify_resize state, and I
 need to confirm resize. I felt a bit confused by this, why do I need to
 verify resize for a cold migration operation? Why not reset the VM to
 original state directly after cold migration?

The confirm step definitely makes more sense for the resize case.  I'm
not sure if there was a strong reason why it was also needed for cold
migration.

If nobody comes up with a good reason to keep it, I'm fine with removing
it.  It can't be changed in the v2 API, though.  This would be a v3 only
change.

 Also, I think that probably we need split compute.api.resize() to two
 apis: one is for resize and the other is for cold migrations.
 
 1) The VM state can be either ACTIVE and STOPPED for a resize operation
 2) The VM state must be STOPPED for a cold migrate operation.

I'm not sure why we would require different states here, though.  ACTIVE
and STOPPED are allowed now.

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Sean Dague

On 01/08/2014 09:26 AM, Noorul Islam Kamal Malmiyoda wrote:


On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:
 
  On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
   On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
   gokrokvertsk...@mirantis.com wrote:
   Should we rather revert patch to make gate working?
  
  
   I think it is always good to have test packages reside in
   test-requirements.txt. So -1 on reverting that patch.
  
   Here [1] is a temporary solution.
  
   Regards,
   Noorul
  
   [1] https://review.openstack.org/65414
 
  If Solum is trying to be on the road to being an OpenStack project, why
  would it go out of it's way to introduce an incompatibility in the way
  all the actual OpenStack packages work in the gate?
 
  Seems very silly to me, because you'll have to add oslo.sphinx back into
  test-requirements.txt the second you want to be considered for
incubation.
 

I am not sure why it seems silly to you. We are not anyhow removing
oslo.sphinx from the repository. We are just removing it before
installing the packages from test-requirements.txt in the devstack gate.
How does that affects incubation? Am I missing something?


So maybe I'm missing something. I don't see how the patch in question or 
the mailing list thread is related to the solum fail. Perhaps being more 
specific about why removing oslo.sphinx from test-requirements.txt is 
the right work around would be good. Because the nature of the fix (hot 
patching requirements) means that by nature something is not working as 
designed.


As far as I can tell this is just an ordering issue. So figure out the 
correct order that things need to happen in.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread John Garbutt
On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
 In nova/compute/api.py#2289, function resize, there's a parameter named
 flavor_id; if it is None, the operation is considered a cold migration.
 Thus, nova should skip resize verifying. However, it doesn't.

 Like Jay said, we should skip this step during cold migration, does it make
 sense?

Not sure.

 On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:

 Greetings,

 I have a question related to cold migration.

 Now in OpenStack nova, we support live migration, cold migration and
 resize.

 For live migration, we do not need to confirm after live migration
 finished.

 For resize, we need to confirm, as we want to give end user an opportunity
 to rollback.

 The problem is cold migration, because cold migration and resize share
 same code path, so once I submit a cold migration request and after the cold
 migration finished, the VM will goes to verify_resize state, and I need to
 confirm resize. I felt a bit confused by this, why do I need to verify
 resize for a cold migration operation? Why not reset the VM to original
 state directly after cold migration?

I think the idea was to allow users/admins to check everything went OK,
and only delete the original VM when they have confirmed the move went
OK.

I thought there was an auto_confirm setting. Maybe you want
auto_confirm cold migrate, but not auto_confirm resize?

 Also, I think that probably we need split compute.api.resize() to two
 apis: one is for resize and the other is for cold migrations.

 1) The VM state can be either ACTIVE and STOPPED for a resize operation
 2) The VM state must be STOPPED for a cold migrate operation.

We just stop the VM, then perform the migration.
I don't think we need to require that it's stopped first.
Am I missing something?

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Gary Kotton


From: Ray Sun xiaoq...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, January 8, 2014 4:09 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new 
VM

Gary,
Thanks. Currently, is our upload speed in the normal range?

[Gary] 3 hours for a 7G file is far too long. For testing I have a 1G image. This 
takes about 2 minutes to upload to the cache. Please note that I am running on 
a virtual setup, so things take far longer than they would on bare metal.


Best Regards
-- Ray


On Wed, Jan 8, 2014 at 4:31 PM, Gary Kotton 
gkot...@vmware.com wrote:
Hi,
In order for the VM to be booted, the image needs to be on a datastore 
accessible by the host. By default the datastore will not have the image; it is 
copied from glance to the datastore. This is most probably where the problem 
is. This may take a while depending on the connectivity between the OpenStack 
setup and your backend datastore. Once you have done this you will see a 
directory on the datastore called vmware_base. This will contain that image. 
From then on it should be smooth sailing.
Please note that we are working on a number of things to improve this:

 1.  Image cache aging (blueprint is implemented and pending review)
 2.  Adding a VMware glance datastore – which will greatly improve the copy 
process described above

Thanks
Gary

From: Ray Sun xiaoq...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, January 8, 2014 4:30 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

Stackers,
I tried to create a new VM using the driver VMwareVCDriver, but I found it's 
very slow; for example, a 7GB Windows image took 3 hours.

Then I tried to use curl to upload an ISO to vCenter directly.

curl -H "Expect:" -v --insecure --upload-file windows2012_server_cn_x64.iso 
"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2"

The average speed is 0.8 MB/s.

Finally, I tried to use the vSphere web client to upload it; it's only 250 KB/s.

I am not sure if there are any special configurations for the vCenter web 
interface. Please help.

Best Regards
-- Ray

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Anne Gentle
On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda 
noo...@noorul.com wrote:


 On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:
 
  On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
   On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
   gokrokvertsk...@mirantis.com wrote:
   Should we rather revert patch to make gate working?
  
  
   I think it is always good to have test packages reside in
   test-requirements.txt. So -1 on reverting that patch.
  
   Here [1] is a temporary solution.
  
   Regards,
   Noorul
  
   [1] https://review.openstack.org/65414
 
  If Solum is trying to be on the road to being an OpenStack project, why
  would it go out of it's way to introduce an incompatibility in the way
  all the actual OpenStack packages work in the gate?
 
  Seems very silly to me, because you'll have to add oslo.sphinx back into
  test-requirements.txt the second you want to be considered for
 incubation.
 

  I am not sure why it seems silly to you. We are not removing
  oslo.sphinx from the repository; we are just removing it before installing
  the packages from test-requirements.txt in the devstack gate. How does
  that affect incubation? Am I missing something?


Docs are a requirement, and contributor docs are required for applying for
incubation. [1] Typically these are built through Sphinx, and consistency is
gained through oslo.sphinx; eventually we can also offer consistent
extensions. So a perception that you're skipping docs would be a poor
reflection on your incubation application. I don't think that's what's
happening here, but I want to be sure you understand the consistency and
doc needs.

See also
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html
for similar issues; we're trying to figure out the best solution. Stay
tuned.

Thanks,
Anne


1.
https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements

 Regards,
 Noorul

  -Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Nova][Docker] Devstack with docker driver

2014-01-08 Thread Eric Windisch
On Tue, Jan 7, 2014 at 11:13 PM, Swapnil Kulkarni 
swapnilkulkarni2...@gmail.com wrote:

 Let me know in case I can be of any help getting this resolved.


Please try running the failing 'docker run' command manually and without
the '-d' argument. I've been able to reproduce an error myself, but wish
to confirm that this matches the error you're seeing.

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Sean Dague

On 01/08/2014 09:48 AM, Matt Riedemann wrote:




Another question.  This patch [1] failed turbo-hipster after it was
approved but I don't know if that's a gating or just voting job, i.e.
should someone do 'reverify migrations' on that patch or just let it sit
and ignore turbo-hipster?

[1] https://review.openstack.org/#/c/59824/


So instead of trying to fix the individual runs, because t-h runs pretty 
fast, can you just fix it in bulk? It seems like the issue of a 
migration taking a long time isn't a race in OpenStack; it's complete 
variability in the underlying system.


And it seems that the failing case is going to be 100% repeatable, and 
infrequent.


So it seems like you could solve the fail side by only reporting fail 
results on 3 fails in a row: RESULT && RESULT && RESULT


Especially valid if Results are coming from different AZs, so any local 
issues should be masked.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread David Xie
On Wednesday, 8 January, 2014 at 22:53, John Garbutt wrote:
 On 8 January 2014 10:02, David Xie david.script...@gmail.com 
 (mailto:david.script...@gmail.com) wrote:
  In nova/compute/api.py#2289, function resize, there's a parameter named
  flavor_id, if it is None, it is considered as cold migration. Thus, nova
  should skip resize verifying. However, it doesn't.
   
  Like Jay said, we should skip this step during cold migration, does it make
  sense?
   
  
  
 Not sure.
  
  On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com 
  (mailto:jay.lau@gmail.com) wrote:

   Greetings,

   I have a question related to cold migration.

   Now in OpenStack nova, we support live migration, cold migration and
   resize.

   For live migration, we do not need to confirm after live migration
   finished.

   For resize, we need to confirm, as we want to give end user an opportunity
   to rollback.

   The problem is cold migration, because cold migration and resize share
   same code path, so once I submit a cold migration request and after the 
   cold
   migration finished, the VM will goes to verify_resize state, and I need to
   confirm resize. I felt a bit confused by this, why do I need to verify
   resize for a cold migration operation? Why not reset the VM to original
   state directly after cold migration?

   
   
  
  
 I think the idea was allow users/admins to check everything went OK,
 and only delete the original VM when the have confirmed the move went
 OK.
  
 I thought there was an auto_confirm setting. Maybe you want
 auto_confirm cold migrate, but not auto_confirm resize?
  
[David] If the user runs the cold migration command via the CLI, confirmation does 
make sense. But what if this action is called by a service or another process? 
There's no chance for the user to confirm it, and maybe it's better to 
auto-confirm it.

BTW, is there an auto_confirm setting for cold migration? If so, that's all 
I need.
  
   Also, I think that probably we need split compute.api.resize() to two
   apis: one is for resize and the other is for cold migrations.

   1) The VM state can be either ACTIVE and STOPPED for a resize operation
   2) The VM state must be STOPPED for a cold migrate operation.

   
  
  
 We just stop the VM them perform the migration.
 I don't think we need to require its stopped first.
 Am I missing something?
  
 Thanks,
 John
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread James Slagle
On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 8 January 2014 12:18, James Slagle james.sla...@gmail.com wrote:
 Sure, the crux of the problem was likely that versions in the distro
 were too old and they needed to be updated.  But unless we take on
 building the whole OS from source/git/whatever every time, we're
 always going to have that issue.  So, an additional benefit of
 packages is that you can install a known good version of an OpenStack
 component that is known to work with the versions of dependent
 software you already have installed.

 The problem is that OpenStack is building against newer stuff than is
 in distros, so folk building on a packaging toolchain are going to
 often be in catchup mode - I think we need to anticipate package based
 environments running against releases rather than CD.

I just don't see anyone not building on a packaging toolchain, given
that we're all running the distro of our choice and pip/virtualenv/etc.
are installed from distro packages.  Trying to isolate the building of
components with pip-installed virtualenvs was still a problem.  Short
of uninstalling the build-tools packages from the cloud image and then
wget'ing the pip tarball, I don't think there would have been a good
way around this particular problem.  That approach may certainly make
some sense for a CD scenario, though.

Agreed that packages against releases makes sense.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Matt Riedemann
I'd like to propose that we add another item to the list here [1] that 
is basically related to what happens when the 3rd party CI job votes a 
-1 on your patch.  This would include:


1. Documentation on how to analyze the results and a good overview of 
what the job does (like the docs we have for check/gate testing now).

2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
3. Who to contact if you can't figure out what's going on with the job.

Ideally this information would be in the comments when the job scores a 
-1 on your patch, or at least it would leave a comment with a link to a 
wiki for that job like we have with Jenkins today.


I'm all for more test coverage but we need some solid documentation 
around that when it's not owned by the community so we know what to do 
with the results if they seem like false negatives.


If no one is against this or has something to add, I'll update the wiki.

[1] 
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa ildiko.van...@ericsson.com wrote:

 Hi,

 I've started to work on the idea of supporting a kind of tenant/project
 based configuration for Ceilometer. Unfortunately I haven't yet reached the
 point of having a blueprint ready to register. I do not have deep
 knowledge of the collector and compute agent services, but
 this feature would require some deep changes for sure. Currently there are
 pipelines for data collection and transformation, where you can specify
 which counters to collect, the time interval for data collection, and so
 on. These pipelines are configured globally in the pipeline.yaml file,
 which is stored right next to the
 Ceilometer configuration files.
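For readers unfamiliar with it, a pipeline.yaml of the kind described above looks roughly like this (illustrative fragment only; the source name, interval value, and meter names are assumptions):

```yaml
sources:
    - name: cpu_source
      interval: 600        # seconds between polls
      meters:
          - "cpu"
      sinks:
          - cpu_sink
```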


Yes, the data collection was designed to be configured and controlled by
the deployer, not the tenant. What benefits do we gain by giving that
control to the tenant?



 In my view, we could keep the dynamic meter configuration bp, considering
 extending it to dynamic configuration of Ceilometer as a whole, not just
 the meters, and we could have a separate bp for the project-based
 configuration of meters.


Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
are the needs for dynamic configuration updates in ceilometer different
from the other services?

Doug




 If it is OK with you, I will register the bp for these per-project tenant
 settings with some details, when I'm finished with the initial design of
 how this feature could work.

 Best Regards,
 Ildiko

 -Original Message-
 From: Neal, Phil [mailto:phil.n...@hp.com]
 Sent: Tuesday, January 07, 2014 11:50 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

 For multi-node deployments, implementing something like inotify would
 allow administrators to push configuration changes out to multiple targets
 using puppet/chef/etc. and have the daemons pick it up without restart.
 Thumbs up to that.

 As Tim Bell suggested, API-based enabling/disabling would allow users to
 update meters via script, but then there's the question of how to work out
 the global vs. per-project tenant settings...right now we collect specified
 meters for all available projects, and the API returns whatever data is
 stored minus filtered values. Maybe I'm missing something in the
 suggestion, but turning off collection for an individual project seems like
 it'd require some deep changes.

 Vijay, I'll repeat dhellmann's request: do you have more detail in another
 doc? :-)

 -   Phil

  -Original Message-
  From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
  [mailto:vijayakumar.kodam@nsn.com]
  Sent: Tuesday, January 07, 2014 2:49 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: chmo...@enovance.com
  Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
  From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
  Sent: Monday, January 06, 2014 2:19 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 
 
 
  On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata
  Consultancy Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote:
 
  In this case, simply changing the meter properties in a configuration
  file should be enough. There should be an inotify signal which shall
  notify ceilometer of the changes in the config file. Then ceilometer
  should automatically update the meters without restarting.
 
 
 
  Why it cannot be something configured by the admin with inotifywait(1)
  command?
 
 
 
  Or this can be an API call for enabling/disabling meters which could
  be more useful without having to change the config files.
 
 
 
  Chmouel.
 
 
 
  I haven't tried inotifywait() in this implementation. I need to check
  if it will be useful for the current implementation.
 
  Yes. API call could be more useful than changing the config files
 manually.
 
 
 
  Thanks,
 
  VijayKumar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam K M
Anne Gentle a...@openstack.org writes:

 On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda 
 noo...@noorul.com wrote:


 On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:
 
  On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
   On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
   gokrokvertsk...@mirantis.com wrote:
   Should we rather revert patch to make gate working?
  
  
   I think it is always good to have test packages reside in
   test-requirements.txt. So -1 on reverting that patch.
  
   Here [1] is a temporary solution.
  
   Regards,
   Noorul
  
   [1] https://review.openstack.org/65414
 
  If Solum is trying to be on the road to being an OpenStack project, why
  would it go out of it's way to introduce an incompatibility in the way
  all the actual OpenStack packages work in the gate?
 
  Seems very silly to me, because you'll have to add oslo.sphinx back into
  test-requirements.txt the second you want to be considered for
 incubation.
 

 I am not sure why it seems silly to you. We are not anyhow removing
 oslo.sphinx from the repository. We are just removing it before installing
 the packages from test-requirements.txt

 in the devstack gate. How does that affects incubation? Am I missing
 something?


 Docs are a requirement, and contributor docs are required for applying for
 incubation. [1] Typically these are built through Sphinx and consistency is
 gained through oslo.sphinx, also eventually we can offer consistent
 extensions. So a perception that you're skipping docs would be a poor
 reflection on your incubation application. I don't think that's what's
 happening here, but I want to be sure you understand the consistency and
 doc needs.

 See also
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html
 for similar issues, we're trying to figure out the best solution. Stay
 tuned.


I have seen that, and also posted the solum issue [1] there yesterday. I started
this thread to reach consensus on making the solum devstack gate non-voting
until the issue gets fixed. I also proposed a temporary solution [2] with
which we can solve the issue for the time being. Since the gate is
failing for all patches, every patch is affected.

Regards,
Noorul

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
[2] https://review.openstack.org/65414



 1.
 https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements

 Regards,
 Noorul

  -Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
Thanks Russell. OK, will file a bug for the first issue.

For the second question, I want to share some of my comments here. I think that
we should disable cold migration for an ACTIVE VM, as cold migrating will
first destroy the VM and then re-create it when using KVM; I did not see a
use case where someone would want to do that.

Even further, this might confuse end users; it's really strange that both
cold migration and live migration can migrate an ACTIVE VM. Cold migration
should only target STOPPED VM instances.

What do you think?

Thanks,

Jay



2014/1/8 Russell Bryant rbry...@redhat.com

 On 01/08/2014 04:52 AM, Jay Lau wrote:
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
 resize.
 
  For live migration, we do not need to confirm after live migration
 finished.
 
  For resize, we need to confirm, as we want to give end user an
  opportunity to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after the
  cold migration finished, the VM will goes to verify_resize state, and I
  need to confirm resize. I felt a bit confused by this, why do I need to
  verify resize for a cold migration operation? Why not reset the VM to
  original state directly after cold migration?

 The confirm step definitely makes more sense for the resize case.  I'm
 not sure if there was a strong reason why it was also needed for cold
 migration.

 If nobody comes up with a good reason to keep it, I'm fine with removing
 it.  It can't be changed in the v2 API, though.  This would be a v3 only
 change.

  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

 I'm not sure why would require different states here, though.  ACTIVE
 and STOPPED are allowed now.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
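The split Jay proposes could be sketched as two separate entry points — a hypothetical illustration only, with the state rules taken from the thread (resize allows ACTIVE or STOPPED; cold migrate would require STOPPED):

```python
# Hypothetical sketch of splitting compute.api.resize() into two APIs,
# per the proposal above; everything beyond the ACTIVE/STOPPED state
# names is invented for illustration.

def resize(instance, flavor_id):
    if instance["vm_state"] not in ("ACTIVE", "STOPPED"):
        raise ValueError("resize requires an ACTIVE or STOPPED VM")
    return {"action": "resize", "flavor_id": flavor_id}

def cold_migrate(instance):
    if instance["vm_state"] != "STOPPED":
        raise ValueError("cold migrate requires a STOPPED VM")
    return {"action": "cold_migrate"}
```

Russell's point stands, though: the current code allows both states for both operations, so whether cold migrate should reject ACTIVE VMs is exactly what the thread is debating.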

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 9:34 AM, Sergey Skripnick sskripn...@mirantis.com wrote:




 I'd like to explore whether the paramiko team will accept this code (or
 something like it). This seems like a perfect opportunity for us to
 contribute
 upstream.


 +1

 The patch is not big and the code seems simple and reasonable enough
 to live within paramiko.

 Cheers,
 FF



 I sent a pull request [0], but there are two things:

  nobody knows when (and if) it will be merged
  it is still a bit low-level, unlike a patch in oslo


Let's give the paramkio devs a little time to review it.



 About spur: spur looks OK, but it is a bit complicated inside (it uses
 separate threads for non-blocking stdin/stderr reading [1]) and I don't
 know how it would work with eventlet.


That does sound like it might cause issues. What would we need to do to
test it?

Doug




 [0] https://github.com/paramiko/paramiko/pull/245
 [1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22


 --
 Regards,
 Sergey Skripnick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Jay Lau
2014/1/8 John Garbutt j...@johngarbutt.com

 On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
  In nova/compute/api.py#2289, function resize, there's a parameter named
  flavor_id, if it is None, it is considered as cold migration. Thus, nova
  should skip resize verifying. However, it doesn't.
 
  Like Jay said, we should skip this step during cold migration, does it
 make
  sense?

 Not sure.

  On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:
 
  Greetings,
 
  I have a question related to cold migration.
 
  Now in OpenStack nova, we support live migration, cold migration and
  resize.
 
  For live migration, we do not need to confirm after live migration
  finished.
 
  For resize, we need to confirm, as we want to give end user an
 opportunity
  to rollback.
 
  The problem is cold migration, because cold migration and resize share
  same code path, so once I submit a cold migration request and after the
 cold
  migration finished, the VM will goes to verify_resize state, and I need
 to
  confirm resize. I felt a bit confused by this, why do I need to verify
  resize for a cold migration operation? Why not reset the VM to original
  state directly after cold migration?

 I think the idea was allow users/admins to check everything went OK,
 and only delete the original VM when the have confirmed the move went
 OK.

 I thought there was an auto_confirm setting. Maybe you want
 auto_confirm cold migrate, but not auto_confirm resize?







[Jay] John, yes, that can also reach my goal. Now we only have
resize_confirm_window to handle auto confirm, without considering whether it
is a resize or a cold migration:

# Automatically confirm resizes after N seconds. Set to 0 to
# disable. (integer value)
#resize_confirm_window=0

Perhaps we can add another parameter, say cold_migrate_confirm_window, to
handle confirm for cold migration.


  Also, I think that probably we need split compute.api.resize() to two
  apis: one is for resize and the other is for cold migrations.
 
  1) The VM state can be either ACTIVE and STOPPED for a resize operation
  2) The VM state must be STOPPED for a cold migrate operation.

 We just stop the VM them perform the migration.
 I don't think we need to require its stopped first.
 Am I missing something?

[Jay] Yes, but just curious why someone would want to cold migrate an ACTIVE VM?
They can use live migration instead, which also makes sure the VM
migrates seamlessly.


 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Implement NAPT in neutron (https://blueprints.launchpad.net/neutron/+spec/neutron-napt-api)

2014-01-08 Thread Dong Liu

On 8 Jan 2014, at 20:24, Nir Yechiel nyech...@redhat.com wrote:

 Hi Dong,
 
 Can you please clarify this blueprint? Currently in Neutron, If an instance 
 has a floating IP, then that will be used for both inbound and outbound 
 traffic. If an instance does not have a floating IP, it can make connections 
 out using the gateway IP (SNAT using PAT/NAT Overload). Does the idea in this 
 blueprint is to implement PAT on both directions using only the gateway IP? 
 Also, did you see this one [1]? 
 
 Thanks,
 Nir
 
 [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding


I think my idea duplicates this one:
https://blueprints.launchpad.net/neutron/+spec/access-vms-via-port-mapping

Sorry for missing it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Eric Windisch



 About spur: spur is looks ok, but it a bit complicated inside (it uses
 separate threads for non-blocking stdin/stderr reading [1]) and I don't
 know how it would work with eventlet.


 That does sound like it might cause issues. What would we need to do to
 test it?


Looking at the code, I don't expect it to be an issue. The monkey-patching
will cause eventlet.spawn to be called for threading.Thread. The code looks
eventlet-friendly enough on the surface. Error handling around file
read/write could be affected, but it also looks fine.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
Hi,

-Original Message-
From: ext Neal, Phil [mailto:phil.n...@hp.com] 
Sent: Wednesday, January 08, 2014 12:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer


For multi-node deployments, implementing something like inotify would allow 
administrators to push configuration changes out to multiple targets using 
puppet/chef/etc. and have the daemons pick it up without restart. Thumbs up 
to that.


Thanks!

As Tim Bell suggested, API-based enabling/disabling would allow users to 
update meters via script, but then there's the question of how to work out the 
global vs. per-project tenant settings...right now we collect specified 
meters for all available projects, and the API returns whatever data is stored 
minus filtered values. Maybe I'm missing something in the suggestion, but 
turning off collection for an individual project seems like it'd require some 
deep changes.

Vijay, I'll repeat dhellmann's request: do you have more detail in another 
doc? :-)

-  Phil

I concur with the opinion to use APIs for dynamically enabling/disabling 
meters. I have updated the design accordingly.

According to the latest update:
The user calls the API(1) to disable a meter, passing a meter id. 
Ceilometer-api handles the API request, adds the meter id to the disabled_meters 
config file, and informs the ceilometer agents. 
The ceilometer agents then read the disabled_meters config file and disable the 
meter.

More detailed information about this blueprint can be found at
https://etherpad.openstack.org/p/dynamic-meters

There will be no inotify() or inotifywait() calls to monitor the modifications 
of the configuration file.
Whenever the APIs are called, ceilometer-api, upon receiving the request, will 
modify the config file and inform the ceilometer agents.
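A minimal sketch of that flow (the file format and function names here are illustrative, not from the blueprint; the real design is in the etherpad linked above):

```python
import os
import tempfile

# Sketch: ceilometer-api appends a meter id to a disabled_meters file, and
# the agents consult that file when deciding what to poll.

def disable_meter(config_path, meter_id):
    """What the API service would do on a 'disable meter' request."""
    with open(config_path, "a") as f:
        f.write(meter_id + "\n")
    # ...then inform the agents (e.g. over RPC) that the file changed

def load_disabled(config_path):
    """What an agent would do when informed of a change."""
    with open(config_path) as f:
        return {line.strip() for line in f if line.strip()}

def should_poll(meter_id, disabled):
    return meter_id not in disabled

cfg = tempfile.NamedTemporaryFile(delete=False).name
disable_meter(cfg, "cpu_util")
disabled = load_disabled(cfg)
print(should_poll("cpu_util", disabled))      # False: meter is disabled
print(should_poll("memory.usage", disabled))  # True: still collected
os.unlink(cfg)
```

The design choice worth noting is that the config file stays the single source of truth: the API only edits it and signals the agents, so an agent restarted later converges to the same state.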

There are no per-project settings currently considered for this blueprint. 
IMHO, per-project settings should be implemented for all the 
meters/resources/APIs in ceilometer and should be handled by a different 
blueprint.

Regards,
VijayKumar

 -Original Message-
 From: Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
 [mailto:vijayakumar.kodam@nsn.com]
 Sent: Tuesday, January 07, 2014 2:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: chmo...@enovance.com
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 From: ext Chmouel Boudjnah [mailto:chmo...@enovance.com]
 Sent: Monday, January 06, 2014 2:19 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 
 
 
 On Mon, Jan 6, 2014 at 12:52 PM, Kodam, Vijayakumar (EXT-Tata Consultancy
 Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote:
 
 In this case, simply changing the meter properties in a configuration file
 should be enough. There should be an inotify signal which shall notify
 ceilometer of the changes in the config file. Then ceilometer should
 automatically update the meters without restarting.
 
 
 
 Why it cannot be something configured by the admin with inotifywait(1)
 command?
 
 
 
 Or this can be an API call for enabling/disabling meters which could be more
 useful without having to change the config files.
 
 
 
 Chmouel.
 
 
 
 I haven't tried inotifywait() in this implementation. I need to check if it 
 will be
 useful for the current implementation.
 
 Yes. API call could be more useful than changing the config files manually.
 
 
 
 Thanks,
 
 VijayKumar



Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)


From: ext Tim Bell [mailto:tim.b...@cern.ch]
Sent: Tuesday, January 07, 2014 8:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer


Thinking about using inotify/configuration file changes to implement dynamic 
meters: this would be limited to administrators of ceilometer itself (i.e. with 
write access to the file) rather than the project administrators (as defined by 
keystone roles). Thus, as a project administrator who is not the cloud admin, I 
could not enable/disable the meter for a project only.

It would also mean that scripting meters on/off would not be possible without 
an API to perform this.

I'm not sure how significant these requirements are, or what their impact on 
implementation complexity would be, but they may be relevant in scoping out the 
blueprint and subsequent changes.

Tim
Tim,

Agree with your suggestion. I have updated the design by adding APIs. Whenever 
an API request is received by the ceilometer-api, it shall modify the config 
file and inform the ceilometer agents.
You can find detailed information at
https://etherpad.openstack.org/p/dynamic-meters

Regards,
VijayKumar

From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: 06 January 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Tue, Dec 31, 2013 at 4:53 AM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - 
FI/Espoo) vijayakumar.kodam@nsn.com 
wrote:
Hi,

Currently there is no way to enable or disable meters without restarting 
ceilometer.

There are cases where operators do not want to run all the meters continuously.
In these cases, there should be a way to disable or enable them dynamically.

We are working on this feature right now. I have also created a blueprint for 
the same.
https://blueprints.launchpad.net/ceilometer/+spec/dynamic-meters

We would love to hear your views on this feature.

There isn't much detail in the blueprint. Do you have a more comprehensive 
document you can link to that talks about how you intend for it to work?

Doug



Regards,
VijayKumar Kodam






Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Jay Dobies
There were so many places in this thread that I wanted to jump in on as 
I caught up that it makes sense to just summarize things in one place 
instead of a half dozen quoted replies.


I agree with the sentiments about flexibility. Regardless of my personal 
preference on source v. packages, it's been my experience that the 
general mindset of production deployment is that new ideas move slowly. 
Admins are set in their ways and policies are in place on how things are 
consumed.


Maybe the newness of all things cloud-related and image-based management 
for scale is a good time to shift the mentality out of packages (again, 
I'm not suggesting whether or not it should be shifted). But I worry 
about adoption if we don't provide an option for people to use blessed 
distro packages, either because of company policy or years of habit and 
bias. If done correctly, there's no difference between a package and a 
particular tag in a source repository, but there is a psychological 
component there that I think we need to account for, assuming someone is 
willing to bite off the implementation costs (which is sounds like there 
is).





Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo) 
wrote:

 According to the latest update:
 User calls the API(1) to disable a meter along with a meter id. 

What's a user? An end-user or an operator?
I don't think we want to allow a user to disable a meter. I don't see
the use case, and if you use a meter for billing, it's just a terrible
idea.

If you are talking about operators, it's just a problem of managing a
configuration file, no different from the rest of OpenStack. I
think Doug and Chmouel already answered that, and I sit on that line.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info




Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Georgy Okrokvertskhov
Hi,

I do understand why there is push back on this patch. This patch is for an
infrastructure project which serves multiple projects. Infra maintainers
should not need to know the specifics of each project in detail. If this patch
is a temporary solution, then who will be responsible for removing it?

If we need to start this gate, I propose reverting all the patches which led
to this inconsistent state and applying a workaround in the Solum repository,
which is under the Solum team's full control and review. We need to open a bug
in the Solum project to track this.

Thanks
Georgy


On Wed, Jan 8, 2014 at 7:09 AM, Noorul Islam K M noo...@noorul.com wrote:

 Anne Gentle a...@openstack.org writes:

  On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda 
  noo...@noorul.com wrote:
 
 
  On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:
  
   On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
Should we rather revert patch to make gate working?
   
   
I think it is always good to have test packages reside in
test-requirements.txt. So -1 on reverting that patch.
   
Here [1] is a temporary solution.
   
Regards,
Noorul
   
[1] https://review.openstack.org/65414
  
   If Solum is trying to be on the road to being an OpenStack project, why
   would it go out of its way to introduce an incompatibility in the way
   all the actual OpenStack packages work in the gate?
  
   Seems very silly to me, because you'll have to add oslo.sphinx back
 into
   test-requirements.txt the second you want to be considered for
  incubation.
  
 
  I am not sure why it seems silly to you. We are not removing
  oslo.sphinx from the repository; we are just removing it before installing
  the packages from test-requirements.txt in the devstack gate. How does that
  affect incubation? Am I missing something?
 
 
  Docs are a requirement, and contributor docs are required for applying
 for
  incubation. [1] Typically these are built through Sphinx and consistency
 is
  gained through oslo.sphinx, also eventually we can offer consistent
  extensions. So a perception that you're skipping docs would be a poor
  reflection on your incubation application. I don't think that's what's
  happening here, but I want to be sure you understand the consistency and
  doc needs.
 
  See also
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html
  for similar issues, we're trying to figure out the best solution. Stay
  tuned.
 

 I have seen that, also posted solum issue [1] there yesterday. I started
 this thread to have consensus on making solum devstack gate non-voting
 until the issue gets fixed. Also proposed a temporary solution with
 which we can solve the issue for the time being. Since the gate is
 failing for all the patches, it is affecting every patch.

 Regards,
 Noorul

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
 [2] https://review.openstack.org/65414

 
 
  1.
 
 https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements
 
  Regards,
  Noorul
 
   -Sean
  
   --
   Sean Dague
   Samsung Research America
   s...@dague.net / sean.da...@samsung.com
   http://dague.net
  
  




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Ildikó Váncsa wrote:

(Your answers are very hard to read inline in my text MUA; it'd really
help if you could properly quote the emails you answer to.)

 ildikov: Sorry, my explanation was not clear. I meant there the
 configuration of data collection for projects, what was mentioned by Tim
 Bell in a previous email. This would mean that the project administrator is
 able to create a data collection configuration for his/her own project,
 which will not affect the other project's configuration. The tenant would be
 able to specify meters (enabled/disable based on which ones are needed) for
 the given project also with project specific time intervals, etc.

I still don't see the point. A user can send any sample they want at any
interval using the REST API. There's no sense in enabling or disabling
meters.
Please describe me a real use case, for now I still can't understand
what you want to do.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Derek Higgins's message of 2014-01-08 02:11:09 -0800:
 On 08/01/14 05:07, Clint Byrum wrote:
  Excerpts from Fox, Kevin M's message of 2014-01-07 16:27:35 -0800:
  Another piece to the conversation I think is update philosophy. If
  you are always going to require a new image and no customization after
  build ever, ever, the messiness that source usually causes in the file
  system image really doesn't matter. The package system allows you to
  easily update, add, and remove package bits at runtime cleanly. In
  our experimenting with OpenStack, it's becoming hard to determine
  which philosophy is better. Golden Images for some things make a lot
  of sense. For other random services, the maintenance of the Golden
  Image seems to be too much to bother with and just installing a few
  packages after image start is preferable. I think both approaches are
  valuable. This may not directly relate to what is best for Triple-O
  elements, but since we are talking philosophy anyway...
 
  
  The golden image approach should be identical to the package approach if
  you are doing any kind of testing work-flow.
  
  Just install a few packages is how you end up with, as Robert said,
  snowflakes. The approach we're taking with diskimage-builder should
  result in that image building extremely rapidly, even if you compiled
  those things from source.
 
 This is the part of your argument I don't understand: creating images
 with packages is no more likely to result in snowflakes than creating
 images from sources in git.
 
 You would build an image using packages and at the end of the build
 process you can lock the package versions. Regardless of how the image
 is built you can consider it a golden image. This image is then deployed
 to your hosts and not changed.
 
 We would still be using diskimage-builder; the main difference in the
 whole process is that we would end up with an image that has more packages
 installed and no virtualenvs.
 

I'm not saying building images from packages will encourage
snowflakes. I'm saying installing and updating on systems using packages
encourages snowflakes. Kevin was suggesting that the image workflow
wouldn't fit for everything, and was thus opening up the "just install
a few packages on a system" can of worms. I'm saying to Kevin: don't
do that, just make your image work-flow tighter, and suggesting it is
worth it to do that to avoid having snowflakes.



Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:16 AM, Ildikó Váncsa
ildiko.van...@ericsson.com wrote:

  Hi Doug,



 See my answers inline.



 Best Regards,

 Ildiko



 *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 *Sent:* Wednesday, January 08, 2014 4:10 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer







 On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:

 Hi,

 I've started to work on the idea of supporting a kind of tenant/project
 based configuration for Ceilometer. Unfortunately I haven't reached the
 point of having a blueprint that could be registered until now. I do not
 have a deep knowledge about the collector and compute agent services, but
 this feature would require some deep changes for sure. Currently there are
 pipelines for data collection and transformation, where the counters can be
 specified, about which data should be collected and also the time interval
 for data collection and so on. These pipelines can be configured now
 globally in the pipeline.yaml file, which is stored right next to the
 Ceilometer configuration files.



 Yes, the data collection was designed to be configured and controlled by
 the deployer, not the tenant. What benefits do we gain by giving that
 control to the tenant?



 ildikov: Sorry, my explanation was not clear. I meant there the
 configuration of data collection for projects, what was mentioned by Tim
 Bell in a previous email. This would mean that the project administrator is
 able to create a data collection configuration for his/her own project,
 which will not affect the other project’s configuration. The tenant would
 be able to specify meters (enabled/disable based on which ones are needed)
 for the given project also with project specific time intervals, etc.


OK, I think some of the confusion is terminology. Who is a project
administrator? Is that someone with access to change ceilometer's
configuration file directly? Someone with a particular role using the API?
Or something else?






 In my view, we could keep the dynamic meter configuration bp with
 considering to extend it to dynamic configuration of Ceilometer, not just
 the meters and we could have a separate bp for the project based
 configuration of meters.



 Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
 are the needs for dynamic configuration updates in ceilometer different
 from the other services?



 ildikov: There are some parameters in the configuration file of
 Ceilometer, like log options and notification types, which it would be good
 to be able to configure dynamically. I just wanted to reflect that
 need. As I see it, there are two options here. The first one is to identify
 the group of the dynamically modifiable parameters and move them to the API
 level. The other option could be to make some modifications in oslo.config
 too, so other services also could use the benefits of dynamic
 configuration. For example the log settings could be a good candidate, as
 for example the change of log levels, without service restart, in case
 debugging the system can be a useful feature for all of the OpenStack
 services.


I misspoke earlier. If we're talking about meters, those are actually
defined by the pipeline file (not oslo.config). So if we do want that file
re-read automatically, we can implement that within ceilometer itself,
though I'm still reluctant to say we want to provide API access for
modifying those settings. That's *really* not something we've designed the
rest of the system to accommodate, so I don't know what side-effects we
might introduce.

As far as the other configuration settings, we had the conversation about
updating those through some sort of API early on, and decided that there
are already lots of operational tools out there to manage changes to those
files. I would need to see a list of which options people would want to
have changed through an API to comment further.

Doug





 Doug






 If it is ok for you, I will register the bp for this per-project tenant
 settings with some details, when I'm finished with the initial design of
 how this feature could work.

 Best Regards,
 Ildiko


 -Original Message-
 From: Neal, Phil [mailto:phil.n...@hp.com]
 Sent: Tuesday, January 07, 2014 11:50 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

 For multi-node deployments, implementing something like inotify would
 allow administrators to push configuration changes out to multiple targets
 using puppet/chef/etc. and have the daemons pick it up without restart.
 Thumbs up to that.

 As Tim Bell suggested, API-based enabling/disabling would allow users to
 update meters via script, but then there's the question of how to work out
 the global vs. per-project tenant settings...right now we collect specified

Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Jan 8, 2014 9:58 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 I do understand why there is push back on this patch. This patch is
for an infrastructure project which serves multiple projects. Infra
maintainers should not need to know the specifics of each project in detail. If
this patch is a temporary solution, then who will be responsible for removing it?


I am not sure who is responsible for solum-related configurations in the infra
project. I see that almost all the infra config for the solum project is done
by solum members. So I think any solum member can submit a patch to revert
this once we have a permanent solution.

 If we need to start this gate, I propose reverting all the patches which led
to this inconsistent state and applying a workaround in the Solum repository,
which is under the Solum team's full control and review. We need to open a bug
in the Solum project to track this.


The problematic patch [1] solves a specific problem. Do we have other ways
to solve it?

Regards,
Noorul

[1] https://review.openstack.org/#/c/64226

 Thanks
 Georgy


 On Wed, Jan 8, 2014 at 7:09 AM, Noorul Islam K M noo...@noorul.com
wrote:

 Anne Gentle a...@openstack.org writes:

  On Wed, Jan 8, 2014 at 8:26 AM, Noorul Islam Kamal Malmiyoda 
  noo...@noorul.com wrote:
 
 
  On Jan 8, 2014 6:11 PM, Sean Dague s...@dague.net wrote:
  
   On 01/07/2014 11:27 PM, Noorul Islam Kamal Malmiyoda wrote:
On Wed, Jan 8, 2014 at 9:43 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
Should we rather revert patch to make gate working?
   
   
I think it is always good to have test packages reside in
test-requirements.txt. So -1 on reverting that patch.
   
Here [1] is a temporary solution.
   
Regards,
Noorul
   
[1] https://review.openstack.org/65414
  
   If Solum is trying to be on the road to being an OpenStack project, why
   would it go out of its way to introduce an incompatibility in the way
   all the actual OpenStack packages work in the gate?
  
   Seems very silly to me, because you'll have to add oslo.sphinx back
into
   test-requirements.txt the second you want to be considered for
  incubation.
  
 
  I am not sure why it seems silly to you. We are not removing
  oslo.sphinx from the repository; we are just removing it before installing
  the packages from test-requirements.txt in the devstack gate. How does that
  affect incubation? Am I missing something?
 
 
  Docs are a requirement, and contributor docs are required for applying
for
  incubation. [1] Typically these are built through Sphinx and
consistency is
  gained through oslo.sphinx, also eventually we can offer consistent
  extensions. So a perception that you're skipping docs would be a poor
  reflection on your incubation application. I don't think that's what's
  happening here, but I want to be sure you understand the consistency
and
  doc needs.
 
  See also
 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023582.html
  for similar issues, we're trying to figure out the best solution. Stay
  tuned.
 

 I have seen that, also posted solum issue [1] there yesterday. I started
 this thread to have consensus on making solum devstack gate non-voting
 until the issue gets fixed. Also proposed a temporary solution with
 which we can solve the issue for the time being. Since the gate is
 failing for all the patches, it is affecting every patch.

 Regards,
 Noorul

 [1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023618.html
 [2] https://review.openstack.org/65414

 
 
  1.
 
https://github.com/openstack/governance/blob/master/reference/incubation-integration-requirements
 
  Regards,
  Noorul
 
   -Sean
  
   --
   Sean Dague
   Samsung Research America
   s...@dague.net / sean.da...@samsung.com
   http://dague.net
  
  




 --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Georgy Okrokvertskhov
Hi Kurt,

As for WSGI middleware: I'm thinking of Pecan hooks, which can be added before
the actual controller call. Here is an example of how we added a hook for
keystone information collection:
https://review.openstack.org/#/c/64458/4/solum/api/auth.py

What do you think, will this approach with Pecan hooks work?
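A stripped-down, stdlib-only illustration of the hook idea (Pecan's real hooks subclass pecan.hooks.PecanHook; the header names below mimic what keystone middleware sets, and the dispatch function only sketches what the framework does):

```python
# Mimic of the Pecan hook shape: a hook's before() runs ahead of the
# controller, so it can collect identity headers into a request context
# before any application code sees the request. Names are illustrative;
# the real hook lives in solum/api/auth.py linked above.

class AuthHook:
    def before(self, state):
        headers = state["headers"]
        state["context"] = {
            "user_id": headers.get("X-User-Id"),
            "roles": headers.get("X-Roles", "").split(","),
        }

def dispatch(state, hooks, controller):
    # what the framework does: run each hook's before(), then the controller
    for hook in hooks:
        hook.before(state)
    return controller(state)

def show_user(state):
    return "hello %s" % state["context"]["user_id"]

state = {"headers": {"X-User-Id": "alice", "X-Roles": "admin,member"}}
print(dispatch(state, [AuthHook()], show_user))  # hello alice
```

Because every controller runs after the hooks, the context built in before() is a natural place to hang per-request auth data for later policy checks.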

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths kurt.griffi...@rackspace.com
 wrote:

  You might also consider doing this in WSGI middleware:

  Pros:

    - Consolidates policy code in one place, making it easier to audit
    and maintain
- Simple to turn policy on/off – just don’t insert the middleware when
off!
- Does not preclude the use of oslo.policy for rule checking
    - Blocks unauthorized requests before they have a chance to touch the
    web framework or app. This reduces your attack surface and can improve
    performance (since the web framework has yet to parse the request).

 Cons:

- Doesn't work for policies that require knowledge that isn’t
available this early in the pipeline (without having to duplicate a lot of
code)
- You have to parse the WSGI environ dict yourself (this may not be a
big deal, depending on how much knowledge you need to glean in order to
enforce the policy).
    - You have to keep your HTTP path matching in sync with your
    route definitions in the code. If you have full test coverage, you will
    know when you get out of sync. That being said, API routes tend to be quite
    stable in relation to other parts of the code implementation once you
    have settled on your API spec.

 I’m sure there are other pros and cons I missed, but you can make your own
 best judgement whether this option makes sense in Solum’s case.
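As an illustration of the middleware option (a sketch only: the rule table, path matching, and role extraction are deliberately simplistic and made up, and a real deployment would use oslo policy for the rule check):

```python
# Minimal WSGI middleware that rejects requests before they reach the app.
# Rules map (method, path prefix) to a required role; everything else passes.
RULES = {("POST", "/v1/assemblies"): "admin"}  # illustrative only

class PolicyMiddleware:
    def __init__(self, app, rules):
        self.app, self.rules = app, rules

    def __call__(self, environ, start_response):
        method = environ["REQUEST_METHOD"]
        path = environ.get("PATH_INFO", "")
        roles = environ.get("HTTP_X_ROLES", "").split(",")
        for (m, prefix), needed in self.rules.items():
            if method == m and path.startswith(prefix) and needed not in roles:
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"policy does not allow this request"]
        return self.app(environ, start_response)

def app(environ, start_response):
    # the web framework / application the middleware protects
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"created"]

wrapped = PolicyMiddleware(app, RULES)

def call(environ):
    status = []
    body = wrapped(environ, lambda s, h: status.append(s))
    return status[0], b"".join(body)

print(call({"REQUEST_METHOD": "POST", "PATH_INFO": "/v1/assemblies",
            "HTTP_X_ROLES": "member"}))  # 403 Forbidden
print(call({"REQUEST_METHOD": "POST", "PATH_INFO": "/v1/assemblies",
            "HTTP_X_ROLES": "admin"}))   # 200 OK
```

This shows both the pro (the app never sees the denied request) and the con (the `RULES` table has to be kept in sync with the framework's routes by hand).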

   From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Tuesday, January 7, 2014 at 6:54 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
 SecureController vs. Nova policy




 On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 Hi Doug,

  Thank you for pointing to this code. As I see it, you use the OpenStack policy
 framework rather than Pecan's security features. How do you implement
 fine-grained access control, e.g. users allowed to read only vs. writers and
 admins? Can you block part of the API methods for a specific user role, such
 as access to the create methods?


  The policy enforcement isn't simple on/off switching in ceilometer, so
 we're using the policy framework calls in a couple of places within our API
 code (look through v2.py for examples). As a result, we didn't need to
 build much on top of the existing policy module to interface with pecan.

  For your needs, it shouldn't be difficult to create a couple of
 decorators to combine with pecan's hook framework to enforce the policy,
 which might be less complex than trying to match the operating model of the
 policy system to pecan's security framework.
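A hedged sketch of the decorator approach Doug describes (the toy `POLICY` table and `enforce()` stand in for policy.json and oslo's rule checking; all names here are illustrative):

```python
import functools

# Toy rule table standing in for policy.json + an oslo-style enforcer
POLICY = {"assembly:create": "admin"}

class PolicyNotAuthorized(Exception):
    pass

def enforce(rule, context):
    # stand-in for the real policy engine's rule evaluation
    if POLICY.get(rule) not in context.get("roles", []):
        raise PolicyNotAuthorized(rule)

def protected(rule):
    """Decorator for controller methods; a Pecan hook would stash the
    request context where the controller (here: the first arg) can reach it."""
    def wrap(func):
        @functools.wraps(func)
        def inner(context, *args, **kwargs):
            enforce(rule, context)
            return func(context, *args, **kwargs)
        return inner
    return wrap

@protected("assembly:create")
def create_assembly(context, name):
    return "created %s" % name

print(create_assembly({"roles": ["admin"]}, "app1"))  # created app1
try:
    create_assembly({"roles": ["member"]}, "app2")
except PolicyNotAuthorized as exc:
    print("denied:", exc)
```

The appeal over Pecan's SecureController is that each rule is named per method, which maps directly onto the policy-file model the rest of OpenStack uses.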

  This is the sort of thing that should probably go through Oslo and be
 shared, so please consider contributing to the incubator when you have
 something working.

  Doug




  Thanks
 Georgy


 On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




  On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

  Hi,

  In Solum project we will need to implement security and ACL for Solum
 API. Currently we use Pecan framework for API. Pecan has its own security
 model based on SecureController class. At the same time OpenStack widely
 uses policy mechanism which uses json files to control access to specific
 API methods.

  I wonder if someone has any experience with implementing security and
 ACL stuff with using Pecan framework. What is the right way to provide
 security for API?


   In ceilometer we are using the keystone middleware and the policy
 framework to manage arguments that constrain the queries handled by the
 storage layer.


 http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

  and


 http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

  Doug




  Thanks
  Georgy





   --
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284


Re: [openstack-dev] [OpenStack][Nova][cold migration] Why we need confirm resize after cold migration

2014-01-08 Thread Russell Bryant
On 01/08/2014 09:53 AM, John Garbutt wrote:
 On 8 January 2014 10:02, David Xie david.script...@gmail.com wrote:
 In nova/compute/api.py#2289, function resize, there's a parameter named
 flavor_id, if it is None, it is considered as cold migration. Thus, nova
 should skip resize verifying. However, it doesn't.

 Like Jay said, we should skip this step during cold migration, does it make
 sense?
 
 Not sure.
 
 On Wed, Jan 8, 2014 at 5:52 PM, Jay Lau jay.lau@gmail.com wrote:

 Greetings,

 I have a question related to cold migration.

 Now in OpenStack nova, we support live migration, cold migration and
 resize.

 For live migration, we do not need to confirm after live migration
 finished.

 For resize, we need to confirm, as we want to give end user an opportunity
 to rollback.

 The problem is cold migration: because cold migration and resize share the
 same code path, once I submit a cold migration request and the cold
 migration finishes, the VM goes to the verify_resize state, and I need to
 confirm the resize. I am a bit confused by this: why do I need to confirm a
 resize for a cold migration operation? Why not reset the VM to its original
 state directly after cold migration?
 
 I think the idea was to allow users/admins to check that everything went OK,
 and only delete the original VM once they have confirmed that the move
 succeeded.
 
 I thought there was an auto_confirm setting. Maybe you want
 auto_confirm cold migrate, but not auto_confirm resize?

I suppose we could add an API parameter to auto-confirm these things.
That's probably a good compromise.
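For anyone looking for the setting John mentions: nova ships a periodic task that auto-confirms pending resizes. A hedged nova.conf sketch follows — the option name is real, but the value and section placement should be checked against your release:

```
[DEFAULT]
# Auto-confirm resizes (and cold migrations, which share the code path)
# that have been awaiting confirmation for longer than 60 seconds.
# Leaving this at 0 disables the periodic auto-confirm task.
resize_confirm_window = 60
```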

 Also, I think we probably need to split compute.api.resize() into two
 APIs: one for resize and the other for cold migration.

 1) The VM state can be either ACTIVE or STOPPED for a resize operation
 2) The VM state must be STOPPED for a cold migrate operation.
 
 We just stop the VM, then perform the migration.
 I don't think we need to require that it's stopped first.
 Am I missing something?

Don't think so ... I think we should leave it as is.

-- 
Russell Bryant



Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec
 

On 2014-01-08 08:24, Doug Hellmann wrote: 

 On Tue, Jan 7, 2014 at 12:32 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-07 07:16, Doug Hellmann wrote: 
 
 On Tue, Jan 7, 2014 at 6:24 AM, Michael Kerrin michael.ker...@hp.com wrote:
 
 I have been seeing this problem also. 
 
 My problem is actually with oslo.sphinx. I ran sudo pip install -r 
 test-requirements.txt in cinder so that I could run the tests there, which 
 installed oslo.sphinx. 
 
 The strange thing is that installing oslo.sphinx created a directory called 
 oslo in /usr/local/lib/python2.7/dist-packages with no __init__.py file. With 
 the package installed like this, I get the same error you get with oslo.config. 
 
 The oslo libraries use python namespace packages, which manifest themselves 
 as a directory in site-packages (or dist-packages) with sub-packages but no 
 __init__.py(c). That way oslo.sphinx and oslo.config can be packaged 
 separately, but still installed under the oslo directory and imported as 
 oslo.sphinx and oslo.config. 
 
 My guess is that installing oslo.sphinx globally (with sudo) set up 2 copies 
 of the namespace package (one in the global dist-packages and presumably one 
 in the virtualenv being used for the tests).
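The split-package behavior Doug describes is easy to reproduce. The sketch below uses Python 3's native namespace packages (PEP 420) rather than the setuptools .pth mechanism oslo used at the time, but the end state is the same: two install locations each contribute a sub-package to one logical top-level package. All names here are illustrative.

```python
import os
import sys
import tempfile

# Two separate "install locations", each contributing one sub-package to the
# same top-level namespace, mirroring how oslo.config and oslo.sphinx both
# land under a single "oslo" directory. With PEP 420 namespace packages the
# top-level demo_ns directories deliberately contain no __init__.py.
root = tempfile.mkdtemp()
for site, sub in (("site_a", "config_demo"), ("site_b", "sphinx_demo")):
    pkg_dir = os.path.join(root, site, "demo_ns", sub)
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write("NAME = %r\n" % sub)
    sys.path.insert(0, os.path.join(root, site))

import demo_ns.config_demo
import demo_ns.sphinx_demo

# Both imports resolve even though the two halves live in different
# directories on sys.path.
print(demo_ns.config_demo.NAME, demo_ns.sphinx_demo.NAME)
```

The failure mode in this thread appears when one location provides the namespace via a setuptools nspkg.pth file and another does not: the import machinery then commits to the first half and never finds the second.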

Actually I think it may be the opposite problem, at least where I'm
currently running into this. oslo.sphinx is only installed in the venv
and it creates a namespace package there. Then if you try to load
oslo.config in the venv it looks in the namespace package, doesn't find
it, and bails with a missing module error. 

I'm personally running into this in tempest - I can't even run pep8 out
of the box because the sample config check fails due to missing
oslo.config. Here's what I'm seeing: 

In the tox venv: 
(pep8)[fedora@devstack site-packages]$ ls oslo*
oslo.sphinx-1.1-py2.7-nspkg.pth

oslo:
sphinx

oslo.sphinx-1.1-py2.7.egg-info:
dependency_links.txt namespace_packages.txt PKG-INFO top_level.txt
 installed-files.txt not-zip-safe SOURCES.txt 

And in the system site-packages: 
[fedora@devstack site-packages]$ ls oslo*
oslo.config.egg-link oslo.messaging.egg-link 

Since I don't actually care about oslo.sphinx in this case, I also found
that deleting it from the venv fixes the problem, but obviously that's
just a hacky workaround. My initial thought is to install oslo.sphinx in
devstack the same way as oslo.config and oslo.messaging, but I assume
there's a reason we didn't do it that way in the first place so I'm not
sure if that will work. 

So I don't know what the proper fix is, but I thought I'd share what
I've found so far. Also, I'm not sure if this even relates to the
ceilometer issue since I wouldn't expect that to be running in a venv,
but it may have a similar issue. 

I wonder if the issue is actually that we're using pip install -e for
oslo.config and oslo.messaging (as evidenced by the .egg-link files). Do
things work properly if those packages are installed to the global
site-packages from PyPI instead? We don't want to change the way
devstack installs them, but it would give us another data point. 

Another solution is to have a list of dependencies needed for building
documentation, separate from the tests, since oslo.sphinx isn't needed
for the tests. 

It does work if I remove the pip install -e version of oslo.config and
reinstall from the pypi package, so this appears to be an issue with the
egg-links. 

-Ben 


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 10:43 AM, Eric Windisch ewindi...@docker.com wrote:



 About spur: spur looks OK, but it is a bit complicated inside (it uses
 separate threads for non-blocking stdin/stderr reading [1]) and I don't
 know how it would work with eventlet.


 That does sound like it might cause issues. What would we need to do to
 test it?


 Looking at the code, I don't expect it to be an issue. The monkey-patching
 will cause eventlet.spawn to be called for threading.Thread. The code looks
 eventlet-friendly enough on the surface. Error handling around file
 read/write could be affected, but it also looks fine.


Thanks for that analysis Eric.

Is there any reason for us to prefer one approach over the other, then?

Doug





 --
 Regards,
 Eric Windisch





Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 10:19 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




 On Wed, Jan 8, 2014 at 9:34 AM, Sergey Skripnick sskripn...@mirantis.com wrote:




 I'd like to explore whether the paramiko team will accept this code (or
 something like it). This seems like a perfect opportunity for us to
 contribute
 upstream.


 +1

 The patch is not big and the code seems simple and reasonable enough
 to live within paramiko.

 Cheers,
 FF



 I sent a pull request [0], but there are two things:

  nobody knows when (or if) it will be merged
  it is still a bit low-level, unlike a patch in oslo


 Let's give the paramiko devs a little time to review it.


I had a brief conversation with Jeff Forcier, and he likes the idea of
having some version of run() in paramiko. He will comment on the pull
request with some details about what his plans were, but I think we can
count on this going into a version of paramiko -- especially if we help.

Doug






 About spur: spur looks OK, but it is a bit complicated inside (it uses
 separate threads for non-blocking stdin/stderr reading [1]) and I don't
 know how it would work with eventlet.


 That does sound like it might cause issues. What would we need to do to
 test it?

 Doug




 [0] https://github.com/paramiko/paramiko/pull/245
 [1] https://github.com/mwilliamson/spur.py/blob/master/spur/io.py#L22


 --
 Regards,
 Sergey Skripnick






Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:31 AM, Ben Nemec openst...@nemebean.com wrote:

 It does work if I remove the pip install -e version of oslo.config and
 reinstall from the pypi package, so this appears to be an issue with the
 egg-links.


You had already tested installing oslo.sphinx with pip install -e, right?
That's probably the least-wrong answer. Either that or move oslo.sphinx to
a different top level package to avoid conflicting with runtime code.

Doug



 -Ben




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-01-08 07:03:39 -0800:
 On Tue, Jan 7, 2014 at 11:20 PM, Robert Collins
 robe...@robertcollins.net wrote:
  On 8 January 2014 12:18, James Slagle james.sla...@gmail.com wrote:
  Sure, the crux of the problem was likely that versions in the distro
  were too old and they needed to be updated.  But unless we take on
  building the whole OS from source/git/whatever every time, we're
  always going to have that issue.  So, an additional benefit of
  packages is that you can install a known good version of an OpenStack
  component that is known to work with the versions of dependent
  software you already have installed.
 
  The problem is that OpenStack is building against newer stuff than is
  in distros, so folk building on a packaging toolchain are going to
  often be in catchup mode - I think we need to anticipate package based
  environments running against releases rather than CD.
 
 I just don't see anyone not building on a packaging toolchain, given
 that we're all running the distro of our choice and pip/virtualenv/etc
 are installed from distro packages.  Trying to isolate the building of
 components with pip-installed virtualenvs was still a problem.  Short
 of uninstalling the build-tools packages from the cloud image and then
 wget'ing the pip tarball, I don't think there would have been a good
 way around this particular problem.  That approach may, though,
 certainly make some sense for a CD scenario.
 

I will definitely concede that we find problems at a high rate during
image builds, and that we would not if we just waited for others to solve
those problems. However, when we do solve those problems, we solve them
for everyone downstream from us. That is one reason it is so desirable
to keep our work in TripleO as far upstream as possible. Package work is
inherently downstream.

Also it is worth noting that problems at image build time are much simpler
to handle, because they happen on a single machine generally. That is
one reason I down play those issues. For anyone not interested in running
CD, we have the release process to handle such problems and they should
_never_ see any of these issues, whether running from packages or on
stable branches in the git repos.



Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Jay Dobies's message of 2014-01-08 08:09:51 -0800:
 There were so many places in this thread that I wanted to jump in on as 
 I caught up that it makes sense to just summarize things in one place 
 instead of a half dozen quoted replies.
 
 I agree with the sentiments about flexibility. Regardless of my personal 
 preference on source v. packages, it's been my experience that the 
 general mindset of production deployment is that new ideas move slowly. 
 Admins are set in their ways and policies are in place on how things are 
 consumed.
 
 Maybe the newness of all things cloud-related and image-based management 
 for scale is a good time to shift the mentality out of packages (again, 
 I'm not suggesting whether or not it should be shifted). But I worry 
 about adoption if we don't provide an option for people to use blessed 
 distro packages, either because of company policy or years of habit and 
 bias. If done correctly, there's no difference between a package and a 
 particular tag in a source repository, but there is a psychological 
 component there that I think we need to account for, assuming someone is 
 willing to bite off the implementation costs (which it sounds like there 
 is).
 

Thanks for your thoughts Jay. I agree, what we're doing is kind of weird
sounding. Not everybody will be on-board with their OpenStack cloud being
wildly different from their existing systems. We definitely need to do
work to make it easy for them to get on the new train of thinking one
step at a time. Just having an OpenStack cloud will do a lot for any
org that has none.



Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec
 

On 2014-01-08 10:50, Doug Hellmann wrote: 

You had already tested installing oslo.sphinx with pip install -e,
right? That's probably the least-wrong answer. Either that or move
oslo.sphinx to a different top level package to avoid conflicting with
runtime code. 

Right. This https://review.openstack.org/#/c/65336/ also fixed the
problem for me, but according to Sean that's not something we should be
doing in devstack either. 

-Ben 


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 11:53 AM, Ben Nemec openst...@nemebean.com wrote:

  You had already tested installing oslo.sphinx with pip install -e,
 right? That's probably the least-wrong answer. Either that or move
 oslo.sphinx to a different top level package to avoid conflicting with
 runtime code.


 Right.  This https://review.openstack.org/#/c/65336/ also fixed the
 problem for me, but according to Sean that's not something we should be
 doing in devstack either.


Yeah, that's what made me start thinking oslo.sphinx should be called
something else.

Sean, how strongly do you feel about not installing oslo.sphinx in
devstack? I see your point, I'm just looking for alternatives to the hassle
of renaming oslo.sphinx.

Doug



 -Ben




Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Kurt Griffiths
Yeah, that could work. The main thing is to try and keep policy control in one 
place if you can rather than sprinkling it all over the place.

From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, January 8, 2014 at 10:41 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy

Hi Kurt,

As for WSGI middleware I think about Pecan hooks which can be added before 
actual controller call. Here is an example how we added a hook for keystone 
information collection: 
https://review.openstack.org/#/c/64458/4/solum/api/auth.py

What do you think, will this approach with Pecan hooks work?

Thanks
Georgy


On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths kurt.griffi...@rackspace.com wrote:
You might also consider doing this in WSGI middleware:

Pros:

  *   Consolidates policy code in one place, making it easier to audit and 
maintain
  *   Simple to turn policy on/off – just don’t insert the middleware when off!
  *   Does not preclude the use of oslo.policy for rule checking
  *   Blocks unauthorized requests before they have a chance to touch the web 
framework or app. This reduces your attack surface and can improve performance 
(since the web framework has yet to parse the request).

Cons:

  *   Doesn't work for policies that require knowledge that isn’t available 
this early in the pipeline (without having to duplicate a lot of code)
  *   You have to parse the WSGI environ dict yourself (this may not be a big 
deal, depending on how much knowledge you need to glean in order to enforce the 
policy).
  *   You have to keep your HTTP path matching in sync with your route 
definitions in the code. If you have full test coverage, you will know when you 
get out of sync. That being said, API routes tend to be quite stable in 
relation to other parts of the code implementation once you have settled on 
your API spec.

I’m sure there are other pros and cons I missed, but you can make your own best 
judgement whether this option makes sense in Solum’s case.
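To make the trade-offs above concrete, here is a minimal stdlib-only sketch of the middleware approach. The policy table, the X-Roles header, and the paths are hypothetical stand-ins; a real deployment would delegate the check to oslo policy rules rather than an inline dict.

```python
from wsgiref.util import setup_testing_defaults

# Hypothetical policy table: (method, path) -> roles allowed to call it.
POLICY = {
    ("GET", "/v1/apps"): {"admin", "writer", "reader"},
    ("POST", "/v1/apps"): {"admin", "writer"},
}

class PolicyMiddleware(object):
    """Reject unauthorized requests before they reach the framework."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        roles = set(environ.get("HTTP_X_ROLES", "").split(","))
        rule = (environ["REQUEST_METHOD"], environ.get("PATH_INFO", ""))
        if not roles & POLICY.get(rule, set()):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"policy does not allow this operation"]
        return self.app(environ, start_response)

def app(environ, start_response):
    # Stand-in for the real Pecan/WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

wrapped = PolicyMiddleware(app)

def call(method, path, roles):
    """Drive the WSGI stack directly, the way a test client would."""
    environ = {}
    setup_testing_defaults(environ)
    environ.update(REQUEST_METHOD=method, PATH_INFO=path, HTTP_X_ROLES=roles)
    status = []
    body = wrapped(environ, lambda s, h: status.append(s))
    return status[0], b"".join(body)

print(call("GET", "/v1/apps", "reader"))   # ('200 OK', b'ok')
print(call("POST", "/v1/apps", "reader"))  # 403: reader cannot create
```

Because the check runs before the framework sees the request, the 403 is produced without any routing or body parsing, which is the attack-surface and performance benefit listed in the pros.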

From: Doug Hellmann doug.hellm...@dreamhost.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Tuesday, January 7, 2014 at 6:54 AM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController 
vs. Nova policy




On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote:
Hi Doug,

Thank you for pointing to this code. As I see it, you use the OpenStack policy 
framework but not Pecan's security features. How do you implement fine-grained 
access control, for example distinguishing read-only users, writers, and 
admins? Can you block part of the API methods for a specific user role, such as 
access to create methods?

The policy enforcement isn't simple on/off switching in ceilometer, so we're 
using the policy framework calls in a couple of places within our API code 
(look through v2.py for examples). As a result, we didn't need to build much on 
top of the existing policy module to interface with pecan.

For your needs, it shouldn't be difficult to create a couple of decorators to 
combine with pecan's hook framework to enforce the policy, which might be less 
complex than trying to match the operating model of the policy system to 
pecan's security framework.
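A sketch of the decorator idea Doug describes; the rule table and context shape are made up for illustration, and a real implementation would call the oslo incubator's policy.enforce() instead of the inline RULES lookup.

```python
import functools

class PolicyNotAuthorized(Exception):
    """Raised when the caller's context fails the named policy rule."""

# Stand-in rule table; real code would defer to oslo's policy module
# rather than inline callables keyed by rule name.
RULES = {"app:create": lambda ctx: "admin" in ctx.get("roles", [])}

def enforce(rule):
    """Decorator that checks a policy rule against the request context."""
    def wrap(func):
        @functools.wraps(func)
        def inner(self, context, *args, **kwargs):
            if not RULES[rule](context):
                raise PolicyNotAuthorized(rule)
            return func(self, context, *args, **kwargs)
        return inner
    return wrap

class AppsController(object):
    @enforce("app:create")
    def post(self, context, body):
        return {"created": body}

controller = AppsController()
print(controller.post({"roles": ["admin"]}, "demo"))  # {'created': 'demo'}
```

The decorator keeps enforcement next to each controller method while reading its rules from one shared place, which also addresses Kurt's point about not sprinkling policy checks everywhere.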

This is the sort of thing that should probably go through Oslo and be shared, 
so please consider contributing to the incubator when you have something 
working.

Doug



Thanks
Georgy


On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:



On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote:
Hi,

In Solum project we will need to implement security and ACL for Solum API. 
Currently we use Pecan framework for API. Pecan has its own security model 
based on SecureController class. At the same time OpenStack widely uses policy 
mechanism which uses json files to control access to specific API methods.

I wonder if someone has experience implementing security and ACLs with the 
Pecan framework. What is the right way to provide security for the API?

In ceilometer we are using the keystone middleware and the policy framework to 
manage arguments that constrain the queries handled by the storage layer.

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/acl.py

and

http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/controllers/v2.py#n337

Doug




Re: [openstack-dev] [Nova][Vmware]Bad Performance when creating a new VM

2014-01-08 Thread Rick Jones

On 01/07/2014 06:30 PM, Ray Sun wrote:

Stackers,
I tried to create a new VM using the VMwareVCDriver, but creating a new VM is
very slow: for example, a 7 GB Windows image took 3 hours.

Then I tried to use curl to upload an ISO to vCenter directly:

curl -H "Expect:" -v --insecure --upload-file \
windows2012_server_cn_x64.iso \
"https://administrator:root123.@200.21.0.99/folder/iso/windows2012_server_cn_x64.iso?dcPath=dataCenter&dsName=datastore2"

The average speed is 0.8 MB/s.

Finally, I tried the vSphere web client to upload it; that managed only 250 KB/s.

I am not sure if there are any special configurations of the web interface for
vCenter. Please help.


I'm not fully versed in the plumbing, but while you are pushing via curl 
to 200.21.0.99 you might check the netstat statistics at the sending 
side, say once a minute, and see what the TCP retransmission rate 
happens to be.  If 200.21.0.99 has to push the bits to somewhere else 
you should follow that trail back to the point of origin, checking 
statistics on each node as you go.


You could, additionally, try running the likes of netperf (or iperf, but 
I have a natural inclination to suggest netperf...) between the same 
pairs of systems.  If netperf gets significantly better performance then 
you (probably) have an issue at the application layer rather than in the 
networking.


Depending on how things go with those, it may be desirable to get a 
packet trace of the upload via the likes of tcpdump.  It will be very 
much desirable to start the packet trace before the upload so you can 
capture the TCP connection establishment packets (aka the TCP 
SYNchronize segments) as those contain some important pieces of 
information about the capabilities of the connection.
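The checks Rick suggests look roughly like this; the host and interface names are taken from the example above and would need adjusting, and these are illustrative commands rather than a tested recipe:

```shell
# Watch TCP retransmission counters on the sending side, once a minute
watch -n 60 'netstat -s | grep -i retrans'

# Baseline the raw TCP throughput of the path (netperf must run on both ends)
netperf -H 200.21.0.99 -t TCP_STREAM -l 30

# Start capturing before the upload so the SYN handshake is recorded
tcpdump -i eth0 -w upload.pcap host 200.21.0.99
```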


rick jones




Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
Excerpts from Jan Provaznik's message of 2014-01-08 03:00:19 -0800:
 On 01/07/2014 09:01 PM, James Slagle wrote:
  Hi,
 
  I'd like to discuss some possible ways we could install the OpenStack
  components from packages in tripleo-image-elements.  As most folks are
  probably aware, there is a fork of tripleo-image-elements called
  tripleo-puppet-elements which does install using packages, but it does
  so using Puppet to do the installation and for managing the
  configuration of the installed components.  I'd like to kind of set
  that aside for a moment and just discuss how we might support
  installing from packages using tripleo-image-elements directly and not
  using Puppet.
 
  One idea would be to add support for a new type (or likely 2 new
  types: rpm and dpkg) to the source-repositories element.
  source-repositories already knows about the git, tar, and file types,
  so it seems somewhat natural to have additional types for rpm and
  dpkg.
 
  A complication with that approach is that the existing elements assume
  they're setting up everything from source.  So, if we take a look at
  the nova element, and specifically install.d/74-nova, that script does
  stuff like install a nova service, adds a nova user, creates needed
  directories, etc.  All of that wouldn't need to be done if we were
  installing from rpm or dpkg, b/c presumably the package would take
  care of all that.
 
  We could fix that by making the install.d scripts only run if you're
  installing a component from source.  In that sense, it might make
  sense to add a new hook, source-install.d and only run those scripts
  if the type is a source type in the source-repositories configuration.
We could then have a package-install.d to handle the installation
  from the packages type.   The install.d hook could still exist to do
  things that might be common to the 2 methods.
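
As a rough sketch of that dispatch (the hook names come from the proposal above; the run-parts wiring is illustrative and prints the steps instead of running them):

```shell
# Illustrative dispatch: pick the install hook directory based on the
# type declared in source-repositories. Prints the steps rather than
# executing real hooks.
install_hooks() {
    repo_type=$1
    case "$repo_type" in
        git|tar|file) echo "run-parts source-install.d" ;;
        rpm|dpkg)     echo "run-parts package-install.d" ;;
        *)            echo "unknown type: $repo_type" >&2; return 1 ;;
    esac
    # install.d stays common to both methods
    echo "run-parts install.d"
}

install_hooks git
install_hooks rpm
```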
 
  Thoughts on that approach or other ideas?
 
  I'm currently working on a patchset I can submit to help prove it out.
But, I'd like to start discussion on the approach now to see if there
  are other ideas or major opposition to that approach.
 
 
 Hi James,
 I think it would be really nice to be able to install openstack+deps from 
 packages, and many users (and cloud providers) would appreciate it.
 
 Among other things, with packages provided by a distro you get more 
 stability compared to installing openstack from git repos and fetching the 
 newest possible dependencies from PyPI.
 
 In a real deployment setup I don't want to use newer packages/dependencies 
 than necessary when building images - to take an example from the last few 
 days, I wouldn't have had to bother with the newer pip package which 
 breaks image building.
 

Right, from this perspective, you want to run OpenStack stable releases.
That should be fairly simple now by building images using the appropriate
environment variables.

However, we don't test that so it is likely to break as Icehouse diverges
from Havana. So I think in addition to package-enabling, those who want
to see TripleO work for stable releases should probably start looking at
creating stable branches of t-i-e and t-h-t to build images and templates
from starting at the icehouse time-frame.

So given that I'd suggest that packages take a back seat to making
TripleO part of the integrated release of OpenStack. Otherwise we'll
just have stable releases for the distros who have packages that work
with TripleO instead of for all distros.



Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Sean Dague
On 01/08/2014 12:06 PM, Doug Hellmann wrote:
snip
 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else. 
 
 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.

Doing the git thing is definitely not the right thing. But I guess I got
lost somewhere along the way about what the actual problem is. Can
someone write that up concisely? With all the things that have been
tried/failed, why certain things fail, etc.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Fox, Kevin M
Let me give you a more concrete example, since you still think one size fits 
all here.

I am using OpenStack on my home server now. In the past, I had one machine with 
lots of services on it. At times, I would update one service and during the 
update process, a different service would break.

Last round of hardware purchasing got me an 8-core desktop processor with 16 
gigs of RAM - enough to give every service I have its own processor and 2 gigs 
of RAM. So, I decided to run OpenStack on the server to manage the service VMs.

The base server shares out my data with NFS; the VMs then re-export it in 
various ways, like Samba, or DLNA to my PS3, etc.

Now, I could create a golden image for each service type with everything all 
setup and good to go. And infrastructure to constantly build updated ones.

But in this case, grabbing a Fedora or Ubuntu cloud image, and starting up the 
service with Heat and a couple of lines of cloud-init telling it to install 
just the package for the one service I need, saves a ton of effort and space. 
The complexity is totally on the distro folks and not me. Very simple to 
maintain.
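
As an illustration of that workflow, the user data really is only a couple of lines. The image, flavor, package, and service names below are made up for the example; only the file is written here, and the actual boot command is left commented:

```shell
# Write the couple-of-lines cloud-config that installs and starts one
# service on a stock cloud image.
cat > user-data.yaml <<'EOF'
#cloud-config
packages:
  - samba
runcmd:
  - [ service, smb, start ]
EOF

# Booting with it would then be something like (names are hypothetical):
#   nova boot --image fedora-cloud --flavor m1.small \
#       --user-data user-data.yaml samba-vm

grep -c '^' user-data.yaml   # -> 5
```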

I can get almost the stability of the golden image simply by pausing the 
working service VM, spawning a new one, and, only if it's sane, switching to it 
and deleting the old one. In fact, Heat is working towards (if not already 
done) having Heat itself do this process for you.

I'm all for golden images as a tool. We use them a lot. Like all tools, though, 
there isn't one best tool that works for all cases.

I hope this use case helps.

Thanks,
Kevin


From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, January 08, 2014 8:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [TripleO] Installing from packages in  
tripleo-image-elements

Excerpts from Derek Higgins's message of 2014-01-08 02:11:09 -0800:
 On 08/01/14 05:07, Clint Byrum wrote:
  Excerpts from Fox, Kevin M's message of 2014-01-07 16:27:35 -0800:
  Another piece to the conversation I think is update philosophy. If
  you are always going to require a new image and no customization after
  build ever, ever, the messiness that source installs usually cause in the file
  system image really doesn't matter. The package system allows you to
  cleanly update, add, and remove package bits at runtime. In
  our experimenting with OpenStack, its becoming hard to determine
  which philosophy is better. Golden Images for some things make a lot
  of sense. For other random services, the maintenance of the Golden
  Image seems to be too much to bother with and just installing a few
  packages after image start is preferable. I think both approaches are
  valuable. This may not directly relate to what is best for Triple-O
  elements, but since we are talking philosophy anyway...
 
 
  The golden image approach should be identical to the package approach if
  you are doing any kind of testing work-flow.
 
  Just install a few packages is how you end up with, as Robert said,
  snowflakes. The approach we're taking with diskimage-builder should
  result in that image building extremely rapidly, even if you compiled
  those things from source.

 This is the part of your argument I don't understand: creating images
 with packages is no more likely to result in snowflakes than creating
 images from sources in git.

 You would build an image using packages and at the end of the build
 process you can lock the package versions. Regardless of how the image
 is built you can consider it a golden image. This image is then deployed
 to your hosts and not changed.

 We would still be using diskimage-builder the main difference to the
 whole process is we would end up with a image that has more packages
 installed and no virtual envs.


I'm not saying building images from packages will encourage
snowflakes. I'm saying installing and updating on systems using packages
encourages snowflakes. Kevin was suggesting that the image workflow
wouldn't fit for everything, and thus was opening up the "just install
a few packages on a system" can of worms. I'm saying to Kevin, don't
do that, just make your image work-flow tighter, and suggesting it is
worth it to do that to avoid having snowflakes.



Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-08 Thread Vishvananda Ishaya

On Jan 6, 2014, at 3:50 PM, Jon Bernard jbern...@tuxion.com wrote:

 Hello all,
 
 I would like to propose instance-level snapshots as a feature for
 inclusion in Nova.  An initial draft of the more official proposal is
 here [1], blueprint is here [2].
 
 In a nutshell, this feature will take the existing create-image
 functionality a few steps further by providing the ability to take
 a snapshot of a running instance that includes all of its attached
 volumes.  A coordinated snapshot of multiple volumes for backup
 purposes.  The snapshot operation should occur while the instance is in
 a paused and quiesced state so that each volume snapshot is both
 consistent within itself and with respect to its sibling snapshots.
 
 I still have some open questions on a few topics:
 
 * API changes, two different approaches come to mind:
 
  1. Nova already has a command `createImage` for creating an image of an
 existing instance.  This command could be extended to take an
 additional parameter `all-volumes` that signals the underlying code
 to capture all attached volumes in addition to the root volume.  The
 semantic here is important, `createImage` is used to create
 a template image stored in Glance for later reuse.  If the primary
 intent of this new feature is for backup only, then it may not be
 wise to overlap the two operations in this way.  On the other hand,
 this approach would introduce the least amount of change to the
 existing API, requiring only modification of an existing command
 instead of the addition of an entirely new one.
 
  2. If the feature's primary use is for backup purposes, then a new API
 call may be a better approach, and leave `createImage` untouched.
 This new call could be called `createBackup` and take as a parameter
 the name of the instance.  Although it introduces a new member to the
 API reference, it would allow this feature to evolve without
 introducing regressions in any existing calls.  These two calls could
 share code at some point in the future.

You’ve mentioned “If the feature’s use case is backup” a couple of times
without specifying the answer. I think this is important to the above
question. Also relevant is how the snapshot is stored and potentially
restored.

As you’ve defined the feature so far, it seems like most of it could
be implemented client side:

* pause the instance
* snapshot the instance
* snapshot any attached volumes
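
Sketched with the standard CLIs, that client-side sequence is only a few commands. This is a dry run that prints the plan rather than executing it; the instance and volume names are placeholders, and in practice the attached volume IDs would come from `nova show`:

```shell
# Print the client-side command sequence for a coordinated snapshot:
# pause, image the instance, snapshot each attached volume, unpause.
snapshot_plan() {
    inst=$1; shift
    echo "nova pause $inst"
    echo "nova image-create $inst ${inst}-snap"
    for vol in "$@"; do
        # --force because the volumes are still attached to the instance
        echo "cinder snapshot-create --force True --display-name ${vol}-snap $vol"
    done
    echo "nova unpause $inst"
}

snapshot_plan web01 vol-aaaa vol-bbbb
```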

The only thing missing in this scenario is snapshotting any ephemeral
drives. There are workarounds for this such as:
 * use flavor with no ephemeral storage
 * boot from volume

It is also worth mentioning that snapshotting a boot from volume instance
will actually do most of this for you (everything but pausing the instance)
and additionally give you an image which when booted will lead to a clone
of all of the snapshotted volumes.

So unless there is some additional feature regarding storing or restoring
the backup, I only see one potential area for improvement inside of nova:
Modifying the snapshot command to allow for snapshotting of ephemeral
drives.

If this is an important feature, rather than an all in one command, I
suggest an extension to createImage which would allow you to specify the
drive you wish to snapshot. If you could specify drive: vdb in the snapshot
command it would allow you to snapshot all the components individually.

Vish

 
 * Existing libvirt support:
 
To initially support consistent-across-multiple-volumes snapshots,
we must be able to ask libvirt for a snapshot of an already paused
guest.  I don't believe such a call is currently supported, so
changes to libvirt may be a prerequisite for this feature.
 
 Any contribution, comments, and pieces of advice are much appreciated.
 
 [1]: https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
 [2]: https://blueprints.launchpad.net/nova/+spec/instance-level-snapshots
 
 -- 
 Jon
 


Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Sean Dague
On 01/08/2014 11:40 AM, Noorul Islam Kamal Malmiyoda wrote:
 
 On Jan 8, 2014 9:58 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:

 Hi,

 I do understand why there is a push back for this patch. This patch is
 for infrastructure project which works for multiple projects. Infra
 maintainers should not know specifics of each project in details. If
 this patch is a temporary solution then who will be responsible to
 remove it? 

 
 I am not sure who is responsible for solum related configurations in
 infra project. I see that almost all the infra config for solum project
 is done by solum members. So I think any solum member can submit a patch
 to revert this once we have a permanent solution.
 
 If we need to start this gate, I propose to revert all patches which led
 to this inconsistent state and apply a workaround in the Solum repository,
 which is under the Solum team's full control and review. We need to open a bug
 in the Solum project to track this.

 
 The problematic patch [1] solves a specific problem. Do we have other
 ways to solve it?
 
 Regards,
 Noorul
 
 [1] https://review.openstack.org/#/c/64226

Why is test-requirements.txt getting installed in pre_test instead of
post_test? That installing test-requirements prior to installing devstack
itself causes issues in no way surprises me. You can see that
command is literally the first thing in the console -
http://logs.openstack.org/66/62466/7/gate/gate-solum-devstack-dsvm/49bac35/console.html#_2014-01-08_13_46_15_161

It should be installed right before tests get run, which I assume is L34
of this file -
https://review.openstack.org/#/c/64226/3/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml

Given that is where ./run_tests.sh is run.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [nova][neutron][ipv6]Hairpinning in libvirt, once more

2014-01-08 Thread Vishvananda Ishaya
The logic makes sense to me here. I’m including Evan Callicoat in this response 
in case he has any comments on the points you make below.

Vish
 
On Jan 7, 2014, at 4:57 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 See Sean Collins' review https://review.openstack.org/#/c/56381 which 
 disables hairpinning when Neutron is in use.  tl;dr - please upvote the 
 review.  Long form reasoning follows...
 
 There's a solid logical reason for enabling hairpinning, but it only applies 
 to nova-network.  Hairpinning is used in nova-network so that packets from a 
 machine and destined for that same machine's floating IP address are returned 
 to it.  They then pass through the rewrite rules (within the libvirt filters 
 on the instance's tap interface) that do the static NAT mapping to translate 
 floating IP to fixed IP.
 
 Whoever implemented this assumed that hairpinning in other situations is 
 harmless.  However, this same feature also prevents IPv6 from working - 
 returned neighbor discovery packets panic VMs into thinking they're using a 
 duplicate address on the network.  So we'd like to turn it off.  Accepting 
 that nova-network will change behaviour comprehensively if we just remove the 
 code, we've elected to turn it off only when Neutron is being used and leave 
 nova-network broken for ipv6.
 
 Obviously, this presents an issue, because we're changing the way that 
 Openstack behaves in a user-visible way - hairpinning may not be necessary or 
 desirable for Neutron, but it's still detectable when it's on or off if you 
 try hard enough - so the review comments to date have been conservatively 
 suggesting that we avoid the functional change as much as possible, and 
 there's a downvote to that end.  But having done more investigation I don't 
 think there's sufficient justification to keep the status quo.
 
 We've also talked about leaving hairpinning off if and only if the Neutron 
 plugin explicitly says that it doesn't want to use hairpinning.  We can 
 certainly do this, and I've looked into it, but in practice it's not worth 
 the code and interface changes: 
 
  - Neutron (not 'some drivers' - this is consistent across all of them) does 
 NAT rewriting in the routers now, not on the ports, so hairpinning doesn't 
 serve its intended purpose; what it actually does is waste CPU and bandwidth 
 by receives a packet every time it sends an outgoing packet and precious 
 little else.  The instance doesn't expect these packets, it always ignores 
 these packets, but it receives them anyway.  It's a pointless no-op, though 
 there exists the theoretical possibility that someone is relying on it for 
 their application.
 - it's *only* libvirt that ever turns hairpinning on in the first place - 
 none of the other drivers do it
 - libvirt only turns it on sometimes - for hybrid VIFs it's enabled; if 
 generic VIFs are configured and linuxbridge is in use it's enabled; but when 
 generic VIFs are used with OVS, the enable function fails silently (and, 
 indeed, has been designed to fail silently, it seems).
 
 Given these details, there seems little point in making the code more complex 
 to support a feature that isn't universal and isn't needed; better that we 
 just disable it for Neutron and be done.  So (and test failures aside) could 
 I ask that the core devs check and approve the patch review?
 -- 
 Ian.


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi Doug,

Answers inline again.

Best Regards,
Ildiko

On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
ildiko.van...@ericsson.com wrote:
Hi,

I've started to work on the idea of supporting a kind of tenant/project based 
configuration for Ceilometer. Unfortunately I haven't reached the point of 
having a blueprint that could be registered until now. I do not have a deep 
knowledge about the collector and compute agent services, but this feature 
would require some deep changes for sure. Currently there are pipelines for 
data collection and transformation, where the counters can be specified, about 
which data should be collected and also the time interval for data collection 
and so on. These pipelines can be configured now globally in the pipeline.yaml 
file, which is stored right next to the Ceilometer configuration files.

Yes, the data collection was designed to be configured and controlled by the 
deployer, not the tenant. What benefits do we gain by giving that control to 
the tenant?

ildikov: Sorry, my explanation was not clear. I meant the configuration 
of data collection for projects, which was mentioned by Tim Bell in a previous 
email. This would mean that the project administrator is able to create a data 
collection configuration for his/her own project, which will not affect 
other projects' configurations. The tenant would be able to specify meters 
(enabled/disabled based on which ones are needed) for the given project, also 
with project-specific time intervals, etc.

OK, I think some of the confusion is terminology. Who is a project 
administrator? Is that someone with access to change ceilometer's 
configuration file directly? Someone with a particular role using the API? Or 
something else?

ildikov: As project administrator I meant a user with particular role, a user 
assigned to a tenant.




In my view, we could keep the dynamic meter configuration bp with considering 
to extend it to dynamic configuration of Ceilometer, not just the meters and we 
could have a separate bp for the project based configuration of meters.

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are 
the needs for dynamic configuration updates in ceilometer different from the 
other services?

ildikov: There are some parameters in the configuration file of Ceilometer, 
like log options and notification types, which it would be good to be able to 
configure dynamically. I just wanted to reflect that need. As I see it, 
there are two options here. The first one is to identify the group of 
dynamically modifiable parameters and move them to the API level. The other 
option could be to make some modifications in oslo.config too, so other 
services could also use the benefits of dynamic configuration. The log 
settings could be a good candidate: for example, changing log levels without 
a service restart when debugging the system would be a useful feature for all 
of the OpenStack services.

I misspoke earlier. If we're talking about meters, those are actually defined 
by the pipeline file (not oslo.config). So if we do want that file re-read 
automatically, we can implement that within ceilometer itself, though I'm still 
reluctant to say we want to provide API access for modifying those settings. 
That's *really* not something we've designed the rest of the system to 
accommodate, so I don't know what side-effects we might introduce.

ildikov: In case of oslo.config, I meant the ceilometer.conf file in my answer 
above, not pipeline.yaml. As for the API part, I do not know the consequences 
of that implementation either, so now I'm kind of waiting for the outcome of 
this Dynamic Meters bp.

As far as the other configuration settings, we had the conversation about 
updating those through some sort of API early on, and decided that there are 
already lots of operational tools out there to manage changes to those files. I 
would need to see a list of which options people would want to have changed 
through an API to comment further.

ildikov: Yes, I agree that not all the parameters should be configured 
dynamically. It just popped into my mind regarding the dynamic configuration, 
that there would be a need to configure other configuration parameters, not 
just meters, that is why I mentioned it as a considerable item.

Doug



Doug



If it is ok for you, I will register the bp for this per-project tenant 
settings with some details, when I'm finished with the initial design of how 
this feature could work.

Best Regards,
Ildiko

-Original Message-
From: Neal, Phil [mailto:phil.n...@hp.com]
Sent: Tuesday, January 07, 2014 11:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

For multi-node deployments, implementing something like inotify would allow 
administrators to push configuration changes out to 

Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Ben Nemec

On 2014-01-08 11:16, Sean Dague wrote:

On 01/08/2014 12:06 PM, Doug Hellmann wrote:
snip

Yeah, that's what made me start thinking oslo.sphinx should be called
something else.

Sean, how strongly do you feel about not installing oslo.sphinx in
devstack? I see your point, I'm just looking for alternatives to the
hassle of renaming oslo.sphinx.


Doing the git thing is definitely not the right thing. But I guess I 
got

lost somewhere along the way about what the actual problem is. Can
someone write that up concisely? With all the things that have been
tried/failed, why certain things fail, etc.


The problem seems to be when we pip install -e oslo.config on the 
system, then pip install oslo.sphinx in a venv.  oslo.config is 
unavailable in the venv, apparently because the namespace package for 
o.s causes the egg-link for o.c to be ignored.  Pretty much every other 
combination I've tried (regular pip install of both, or pip install -e 
of both, regardless of where they are) works fine, but there seem to be 
other issues with all of the other options we've explored so far.


We can't remove the pip install -e of oslo.config because it has to be 
used for gating, and we can't pip install -e oslo.sphinx because it's 
not a runtime dep so it doesn't belong in the gate.  Changing the 
toplevel package for oslo.sphinx was also mentioned, but has obvious 
drawbacks too.


I think that about covers what I know so far.

-Ben



Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi,

(You didn't Cc the list, not sure if it was on purpose. I'm not adding it back 
to not break any confidentiality, but feel free to do so.)

Sorry that was just a mistake.

  The point is to configure the data collection configuration for the 
  currently existing meters differently for tenants. It is not just 
  about enabling or disabling of meters. It could be used to change the 
  interval settings of meters, like tenantA would like to receive 
  cpu_util samples in every 10 seconds and tenantB would like to receive 
  cpu_util in every 1 minute, but network.incoming.bytes in every 20 
  seconds. As for disabling meters, for instance tenantA needs 
  disk.read.requests and disk.write.requests, while tenantB doesn't.

 Ok, so this is really about something the _operator_ wants to do, not users. 
 I still don't think it belongs to an API, at least not specific to Ceilometer.

My idea was just about providing the possibility to configure the data 
collection in Ceilometer differently for the different tenants; I didn't mean 
to link it to an API, or at least not in the first place. It could be done by 
the operator as well, for instance, if the polling frequency should be 
different in case of tenants.

Best Regards,
Ildiko

 --
 Julien Danjou
 -- Free Software hacker - independent consultant
 -- http://julien.danjou.info


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Julien Danjou
On Wed, Jan 08 2014, Ildikó Váncsa wrote:

 My idea was just about providing the possibility to configure the data
 collection in Ceilometer differently for the different tenants; I didn't
 mean to link it to an API, or at least not in the first place. It could be
 done by the operator as well, for instance, if the polling frequency should
 be different per tenant.

Yeah, that would work; we would just need to add a list of projects to
the yaml file. We are already doing that for resources anyway, so we can do
it for user and project as well.
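
A hypothetical pipeline.yaml fragment along those lines; the `projects` key does not exist, it is the addition being proposed here, mirroring the existing `resources` list:

```yaml
sources:
  - name: cpu_fast
    interval: 10
    meters:
      - cpu_util
    projects:              # hypothetical per-project filter
      - tenantA-project-id
    sinks:
      - meter_sink
  - name: cpu_slow
    interval: 60
    meters:
      - cpu_util
    projects:
      - tenantB-project-id
    sinks:
      - meter_sink
```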

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info




Re: [openstack-dev] [Solum] Devstack gate is failing

2014-01-08 Thread Noorul Islam Kamal Malmiyoda
On Wed, Jan 8, 2014 at 11:02 PM, Sean Dague s...@dague.net wrote:
 On 01/08/2014 11:40 AM, Noorul Islam Kamal Malmiyoda wrote:

 On Jan 8, 2014 9:58 PM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:

 Hi,

 I do understand why there is a push back for this patch. This patch is
 for infrastructure project which works for multiple projects. Infra
 maintainers should not know specifics of each project in details. If
 this patch is a temporary solution then who will be responsible to
 remove it?


 I am not sure who is responsible for solum related configurations in
 infra project. I see that almost all the infra config for solum project
 is done by solum members. So I think any solum member can submit a patch
 to revert this once we have a permanent solution.

 If we need to start this gate, I propose to revert all patches which led
 to this inconsistent state and apply a workaround in the Solum repository,
 which is under the Solum team's full control and review. We need to open a bug
 in the Solum project to track this.


 The problematic patch [1] solves a specific problem. Do we have other
 ways to solve it?

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/64226

 Why is test-requirements.txt getting installed in pre_test instead of
 post_test? That installing test-requirements prior to installing devstack
 itself causes issues in no way surprises me. You can see that
 command is literally the first thing in the console -
 http://logs.openstack.org/66/62466/7/gate/gate-solum-devstack-dsvm/49bac35/console.html#_2014-01-08_13_46_15_161

 It should be installed right before tests get run, which I assume is L34
 of this file -
 https://review.openstack.org/#/c/64226/3/modules/openstack_project/files/jenkins_job_builder/config/solum.yaml

 Given that is where ./run_tests.sh is run.


This might help, but run_tests.sh will import oslo.config anyhow. I
need to test this and see.

Regards,
Noorul



Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-08 Thread Mark Washenberger
On Mon, Jan 6, 2014 at 3:50 PM, Jon Bernard jbern...@tuxion.com wrote:

 Hello all,

 I would like to propose instance-level snapshots as a feature for
 inclusion in Nova.  An initial draft of the more official proposal is
 here [1], blueprint is here [2].

 In a nutshell, this feature will take the existing create-image
 functionality a few steps further by providing the ability to take
 a snapshot of a running instance that includes all of its attached
 volumes.  A coordinated snapshot of multiple volumes for backup
 purposes.  The snapshot operation should occur while the instance is in
 a paused and quiesced state so that each volume snapshot is both
 consistent within itself and with respect to its sibling snapshots.

 I still have some open questions on a few topics:

 * API changes, two different approaches come to mind:

   1. Nova already has a command `createImage` for creating an image of an
  existing instance.  This command could be extended to take an
  additional parameter `all-volumes` that signals the underlying code
  to capture all attached volumes in addition to the root volume.  The
  semantic here is important, `createImage` is used to create
  a template image stored in Glance for later reuse.  If the primary
  intent of this new feature is for backup only, then it may not be
  wise to overlap the two operations in this way.  On the other hand,
  this approach would introduce the least amount of change to the
  existing API, requiring only modification of an existing command
  instead of the addition of an entirely new one.

   2. If the feature's primary use is for backup purposes, then a new API
  call may be a better approach, and leave `createImage` untouched.
  This new call could be called `createBackup` and take as a parameter
  the name of the instance.  Although it introduces a new member to the
  API reference, it would allow this feature to evolve without
  introducing regressions in any existing calls.  These two calls could
  share code at some point in the future.

 * Existing libvirt support:

 To initially support consistent-across-multiple-volumes snapshots,
 we must be able to ask libvirt for a snapshot of an already paused
 guest.  I don't believe such a call is currently supported, so
 changes to libvirt may be a prerequisite for this feature.

 Any contribution, comments, and pieces of advice are much appreciated.

 [1]: https://wiki.openstack.org/wiki/Nova/InstanceLevelSnapshots
 [2]: https://blueprints.launchpad.net/nova/+spec/instance-level-snapshots


Hi Jon,

In your specification in the Snapshot Storage section you say it might be
nice to combine all of the snapshot images into a single OVF file that
contains all volumes attached to the instance at the time of snapshot. I'd
love it if, by the time you get to the point of implementing this storage
part, we have an option available to you in Glance for storing something
akin to an Instance template. An instance template would be an entity
stored in Glance with references to each volume or image that was uploaded
as part of the snapshot. As an example, it could be something like

instance_template: {
   /dev/sda: /v2/images/some-imageid,
   /dev/sdb: some url for a cinder volume-like entity
}

Essentially, this kind of storage would bring the OVF metadata up into
Glance rather than burying it down in an image byte stream where it is
harder to search or access.

This is an idea that has been discussed several times before, generally
favorably, and if we move ahead with instance-level snapshots in Nova I'd
love to move quickly to support it in Glance. Part of the reason for the
delay of this feature was my worry that if Glance jumps out ahead, we'll
end up with some instance template format that Nova doesn't really want, so
this opportunity for collaboration on use cases would be fantastic.

If after a bit more discussion in this thread, folks think these templates
in Glance would be a good idea, we can try to draw up a proposal for how to
implement the first cut of this feature in Glance.
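If such instance templates existed in Glance, a consumer restoring a snapshot might walk the device map roughly like this. This is a sketch only: the entity layout follows the example above, and the cinder-style URL and the helper name are made up for illustration, not real Glance or Cinder schemes.

```python
def boot_devices(template):
    """Return (device, location) pairs from a hypothetical instance-template
    record, sorted so the root device (/dev/sda) comes first and a restore
    can rebuild volumes in a deterministic order."""
    return sorted(template['instance_template'].items())


# Entity shaped like the example in the message above; the cinder-style
# URL below is purely illustrative.
template = {
    'instance_template': {
        '/dev/sdb': 'cinder://some-volume-snapshot',
        '/dev/sda': '/v2/images/some-imageid',
    }
}
```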

Thanks




 --
 Jon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi,

  My idea was just about providing the possibility to configure data 
  collection in Ceilometer differently for different tenants; I 
  didn't mean to link it to an API, or at least not in the first place. 
  It could be done by the operator as well, for instance if the polling 
  frequency should differ between tenants.

 Yeah, that would work, we would just need to add a list of projects to the 
 yaml file. We are already doing that for resources anyway, we can do it for 
 users and projects as well.

Ok, that sounds good. Then I will create a blueprint based on this direction.

Best Regards,
Ildiko

 --
 Julien Danjou
 ;; Free Software hacker ; independent consultant ;; http://julien.danjou.info
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec openst...@nemebean.com wrote:

 On 2014-01-08 11:16, Sean Dague wrote:

 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 snip

 Yeah, that's what made me start thinking oslo.sphinx should be called
 something else.

 Sean, how strongly do you feel about not installing oslo.sphinx in
 devstack? I see your point, I'm just looking for alternatives to the
 hassle of renaming oslo.sphinx.


 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.


 The problem seems to be when we pip install -e oslo.config on the system,
 then pip install oslo.sphinx in a venv.  oslo.config is unavailable in the
 venv, apparently because the namespace package for o.s causes the egg-link
 for o.c to be ignored.  Pretty much every other combination I've tried
 (regular pip install of both, or pip install -e of both, regardless of
 where they are) works fine, but there seem to be other issues with all of
 the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
 used for gating, and we can't pip install -e oslo.sphinx because it's not a
 runtime dep so it doesn't belong in the gate.  Changing the toplevel
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.

 I think that about covers what I know so far.


Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3

Doug





 -Ben

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa
ildiko.van...@ericsson.comwrote:

  Hi Doug,



 Answers inline again.



 Best Regards,

 Ildiko



 On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:

 Hi,

 I've started to work on the idea of supporting a kind of tenant/project
 based configuration for Ceilometer. Unfortunately I haven't reached the
 point of having a blueprint that could be registered until now. I do not
 have a deep knowledge about the collector and compute agent services, but
 this feature would require some deep changes for sure. Currently there are
 pipelines for data collection and transformation, where the counters can be
 specified, about which data should be collected and also the time interval
 for data collection and so on. These pipelines can be configured now
 globally in the pipeline.yaml file, which is stored right next to the
 Ceilometer configuration files.



 Yes, the data collection was designed to be configured and controlled by
 the deployer, not the tenant. What benefits do we gain by giving that
 control to the tenant?



 ildikov: Sorry, my explanation was not clear. I meant there the
 configuration of data collection for projects, what was mentioned by Tim
 Bell in a previous email. This would mean that the project administrator is
 able to create a data collection configuration for his/her own project,
 which will not affect the other project’s configuration. The tenant would
 be able to specify meters (enabled/disable based on which ones are needed)
 for the given project also with project specific time intervals, etc.



 OK, I think some of the confusion is terminology. Who is a project
 administrator? Is that someone with access to change ceilometer's
 configuration file directly? Someone with a particular role using the API?
 Or something else?



 ildikov: As project administrator I meant a user with particular role, a
 user assigned to a tenant.


OK, so like I said, we did not design the system with the idea that a user
of the cloud (rather than the deployer of the cloud) would have any control
over what data was collected. They can ask questions about only some of the
data, but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter
their role) an off switch for the data collection. As Julien pointed out,
it can have a negative effect on billing -- if they tell the cloud not to
collect data about what instances are created, then the deployer can't
bill for those instances. Differentiating between the values that always
must be collected and the ones the user can control makes providing an API
to manage data collection more complex.

Is there some underlying use case behind all of this that someone could
describe in more detail, so we might be able to find an alternative, or
explain how to use the existing features to achieve the goal? For example,
it is already possible to change the pipeline config file to control which
data is collected and stored. If we make the pipeline code in ceilometer
watch for changes to that file, and rebuild the pipelines when the config
is updated, would that satisfy the requirements?
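The file-watching idea can be sketched with a simple mtime poll. Everything here is invented for illustration (watch_pipeline_config and the rebuild callback are not actual Ceilometer code), and a production version would more likely use inotify than polling:

```python
import os
import time


def watch_pipeline_config(path, rebuild_pipelines, poll_interval=1.0,
                          max_polls=None):
    """Poll the pipeline config file's mtime and invoke a rebuild callback
    whenever it changes.  max_polls bounds the loop (handy for tests)."""
    last_mtime = os.stat(path).st_mtime
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll_interval)
        polls += 1
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:
            last_mtime = mtime
            rebuild_pipelines(path)  # re-read the YAML, swap in new pipelines
```

The callback boundary is the important design point: the watcher only notices the change, while pipeline construction stays wherever it already lives.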

In my view, we could keep the dynamic meter configuration bp with
 considering to extend it to dynamic configuration of Ceilometer, not just
 the meters and we could have a separate bp for the project based
 configuration of meters.



 Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
 are the needs for dynamic configuration updates in ceilometer different
 from the other services?



 ildikov: There are some parameters in the configuration file of
 Ceilometer, like log options and notification types, which would be good to
 be able to configure them dynamically. I just wanted to reflect to that
 need. As I see, there are two options here. The first one is to identify
 the group of the dynamically modifiable parameters and move them to the API
 level. The other option could be to make some modifications in oslo.config
 too, so other services also could use the benefits of dynamic
 configuration. For example the log settings could be a good candidate, as
 for example the change of log levels, without service restart, in case
 debugging the system can be a useful feature for all of the OpenStack
 services.



 I misspoke earlier. If we're talking about meters, those are actually
 defined by the pipeline file (not oslo.config). So if we do want that file
 re-read automatically, we can implement that within ceilometer itself,
 though I'm still reluctant to say we want to provide API access for
 modifying those settings. That's *really* not something we've designed the
 rest of the system to accommodate, so I don't know what side-effects we
 might introduce.



 ildikov: In case of oslo.config, I meant the ceilometer.conf file in my
 answer above, not pipeline.yaml. As for the API part, I do not know the
 consequences of that 

Re: [openstack-dev] [nova] Bogus -1 scores from turbo hipster

2014-01-08 Thread Samuel Merritt

On 1/7/14 2:53 PM, Michael Still wrote:

Hi. Thanks for reaching out about this.

It seems this patch has now passed turbo hipster, so I am going to
treat this as a more theoretical question than perhaps you intended. I
should note though that Joshua Hesketh and I have been trying to read
/ triage every turbo hipster failure, but that has been hard this week
because we're both at a conference.

The problem this patch faced is that we are having trouble defining
what is a reasonable amount of time for a database migration to run
for. Specifically:

2014-01-07 14:59:32,012 [output] 205 - 206...
2014-01-07 14:59:32,848 [heartbeat]
2014-01-07 15:00:02,848 [heartbeat]
2014-01-07 15:00:32,849 [heartbeat]
2014-01-07 15:00:39,197 [output] done

So applying migration 206 took slightly over a minute (67 seconds).
Our historical data (mean + 2 standard deviations) says that this
migration should take no more than 63 seconds. So this only just
failed the test.


It seems to me that requiring a runtime less than (mean + 2 stddev) 
leads to a false-positive rate of 1 in 40, right? If the runtimes have a 
normal(-ish) distribution, then 95% of them will be within 2 standard 
deviations of the mean, so that's 1 in 20 falling outside that range. 
Then discard the ones that are faster than (mean - 2 stddev), and that 
leaves 1 in 40. Please correct me if I'm wrong; I'm no statistician.
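The back-of-envelope figure checks out: the 95% rule gives roughly 1 in 40, while the exact one-sided tail of a normal distribution is closer to 1 in 44 (only the slow tail fails the check; the fast tail passes):

```python
from statistics import NormalDist

# Probability that a normally distributed runtime lands above mean + 2*stddev.
p_upper = 1 - NormalDist().cdf(2.0)
print("tail probability: %.4f (about 1 in %.0f runs)" % (p_upper, 1 / p_upper))
```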


Such a high false-positive rate may make it too easy to ignore turbo hipster 
as the bot that cried wolf. This problem already exists with Jenkins and 
the devstack/tempest tests; when one of those fails, I don't wonder what 
I broke, but rather how many times I'll have to recheck the patch until 
the tests pass.


Unfortunately, I don't have a solution to offer, but perhaps someone 
else will.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Joe Gordon
On Jan 8, 2014 7:12 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 I'd like to propose that we add another item to the list here [1] that is
basically related to what happens when the 3rd party CI job votes a -1 on
your patch.  This would include:

 1. Documentation on how to analyze the results and a good overview of
what the job does (like the docs we have for check/gate testing now).
 2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
 3. Who to contact if you can't figure out what's going on with the job.

 Ideally this information would be in the comments when the job scores a
-1 on your patch, or at least it would leave a comment with a link to a
wiki for that job like we have with Jenkins today.

 I'm all for more test coverage but we need some solid documentation
around that when it's not owned by the community so we know what to do with
the results if they seem like false negatives.

 If no one is against this or has something to add, I'll update the wiki.

-1 to putting this in the wiki. This isn't a nova only issue. We are trying
to collect the requirements here:

https://review.openstack.org/#/c/63478/


 [1]
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Tim Bell


Thanks for the clarifications. Given the role descriptions as provided, I no 
longer think there is a need for an API call or per-project meter 
enable/disable. Thus, the inotify approach would seem viable (and much 
simpler to implement, since the state is clearly defined across daemon 
restarts).



Tim


From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
Sent: 08 January 2014 19:27
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer



On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
ildiko.van...@ericsson.commailto:ildiko.van...@ericsson.com wrote:
Hi Doug,

Answers inline again.

Best Regards,
Ildiko

On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
ildiko.van...@ericsson.commailto:ildiko.van...@ericsson.com wrote:
Hi,

I've started to work on the idea of supporting a kind of tenant/project based 
configuration for Ceilometer. Unfortunately I haven't reached the point of 
having a blueprint that could be registered until now. I do not have a deep 
knowledge about the collector and compute agent services, but this feature 
would require some deep changes for sure. Currently there are pipelines for 
data collection and transformation, where the counters can be specified, about 
which data should be collected and also the time interval for data collection 
and so on. These pipelines can be configured now globally in the pipeline.yaml 
file, which is stored right next to the Ceilometer configuration files.

Yes, the data collection was designed to be configured and controlled by the 
deployer, not the tenant. What benefits do we gain by giving that control to 
the tenant?

ildikov: Sorry, my explanation was not clear. I meant there the configuration 
of data collection for projects, what was mentioned by Tim Bell in a previous 
email. This would mean that the project administrator is able to create a data 
collection configuration for his/her own project, which will not affect the 
other project's configuration. The tenant would be able to specify meters 
(enabled/disable based on which ones are needed) for the given project also 
with project specific time intervals, etc.

OK, I think some of the confusion is terminology. Who is a project 
administrator? Is that someone with access to change ceilometer's 
configuration file directly? Someone with a particular role using the API? Or 
something else?

ildikov: As project administrator I meant a user with particular role, a user 
assigned to a tenant.

OK, so like I said, we did not design the system with the idea that a user of 
the cloud (rather than the deployer of the cloud) would have any control over 
what data was collected. They can ask questions about only some of the data, 
but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter their 
role) an off switch for the data collection. As Julien pointed out, it can 
have a negative effect on billing -- if they tell the cloud not to collect data 
about what instances are created, then the deployer can't bill for those 
instances. Differentiating between the values that always must be collected and 
the ones the user can control makes providing an API to manage data collection 
more complex.

Is there some underlying use case behind all of this that someone could 
describe in more detail, so we might be able to find an alternative, or explain 
how to use the existing features to achieve the goal? For example, it is 
already possible to change the pipeline config file to control which data is 
collected and stored. If we make the pipeline code in ceilometer watch for 
changes to that file, and rebuild the pipelines when the config is updated, 
would that satisfy the requirements?

In my view, we could keep the dynamic meter configuration bp with considering 
to extend it to dynamic configuration of Ceilometer, not just the meters and we 
could have a separate bp for the project based configuration of meters.

Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are 
the needs for dynamic configuration updates in ceilometer different from the 
other services?

ildikov: There are some parameters in the configuration file of Ceilometer, 
like log options and notification types, which would be good to be able to 
configure them dynamically. I just wanted to reflect to that need. As I see, 
there are two options here. The first one is to identify the group of the 
dynamically modifiable parameters and move them to the API level. The other 
option could be to make some modifications in oslo.config too, so other 
services also could use the benefits of dynamic configuration. For example the 
log settings could be a good candidate, as for example the change of log 
levels, without service restart, in case debugging the system can be a useful 
feature for all of the OpenStack services.

I misspoke earlier. If we're talking about meters, 

Re: [openstack-dev] [nova] minimum review period for functional changes that break backwards compatibility

2014-01-08 Thread Jay Pipes
On Wed, 2014-01-08 at 14:26 +0100, Thierry Carrez wrote:
 Tim Bell wrote:
  +1 from me too UpgradeImpact is a much better term.
 
 So this one is already documented[1], but I don't know if it actually
 triggers anything yet.
 
 Should we configure it to post to openstack-operators, the same way as
 SecurityImpact posts to openstack-security ?

Huge +1 from me here.

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple config files for neutron server

2014-01-08 Thread Jay Pipes
On Wed, 2014-01-08 at 07:21 -0500, Sean Dague wrote:
 On 01/06/2014 02:58 PM, Jay Pipes wrote:
  On Mon, 2014-01-06 at 23:45 +0400, Eugene Nikanorov wrote:
  Hi folks,
 
 
  Recently we had a discussion with Sean Dague on the matter.
  Currently Neutron server has a number of configuration files used for
  different purposes:
   - neutron.conf - main configuration parameters, plugins, db and mq
  connections
   - plugin.ini - plugin-specific networking settings
   - conf files for ml2 mechanisms drivers (AFAIK to be able to use
  several mechanism drivers we need to pass all of these conf files to
  neutron server)
   - services.conf - recently introduced conf-file to gather
  vendor-specific parameters for advanced services drivers.
  Particularly, services.conf was introduced to avoid polluting
  'generic' neutron.conf with vendor parameters and sections.
 
 
  The discussion with Sean was about whether to add services.conf to
  neutron-server launching command in devstack
  (https://review.openstack.org/#/c/64377/ ). services.conf would be 3rd
  config file that is passed to neutron-server along with neutron.conf
  and plugin.ini.
 
 
  Sean has an argument that providing many conf files in a command line
  is not a good practice, suggesting setting up configuration directory
  instead. There is no such capability in neutron right now so I'd like
   to hear opinions on this before putting more effort into resolving this
   with another approach than the one used in the patch on review.
  
  I'd say just put the additional conf file on the command line for now.
  Adding in support to oslo.cfg for a config directory can come later.
  
  Just my 2 cents,
 
 So the net of that is that in a production environment, in order to
 change some services, you'd be expected to change the init scripts to
 list the right config files.

Good point.

 That seems *really* weird, and also really different from the rest of
 OpenStack services. It also means you can't use the oslo config
 generator to generate documented samples.
 
 If neutron had been running a grenade job, it would have blocked this
 attempted change, because it would require adding config files between
 releases.
 
 So this all smells pretty bad to me. Especially in the context of
 migration paths from nova (which handles this very differently) = neutron.

So, I was under the impression that the Neutron changes to require a
services.conf had *already* been merged into master, and therefore the
problem domain here was not whether the services.conf addition was the
right approach, but rather *how to deal with it in devstack*, and that's
why I wrote to just add it to the command line in the devstack builder.

A better (upstream in Neutron) solution would have been to use something
like an include.d/ directive in neutron.conf. But I thought that we
were past the implementation point in Neutron?
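An include.d-style loader is straightforward to sketch with the stdlib. This is a hypothetical helper, not existing Neutron or oslo code; oslo.config would need equivalent support before Neutron could rely on it:

```python
import configparser
import glob
import os


def load_config(main_conf, conf_dir=None):
    """Read a main conf file plus every *.conf in conf_dir, in sorted order.

    Later files override earlier ones, so vendor/service settings can live
    in their own drop-in files instead of being listed on the command line.
    """
    parser = configparser.ConfigParser()
    files = [main_conf]
    if conf_dir and os.path.isdir(conf_dir):
        files += sorted(glob.glob(os.path.join(conf_dir, '*.conf')))
    read = parser.read(files)  # returns the files actually parsed
    return parser, read
```

With this shape, adding a new service config means dropping a file into conf.d/ rather than editing init scripts.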

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Common SSH

2014-01-08 Thread Sergey Skripnick






On Wed, Jan 8, 2014 at 10:43 AM, Eric Windisch ewindi...@docker.com wrote:

 About spur: spur looks ok, but it is a bit complicated inside (it uses
 separate threads for non-blocking stdin/stderr reading [1]) and I don't
 know how it would work with eventlet.

 That does sound like it might cause issues. What would we need to do
 to test it?

 Looking at the code, I don't expect it to be an issue. The
 monkey-patching will cause eventlet.spawn to be called for
 threading.Thread. The code looks eventlet-friendly enough on the
 surface. Error handling around file read/write could be affected, but
 it also looks fine.

 Thanks for that analysis Eric.

 Is there any reason for us to prefer one approach over the other, then?

 Doug

So, there is only one reason left -- the oslo lib is simpler and more
lightweight (it does not use threads). Anyway, this class is used by
stackforge/rally and may be used by other projects instead of the buggy
oslo.processutils.ssh.



--
Regards,
Sergey Skripnick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Pecan][Security] Pecan SecureController vs. Nova policy

2014-01-08 Thread Georgy Okrokvertskhov
Hi,

Keeping policy control in one place is a good idea. We can use the standard
policy approach and keep the access control configuration in a json file, as is
done in Nova and other projects.
Keystone uses wrapper function for methods. Here is a wrapper code:
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L111.
Each controller method has a @protected() wrapper, so the method information is
available through the Python f.__name__ attribute instead of URL parsing. It
means that some RBAC parts are scattered among the code anyway.

If we want to avoid RBAC being scattered among the code, we can use the URL
parsing approach and have all the logic inside a hook. In a pecan hook the WSGI
environment is already created and there is full access to request
parameters/content. We can map the URL to a policy key.

So we have two options:
1. Add wrapper to each API method like all other project did
2. Add a hook with URL parsing which maps path to policy key.
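Option 2 could be sketched roughly like this. It is framework-agnostic on purpose: in Pecan this logic would live in a hook's before() method, and the resource names, policy keys, and the default-deny behavior are all assumptions for illustration:

```python
def policy_key_for(method, path):
    """Map an HTTP request onto a policy rule name, e.g.
    POST /v1/assemblies -> 'assemblies:create' (names are illustrative)."""
    actions = {'GET': 'get', 'POST': 'create',
               'PUT': 'update', 'DELETE': 'delete'}
    parts = [p for p in path.split('/') if p]
    # Drop a leading API version segment such as 'v1'.
    if parts and parts[0].startswith('v') and parts[0][1:].isdigit():
        parts = parts[1:]
    resource = parts[0] if parts else 'default'
    return '%s:%s' % (resource, actions.get(method, 'get'))


def enforce(policy, method, path, roles):
    """Raise unless one of the caller's roles is allowed by the matched rule.
    Unknown rules deny by default in this sketch."""
    key = policy_key_for(method, path)
    allowed = policy.get(key, set())
    if not allowed & set(roles):
        raise PermissionError('policy %s forbids roles %r' % (key, roles))
```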


Thanks
Georgy



On Wed, Jan 8, 2014 at 9:05 AM, Kurt Griffiths kurt.griffi...@rackspace.com
 wrote:

  Yeah, that could work. The main thing is to try and keep policy control
 in one place if you can rather than sprinkling it all over the place.

   From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Wednesday, January 8, 2014 at 10:41 AM

 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
 SecureController vs. Nova policy

   Hi Kurt,

  As for WSGI middleware I think about Pecan hooks which can be added
 before actual controller call. Here is an example how we added a hook for
 keystone information collection:
 https://review.openstack.org/#/c/64458/4/solum/api/auth.py

  What do you think, will this approach with Pecan hooks work?

  Thanks
 Georgy


 On Tue, Jan 7, 2014 at 2:25 PM, Kurt Griffiths 
 kurt.griffi...@rackspace.com wrote:

  You might also consider doing this in WSGI middleware:

  Pros:

    - Consolidates policy code in one place, making it easier to audit
    and maintain
    - Simple to turn policy on/off – just don’t insert the middleware
    when off!
    - Does not preclude the use of oslo.policy for rule checking
    - Blocks unauthorized requests before they have a chance to touch the
    web framework or app. This reduces your attack surface and can improve
    performance (since the web framework has yet to parse the request).

 Cons:

- Doesn't work for policies that require knowledge that isn’t
available this early in the pipeline (without having to duplicate a lot of
code)
- You have to parse the WSGI environ dict yourself (this may not be a
big deal, depending on how much knowledge you need to glean in order to
enforce the policy).
    - You have to keep your HTTP path matching in sync with your
    route definitions in the code. If you have full test coverage, you will
    know when you get out of sync. That being said, API routes tend to be 
 quite
    stable in relation to other parts of the code implementation once you
    have settled on your API spec.

 I’m sure there are other pros and cons I missed, but you can make your
 own best judgement whether this option makes sense in Solum’s case.
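A stdlib-only sketch of the middleware option follows. The X-Roles header and the rule table are assumptions for illustration, not Solum's actual scheme:

```python
class PolicyMiddleware:
    """WSGI middleware that enforces a simple role-based policy before the
    request ever reaches the framework or application."""

    def __init__(self, app, rules):
        self.app = app
        # rules: (HTTP method, path prefix) -> set of roles allowed through
        self.rules = rules

    def __call__(self, environ, start_response):
        method = environ.get('REQUEST_METHOD', 'GET')
        path = environ.get('PATH_INFO', '/')
        # Assumes an auth middleware earlier in the pipeline set X-Roles.
        roles = set(environ.get('HTTP_X_ROLES', '').split(','))
        for (rule_method, prefix), allowed in self.rules.items():
            if method == rule_method and path.startswith(prefix):
                if not roles & allowed:
                    start_response('403 Forbidden',
                                   [('Content-Type', 'text/plain')])
                    return [b'Forbidden by policy']
                break
        return self.app(environ, start_response)
```

Note how the rejection happens before self.app is ever called, which is exactly the attack-surface and performance point made in the pros above.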

   From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Tuesday, January 7, 2014 at 6:54 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum][Pecan][Security] Pecan
 SecureController vs. Nova policy




 On Mon, Jan 6, 2014 at 6:26 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 Hi Dough,

 Thank you for pointing to this code. As I see, you use the OpenStack policy
 framework but not the Pecan security features. How do you implement
 fine-grained access control, such as users allowed to read only versus
 writers and admins? Can you block part of the API methods for a specific
 user, like access to create methods for a specific user role?


  The policy enforcement isn't simple on/off switching in ceilometer, so
 we're using the policy framework calls in a couple of places within our API
 code (look through v2.py for examples). As a result, we didn't need to
 build much on top of the existing policy module to interface with pecan.

  For your needs, it shouldn't be difficult to create a couple of
 decorators to combine with pecan's hook framework to enforce the policy,
 which might be less complex than trying to match the operating model of the
 policy system to pecan's security framework.

  This is the sort of thing that should probably go through Oslo and be
 shared, so please consider contributing to the incubator when you have
 something working.

  Doug




  Thanks
 Georgy


 On Mon, Jan 6, 2014 at 2:45 PM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:




  On Mon, Jan 6, 2014 at 2:56 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

  Hi,

  In Solum project we will need to 

Re: [openstack-dev] [nova] Change I005e752c: Whitelist external netaddr requirement, for bug 1266513, ineffective for me

2014-01-08 Thread Jeremy Stanley
Note that, per the most recent updates in the bug, netaddr has
started uploading their releases to PyPI again so we should
hopefully be able to revert any workarounds we added for it. This
unfortunately does not hold true for other requirements of some
projects (netifaces in swift, lazr.restful in reviewday and
elastic-recheck, et cetera), so we need to keep plugging the hole
with workarounds there in the meantime.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
 
 
 From: ext Doug Hellmann [doug.hellm...@dreamhost.com]
 Sent: Wednesday, January 08, 2014 8:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
 
 
 On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
 ildiko.van...@ericsson.commailto:ildiko.van...@ericsson.com wrote:
 
 Hi Doug,
 
 Answers inline again.
 
 Best Regards,
 
 Ildiko
 
 
 On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
 ildiko.van...@ericsson.commailto:ildiko.van...@ericsson.com wrote:
 
 Hi,
 
 I've started to work on the idea of supporting a kind of tenant/project
  based configuration for Ceilometer. Unfortunately I haven't reached
  the point of having a blueprint that could be registered until now.
  I do not have a deep knowledge about the collector and compute agent
  services, but this feature would require some deep changes for sure.
  Currently there are pipelines for data collection and transformation,
  where the counters can be specified, about which data should be
  collected and also the time interval for data collection and so on.
  These pipelines can be configured now globally in the pipeline.yaml file,
  which is stored right next to the Ceilometer configuration files.
 
 Yes, the data collection was designed to be configured and controlled by
  the deployer, not the tenant. What benefits do we gain by giving that
  control to the tenant?
 
 ildikov: Sorry, my explanation was not clear. I meant there the configuration
  of data collection for projects, what was mentioned by Tim Bell in a
  previous email. This would mean that the project administrator is able to
  create a data collection configuration for his/her own project, which will
  not affect the other project’s configuration. The tenant would be able to
  specify meters (enabled/disable based on which ones are needed) for the given
  project also with project specific time intervals, etc.
 
 OK, I think some of the confusion is terminology.
 Who is a project administrator? Is that someone with access to change
  ceilometer's configuration file directly? Someone with a particular role
  using the API? Or something else?
 
 ildikov: As project administrator I meant a user with particular role,
  a user assigned to a tenant.
 
 
 OK, so like I said, we did not design the system with the idea that a
  user of the cloud (rather than the deployer of the cloud) would have
  any control over what data was collected. They can ask questions about
  only some of the data, but they can't tell ceilometer what to collect.
 There's a certain amount of danger in giving the cloud user
  (no matter their role) an off switch for the data collection.
 
  As Julien pointed out, it can have a negative effect on billing
  -- if they tell the cloud not to collect data about what instances
  are created, then the deployer can't bill for those instances.
  Differentiating between the values that always must be collected and
  the ones the user can control makes providing an API to manage data
  collection more complex.
 
 Is there some underlying use case behind all of this that someone could
  describe in more detail, so we might be able to find an alternative, or
  explain how to use the existing features to achieve the goal?
 
  For example, it is already possible to change the pipeline config file
  to control which data is collected and stored.
  If we make the pipeline code in ceilometer watch for changes to that file,
  and rebuild the pipelines when the config is updated,
  would that satisfy the requirements?
 

Yes. That's exactly the requirement for our blueprint: to avoid a ceilometer 
restart for changes to take effect when the config file changes.
API support was added later based on requests in this mail thread. We 
actually don't need the APIs, and they can be removed.

So as you mentioned above, whenever the config file is changed, ceilometer 
should update the meters accordingly.
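A minimal sketch of the reload-on-change behaviour being discussed, assuming a
simple mtime poll rather than inotify (the `parse` hook is a hypothetical
stand-in for the real pipeline-building code, not ceilometer's actual API):

```python
import os


class PipelineReloader(object):
    """Reload pipeline definitions when the config file changes on disk.

    Sketch of the approach from the thread: poll the file's mtime and
    rebuild the pipelines only when it has changed, instead of requiring
    a ceilometer restart.
    """

    def __init__(self, path, parse=lambda text: text):
        self.path = path
        self.parse = parse            # hypothetical: real code builds pipelines
        self.mtime = None
        self.pipelines = None
        self.reload_if_changed()

    def reload_if_changed(self):
        """Rebuild the pipelines if the file changed; return True if so."""
        mtime = os.path.getmtime(self.path)
        if mtime == self.mtime:
            return False
        with open(self.path) as f:
            self.pipelines = self.parse(f.read())
        self.mtime = mtime
        return True
```

An inotify-based watcher would replace the polling with filesystem events, but
the rebuild-on-change logic stays the same.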



 
 
 In my view, we could keep the dynamic meter configuration bp with considering
  to extend it to dynamic configuration of Ceilometer, not just the meters and
  we could have a separate bp for the project based configuration of meters.
 Ceilometer uses oslo.config, just like all of the rest of OpenStack. How are
  the needs for dynamic configuration updates in ceilometer different from
  the other services?
 
 
 ildikov: There are some parameters in the configuration file of Ceilometer,
  like log options and notification types, which would be good to be able to
  configure them dynamically. I just wanted to reflect to that need. As I see,
  there are two options here. The first one is to identify the group of the
  dynamically modifiable parameters and move them to the API level. The other
  option could be to make some modifications in oslo.config too, so other
  services also could use the benefits of dynamic configuration. For 

Re: [openstack-dev] [nova][infra] nova py27 unit test failures in libvirt

2014-01-08 Thread Jeremy Stanley
On 2014-01-07 07:17:58 -0500 (-0500), Sean Dague wrote:
 This looks like it's a 100% failure bug at this point. I expect that
 because of timing it's based on a change in the base image due to
 nodepool rebuilding.

Actually not... Nova's Python 2.7 unit tests don't run on
nodepool-managed workers, just static (manually built, long-running)
Ubuntu VMs.

But the bug has the details at this point. In short, a lurking
misconfiguration, triggered by an update to install libvirt-dev,
caused the latest libvirt from the Ubuntu Cloud Archive to be installed,
and Nova doesn't support newer libvirt versions.
-- 
Jeremy Stanley



Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 2:08 PM, Tim Bell tim.b...@cern.ch wrote:



 Thanks for the clarifications. Given the role descriptions as provided, I
 no longer think there is a need for an API call or per project meter
 enable/disable. Thus, the inotify approach would seem to be viable (and
 much simpler to implement since the state is clearly defined across daemon
 restarts)

Good, thanks, Tim.

Doug






 Tim





 *From:* Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 *Sent:* 08 January 2014 19:27

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer







 On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:

  Hi Doug,



 Answers inline again.



 Best Regards,

 Ildiko



 On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:

 Hi,

 I've started to work on the idea of supporting a kind of tenant/project
 based configuration for Ceilometer. Unfortunately I haven't reached the
 point of having a blueprint that could be registered until now. I do not
 have a deep knowledge about the collector and compute agent services, but
 this feature would require some deep changes for sure. Currently there are
 pipelines for data collection and transformation, where the counters can be
 specified, about which data should be collected and also the time interval
 for data collection and so on. These pipelines can be configured now
 globally in the pipeline.yaml file, which is stored right next to the
 Ceilometer configuration files.



 Yes, the data collection was designed to be configured and controlled by
 the deployer, not the tenant. What benefits do we gain by giving that
 control to the tenant?



 ildikov: Sorry, my explanation was not clear. I meant there the
 configuration of data collection for projects, what was mentioned by Tim
 Bell in a previous email. This would mean that the project administrator is
 able to create a data collection configuration for his/her own project,
 which will not affect the other project’s configuration. The tenant would
 be able to specify meters (enabled/disable based on which ones are needed)
 for the given project also with project specific time intervals, etc.



 OK, I think some of the confusion is terminology. Who is a project
 administrator? Is that someone with access to change ceilometer's
 configuration file directly? Someone with a particular role using the API?
 Or something else?



 ildikov: As project administrator I meant a user with particular role, a
 user assigned to a tenant.



 OK, so like I said, we did not design the system with the idea that a user
 of the cloud (rather than the deployer of the cloud) would have any control
 over what data was collected. They can ask questions about only some of the
 data, but they can't tell ceilometer what to collect.



 There's a certain amount of danger in giving the cloud user (no matter
 their role) an off switch for the data collection. As Julien pointed out,
 it can have a negative effect on billing -- if they tell the cloud not to
 collect data about what instances are created, then the deployer can't bill
 for those instances. Differentiating between the values that always must be
 collected and the ones the user can control makes providing an API to
 manage data collection more complex.



 Is there some underlying use case behind all of this that someone could
 describe in more detail, so we might be able to find an alternative, or
 explain how to use the existing features to achieve the goal? For example,
 it is already possible to change the pipeline config file to control which
 data is collected and stored. If we make the pipeline code in ceilometer
 watch for changes to that file, and rebuild the pipelines when the config
 is updated, would that satisfy the requirements?



  In my view, we could keep the dynamic meter configuration bp with
 considering to extend it to dynamic configuration of Ceilometer, not just
 the meters and we could have a separate bp for the project based
 configuration of meters.



 Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
 are the needs for dynamic configuration updates in ceilometer different
 from the other services?



 ildikov: There are some parameters in the configuration file of
 Ceilometer, like log options and notification types, which would be good to
 be able to configure them dynamically. I just wanted to reflect to that
 need. As I see, there are two options here. The first one is to identify
 the group of the dynamically modifiable parameters and move them to the API
 level. The other option could be to make some modifications in oslo.config
 too, so other services also could use the benefits of dynamic
 configuration. For example the log settings could be a good candidate, as
 for example the change of log levels, without service restart, in case
 debugging the system can be a 

Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Doug Hellmann
On Wed, Jan 8, 2014 at 3:09 PM, Kodam, Vijayakumar (EXT-Tata Consultancy
Ser - FI/Espoo) vijayakumar.kodam@nsn.com wrote:



  
 
  From: ext Doug Hellmann [doug.hellm...@dreamhost.com]
  Sent: Wednesday, January 08, 2014 8:26 PM

  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer
  
  
  On Wed, Jan 8, 2014 at 12:35 PM, Ildikó Váncsa 
 ildiko.van...@ericsson.com wrote:
  
  Hi Doug,
  
  Answers inline again.
  
  Best Regards,
  
  Ildiko
  
  
  On Wed, Jan 8, 2014 at 3:16 AM, Ildikó Váncsa 
 ildiko.van...@ericsson.com wrote:
  
  Hi,
  
  I've started to work on the idea of supporting a kind of tenant/project
   based configuration for Ceilometer. Unfortunately I haven't reached
   the point of having a blueprint that could be registered until now.
   I do not have a deep knowledge about the collector and compute agent
   services, but this feature would require some deep changes for sure.
   Currently there are pipelines for data collection and transformation,
   where the counters can be specified, about which data should be
   collected and also the time interval for data collection and so on.
   These pipelines can be configured now globally in the pipeline.yaml
 file,
   which is stored right next to the Ceilometer configuration files.
  
  Yes, the data collection was designed to be configured and controlled by
   the deployer, not the tenant. What benefits do we gain by giving that
   control to the tenant?
  
  ildikov: Sorry, my explanation was not clear. I meant there the
 configuration
   of data collection for projects, what was mentioned by Tim Bell in a
   previous email. This would mean that the project administrator is able
 to
   create a data collection configuration for his/her own project, which
 will
   not affect the other project’s configuration. The tenant would be able
 to
   specify meters (enabled/disable based on which ones are needed) for the
 given
   project also with project specific time intervals, etc.
  
  OK, I think some of the confusion is terminology.
  Who is a project administrator? Is that someone with access to change
   ceilometer's configuration file directly? Someone with a particular role
   using the API? Or something else?
  
  ildikov: As project administrator I meant a user with particular role,
   a user assigned to a tenant.
  
  
  OK, so like I said, we did not design the system with the idea that a
   user of the cloud (rather than the deployer of the cloud) would have
   any control over what data was collected. They can ask questions about
   only some of the data, but they can't tell ceilometer what to collect.
  There's a certain amount of danger in giving the cloud user
   (no matter their role) an off switch for the data collection.
  
   As Julien pointed out, it can have a negative effect on billing
   -- if they tell the cloud not to collect data about what instances
   are created, then the deployer can't bill for those instances.
   Differentiating between the values that always must be collected and
   the ones the user can control makes providing an API to manage data
   collection more complex.
  
  Is there some underlying use case behind all of this that someone could
   describe in more detail, so we might be able to find an alternative, or
   explain how to use the existing features to achieve the goal?
  
   For example, it is already possible to change the pipeline config file
   to control which data is collected and stored.
   If we make the pipeline code in ceilometer watch for changes to that
 file,
   and rebuild the pipelines when the config is updated,
   would that satisfy the requirements?
  

  Yes. That's exactly the requirement for our blueprint. To avoid
 ceilometer restart for changes to take effect, when the config file
 changes.
 API support was added later based on the request in this mail chain. We
 actually don't need APIs and can be removed.

 So as you mentioned above, whenever the config file is changed, ceilometer
 should update the meters accordingly.


OK, I think that's something reasonable to implement, although I would
have to look at the collector to make sure we could rebuild the pipelines
safely without losing any data as more messages come in. But it should be
possible, if not easy. :-)
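One way to make that rebuild safe is to construct the new pipeline set
completely before swapping it in, so messages arriving during the reload
always see either the old set or the new set. A sketch (class and method
names are hypothetical, not ceilometer's actual code):

```python
import threading


class PipelineManager(object):
    """Swap in a new pipeline set without dropping in-flight messages."""

    def __init__(self, pipelines):
        self._lock = threading.Lock()
        self._pipelines = list(pipelines)

    def publish(self, sample, results):
        # Snapshot the current set once; a concurrent reload cannot
        # leave us iterating over a half-built list.
        with self._lock:
            pipelines = self._pipelines
        for pipe in pipelines:
            results.append((pipe, sample))

    def reload(self, new_pipelines):
        # Build the replacement fully, then swap the reference under
        # the lock -- publishers never observe a partial configuration.
        new_pipelines = list(new_pipelines)
        with self._lock:
            self._pipelines = new_pipelines
```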

The blueprint should be updated to reflect this approach.

Doug





  
  
  In my view, we could keep the dynamic meter configuration bp with
 considering
   to extend it to dynamic configuration of Ceilometer, not just the
 meters and
   we could have a separate bp for the project based configuration of
 meters.
  Ceilometer uses oslo.config, just like all of the rest of OpenStack. How
 are
   the needs for dynamic configuration updates in ceilometer different from
   the other services?
  
  
  ildikov: There are some parameters in the 

[openstack-dev] [Heat] Windows Support

2014-01-08 Thread Chan, Winson C
Does anybody know if this blueprint is being actively worked on?  
https://blueprints.launchpad.net/heat/+spec/windows-instances  If it is not 
active, can I take ownership of it?  My team wants to add support 
for Windows in Heat for our internal deployment.




Re: [openstack-dev] [Ceilometer] Dynamic Meters in Ceilometer

2014-01-08 Thread Ildikó Váncsa
Hi Doug,
OK, so like I said, we did not design the system with the idea that a user of 
the cloud (rather than the deployer of the cloud) would have any control over 
what data was collected. They can ask questions about only some of the data, 
but they can't tell ceilometer what to collect.

There's a certain amount of danger in giving the cloud user (no matter their 
role) an off switch for the data collection. As Julien pointed out, it can 
have a negative effect on billing -- if they tell the cloud not to collect data 
about what instances are created, then the deployer can't bill for those 
instances. Differentiating between the values that always must be collected and 
the ones the user can control makes providing an API to manage data collection 
more complex.

Is there some underlying use case behind all of this that someone could 
describe in more detail, so we might be able to find an alternative, or explain 
how to use the existing features to achieve the goal? For example, it is 
already possible to change the pipeline config file to control which data is 
collected and stored. If we make the pipeline code in ceilometer watch for 
changes to that file, and rebuild the pipelines when the config is updated, 
would that satisfy the requirements?

ildikov: Thanks for the clarification. The base idea was to provide the 
possibility of different data collection configurations for projects. Reflecting 
on the dynamic meter configuration and the possible API-based solution, it 
seemed feasible to offer that configuration option to the users of the cloud as 
well. At that point, I hadn't considered the billing aspect, which would be 
affected by this extra option, as you mentioned above, so it was definitely a 
wrong direction. Finally we've reached a consensus with Julien by making the 
required changes in pipeline.yaml and the related codebase.

ildikov: Thanks to both of you for the effort in clarifying this.

Best Regards,
Ildiko


[openstack-dev] [savanna] team meeting Jan 9 1800 UTC

2014-01-08 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_January.2C_9

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140109T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [TripleO] Installing from packages in tripleo-image-elements

2014-01-08 Thread Clint Byrum
We're in agreement. What little entropy there might be in a system of such
a small size would be entirely manageable by a single administrator...

I care about that deployment, deeply, as that is how things like OpenStack
take root in IT departments... with somebody playing around. However, what
I care more about is that when that deployment goes from POC to reality,
it can scale up to tens of admins and thousands of machines. If it cannot,
if the user finds themselves doing things manually and handling problems
by poking packages out to small classes of machines, then we have failed
and OpenStack will be very costly for any org to scale out.

Excerpts from Fox, Kevin M's message of 2014-01-08 09:22:15 -0800:
 Let me give you a more concrete example, since you still think one size fits 
 all here.
 
 I am using OpenStack on my home server now. In the past, I had one machine 
 with lots of services on it. At times, I would update one service and during 
 the update process, a different service would break.
 
 Last round of hardware purchasing got me an 8 core desktop processor with 16 
 gigs of ram. Enough to give every service I have its own processor and 2 gigs 
 of ram. So, I decided to run OpenStack on the server to manage the service 
 vm's.
 
 The base server shares out my shared data with nfs; the vm's then re-export 
 it in various ways like samba, dlna to my ps3, etc.
 
 Now, I could create a golden image for each service type with everything all 
 setup and good to go. And infrastructure to constantly build updated ones.
 
 But in this case, grabbing a Fedora or Ubuntu cloud image, and starting up 
 the service with heat and a couple of lines of cloud-init telling it to 
 install just the package for the one service I need saves a ton of effort 
 and space. The complexity is totally on the distro folks and not me. Very 
 simple to maintain.
 
 I can get almost the stability of the golden image simply by pausing the 
 working service vm, spawning a new one, and only if it's sane, switching to 
 it and deleting the old. In fact, Heat is working towards (if not already 
 done) having Heat itself do this process for you.
 
 I'm all for golden images as a tool. We use them a lot. Like all tools 
 though, there isn't one best tool that works for all cases.
 
 I hope this use case helps.
 
 Thanks,
 Kevin
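The "couple of lines of cloud-init" Kevin describes might look like the
following sketch; the package and service names are illustrative only:

```yaml
#cloud-config
# Hypothetical user-data for a single-service VM: install and start
# only the one package this VM exists to run.
packages:
  - samba
runcmd:
  - [service, smb, start]
```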



Re: [openstack-dev] [Heat] Windows Support

2014-01-08 Thread Peter Pouliot
Currently, I know Alessandro Pilotti has done work and has Heat templates for 
Windows instances, including deploying AD nodes, Exchange and SharePoint.

P

Sent from my Verizon Wireless 4G LTE Smartphone


 -------- Original message --------
From: Chan, Winson C
Date:01/08/2014 3:47 PM (GMT-05:00)
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Heat] Windows Support

Does anybody know if this blueprint is being actively worked on?  
https://blueprints.launchpad.net/heat/+spec/windows-instances  If it is not 
active, can I take ownership of it?  My team wants to add support 
for Windows in Heat for our internal deployment.




Re: [openstack-dev] [Neutron] Multiple config files for neutron server

2014-01-08 Thread Dan Prince


- Original Message -
 From: Jay Pipes jaypi...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, January 8, 2014 2:29:22 PM
 Subject: Re: [openstack-dev] [Neutron] Multiple config files for neutron 
 server
 
 On Wed, 2014-01-08 at 07:21 -0500, Sean Dague wrote:
  On 01/06/2014 02:58 PM, Jay Pipes wrote:
   On Mon, 2014-01-06 at 23:45 +0400, Eugene Nikanorov wrote:
   Hi folks,
  
  
   Recently we had a discussion with Sean Dague on the matter.
   Currently Neutron server has a number of configuration files used for
   different purposes:
- neutron.conf - main configuration parameters, plugins, db and mq
   connections
- plugin.ini - plugin-specific networking settings
- conf files for ml2 mechanisms drivers (AFAIK to be able to use
   several mechanism drivers we need to pass all of these conf files to
   neutron server)
- services.conf - recently introduced conf-file to gather
   vendor-specific parameters for advanced services drivers.
   Particularly, services.conf was introduced to avoid polluting
   'generic' neutron.conf with vendor parameters and sections.
  
  
   The discussion with Sean was about whether to add services.conf to
   neutron-server launching command in devstack
   (https://review.openstack.org/#/c/64377/ ). services.conf would be 3rd
   config file that is passed to neutron-server along with neutron.conf
   and plugin.ini.
  
  
   Sean has an argument that providing many conf files in a command line
   is not a good practice, suggesting setting up configuration directory
   instead. There is no such capability in neutron right now so I'd like
   to hear opinions on this before putting more efforts in resolving this
   in with other approach than used in the patch on review.
   
   I'd say just put the additional conf file on the command line for now.
   Adding in support to oslo.cfg for a config directory can come later.
   
   Just my 2 cents,
  
  So the net of that is that in a production environment, in order to
  change some services, you'd be expected to change the init scripts to
  list the right config files.
 
 Good point.
 
  That seems *really* weird, and also really different from the rest of
  OpenStack services. It also means you can't use the oslo config
  generator to generate documented samples.
  
  If neutron had been running a grenade job, it would have blocked this
  attempted change, because it would require adding config files between
  releases.
  
  So this all smells pretty bad to me. Especially in the context of
  migration paths from nova (which handles this very differently) => neutron.
 
 So, I was under the impression that the Neutron changes to require a
 services.conf had *already* been merged into master, and therefore the
 problem domain here was not whether the services.conf addition was the
 right approach, but rather *how to deal with it in devstack*, and that's
 why I wrote to just add it to the command line in the devstack builder.
 
 A better (upstream in Neutron) solution would have been to use something
 like an include.d/ directive in the nova.conf. But I thought that we
 were past the implementation point in Neutron?

Doesn't neutron already support what we need here:

./neutron-server --help | grep config-dir
usage: neutron-server [-h] [--config-dir DIR] [--config-file PATH] [--debug]
  --config-dir DIR  Path to a config directory to pull *.conf files from.

It would seem that with proper organization devstack could take advantage of 
this already, no?
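As a sketch of what that layout might look like (file names and contents are
illustrative only; oslo.config reads the *.conf files in a --config-dir in
sorted order, with values in later files overriding earlier ones):

```shell
# Illustrative only -- the directory, file names, and option values are
# made up for the example.
mkdir -p /tmp/neutron.conf.d
printf '[DEFAULT]\ncore_plugin = ml2\n' > /tmp/neutron.conf.d/00-plugin.conf
printf '[service_providers]\n# vendor-specific driver settings go here\n' \
    > /tmp/neutron.conf.d/50-services.conf
ls /tmp/neutron.conf.d
# A single, stable command line then covers all of them:
#   neutron-server --config-file /etc/neutron/neutron.conf \
#                  --config-dir /tmp/neutron.conf.d
```

New vendor conf files can then be dropped into the directory without editing
init scripts or the server's command line.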

 
 Best,
 -jay
 
 
 



[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2014-01-08

2014-01-08 Thread Shawn Hartsock
Greetings Stackers!

The VMwareAPI subteam had a two week break from meetings. So happy new
year to all! I hope everyone had a nice break. The Icehouse-2
milestone is coming up January 23rd! That means if you have a patch in
flight right now we need to get you ready for core-reviewers in the
next 2 weeks so, if you have feedback on a patch you've posted try and
get right back on those. If you have an open patch or blueprint
*please* review at least *two* other blueprints besides your own!

Our icehouse-2 list turns out to be rather ambitious. Let's stay on
top of these.

== Blueprint priorities ==

Icehouse-2
Nova
* https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
* https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support
* https://blueprints.launchpad.net/nova/+spec/autowsdl-repair
* https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
* https://blueprints.launchpad.net/nova/+spec/vmware-iso-boot
* https://blueprints.launchpad.net/nova/+spec/vmware-hot-plug

Glance
* https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend

Cinder
* https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type


== Bugs ==

Ordered by bug priority:

* High/Critical, needs review : 'vmware driver does not work with more
than one datacenter in vC'
https://review.openstack.org/62587

* High/Critical, needs review : 'VMware: unnecesary session termination'
https://review.openstack.org/64598

* High/Critical, needs review : 'nova failures when vCenter has
multiple datacenters'
https://review.openstack.org/62587

* High/High, needs review : 'VMware: spawning large amounts of VMs
concurrently sometimes causes VMDK lock error'
https://review.openstack.org/63933

* High/High, needs review : 'VMWare: AssertionError: Trying to
re-send() an already-triggered event.'
https://review.openstack.org/54808

* High/High, needs review : 'VMware: timeouts due to nova-compute
stuck at 100% when using deploying 100 VMs'
https://review.openstack.org/60259

* High/High, needs review : 'VMware: possible collision of VNC ports'
https://review.openstack.org/58994

* Medium/High, ready for core : 'VMware: instance names can be edited,
breaks nova-driver lookup'
https://review.openstack.org/59571


== Reviews! ==

Ordered by fitness for review:

== needs one more +2/approval ==

* https://review.openstack.org/53990
title: 'VMware ESX: Boot from volume must not relocate vol'
votes: +2:1, +1:4, -1:0, -2:0. +74 days in progress, revision: 5 is 37 days old


== ready for core ==

* https://review.openstack.org/59571
title: 'VMware: fix instance lookup against vSphere'
votes: +2:0, +1:5, -1:0, -2:0. +37 days in progress, revision: 12 is 6 days old

* https://review.openstack.org/49692
title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:5, -1:0, -2:0. +96 days in progress, revision: 13 is 13 days old

* https://review.openstack.org/57519
title: 'VMware: use .get() to access 'summary.accessible''
votes: +2:0, +1:6, -1:0, -2:0. +49 days in progress, revision: 1 is 44 days old

* https://review.openstack.org/57376
title: 'VMware: delete vm snapshot after nova snapshot'
votes: +2:0, +1:6, -1:0, -2:0. +49 days in progress, revision: 4 is 44 days old

* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
votes: +2:0, +1:6, -1:0, -2:0. +66 days in progress, revision: 3 is 27 days old

* https://review.openstack.org/55038
title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:0, +1:5, -1:0, -2:0. +67 days in progress, revision: 5 is 27 days old


[ omitted ... bunch-o-reviews needing vmware people attention ... ]

As an experiment, here's a full listing... for those who care to see it:
https://etherpad.openstack.org/p/vmwareapi-subteam-reviews
... this might also afford people the ability to commentate in interesting ways.

BTW we collaborate as a team on our blueprint priority orders  bug
priorities here:
https://etherpad.openstack.org/p/vmware-subteam-icehouse-2

== Meeting info: ==
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
** discussion is always: Blueprints then Bugs that need attention.

Happy stacking!

# Shawn.Hartsock



Re: [openstack-dev] [nova] new (docs) requirement for third party CI

2014-01-08 Thread Matt Riedemann



On 1/8/2014 12:40 PM, Joe Gordon wrote:


On Jan 8, 2014 7:12 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
  I'd like to propose that we add another item to the list here [1]
that is basically related to what happens when the 3rd party CI job
votes a -1 on your patch.  This would include:
 
  1. Documentation on how to analyze the results and a good overview of
what the job does (like the docs we have for check/gate testing now).
  2. How to recheck the specific job if needed, i.e. 'recheck migrations'.
  3. Who to contact if you can't figure out what's going on with the job.
 
  Ideally this information would be in the comments when the job scores
a -1 on your patch, or at least it would leave a comment with a link to
a wiki for that job like we have with Jenkins today.
 
  I'm all for more test coverage but we need some solid documentation
around that when it's not owned by the community so we know what to do
with the results if they seem like false negatives.
 
  If no one is against this or has something to add, I'll update the wiki.

-1 to putting this in the wiki. This isn't a nova only issue. We are
trying to collect the requirements here:

https://review.openstack.org/#/c/63478/


Cool, didn't know about that, thanks.  Good discussion going on in 
there, I left my thoughts as well. :)




 
  [1]
https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan#Specific_Requirements
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 






--

Thanks,

Matt Riedemann




[openstack-dev] [nova][neutron] top gate bugs: a plea for help

2014-01-08 Thread Joe Gordon
Hi All,

As you know, the gate has been in particularly bad shape this week (gate
queue over 100!) due to a number of factors. One factor is the number of
major outstanding bugs we have in the gate.  Below is a list of the top 4
open gate bugs.

Here are some fun facts about this list:
* All bugs have been open for over a month
* All are nova bugs
* These 4 bugs alone were hit 588 times over the last two weeks, which
averages to 42 hits per day!

If we want the gate queue to drop and to stop continuously running 'recheck
bug x', we need to fix these bugs.  So I'm looking for volunteers to help
debug and fix them.


best,
Joe

Bug: https://bugs.launchpad.net/bugs/1253896
Fingerprint: message:SSHTimeout: Connection to the AND message:via SSH
timed out. AND filename:console.html
Filed: 2013-11-21
Title: Attempts to verify guests are running via SSH fails. SSH connection
to guest does not work.
Project: Status
  neutron: In Progress
  nova: Triaged
  tempest: Confirmed
Hits
  FAILURE: 243
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-grenade-dsvm: 0.68%
  gate-tempest-dsvm-neutron: 0.39%
  gate-tempest-dsvm-neutron-isolated: 4.76%
  gate-tempest-dsvm-full: 0.19%
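
For context, elastic-recheck matches fingerprints like the one above against
indexed gate logs. A fingerprint is typically stored as a small YAML file in
the elastic-recheck repo; the following is a sketch, assuming the
queries/<bug>.yaml layout (the exact quoting of the query string is an
assumption, not copied from the repo):

```yaml
# Hypothetical elastic-recheck query file, e.g. queries/1253896.yaml.
# The query string is Lucene query syntax run against the logstash
# Elasticsearch index of gate job logs.
query: >
  message:"SSHTimeout: Connection to the"
  AND message:"via SSH timed out."
  AND filename:"console.html"
```

When a gate job fails, elastic-recheck runs each stored query against that
run's logs and, on a match, attributes the failure to the corresponding bug.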

Bug: https://bugs.launchpad.net/bugs/1254890
Fingerprint: message:Details: Timed out waiting for thing AND message:to
become AND  (message:ACTIVE OR message:in-use OR message:available)
Filed: 2013-11-25
Title: Timed out waiting for thing causes tempest-dsvm-neutron-* failures
Project: Status
  neutron: Invalid
  nova: Triaged
  tempest: Confirmed
Hits
  FAILURE: 173
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-neutron-isolated: 4.76%
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-tempest-dsvm-large-ops: 0.68%
  gate-tempest-dsvm-neutron-large-ops: 0.70%
  gate-tempest-dsvm-full: 0.19%
  gate-tempest-dsvm-neutron-pg: 3.57%

Bug: https://bugs.launchpad.net/bugs/1257626
Fingerprint: message:nova.compute.manager Timeout: Timeout while waiting
on RPC response - topic: \network\, RPC method:
\allocate_for_instance\ AND filename:logs/screen-n-cpu.txt
Filed: 2013-12-04
Title: Timeout while waiting on RPC response - topic: network, RPC
method: allocate_for_instance info: unknown
Project: Status
  nova: Triaged
Hits
  FAILURE: 118
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-large-ops: 0.68%

Bug: https://bugs.launchpad.net/bugs/1254872
Fingerprint: message:libvirtError: Timed out during operation: cannot
acquire state change lock AND filename:logs/screen-n-cpu.txt
Filed: 2013-11-25
Title: libvirtError: Timed out during operation: cannot acquire state
change lock
Project: Status
  nova: Triaged
Hits
  FAILURE: 54
  SUCCESS: 3
Percentage of Gate Queue Job failures triggered by this bug
  gate-tempest-dsvm-postgres-full: 0.35%
  gate-tempest-dsvm-full: 0.19%


Generated with: elastic-recheck-success


[openstack-dev] [Ceilometer] Add a filter between auth_token and v2

2014-01-08 Thread Pendergrass, Eric
I need to add an additional layer of authorization between auth_token and
the reporting API.

I know it's as simple as creating a WSGI element and adding it to the
pipeline, but examining the code I haven't figured out where to begin.

I'm not using Apache and mod_wsgi, just the reporting API and Pecan.

Any pointers on where to start and which files control the pipeline would be
a big help.
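
For what it's worth, a filter of this kind is usually just a WSGI callable
that wraps the API app and inspects what auth_token left in the request
environ. Below is a minimal sketch assuming a paste-deploy style pipeline;
the names AuthzFilter and allowed_roles are hypothetical, not taken from the
Ceilometer tree:

```python
class AuthzFilter(object):
    """WSGI middleware that rejects requests unless the caller has an
    allowed role. Keystone's auth_token middleware sets X-Roles (exposed
    to WSGI as HTTP_X_ROLES) on successfully authenticated requests."""

    def __init__(self, app, allowed_roles=('admin',)):
        self.app = app
        self.allowed_roles = set(allowed_roles)

    def __call__(self, environ, start_response):
        roles = environ.get('HTTP_X_ROLES', '')
        # Pass through if any of the caller's roles is allowed.
        if self.allowed_roles & {r.strip() for r in roles.split(',')}:
            return self.app(environ, start_response)
        start_response('403 Forbidden', [('Content-Type', 'text/plain')])
        return [b'Forbidden']


def filter_factory(global_conf, **local_conf):
    """Entry point usable from a [filter:...] section of a paste config."""
    roles = local_conf.get('allowed_roles', 'admin').split(',')

    def _filter(app):
        return AuthzFilter(app, allowed_roles=roles)
    return _filter
```

If the pipeline is driven by a paste config, a filter section pointing at
filter_factory can be inserted between authtoken and the API app; if the
Pecan app is wired up directly in code, the same class can simply wrap the
app object before it is served.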


Thanks

Eric




